Nehalem-class hosts on Einstein

archae86
Joined: 6 Dec 05
Posts: 2,741
Credit: 2,915,710,744
RAC: 3,289,183
Topic 194042

There has been talk over on SETI for a little while about a handful of Nehalem-class hosts spotted there. One recent mention got me to notice that user xPod appears to have run multiple Nehalem-class hosts on Einstein, at least some of which may be yet-unreleased two-on-a-board Xeon configurations.

The decoder ring for detecting the current Nehalem parts appears to be: "Family 6 Model 26". That applies both to the just-released desktop models (reporting 8 CPUs: one can, 4 cores/can, two threads/core) and to what I presume are before-release Xeon models (reporting 16 CPUs: two cans, 4 cores/can, two threads/core).
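The "decoder ring" and the logical-CPU arithmetic can be sketched in a few lines. This is purely illustrative: the sample CPU-model string and the helper names are my assumptions, not anything BOINC or the project servers actually use.

```python
import re

# Rough sketch of the "decoder ring": spot Nehalem-era hosts from the
# CPU model string a host reports, and derive the logical CPU count
# from cans (sockets) x cores x threads. The strings and helper names
# here are illustrative assumptions, not real BOINC code.
NEHALEM = re.compile(r"Family 6 Model 26")

def is_nehalem(cpu_model: str) -> bool:
    """True if the reported model string matches the Nehalem signature."""
    return bool(NEHALEM.search(cpu_model))

def logical_cpus(cans: int, cores_per_can: int, threads_per_core: int) -> int:
    """Logical CPUs = sockets x cores per socket x threads per core."""
    return cans * cores_per_can * threads_per_core

# Desktop part: one can, 4 cores, 2 threads/core -> 8 CPUs
assert logical_cpus(1, 4, 2) == 8
# Presumed pre-release Xeon: two cans, 4 cores/can, 2 threads/core -> 16 CPUs
assert logical_cpus(2, 4, 2) == 16
```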

The single-result timings reported don't seem impressive, and the RACs even less so. However, the ones I checked were running the stock Windows 6.04 app, we don't know what fraction of the day these things are actually running Einstein, and--of course--there is the minor matter of running 16 tasks in parallel.

A single user ID reporting so many hosts of what appear to be pre-release parts (xPod's in particular are all or nearly all reporting 16 CPUs, and several look like part numbers for the rumored Xeons), running in short bursts, suggests a test activity of some kind, not an enthusiast who somehow got their hands on an ES part and is trying for the top of the heap.

The same user has a similar array of hosts appearing on SETI, Rosetta, Climate Prediction ... I did not sanity-check, but as the usual number of "Family 6 Model 26" matches was 8, I suspect the same hosts, with the same variety of OSes etc., are visiting lots of the BOINC projects as yet one more form of torture testing. This would help explain the low RACs.

[edited to add paragraph about appearance on other projects]

DanNeely
Joined: 4 Sep 05
Posts: 1,305
Credit: 1,591,159,532
RAC: 1,000,570

Nehalem-class hosts on Einstein

I should have one online sometime this weekend. The CPU/mobo are due Friday and I have everything else I need to get started, even if my waterblock's still TBD because the better block makers don't have anything out yet, so it'll probably drop off again the next week for extended OC testing.

ADDMP
Joined: 25 Feb 05
Posts: 104
Credit: 7,332,049
RAC: 0

RE: I should have one

Message 88531 in response to message 88530

Quote:
I should have one online sometime this weekend. CPU/mobo are due friday and I have everything else I need to get started, even if my waterblock's still TBD due to the better block makers not having anything out yet, so it'll probably drop off again the next week for extended OC testing.

I note "water blocks" here, and "Xeon" in the previous note.

Excuse my ignorance, but what is the OC situation with Xeons? Are there Xeon motherboards with good OC support?

ADDMP

tullio
Joined: 22 Jan 05
Posts: 2,022
Credit: 32,525,382
RAC: 3,543

RE: RE: I should have one

Message 88532 in response to message 88531

Quote:
Quote:
I should have one online sometime this weekend. CPU/mobo are due friday and I have everything else I need to get started, even if my waterblock's still TBD due to the better block makers not having anything out yet, so it'll probably drop off again the next week for extended OC testing.

I note "water blocks" here, and "Xeon" in the previous note.

Excuse my ignorance, but what is the OC situation with Xeons? Are there Xeon motherboards with good OC support?

ADDMP


There is a discussion going on about this subject on SETI's Number Crunching forum.
Tullio

th3
Joined: 24 Aug 06
Posts: 208
Credit: 2,208,434
RAC: 0

Will have all the parts for a

Will have all the parts for a Nehalem rig in 1-2 hours; the waterblock is a problem for me too. I was planning to just get the LGA 1366 mounting kit for the Apogee GTX to make it as cheap as possible, but I can't find one anywhere. Might have to order directly from Swiftech and pay international shipping, but I'll give the stock cooler a chance first.

It will be fun anyway; going from dual-core to 8 threads could be considered a substantial upgrade, even if the cooling has to be downgraded :)

DanNeely
Joined: 4 Sep 05
Posts: 1,305
Credit: 1,591,159,532
RAC: 1,000,570

The GTZ holddown isn't

The GTZ holddown isn't anywhere I can find on swiftech.com yet. Someone on HardOCP's forums emailed Swiftech and was told it wasn't going to be available until sometime next week.

th3
Joined: 24 Aug 06
Posts: 208
Credit: 2,208,434
RAC: 0

RE: The GTZ holddown isn't

Message 88535 in response to message 88534

Quote:
The GTZ holddown isn't anywhere I can find on swiftech.com yet. Someone on HardOCP's forums emailed and was told it wasn't going to be available until sometime next week.


Now that explains why it isn't available in Europe :) Thanks for the heads-up.

My WC setup got somewhat screwed in another way as well: this X58 mainboard has the main PCIe slot lower than on any other board I've had in this case, so there's no space for the pump/res under the GPU.

The i7 runs quite nicely anyway. It's a 920 at 2.66GHz, not yet OCed, with 8x E@H in progress, and temps seem fine considering it's the stock cooler. And gotta love Linux: changing from the X38 to the X58 chipset, it just boots straight up using the old install (Debian Lenny), all hardware working right away.

Quote:
The single-result timings reported don't seem impressive


I'm sure they will be at least on par with Core2 with Hyper-Threading disabled; maybe I can test it out this weekend already. The HT gain might be low for E@H on these things... Btw, how much is the gain for hyperthreaded NetBursts with the current apps?

DanNeely
Joined: 4 Sep 05
Posts: 1,305
Credit: 1,591,159,532
RAC: 1,000,570

Putting the first expansion

Putting the first expansion slot in position two instead of one has become an increasingly common mobo feature to make more room for big air coolers. Don't expect it to get better anytime soon. Most LGA1366 mobos have the CPU socket even lower, because centering it vertically on the RAM slots alleviated timing issues.

My case for this build is a MozartTX, so space isn't an issue, I can fit any offending hardware behind the mobo.

Novasen169
Joined: 14 May 06
Posts: 43
Credit: 2,767,204
RAC: 0

Note that he's part of the

Note that he's part of the team "Intel Corporation" and has the Intel site as his website. Maybe an employee of Intel testing the performance of new CPUs?

th3
Joined: 24 Aug 06
Posts: 208
Credit: 2,208,434
RAC: 0

RE: Note that he's part of

Message 88538 in response to message 88537

Quote:
Note that he's part of the team "Intel Corporation" and has the intel-site as website.. Maybe an employee of Intel testing the performance of new CPUs?


Must be. I've been watching xPod from time to time; he has all the new stuff long before launch. For example, he had several 4-way Tigerton quad-core rigs more than half a year before they were launched. That's kind of expensive stuff too: four Tigerton 2.93GHz chips cost around $10,000 for the CPUs alone.

To Hyper-Thread or not to Hyper-Thread:
I just turned off HT in the BIOS and started testing. My preliminary estimate is somewhere around 35% shorter runtimes without HT.

Novasen169
Joined: 14 May 06
Posts: 43
Credit: 2,767,204
RAC: 0

RE: I just turned off HT in

Message 88539 in response to message 88538

Quote:
I just turned off HT in Bios and started testing. Preliminary estimate is somewhere around 35% shorter runtimes without HT.


But doesn't that mean that with HT you'll run twice as many WUs at a time, each lasting ~50% longer? For example, without HT you would run a WU in 1 hour, while with HT you would run two WUs in 1.5 hours -- effectively 0.75 hours per WU, i.e. a 25% reduction in effective time per WU with HT.
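Novasen169's throughput point can be checked with quick arithmetic against th3's measured numbers. This is just a back-of-envelope sketch; the time unit is arbitrary and only the ratios matter:

```python
# Throughput check for th3's numbers: per-WU runtimes are ~35% shorter
# with HT off, but with HT on the machine runs twice as many tasks at
# once (8 vs 4). t_ht is an arbitrary unit; only the ratios matter.
t_ht = 1.0            # per-WU runtime with HT on (8 tasks in parallel)
t_noht = 0.65 * t_ht  # ~35% shorter with HT off (4 tasks in parallel)

throughput_ht = 8 / t_ht      # WUs completed per unit time, HT on
throughput_noht = 4 / t_noht  # WUs completed per unit time, HT off

gain = throughput_ht / throughput_noht - 1
print(f"Net HT throughput gain: {gain:.0%}")  # ~30%
```

So despite the longer individual runtimes, HT still comes out ahead on total WUs per day under these measurements.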
