S5R2

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

Got 268.99 credits on my brave old PII Deschutes running Linux. My wingman was a Dual Core AMD Opteron with 4 CPUs and it was about eight times faster. No luck.
Tullio

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 724033029
RAC: 1174735

Message 62533 in response to message 62532

Quote:
Got 268.99 credits on my brave old PII Deschutes running Linux. My wingman was a Dual Core AMD Opteron with 4 CPUs and it was about eight times faster. No luck.
Tullio


Yup, it seems to happen only with early WUs created in April (the one mentioned above was created as early as the 18th of April).

CU

H-B

M. Schmitt
Joined: 27 Jun 05
Posts: 478
Credit: 15872262
RAC: 0

Look here

cu,
Michael

Hans-Peter Lehner
Joined: 10 May 05
Posts: 5
Credit: 23452872
RAC: 0

Hi Folks,

I am the founder of team miniStein.
We have some "Weekend Crunchers" with smaller machines, not running 7*24.
Some of them are frustrated by the long compute times of the new WUs, and in the worst case they have trouble completing the work in time.
They are beginning to leave the project, which I am unwilling to see.

In our forum I started a survey (in German) about the preferred compute time, based on a Core Duo at 1.66 GHz per core (it is the small Mac mini ;-) ).
Most of the votes went to a time of 2-5 h (50%) or 5-10 h (41%), close to the former E@H times.

# Overacting mode ON
From some posters earlier in this thread (fortunately none of the E@H developers) I saw postings which gave me the impression "Don't care about the small machines; if they leave, it's no big loss".
# Overacting mode OFF
I think this is a very arrogant attitude that does not match the idea of distributed computing.
The purpose of distributed computing is IMHO not only to serve the big machines. The small ones do work of the same quality, and together they do a lot, because there are many of them!

I hope this posting brings the need for smaller workunits back to the minds of the developers, and I hope they are successful with this challenge.

BTW, it would be interesting to have a statistic showing how much work each type of CPU does.

Thanks for help
Hans-Peter

Solaris is like a wigwam: no Windows, no Gates, and an Apache inside!

Winterknight
Joined: 4 Jun 05
Posts: 1445
Credit: 376054831
RAC: 133868

Hans-Peter,
I agree with what you have said. I have seen processing times of over 50 hrs on P4 computers. That would require over 5 hrs of CPU time per day for office computers, and around 4 hrs/day on average for home computers that are on for some time every day.

I think this is an over-ambitious target.
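To put rough numbers on the worry above, here is a minimal sketch of the deadline arithmetic. The 14-day report deadline is an assumption for illustration, as are the exact daily duty cycles; only the 50 h workunit and 4-5 hrs/day figures come from the post.

```python
# Rough feasibility check for a long workunit on part-time hosts.
# DEADLINE_DAYS is an assumed report deadline, not an official figure.

DEADLINE_DAYS = 14

def days_to_finish(wu_cpu_hours, crunch_hours_per_day):
    """Days a host needs to finish one workunit at a given daily duty cycle."""
    return wu_cpu_hours / crunch_hours_per_day

for label, hours_per_day in [("office PC", 5), ("home PC", 4)]:
    needed = days_to_finish(50, hours_per_day)
    verdict = "makes" if needed <= DEADLINE_DAYS else "misses"
    print(f"{label}: {needed:.1f} days -> {verdict} a {DEADLINE_DAYS}-day deadline")
```

Even under these optimistic assumptions the margin is thin (10 to 12.5 days against a two-week deadline), and any downtime or a competing project eats it up quickly.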

Andy

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494410
RAC: 0

I'll take the liberty of copying the statement I made in another thread discussing the same topic:

Quote:
The problem is that CPU performance varies more and more (meaning the difference between the slowest and the fastest hosts is growing simply gigantic). If the WUs are long, slow hosts can't cope. If the WUs are short, the fast dual, quad or "whatever" cores will kill off the server. I think what we need is what we had during the S5R1 science run: Decide whether a host counts as "slow" or "fast" and send only the shorter kind of WU to slow ones. It's the best solution I can come up with atm and I do hope the project staff is going to implement it asap (yes, I know they're up to their necks in work already but I have no idea how the crunchers should get along without that feature... honestly... not with 200 MHz Pentiums as well as hyperthreaded Quad Xeons out there)
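The classification idea in the quote can be sketched in a few lines. Everything here is hypothetical for illustration (the threshold, the per-core benchmark field, the two workunit classes); the real BOINC scheduler is considerably more involved.

```python
# Sketch of the "send short WUs to slow hosts" idea from the S5R1 run.
# The cutoff value and host figures are made-up illustrations.

def pick_wu_class(flops_per_core, cutoff=1e9):
    """Classify a host by per-core speed and choose a workunit class."""
    return "short" if flops_per_core < cutoff else "long"

print(pick_wu_class(2e8))  # something like an old low-MHz Pentium
print(pick_wu_class(3e9))  # something like a modern Xeon core
```

The point is only that a single server-side threshold splits the host population, so slow machines get work they can finish before the deadline while fast multi-core hosts still get long workunits and hit the database less often.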

Hans-Peter Lehner
Joined: 10 May 05
Posts: 5
Credit: 23452872
RAC: 0

Message 62538 in response to message 62537

Quote:

I'll be so free to copy the statement I've made in another thread discussing the same topic:

Quote:
The problem is that ...

Great idea, but is there any answer from the developers yet?
Which thread is this quote from?

Solaris is like a wigwam: no Windows, no Gates, and an Apache inside!

M. Schmitt
Joined: 27 Jun 05
Posts: 478
Credit: 15872262
RAC: 0

Message 62539 in response to message 62537

The number of CPUs or cores doesn't matter in this case. It makes no difference whether you have 4 hosts with one core each or one host with 4 cores. What matters is the performance of a single core, and that performance is growing slowly but steadily.
From the project's viewpoint a lot more CPU power is needed, but access to the DB server must stay within limits so it doesn't collapse.
I guess the Einstein app will get about twice as fast, or even faster, once it's optimized, so crunching times will be halved. That may still be too much for many hosts, but I can understand why they have to go down this road.
From my point of view it is somewhat questionable whether running older hardware _just_ for crunching isn't simply a waste of energy. I think it's different if, for example, my little Celeron 433 gateway is running 365/24 anyway. But I switched off my AMD T-Bird (1.33 GHz) during the S5R1 run, because without SSE capabilities it was not effective enough.
The compromise I prefer is dispatching smaller WUs to slower or only intermittently used hosts.

cu,
Michael

Slywy
Joined: 26 Jan 06
Posts: 12
Credit: 1531316
RAC: 4244

Message 62540 in response to message 62537

Quote:

I'll be so free to copy the statement I've made in another thread discussing the same topic:

Quote:
The problem is that CPU performance varies more and more (meaning the difference between the slowest and the fastest hosts is growing simply gigantic). If the WUs are long, slow hosts can't cope. If the WUs are short, the fast dual, quad or "whatever" cores will kill off the server. I think what we need is what we had during the S5R1 science run: Decide whether a host counts as "slow" or "fast" and send only the shorter kind of WU to slow ones. It's the best solution I can come up with atm and I do hope the project staff is going to implement it asap (yes, I know they're up to their necks in work already but I have no idea how the crunchers should get along without that feature... honestly... not with 200 MHz Pentiums as well as hyperthreaded Quad Xeons out there)

My first two of the new WUs were in the 35-40 hour range, which wasn't too bad; the old iBook and I made the due date by a few days. I received one overnight that's 75 hours, though, due June 2; I'm not so sure about that one. I was going to turn off the computer when I go on vacation Friday, so I'm a little unsure whether I should abort it and not take any more from Einstein until I get back . . . now I'm thinking that's what I should do . . .

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494410
RAC: 0

@Hans-Peter: The quote is from this thread.
No statement from the project devs yet, I'm afraid.
@Michael: The number of cores DOES matter if 4 or more cores keep hammering the server for more work... sure, 4 boxes with one core each would be the same, but IMHO people are more likely to buy a dual core than two separate boxes, etc., so the total number of cores is increasing. What you say about per-core speed increasing is of course true as well.
