Got 268.99 credits on my brave old PII Deschutes running Linux. My wingman was a Dual Core AMD Opteron with 4 CPUs and it was about eight times faster. No luck.
Tullio
Yup, it seems to happen only with early WUs created in April (the one mentioned above was created as early as the 18th of April).
CU
H-B
Look here
cu,
Michael
Hi Folks,
I am the founder of team miniStein.
We have some “Weekend Crunchers” with smaller machines that do not run 24/7.
Some of them are frustrated by the long compute times of the new WUs, and in the worst case they have trouble completing the work in time.
They are beginning to leave the project, which I am sorry to see.
In our forum I started a survey (in German) about the preferred compute time, based on a Core Duo at 1.66 GHz per core (that is the small Mac mini ;-) ).
Most of the votes were for 2-5 h (50%) or 5-10 h (41%), close to the former runtimes of E@H.
# Overacting mode ON
Some postings earlier in this thread (fortunately none from the E@H developers) gave me the impression: “Don't care about the small machines; if they leave, it's no big loss.”
# Overacting mode OFF
I think this is a very arrogant attitude and does not match the idea of distributed computing.
The purpose of distributed computing is IMHO not only to serve the big machines. The small ones do the same quality of work and they do a lot, because there are many of them!
I hope this posting brings the need for smaller work units back to the minds of the developers, and I hope they are successful with this challenge.
BTW, it would be interesting to have a statistic showing how much work each CPU type does.
Thanks for your help
Hans-Peter
Solaris is like a wigwam: no Windows, no Gates, and an Apache inside!
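As a rough illustration of the per-CPU statistic Hans-Peter wishes for above: assuming the project publishes the usual BOINC host statistics export as an XML dump with per-host p_model and total_credit fields (the file name host.xml and the exact field names are assumptions of this sketch, not something confirmed in this thread), credit could be aggregated by CPU model roughly like this:

```python
# Hypothetical sketch: aggregate total credit by CPU model from a BOINC
# host statistics export. The file name "host.xml" and the <p_model> /
# <total_credit> fields are assumptions based on typical BOINC stats dumps.
import xml.etree.ElementTree as ET
from collections import defaultdict

def credit_by_cpu_model(path="host.xml"):
    credit = defaultdict(float)
    hosts = defaultdict(int)
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag != "host":
            continue
        model = (elem.findtext("p_model") or "unknown").strip()
        credit[model] += float(elem.findtext("total_credit") or 0.0)
        hosts[model] += 1
        elem.clear()  # keep memory usage low for large dumps
    return credit, hosts

if __name__ == "__main__":
    credit, hosts = credit_by_cpu_model()
    for model, total in sorted(credit.items(), key=lambda kv: -kv[1])[:20]:
        print(f"{model}: {total:,.0f} credits across {hosts[model]} hosts")
```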
Hans-Peter,
I agree with what you have said. I have seen processing times of over 50 hrs on P4 computers. That works out to over 5 hrs of CPU time per day for office computers, and an average of about 4 hrs/day for home computers that are switched on for some time every day.
I think this is an over-ambitious target.
Andy
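To spell out the arithmetic behind figures like these (a back-of-envelope sketch; the 50 h runtime and the 14-day deadline, with 10 working days for an office machine, are assumptions for illustration, not project numbers):

```python
# Back-of-envelope version of the arithmetic above: how many CPU hours per
# day does a 50 h WU require? The 14-day deadline (10 working days for an
# office machine) is an assumption for illustration, not a project figure.
WU_CPU_HOURS = 50.0

def required_hours_per_day(wu_cpu_hours, days_available):
    return wu_cpu_hours / days_available

office = required_hours_per_day(WU_CPU_HOURS, 10)   # weekdays only
home   = required_hours_per_day(WU_CPU_HOURS, 14)   # on every day

print(f"Office PC (on 10 of 14 days): {office:.1f} h/day needed")
print(f"Home PC (on every day):       {home:.1f} h/day needed")
```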
I'll take the liberty of copying the statement I made in another thread discussing the same topic:
Quote:
The problem is that CPU performance varies more and more (meaning the difference between the slowest and the fastest hosts is growing simply gigantic). If the WUs are long, slow hosts can't cope. If the WUs are short, the fast dual, quad or "whatever" cores will kill off the server. I think what we need is what we had during the S5R1 science run: decide whether a host counts as "slow" or "fast" and send only the shorter kind of WU to the slow ones. It's the best solution I can come up with atm, and I do hope the project staff is going to implement it asap (yes, I know they're up to their necks in work already, but I have no idea how the crunchers should get along without that feature... honestly... not with 200 MHz Pentiums as well as hyperthreaded quad Xeons out there).
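A minimal sketch of how such a slow/fast split could look on the scheduler side, purely to illustrate the idea quoted above (the FLOPS threshold, the Host/WorkUnit shapes and the function name are invented for this example and are not the actual Einstein@Home scheduler code):

```python
# Illustrative sketch only: classify a host as "slow" or "fast" from its
# measured floating-point benchmark and hand it the matching WU size.
# FAST_FLOPS_THRESHOLD and the Host/WorkUnit shapes are invented here.
from dataclasses import dataclass

FAST_FLOPS_THRESHOLD = 1.5e9   # assumed cut-off in FLOPS per core

@dataclass
class Host:
    name: str
    flops_per_core: float      # e.g. from the client's Whetstone benchmark

@dataclass
class WorkUnit:
    name: str
    estimated_cpu_hours: float

def pick_workunit(host: Host, short_wu: WorkUnit, long_wu: WorkUnit) -> WorkUnit:
    """Send the shorter WU to hosts below the threshold, the longer one otherwise."""
    if host.flops_per_core < FAST_FLOPS_THRESHOLD:
        return short_wu
    return long_wu

if __name__ == "__main__":
    short_wu = WorkUnit("h1_xxxx_short", estimated_cpu_hours=8)
    long_wu  = WorkUnit("h1_xxxx_long",  estimated_cpu_hours=40)
    for host in [Host("200 MHz Pentium", 0.2e9), Host("Quad Xeon", 4.0e9)]:
        wu = pick_workunit(host, short_wu, long_wu)
        print(f"{host.name}: gets {wu.name} (~{wu.estimated_cpu_hours} h)")
```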
Great idea, but is there any answer from the developers?
Which thread is this quote from?
Solaris is like a wigwam: no Windows, no Gates, and an Apache inside!
The number of CPUs or cores doesn't matter in this case. It makes no difference whether you have 4 hosts with one core each or one host with 4 cores. What matters is the performance of a single core, and that performance is growing steadily but slowly.
From the viewpoint of the project, a lot more CPU power is needed, but the load on the database server must stay within limits so it doesn't collapse.
I guess the Einstein app will get about twice as fast, or even faster, once it's optimized, so the crunching times will be halved. That may still be too much for many hosts, but I can understand why they have to take this road.
From my point of view it is somewhat questionable whether running older hardware _just_ for crunching isn't simply a waste of energy. It's different if, for example, my little Celeron 433 gateway is running 24/7 anyway. But I switched off my AMD T-Bird (1.33 GHz) during the S5R1 run, because without SSE it was not efficient enough.
The compromise I prefer is dispatching smaller WUs to slower or only intermittently used hosts.
cu,
Michael
My first two of the new WUs were in the 35-40 hour range, which wasn't too bad; the old iBook and I made the due date by a few days. I received one overnight that's 75 hours, though, due June 2; I'm not so sure about that one. I was going to turn off the computer when I go on vacation on Friday, so I'm a little unsure whether I should abort it and not take any more from Einstein until I get back . . . now I'm thinking that's what I should do . . .
@Hans-Peter: The quote is from this thread.
No statement from the project devs yet, I'm afraid.
@Michael: The number of cores DOES matter if 4 or more cores keep hammering the server for more work. Sure, 4 boxes with one core each would be the same, but IMHO people are more likely to buy a dual-core than two separate boxes, so the total number of cores is increasing. What you say about the speed per core increasing is of course true as well.
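To put rough numbers on the server-load concern raised in the last two posts (a sketch under invented assumptions; the WU lengths and core counts are illustrative, not Einstein@Home figures): the scheduler-request rate scales roughly with the number of busy cores divided by the WU runtime, so shorter WUs combined with an ever-growing core count multiply the load.

```python
# Rough model of scheduler/database load: each busy core asks for a new
# result about once per WU runtime. All numbers below are illustrative
# assumptions, not measured project data.
def requests_per_day(total_cores, wu_cpu_hours, hours_on_per_day=24.0):
    completions_per_core = hours_on_per_day / wu_cpu_hours
    return total_cores * completions_per_core

for wu_hours in (5, 10, 40):
    for cores in (100_000, 400_000):
        r = requests_per_day(cores, wu_hours)
        print(f"{cores:>7} cores, {wu_hours:>2} h WUs: ~{r:,.0f} requests/day")
```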