Credit / hour: not fair!

ErichZann
Joined: 11 Feb 05
Posts: 120
Credit: 81582
RAC: 0

With the recent credit drop I

With the recent credit drop I again have the old problem:
I again get almost the same credits/hour on my old Athlon XP 2200+ running Linux as on my Athlon 64 3500+ running Windows... I thought the clients were about the same speed now?
Or is the credit system still bad? The 2200+ always gets small WUs and the 3500+ always gets long ones (and I read that's something the project wants), and it seems that the small ones give much more c/h... That's also not too good, I think...

Trog Dog
Joined: 25 Nov 05
Posts: 191
Credit: 541562
RAC: 0

RE: With the recent credit

Message 44212 in response to message 44211

Quote:
With the recent credit drop I again have the old problem:
I again get almost the same credits/hour on my old Athlon XP 2200+ running Linux as on my Athlon 64 3500+ running Windows... I thought the clients were about the same speed now?
Or is the credit system still bad? The 2200+ always gets small WUs and the 3500+ always gets long ones (and I read that's something the project wants), and it seems that the small ones give much more c/h... That's also not too good, I think...

If it's any consolation, your 2200+ will get lower credits. The new credit for short WUs is 13.xx (I just checked, and so far it's being awarded 16.xx).

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: If Bruce reads this

Message 44213 in response to message 44201

Quote:

If Bruce reads this thread, I would be VERY interested in his opinion since he has to deal with all these elements.

My intention is a simple one: ON THE AVERAGE a host machine running Einstein@Home should get the same number of credits/cpu-hour as a host machine running the other BOINC projects that grant credit.

Here ON THE AVERAGE means averaged across all the hosts that are attached to multiple projects, and averaged across all the projects (suitably weighted by the number of cross-project hosts).

Rationale: this way, people will choose projects based on their scientific and other merits, and likelihood of success and impact, NOT for other reasons such as credit granted.

Corollary: assuming that other BOINC projects do the same, this will tend to make hosts move to the projects that they are best suited for.

Cheers,
Bruce

Director, Einstein@Home
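
To make the averaging Bruce describes concrete, here is a minimal sketch in Python; the data layout, the sample numbers and the exact weighting are illustrative assumptions, not the project's actual accounting code.

# Illustrative sketch of "same credits/cpu-hour ON THE AVERAGE": average the
# rates seen by cross-project hosts, weight the other projects by how many
# such hosts they have, and derive a correction factor for Einstein@Home.
from collections import defaultdict

# (host, project, credits granted, cpu-hours) for hosts attached to more
# than one project -- made-up numbers, purely for illustration
samples = [
    ("host1", "einstein", 200.0, 10.0),
    ("host1", "other_a",  260.0, 10.0),
    ("host2", "einstein", 110.0,  5.0),
    ("host2", "other_b",  120.0,  5.0),
]

def mean(xs):
    return sum(xs) / len(xs)

rates = defaultdict(list)                 # credits/cpu-hour, per project
for host, project, credits, hours in samples:
    rates[project].append(credits / hours)

einstein_rate = mean(rates["einstein"])

# average over the other projects, weighted by their number of
# cross-project hosts (the "suitably weighted" part above)
others = {p: r for p, r in rates.items() if p != "einstein"}
total_hosts = sum(len(r) for r in others.values())
other_rate = sum(mean(r) * len(r) for r in others.values()) / total_hosts

adjustment = other_rate / einstein_rate   # factor to apply to granted credit
print(f"Einstein: {einstein_rate:.1f} c/h, others: {other_rate:.1f} c/h, "
      f"credit multiplier needed: {adjustment:.2f}")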

ErichZann
Joined: 11 Feb 05
Posts: 120
Credit: 81582
RAC: 0

RE: If it's any

Message 44214 in response to message 44212

Quote:

If it's any consolation, your 2200+ will get lower credits. The new credit for short WUs is 13.xx (I just checked, and so far it's being awarded 16.xx).

Hmm, OK, I'll wait and see how much it ends up being... But even then I get 16.7 c/h on the 2200+ and 21.4 c/h on the 3500+.
In other benchmarks etc. the 3500+ has almost double the score of the 2200+, so the difference is "not big enough" in my eyes... but OK, I'll wait and see.

Jim-R
Joined: 8 Feb 06
Posts: 12
Credit: 4352
RAC: 0

RE: Hmm, OK, I'll wait and see

Message 44215 in response to message 44214

Quote:

Hmm, OK, I'll wait and see how much it ends up being... But even then I get 16.7 c/h on the 2200+ and 21.4 c/h on the 3500+.
In other benchmarks etc. the 3500+ has almost double the score of the 2200+, so the difference is "not big enough" in my eyes... but OK, I'll wait and see.

Benchmarks are good for giving you an estimate of the relative speed of computers, but they are not good at predicting exactly how two computers will behave in the real world. Benchmarks use only one or a small number of different calculations, and they are only run for a short time. They also don't write to disk or anything of that nature, so they cannot tell the whole story. Someone mentioned elsewhere that the benchmarks BOINC uses only exercise L1 cache, while the science applications make heavy use of L2 cache. Also, you can't just compare clock speed or basic benchmarks; you have to consider every variable in the computer. One computer may have a very high core clock speed but a slow memory bus. One may have its memory timings set correctly while the other has timings that are not as efficient.

I saw one cruncher who had the exact same issue you do (except his faster computer was actually "slower" at crunching than his "slow" computer), and he found that the memory timings were off on the "faster" machine. When he reset them to the proper values, the speed jumped to near what it was supposed to be. So, as I said, benchmarks don't tell the whole story. If you don't get the performance you think you should in the real world, there may be a cure you can apply yourself, such as memory timings or bus speed settings; otherwise you may have memory bandwidth problems that can only be fixed by replacing the motherboard.

As a perfect example of a problem on one computer, though not related to this issue: I have a 1 GHz AMD Duron on a motherboard whose IDE ports run at up to 133 MHz bus speed. My limit there is the hard drive, which is on a 100 MHz bus. Windows would load programs in no time (what little time I had it on there! haha), but Linux, while usually much faster than Windows even on a slower computer with only a 33 MHz IDE bus, slowed to a crawl during disk I/O. I found that Linux must be told the IDE bus speed or it defaults to 33 MHz!

Also, as Mr. Allen pointed out, some computers are better at one type of WU than another. So to make the best use of a computer, you may even want to switch it to some other project and use a different machine to crunch here, or trade it in on one better suited to these WUs.

When asked a question and you are not sure of the right answer, I've found that the best answer is always "I don't know for sure, but I'll find out!"

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 109

RE: Someone mentioned

Message 44216 in response to message 44215

Quote:
Someone mentioned elsewhere that the benchmarks BOINC uses only exercise L1 cache, while the science applications make heavy use of L2 cache. Also, you can't just compare clock speed or basic benchmarks; you have to consider every variable in the computer. One computer may have a very high core clock speed but a slow memory bus. One may have its memory timings set correctly while the other has timings that are not as efficient.

Unless the new science app has a larger working set than the S4 ones for some reason, it will fit entirely within the 32 KB L1 cache of an Athlon with room to spare (Akos's apps used between 10 and 20 KB depending on the variant). P4s, with only an 8 KB L1 cache, had to continually shuffle data between it and the L2 cache, which meant they began to see smaller gains from Akos's latest apps because they were increasingly bound not by the speed of the CPU but by having to move data between the caches.
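
As a back-of-the-envelope check of the cache argument, here is a small sketch; the working-set sizes are the 10-20 KB range quoted above, the L1 sizes are the ones the posts mention, and treating them as plain data sizes is a deliberate simplification.

# Quick check of the cache argument above, using the figures quoted in the
# posts (10-20 KB working set, 32 KB Athlon L1, 8 KB P4 L1). This is an
# illustration, not a measurement of the real science app.
L1_SIZES_KB = {"Athlon (as quoted)": 32, "P4 (as quoted)": 8}

for working_set_kb in (10, 20):
    for cpu, l1_kb in L1_SIZES_KB.items():
        verdict = "fits in L1" if working_set_kb <= l1_kb else "spills into L2"
        print(f"{working_set_kb:2d} KB working set on {cpu:18s}: {verdict}")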

ErichZann
Joined: 11 Feb 05
Posts: 120
Credit: 81582
RAC: 0

RE: Also you can't just

Message 44217 in response to message 44215

Quote:
Also, you can't just compare clock speed or basic benchmarks; you have to consider every variable in the computer. One computer may have a very high core clock speed but a slow memory bus. One may have its memory timings set correctly while the other has timings that are not as efficient.

Yes, sure, that's right. I just don't see ANY advantage of an Athlon XP at 1800 MHz with 266 MHz no-name RAM on an old VIA chipset against an Athlon 64 at 2350 MHz with dual-channel Corsair RAM running at 428 MHz on an nForce3 chipset *g*
But OK, the RAM isn't really used, and I think the calculations are so "basic" that most of the new features needed by games, movie encoding or whatever aren't really used by the science app, so that should be one reason. I'll just let it crunch and not look at the credits ;)

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 109

I think you have to blame

I think you have to blame the benchmarks for overrating your 3500+. The Einstein app scales linearly with CPU clock speed, and the roughly 30% clock advantage of your 3500+ is a good fit to the difference in credit rates between the two machines. If part of the benchmark uses SSE2/SSE3 instructions (not available on the XP), or depends on the higher memory speeds of the 3500+, that explains the gap in the benchmark scores.
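
A quick check of the linear-scaling argument with the figures already quoted in this thread (1800 MHz and 2350 MHz clocks, 16.7 and 21.4 credits/hour); the snippet just does the arithmetic, nothing here is new data.

# Clock speeds and credits/hour as reported earlier in this thread.
clock_xp, clock_a64 = 1800, 2350      # MHz: Athlon XP 2200+ vs Athlon 64 3500+
cph_xp, cph_a64 = 16.7, 21.4          # observed credits/hour

clock_ratio = clock_a64 / clock_xp    # ~1.31, i.e. ~31% higher clock
credit_ratio = cph_a64 / cph_xp       # ~1.28, i.e. ~28% more credit/hour

print(f"clock ratio:  {clock_ratio:.2f}")
print(f"credit ratio: {credit_ratio:.2f}")
# The two ratios agree to within a few percent, which is what you would
# expect if the app really does scale linearly with clock speed.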

[AF>HFR>RR] Black Hole Sun
Joined: 14 Mar 05
Posts: 31
Credit: 890358
RAC: 0

RE: RE: If Bruce reads

Message 44219 in response to message 44213

Quote:
Quote:

If Bruce reads this thread, I would be VERY interested in his opinion since he has to deal with all these elements.

My intention is a simple one: ON THE AVERAGE a host machine running Einstein@Home should get the same number of credits/cpu-hour as a host machine running the other BOINC projects that grant credit.

Here ON THE AVERAGE means averaged across all the hosts that are attached to multiple projects, and averaged across all the projects (suitably weighted by the number of cross-project hosts).

Rationale: this way, people will choose projects based on their scientific and other merits, and likelihood of success and impact, NOT for other reasons such as credit granted.

Corollary: assuming that other BOINC projects do the same, this will tend to make hosts move to the projects that they are best suited for.

Cheers,
Bruce

Thanks for your reply, Bruce.

Well, it seems we have different points of view.
I agree that all BOINC projects should grant the same credit / hour / CPU at the beginning. But when a project makes the effort of optimizing its code to better suit the hosts participating in it, I think that project should also be rewarded by attracting RAC hunters.

What I understand from your reply is that there might be a problem with new projects (or new client versions) that could be intentionally coded like shit just to gain wonderful "optimizations" later.
So to avoid this, the easiest way is to enforce an average credit / hour / CPU across BOINC projects. A project that makes the effort to optimize gains time (and saves money) but won't disturb its BOINC mates. This might be the only way to keep the BOINC platform attractive for current and future projects.

By the way, my Opteron is struggling against your army of them to reach the top computers list :) And you may like this software for managing them: http://forum.boincstudio.boinc.fr/boincstudio/support-international/liste_sujet-1.htm

Scott Brown
Joined: 9 Feb 05
Posts: 38
Credit: 215235
RAC: 0

RE: RE: If Bruce reads

Message 44220 in response to message 44213

Quote:
Quote:

If Bruce reads this thread, I would be VERY interested in his opinion since he has to deal with all these elements.

My intention is a simple one: ON THE AVERAGE a host machine running Einstein@Home should get the same number of credits/cpu-hour as a host machine running the other BOINC projects that grant credit.

Here ON THE AVERAGE means averaged across all the hosts that are attached to multiple projects, and averaged across all the projects (suitably weighted by the number of cross-project hosts).

Rationale: this way, people will choose projects based on their scientific and other merits, and likelihood of success and impact, NOT for other reasons such as credit granted.

Corollary: assuming that other BOINC projects do the same, this will tend to make hosts move to the projects that they are best suited for.

Cheers,
Bruce

While I agree regarding cross-project comparability, I still cannot fathom the method being used to reach this goal. Attempting to correct across projects using the "averages" you describe seems a near impossibility. Since the credit rate is completely arbitrary, why not negotiate a standard rate (e.g., X credits per hour on machine Y) to which all projects must conform in order to use the BOINC system? It seems to me that the BOINC developers are stuck on trying to bring credits back into line with the pre-optimized rate from SETI.
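
For what it's worth, a "standard rate on a reference machine" would amount to a calibration like the sketch below; the 20 c/h figure and the 26 c/h example are hypothetical, since the thread never fixes concrete values for "X credits per hour on machine Y".

# Hypothetical sketch of the "standard rate on a reference machine" idea.
# STANDARD_RATE and the example numbers are made up for illustration.
STANDARD_RATE = 20.0    # agreed credits/hour on the reference machine Y

def calibrated_credit(raw_credit, ref_machine_rate):
    """Scale a project's granted credit so that the reference machine Y
    earns exactly STANDARD_RATE credits/hour on every project."""
    return raw_credit * STANDARD_RATE / ref_machine_rate

# A project where machine Y currently earns 26 c/h would scale everything
# by 20/26, so a 13-credit result unit becomes:
print(calibrated_credit(13.0, 26.0))    # -> 10.0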
