Looking at mmciastro's Celeron's score really highlights the problem with Einstein credit: the huge 2-2.5x gap between the SSE and non-SSE apps. The SETI gap is almost as large, but that project calibrated its credit for the SSE boxes instead of trying to split the difference. Akos's S4 apps had a much smaller gap (1.3-1.5x), so I know the non-SSE app has plenty of room for improvement.
On a tangent: [AF]CRISTOBOOL, could you post what your G5's benchmarks say it should be taking per credit? I'd be interested to see where they sit, for reference purposes.
Sorry, but I don't understand exactly what you mean, but I can give you my BOINC benchmark results:
- 973 Mflops
- 2980 Mips
So it's not very good, and with the older BOINC clients that calculated granted credit from this benchmark it was catastrophic, even more so with optimized applications: the servers didn't send enough WUs. Fortunately, the method of calculation has changed.
So Team MAC NN optimized the BOINC benchmarks to counter these problems; with those benchmarks, the results on my computer are:
- 5032 Mflops
- 15025 Mips
More than five times faster! With these, the servers sent the right number of WUs and the granted credit was larger.
Sorry, but these benchmarks simply CAN'T be realistic. My reasonably fast AMD 64 3500+ benchmarks at 2006.27 Mflops and 3735.85 Mips. It could probably do a bit better if I switched instant messaging off for the benchmarks, but those values are okay as rough estimates. So, however good your Mac is: 15k Mips??? I think there must be something wrong with your client...
It is true, however; but don't be put off by these results, as you can find optimized BOINC benchmarks for x86 CPUs too.
It's also true that I don't know exactly what Team MAC NN changed in the code.
What I do know is that the optimized code uses SIMD units (vector units like SSE) for the MFLOPS and MIPS tests, whereas these benchmarks were never intended to test SIMD units.
Besides, the SIMD units of the PowerPC970/MPC74XX/MPC8641D/POWER5/POWER5+ are very fast with 32-bit integers.
Or maybe the benchmarks differ more than the actual performance... I know that different clients can influence the benchmarks, e.g. many clients for Linux return lower benchmarks than those for Windows even if both PCs take the same computing time for a similar-sized WU.
Exactly. The benchmarks aren't realistic - they're pure fiction.
BOINC calculates the benchmarks, and the results are given to all projects.
The Einstein (or whatever) science application actually does the work.
So changing the version of BOINC can change the reported speed, but has absolutely no effect on the actual crunching time.
In the old days (S4 on Einstein: pre-enhanced on SETI), things were different in two ways. (a) Separate optimised (high speed) science applications were available, and (b) the amount of credit claimed was worked out from the time of calculation and the reported benchmark speed.
Because the optimised apps finished early, they claimed low credit: so the BOINC clients were artificially boosted in the opposite direction, so the credit claims ended up about right. That's why you'll still see BOINC 5.3.12.tx36 in my results: it was useful in the old days.
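To make the old mechanism concrete, here is a small sketch of how a benchmark-based claim behaved. The function name and the scaling constant are mine, not BOINC's actual source; it assumes the commonly cited cobblestone reference of 100 credits per day of CPU time on a host benchmarking 1000 Mflops (Whetstone) and 1000 Mips (Dhrystone).

```python
# Hypothetical sketch of the old benchmark-based credit claim.
# Assumption: 100 credits per CPU-day on a 1000 Mflops / 1000 Mips host.

def claimed_credit(cpu_seconds, mflops, mips):
    """Credit claim computed from CPU time and the two BOINC benchmarks."""
    days = cpu_seconds / 86400.0
    # Average the two benchmarks and scale against the 1000/1000 reference.
    return days * 100.0 * (mflops + mips) / (2.0 * 1000.0)

# An optimised app finishing in half the time claims half the credit...
fast = claimed_credit(5 * 3600, 2000, 3700)
slow = claimed_credit(10 * 3600, 2000, 3700)
# ...unless the client's benchmarks are inflated 2x to compensate.
boosted = claimed_credit(5 * 3600, 2 * 2000, 2 * 3700)   # equals `slow`
```

This is why a doubled benchmark exactly cancelled a halved run time: the claim is a simple product of the two, so boosting one leg offsets shrinking the other.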
But now, credit is assigned centrally (Einstein), or by FLOPS (SETI). The benchmarks are not used, so I've turned off the fiction-generator in tx: I don't actually know of any project where it would still be appropriate.
That's something of a hot topic, in general … but several other projects use the BOINC benchmarks in credit claims: SzTAKI Desktop Grid and Leiden Classical are the two I crunch for. I'm pretty sure that any project using the BOINC server package 'right out of the box' will do so as well.
Agreed, the other projects use benchmark credit: but do they have optimised science apps? That's the reason for using fictional benchmarks: to correct the underclaim when not using 'right out of the box' science apps.
No idea if some of them do, but there are certainly some cases where people (some deliberately) use overclaiming BOINC clients with plain-vanilla science apps. In HashClash I remember some very unpleasant scenes about BOINC 5.5 (hope I got the number right), when some people suddenly got 10 times as much credit as others with similar PCs. This is definitely a loophole in the system, and where you have that, there are always some lamers who take unfair advantage.