The biggest advantage of running BOINC on a Mac is that it feels good. We have just recently gotten an optimized client for Einstein, as well as a brand new optimized client for SETI, both of which make use of the Mac-unique Altivec vector unit. SETI, for example, runs about 4 times faster with the optimized client. Einstein is not quite that much of an improvement over the older "standard" version, but it does put us on a par with the PCs. The particular computer you pointed out is probably using the "superbench" optimized BOINC, which helps us get more work from the server ("regular" benchmarks make the server think we take much longer to finish a WU, so it won't let us have as many WUs as we need to keep the queue filled). I'm guessing this is probably not really necessary if one has a 24/7 connection, but I don't really know.
There are optimized clients available for PCs, running under Windoze or Linux, as well.
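As a rough illustration of the work-fetch effect described above, here is a minimal sketch, assuming a simplified model in which the client estimates a work unit's duration by dividing the project's estimated operation count by the measured benchmark speed. All names and figures below are illustrative, not the real BOINC internals.

```python
def estimated_wu_duration(wu_fpops_est, benchmark_flops):
    """Estimated seconds per work unit: the project's per-WU operation
    estimate divided by the host's measured benchmark speed (simplified)."""
    return wu_fpops_est / benchmark_flops

def wus_to_fill_queue(queue_seconds, wu_fpops_est, benchmark_flops):
    """How many work units fit into the requested queue of work."""
    return int(queue_seconds // estimated_wu_duration(wu_fpops_est, benchmark_flops))

WU_FPOPS_EST = 4.0e13      # made-up per-WU operation estimate
QUEUE = 3 * 86400          # keep roughly three days of work queued

# A modest "stock" benchmark makes each WU look slow, so fewer are fetched;
# a higher "superbench" figure makes them look fast, so more are fetched.
print(wus_to_fill_queue(QUEUE, WU_FPOPS_EST, 1.2e9))   # stock benchmark: about 7 WUs
print(wus_to_fill_queue(QUEUE, WU_FPOPS_EST, 4.9e9))   # inflated benchmark: about 31 WUs
```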
So what does optimization have to do with a disproportionate increase in claimed credit when completion times are similar? Note the lower two:
http://einsteinathome.org/workunit/2147177
This comes from the optimized Core client version 4.44. Look at my computer as an example (http://einsteinathome.org/host/18591). It says:
Measured floating point speed 4934.05 million ops/sec
Measured integer speed 18156.87 million ops/sec
These benchmark numbers are what lead to the higher claimed credit.
Plus there are G5-optimized science applications (the ones that actually do the scientific work). Optimized science applications save time, but in doing so they also decrease the claimed credit, since credit is benchmark*calc.time. The last optimization saved more than 12,000 seconds per WU (it was approx. 26,000 sec and is now on average 14,000 sec). The G5 processor features a so-called vector unit (the "Velocity Engine" in Apple language). This co-processor accelerates all vector calculations by a factor of more than 10. The latest official E@H science app is optimized for the vector unit, hence the big drop in calculation time.
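To make the benchmark*calc.time relationship concrete, here is a minimal sketch of the classic claimed-credit calculation, assuming the old formula of CPU time in days multiplied by the average of the two benchmarks (converted to giga-ops/sec) times 100; the exact constants in the real client may differ.

```python
SECONDS_PER_DAY = 86400.0
COBBLESTONE_FACTOR = 100.0

def claimed_credit(cpu_time_sec, fpops_mips, iops_mips):
    """Benchmark*time credit claim (simplified model).

    cpu_time_sec : CPU time spent on the work unit, in seconds
    fpops_mips   : measured floating point speed, million ops/sec
    iops_mips    : measured integer speed, million ops/sec
    """
    giga_ops = (fpops_mips + iops_mips) / 2.0 / 1000.0   # average speed in giga-ops/sec
    return cpu_time_sec / SECONDS_PER_DAY * giga_ops * COBBLESTONE_FACTOR

# Using the benchmark figures quoted above for the G5:
print(claimed_credit(14000, 4934.05, 18156.87))   # vectorized science app: ~187 claimed
print(claimed_credit(26000, 4934.05, 18156.87))   # pre-optimization runtime: ~347 claimed
```

Under this model, the shorter runtime of the vectorized science app directly shrinks the claim, which is exactly the effect described in the post.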
One of the "fairest" methods
)
One of the "fairest" methods of making comparisons is to use the cobblestones per second ...
I have P4/Xeon computers that are running Einstein@Home and one G5 ... the Intel machines take about 40K seconds to complete a work unit and claim about 70 Cobblestones. The G5 does a work unit in about 16.5K seconds ...
So, that translates into:
G5 Claim 0.002619 CS/sec; Granted 0.0043 CS/sec
P4 Claim 0.001889 CS/sec; Granted 0.0019 CS/sec
This is very "cross-project" and allows the person who wants to "boost" their total Cobblestones as fast as they can ...
In general, my experience is to run the optimized science application where possible and run the stock client ... and if you want the highest scores ... well ... test the projects and see ... :)
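For reference, the cobblestones-per-second metric is just credit divided by CPU time; here is a quick sketch using the approximate figures quoted above (roughly 70 granted per work unit on both kinds of host).

```python
def cs_per_sec(credit, cpu_time_sec):
    """Cobblestones per second: credit for a work unit divided by its CPU time."""
    return credit / cpu_time_sec

# Approximate figures quoted above: roughly 70 cobblestones granted per WU.
print(cs_per_sec(70, 40000))   # P4/Xeon: ~0.00175 CS/sec granted
print(cs_per_sec(70, 16500))   # G5:      ~0.00424 CS/sec granted
```

Those come out close to the 0.0019 and 0.0043 granted rates quoted in the post.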
Using this is also unfair, especially with a G5. The G5 does crunch faster due to its Velocity Engine. However, this is not represented in the benchmarks of the stock client and therefore creates unfairly reduced Claimed Credits (if you care for credits at all...)
;-)
Why is this unfair? Your computer put less effort into the workunit, so it should claim less credit.
Because in the "sales pitch" it said that credit is awarded based on actual computation done, not time spent doing it, so an optimised client should claim the same as an unoptimised one in my mind, as it's done the same amount of work, just in a shorter time.
You could apply that argument to faster/slower computers: just because your computer did it quicker, does that mean you should get any less credit for the same amount of work?
This is where Paul's improved credit/benchmark system comes in, with the app reporting how much work it has done and the credit claim being based on that, rather than on an arbitrary benchmark*time=credit.
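Here is a minimal sketch of the contrast being argued, with a hypothetical work-based claim in which the science application reports the operations it actually performed. The function and figures are illustrative only, not the real BOINC accounting code or Paul's actual proposal, and the comparison numbers come from the simplified benchmark*time sketch earlier in the thread.

```python
SECONDS_PER_DAY = 86400.0
COBBLESTONE_FACTOR = 100.0

def work_based_claim(giga_ops_done):
    """Hypothetical work-based claim: the science app counts the operations it
    actually performed, so the claim does not depend on how long the host took
    or on its benchmark figures (illustrative only)."""
    return giga_ops_done / SECONDS_PER_DAY * COBBLESTONE_FACTOR

# The same work unit, crunched once by the stock science app (26,000 s) and once
# by the vectorized app (14,000 s), involves the same actual computation, so both
# runs report the same (made-up) operation count and claim identical credit,
# unlike the ~347 vs ~187 benchmark*time claims in the earlier sketch.
work_done = 160000.0   # giga-ops for the work unit (made-up figure)
print(work_based_claim(work_done))   # claim for the 26,000 s run
print(work_based_claim(work_done))   # claim for the 14,000 s run
```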