Why no explicit benchmarking runs on E@H?

Richard Schumacher
Joined: 8 Aug 06
Posts: 32
Credit: 14212314
RAC: 0
Topic 191700

Climateprediction ran them periodically. I would have thought that benchmarking would be needed to set up a useful crediting scheme (that is, one which is apparently fair and, more importantly, offers incentives to attract the most appropriate architectures to each project). Or is the benchmarking done invisibly by keeping runtime statistics?

For a given work unit, credits awarded to different architectures can simply be made inversely proportional to completion time, true? In contrast, scaling credits between work units and between projects gets difficult, because each architecture will have different efficiencies on different work units.

Pooh Bear 27
Joined: 20 Mar 05
Posts: 1376
Credit: 20312671
RAC: 0

Why no explicit benchmarking runs on E@H?

Benchmarks are run by the BOINC client, but they are no longer needed for most projects.

Credit is being equalized from project to project by counting the FLOPs it takes to do a result. CPDN never used the benchmarks; it had a static crediting system all along. Time is no longer a factor: if it takes you 2 hours to do 5000 FLOPs and another machine takes 6 hours to do the same 5000 FLOPs, you both get the same credit for that result. The faster machine still earns more credit per hour, because it does the FLOPs at 3x the rate of the other machine.
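
A rough Python sketch of that idea (the function name and scale factor are invented for illustration, not the actual BOINC server code): credit depends only on the FLOPs in the result, so both hosts claim the same credit, while credit per hour still scales with speed.

# Hypothetical illustration of FLOP-based crediting; the constant and names
# are assumptions, not the real BOINC implementation.

def credit_for_result(flops_done, credit_per_flop=0.01):
    # Credit depends only on the work done, not on how long it took.
    return flops_done * credit_per_flop

flops = 5000                      # same result on both hosts (numbers from the post above)
credit = credit_for_result(flops)

for hours in (2, 6):              # fast host vs. slow host
    print(f"{hours} h: {credit:.0f} credits for the result, "
          f"{credit / hours:.1f} credits/hour")
# Both hosts get the same credit per result; the 2-hour host earns 3x as much per hour.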

It is a lot simpler to do it this way, and it removes a way for people to fool the system into thinking their machine was doing work faster than it really was.
