This is a general question, and E@H and other BOINC projects seem to have the answer, so I'm asking it here even though it is only remotely related.
I'm just getting started in grid computing and have a couple of projects with huge code bases (~50K lines of physicist-written C++ code).
I would like an estimate of the resources required in, say, GFLOPs (number of floating-point operations). I already know how to roughly estimate the resources available in FLOPS (operations per second).
So my question is: how do BOINC projects estimate how many FLOPs a given job requires? I've seen the estimates of TFLOPS consumed so far.
Is it analytical, looking at the code and counting operations in the CPU-intensive parts? Or is it empirical, estimating the power of a CPU and the time a job takes? Or is there a better way that I haven't heard of yet?
I don't need a very accurate estimate, but I'd rather not just guess at it either.
I'm interested in a discussion even if you don't have a specific suggestion.
Joe