Hello.
Please look at the following tasks:
http://einsteinathome.org/task/225297498
http://einsteinathome.org/task/225297313
The first one has CPU time 20,839.67 s and granted credit 251.25.
The second one has CPU time 68,329.72 s and granted credit 500.00.
Since the run time in the second case is about three times longer, shouldn't the granted credit be around 700?
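For reference, here is the proportional-credit arithmetic behind that question, as a small Python sketch (the CPU times and credits are simply the figures quoted above; strict proportionality would actually put the second task somewhat above 800 credits):

```python
# Sketch of the proportional-credit reasoning in the post above.
# Figures are the CPU times and granted credits quoted for the two tasks.

gw_cpu_time, gw_credit = 20_839.67, 251.25    # S5GCESSE2 task
brp_cpu_time, brp_credit = 68_329.72, 500.00  # BRP3SSE task

ratio = brp_cpu_time / gw_cpu_time            # ~3.28x more CPU time
expected = ratio * gw_credit                  # ~824 credits, if credit scaled with CPU time

print(f"run time ratio: {ratio:.2f}")
print(f"credit if proportional to CPU time: {expected:.0f} (granted: {brp_credit:.0f})")
```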
S5GCESSE2 vs BRP3SSE credit
Hi Booman,
But BRP runtime on a *G*PU is less than 6000 s...
Michael
Team Linux Users Everywhere
RE: RE: 1st result / 2nd result
And this is relevant because ...??
Are you saying that if I upgraded some old computing device that was taking a week to crunch a task for 500 credits to a 2021-vintage ultra computing device (whatever that might be by then) that could do the exact same job in 3 seconds, I shouldn't expect to get 500 credits every 3 seconds?
My personal opinion is that the 'value' of the same job should be constant, irrespective of what device was actually used (and the time it took) to crunch it.
But this is irrelevant because the question was about different jobs (GC vs BRP) where it's harder to strike the proper credit relationship. My personal opinion is that the credit relationship should be determined using as many different platforms as possible to determine the full extent of the variability and then calculating some sort of (possibly weighted) average. The project has a fairly limited set of platforms to experiment with and probably can't afford the time to experiment very much even with what they have, let alone experiment more widely. It's not surprising that the ratio they settled on (500:251) is not a perfect fit for all participants.
I also find with my range of hosts that the ratio should probably be closer to 3:1 than it currently is but not all the way (perhaps around 2.5:1). In a way, it's a bit of a payback for the days when ABP credit was a little on the high side when compared to GW credit. In any case it's all about to change once again when the new S6 run gets going very shortly now.
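To illustrate the 'possibly weighted average' idea, here is a minimal sketch; the platform names, ratios and weights are invented for the example and are not real Einstein@Home measurements:

```python
# Illustrative only: a weighted average of per-platform BRP:GW run-time
# ratios.  All numbers below are made up for the example.

measurements = {
    # platform: (BRP3SSE / S5GCESSE2 run-time ratio, weight, e.g. share of hosts)
    "Intel Core 2": (2.4, 0.35),
    "Intel i7":     (2.7, 0.30),
    "AMD Phenom":   (3.3, 0.25),
    "AMD Athlon":   (3.6, 0.10),
}

total_weight = sum(w for _, w in measurements.values())
weighted_ratio = sum(r * w for r, w in measurements.values()) / total_weight

print(f"weighted BRP:GW run-time ratio ~ {weighted_ratio:.2f}:1")
```

With a wide enough spread of platforms, a figure like this could then be used to set the credit ratio, rather than the 500:251 currently in place.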
PS: For the benefit of anybody reading this thread, I've made the OP's URLs clickable although a single link to the full tasks list would probably be a better way to see a more extensive comparison of tasks.
Cheers,
Gary.
The problem is that while the
The problem is that while the GW application (S5GCESSE2) runs as fast on AMD as on Intel, the BRP application is noticeably slower on AMD than on Intel CPUs.
I hope that even before the start of the next GW run we'll have the new default BOINC credit scheme installed on Einstein@home, which should automatically distribute credit between applications and application versions depending on their efficiency.
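A very rough sketch of that idea, assuming the gist is 'normalize each application version's credit against its own running average' - this is not the actual BOINC CreditNew code, and the target value, version names and numbers below are invented:

```python
# Loose sketch only: per-app-version normalization so that slow and fast
# versions of a search converge to the same average credit per task,
# instead of the fixed per-task values used today.  Not real BOINC code.
from collections import defaultdict

class VersionScaler:
    """Tracks the average raw claim per app version and rescales new claims
    so every version averages out to the same credit per task."""

    def __init__(self, target_avg_credit: float):
        self.target = target_avg_credit
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def grant(self, app_version: str, elapsed_s: float, peak_flops: float) -> float:
        raw = elapsed_s * peak_flops              # raw "peak FLOP count" style claim
        self.sums[app_version] += raw
        self.counts[app_version] += 1
        avg_raw = self.sums[app_version] / self.counts[app_version]
        return raw / avg_raw * self.target        # normalized by this version's own average

# Hypothetical use: three tasks from the same (slower) version.
scaler = VersionScaler(target_avg_credit=500.0)
for elapsed in (68_000, 60_000, 75_000):
    print(round(scaler.grant("BRP3SSE on AMD CPU", elapsed, 4e9), 1))
```

Individual grants bounce around the target until the running average settles, which is essentially the per-task variability discussed further down the thread.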
BM
RE: The problem is that
Bernd, I do rather hope that that was an April Fool's Day joke.
Do you know whether any proper analysis has been done on David's 'CreditNew' schema yet, either statistically on whether the overall project/user/host averages remain constant for equivalent workloads, or psychologically on whether the new method fills the credit role of sustaining volunteer interest?
The only large-scale deployment of 'CreditNew' that I'm aware of is that at SETI, where it has been live on the main project for approaching 10 months, and at Beta for very slightly longer. But SETI certainly doesn't have the spare staff time to perform such an evaluation.
Prior to deployment, SETI had a pretty well-understood and deterministic method of allocating credit by flop-counting. It had its flaws, but at least one knew what a task was 'worth' - and for any one host, credit was near-enough proportional to runtime to discourage cherry-picking.
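For comparison, the old flop-counting allocation was essentially a one-line conversion - sketched below assuming the classic 'cobblestone' definition of 200 credits per day on a 1 GFLOPS reference machine (quoted from memory, so treat the constant as an assumption):

```python
# Flop-counting credit, roughly: the application counts (or closely
# estimates) the floating point operations it actually performed, so the
# same task is worth the same credit on every host.  Constant assumes the
# classic BOINC "cobblestone" definition.

def flopcount_credit(counted_flops: float) -> float:
    return counted_flops / (1e9 * 86_400) * 200.0

print(flopcount_credit(3.5e13))   # e.g. a ~35 TFLOP task -> ~81 credits
```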
Under CreditNew, the averages seem to be much as before, but what has been lost is the deterministic relationship between a task and its payback. To a casual glance, individual task credits seem near-random. They may settle down over time on fast hosts which have completed thousands, or tens of thousands, of tasks - but that will take time at Einstein, and will never happen for the majority of users. This may affect the psychological evaluation... (read: we see posts on the message boards when a user is annoyed by receiving an unusually low allocation for an individual task. There are some, but fewer, happy posts from winners when the luck of the draw goes the other way.)
There are also problems, which have never been addressed, with the automatic server run-time adjustments to individual app_version DCF. Once dialled in, they work well - but they aren't calculated until the tenth workunit of each new type has been validated. So for every new run at a project like Einstein, which changes applications regularly, runtime estimates will go haywire for a while, then suffer a nasty, sharp "DCF squared" transition at the tenth validation. No smoothing.
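A simplified caricature of that behaviour (not the actual BOINC server code; the threshold of ten validations is taken from the description above):

```python
# Caricature of the per-app_version runtime correction described above:
# the initial estimate is used unchanged until the tenth validated result,
# then the estimate jumps straight to the measured average - no smoothing.

def estimated_runtime(initial_estimate_s, validated_runtimes_s):
    if len(validated_runtimes_s) < 10:
        return initial_estimate_s                  # correction not yet applied
    return sum(validated_runtimes_s) / len(validated_runtimes_s)

actual = [21_000.0] * 12                           # true runtime ~21,000 s per task
guess = 7_000.0                                    # server's initial estimate
for n in range(13):
    print(n, estimated_runtime(guess, actual[:n])) # abrupt 3x jump at n = 10
```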
I have some off-line data from my own research - I'd be happy to explore the subject further. But this thread probably isn't the place.