S5R5, 5.01 Credits down from S5R4

Sabroe_SMC
Joined: 9 Oct 06
Posts: 27
Credit: 359045151
RAC: 94934

RE: RE: It makes me

Message 90096 in response to message 90095

Quote:
Quote:

It makes me depressed when I see the credits going down. On my quad it went from about 33 cr/h (S5R4) through 30 cr/h and 27 cr/h down to 25 cr/h.
look here, and it's ongoing
The runtimes went down also.

How is the credit calculation done?

There are so many things that affect how many resources your computer gives to BOINC that there are several things you should do first: reboot the PC, check for spyware, etc. Each of those things can slow down your PC. In Windows, if you don't reboot about once a month, it tends to accumulate stuck resources that Windows isn't releasing, which in turn gives BOINC less and can cause a slowdown in crunching time.

My PC is NOT slowed down. Looking into my results shows that the times per WU are going down; so far, so good. But the credits per WU are going down more than the time to complete. That is my problem.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 689310808
RAC: 217272

Hi! While testing the new

Hi!

While testing the new parameter setup for S5R5, it became clear that the runtime variation, which has been present since S5R3 (I think), would be even greater in S5R5 than it was in S5R4.

The fact that runtime varies from WU to WU depending on the area of sky searched is inherent in the search algorithms used and unfortunately cannot be avoided at the moment. You might think it would be an option to reduce certain WUs in size to compensate, but that would interfere with the post-processing: all WUs return 10000 candidates, and making some smaller than others would give undue extra weight to certain areas of the sky.

So an attempt was made to compensate by varying the credits: WUs which are predicted to run faster get less credit than those that are predicted (by an approximate runtime prediction model) to run slower. The runtime model is discussed in the "Performance analysis" thread for S5R5 in this forum.

If (as in S5R4) the credits were still exactly the same for all WUs of a given frequency, the variation in credits per CPU second would actually be around a factor of 1.5 or more (!). By adjusting the credits per WU, this variation could be dampened somewhat (it seems the credits are overcompensating for the runtime variation a bit at the moment, though; this is the effect Sabroe_SMC noticed). Note that even with this overcompensation, the variation is still less than with a constant credit per WU as in S5R4. So overall, the credits/CPU second variation should NOT be worse now than it was in S5R4.

Unfortunately, compensating for runtime variation by varying the credit per WU can never be perfect:

* Different computers will show different runtime variations. Systems with faster RAM seem to show less variation, for example.

* It is not trivial to describe the runtime of a WU with a mathematical formula that fits reality.

Therefore it's virtually impossible to vary the credits in a way that exactly matches the variation of runtime.
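
To make the scheme concrete, here is a toy sketch of the idea (illustrative only, NOT the actual project code; the real runtime model is the one discussed in the "Performance analysis" thread, and all numbers below are invented):

[pre]
# Toy model of the per-WU credit scaling described above. Illustrative
# only, not the actual Einstein@Home code; all numbers are invented.
# Credit is fixed in proportion to the *predicted* runtime, so any
# prediction error surfaces as scatter in credits per CPU hour.

TARGET_CR_PER_HOUR = 25.0  # hypothetical target rate

workunits = [
    # (predicted runtime in seconds, actual runtime in seconds)
    (14000.0, 15000.0),  # predicted too fast -> below-target cr/h
    (16000.0, 15500.0),  # predicted too slow -> above-target cr/h
    (18000.0, 18000.0),  # perfect prediction -> exactly on target
]

for predicted, actual in workunits:
    credit = TARGET_CR_PER_HOUR * predicted / 3600.0  # set at WU creation
    cr_per_hour = credit / (actual / 3600.0)          # what the host sees
    print(f"predicted {predicted:7.0f} s, actual {actual:7.0f} s, "
          f"credit {credit:6.1f} -> {cr_per_hour:5.2f} cr/h")
[/pre]

With a perfect prediction, every WU would land exactly on the target rate; the residual spread comes entirely from the prediction error, which is why the variation is dampened but not eliminated.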

Variation in runtime across WUs is not something unique to Einstein@Home; I guess we are just spoiled by the almost constant runtimes that earlier runs showed. It does average out over longer periods, though.

Whether or not a general, average rise in credits (e.g. in comparison to SETI@Home) is in order is another question, one that Bernd is keeping a watchful eye on.

CU
Bikeman

Sabroe_SMC
Joined: 9 Oct 06
Posts: 27
Credit: 359045151
RAC: 94934

RE: Hi! While testing the

Message 90098 in response to message 90097

Quote:

Hi!

While testing the new parameter setup for S5R5, it became clear that the runtime variation, which has been present since S5R3 (I think), would be even greater in S5R5 than it was in S5R4.

The fact that runtime varies from WU to WU depending on the area of sky searched is inherent in the search algorithms used and unfortunately cannot be avoided at the moment. You might think it would be an option to reduce certain WUs in size to compensate, but that would interfere with the post-processing: all WUs return 10000 candidates, and making some smaller than others would give undue extra weight to certain areas of the sky.

So an attempt was made to compensate by varying the credits: WUs which are predicted to run faster get less credit than those that are predicted (by an approximate runtime prediction model) to run slower. The runtime model is discussed in the "Performance analysis" thread for S5R5 in this forum.

Why don't you simply give a fixed credit per second of crunching?
[remainder of Bikeman's message 90097 quoted verbatim]


Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5846
Credit: 109975390288
RAC: 29572619

RE: RE: .... So an

Message 90099 in response to message 90098

Quote:
Quote:
....
So an attempt was made to compensate by varying the credits: WUs which are predicted to run faster get less credit than those that are predicted (by an approximate runtime prediction model) to run slower. The runtime model is discussed in the "Performance analysis" thread for S5R5 in this forum.

Why don't you simply give a fixed credit per second of crunching?

Hi Sabroe_SMC

When you respond to a post and insert a one-line response in the middle, you need to be very careful to make sure there is a closing quote tag immediately before the start of your reply, and a new opening quote tag where your reply finishes and the previous poster's words continue. Otherwise, to the casual observer, the whole of your reply seems to be a single quote from the previous poster and your words are lost. An even better idea is to quote just the point you are responding to: if anyone wants to read the entire comments of the previous poster, they can easily go back to the referenced message and do so.

So please excuse my interference; I just want to make sure others can easily see your question and the bit from Bikeman that your question actually relates to.

To answer your question, think of it this way. I have some Q6600s that can produce around 38 credits per hour per core, i.e. around 3600 credits per day for each machine. I also have a bunch of dual PIII 1400 servers that produce around 9 to 10 credits per hour per core, or around 450 credits per day for each machine. Assuming that your suggestion was for fixed credit per second per core (not per machine), either my Q6600 should be reduced to 900 credits per day (to be comparable with the 450 per day from a PIII) or my PIII 1400 should be boosted to 1800 credits per day (to be comparable with the 3600 per day from a Q6600). Does either of those options seem fair?
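
As a quick back-of-the-envelope check of those figures (a sketch using only the per-core rates quoted above; everything else is plain arithmetic):

[pre]
# Sanity check of the Q6600 vs dual PIII comparison under a hypothetical
# "fixed credit per core-second" scheme. Per-core rates are the ones
# quoted above.

def credits_per_day(cr_per_hour_per_core, cores):
    return cr_per_hour_per_core * cores * 24

print(credits_per_day(38.0, 4))  # Q6600 today:     ~3650 cr/day
print(credits_per_day(9.5, 2))   # dual PIII today:  ~456 cr/day

# Fixing credit per core-second at the PIII's rate cuts the Q6600 to:
print(credits_per_day(9.5, 4))   # ~900 cr/day
# ...or fixing it at the Q6600's rate boosts the PIII to:
print(credits_per_day(38.0, 2))  # ~1800 cr/day
[/pre]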

To extend the concept to a ridiculous level, what incentive would there be for anyone to put a Q6600 into production if four old PI-style machines could earn the same credit just by running 24/7, perhaps returning a single result each by the deadline if they were lucky?

Surely you can't really be serious in suggesting fixed credit per second of crunching, irrespective of the power and/or efficiency of the machine doing the work?

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5846
Credit: 109975390288
RAC: 29572619

RE: RE: It makes me

Message 90100 in response to message 90095

Quote:
Quote:

It makes me depressed when I see the credits going down. On my quad it went from about 33 cr/h (S5R4) through 30 cr/h and 27 cr/h down to 25 cr/h.
look here, and it's ongoing
The runtimes went down also.

How is the credit calculation done?

There are so many things that affect how many resources your computer gives to BOINC that there are several things you should do first: reboot the PC, check for spyware, etc. Each of those things can slow down your PC. In Windows, if you don't reboot about once a month, it tends to accumulate stuck resources that Windows isn't releasing, which in turn gives BOINC less and can cause a slowdown in crunching time.

In your response, you seem to be suggesting that a lack of resources (due to malware or otherwise) can cause a decrease in credit per CPU hour of the type that the previous poster was reporting. Think about how the cr/h figures are calculated. Both the CPU seconds and the credit awarded are reported on the website for each task returned and validated; the cr/h value is calculated from just those two numbers.

What is not reported is the wall-clock time that any particular task took. So if a host were infested with malware and the wall-clock time were 10 times longer than the actual CPU time, you probably wouldn't see any significant change in the value of cr/h, although the CPU efficiency would drop dramatically.
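
A minimal sketch of that point, assuming cr/h is derived from exactly those two reported numbers as described (the figures below are invented):

[pre]
# cr/h is computed from granted credit and CPU seconds only, so inflating
# the wall-clock time (e.g. malware stealing cycles) leaves it untouched.

def credits_per_hour(granted_credit, cpu_seconds):
    return granted_credit / (cpu_seconds / 3600.0)

cpu_seconds = 15000.0  # CPU time actually charged to the task
credit = 104.0         # credit granted for the task

print(credits_per_hour(credit, cpu_seconds))  # ~25 cr/h

# Malware makes the task take 10x longer on the wall clock, but the CPU
# seconds it consumed are unchanged, so cr/h is unchanged as well. Only
# the (unreported) CPU efficiency collapses:
wall_seconds = 10 * cpu_seconds
print(cpu_seconds / wall_seconds)  # efficiency drops from ~1.0 to 0.1
[/pre]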

This is why it's quite unlikely that the changes being reported in cr/h have anything to do with the things you suggest. My own preliminary estimates show that there is probably about a 10% reduction in the cr/h for S5R5 as compared with the previous run, but don't quote me on that figure as I need to look at a lot more data from a number of different platforms to have a more useful comparison.

Cheers,
Gary.

6dj72cn8
Joined: 24 Jan 06
Posts: 24
Credit: 13321065
RAC: 0

I have no objection to

I have no objection to variation in crunch time between tasks. Nor to some variation in credit - it adds interest. But I am a little puzzled on two fronts. Firstly, if you look at the granted credits below (rightmost column), you'll see a distinct reduction happening over time. Each task was granted less credit than the one before it. Secondly, if you look at the two most recent tasks, you see that the longer-running task (17,208 sec) got less credit than the shorter (14,695). Are we sure everything is working properly in the credit code?

115414492 47371679 24 Jan 2009 2:24:29 24 Jan 2009 8:05:39 UTC 17,208.27 35.49 118.90
115090589 47221363 21 Jan 2009 0:10:10 21 Jan 2009 8:53:03 UTC 14,694.99 40.09 134.31
115090506 47221326 21 Jan 2009 0:09:03 21 Jan 2009 5:05:49 UTC 15,379.25 40.30 134.99
114983764 47172165 19 Jan 2009 23:52:52 20 Jan 2009 5:17:06 UTC 14,559.75 43.33 145.16
114912697 47139580 19 Jan 2009 3:28:09 19 Jan 2009 8:14:29 UTC 16,093.73 45.21 151.44
114394610 46901081 14 Jan 2009 5:58:23 15 Jan 2009 7:01:33 UTC 17,710.26 53.15 pending
114394609 46901080 14 Jan 2009 5:48:13 15 Jan 2009 2:53:03 UTC 18,200.57 53.44 179.03

Edit: I typed the above with double spaces between each column so that it was clear to read. I previewed before posting. But when it went 'live' the server stripped back the double spaces to singles resulting in the jumble you see above. What use is the board's preview function then?

RandyC
Joined: 18 Jan 05
Posts: 6072
Credit: 111139797
RAC: 0

RE: I have no objection to

Message 90102 in response to message 90101

Quote:
I have no objection to variation in crunch time between tasks. Nor to some variation in credit - it adds interest. But I am a little puzzled on two fronts. Firstly, if you look at the granted credits below (rightmost column), you'll see a distinct reduction happening over time. Each task was granted less credit than the one before it. Secondly, if you look at the two most recent tasks, you see that the longer-running task (17,208 sec) got less credit than the shorter (14,695). Are we sure everything is working properly in the credit code?

See BBCode tags...
If you use the pre BBCode tags (i.e. [pre] ... [/pre]), you will get the following:
[pre]115414492 47371679 24 Jan 2009 2:24:29 24 Jan 2009 8:05:39 UTC 17,208.27 35.49 118.90
115090589 47221363 21 Jan 2009 0:10:10 21 Jan 2009 8:53:03 UTC 14,694.99 40.09 134.31
115090506 47221326 21 Jan 2009 0:09:03 21 Jan 2009 5:05:49 UTC 15,379.25 40.30 134.99
114983764 47172165 19 Jan 2009 23:52:52 20 Jan 2009 5:17:06 UTC 14,559.75 43.33 145.16
114912697 47139580 19 Jan 2009 3:28:09 19 Jan 2009 8:14:29 UTC 16,093.73 45.21 151.44
114394610 46901081 14 Jan 2009 5:58:23 15 Jan 2009 7:01:33 UTC 17,710.26 53.15 pending
114394609 46901080 14 Jan 2009 5:48:13 15 Jan 2009 2:53:03 UTC 18,200.57 53.44 179.03
[/pre]

Quote:

Edit: I typed the above with double spaces between each column so that it was clear to read. I previewed before posting. But when it went 'live' the server stripped back the double spaces to singles resulting in the jumble you see above. What use is the board's preview function then?


Seti Classic Final Total: 11446 WU.

6dj72cn8
Joined: 24 Jan 06
Posts: 24
Credit: 13321065
RAC: 0

Thanks. Let's try

Thanks. Let's try it.
[pre]17,208.27 118.90
14,694.99 134.31[/pre]
Ooh. Nice.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

RE: My own preliminary

Message 90104 in response to message 90100

Quote:
My own preliminary estimates show that there is probably about a 10% reduction in the cr/h for S5R5 as compared with the previous run, but don't quote me on that figure as I need to look at a lot more data from a number of different platforms to have a more useful comparison.

That's what I said earlier in the thread, but it got buried... My best guess based on just looking at things is 10-13%.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5846
Credit: 109975390288
RAC: 29572619

RE: ... if you look at the

Message 90105 in response to message 90101

Quote:
... if you look at the granted credits below (rightmost column), you'll see a distinct reduction happening over time. Each task was granted less credit than the one before it.


This is actually to be expected because there was also a corresponding general reduction in crunch time (with minor anomalies) over the same period. Results that are expected to take less time will be awarded less credit by design.

Quote:
if you look at the two most recent tasks, you see that the longer-running task (17,208 sec) got less credit than the shorter (14,695).


This is the only real anomaly. The 17 ksec task was predicted to take even less time than the 14 ksec task, so it was awarded a lower credit based on that expectation. It has been stated in a few places already that predicting crunch time is rather problematic, so expect some anomalies.

When you evaluate your results, you need to look at the sequence numbers (seq#s) of the tasks to see what is happening. Here is your data with the non-useful stuff removed and seq#s added. I've also removed the word 'pending' and replaced it with what the credit will be when the task validates. I've used [pre] tags rather than [code] tags, which eliminates the intervening blank lines in the output.

1022  17,208.27  35.49  118.90
1047  14,694.99  40.09  134.31
1048  15,379.25  40.30  134.99
1062  14,559.75  43.33  145.16
1070  16,093.73  45.21  151.44
1100  17,710.26  53.15  178.05
1101  18,200.57  53.44  179.03
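
Applying the cr/h calculation from earlier in the thread to these rows (my arithmetic, rounded; not part of the original table) makes the trough visible:

[pre]
# cr/h for each of the tasks above: (seq#, CPU seconds, granted credit).
rows = [
    (1022, 17208.27, 118.90),
    (1047, 14694.99, 134.31),
    (1048, 15379.25, 134.99),
    (1062, 14559.75, 145.16),
    (1070, 16093.73, 151.44),
    (1100, 17710.26, 178.05),
    (1101, 18200.57, 179.03),
]

for seq, cpu_s, credit in rows:
    print(f"seq {seq}: {credit / (cpu_s / 3600.0):5.2f} cr/h")

# Seq#s 1047-1101 come out between roughly 31.6 and 36.2 cr/h, while
# seq# 1022 drops to about 24.9 cr/h -- the outlier near the trough.
[/pre]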

Seq#s over 1100 must be heading up towards the peak in the crunch-time cycle; seq#s near 1000 must be heading down towards a trough. Unfortunately there are strong 'variations' that can't be predicted. Take a look at the plot produced by Bikeman from the data of one of archae86's hosts. In the bottom left-hand region of the plot (magenta points), note the huge variation in crunch times, which would otherwise be predicted to follow a decreasing trend towards the left of the plot. It would be pretty hard to predict that lot.

Cheers,
Gary.
