There's no formal process to
There's no formal process to determine what the baseline is. Basically all the major projects (seti, einstein, cpdn, etc.) try to keep their credit/hour rates equal. Most of the smaller ones do as well, although they're not as consistent about it.
I don't believe there was any deliberate intention to drop credit levels. On the machine of mine that's crunched the most S5R3 WUs (A64x2-win32) I've noticed a very large spread in completion times. The fastest are completing at the same credit rate as S5R2; the slowest are only returning 80% of the credit/hr. I've seen other people report a spread of as much as 1/3.
What I think happened is that Bruce et al. weren't anticipating how wide the credit variation would be, and the test sample WUs they used to calculate the amount of credit to give were all at the fast end of the spread.
My guess would be that they're trying to work out what is causing the spread so they can assign credit more appropriately, rather than just bumping the single value up to better match the average and leaving the variance as is.
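To make the arithmetic concrete, here's a minimal sketch (the runtimes and the per-WU credit below are invented for illustration; the grounded point is just that the credit is fixed per result, so a longer runtime shows up directly as a lower credit/hour):

```python
# Illustrative only: how a fixed per-WU credit turns a runtime spread into
# the credit-rate spread described above. All numbers are made up.

FIXED_CREDIT = 220.0  # hypothetical pre-assigned credit per result

runtimes_hours = {
    "fastest WU": 20.0,   # roughly the S5R2-era rate
    "slowest WU": 25.0,   # 20/25 = 80% of that rate
    "worst case": 30.0,   # a spread of about a third, as some have reported
}

base_rate = FIXED_CREDIT / runtimes_hours["fastest WU"]
for name, hours in runtimes_hours.items():
    rate = FIXED_CREDIT / hours
    print(f"{name}: {rate:.1f} credits/hour ({rate / base_rate:.0%} of the fastest)")
```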
RE: There's no formal
Thank you for the articulate response.
RE: What I think happened
That's a very accurate description of the situation.
The setup of S5R3 (splitting up the sky to get smaller workunits) apparently has an effect on the runtime of our App that we hadn't anticipated, and we are still digging for the cause of the large variation in runtime for the same number of templates (which is what the pre-assigned credit reflects). Binding the credit to the number of templates was a natural choice for the "Fstat Search" we used up to S5RI, and we found it worked for the "Hierarchical Search" of S5R2 too, but apparently it doesn't work very well for this program with the setup of S5R3.
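As a minimal sketch of what binding the credit to the number of templates means (the per-template rate and the counts below are hypothetical, not actual project values): the credit is fixed when the workunit is generated, before its runtime is known, so any runtime variation at a given template count ends up as credit/hour variation.

```python
# Sketch of template-based pre-assigned credit. CREDIT_PER_TEMPLATE and the
# template counts are hypothetical placeholders, not the project's real values.

CREDIT_PER_TEMPLATE = 0.05

def pre_assigned_credit(n_templates: int) -> float:
    """Credit granted for a workunit, determined only by its template count."""
    return CREDIT_PER_TEMPLATE * n_templates

# Two S5R3-style workunits with the same number of templates get identical
# credit, even if one of them turns out to run much longer than the other.
for patch, n_templates in [("sky patch A", 4400), ("sky patch B", 4400)]:
    print(f"{patch}: {n_templates} templates -> "
          f"{pre_assigned_credit(n_templates):.1f} credits")
```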
A credit/hour comparison of the BOINC projects, based on hosts that are attached to multiple projects, can normally be found here, though currently the database seems to be down. This is what we look at from time to time to check that we're not too far off with the credits on average.
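That cross-project check could look something like the sketch below (the data layout and all the numbers are assumptions for illustration; the actual comparison site works from its own host database):

```python
# Sketch of a cross-project credit/hour comparison restricted to hosts that are
# attached to more than one project. All records below are invented example data.

from collections import defaultdict

# (host_id, project, granted_credit, cpu_hours), multi-project hosts only
records = [
    (1, "Einstein@Home",         900.0, 80.0),
    (1, "SETI@home",             950.0, 80.0),
    (2, "Einstein@Home",         400.0, 40.0),
    (2, "climateprediction.net", 430.0, 40.0),
]

totals = defaultdict(lambda: [0.0, 0.0])  # project -> [total credit, total hours]
for _host, project, credit, hours in records:
    totals[project][0] += credit
    totals[project][1] += hours

for project, (credit, hours) in totals.items():
    print(f"{project}: {credit / hours:.2f} credits/hour")
```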
BM
I've just looked through my
I've just looked through my E@H messages and noticed that, starting at 17:53 on 29 Sep, there was a sequence of messages:
30/09/2007 19:41:12|Einstein@Home|Message from server: To get more Einstein@Home work, finish current work, stop BOINC, remove app_info.xml file, and restart.
The last was the one above. Then the server sent some file deletion requests and a new WU downloaded. Didn't trouble me as I had a spare WU waiting for when the (then) current one finished.
What was that all about?
Mike
RE: I've just looked
See the first post on this thread.
Oh dear. That was dumb.
Oh dear. That was dumb. Sorry.
Mike
RE: Oh dear. That was dumb.
No problem, I've done that myself! ;)
I am very happy with this
I am very happy with this application, no problems at all! :)
(Windows XP on two AMD Athlons)
Two more successful results
Two more successful results to report:
http://einsteinathome.org/task/87213066 - awaiting wingman; and,
http://einsteinathome.org/task/87231416 - validated
And apparently my earlier -161 file transfer error was just a fluke.
http://einstein.phys.uwm.edu/
http://einsteinathome.org/workunit/34781760
I ran it first with 4.01, but 7 hours before it would have finished I changed to 4.07 and had it resent, restarting from zero. The estimated time with 4.01 was around 29 hours. The estimated time with 4.07 and EAH_NO_GRAPHICS stayed around 27 hours the whole time.
I managed to get it done in that amount of time even with a 75% CPU throttle.
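For what it's worth, a rough reading of those numbers (interpreting the 75% CPU throttle as duty-cycle throttling is an assumption here):

```python
# Rough arithmetic on the runtimes quoted above. Treating the 75% throttle as
# a duty cycle (CPU crunching 75% of the wall-clock time) is an assumption.

estimate_401_h = 29.0   # estimated runtime under app 4.01
estimate_407_h = 27.0   # estimated (and roughly actual) runtime under 4.07
throttle = 0.75         # fraction of the time the CPU was allowed to crunch

cpu_time_h = estimate_407_h * throttle
print(f"~{cpu_time_h:.1f} h of CPU time inside ~{estimate_407_h:.0f} h wall clock")
print(f"The 4.07 estimate was {estimate_401_h - estimate_407_h:.0f} h shorter than 4.01's")
```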