We're still working on getting a grip on the large runtime variation that came up with splitting up the sky in S5R3 (it was averaged out in the "all-sky" search in S5R2).
However, we recently found that on average we were still awarding too little credit compared to other projects, so we decided to raise the credit by 7%.
Note that due to server-assigned credit this will only affect newly generated workunits, so it will take some time to propagate through to your statistics.
BM
Credit raise
Thanks for the update...
RE: We're still working on
Yep, I don't run monster caches (0.5 days currently), so I have seen the boost already.
A few more questions regarding S5R3:
1.) Have you worked up a TauWU pdf for it like you did for S5R2?
2.) Was anything done to address the problem of sending longer-running results to hosts whose CPCS reports show they can't make the deadline even running EAH solo, flat out?
3.) Do the longer-running results still have a three-week deadline rather than two? Along this line, are you still considering variable deadlines?
Alinator
RE: We're still working on
That's great, thanks for that, much appreciated.
I have just downloaded Beta 4.12 on all my Linux computers to see if there is a difference over 4.09 and 4.02.
RE: RE: We're still
We are still not even able to correctly anticipate this variation.
Personally I would favor flattening the variation through measurements in the App rather than reacting to it with deadlines and credits, but I still don't know whether this is possible.
At a meeting next week, all relevant people (including the authors of the code in question) will be at one table; I hope things get sorted out then.
BM
RE: Personally I would
I personally don't have a problem with the deadlines and the credits as they currently are, but I am very aware that is but one opinion and that there are a lot of other opinions as well. Flattening the variation in the application is an elegant solution if that works, but if not, then I am happy to stay with the situation as it is now! fwiw.... ;)
And thanks for your efforts overall!
RE: RE: Yep, I don't run
Agreed, getting the credit rate and the deadlines right is no trivial task, especially when some of the fundamentals have changed. ;-)
I also agree that getting better feedback from the hosts would be ideal to assist with making the determination. That would most likely make things somewhat easier as you make refinements to the analysis in the future, without having to worry as much about the impact on scoring.
The one observation regarding deadlines you might want to bring to the table, though, is that when considering how long to make them, a lot of folks don't particularly like projects where the 'tightness factor' is too high. On S4 and S5R1, EAH played relatively nicely with other projects, even on slow hosts. With S5R2 and up, the longer runtimes meant that slow hosts, and even faster hosts running multiple projects, were driven into EDF to complete the EAH task on time. As a side effect, this pushed the host off a 'lockstep' resource share in the short term for all projects, as BOINC worked off the LTD that had built up for the preempted tasks on other projects.
I know that's a 'dumb' way to look at the problem, but that's my take on how a lot of people look at it based on posts here and in other projects' NC fora.
I guess one could say this could be filed under, "Just when you think you have all the answers, they change all the questions!". :-)
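To put a number on the 'tightness' idea, here's a toy model of the EDF trigger (my own simplification with made-up numbers, not BOINC's actual scheduler logic):

```python
# Toy model of BOINC's earliest-deadline-first (EDF) trigger, NOT the
# real client scheduler. A task is "tight" when its remaining CPU time
# no longer fits in the share of CPU the project normally receives
# before the deadline.

def would_trigger_edf(remaining_cpu_hours, hours_to_deadline, resource_share_fraction):
    """Return True if the task can't finish at the project's normal CPU share."""
    # CPU hours the task gets before the deadline if the host keeps
    # honoring the project's resource share.
    effective_hours = hours_to_deadline * resource_share_fraction
    return remaining_cpu_hours > effective_hours

# A 40-hour result, 14-day deadline, 25% resource share:
# 14 * 24 * 0.25 = 84 h available, so it still fits at normal share.
print(would_trigger_edf(40, 14 * 24, 0.25))   # False

# The same result on a host 3x slower (120 h of CPU needed) gets driven
# into EDF, preempting other projects' tasks and building up LTD.
print(would_trigger_edf(120, 14 * 24, 0.25))  # True
```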
Alinator
RE: I also agree that
We need some more information, but I doubt that it can be provided by participants. It would probably need some additional code in the Apps to write out some traces, and something on the server side to follow them. Anyway, this is something we are still working on, and we hope to make some progress next week.
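Just to sketch the idea (the names and fields here are hypothetical, not the actual App code), the app-side tracing could be as simple as appending one timestamped record per step, which the server side can then collect and correlate with workunit properties:

```python
# Hypothetical sketch of app-side trace output; field names are made up.
import json
import time

def emit_trace(trace_file, wu_name, params):
    """Append one timestamped JSON record per traced computation step."""
    record = {"t": time.time(), "wu": wu_name, **params}
    with open(trace_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example record: workunit name plus the dynamically calculated
# parameters suspected of driving the runtime.
emit_trace("app_trace.log", "h1_0123.45_S5R3__0",
           {"freq_band": 123.45, "templates": 17432, "step_runtime_s": 212.4})
```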
BM
RE: RE: I also agree that
Oh yes, it would have to be instrumentation built into the app. Hosts don't lie, but participants might. ;-)
In any event, no matter what the outcome it sounds like it's going to be a pretty heavy duty meeting (bring lots of coffee). :-)
Alinator
RE: RE: I also agree that
Bernd,
You posted while I was preparing the next episode for publication! I've uploaded another graph here: I think it shows the structure of the time variations even more clearly.
This sequence is looking as if I might be able to follow it all the way down to result 0: at 8 WUs/day, I should finish it sometime next Wednesday. I'll post the full graph then, but if it would help I'm happy to post another intermediate snapshot in time for the start of your meeting, if you tell us when you plan to start.
RE: Oh yes, it would have
No, it's not about lying. It's about getting the relation right between certain properties of the workunit and dynamically calculated parameters in the App that affect the runtime. At the moment, neither of these is visible to the participants, and even to us only when running under a debugger (or with an amount of debug output that would fill up your hard disk). We see the sine wave over all workunits of a given frequency band, but it's still hard to predict the phase, which would be important for a specific WU.
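To illustrate what "hard to predict the phase" means (a toy model with made-up numbers, not our actual runtime code): the variation over the workunits of a band looks roughly sinusoidal, and even if amplitude and period are known in aggregate, predicting one WU's runtime still requires estimating the phase from a few measured results.

```python
# Toy sinusoidal model of per-workunit runtime within one frequency
# band; all numbers are synthetic.
import math

def predicted_runtime(seq, mean, amplitude, period, phase):
    """Runtime of workunit number `seq` under a simple sinusoidal model."""
    return mean + amplitude * math.sin(2 * math.pi * seq / period + phase)

def estimate_phase(observed, mean, amplitude, period, steps=360):
    """Brute-force the phase that best fits observed (seq, runtime) pairs."""
    best_phase, best_err = 0.0, float("inf")
    for k in range(steps):
        phase = 2 * math.pi * k / steps
        err = sum((predicted_runtime(s, mean, amplitude, period, phase) - r) ** 2
                  for s, r in observed)
        if err < best_err:
            best_phase, best_err = phase, err
    return best_phase

# A few "measured" results from one band, generated with phase = 1.0:
samples = [(s, predicted_runtime(s, 10.0, 3.0, 50, 1.0)) for s in (0, 5, 12, 30)]
phase = estimate_phase(samples, 10.0, 3.0, 50)
print(phase)  # close to 1.0 (up to the grid resolution of the search)
```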
BM