This has only broken down recently in S5R2, and then only because the beta apps have been a lot slower than what we had before, ....
Akos made an interesting remark, stating that the new apps are, in fact, several orders of magnitude *faster* than the old ones, probably meaning that they can do the same "scientific work" many times faster. So if the pre-S5R2 apps were biplanes, the new ones seem to be jet fighters. Problem is they get assigned much longer missions (just to stretch your paradigm a bit more :-) ) in the hierarchical all-sky search of S5R2.
I just wanted to clarify "slow" a bit so people don't get the impression that the apps "deteriorated" over time in some way.
Let me just emphasize what I wrote in the original "S5R2" posting:
Quote:
The "science run #5" of the LIGO instruments, or S5 for short, gives us not only the most sensitive data, but also the largest amount of data we ever had. [...]
However, with our present [i.e. S5R1] analysis tool, the computation time needed grows as the sixth power of the amount of data.
If an analysis of the S4 data took a year, analyzing twice as much data would have taken about 64 years with the old program (and the same computing power). The new program should basically be able to do this in about a year again (actually less, but we have more than twice the data). So it's about fair to say that the new program does the same work 64 times faster than the old.
BM
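The 2^6 = 64 arithmetic above can be checked in a couple of lines. This is just my own illustration of the scaling argument, not project code:

```python
# Illustration of the scaling argument quoted above (not project code):
# if computation time grows as the sixth power of the amount of data,
# doubling the data multiplies the runtime by 2**6 = 64.

def cost_ratio(data_ratio, exponent=6):
    """Relative computation time when the data set grows by data_ratio."""
    return data_ratio ** exponent

# One year of S4 analysis -> ~64 years for twice the data with the old code.
print(cost_ratio(2))  # 64
```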
Agreed, and I didn't (and am pretty sure Bikeman didn't either) mean to give the impression that overall the project was moving 'backwards' from a performance POV on running the analysis.
It's really a perception issue on the participants' part when you look at the project with a 'black box' view and then compare one project to another, more than anything else (IMHO).
There is no work/credit lost. WU's are not aborted if they have begun to run. Only WU's in the client's queue that are no longer needed are aborted. All in all it is a good thing, because almost 100% of the WU's crunched are used. The sending of the "trailer" and the book-keeping overhead might be a pain for the project, however.
But if 3 WUs get crunched when 2 would be enough for validation, I would regard the effort for the 3rd result as "wasted" (regardless of credits granted). I'm crunching for science, not for credits. I personally would not be happy with such a policy at all.
CU
BRM
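The arithmetic behind this point can be sketched with BOINC-style workunit parameters (`target_nresults` copies sent out vs. `min_quorum` needed for validation). The 3-vs-2 numbers mirror the example in the post; they are not confirmed project settings:

```python
# Rough sketch of the redundancy arithmetic discussed above.
# target_nresults / min_quorum are BOINC-style workunit parameters:
# copies sent out vs. results required to reach a validation quorum.
# The 3-vs-2 figures are taken from the post, not from project config.

def wasted_fraction(target_nresults, min_quorum):
    """Fraction of crunched results beyond what validation strictly needs."""
    return (target_nresults - min_quorum) / target_nresults

print(wasted_fraction(3, 2))  # one third of the crunching is redundant
```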
I agree, but only mildly. Either way is fine with me.
Also, it was mentioned in the other thread that there are several reasons why this one didn't turn into another bitch session on the project. One more reason IMHO: crunchers who frequent several boards have learned as a group not to get into complaints so easily: discussions turn sour quickly, so perhaps avoiding the chance to get another discussion off track may be at work to some degree. Let's face it, sometimes it gets ugly. Witness the recent smear on the P@H boards. I think many don't want it repeated here as well.
We just started a new workunit generator with "dynamic deadlines". The deadlines of workunits generated from now on will vary between two and three weeks depending on the size of the workunit (i.e. the number of templates within it, which should be proportional to the credit granted).
We'll watch it for a while, maybe we need to adjust the actual numbers.
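A minimal sketch of how such a rule might look — the linear interpolation and the template-count bounds below are invented for illustration; the generator's actual numbers weren't posted:

```python
# Hedged sketch of a "dynamic deadline" rule: interpolate linearly between
# two and three weeks with workunit size. The template-count bounds are
# invented for illustration; the real generator's parameters weren't posted.

SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800

def deadline_seconds(n_templates, n_min=1000, n_max=5000):
    """Deadline in seconds: 2 weeks for the smallest WUs, 3 for the largest."""
    clamped = min(max(n_templates, n_min), n_max)
    frac = (clamped - n_min) / (n_max - n_min)
    return int((2.0 + frac) * SECONDS_PER_WEEK)

print(deadline_seconds(1000) / SECONDS_PER_WEEK)  # 2.0
print(deadline_seconds(5000) / SECONDS_PER_WEEK)  # 3.0
```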
Bernd Machenschalk
I am trying to contact an Einstein@home project Admin.
So the right address would probably be Bruce Allen and David Hammer.
I'll forward the request to them.
In the longer run, after upgrading our backend to a newer BOINC, you should be able to solve this problem yourself, but I don't know when such an upgrade is scheduled.
1) I have no opinion on frequency ranges. Whatever gives the best science.
2) I think 3 weeks would be fairly good, as it would allow slower hosts to get some of the larger work units done. As of now, it takes some of my machines [running full time] a full week to process a work unit. I have seen at least one confirmed case where it took a full 2 weeks [over a million seconds! holy crap].
That's great to hear! While this hasn't been an issue for me, this will hopefully bring back users who were unable to make the deadlines, or were afraid of missing them. I'm proud to be a part of a project where the development team actually listens to their user base.
Maybe mentioning this on E@H's main page will help make more users aware of this change.
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
Bernd,
I think this is a really great outcome. Thank you very much for taking the views of the participants back to the team and for being willing to get involved in this issue. This should be great for participant morale.
Cheers,
Gary.
3 weeks is still a bit short for some computers. I have one that is going to take more than 12 days to complete. I have another that I pulled off the project that was going to take more than 20 days. These are attached to other projects, so they will have other work on the host when the Einstein task is downloaded. They are going to miss deadlines even at three weeks. BTW, they are on and crunching 24/7.
Great news!
CU
BRM