I finally ran out of WUs, crunching about 50 a day and only downloading 32. I read in a previous thread that one way to get around this problem is to detach and then re-attach the host to the project. I attempted this, but when I requested more work the server denied my request, saying I had reached my MDQ.
I would like to keep E@H running at 100%, so if anyone has any thoughts they would be greatly appreciated.
Dave
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
Ran out of WUs
Simple fix: exit BOINC, delete the client_state.xml file, and run BOINC again. The host will be issued a new ID and will also download the old Albert program. After you download new work units, exit BOINC again and replace Albert with the optimised app. Run BOINC again. You can then merge the old host with the new one.
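If you end up doing this regularly, the local part of the recipe is easy to script. Below is a rough Python sketch of just those manual steps (stop the client, back up and remove client_state.xml, start the client again). The data-directory path and the assumption that the boinccmd and boinc binaries are on your PATH are mine, not anything the project specifies, so adjust them for your own install; the work fetch and host merging still happen the usual way.

```python
import shutil
import subprocess
import time
from pathlib import Path

# Assumed data directory -- yours may differ (e.g. C:\ProgramData\BOINC
# on Windows or /var/lib/boinc-client on many Linux installs).
DATA_DIR = Path("/var/lib/boinc-client")
STATE_FILE = DATA_DIR / "client_state.xml"
BACKUP_FILE = DATA_DIR / "client_state.backup.xml"

def reset_host_identity():
    """Back up and remove client_state.xml so the next client start
    registers as a new host (with a fresh daily quota)."""
    # Ask the running client to exit; boinccmd ships with BOINC.
    subprocess.run(["boinccmd", "--quit"], check=False)
    time.sleep(10)  # give the client a moment to shut down cleanly

    if STATE_FILE.exists():
        shutil.copy2(STATE_FILE, BACKUP_FILE)  # keep a copy for later restore
        STATE_FILE.unlink()                    # the client rebuilds it on start

    # Assumption: the client can be started directly from the command line;
    # on Windows you would restart the BOINC service or BOINC Manager instead.
    subprocess.Popen(["boinc", "--dir", str(DATA_DIR)])

if __name__ == "__main__":
    reset_host_identity()
```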
Worked great, thanks for your
Worked great, thanks for your help. I can now keep E@H running at 100%.
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
RE: Worked great, thanks
You already have the top RAC on our team; what more do you want?
Just kidding, keep up the good work!! I only run Einstein at 50% share and even I am getting close to hitting 32/day. Over the last 2 days I went through 28 and 27 units. Hope they can get the servers configured for a heavier load soon so that they can increase the daily quota. Just checked my count for today and I have already got 30, and there are 5.5 hrs of the day left. Still have lots to crunch, so not worried yet, lol.
:)
98SE XP2500+ @ 2.1 GHz Boinc v5.8.8
RE: Simple fix, exit boinc,
Is it still necessary to detach and re-attach to the project, or is deleting the client_state.xml file enough to get the scheduler to send more work? The reason I ask is that truXoft has to go through the whole calibration process again, and I would like to avoid this every time I run out of work.
Thanks
DB
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
RE: Is it still necessary
Unfortunately, that's the downside to it. I'm not 100% sure, but I think you can delete the client_state file, download 32 more, and then repeat the process. If so, you can build up a large cache and should be able to maintain it. Maybe someone else can tell you how to save the trux calibration.
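If you want to see how big the cache actually is before repeating the trick, client_state.xml is plain XML, and as far as I know each task the client holds (queued, running, or waiting to report) shows up as a <result> entry. Here is a small Python sketch along those lines; the file path and the element name are assumptions about the file layout, so double-check against your own copy.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Assumed location of the BOINC data directory; adjust for your install.
STATE_FILE = Path("/var/lib/boinc-client/client_state.xml")

def count_cached_tasks(state_file: Path = STATE_FILE) -> int:
    """Count <result> entries in client_state.xml -- roughly the number
    of tasks the client currently knows about."""
    tree = ET.parse(state_file)  # assumes the file is well-formed XML
    return len(tree.getroot().findall(".//result"))

if __name__ == "__main__":
    print(count_cached_tasks(), "tasks listed in client_state.xml")
```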
Terry
RE: Unfortunately, that's
Save the original client_state.xml, and replace it once you've finished your new account cycle and merged all the new identities together.
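In other words, copy the calibrated file somewhere safe before starting the detach cycle and copy it back (with the client stopped) after the last merge. A minimal sketch of that save/restore step, with both paths as assumptions to adjust for your own install:

```python
import shutil
from pathlib import Path

# Assumed paths -- change both for your own BOINC install.
STATE_FILE = Path("/var/lib/boinc-client/client_state.xml")
SAVED_COPY = Path.home() / "client_state.original.xml"

def save_original():
    """Keep the calibrated client_state.xml before the first detach."""
    shutil.copy2(STATE_FILE, SAVED_COPY)

def restore_original():
    """Put the calibrated file back after the new hosts have been merged.
    Only run this while the BOINC client is stopped."""
    shutil.copy2(SAVED_COPY, STATE_FILE)
```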
Thanks a lot for the advice
Thanks a lot for the advice, guys. I have built up a nice cache of WUs that should keep my computer ahead of the MDQ for a while. And by replacing the new client_state file with the old one, I have maintained the trux calibration throughout the computer mergers.
With this "back-door" around the MDQ and with Akos' turbo apps, we should be through the S4 run in no time!
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
RE: Thanks a lot for the
Now I'm envious....
Where is that faster OSX client that's been promised...?
RE: RE: Worked great,
Steve Cressman, I see you have already registered at the BOINC Synergy site, but to complete your transfer to that team you also need to go to your account page > quit team > find team > select BOINC Synergy > join team. Thank you for joining!
Click my stat image to go to the BOINC Synergy Team site!
I don't know why we must go
I don't know why we must go through these contortions.... If you get a data set with these short WUs, they process in about eight minutes. So, IMHO, the easiest solution is to raise the MDQ to at least 64 (2x) to try to match Akosf's increase in productivity (4x+). Based on what we see with our boxes, E@H is losing about 5% of its hosts for up to 12 hours/day. Sad and frustrating.