A new Windows App is available from our Beta Test Page.
It features even more code to track down some of the remaining problems. For now we are also distributing the PDB file (containing debugging information) again with the beta package.
This App is the first that has been built with VS2005. I hope that this helps with some of the library problems we have been seeing. I don't yet know how this affects performance.
Please test.
BM
Windows S5R2 App 4.38 available for Beta Test
On a Windows XP Quad host, I tried making this transition from production 4.33 by simply shutting down boincmgr, copying in the new files, and restarting boincmgr.
The four in-process results resumed execution, now using the new app.
I'll post again tomorrow when they have actually completed (assuming they don't error out before then), but it appears that the simple, low-work-loss method works for this transition.
I presume, however, that an attempt to return to production by the same means would fail, as discussed recently.
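For anyone repeating the manual swap, a rough sketch of the copy step is below. The project directory path and file names are only assumptions for illustration; check your own BOINC install and the actual contents of the beta zip.

```python
# Hedged sketch of the manual app swap described above, not an official procedure.
# The paths and file names are assumptions; adjust them to your own BOINC data
# directory and to whatever the beta zip actually contains.
import shutil
from pathlib import Path

PROJECT_DIR = Path(r"C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu")  # assumed location
BETA_FILES = [
    Path(r"C:\Downloads\einstein_beta\einstein_S5R2_4.38_windows_intelx86.exe"),  # assumed name
    Path(r"C:\Downloads\einstein_beta\einstein_S5R2_4.38_windows_intelx86.pdb"),  # assumed name
]

def copy_beta_files():
    """Copy the beta app files into the project directory.

    BOINC (boinc.exe / boincmgr.exe) must be fully shut down first and
    restarted afterwards, as described in the post above.
    """
    for src in BETA_FILES:
        dest = PROJECT_DIR / src.name
        print(f"copying {src} -> {dest}")
        shutil.copy2(src, dest)

if __name__ == "__main__":
    copy_beta_files()
```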
Just installed it on host
Just installed it on host 831490. 32-bit Vista, BOINC v5.10.13 service install - the simple method worked for me too without work loss.
As it happens, I noticed the new release just 20 minutes after starting a new 37-hour monster WU (less than 1% completed). That should give a good speed comparison when it finishes - as you can see, the completion times on this box are pretty steady (the recent dip was when SETI was low on work - I think the SETI app overstresses the memory bus on this machine).
Bernd - the Beta download page talks about copying the TWO files from the zip package. With the .pdb, that's become three files. I'm sure beta testers should be alert enough to work it out for themselves, but just in case....
Just switched my quad over as
Just switched my quad over as well.
Paused, closed BOINC, extracted....nice and easy and worked like a charm.
I've got four monsters running; one should be done in about 3h, the others will be done in about 15h. Once the first WU finishes I'll boot over to Linux and try out that app on this host.
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
My early indications say that
My early indications say that v4.38 will be significantly slower than v4.33. I have it running on 2 hosts. Both started with "To completion" times similar to their last v4.33 results, but after a couple of hours of processing, those "To completion" times have actually increased a little and "Progress" is barely at 2%. If my extrapolation is accurate, that means my v4.33 times of 57 and 67 hours will go up to around 80 and 90 hours with v4.38.
EDIT: Corrected typos on my estimates of completion.
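For reference, the extrapolation here is nothing fancier than elapsed time divided by fraction done. A toy version is below; the numbers in it are made up for illustration, not the actual figures from these hosts.

```python
# Toy extrapolation of total run time from elapsed CPU time and the
# "Progress" percentage shown in boincmgr.  Illustration only; the numbers
# are made up, not taken from the hosts discussed above.
def estimate_total_hours(elapsed_hours: float, progress_percent: float) -> float:
    """Linear extrapolation: total ~= elapsed / fraction_done."""
    if progress_percent <= 0:
        raise ValueError("progress must be positive to extrapolate")
    return elapsed_hours / (progress_percent / 100.0)

# e.g. roughly 2 hours in at 2% done suggests ~100 hours total, though the
# first few percent are known to run slower than the middle of a result.
print(estimate_total_hours(2.0, 2.0))
```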
RE: My early indications
On my Banias Pentium M, the new app seems to run about as fast (say +/-10%) as the old one (I switched when the result was about half processed). The first few percent always seem to be slower than progress in the middle of a result, so I would not be surprised if you see the performance improve over the next couple of hours.
CU
BRM
RE: My early indications
I concur. One of my mid-result changeovers has finished:
result 86508461
My quorum partner looks unlikely to report for some hours yet, so validation must wait, but it completed without reported error.
This E6600 host has reported a good string of results with a very tight range of CPU times, from 57044 to 57136 seconds, for the most recent eleven results in the last four days. Their names were tightly grouped from:
h1_0508.35_S5R2__167_S5R2c_1
to
h1_0508.35_S5R2__149_S5R2c_1
The new mixed-application result is named
h1_0508.35_S5R2__143_S5R2c_1
This mixed-application result, at 59717 CPU seconds, is well above the expected range, and only about a third of it was computed with the new beta app.
My crude initial estimate is that the beta app is between 25% and 30% slower than expected for this result. Time and other results from other hosts will tell the tale. One does see sudden shifts in execution time between nearly adjacent results once in a while, but this one was on trend to match the usual when I switched over.
In-process work on the four results on my quad also shows a clear slow-down, though I won't estimate how much yet.
RE: My early indications
It's still early, but things may not be as bad (slow) as my first estimate (above). After a couple more hours of processing, it looks like my WUs are likely to be only 10 to 12 hours slower than the v4.33 times would have been. That is, v4.38 may only be 15% (+/-) slower (vs. the 25% (+) I reported earlier).
My host has completed its
My host has completed its first WU with 4.38 (85% with 4.33, 15% with 4.38)
http://einsteinathome.org/task/86447574
The crunch time is not obviously slower, and falls within the variation of time needed for 4.33.
EDIT: the "slowing" in the results page just after the beta app was started is a result of my playing a game which used most of the threads' CPU cycles.
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
RE: Bernd - the Beta
Thanks. Fixed.
BM
My quorum partner finished on
My quorum partner finished on the second mixed-execution result run on my E6600.
result 86514583
It validated, and took 64175 CPU seconds for a 440-cobblestone result for which the last ten 4.33 results had taken a tight distribution around 57100 seconds.
So that is about 12% longer, and this was still a mixed-application result. I'll revise my slowdown estimate to roughly 15% extra time for Core 2 processors on Windows XP. I should have a much better estimate by tomorrow, when several unmixed results will be in from each of two hosts.
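The arithmetic behind that revision is just the extra time over the 4.33 baseline charged to the fraction of the result actually run under the beta app. A small sketch follows; the beta fraction in it is an assumed figure for illustration, since the real split isn't shown on the result page.

```python
# Back-of-envelope estimate of the beta app's slowdown from a mixed-application
# result.  The fraction run under the beta app is an assumption for illustration;
# the real split isn't recorded on the result page.
def beta_slowdown(total_cpu_s: float, baseline_cpu_s: float, beta_fraction: float) -> float:
    """Return the implied fractional slowdown of the beta app.

    Assumes the portion run under 4.33 took its usual share of the baseline
    time, so all of the extra time is charged to the beta portion.
    """
    extra = total_cpu_s - baseline_cpu_s
    expected_beta_share = baseline_cpu_s * beta_fraction
    return extra / expected_beta_share

# Second mixed result on the E6600: 64175 s vs. a ~57100 s baseline.
# If (say) 80% of it ran under 4.38 (an assumed figure), the implied slowdown
# is about 15%, consistent with the revised estimate above.
print(f"{beta_slowdown(64175, 57100, 0.80):.1%}")
```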
So far the Core 2 Quad host has three successful mixed-application outcomes, with no validations yet. There are clear slow-downs on all three, which, with rising proportions of the new app, required 93984, 94561, and 96706 CPU seconds to complete 656-cobblestone results that had been taking about 87000 seconds on the same host.
I also have a successful outcome from a mixed result running on a Banias Pentium M WinXP host. No validation result yet there either.