Information about the new S5 workunits

Constantinos
Constantinos
Joined: 16 May 05
Posts: 39
Credit: 938513
RAC: 0

13 hours of work? I think

13 hours of work?

I think S5 will be very disheartening!

Anyway, my "cruching" self will be sad (Veeeeeeeeeeeeeeeeeerrrryyyyyyyyy sad), but my "scientific" self will continue feel that is offering to science!

Viva Einstein!

Regards

Constantinos

Gravity increases significantly in Autumn, because apples fall in large numbers during that time!

Robert Everly
Robert Everly
Joined: 18 Jan 05
Posts: 9
Credit: 10393199
RAC: 0

Any guesstimates on how many

Any guesstimates on how many WUs will come from each dataset download?

Ed Parker
Ed Parker
Joined: 19 Feb 05
Posts: 11
Credit: 481732
RAC: 0

My S5 unit is at 83.6%

My S5 unit is at 83.6% complete at 29hrs 55mins, on a 1.2 GHz AMD machine. My 3.0 GHz machine is still getting S4 units.

ledi
ledi
Joined: 7 Mar 06
Posts: 15
Credit: 260202
RAC: 0

RE: I just finished my first

Message 37580 in response to message 37575

Quote:
I just finished my first S5 unit: claimed credit 119 for 13h00 of work.
http://einsteinathome.org/workunit/9763626


Ouch, I used to have units of approx. 1 hour with an optimized (akosf) version.
Now they take just over 12 hours:
http://einsteinathome.org/workunit/9789386

I am Homer of Borg. Prepare to be ...ooooh donuts!


Misfit
Misfit
Joined: 11 Feb 05
Posts: 470
Credit: 100000
RAC: 0

RE: As it has been said,

Message 37581 in response to message 37564

Quote:

As has been said, his generic optimizations, along with some extra optimizations, have been put into the new application. Processor-specific optimizations like SSE, SSE2, SSE3, etc. cannot be pushed out, because the BOINC system currently does not relay that information back to the projects, so an application cannot be targeted at that deep a level of optimization.

So we are benefiting from the hard work and dedication of Akos, Bruce, Bernard, et al. to make a better application for S5.

I am unsure what the agreement between the Einstein people and Akos is regarding optimization, but I would like to think that processor-specific optimization can be done again.


Well, my time estimates for S4 are under 2 hours. Estimates for S5 are over 5 hours.

me-[at]-rescam.org

Pooh Bear 27
Pooh Bear 27
Joined: 20 Mar 05
Posts: 1376
Credit: 20312671
RAC: 0

RE: Well, my time estimates

Message 37582 in response to message 37581

Quote:
Well, my time estimates for S4 are under 2 hours. Estimates for S5 are over 5 hours.


Quoting Bernard (earlier in this thread):

- To make up for the faster Apps we increased the size of the workunits. The "long" ones will be roughly five times as long as the "long" ones from S4; the "short" ones will be roughly twice as long as their S4 counterparts.

So going from 2 hours to 5 hours is not far off, if you were using a processor-specific optimized client and are now running with less processor-specific optimization: that's 2.5x on a "short" unit that is itself twice the size. I think that's within reason.
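The scaling described above can be sanity-checked with a quick back-of-the-envelope calculation. The 2x "short" workunit factor comes from Bernard's quote; the slowdown assumed for losing CPU-specific optimizations is purely an illustrative figure, not a measured one:

```python
# Rough sanity check of the S4 -> S5 runtime jump discussed above.
# The 2x workunit scale comes from Bernard's post; the optimization
# penalty is a hypothetical figure chosen for illustration only.

s4_runtime = 2.0        # hours per S4 unit with an optimized app
workunit_scale = 2.0    # "short" S5 units are ~2x an S4 unit
opt_penalty = 1.25      # assumed slowdown from losing CPU-specific opts

s5_estimate = s4_runtime * workunit_scale * opt_penalty
print(f"Estimated S5 runtime: {s5_estimate:.1f} hours")  # -> 5.0 hours
```

Under those assumptions, a 2-hour S4 unit landing at about 5 hours in S5 is exactly the kind of jump people are reporting.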

Nuadormrac
Nuadormrac
Joined: 9 Feb 05
Posts: 76
Credit: 219267613
RAC: 171503

Just an idea here, for people

Just an idea here, for people with multi-user clients... Right now BOINC is estimating 1.5 hours on the two S5 units I'm getting (I have been getting about 35 minutes with Akosf u41-04 on my A64)...

When we run out of S4 units, it might be prudent to suspend work fetch and clear out any queue of S4 units you have. In the interim (because the deadlines are about two weeks or so; do check this), suspend the S5 units until the S4 work is worked down and no longer being issued. Once this happens, BOINC will sense a work shortage and should then be able to be eased into the longer crunch times without immediately hitting the "panic" EDF mode...

For a similar reason, the people at Rosetta (who do offer a run-time setting) recommend increasing it only in increments, again so BOINC can be eased into the run time one is looking for. This might allow all units to get in on time, without hitting earliest-deadline-first mode due to a temporary over-commitment...

In this case the time estimate will go up, yes, but only after BOINC already senses itself to be short, albeit not yet out of work. If it gets the new time estimate before work fetch is re-enabled, it should pull the right amount of work without sending itself into a panic...

Edit: Needless to say, re-enable all S5 units, and make sure BOINC has the new time value before re-enabling work fetch, as the whole point is to ease BOINC along so it doesn't become over-committed.

Bruce Allen
Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: RE: Note that while

Message 37584 in response to message 37533

Quote:
Quote:


Note that while S4 units are still around, you may also get them after you remove the app_info.xml, and you will download and run them with the old official App (4.40 in your case). We can't completely avoid this, but the way to make it rather unlikely would be to reset the project after your client has run out of S4 work, to remove the data files that still refer to S4.

BM

So I received today 3 old WUs and 1 new. I noticed a message in my log saying

"Got server request to delete file z1_1416.5"

Does this mean I will get no more S4 WUs? Not sure if that's the data file for S4 data or not. Just wondering if I'll have to monitor my machine or do the reset project thing you propose to prevent crunching S4 data after removing my app_info.xml file.

M

This means that after your computer finishes crunching the WUs for this file, BOINC will delete that data file from your computer, because there is no more work remaining for it in the project.

Director, Einstein@Home

Bruce Allen
Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: One thing I'm curious

Message 37585 in response to message 37543

Quote:

One thing I'm curious about is checkpoint frequency. With the general increase in completion times, will results in progress be checkpointed often enough to avoid large losses when the application (and BOINC client) is stopped and restarted?

Oh, and personal opinion only - standardized credit = good; longer completion times = not so good. Completing 8 "work units" in 16 hours just "feels better" than completing 2 work units in 16 hours, credit issues aside. Of course, that's purely subjective and mostly irrelevant.

The main difference between these two scenarios is that on the server side, the 8 x 2 hours scenario requires having 8 results in the database, whereas the 1 x 16 hour scenario requires only a single result in the database. Since the database is our main project bottleneck, and we would like to be able to scale up to more users, the 1 x 16 hour is vastly better for the project.

Director, Einstein@Home
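Bruce's database argument can be put into rough numbers. The host count, daily crunching time, and validation redundancy below are made-up illustrative figures, not project statistics; the point is only the ratio between the two scenarios:

```python
# Illustrates why fewer, longer workunits reduce database load.
# All input figures here are hypothetical, chosen for illustration.

hosts = 100_000      # assumed number of active hosts
hours_per_day = 16   # assumed crunching time per host per day
redundancy = 2       # assumed copies of each result sent out

for unit_hours in (2, 16):
    rows_per_day = hosts * hours_per_day // unit_hours * redundancy
    print(f"{unit_hours:>2}h units -> {rows_per_day:,} result rows/day")
```

Whatever the real figures, 16-hour units generate one eighth as many result rows as 2-hour units for the same amount of computing, which is why the longer units ease the database bottleneck.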

Bruce Allen
Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

RE: Any guesstimates on how

Message 37586 in response to message 37578

Quote:
Any guesstimates on how many WUs will come from each dataset download?

There are 5802 data files, total. With 16.45 million workunits, this implies about 2834 workunits per data file.

But this number is a bit misleading, because there are far fewer workunits per file at low frequencies and more workunits per file at high frequencies. Note that the frequency is the XXXX.X part of the file name.

Bruce

Director, Einstein@Home
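Bruce's per-file average is simple division, treating the rounded 16.45 million figure as exact:

```python
# Average workunits per data file, from the totals quoted above.
total_workunits = 16_450_000
data_files = 5_802

per_file = total_workunits / data_files
print(f"~{round(per_file)} workunits per data file")
```

This gives roughly 2835, matching the approximate figure Bruce quotes to within rounding of the workunit total. As he notes, the real distribution is skewed by frequency, so the per-file count varies widely around this average.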
