Report deadline too short

Holger Setihome.dk
Joined: 9 Feb 05
Posts: 1
Credit: 81533
RAC: 0
Topic 188040

Why is the deadline for WUs so short?
I run 3 projects on my computer (24/7): Climate (25%), Seti (50%) and Einstein (25%).
When I receive a WU, it seems that you do not take into account that Einstein only runs at 25%. The result is that the short time limit is exceeded, and I therefore will not get credit for my work.

PCHome Denmark
Joined: 17 Nov 04
Posts: 8
Credit: 32034
RAC: 0

Report deadline too short

It would be nice to have a longer time limit, so users also have the option to run other projects on their machines.

Ziran
Joined: 26 Nov 04
Posts: 194
Credit: 605124
RAC: 686

The crunching time of the

The crunching time of the WUs is what it is for scientific reasons, and therefore cannot be shortened.
The deadline is set to 7 days to minimize the size of the database, which reduces the strain on the server and allows more people to participate.

For more information on this subject check out this thread.
http://einsteinathome.org/node/187702

When you're really interested in a subject, there is no way to avoid it. You have to read the Manual.

Cochise
Joined: 11 Feb 05
Posts: 38
Credit: 3717
RAC: 0

This topic has been discussed

This topic has been discussed many times. Some of the suggestions that have come out of these discussions are to:

A - allocate more resources to E@H to crunch units faster
B - make sure your cache is set to a small size (0.1 - 0.5) to ensure you don't have lots of WUs sitting on your machine

I run 4 of the projects and have the resource shares set to:
predictor - 100
climate - 100
einstein - 100
seti - 25

With the cache set to 0.5, I have no problem meeting deadlines. I don't run my machine 24x7 either; I just have it set to run all the time the machine is on, about 6 hours a day.
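
Roughly, those shares work out like this (just a toy C++ sketch, not BOINC code; the share values and the ~6 hours/day of uptime are the figures from this post):

// Toy sketch: how BOINC-style resource shares translate into CPU fractions
// and hours per day for the shares listed above.
#include <cstdio>

int main() {
    const char* names[]  = {"predictor", "climate", "einstein", "seti"};
    double      shares[] = {100, 100, 100, 25};

    double total = 0;
    for (double s : shares) total += s;            // 325

    const double uptime_hours_per_day = 6.0;       // machine runs ~6 h/day
    for (int i = 0; i < 4; ++i) {
        double fraction = shares[i] / total;       // einstein: 100/325 ~= 30.8%
        std::printf("%-9s %5.1f%% of CPU -> ~%.1f h/day\n",
                    names[i], 100 * fraction, fraction * uptime_hours_per_day);
    }
    return 0;
}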

good luck!! ;-)

(The numbers in my sig are from when I had equal resource shares and had just joined Einstein.)

harkain
Joined: 24 Feb 05
Posts: 1
Credit: 7968147
RAC: 0

Maybe it is really a Boinc

Maybe it is really a BOINC problem. For example, in my case, if the resource share is say 16.6% and the cache is 2 days, it should only download enough workunits to finish them in 8 hours at 100%, i.e. less than one unit (and maybe a spare); instead it loads 3 units. The third unit will probably not be done before the deadline... Of course, as the cache size approaches the number of days before the deadline it will begin to fail anyway, unless BOINC also considers the time until the deadline when calculating how much to download.
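
Something like this back-of-the-envelope calculation is what I would expect the download estimate to do (a hypothetical sketch, not actual BOINC code; the 10-hour crunch time per WU is an assumed figure, only the share, cache and deadline values are from my setup):

// Hypothetical download estimate: fill the cache window, but never fetch
// more than can finish before the deadline at this resource share.
#include <algorithm>
#include <cstdio>

int main() {
    const double share_fraction  = 0.166;  // this project's resource share
    const double cache_days      = 2.0;    // "connect every" setting
    const double wu_crunch_hours = 10.0;   // assumed crunch time per WU
    const double deadline_days   = 7.0;    // E@H report deadline

    // CPU hours this project gets during the cache window (~8 h here).
    double project_hours = cache_days * 24.0 * share_fraction;

    // How many WUs fit in the cache window, plus one spare.
    int wanted = (int)(project_hours / wu_crunch_hours) + 1;

    // How many WUs can actually finish before the deadline at this share.
    int fit_by_deadline = (int)(deadline_days * 24.0 * share_fraction / wu_crunch_hours);

    int to_download = std::min(wanted, std::max(fit_by_deadline, 1));
    std::printf("download %d WU(s)\n", to_download);
    return 0;
}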

keputnam
Joined: 18 Jan 05
Posts: 47
Credit: 84110314
RAC: 0

> Maybe it is really a Boinc

Message 5469 in response to message 5468

> Maybe it is really a Boinc problem, for example in my case if the resource
> share is say 16.6% and the cache is 2 (days) it should only download enough
> workunits to terminate them in 8 hours at 100% or less than one unit (and
> maybe a spare), instead it loads 3 units. The third unit will probably not be
> done before the deadline... Of course as the cache size approaches the number
> of days before the deadline it will begin to fail anyway, unless Boinc also
> considers time until deadline when calculating how much to download.
>

Again, this has been discussed, and I think I saw somewhere that the developers were working on it. Right now, the Core Client is the only place that knows that you are running multiple projects. The individual project schedulers ASSUME (and we both know what that means) that you are only running their project.

The solution, especially if CPDN is one of your projects, is to set a connect time of 0.1 or 0.5 days and let the multiple projects act as your cache of work, rather than caching multiple WUs from each project.


Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

> > Maybe it is really a

Message 5470 in response to message 5469

> > Maybe it is really a Boinc problem, for example in my case if the resource
> > share is say 16.6% and the cache is 2 (days) it should only download enough
> > workunits to terminate them in 8 hours at 100% or less than one unit (and
> > maybe a spare), instead it loads 3 units. The third unit will probably not be
> > done before the deadline... Of course as the cache size approaches the number
> > of days before the deadline it will begin to fail anyway, unless Boinc also
> > considers time until deadline when calculating how much to download.
>
> Again, this has been discussed, and I think I saw somewhere that the
> developers were working on it. Right now, the Core Client is the only place
> that knows that you are running multiple projects. The individual project
> schedulers ASSUME (and we both know what that means) that you are only running
> their project.

Not true. The scheduler gets the 'resource share fraction' associated with the E@H project. Please see the FAQ on the E@H front page.

> The solution, especially if CPDN is one of your projects is to set a connect
> time of .1 or .5 days, and let the multiple projects act as your cache of
> work, rather than caching multiple WUs from each project.

This is reasonable.

Cheers,
Bruce

Director, Einstein@Home

Colin Porter
Joined: 15 Feb 05
Posts: 21
Credit: 6479335
RAC: 0

I would also like to see the

I would also like to see the E@H deadline extended to 14 days. I run S@H as well, and although E@H WUs take twice as long to crunch, they have only half the allotted time. I am still trying to strike a reasonable balance after two weeks.

Warning! This post contains atrocious spelling, and terrible grammar. Approach with extreme edginess.

It's not the speed, but the quality - Until I get a faster computer

Ziran
Joined: 26 Nov 04
Posts: 194
Credit: 605124
RAC: 686

I see that you the last three

I see that over the last three days you have crunched a WU in less than one day. So even with an equal resource share with SETI you would be able to do an Einstein WU in less than 2 days. So your problem isn't really the 7-day deadline; it is that WUs are stored on your computer too long before the computer starts to crunch them. If you haven't already lowered the value for the "connect to server every" option, try setting it to 0.1 and run it for a week, and you will get a better feeling for what your computer is capable of.
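
As a quick sanity check of that arithmetic (a toy sketch; the one-day crunch time is the worst case from your recent results, and the 50% share is the equal-share scenario):

// Toy check: wallclock time to finish one WU at a given resource share,
// compared against the 7-day deadline.
#include <cstdio>

int main() {
    double crunch_days    = 1.0;   // worst case from the host's recent results
    double share_fraction = 0.5;   // equal share with SETI
    double deadline_days  = 7.0;

    double elapsed_days = crunch_days / share_fraction;   // ~2 days wallclock
    std::printf("finishes in ~%.0f days, deadline %.0f days: %s\n",
                elapsed_days, deadline_days,
                elapsed_days <= deadline_days ? "OK" : "late");
    return 0;
}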

When you're really interested in a subject, there is no way to avoid it. You have to read the Manual.

John McLeod VII
Moderator
Joined: 10 Nov 04
Posts: 547
Credit: 632255
RAC: 0

> > Again, this has been

Message 5473 in response to message 5470

> > Again, this has been discussed, and I think I saw somewhere that the
> > developers were working on it. Right now, the Core Client is the only place
> > that knows that you are running multiple projects. The individual project
> > schedulers ASSUME (and we both know what that means) that you are only running
> > their project.
>
> Not true. The scheduler gets the 'resource share fraction' associated with the
> E@H project. Please see the FAQ on the E@H front page.
>
The resource share is completely insufficient. If all of the other projects are down, and Einstein has only 1% of the CPU, it should download a WU and start working on it, even though, if the shares were honored strictly, that WU would not complete on time. The CLIENT should be intelligent enough to schedule the work that it has so that it completes on time, to not download any more work if it is getting into time trouble, and to not download work from a project that has been crunching too much recently. BOINC is supposed to be a multi-project application, but the download and CPU schedulers do not handle multi-project settings very gracefully.

The client assumes that it can request a full queue from each project. If you have 10 projects and ask for a 1-day queue, you will end up with more than 10 days of work in it. The project server assumes that all projects will be providing work all the time. This is also a bad assumption, as some projects will only have work sporadically.

Scenarios that are not handled well by the download and CPU schedulers on the client, or by the project scheduler, are: tight deadlines (Einstein, Predictor), sporadic work (LHC, Pirates), short deadlines (Pirates, Predictor), multiple projects (recommended), and slow machines.

Short deadlines are not the same as tight deadlines. A short deadline can be an hour, but if the crunching is only going to take 5 minutes, it is not really tight. A tight deadline, on the other hand, has a low ratio of time until the deadline to time needed to crunch.
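
For example (a toy sketch; the numbers are made up and only the ratio matters):

// "Tightness" here is (time until deadline) / (time needed to crunch);
// a low ratio means a tight deadline, regardless of how short it is.
#include <cstdio>

double tightness(double hours_to_deadline, double hours_to_crunch) {
    return hours_to_deadline / hours_to_crunch;   // low ratio => tight
}

int main() {
    // A 1-hour deadline with 5 minutes of crunching is short but not tight.
    std::printf("short deadline: ratio %.1f\n", tightness(1.0, 5.0 / 60.0));      // 12.0
    // A 7-day deadline with 5 days of crunching (at a low share) is tight.
    std::printf("tight deadline: ratio %.1f\n", tightness(7 * 24.0, 5 * 24.0));   // 1.4
    return 0;
}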

Bruce Allen
Moderator
Joined: 15 Oct 04
Posts: 1119
Credit: 172127663
RAC: 0

> > > Again, this has been

Message 5474 in response to message 5473

> > > Again, this has been discussed, and I think I saw somewhere that the
> > > developers were working on it. Right now, the Core Client is the only place
> > > that knows that you are running multiple projects. The individual project
> > > schedulers ASSUME (and we both know what that means) that you are only running
> > > their project.
> >
> > Not true. The scheduler gets the 'resource share fraction' associated with the
> > E@H project. Please see the FAQ on the E@H front page.
> >
> The resource share is completely insufficient. If all of the other projects
> are down, and Einstein has only 1% of the CPU, it should download a WU and
> start working on it.

If there is NO E@H work queued on a machine, and it is not doing any E@H work, then a request for work will be met with at least one WU, provided that the memory and disk space are available.

> Even though if the shares were carried out strictly,
> that WU would not complete on time. The CLIENT should be intelligent enough
> to be able to schedule the work that it has so that it completes on time, not
> download any more work if it is getting into time trouble, and not download
> work from a project that has been crunching too much recently. BOINC is
> supposed to be a multi project application, but the download and CPU
> schedulers do not handle multi project settings very gracefully.

John, I agree that the Client could be more intelligent in how it requests work, particularly in the situation where not all projects are providing work all the time.

> The client assumes that it can request a full queue from each project. If you
> have 10 projects and ask for a 1 day, your queue will end up with more than 10
> days of work in it. The project server assumes that all projects will be
> providing work all the time. This is also a bad assumption as some projects
> will have work sporadically.

This may be true for the schedulers of other projects. But it is NOT true of the E@H scheduler. If you read the relevant code in sched_send.C (search for estimate_wallclock_time() and read the function calling it) you'll see that the E@H scheduler estimates the wallclock time it will take to complete a job, taking the resource share into account. If that share is 10%, then E@H will only issue 10,000 CPU seconds of work when 100,000 wallclock seconds are available. This is a reasonably recent addition to the generic BOINC scheduling code and I don't know which other projects are using it, though I assume that S@H is.
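
In simplified form the idea is something like this (a sketch of the behaviour only, not the actual sched_send.C code; the 5,000-CPU-second cost per result is an assumed figure):

// Sketch of share-aware work issue: only hand out as much work as this
// project's fraction of the host's available wallclock time can cover.
#include <cstdio>

int main() {
    double seconds_available   = 100000;  // wallclock seconds the host offers
    double resource_share_frac = 0.10;    // this project's share (10%)
    double wu_cpu_seconds      = 5000;    // assumed CPU cost of one result

    // Budget = this project's slice of the available wallclock time.
    double budget = seconds_available * resource_share_frac;   // 10,000 s
    int n = 0;
    while (budget >= wu_cpu_seconds) {
        budget -= wu_cpu_seconds;
        ++n;
    }
    std::printf("issue %d result(s), %g CPU seconds of work\n",
                n, n * wu_cpu_seconds);
    return 0;
}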

Bruce

Director, Einstein@Home
