long workunits for akosf-optimized clients

Sven Glueckspilz
Joined: 18 Mar 05
Posts: 23
Credit: 27474851
RAC: 0
Topic 191050

Hello,

Is it possible to give long workunits to the clients optimized by akosf?

This week I got a lot of short workunits and finished them after 14 hours, so I had to wait until the next day to get 32 new workunits.

If you gave the akosf clients longer workunits, the traffic would stay the same. We would crunch the long workunits first while you have time to optimize your system, and after that we would get the short workunits with a higher daily limit.

vonHalenbach
Joined: 6 Nov 05
Posts: 32
Credit: 12590
RAC: 0

Quote:

If you gave the akosf clients longer workunits, the traffic would stay the same. We would crunch the long workunits first while you have time to optimize your system, and after that we would get the short workunits with a higher daily limit.

Yes, that is a good idea in my opinion. A batch upload would help too: download 10 units to crunch, and after one day or a minimum of 5 completed units, report them to the server. Once you have completed 5 units, the application should download another 5 to fill your batch back up to 10. That should reduce the traffic on the server by about a factor of five. Bigger machines would get a bigger batch.
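A minimal sketch of that refill logic, purely for illustration - this is not how the BOINC client actually manages work, and all names and numbers (Batch, BATCH_SIZE, REPORT_THRESHOLD) are made up here:

# Toy model of the "report 5, refill to 10" batching idea above.
BATCH_SIZE = 10        # target number of workunits held locally
REPORT_THRESHOLD = 5   # contact the server once this many are finished

class Batch:
    def __init__(self):
        self.pending = BATCH_SIZE   # downloaded but not yet crunched
        self.finished = 0           # crunched but not yet reported
        self.server_contacts = 0

    def complete_one(self):
        """Called whenever one workunit finishes crunching."""
        self.pending -= 1
        self.finished += 1
        if self.finished >= REPORT_THRESHOLD:
            self.report_and_refill()

    def report_and_refill(self):
        """One server contact: report finished units and top the batch back up."""
        self.server_contacts += 1
        self.finished = 0
        self.pending = BATCH_SIZE

if __name__ == "__main__":
    b = Batch()
    for _ in range(20):          # crunch 20 workunits...
        b.complete_one()
    print(b.server_contacts)     # ...with only 4 server contacts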

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4330
Credit: 251357158
RAC: 37209

Do you have any idea how the scheduler should tell "akosf optimized clients" from our ordinary Apps?

BM

TauCeti
Joined: 1 Apr 05
Posts: 16
Credit: 1336558
RAC: 0

Message 27658 in response to message 27657

Quote:

Do you have any idea how the scheduler should tell "akosf optimized clients" from our ordinary Apps?

BM

AFAIK the client reports its version number in the communication with the server for each result.

Example: http://einsteinathome.org/task/26667812 with

2006-04-25 10:48:30.3281 [normal]: Optimised by akosf X41.02 --> 'projects/einstein.phys.uwm.edu/albert_4.37_windows_intelx86.exe'

(hey, looks like akosf is testing the X-Apps *drool*)

So how about adding an 'optimized client' flag for each computer ID in the database? That flag could be set to true if the string "Optimised by akosf" appears in the last returned result (or in a to-be-defined set of results within a timeframe).

I don't know how your backend DB is organized, but you could fire a trigger event to recalculate that flag every time a result is checked in, or every time the scheduler assigns work to the host. That would be the most accurate solution.

A more relaxed calculation of that flag would IMHO still be sufficient, because a temporarily wrong 'uses optimized client' state for a computer would not cause serious problems in the WU assignment.

So perhaps it would be sufficient to recalculate that flag, e.g. once a day for all active computers, with an update query (assuming that you can access the communication data for a result in the database with a LIKE operator).
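Just to make the idea concrete, here is a hypothetical sketch of such a daily update query. The real Einstein@Home backend is not organized like this; the table and column names (host, result, stderr_out, optimized_client, received_time) are invented for this example, and it uses SQLite purely so the snippet is self-contained:

import sqlite3

def refresh_optimized_flags(con):
    """Recompute each host's 'optimized_client' flag from its latest result."""
    with con:
        con.execute("""
            UPDATE host
               SET optimized_client = COALESCE((
                   SELECT r.stderr_out LIKE '%Optimised by akosf%'
                     FROM result AS r
                    WHERE r.hostid = host.id
                    ORDER BY r.received_time DESC
                    LIMIT 1
               ), 0)
        """)

if __name__ == "__main__":
    # Tiny in-memory demo with two hosts and one returned result each.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE host(id INTEGER PRIMARY KEY, optimized_client INTEGER DEFAULT 0);
        CREATE TABLE result(id INTEGER PRIMARY KEY, hostid INTEGER,
                            received_time INTEGER, stderr_out TEXT);
        INSERT INTO host(id) VALUES (1), (2);
        INSERT INTO result VALUES
            (1, 1, 100, 'Optimised by akosf X41.02 ...'),
            (2, 2, 100, 'stock app 4.37 output');
    """)
    refresh_optimized_flags(con)
    print(con.execute("SELECT id, optimized_client FROM host ORDER BY id").fetchall())
    # -> [(1, 1), (2, 0)]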

Tau

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4330
Credit: 251357158
RAC: 37209

Thanks for the hint.

The stderr output is kept as a text blob just for debugging purposes; grepping over the blobs would be too expensive, and associating them with a host would require altering the db schema (host table), not to mention the effort needed to code this into the various BOINC server components (and debug it...) - no way.

I think that when the servers are ready to handle a greater load and we publish faster official Apps, we will also raise the daily quota again (which we already did).

Until then I'd rather encourage you to attach to another BOINC project as a backup if you don't want to leave your computer cycles unused.

BM

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 9

It could be done if you allowed the akosf apps to be renamed to 4.38 (for instance) and did not use that version number yourself. But even then I think it would take an addition to the DB schema and a rewrite of the debug code?
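For illustration only, assuming a version number really were reserved for the akosf builds (4.38 is just the example from this post), the scheduler-side decision could be as simple as this sketch - not actual BOINC code:

# Hypothetical mapping from the app version a host reports to the workunit
# size it is offered. Treating "4.38" as the optimized version is an
# assumption taken from the example above.
OPTIMIZED_VERSIONS = {"4.38"}

def preferred_wu_size(reported_app_version: str) -> str:
    """Long workunits for hosts running the optimized app version, short otherwise."""
    return "long" if reported_app_version in OPTIMIZED_VERSIONS else "short"

if __name__ == "__main__":
    for version in ("4.37", "4.38"):
        print(version, "->", preferred_wu_size(version))   # 4.37 -> short, 4.38 -> long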

TauCeti
Joined: 1 Apr 05
Posts: 16
Credit: 1336558
RAC: 0

Message 27661 in response to message 27659

Quote:

Thanks for the hint.

The stderr output is kept as a text blob just for debugging purposes; grepping over the blobs would be too expensive, and associating them with a host would require altering the db schema (host table), not to mention the effort needed to code this into the various BOINC server components (and debug it...) - no way.
BM

Understood.

And how about an option 'WU size' ('let server decide', 'small', 'large') in the Einstein@Home preferences?

I am not familiar with the BOINC framework in general, so I have no idea whether that would be possible to implement.
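As far as I know, BOINC projects can define their own project-specific preferences as free-form XML, so a 'WU size' option could in principle live there. The fragment and element name below (<wu_size>) are pure assumptions for illustration, not an existing Einstein@Home preference:

import xml.etree.ElementTree as ET

# Hypothetical project-specific preference fragment; <wu_size> does not exist
# in the real project, it just illustrates the suggestion above.
PROJECT_PREFS_XML = """
<project_preferences>
    <wu_size>large</wu_size>  <!-- one of: server_decides, small, large -->
</project_preferences>
"""

def requested_wu_size(prefs_xml: str) -> str:
    """Read the hypothetical <wu_size> setting, defaulting to letting the server decide."""
    node = ET.fromstring(prefs_xml).find("wu_size")
    return node.text.strip() if node is not None and node.text else "server_decides"

if __name__ == "__main__":
    print(requested_wu_size(PROJECT_PREFS_XML))   # -> large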

Tau

Ziran
Joined: 26 Nov 04
Posts: 194
Credit: 635121
RAC: 1368

Another approach could be to look at the "% of time BOINC client is running" value of the host in the database. If BOINC is running, let's say, more than 80% of the day, then the chances are higher that that particular host is at risk of running out of work if it is given shorter results. It would only be necessary to check this every time a host needs to be assigned a new dataset.
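A toy version of that check, not real scheduler code - the 80% threshold comes from the suggestion above, while the field and function names are made up:

ON_TIME_THRESHOLD = 0.80   # fraction of the day the BOINC client is running

def dataset_for_host(on_time_fraction: float) -> str:
    """Hosts that crunch nearly around the clock are the ones likely to run dry
    on short results, so hand them the long ones when they need a new dataset."""
    return "long" if on_time_fraction > ON_TIME_THRESHOLD else "short"

if __name__ == "__main__":
    for frac in (0.30, 0.95):
        print(f"on-time {frac:.0%} -> {dataset_for_host(frac)} workunits")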

When you're really interested in a subject, there is no way to avoid it. You have to read the manual.

Pooh Bear 27
Joined: 20 Mar 05
Posts: 1376
Credit: 20312671
RAC: 0

As Ageless said, if there were a way to assign version numbers to the science software without forcing a download, you could easily set which version gets which units.

This could easily allow older, slower machines to get smaller units and newer machines with an optimized client to get the longer units. Of course, it is good for the science to let all machines crunch all types of results, so that there is no profiling of results.

I have not had the issue of running out of work, but I do more than one project (an awesome suggestion from the developer). Why not look into a side project that you can give a small amount of time to? Set Einstein to 10000 and the side project to 1; when Einstein runs out, the side project can run for a few hours, then it is back to Einstein after the new day starts.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4330
Credit: 251357158
RAC: 37209

First: there's almost nothing that's impossible; it's just a matter of how much effort one puts into it. However, in the real world everything is limited - like the manpower of this project.

The only thing that looks remotely sane to me is linking the workunit size to the amount of work that is requested, i.e. the more work the client requests at once, the more likely it is to be given larger workunits. This should in principle give the larger chunks of work to the faster Apps (in the long term).

However, this would require rewriting and replacing at least the scheduler in the middle of a run, which is rather risky (to say the least). Honestly, I wouldn't want to put the stability of the project at stake for that. I'm rather against fixing something that isn't broken.
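A rough sketch of that request-size idea, purely illustrative and not the project's actual scheduler logic; the available sizes and run-time estimates below are assumptions:

# Offer larger workunits to clients that ask for more work at once.
# (estimated crunch seconds, label) for the workunit types on offer - made up.
WU_TYPES = [(4 * 3600, "short"), (24 * 3600, "long")]

def choose_wu_type(work_req_seconds: float) -> str:
    """Prefer the largest workunit that still fits into the requested amount of work."""
    fitting = [label for secs, label in WU_TYPES if secs <= work_req_seconds]
    return fitting[-1] if fitting else WU_TYPES[0][1]

if __name__ == "__main__":
    for req_hours in (2, 36):
        print(f"{req_hours} h of work requested -> {choose_wu_type(req_hours * 3600)} workunits")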

BM

Pooh Bear 27
Joined: 20 Mar 05
Posts: 1376
Credit: 20312671
RAC: 0

Message 27665 in response to message 27664

Quote:

However, this would require rewriting and replacing at least the scheduler in the middle of a run, which is rather risky (to say the least). Honestly, I wouldn't want to put the stability of the project at stake for that. I'm rather against fixing something that isn't broken.

BM


How about looking at a rewrite for S5?
