S5GCE, was: Beyond S5R6

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9562235
RAC: 0

Perhaps it would be good for

Message 96903 in response to message 96902

Perhaps it would be good for a number of hosts to be specially designated for resends - maybe as a host-specific option, if there is support for that. Personally I have no problem keeping a large number of Einstein@Home specific files on my PC, nor do I have a bandwidth limit to worry about. Then again, I imagine it only becomes a problem near the end of a run, and maybe having a few tasks bounce around for a while isn't that much of a problem for the analysis.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109411307823
RAC: 34916686

RE: ... hosts to be

Message 96904 in response to message 96903

Quote:
... hosts to be specially designated for resends ...


That would become unnecessary if the problem could be prevented from arising in the first place.

Imagine your host and a number of others were working on a particular frequency band, say xxx.00, the task sequence numbers had dropped from say _500 to _400, and the scheduler had temporarily run out of new tasks. Hosts asking for new tasks would perhaps be given some for the xxx.05 frequency, along with 4 new large data files (something like xxx.25 or xxx.30, or thereabouts), and told to delete the xxx.00 files.

So there will be 2 copies of 100 tasks 'out there', with a certain proportion predestined to fail for a variety of reasons (maybe as much as 10-20% or more), and none of the hosts are able to get them when they come up for resend. The shame is that the files marked for deletion are still on these hosts and don't get removed until all dependent tasks are actually returned. This could take several days, depending on cache size, for each host.

In addition to resends becoming available over time, there will also be further new tasks (say from _400 to _300) that become available to the scheduler. In the light of this, why not allow any marked-for-deletion data files to be used during further work fetch requests, with the delete flag being cancelled if there is a suitable task? It seems to me this would allow resends to be dealt with much more efficiently.

I have a number of hosts that have been working on a particular set of contiguous frequencies, where I keep simulating (by manual intervention) the effect of the delete flag being cancelled. These hosts are continuing to draw a mix of new and resend tasks without any obvious problems. They do not delete any data files and do not need to draw any new ones. So far, I've seen task seq#s drop from around _880 to around _700. The scheduler keeps claiming to have no more tasks but then keeps getting more as resends arrive and as new tasks come in from the workunit generator. I imagine I'll be able to keep this going all the way down to seq# _0.

Cheers,
Gary.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4267
Credit: 244933956
RAC: 16243

This is a pretty lengthy post

Message 96905 in response to message 96901

This is a pretty lengthy post and I'm not sure I could follow it all. But some remarks:

There are 'resent' tasks and there are 'unsent' (or to-be-sent) tasks. A task is re-sent to a client if the scheduler knows that it was assigned to that client but was never reported by the client (neither as finished nor as still in progress), e.g. after a project reset. This is (presently) independent of work fetch or deadlines. For tasks that error out, time out or fail validation, another 'unsent' task is created for the same workunit. This will never be sent to a client of a user that already had a task of this workunit.

The (locality) scheduler never 'sends out' tasks in the sense of 'pushing'. It just grabs from the pool of 'unsent' tasks whatever best matches the client's request. If there are 'unsent' tasks that match the files the client has, it assigns these to the client. If not, it asks the workunit generator to make more work for these files. The workunit generator will generate new workunits if possible, or it will respond that no more work can be made for this set of files. In the latter case the scheduler will try to pick, from the unsent results, those that minimize the download volume for the client. The time at which a given 'unsent' task is sent out is unpredictable, as it depends on the behavior of the clients, not of the scheduler. (There is a limit, though, on how long a task remains 'unsent': the oldest unsent tasks (what you see on the server status page as 'oldest unsent results') are sent to 'fast' hosts - I think 'fast' here refers to the turnaround time.)
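Schematically, the decision order is something like the sketch below. This is only an illustration, not the actual code (the real logic lives in the BOINC scheduler sources); the pool, the task names and the helper function are all invented.

#include <algorithm>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct Task {
    std::string name;
    std::string data_file;    // large data file the task depends on
    long download_bytes;      // extra download if the client lacks the file
};

// Invented stand-in for the pool of 'unsent' tasks in the database.
static std::vector<Task> unsent_pool = {
    {"task_A_2", "h1_1139.65", 0},
    {"task_B_0", "h1_1140.30", 15000000},
};

// Invented stand-in for the workunit generator; true if it could add work.
bool make_more_work_for(const std::string&) { return false; }

std::optional<Task> pick_task(const std::vector<std::string>& client_files) {
    // 1. Prefer an unsent task whose data file the client already has.
    for (const auto& t : unsent_pool)
        if (std::count(client_files.begin(), client_files.end(), t.data_file))
            return t;
    // 2. Otherwise ask the workunit generator to make more work for these
    //    files; in the real scheduler the new tasks would then match in
    //    step 1 on a later pass.
    for (const auto& f : client_files)
        if (make_more_work_for(f)) break;
    // 3. Failing that, send whatever minimizes the client's download volume.
    if (unsent_pool.empty()) return std::nullopt;
    return *std::min_element(unsent_pool.begin(), unsent_pool.end(),
        [](const Task& a, const Task& b) {
            return a.download_bytes < b.download_bytes;
        });
}

int main() {
    if (auto t = pick_task({"h1_1139.65"}))
        std::cout << "assign " << t->name << "\n";
}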

Ideally a delete request for a data file will only be sent if no more work can be made that involves this data file and all workunits that require it are finished (i.e. all tasks finished or timed out). The only exception should be that disk space on the client gets tight and space needs to be freed in order to get new tasks. Every other behavior points to a bug in the scheduler (which is not unlikely in the current code; that's why we're working on replacing it).
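As a rule this boils down to the fragment below (again purely illustrative; the structure and names are invented):

struct FileState {
    bool more_work_possible;    // the WUG could still make workunits for it
    int tasks_outstanding;      // tasks neither finished nor timed out
};

bool should_send_delete(const FileState& f, bool client_disk_tight) {
    if (client_disk_tight) return true;   // the one legitimate exception
    return !f.more_work_possible && f.tasks_outstanding == 0;
}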

BM

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109411307823
RAC: 34916686

RE: ... I'm not sure I

Message 96906 in response to message 96905

Quote:
... I'm not sure I could follow it all...

I'm sorry that what I was trying to say wasn't absolutely clear.

I understand fully the 'resend lost tasks' feature of the scheduler. I regard this as a bonus feature that is wonderful to have but which is entirely outside the discussion of my previous posts. When I talk about 'resends', I'm NOT talking about 'lost' tasks being resent. I'm talking about extra copies of 'primary' tasks that are sent to different hosts as a result of a 'failed' primary task. The 'failure' may have been due to any one of a number of different reasons, such as client errors, client aborts, deadline misses, failure to validate, etc.

Primary tasks always have the _0 or _1 suffix as part of the task name. Resends will always have a suffix > 1, e.g. _2, _3, etc. Lost tasks that are resent will never have a suffix > 1. So it is very easy to recognise a 'resend', but impossible to know whether any particular task is a 'lost task resent' unless you have access to the messages in BOINC Manager, or know the hostID and time so you can search for the transaction in the server logs.
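The rule is trivial to state in code - something like the following (the task names here are made up, but the suffix convention is the real one):

#include <iostream>
#include <string>

// A task name ends in _N: N <= 1 is a primary task (or a 'lost task'
// resend, which reuses the original name); N >= 2 is a resend proper.
bool is_resend(const std::string& task_name) {
    auto pos = task_name.rfind('_');
    if (pos == std::string::npos) return false;
    return std::stoi(task_name.substr(pos + 1)) > 1;
}

int main() {
    std::cout << is_resend("h1_1139.65__880_S5R4a_0") << "\n";  // 0: primary
    std::cout << is_resend("h1_1139.65__880_S5R4a_3") << "\n";  // 1: resend
}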

So the gist of my comments in previous messages was to highlight what needs to be done to make the handling of _2, _3, _4, etc, tasks (what I call resends) much more efficient. I fully understand the principle of how the WUG is 'instructed' to generate additional primary tasks when the scheduler 'runs out' of a particular frequency band. I've experimented with this and I've seen new tasks created within a few minutes of the scheduler being unable to fill a request for a particular frequency.

From observations I've made over the last couple of weeks, I'm guessing that there is now a tight restriction on how many new tasks the WUG can create in response to a request from the scheduler. I guess this is important in trying to keep the database within reasonable limits. It would appear that around 3-5 new tasks get created for a particular frequency each time the scheduler is unable to supply a task for that frequency.

Because I've spent a great deal of time studying exactly how locality scheduling is working at present, I feel there are important improvements that can be made with what would appear to be very little effort for a programmer. Certainly the BOINC client is involved and the changes will need to come from the BOINC Devs. Before I start a crusade over there, I really want to know that Einstein Devs understand my arguments and also consider that what I'm proposing is worthwhile. So I'm going to pick just one aspect where I believe significant improvement could be easily made.

The Problem

  * A fast host starts working on tasks of frequency 1139.65 and seq# __880
  * It acquires a couple of tasks only (__880, __879) and then gets shifted to a new frequency, 1139.70
  * Each single frequency 'shift' means 4 files deleted and 4 new files downloaded (~15MB)
  * The 4 deleted files aren't actually deleted - just marked for deletion in the state file
  * A host can easily do up to 10 (or more) shifts per day - several GB per month (see the arithmetic below)
  * This is entirely unnecessary, as there are 880 tasks for 1139.65 and the host only got two.
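To put numbers on that: 10 shifts a day at ~15MB per shift is ~150MB per day, or roughly 4.5GB over a month - for just one host.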

The Solution

  * Modify the BOINC client so that it is able to inform the scheduler that it actually has the 1139.65 files, even though they are marked for deletion.
  * Modify the scheduler so that it can remove the delete tag on the client if it is now able to supply further 1139.65 tasks.
  * Seeing as the request that caused the shift to 1139.70 would also have caused the scheduler to instruct the WUG to generate more 1139.65 tasks (e.g. __878, __877, etc.), the next work request from the host should get some 1139.65 tasks and the offending delete tag could be removed.
  * This sequence of 'marking' and 'un-marking' could easily continue for weeks or more without unnecessary frequency shifting - I have several hosts that now have essentially 0MB of downloads for June.
  * As a host continues working on a given frequency, it would also be able to pick up any 'resends' that become available from time to time.
  * I've simulated all this using manual intervention, so I'm convinced it works.
  * We just need a programmer to understand and implement the change (a sketch of the two modifications follows this list).
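To make the intent concrete, the two modifications amount to something like this. This is purely illustrative C++, not BOINC code; the structures and function names are all invented:

#include <set>
#include <string>
#include <vector>

struct ClientFile {
    std::string name;               // e.g. a 1139.65 data file
    bool sticky = true;
    bool marked_for_delete = false;
};

// Client side: report every sticky file in the scheduler request, even
// those currently marked for deletion.
std::vector<std::string> files_to_report(const std::vector<ClientFile>& files) {
    std::vector<std::string> out;
    for (const auto& f : files)
        if (f.sticky) out.push_back(f.name);
    return out;
}

// Scheduler side: if tasks can now be supplied for a reported file, tell
// the client to cancel the pending delete rather than honour it.
bool should_cancel_delete(const ClientFile& f,
                          const std::set<std::string>& supplyable_files) {
    return f.marked_for_delete && supplyable_files.count(f.name) > 0;
}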

Quote:
Ideally a delete request for a data file will only be sent if no more work can be made that involves this data file and all workunits that require it are finished (i.e. all tasks finished or timed out).


And therein lies the problem. As the example above shows, files are marked for deletion when just a couple of the 880 possible tasks have been allocated. The fix would appear to be quite simple, unless I'm totally off track here. If I am off track, I'd certainly appreciate it if someone would point it out.

Cheers,
Gary.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4267
Credit: 244933956
RAC: 16243

RE: Hi Gary! I'm still

Message 96907 in response to message 96906

Hi Gary!

I'm still thinking about this. A couple of remarks / suggestions on some of your points:

Quote:
Modify the BOINC client so that it is able to inform the scheduler that it actually has the 1139.65 files, even though they are marked for deletion.

I think the client already reports all files it has (i.e. all files where the file_info has a sticky tag), even if they have been marked for deletion locally. Isn't that right? You should be able to see the files reported in the scheduler_request of the client.

Quote:
Modify the scheduler so that it can remove the delete tag on the client if it is now able to supply further 1139.65 tasks.

If all goes well the client shouldn't get a delete request for a file for which work could be generated and sent to the client.

There are, however, two exceptions: 1. Disk space gets tight on the client and needs to be freed in order to get new work. 2. The scheduler thinks that no more work that involves this data file can be sent to the client. This part of the scheduler is one that might be outdated wrt. recent runs and might misbehave.

In general I still think what we see is more a problem of old scheduler code that doesn't work well with recent apps / runs. Therefore we're incrementally working on improving / replacing this part of the scheduler code. One major step we took in that direction in GC1 was to keep track of the data files in use in the DB. The scheduler doesn't yet use this information, though; that would be the next step.

BM

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109411307823
RAC: 34916686

RE: RE: Modify the BOINC

Message 96908 in response to message 96907

Quote:
Quote:
Modify the BOINC client so that it is able to inform the scheduler that it actually has the 1139.65 files, even though they are marked for deletion.

I think the client already reports all files it has (i.e. all files where the file_info has a sticky tag), even if they have been marked for deletion locally. Isn't that right? You should be able to see the files reported in the scheduler_request of the client.


No, that's not correct. I'm sorry to report that this is really the problem. Files that are marked for deletion are not listed in the sched_request, even though they still have the sticky tag as well, and they remain undeleted on the client machine. The client seems to simply 'forget' about such files in all future sched_requests. I get it to 'remember' again by manually removing the delete tag.

Quote:
Quote:
Modify the scheduler so that it can remove the delete tag on the client if it is now able to supply further 1139.65 tasks.

If all goes well the client shouldn't get a delete request for a file for which work could be generated and sent to the client.

There are, however, two exceptions: 1. Disk space gets tight on the client and needs to be freed in order to get new work. 2. The scheduler thinks that no more work that involves this data file can be sent to the client. This part of the scheduler is one that might be outdated wrt. recent runs and might misbehave.


In my case, (1.) doesn't apply, and (2.) shouldn't apply for quite a while yet because the 1139.65 to 1141.25 frequency bands I'm working on each still have many hundreds of tasks able to be generated by requests from the scheduler to the WUG.

I had assumed (but hadn't checked) that the delete tag was physically sent from the scheduler to the client. From examination of a number of request/reply pairs, the behaviour can be more accurately described as follows:-

  * The client sends the full list of all large data files on board, except for any that are marked for deletion
  * The scheduler tries to find compatible tasks but will shift the frequency band one (or more) steps if necessary
  * At the same time the scheduler will flag the WUG to produce more tasks if it couldn't immediately supply a particular frequency band
  * The sched_reply does NOT contain file_infos for those bands that it couldn't immediately supply
  * The client examines the differences between the file_infos that it sent and the file_infos that it received back and, from this, works out which files need to be marked for deletion
  * The client applies delete tags only to those files whose frequencies were lower than the frequencies given in the reply, and not to any that were higher - see the example (and the sketch after it).
Example of one particular Request/Reply sequence

  * The client had all data files for all frequencies between 1139.65 and 1141.25 on board
  * The 4 files for 1139.65 were the only ones marked for deletion prior to the request
  * The request contained file_infos for 1139.70 to 1141.25 inclusive
  * The scheduler sent back a task for 1139.80
  * The sched_reply contained file_infos for 1139.80 to 1140.30 inclusive
  * The messages tab in BOINC Manager announced the requested deletion of 8 files, 4 @ 1139.70 and 4 @ 1139.75
  * There was no deletion of any files from 1140.35 to 1141.25, even though none of these were mentioned in the sched_reply.
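The marking rule the client appears to apply could be written as follows. This is my inference from the observations above, not actual client code; the structure is invented:

#include <vector>

struct DataFile {
    double freq;                     // e.g. 1139.70
    bool marked_for_delete = false;
};

// Mark for deletion only the files below the lowest frequency echoed back
// in the sched_reply (1139.80 in the example above).
void apply_reply(std::vector<DataFile>& files, double lowest_in_reply) {
    for (auto& f : files)
        if (f.freq < lowest_in_reply)
            f.marked_for_delete = true;   // 1139.70 and 1139.75 here
    // files above the highest frequency in the reply (1140.35 to 1141.25
    // in the example) are simply left alone
}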

This particular host now has 12 files (1139.65 to 1139.75 inclusive) that are marked for deletion but which are also going to remain undeleted in the project directory for at least another 5 days (or more), until all tasks that depend on them have actually been completed and returned. In that time there will be many more tasks for these frequencies that get generated by the WUG and which could be sent to this host.

I will continue to remove these delete tags manually from time to time, but it appears that if the client were a little bit smarter it could easily solve the problem without any manual intervention. No scheduler mod is needed, but the client would need to do two things:-

1. Send file_infos for all files (including marked-for-deletion ones) in any request.
2. If a previously marked-for-deletion file comes back in the reply, get rid of the delete tag in the state file, since there are obviously more tasks now available (see the sketch below).
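In illustrative form (not real client code; the structures and names are invented):

#include <string>
#include <vector>

struct FileInfo {
    std::string name;
    bool marked_for_delete = false;
};

// 1. Include every file, even delete-marked ones, in the scheduler request.
std::vector<std::string> files_to_report(const std::vector<FileInfo>& files) {
    std::vector<std::string> names;
    for (const auto& f : files)
        names.push_back(f.name);        // deliberately no filtering
    return names;
}

// 2. If the reply mentions a file we had marked, cancel the pending delete:
//    the scheduler evidently has more tasks for it.
void handle_reply(std::vector<FileInfo>& files,
                  const std::vector<std::string>& names_in_reply) {
    for (auto& f : files)
        for (const auto& n : names_in_reply)
            if (f.name == n)
                f.marked_for_delete = false;
}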

I'd be interested in your further comments.

Cheers,
Gary.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4267
Credit: 244933956
RAC: 16243

RE: Please, keep us in

Message 96909 in response to message 96889

Quote:
Please keep us in touch as often as you can, so that we'll not feel the project is working independently of us. It would be great to publish a technical diary (or blog, like the Technical News in SETI) with daily or maybe weekly reports.

I set up the Technical News forum and moved the relevant posts from this thread there. Not sure about the update frequency, but this is surely a way to keep people informed about things that aren't worth a front-page news item.

BM
