Why is the server requesting "delete file"?

Tom95134
Joined: 21 Aug 07
Posts: 18
Credit: 2,498,832
RAC: 0
Topic 195393

I occasionally see messages in my message log saying that the server has requested that a file (a data file) be deleted. Is this just a clean-up of files I've already processed, where the server removes the leftovers from my system once credit is issued? Or is it a deletion of files still pending processing on my machine, because the results have already been reported by another system and confirmed by a second one?

Just curious because I don't see the same thing on SETI@home.

Thanks.
Tom

Gundolf Jahn
Joined: 1 Mar 05
Posts: 1,079
Credit: 341,280
RAC: 0

Why is the server requesting "delete file"?

Almost ;-)

Those are data files used for several (hopefully many) tasks of the same type (frequency range?). As soon as no more tasks of that type are available, the server requests that the files be deleted.

Actually, they aren't deleted before the last task using them has finished, so there can be some delay between the request to delete and the deletion itself.
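Roughly, the bookkeeping works like this (a minimal Python sketch with invented names, not the actual BOINC client code):

```python
# Rough sketch of deferred deletion: a server "delete file" request is only
# honoured once no unfinished task still needs the file.
# All names here are invented for illustration; this is not BOINC client code.

class DataFile:
    def __init__(self, name):
        self.name = name
        self.marked_for_deletion = False  # set when the server requests deletion

class Task:
    def __init__(self, data_file):
        self.data_file = data_file
        self.finished = False

def handle_delete_request(data_file):
    """Server says: no more tasks needing this file will be issued."""
    data_file.marked_for_deletion = True

def maybe_delete(data_file, tasks):
    """Actually remove the file only when the last task using it has finished."""
    still_needed = any(t.data_file is data_file and not t.finished for t in tasks)
    if data_file.marked_for_deletion and not still_needed:
        print(f"deleting {data_file.name}")
    else:
        print(f"keeping {data_file.name} for now")
```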

Regards,
Gundolf

Computers aren't everything in life. (Little joke)

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,581
Credit: 308,556,292
RAC: 208,513

RE: I occasionally see

Quote:

I occasionally see messages in my message log saying that the server has requested that a file (a data file) be deleted. Is this just a clean-up of files I've already processed, where the server removes the leftovers from my system once credit is issued? Or is it a deletion of files still pending processing on my machine, because the results have already been reported by another system and confirmed by a second one?

Just curious because I don't see the same thing on SETI@home.


E@H uses a BOINC feature, not used ( to date ) by other projects, called locality scheduling. The thrust of the concept is to reduce overall bandwidth usage by avoiding unnecessary downloads. This is contributor friendly, as bandwidth cost and availability vary widely worldwide, so we seek to be inclusive at the low end of capacity with this tactic. Scheduling is tuned on a per-machine basis, i.e. this is the 'locality' part of the phrase. Here's an apprentice baker's tour:

In essence you get given an unsliced loaf of bread ( the data ), then a sequence of instructions in the workunits to slice it up, and you return the slices as you go. Worldwide there are quite a number of distinct loaves about, a constantly varying pool as the project tasks are met. You'll be sharing workunits with other holders ( your wingmen ) of identical/cloned loaves. The wingman idea is to reduce the impact of instance errors by duplicating processing. A quorum is at least you and your wingman, but may be more for a given slice/task depending on how things go. [ We used to have minimum quorums of three and four when I first turned up; presumably experience has shown that fewer are OK. ] Now when your BOINC client reports in with finished work and requests new work, there's a bit of a conversation along the lines of 'what have you got for me in terms of the particular loaf I already have?'. This is the logical point at which the bandwidth saving is mainly enacted: the scheduler will look at what still needs to be done to that specific loaf, and attempt to issue further work for you using what you already have on hand.
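If code is clearer than bread, here's a rough Python sketch of that conversation from the scheduler's side. It's only my illustration of the idea; the function and variable names are invented, and the real E@H scheduler is of course far more involved:

```python
# Illustrative locality scheduling: prefer to assign a host work for data
# files it already holds, and only trigger a new download as a last resort.
# Invented names; not the real Einstein@Home scheduler code.

def pick_work(host_files, unsent_tasks_by_file):
    """host_files: names of data files the host reports having.
    unsent_tasks_by_file: {file_name: [task_name, ...]} still awaiting a host."""
    reply = {"tasks": [], "delete_files": [], "download_files": []}

    # First try to slice loaves the host already has.
    for f in host_files:
        pending = unsent_tasks_by_file.get(f, [])
        if pending:
            reply["tasks"].append(pending.pop(0))
        else:
            # Loaf fully sliced: ask the host to drop it.
            reply["delete_files"].append(f)

    # Only if nothing local is left do we pay for a fresh download.
    if not reply["tasks"]:
        for f, pending in unsent_tasks_by_file.items():
            if pending:
                reply["download_files"].append(f)
                reply["tasks"].append(pending.pop(0))
                break

    return reply

print(pick_work(["loaf_A"], {"loaf_A": [], "loaf_B": ["slice_B1", "slice_B2"]}))
# asks the host to delete loaf_A, download loaf_B, and work on slice_B1
```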

Eventually loaves get fully sliced and need replacing ..... as indicated, the BOINC client sorts this out by itself once told that a given loaf is no longer needed. Effectively the E@H scheduler is saying that it won't be expecting you to have that loaf any more, because in its view of the total project workflow that particular loaf has gone stale. :-)

Quite a clever system really, certainly a handy option in the BOINC framework, but not every project will need or use locality scheduling. Hats off to whoever thought that up and implemented it! :-)

Cheers, Mike.

( edit ) Strictly speaking, the language is this: a 'workunit' is a particular conceptual slice of bread from the one conceptual loaf within the project's data breadbin. However, as the loaves are cloned to many machines and then sliced on each machine, that implies there exist clones of the slices too. Such clones are called 'tasks'. The scheduler looks at the reported/completed tasks at hand, and since it's the scheduler's job to keep track of what was sent to whom, it can thus answer the 'what to do next with what?' question. I suppose, in a perfect world with machines matched for mojo etc., you'd only need to clone a loaf twice, with those matched machines being wingmen for the entire loaf.
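To put that terminology into a toy example (again just my own Python illustration with invented names, not BOINC's actual data structures):

```python
# Toy model of workunit vs. task: one workunit (a conceptual slice) is cloned
# into tasks, one per wingman, and validated once a quorum of results agree.
# Invented names for illustration only.

class Workunit:
    def __init__(self, name, quorum=2):
        self.name = name          # the conceptual slice of one loaf
        self.quorum = quorum
        self.tasks = []           # clones of the slice, one per host

    def replicate(self, hosts):
        for host in hosts[:self.quorum]:
            self.tasks.append({"host": host, "result": None})

    def validated(self):
        results = [t["result"] for t in self.tasks if t["result"] is not None]
        # enough results back, and they all agree
        return len(results) >= self.quorum and len(set(results)) == 1
```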

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0

IIRC Locality scheduling was

IIRC Locality scheduling was developed by the E@H team itself.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,581
Credit: 308,556,292
RAC: 208,513

RE: IIRC Locality

Quote:
IIRC Locality scheduling was developed by the E@H team itself.


Thanks, I didn't know that! Makes sense as to why they are so good at it then ... :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Phil
Joined: 24 Feb 05
Posts: 176
Credit: 1,817,881
RAC: 0

To put it another way, most

To put it another way, most projects send one or more data files to be worked on, which are automatically deleted afterward. E@H flags the files as sticky so they can be reused. When they are no longer required, the server issues a delete command.
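Roughly like this, in a made-up Python sketch (the names and structure are mine, not BOINC's real client code):

```python
# Contrast between the usual lifecycle and a sticky file.
# Hypothetical sketch only; not actual BOINC client code.

local_files = {"data_A", "data_B"}   # data files currently on disk

def after_task_finished(file_name, sticky):
    if not sticky:
        local_files.discard(file_name)   # most projects: input deleted right away
    # a sticky file stays on disk so later tasks can reuse it

def on_scheduler_reply(delete_requests):
    # the "delete file" messages Tom is seeing correspond to this step
    for file_name in delete_requests:
        local_files.discard(file_name)
```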

tullio
Joined: 22 Jan 05
Posts: 2,118
Credit: 61,407,735
RAC: 0

On my 6 active BOINC

On my 6 active BOINC projects, Einstein takes the most disk space (482 MB), followed by climateprediction.net (313 MB) and QMC (162 MB). The other 3 take between 4 and 7 MB each.
Tullio
