GW search 02 WUs

lohphat
Joined: 20 Feb 05
Posts: 18
Credit: 62,508,216
RAC: 1,560
Topic 224976

When my client gets new WUs, the download seems tiny. I can see only a single 46-byte download, but that results in 39 WUs, each taking about 12 minutes.


Can someone elaborate on why the download is so small and how that turns into 39 jobs?  Is it analyzing the same data in different ways?


I can't find specific download sizes in the log, only that the download took place.

Keith Myers
Joined: 11 Feb 11
Posts: 1,359
Credit: 2,683,782,624
RAC: 5,895,592


You must be running Gamma-Ray tasks.  The downloads for Gravity-Wave are HUGE! Dozens of megabytes.

There is already an ephemeris data file called JPLEPH.405 that was downloaded when you first started crunching.  New task downloads after that are simply parameter sets for the task search.  Those parameter sets are very small.

If you hover over the current task that is running or look in its properties, you can see the parameter set details.


Raistmer*
Joined: 20 Feb 05
Posts: 198
Credit: 62,013,777
RAC: 120,385


Downloads became small because a bug in the locality scheduling config was probably fixed, and you already have all the needed data files. So new tasks just instruct the host to process the same data files a little differently.


Raistmer*
Joined: 20 Feb 05
Posts: 198
Credit: 62,013,777
RAC: 120,385


Keith Myers wrote:

You must be running Gamma-Ray tasks. 


Nope, he runs exactly what he said: GW tasks.


lohphat
Joined: 20 Feb 05
Posts: 18
Credit: 62,508,216
RAC: 1,560


Ah yes.  Found the JPLEPH.405, dated 31 Jul 2020, 9,102 bytes.

I didn't know that tasks could operate this way; I just assumed each task downloads specific data for itself each time.

Thanks!

Keith Myers
Joined: 11 Feb 11
Posts: 1,359
Credit: 2,683,782,624
RAC: 5,895,592


Quote:
Nope, he runs just as he said.

But he never stated in any of his posts which search he was running.  From the small 46-byte downloads, I just guessed that he was running Gamma-Ray.


Raistmer*
Joined: 20 Feb 05
Posts: 198
Credit: 62,013,777
RAC: 120,385


Keith Myers wrote:

Quote:
Nope, he runs just as he said.

But he never did state in any of his posts what project he was running.  I just guessed at the small 46 byte downloads that he was running Gamma-Ray.

Haha, you're right :) That was my first assumption too :)


Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,495
Credit: 66,053,473,020
RAC: 54,907,993


lohphat wrote:
Ah yes.  Found the JPLEPH.405 dated 31jul2020 9102 bytes.

That file is associated with the gamma-ray pulsar (GRP) search.  Recently, you have been running GW tasks only, so that file is not currently being used.  It's designed to be kept, since it will be needed if you ever get more GRP work.

lohphat wrote:
I didn't know that tasks could operate this way; I just assumed each task downloads specific data for itself each time.

Neither the GW nor the GRP search usually needs new data files for each extra task that gets supplied.

The GRP search uses a single data file whose name is "LATeahxxxxx.dat" where 'xxxxx' is a combination of numbers and letters.  Such a file may last for several days or more before the first task in a new series will cause the download of a single new data file with a further (and similar) lifetime.  You may receive quite a large number of tasks over the lifetime of a single data file.  That cycle just keeps repeating for as long as you request GRP work.

The GW search uses a very large number of data files for whatever 'frequency range' is allocated to your host on the initial request.  There are several thousand tasks associated with a given 'frequency range'.  You get a very heavy initial download of data files for the first task allocated to you.  After that, a technique known as "locality scheduling" attempts to keep allocating further tasks to you that can use most or all of these same data files that you already have.  You should only see further large data file downloads when the scheduler has run out of tasks belonging to the original frequency range.
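The "locality scheduling" idea described above can be sketched as a toy model: the scheduler prefers tasks whose data files a host already holds, so most work requests trigger no big downloads. All file and task names in this sketch are invented for illustration, not real Einstein@Home names.

```python
# Toy model of locality scheduling.  All names are invented.

def pick_task(host_files, task_queue):
    """Pick the queued task needing the fewest new file downloads."""
    best_task, best_missing = None, None
    for task in task_queue:
        missing = task["files"] - host_files   # data files not yet on host
        if best_missing is None or len(missing) < len(best_missing):
            best_task, best_missing = task, missing
        if not missing:
            break   # host already has everything this task needs
    return best_task, best_missing

# First request for a frequency range means a heavy initial download.
host_files = {"h1_0400.10", "l1_0400.10"}

tasks = [
    {"name": "GW_0400.10_0002", "files": {"h1_0400.10", "l1_0400.10"}},
    {"name": "GW_0512.35_0001", "files": {"h1_0512.35", "l1_0512.35"}},
]

task, missing = pick_task(host_files, tasks)
print(task["name"], sorted(missing))   # same-range task, no new downloads
```

Only when no queued task matches the host's files (the original frequency range is exhausted) does the fallback task force another large download, which matches the behaviour Gary describes.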

It's quite a different mechanism for each search.  However, the upshot of both is that you (hopefully) only see big data downloads fairly infrequently and certainly not on each new task.  To get new work, your client makes a 'scheduler request'.  The scheduler sends a response (scheduler reply) which contains sets of parameters only, one set for each new task allocated.  Your client inserts these parameters into the state file (client_state.xml) which is why (usually) you see no separate large data file downloads with each new task.
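For illustration only, a scheduler-reply entry of this general shape is what ends up in client_state.xml. Every name, tag value, and parameter below is invented to show the idea, not copied from a real Einstein@Home workunit:

```xml
<!-- Hypothetical sketch: all names and values are invented. -->
<workunit>
    <name>h1_0400.10__O3AS_0400.10Hz_0001</name>
    <app_name>einstein_gw</app_name>
    <!-- the "parameter set": just a command line, a few dozen bytes -->
    <command_line>--freqStart 400.10 --freqBand 0.10</command_line>
    <!-- references to large data files the host already downloaded -->
    <file_ref>
        <file_name>h1_0400.10</file_name>
    </file_ref>
</workunit>
```

Because the reply carries only parameters plus references to files already on disk, a "new task" can arrive in a few dozen bytes.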

Cheers,
Gary.
