When my client gets new WUs, the download seems tiny. I can see only a single 46-byte download, yet that results in 39 WUs, each taking about 12 minutes.
Can someone elaborate on why the download is so small and how that turns into 39 jobs? Is it analyzing the same data in different ways?
I can't find specific download sizes in the log, only that a download took place.
You must be running Gamma-Ray tasks. The downloads for Gravity-Wave are HUGE! Dozens of megabytes.
There is already an ephemeris data file called JPLEPH.405 that was downloaded when you first started crunching. New task downloads are then simply parameter sets for the search, and those parameter sets are very small.
If you hover over the currently running task, or look at its properties, you can see the parameter set details.
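To put toy numbers on it, here is a minimal Python sketch of the idea; apart from the JPLEPH.405 file name, everything in it is invented for illustration and is not the actual Einstein@Home code:

```python
# Sketch only: one big data file is downloaded once; each new task
# is just a tiny parameter set telling the app how to search it.
data_file = "JPLEPH.405"  # already on disk from the first run

# 39 hypothetical parameter sets, a few bytes each
param_sets = [{"task_id": i, "offset": i * 0.5} for i in range(39)]

def run_task(data_file, params):
    # The app re-reads the SAME data file but searches a different
    # slice of parameter space each time.
    return f"searched {data_file} at offset {params['offset']}"

for params in param_sets:
    print(run_task(data_file, params))
```

So 39 tasks can cost only a few dozen bytes of download, because the expensive part (the data file) is already on disk.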
Downloads became small because a bug in the locality scheduling config was probably fixed, and you already have all the needed data files. So new tasks just instruct the host to process the same data files slightly differently.
Keith Myers wrote: You must be running Gamma-Ray tasks.
Nope, he runs just as he said.
Ah yes. Found the JPLEPH.405, dated 31 Jul 2020, 9102 bytes.
I didn't know that tasks could operate this way; I just assumed each task downloaded its own specific data every time.
Thanks!
Quote: Nope, he runs just as he said.
But he never did state in any of his posts what project he was running. I just guessed from the small 46-byte downloads that he was running Gamma-Ray.
Haha, you're right :) That's what I supposed at first too :)
lohphat wrote: Ah yes. Found the JPLEPH.405, dated 31 Jul 2020, 9102 bytes.
That file is associated with the gamma-ray pulsar (GRP) search. Recently you have been running GW tasks only, so that file is not currently being used. It's designed to be kept, since it will be needed if you ever get more GRP work.
Neither the GW search nor the GRP search usually needs any new data files for each extra task that gets supplied.
The GRP search uses a single data file whose name is "LATeahxxxxx.dat" where 'xxxxx' is a combination of numbers and letters. Such a file may last for several days or more before the first task in a new series will cause the download of a single new data file with a further (and similar) lifetime. You may receive quite a large number of tasks over the lifetime of a single data file. That cycle just keeps repeating for as long as you request GRP work.
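As a rough sketch of that cycle (the file names and task counts below are invented; only the LATeah naming pattern comes from the post above):

```python
# Invented numbers: one LATeah-style data file serves many tasks,
# and a big download happens only when a new series starts.
series = [("LATeah1066L00.dat", 200), ("LATeah1066L01.dat", 180)]

for data_file, n_tasks in series:
    print(f"big download: {data_file} (once per series)")
    for _ in range(n_tasks):
        pass  # each task arrives as a tiny parameter set, not a new file
    print(f"{n_tasks} tasks crunched against {data_file}")
```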
The GW search uses a very large number of data files for whatever 'frequency range' is allocated to your host on the initial request. There are several thousand tasks associated with a given 'frequency range'. You get a very heavy initial download of data files for the first task allocated to you. After that, a technique known as "locality scheduling" attempts to keep allocating further tasks to you that can use most or all of these same data files that you already have. You should only see further large data file downloads when the scheduler has run out of tasks belonging to the original frequency range.
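A very simplified sketch of the locality scheduling idea (my own toy model, not the real BOINC scheduler code): prefer tasks whose data files the host already holds, and fall back to a new frequency range, with its heavy download, only when the current range is exhausted:

```python
# Toy model of locality scheduling; not the actual BOINC implementation.
def pick_task(host_files, task_queue):
    """Prefer a task whose data files the host already has on disk."""
    for task in task_queue:
        if task["files"] <= host_files:        # subset: no download needed
            return task, set()
    # Current range exhausted: take the next task and list what to fetch
    task = task_queue[0]
    return task, task["files"] - host_files

# Hypothetical file names for two frequency ranges
host_files = {"h1_0100.5.dat", "l1_0100.5.dat"}
queue = [
    {"name": "GW_0231.0_task1", "files": {"h1_0231.0.dat", "l1_0231.0.dat"}},
    {"name": "GW_0100.5_task42", "files": {"h1_0100.5.dat", "l1_0100.5.dat"}},
]

task, to_download = pick_task(host_files, queue)
print(task["name"], "-> new downloads:", to_download or "none")
```

Here the scheduler skips the first queued task and hands out GW_0100.5_task42 instead, because the host already holds that range's files.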
It's quite a different mechanism for each search. However, the upshot of both is that you (hopefully) only see big data downloads fairly infrequently and certainly not on each new task. To get new work, your client makes a 'scheduler request'. The scheduler sends a response (scheduler reply) which contains sets of parameters only, one set for each new task allocated. Your client inserts these parameters into the state file (client_state.xml) which is why (usually) you see no separate large data file downloads with each new task.
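And a matching sketch of the client side (the real logic lives in the C++ BOINC client and client_state.xml; this toy version just shows why the per-task download stays tiny):

```python
# Toy client: the scheduler reply carries only names + parameters;
# a data file is fetched only if it isn't already on disk.
local_files = {"h1_0100.5.dat"}          # hypothetical, already downloaded
tasks = []

scheduler_reply = [
    {"name": "task_1", "params": "--freq 100.5 --sky 0.1", "files": {"h1_0100.5.dat"}},
    {"name": "task_2", "params": "--freq 100.5 --sky 0.2", "files": {"h1_0100.5.dat"}},
]

for wu in scheduler_reply:
    for f in wu["files"] - local_files:  # usually empty, thanks to locality scheduling
        print("downloading", f)
        local_files.add(f)
    tasks.append(wu)                     # a few dozen bytes of parameters

print(len(tasks), "new tasks added, no large downloads needed")
```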
Cheers,
Gary.