... that had been announced here is out. Yesterday we issued the first 100 WUs with a deadline of 1d for testing. Today the first batch (of two) will be issued, with a deadline of 3d.
To minimize the DB load, four "atomic WUs" are "bundled" together into one WU. The runtime of a task is targeted at 6-8 h. The total number of WUs will be ~250k; the first batch contains the WUs up to 300 Hz, which is about 65k WUs.
BM
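As a back-of-the-envelope check of the announcement above, here is a small Python sketch; the 7 h figure is just the midpoint of the stated 6-8 h target, and the other numbers are taken directly from the post.

# Back-of-the-envelope figures from the announcement (sketch only).
atomic_wus_per_wu = 4        # four "atomic WUs" bundled into one WU
total_wus = 250_000          # ~250k WUs for the whole run
first_batch_wus = 65_000     # WUs up to 300 Hz in the first batch
hours_per_task = 7           # midpoint of the 6-8 h runtime target (assumption)

print(total_wus * atomic_wus_per_wu)      # ~1,000,000 atomic WUs overall
print(first_batch_wus / total_wus)        # 0.26 -> roughly a quarter of the run
print(first_batch_wus * hours_per_task)   # 455,000 task-hours for one copy of the first batch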
I find that I am a recipient of two of these jobs running in "high priority". A LATeah* job was "elevated" to "waiting to run". This machine is an i7 with a GTX 770, running 3 concurrent E@H GPU jobs and 5 concurrent E@H CPU jobs.
Both jobs in BOINC Manager are showing estimated times of around 7 hours. Both have deadlines of Fri 11 Sep 2015, at ~7:11 and 6:02 AM EDT (11:11 and 10:02 UTC).
It seems to be prioritizing a bit too much, as it stops running one of the 3 concurrent GPU tasks. So I only have 2 GPU tasks running at the moment even though there are meant to be 3.
I can only conclude that, as I have an AMD GPU which normally uses 0.5 of a CPU for each task, it has decided to do a CPU job instead of a GPU job. I didn't know it could do that.
The status page says that there are more than 400 invalids while there are only 250 valids. Is that normal?
And you said that the first batch would consist of 65k WUs, but more than 130k have already been generated and the number is still growing. Does that mean all the batches will be generated at once?
There are 2 tasks to a WU, so 65k WUs would be 130k tasks, plus the resends.
It does seem like quite a lot of errors/invalids, but I think they tend to report more quickly because they error out before completing the whole task.
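A quick sanity check of that arithmetic in Python; the resend fraction is an illustrative guess, not a project figure.

# Tasks expected for the first batch (sketch only).
wus_first_batch = 65_000     # ~65k WUs up to 300 Hz (from the announcement)
replication = 2              # 2 tasks per WU, as noted above
resend_rate = 0.02           # illustrative guess for errored/invalid resends

base_tasks = wus_first_batch * replication
with_resends = base_tasks * (1 + resend_rate)

print(base_tasks)            # 130000, in line with the ~130k seen on the status page
print(int(with_resends))     # 132600, so a count somewhat above 130k is expected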
Already completed 4: 3 with run times just over 4 hours and 1 with a run time of just over 5 hours.
Another 4 are currently running, with just over 1 hour until the first in the next series finishes.
[EDIT] Seeing similar behavior to chase's.
I have noticed that on a PC dedicated to E@H, running an ATI GPU with a utilization factor of 0.25, the number of concurrent GPU jobs (Parkes PMPS XT) has been reduced to 2 while 6 concurrent FU3UB jobs run at "high priority". Is this behavior expected, i.e., a reduction in GPU concurrency?
[EDIT] This does not seem to be happening on an NVIDIA machine, i.e., there is no reduction in GPU concurrency.
Yes, I've seen it before.
When running 4x on an ATI card, 2 cores are kept free to provide support. If the number of short-duration CPU tasks is such that more cores are needed for them, GPU tasks will be suspended to allow that to happen. If you lower your cache size right down (0.1 days or less), you may be able to get out of high-priority mode and things will return to normal. A smaller cache also allows fewer CPU tasks to be held locally, so you are less likely to have more potential high-priority tasks than available CPU cores.
You don't get this problem with NVIDIA because the default allocation is 0.2 CPUs per task. Even if you were running 4x, that's not enough to reserve a full core, so high-priority mode can't gain any extra CPU cores by suspending GPU tasks.
Cheers,
Gary.
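A minimal sketch of the core-reservation arithmetic described above, assuming (as a simplification of what the BOINC client actually does) that only whole free cores matter; the 0.5 and 0.2 CPU figures come from the posts above.

# Whole CPU cores set aside to support the running GPU tasks (sketch only).
import math

def cores_reserved(gpu_tasks, cpu_per_gpu_task):
    # Simplification: only complete cores count; a total below 1.0 reserves nothing.
    return math.floor(gpu_tasks * cpu_per_gpu_task)

print(cores_reserved(4, 0.5))   # ATI at 4x with 0.5 CPUs each -> 2 cores reserved
print(cores_reserved(4, 0.2))   # NVIDIA at 4x with the default 0.2 CPUs -> 0 cores
# With no reservable cores, suspending GPU tasks frees nothing for high-priority CPU work.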
Note that this behaviour is entirely managed by the local BOINC client on your computer, and displayed by the BOINC Manager. None of this scheduling is mandated by the Einstein project.
If you temporarily reduce the number of days of work cached, the BOINC client will be under less time pressure to meet all task deadlines, and more even scheduling should resume.
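A toy illustration of that deadline-pressure idea; the real client runs a far more detailed simulation, and the host figures below are only examples loosely based on this thread.

# Why a smaller work cache eases deadline pressure (sketch only).
hours_per_task = 7          # ~7 h estimates reported earlier in the thread
deadline_hours = 3 * 24     # 3-day deadline from the announcement
concurrent_cpu_tasks = 5    # example host running 5 CPU tasks at once

def deadline_pressure(cached_tasks):
    # True if the cached work cannot finish before the deadline at the current rate.
    wall_hours_needed = cached_tasks * hours_per_task / concurrent_cpu_tasks
    return wall_hours_needed > deadline_hours

print(deadline_pressure(60))   # True  -> client switches to "high priority" (earliest deadline first)
print(deadline_pressure(10))   # False -> normal scheduling, GPU concurrency unaffected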
Gary/Richard,
Thanks for the input. I have lowered the cache size as suggested and will monitor the concurrent GPU job count. Interesting info on the NVIDIA side, Gary.
There are already more than 131k tasks and the number continues to grow. Given that the total number needed is 269976, I suppose there will be 269976/2 = 134988 tasks issued in the first part. Am I correct?