GW follow-up run #3 (S6BucketFU3UB)

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,305
Credit: 249,043,424
RAC: 33,591
Topic 198220

... that had been announced here is out. Yesterday we issued the first 100 WUs with a deadline of 1 day for testing. Today the first of two charges (batches) will be issued, with a deadline of 3 days.

To minimize the DB load, four "atomic WUs" are "bundled" together into one WU. The runtime of a task is targeted at 6-8 h. The total number of WUs will be ~250k; the first charge contains the WUs up to 300 Hz, which is about 65k WUs.
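
A rough back-of-the-envelope reading of those numbers (illustrative only; the per-atomic-WU runtime below is an inference from the figures above, not a project statement):

# Illustrative arithmetic only -- not project code.
bundle_size = 4                      # "atomic WUs" bundled into one WU
total_bundled_wus = 250_000          # announced total for the run

# Without bundling there would be roughly four times as many database rows.
atomic_wus = bundle_size * total_bundled_wus   # ~1,000,000

# A 6-8 h bundled task implies roughly 1.5-2 h per atomic WU (assumption).
per_atomic_h = tuple(t / bundle_size for t in (6, 8))

print(atomic_wus)      # 1000000
print(per_atomic_h)    # (1.5, 2.0)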

BM

Anonymous

GW follow-up run #3 (S6BucketFU3UB)

I find that I am a recipient of two of these jobs running in "high priority". A LATeah* job was "elevated" to "waiting to run". This machine is an i7 with a GTX 770, running 3 concurrent E@H GPU jobs and 5 concurrent E@H CPU jobs.

Both jobs in BOINC Manager are showing estimated times of around 7 hours. Both have deadlines of Fri 11 Sep 2015, ~7:11 AM and 6:02 AM EDT (11:11 and 10:02 UTC).

chase1902
Joined: 13 Aug 11
Posts: 37
Credit: 1,264,094,642
RAC: 0

It seems to be prioritizing a

It seems to be prioritizing a bit too much, as it stops running one of the 3 concurrent GPU tasks. So I only have 2 GPU tasks running at the moment even though there are meant to be 3.
I can only conclude that, as I have an AMD GPU which normally uses 0.5 of a CPU for each task, it has decided to run a CPU job instead of a GPU job. I didn't know it could do that.

Stranger7777
Joined: 17 Mar 05
Posts: 436
Credit: 426,648,420
RAC: 66,632

Status page says that there

The status page says that there are more than 400 invalids while there are only 250 valids. Is that normal?
Also, you said that the first charge would consist of 65k WUs, but more than 130k have already been generated and the number is still growing. Does that mean that all the charges will be generated at once?

chase1902
Joined: 13 Aug 11
Posts: 37
Credit: 1,264,094,642
RAC: 0

2 tasks to a WU so 65k would

There are 2 tasks to a WU, so 65k WUs would be 130k tasks, plus the resends.
It does seem like quite a lot of errors/invalids, but I think those tend to be reported more quickly since they error out before completing the whole task.
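
To put that arithmetic in one place (a quick illustrative check; the initial replication of 2 is as described above):

# Illustrative check of the WU/task arithmetic in this thread.
wus_first_charge = 65_000       # Bernd's figure for the WUs up to 300 Hz
initial_replication = 2         # tasks sent out per WU before any resends
tasks_before_resends = wus_first_charge * initial_replication
print(tasks_before_resends)     # 130000 -- anything above this is resends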

Zalster
Joined: 26 Nov 13
Posts: 3,117
Credit: 4,050,672,230
RAC: 0

Already completed 4. 3

Already completed 4.

3 with run times of just over 4 hours and 1 with a run time of just over 5 hours.

Another 4 are currently running, with just over 1 hour until the first in the next series finishes.

Anonymous

[EDIT] seeing similar

[EDIT] seeing similar behavior as chase

I have noticed that on a PC dedicated to E@H, running an ATI GPU with a utilization factor of 0.25, the number of concurrent GPU jobs has been reduced to 2 (Parkes PMPS XT) while 6 concurrent FU3UB jobs run at "high priority". Is this behavior expected, i.e. a reduction in GPU concurrency?

[EDIT] This does not seem to be happening on an NVIDIA machine, i.e. no reduction in GPU concurrency.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,870
Credit: 116,081,148,902
RAC: 35,929,355

RE: ... Is this behavior

Quote:
... Is this behavior expected?


Yes, I've seen it before.

When running 4x on an ATI card, 2 cores are kept free to provide GPU support. If there are enough short-duration CPU tasks that more cores are needed for them, GPU tasks will be suspended to allow that to happen. If you lower your cache size right down (0.1 days or less), you may be able to get out of high-priority mode and things will return to normal. A smaller cache also means fewer CPU tasks are held locally, so you are less likely to have more potential high-priority tasks than available CPU cores.

You don't get this problem with NVIDIA because the default allocation is 0.2 CPUs per GPU task. Even if you were running 4x, that's not enough to reserve a full core, so high-priority mode can't gain any extra CPU cores by suspending GPU tasks.
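
A toy sketch of that core arithmetic (just an illustration on an assumed 8-core host; this is not the actual BOINC client scheduler code):

import math

# Toy model of the core budget described above -- not real BOINC client code.
def cores_reserved_for_gpu(gpu_tasks, cpu_per_gpu_task):
    # Whole CPU cores tied up supporting the running GPU tasks.
    return math.floor(gpu_tasks * cpu_per_gpu_task)

ncpus = 8                                        # assumed 8 logical CPUs

# ATI/AMD at 0.5 CPUs per GPU task, running 4x: 2 whole cores are reserved.
# If high-priority CPU tasks need more than the remaining 6 cores, the client
# can only get one back by suspending GPU tasks -- the behaviour seen above.
print(ncpus - cores_reserved_for_gpu(4, 0.5))    # 6 cores left for CPU tasks

# NVIDIA at 0.2 CPUs per GPU task: 4 x 0.2 = 0.8, so no whole core is
# reserved and suspending GPU tasks frees nothing.
print(ncpus - cores_reserved_for_gpu(4, 0.2))    # all 8 cores still available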

Cheers,
Gary.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2,143
Credit: 2,927,531,916
RAC: 761,684

Note that this behaviour is

Note that this behaviour is entirely managed by the local BOINC client on your computer, and displayed by the BOINC Manager. None of this scheduling is mandated by the Einstein project.

If you temporarily reduce the number of days of work cached, the BOINC client will not be under such time pressure to meet all task deadlines, and more even scheduling should resume.

Anonymous

Gary/Richard, Thanks for

Gary/Richard,

Thanks for the input. I have lowered the cache size as suggested and will monitor the GPU concurrent job count. Interesting info on the NVIDIA side, Gary.

Stranger7777
Joined: 17 Mar 05
Posts: 436
Credit: 426,648,420
RAC: 66,632

RE: ...but there are more

Quote:
...but there are more than 130k already generated and the number is still growing. Does it mean that all the charges will be generated at once?

There are already more than 131k tasks and the number continues to grow. Given that the total needed is 269976, I suppose there will be 269976 / 2 = 134988 tasks issued in the first part. Am I correct?
