
Bedrich Hajek
Joined: 9 Dec 05
Posts: 4
Credit: 796,889,763
RAC: 556,467
Topic 226681

12/31/2021 2:08:13 PM | Einstein@Home | Sending scheduler request: Requested by user.
12/31/2021 2:08:13 PM | Einstein@Home | Requesting new tasks for NVIDIA GPU
12/31/2021 2:08:16 PM | Einstein@Home | Scheduler request completed: got 0 new tasks
12/31/2021 2:08:16 PM | Einstein@Home | No work sent
12/31/2021 2:08:16 PM | Einstein@Home | No work is available for Binary Radio Pulsar Search (Arecibo)
12/31/2021 2:08:16 PM | Einstein@Home | No work is available for Binary Radio Pulsar Search (Arecibo, GPU)
12/31/2021 2:08:16 PM | Einstein@Home | No work is available for Gamma-ray pulsar search #5
12/31/2021 2:08:16 PM | Einstein@Home | No work is available for Gamma-ray pulsar binary search #1 on GPUs
12/31/2021 2:08:16 PM | Einstein@Home | (reached daily quota of 704 tasks)
12/31/2021 2:08:16 PM | Einstein@Home | Project has no jobs available

Why is there a limit of 704 tasks per day? My computer can crunch more than that. See below:


ID: 12095349 | BOINC client 7.16.20
Avg. credit: 1,695,617.85 | Total credit: 363,068,458
CPU: GenuineIntel Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz [Family 6 Model 63 Stepping 2] (12 processors)
GPU: 2 × NVIDIA GeForce RTX 2080 Ti (4095MB), driver 471.11
OS: Microsoft Windows 10 Professional x64 Edition (10.00.18363.00)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 1,596
Credit: 11,823,062,361
RAC: 25,945,557

Einstein allocates 32 tasks per CPU thread, and 256 tasks per GPU.


(32*12) + (2*256) = 896. But the scheduler also factors in your compute settings for the CPU (allowing fewer CPUs means you are allocated fewer tasks). To get to 704 tasks, I'm guessing you have your "use at most % of the CPUs" setting at 50%: that leaves 6 usable threads, and (32*6) + (2*256) = 704.
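The arithmetic above can be sketched as a small helper. This is only an illustration of the per-thread/per-GPU allocation figures quoted in this thread (32 per CPU thread, 256 per GPU); the function name and parameters are hypothetical, not part of BOINC.

```python
def daily_quota(cpu_threads, gpus, cpu_usage_pct=100,
                per_thread=32, per_gpu=256):
    """Estimate an Einstein@Home daily task quota.

    per_thread and per_gpu are the allocations cited in this
    thread; cpu_usage_pct mirrors the "use at most % of the
    CPUs" computing preference, which reduces usable threads.
    """
    usable_threads = int(cpu_threads * cpu_usage_pct / 100)
    return usable_threads * per_thread + gpus * per_gpu

# 12 threads, 2 GPUs, all CPUs allowed:
print(daily_quota(12, 2))      # 896
# Same host with "use at most 50% of the CPUs":
print(daily_quota(12, 2, 50))  # 704
```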


You can increase this to 100% to get up to 896. Or you can set the <ncpus> flag in your cc_config.xml file to make BOINC think you have more CPU cores than you do, then limit actual CPU use with a <max_concurrent> or <project_max_concurrent> flag in the appropriate app_config.xml file for the project whose CPU use you want to limit.
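A minimal sketch of that setup, using the flags named above (the value 24 is a hypothetical example; the real host has 12 threads):

```xml
<!-- cc_config.xml: report more logical CPUs than the host has.
     24 is a hypothetical example value, not a recommendation. -->
<cc_config>
  <options>
    <ncpus>24</ncpus>
  </options>
</cc_config>

<!-- app_config.xml in the data directory of the project whose
     CPU use you want to cap; 6 is again a hypothetical value. -->
<app_config>
  <project_max_concurrent>6</project_max_concurrent>
</app_config>
```

The effect is that the scheduler computes the quota from the inflated thread count, while the app_config cap keeps the host from actually running that many CPU tasks at once.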

_________________________________________________________________________

archae86
Joined: 6 Dec 05
Posts: 3,064
Credit: 5,778,403,357
RAC: 3,915,092

Just for completeness, in case someone reaches this thread by title or by search, I'll mention that your quota is temporarily reduced for errors.  However, this reduction is very rapidly restored by successful work returns.  As I saw no errors at all in a glance at the Original Poster's task list, this point has no bearing on his current situation.

One last point: "errors" here does not mean WUs returned successfully but judged to be invalid.  The two most common ways people accumulate enough of them for a meaningful quota reduction are aborting a large number of tasks, or suffering a host abnormality that suddenly runs tasks in rapid sequence, with all of them terminating very early.  The latter happened to me recently; it consumed my entire onboard queue and my entire daily quota, so even after I rebooted I could not fetch work again until the "day" had turned over to a new one.  In my case that turned out to be midnight one time zone east of Newfoundland.  I've never seen it be midnight in UTC, my own time zone, or Berkeley.  One of BOINC's little mysteries, that.
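The "reduced for errors, rapidly restored by successes" behaviour described above can be sketched roughly. This assumes the commonly described BOINC server policy of halving the host's daily quota on each errored or aborted result and doubling it on each good one, capped at the project limit; the exact server-side rule is an assumption here, not something stated in this thread.

```python
def update_quota(quota, result_ok, project_cap=896):
    """Rough sketch (assumed policy, not from this thread):
    halve the per-host daily quota on a bad result, double it
    on a good one, bounded by 1 and the project's cap."""
    if result_ok:
        return min(quota * 2, project_cap)
    return max(quota // 2, 1)

quota = 896
for _ in range(5):                 # five bad results in a row
    quota = update_quota(quota, False)
print(quota)                       # 28
quota = update_quota(quota, True)  # successes recover quickly
quota = update_quota(quota, True)
print(quota)                       # 112
```

The doubling on success is why a temporary reduction is "very rapidly retired" once the host starts returning good work again.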

Keith Myers
Joined: 11 Feb 11
Posts: 2,117
Credit: 5,305,162,946
RAC: 19,197,793

Regarding when a "new day" occurs, I would say it isn't a BOINC mystery but rather an Einstein one.

I think Einstein picks some quantum-state time zone somewhere between the three Einstein download servers, which occupy different geographical locations.


Eugene Stemple
Joined: 9 Feb 11
Posts: 39
Credit: 112,156,059
RAC: 110,679


<pre>
2022-01-26 03:38:09.6152   Request: [USER#xxxxx] [HOST#3949388] [IP xxx.xxx.xxx.251] client 7.16.16
2022-01-26 03:38:09.7278 [debug]   have_master:1 have_working: 1 have_db: 1
2022-01-26 03:38:09.7278 [debug]   using working prefs
2022-01-26 03:38:09.7278 [debug]   have db 1; dbmod 1642053060.000000; global mod 0.000000
2022-01-26 03:38:09.7285    [handle] [HOST#3949388] [RESULT#1220530520] [WU#601739458] got result (DB: server_state=4 outcome=0 client_state=0 validate_state=0 delete_state=0)
2022-01-26 03:38:09.7285    [handle] cpu time 812.948400 credit/sec 0.066431, claimed credit 54.005344
2022-01-26 03:38:09.7285    [handle] [RESULT#1220530520] [WU#601739458]: setting outcome SUCCESS
2022-01-26 03:38:09.7329    [send] effective_ncpus 8 max_jobs_on_host_cpu 999999 max_jobs_on_host 999999
2022-01-26 03:38:09.7329    [send] effective_ngpus 1 max_jobs_on_host_gpu 999999
2022-01-26 03:38:09.7329    [send] Not using matchmaker scheduling; Not using EDF sim
2022-01-26 03:38:09.7329    [send] CPU: req 38071.22 sec, 0.00 instances; est delay 0.00
2022-01-26 03:38:09.7330    [send] CUDA: req 577.11 sec, 0.00 instances; est delay 0.00
2022-01-26 03:38:09.7330    [send] work_req_seconds: 38071.22 secs
2022-01-26 03:38:09.7330    [send] available disk 24.46 GB, work_buf_min 43200
2022-01-26 03:38:09.7330    [send] active_frac 0.999989 on_frac 0.988523 DCF 0.692235
2022-01-26 03:38:09.7519    [mixed] sending locality work first (0.0418)
2022-01-26 03:38:09.7526    [send] stopping work search - daily quota exceeded (512>=512)
2022-01-26 03:38:09.7526    [send] stopping work search - daily quota exceeded (512>=512)
2022-01-26 03:38:09.7526    [mixed] sending non-locality work second
2022-01-26 03:38:09.7526    [send] stopping work search - daily quota exceeded (512>=512)
2022-01-26 03:38:09.7526    Daily result quota 512 exceeded for host 3949388
2022-01-26 03:38:09.7550 [debug]   [HOST#3949388] MSG(high) No work sent
2022-01-26 03:38:09.7550 [debug]   [HOST#3949388] MSG(high) (reached daily quota of 512 tasks)
2022-01-26 03:38:09.7550 [debug]   [HOST#3949388] MSG( low) Project has no jobs available
2022-01-26 03:38:09.7550    Sending reply to [HOST#3949388]: 0 results, delay req 74753.00
2022-01-26 03:38:09.7551    Scheduler ran 0.143 seconds</pre>


Yeah, I know, mixing CPU and GPU tasks causes problems like the one above, but I had been running a mix in a stable configuration for many months.  Then, upon upgrading the BOINC client from 7.14 to 7.16, I did a Project Reset.  I knew that would erase the project's knowledge of my CPU and GPU work-unit performance, but I was prepared for some instability until new numbers were derived.  The initial post-Reset download batch was entirely CPU work, and seriously overcommitted as well.  I aborted only as many tasks as necessary to let the CPU finish as many as possible before deadline.  That process finished yesterday.

Then I made a minor(?) change to app_config.xml, reducing project_max_concurrent from 8 to 5; I re-read the config files and all appeared to be normal, with work fetch resumed and GPU tasks also being downloaded.  I did not notice until this morning that work fetches were taking place every 60 seconds, with 3 or 4 CPU tasks downloaded each time, until the daily quota of 512 was reached during the night.  By the way, my cache parameters are 0.3 + 0.1 days.

I only know of two recovery strategies: (1) abort nearly all the buffered tasks, down to a manageable number; or (2) let BOINC run until tasks are automatically aborted for not being started before the deadline.  Wing-persons probably prefer #1, but from a project-scheduler point of view, would #2 lead more quickly to a balance between downloaded work and the 0.3-day buffer parameter?

In the host log (above) I am puzzled by "max_jobs_on_host_cpu 999999".  Is that just a placeholder until the project scheduler calculates a value for this host?

Hindsight is wonderful...   I should have enabled new tasks only occasionally, and briefly, to let the task run-time history statistics rebuild.
