amd gpu job cache full

Anonymous
Topic 219384

In the BOINC Manager log file I am getting "not requesting tasks: don't need (amd gpu job cache full)". I have plenty of O2AS V.107 jobs with one running, but I am no longer getting "gamma ray pulsar search #1" WUs, and I have none in progress.

Any ideas as to why I no longer have "gamma ray pulsar search #1" WUs coming down?

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7221424931
RAC: 978146

I'm guessing, as I can't see the most important numbers for your machine, but I think it's a good guess.

The first GW job you finished bumped up the Task duration correction factor for your machine by a lot--probably by over a factor of ten.

BOINC uses this number as part of the procedure for guessing how many hours of work are currently in your "job cache". So what was "enough" just before your first GW return is now "way too much", and no new work will be requested until you finish and report most of the work in your queue or, alternatively, you run a long stretch of only GRP work so the TDCF grinds back down to a value which sets the estimated queue size lower than your cache request.
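Roughly, the fetch decision works like this (a simplified sketch, not the actual BOINC client code; the numbers and names are invented for illustration):

    # Each task's estimated remaining time is scaled by the host's TDCF, so one
    # big jump in the TDCF inflates the estimate for the whole queue at once.
    def estimated_queue_seconds(raw_estimates, tdcf):
        return sum(est * tdcf for est in raw_estimates)

    cache_request = 1.0 * 86400     # "store 1 day of work", in seconds
    queue = [4000] * 20             # 20 tasks, about 1.1 hours each at TDCF = 1.0

    print(estimated_queue_seconds(queue, 1.0) < cache_request)    # True  -> client asks for more work
    print(estimated_queue_seconds(queue, 12.0) < cache_request)   # False -> "job cache full"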

I don't think you'll find these two types of work mix well on your machine, and I suggest you enable new work fetch for just one of the two at any given time.

If you really, really want to run both, I suggest you set your cache request numbers (both of them) to something really, really small (like 0.1 day) and be prepared to live with a quite erratic pattern of fetch behavior.

The numbers which would most readily help you check my guess:

1. Look at the listing for the computer in question in your account on the Einstein web site, and observe the value reported for "task duration correction factor".

2. Look at the task list for your machine in BOINC Manager on that machine, and observe the Remaining (estimated) time column. I expect that for the GW tasks you'll see the value is close to your actual experience, while for the GRP tasks (if any remain) it will be far more than you saw; the rough sketch below illustrates why.
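To put rough numbers on that Remaining estimate (an illustrative sketch only; the work-estimate field loosely mirrors BOINC's rsc_fpops_est, and the speeds are invented):

    # The remaining-time estimate is roughly the project's work estimate divided
    # by the device's measured speed, then multiplied by the TDCF.
    def remaining_estimate_hours(fpops_est, device_gflops, tdcf):
        return fpops_est / (device_gflops * 1e9) / 3600.0 * tdcf

    grp_fpops = 1.0e14    # hypothetical GRP task work estimate
    gpu_gflops = 50.0     # hypothetical effective GPU speed

    print(remaining_estimate_hours(grp_fpops, gpu_gflops, 1.0))    # ~0.56 h before the first GW return
    print(remaining_estimate_hours(grp_fpops, gpu_gflops, 12.0))   # ~6.7 h after the TDCF jumps

That same inflation is what makes the whole queue look "full" to the scheduler request in the earlier sketch.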

 

Anonymous

archae86 wrote:

I'm guessing, as I can't see the most important numbers for your machine, but I think it's a good guess.

The first GW job you finished bumped up the Task duration correction factor for your machine by a lot--probably by over a factor of ten.

BOINC uses this number as part of the procedure for guessing how many hours of work are currently in your "job cache". So what was "enough" just before your first GW return is now "way too much", and no new work will be requested until you finish and report most of the work in your queue or, alternatively, you run a long stretch of only GRP work so the TDCF grinds back down to a value which sets the estimated queue size lower than your cache request.

I don't think you'll find these two types of work mix well on your machine, and I suggest you enable new work fetch for just one of the two at any given time.

If you really, really want to run both, I suggest you set your cache request numbers (both of them) to something really, really small (like 0.1 day) and be prepared to live with a quite erratic pattern of fetch behavior.

The numbers which would most readily help you check my guess:

1. Look at the listing for the computer in question in your account on the Einstein web site, and observe the value reported for "task duration correction factor".

2. Look at the task list for your machine in BOINC Manager on that machine, and observe the Remaining (estimated) time column. I expect that for the GW tasks you'll see the value is close to your actual experience, while for the GRP tasks (if any remain) it will be far more than you saw.

 

You were spot on. These two applications do not play well together, so I unchecked the "all-sky" app on my projects page and the "gamma ray pulsar search #1" WUs downloaded. I would never have figured this out. Nice work.

mmonnin
Joined: 29 May 16
Posts: 291
Credit: 3394906540
RAC: 2863470

I run 2 clients on PCs where I have GPUs, as I often want separate queues for CPU vs GPU tasks. It helps a lot during competitions, offers more flexibility, and the GPUs can never become blocked by CPU tasks.
