Fetching too many tasks

Neal Burns
Joined: 19 Feb 22
Posts: 25
Credit: 213291419
RAC: 188
Topic 227446

Hi,

BOINC keeps downloading E@H tasks until it has almost 1000. I only want a few hours' worth at most. When I delete them, they just come back.

Thanks in advance for your help,

Neal


Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47219792642
RAC: 65379002

are you using a <project_max_concurrent> line in your app_config file?

_________________________________________________________________________

Neal Burns
Joined: 19 Feb 22
Posts: 25
Credit: 213291419
RAC: 188

This is what's in my app_config:


<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
         <gpu_usage>.75</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>
</app_config>


Keith Myers
Joined: 11 Feb 11
Posts: 4970
Credit: 18772317099
RAC: 7209726

Neal Burns wrote:

This is what's in my app_config:

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
         <gpu_usage>.75</gpu_usage>
         <cpu_usage>1</cpu_usage>
      </gpu_versions>
   </app>
   <project_max_concurrent>2</project_max_concurrent>   <!-- add this here -->
</app_config>
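(Note that the running client won't pick up an edited app_config.xml until it re-reads the config files. In BOINC Manager that is Options -> Read config files; from a terminal, something like

boinccmd --read_cc_config

should do it on recent clients, which as far as I know also re-read app_config.xml on that call. Restarting the client works regardless.)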


Neal Burns
Joined: 19 Feb 22
Posts: 25
Credit: 213291419
RAC: 188

I think that solved it. Thanks.


Neal Burns
Joined: 19 Feb 22
Posts: 25
Credit: 213291419
RAC: 188

That actually didn't solve it. It's back up to 1002 tasks.
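(If you want to double-check the count from a terminal, something along these lines should work, assuming boinccmd is installed alongside the client:

boinccmd --get_tasks | grep -c '^ *name:'

Each queued task prints one "name:" line in that listing, so the count should roughly match what the Manager's Tasks tab shows.)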

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3965
Credit: 47219792642
RAC: 65379002

i asked about project max concurrent because I was going to tell you not to use it, lol, not that you should be using it. many people have had issues with that parameter in the past due to a bug in how BOINC is/was handling it, so I was only curious whether it might have been a contributing factor.

I'm not sure if there's a way to get a good middle ground of a few hours of work without a custom BOINC client. but if you are not running other GPU projects at the same time, you can try setting Einstein to a '0' resource share. this will limit you to 1 task per GPU at any given time, basically keeping no cache at all. I'm not sure if too few tasks will be more preferable for you than too many tasks.
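(for reference, resource share isn't an app_config.xml setting. you set it on the project website in your Einstein@Home project preferences, and the client mirrors it into the project's account file. a rough sketch of the relevant element, with names from a typical install:

<account>
    <master_url>https://einsteinathome.org/</master_url>
    <resource_share>0</resource_share>
</account>

change it on the website though; the local copy gets overwritten at the next scheduler contact, so hand edits there don't stick.)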

_________________________________________________________________________

Keith Myers
Joined: 11 Feb 11
Posts: 4970
Credit: 18772317099
RAC: 7209726

I was just going to comment that it works on some systems and not on others, owing to a bug in the BOINC code; the attempted fix just made it flakier than it was before.

All you can do is try it and see if it works, or remove it and try other solutions.


Neal Burns
Joined: 19 Feb 22
Posts: 25
Credit: 213291419
RAC: 188

Keith Myers wrote:

I was just going to comment that it works on some systems and not on others, owing to a bug in the BOINC code; the attempted fix just made it flakier than it was before.

All you can do is try it and see if it works, or remove it and try other solutions.

Are you referring to making the resource share zero?


Keith Myers
Joined: 11 Feb 11
Posts: 4970
Credit: 18772317099
RAC: 7209726

Neal Burns wrote:

Keith Myers wrote:

I was just going to comment that it works on some systems and not on others, owing to a bug in the BOINC code; the attempted fix just made it flakier than it was before.

All you can do is try it and see if it works, or remove it and try other solutions.

Are you referring to making the resource share zero?

No. Using a <project_max_concurrent> statement was what I was referring to.


Harri Liljeroos
Joined: 10 Dec 05
Posts: 4366
Credit: 3219920421
RAC: 2037124

I was under the impression that the latest version, 7.16.20, was supposed to have the fix for this, but I have not tested that version myself. (The wrong version is shown on the release notes page at boinc.berkeley.edu.)

Changes in 7.20.0

  • Manager: show appropriate Welcome Page on first run.
  • Client: pass process priority to wrapper
  • Client: disable GET feature of GUI RPC
  • Manager: add ctrl-A shortcut to go to advanced view
  • Client: allow empty GUI RPC password but show warning
  • Client (linux): Ignore tty(S|ACM) devices in TTY idle time calculation
  • Client: display IPV6 addresses correctly
  • Client: don't tell Manager that graphics app exists if it's still downloading
  • Manager: fix alt-space crash
  • Manager: fix RTL languages in disk view
  • Client: add reset_host_info() GUI RPC
  • Client: put CDATA around link field of notices
  • Client: fix problems with set_app_config() RPC
  • Client: fix overly aggressive project-wide file transfer backoff policy
  • Client: fix work-fetch logic when max concurrent limits are used
  • Manager: add "Suspend when no mouse/keyboard input in last XX minutes" to prefs dialog
  • Manager: correctly handle large numbers in prefs
  • Manager (Win): Make Manager DPI unaware to let wxWidgets and Windows scale GUI elements properly
  • Manager: fix failure to connect to client with non-English language
  • Client (Win): fix detection of Windows product
  • Client: Fix bug in new version check
  • Mac: fix screensaver preferences dialog under MacOS 12 Monterey
  • Mac: ensure curl does not depend on unavailable libraries
  • Mac: use newer libraries: c-ares-1.17.2, curl-7.79.1, freetype-2.11.0, openssl-3.0.0, wxWidgets-3.1.5
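The item relevant to this thread is "Client: fix work-fetch logic when max concurrent limits are used". To see which version a host is actually running, the event log shows it in its first few lines, or from a terminal something like

boinccmd --client_version

should print it, assuming boinccmd is on the path.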
