My 3080 can do a Fermi task every 3 minutes, for ~480/day and my CPU's good for another ~20 GW tasks/day, which means if I don't do a few hours of gaming I'm hitting this every day now:
> 10/12/2021 10:03:51 PM | Einstein@Home | (reached daily quota of 448 tasks)
I have a vague recollection that years ago you could edit something in one of the config files so the client pretends to have a larger number of CPU cores to trick the server into giving you a higher quota. Is that possible with the current client, and if so what do I need to mess with?
Hi!
In cc_config.xml:
<ncpus>-1</ncpus>
Your host currently reports 8 cores. Try setting that ncpus value higher than 8: "-1" is the default (all cores, 8 in this case), but you could use, for example, "16".
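Putting that together, the whole file would look like this; cc_config.xml lives in the BOINC data directory, and the <options> wrapper is required (a minimal sketch of the edit described above):

```xml
<cc_config>
  <options>
    <!-- Pretend to have 16 logical CPUs; -1 means "use the real count" -->
    <ncpus>16</ncpus>
  </options>
</cc_config>
```

The client picks the change up on restart, or when you tell it to re-read config files from the Manager.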
Many of us who use this trick don't run CPU tasks. If you don't take special additional measures, you may find yourself running considerably more CPU tasks than you actually have CPUs, which is not likely to be what you want. Sadly I forget what the extra tricks for that case are. Possibly someone will chime in.
In his Einstein app_config.xml file, he will need to add:
<project_max_concurrent>n</project_max_concurrent>
where n is the number of tasks he wants to run. With an 8-thread CPU, I would personally set this to 7: that runs 6 CPU tasks + 1 GPU task, leaving one thread free.
Also make sure any other projects (if there are any) are set to NNT (no new tasks), otherwise BOINC will think you have extra cores “free” to use and will spin up tasks from other projects.
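For the concurrency cap, app_config.xml goes in the Einstein@Home project directory; a sketch of the file being described (the cap applies across all of that project's apps):

```xml
<app_config>
  <!-- Run at most 7 Einstein tasks at once, CPU and GPU combined -->
  <project_max_concurrent>7</project_max_concurrent>
</app_config>
```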
Going to take a guess. I have never set ncpus above what I really have, but I have set the number of GPUs way above my real count.
1. You can probably get more tasks above the daily quota; that worked for me once before on GPU tasks.
2. Possibly: depending on the GPU, an Einstein task needs either ~96% of a CPU core (NVidia) or only ~13% or less (AMD).
If the app_config.xml specifies nothing, BOINC reserves a full CPU when only 0.13 is needed. In that case it might be useful to pretend to have more CPUs than you really have.
The app_config should have had something like <avg_ncpus>0.125</avg_ncpus>, but I suspect this will not buy a lot of extra CPUs.
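Strictly, <avg_ncpus> belongs to the anonymous-platform app_info.xml; in app_config.xml the equivalent per-task CPU reservation is set under <gpu_versions>. A sketch, assuming the Fermi gamma-ray GPU app's internal name is hsgamma_FGRPB1G (an assumption; check the name your client actually reports):

```xml
<app_config>
  <app>
    <!-- App name is an assumption; verify it against your client's task list -->
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <!-- Reserve 1/8 of a CPU core per GPU task -->
      <cpu_usage>0.125</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```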
my 2c
Ian&Steve C. wrote:
I run 7 on my headless boxes, 6 on my main one to avoid bursts of lag when I have regular apps trying to use more than one core. It probably averages about a half core idle.
JStateson wrote:
I'm trying the fake GPU route now, but I'm not sure whether I want to stick with it. Mostly because setting max concurrent to 1 in my app_config is what stops the Fermi GPU app from crashing by trying to run on the fake card. That works for now, but if a new E@H app were deployed it would run without the limit and promptly crash its way through my task quota. Likewise for any of my backup projects: since I don't follow them closely, any app_config I write for them would likely be out of date by the time they're called on to run.
tutorial here:
https://www.youtube.com/watch?v=DR3ehBw9L_s
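For reference, the fake-GPU trick can also be done directly in cc_config.xml via the <coproc> option, which declares a coprocessor the client didn't detect itself; a sketch (I'm hedging on the exact optional fields, but <type> and <count> are the core of it):

```xml
<cc_config>
  <options>
    <coproc>
      <type>NVIDIA</type>
      <!-- Report two GPUs to the scheduler -->
      <count>2</count>
    </coproc>
  </options>
</cc_config>
```

The catch, as described above, is that the client may actually try to schedule tasks on the nonexistent device, hence the max-concurrent workaround.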
archae86 wrote:
What else is new? I've been dealing with the too-many-CPU-tasks issue for the last decade or so, thanks to the garbage fire that is a single DCF value per project.
DanNeely wrote:
I was not aware of that YouTube video. However, I have on occasion had to edit that coproc file due to strange behavior after some ATI driver updates.
Some time back I modded the BOINC client to simply report any number of video cards back to the project. It does not change the coproc file, so there are no fake cards to worry about. The source is here: https://github.com/JStateson/MSboinc
DanNeely wrote:
Run 2 clients per PC. 1 for GPUs, one for CPUs. I do this on all systems.
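A sketch of one way to do this on Linux: the client's --allow_multiple_clients flag permits a second instance, each with its own data directory and RPC port (the paths and port number here are examples, not a recommendation):

```shell
# Second client instance dedicated to GPU work; the default instance keeps the CPUs.
mkdir -p ~/boinc-gpu
boinc --dir ~/boinc-gpu --allow_multiple_clients --gui_rpc_port 31418 &

# Talk to the second instance with boinccmd:
boinccmd --host localhost:31418 --get_state
```

Each instance then maintains its own work cache and its own daily quota with the project.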
archae86 wrote:
Going the fake GPU route I've actually had the opposite problem: there's a separate client-side limit on the maximum number of tasks at a time, and my local backlog has filled entirely with GPU tasks, leaving my CPUs running a backup project for the last few days.
I think I'm going to try switching to fake CPUs instead and see if skewing the notional workload mix helps. If it doesn't, or if it skews the problem hard in the other direction, I guess I'll have to reduce my desired amount of local work. I don't really like doing that, but unless someone can point me to another setting to fiddle with, it doesn't look like I can keep a few days of GPU work on hand no matter what I do.