As of today, I've been using my 3950X with Windows 10 and Linux Mint in dual-boot mode, switching approximately every week. I'm curious whether what I am doing is okay.
On my Linux Mint computer (host 12893636) I have 1237 total tasks: 105 in progress, 287 pending, 798 valid, with 22 invalid and 17 errors. The errors are all "timed out - no response". The invalids come in at roughly 3 per day.
The thing that surprises me the most is the task duration correction factor of 0.042797 and the average turnaround time of 0.02 days.
Are these readings okay? Do I need to make some adjustments to my cache sizes, or leave them alone?
Proud member of the Old Farts Association
Well, first off, your DCF is incorrect because for some reason you have never benchmarked the system. Yours is still at the BOINC default for a new system. Go to the Manager's Tools menu and select "Run CPU benchmarks" to get an accurate assessment of your system's speed.
Second, your no-response errors are flukes. Einstein GRP tasks have a 14-day deadline, yet these were marked late in less than a day. We occasionally see this on all hosts; nobody has come up with a reason for it, though.
Don't worry about it. The rest of your results, like your invalids, are normal: everybody has that low 1-3% level of invalids, simply due to architecture differences between cards.
Your turnaround time is very low so you must have a low cache level set. That is fine if you are comfortable with that. The project seems to be very capable of supplying tasks upon request for everybody without taking on a huge cache to work through because of improbable project upsets.
Also, if you run multiple projects at the same time on a host, keeping a low cache level on all projects makes it easier for BOINC to keep true to your project resource shares, without the REC mechanism distorting how much work any project does relative to the others.
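For readers unfamiliar with resource share: the client aims to divide compute time between projects in proportion to each project's share setting. A minimal illustrative sketch (not BOINC source code; the project names and share values are hypothetical examples):

```python
# Sketch of resource-share proportioning: each project gets a fraction of
# compute time equal to its share divided by the sum of all shares.
shares = {"Einstein@Home": 100, "Universe@Home": 100, "Milkyway@home": 50}
total = sum(shares.values())
fractions = {project: share / total for project, share in shares.items()}

for project, frac in fractions.items():
    print(f"{project}: {frac:.0%} of compute time")
```

With those example values, Einstein and Universe each get 40% and Milkyway gets 20%; a small cache means the client can track these targets without a long queue of already-downloaded work pulling it off balance.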
These are my current benchmark results. The screenshot of my computer's results was taken about 20 minutes after.
Tue 06 Jul 2021 05:00:14 PM CDT | | Running CPU benchmarks
Tue 06 Jul 2021 05:00:14 PM CDT | | Suspending computation - CPU benchmarks in progress
Tue 06 Jul 2021 05:00:45 PM CDT | | Benchmark results:
Tue 06 Jul 2021 05:00:45 PM CDT | | Number of CPUs: 28
Tue 06 Jul 2021 05:00:45 PM CDT | | 6666 floating point MIPS (Whetstone) per CPU
Tue 06 Jul 2021 05:00:45 PM CDT | | 21036 integer MIPS (Dhrystone) per CPU
Tue 06 Jul 2021 05:00:46 PM CDT | | Resuming computation
...[EDIT]...
I think they're a tad better, with the Task Duration Correction Factor moving up; at this moment it is 0.215056.
Well, remember that Einstein awards fixed credits, so DCF has no influence on credit awarded here. It would make a difference at other projects that use the older standard BOINC credit system, where DCF is still in the codebase.
But it will make a difference in task scheduling and scheduler request allotment because now the scheduler has a more accurate assessment of how fast you burn through tasks.
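To see why DCF matters for scheduling, here is a simplified sketch of how a client might turn a task's size estimate into a runtime estimate (the real BOINC logic has more terms; the function and parameter names here are illustrative, not actual source identifiers):

```python
# Simplified runtime estimate: task size (in floating-point operations)
# divided by benchmark speed, then scaled by the duration correction factor.
def estimated_runtime_secs(rsc_fpops_est: float,
                           whetstone_mips: float,
                           dcf: float) -> float:
    flops_per_sec = whetstone_mips * 1e6  # Whetstone MIPS -> ops/sec
    return rsc_fpops_est / flops_per_sec * dcf

# The same task looks roughly 5x longer after the DCF correction in this
# thread (0.042797 -> 0.215056), so the scheduler requests less work.
stale = estimated_runtime_secs(1e14, 6666, 0.042797)
fixed = estimated_runtime_secs(1e14, 6666, 0.215056)
```

A DCF that is far too low makes every task look quicker than it is, so the client can over-fetch work; correcting it tightens the scheduler's requests.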
You may have to make an adjustment to your caching levels especially if you run both GW and GRP sub-projects on the same hosts.
No, I am only running GRP (GPU) tasks for Einstein and BHspin v2 (CPU) tasks for Universe on this machine as of now. I've put Milkyway on hold and may just eliminate it altogether, because the credits don't seem to add up to much compared to the other two. Plus, I have to give up at least half of one GPU to run a single task, which leaves me with less for Einstein.
Oh well, as always, thanks Keith.
Congratulations on making the leap. Are you running U@H on the CPU?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor)
Thanks! Yes I am. Also doing E@H on the GPU. We'll see how it goes.
It is now the second day that my computer only gets tasks between about midnight and 4 or 5 am. Then the server says to wait 69,000 seconds (about 19 hours)...
https://einsteinathome.org/fi/host/12836077/log
Two reasons: your GPU is too fast, and they ran out of tasks.
Each PC is limited so that everyone gets some tasks instead of one person hogging them all, and they just aren't making enough tasks to keep up with demand right now.
One of the workarounds is to increase your (simulated) CPU core count using <ncpus>N</ncpus> in the cc_config.xml file, under options.
You probably want to at least double your core count. One of the top performers has a 20-core CPU but regularly "reports" 128 cores: https://einsteinathome.org/host/12784895
While his systems are currently offline, the above system has reached 12 million RAC on occasion.
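For reference, a minimal cc_config.xml using that option might look like the following (the value 32 is just an example; the file lives in the BOINC data directory and takes effect after a client restart or "Read config files"):

```xml
<cc_config>
  <options>
    <!-- Report 32 logical CPUs to the scheduler, regardless of hardware -->
    <ncpus>32</ncpus>
  </options>
</cc_config>
```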
Tom M
Petri, your system does about 2000 tasks per day. You're constantly twiddling too much and breaking speed limits!
So, as Tom and Mikey have mentioned, you're hitting the limits. Your current system specs will allow you about 1400 tasks per day.
From what others have mentioned, each reported CPU "core" allows you 32 tasks and each GPU allows you 256:
12 CPUs x 32 = 384
4 GPUs x 256 = 1024
total = 1408
You should set your ncpus to at least 32 CPUs to get you up into the 2000-task range.
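The arithmetic above can be checked with a few lines; note the per-day limits (32 tasks per reported CPU "core", 256 per GPU) are figures quoted in this thread, not official project documentation:

```python
# Daily task limit as described in this thread: 32 tasks per reported
# CPU "core" plus 256 per GPU (assumed figures from forum discussion).
def daily_task_limit(reported_cpus: int, gpus: int) -> int:
    return reported_cpus * 32 + gpus * 256

print(daily_task_limit(12, 4))  # 12 cores, 4 GPUs -> 1408
print(daily_task_limit(32, 4))  # with <ncpus>32</ncpus> -> 2048
```

So spoofing 32 cores lifts the cap from 1408 to 2048 tasks per day, comfortably above the ~2000 the system actually completes.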