I'm curious about my Linux computer

GWGeorge007
GWGeorge007
Joined: 8 Jan 18
Posts: 370
Credit: 587,317,415
RAC: 2,911,705
Topic 225657

As of today, I've been using my 3950X with Windows 10 and Linux Mint in a dual-boot mode, switching roughly every week.  I'm curious whether what I am doing is okay?

On my Linux Mint #12893636 computer I have 1237 total tasks, 105 in progress, 287 pending, 798 valid, with 22 invalid and 17 errors.  The errors are all timed out with no response.  The invalids are roughly 3 per day.

The thing that surprises me the most is the task duration correction factor of 0.042797 and the average turnaround time of 0.02 days.

Are these readings okay?  Do I need to make some adjustments to my cache sizes?  Leave them alone?

George

Keith Myers
Keith Myers
Joined: 11 Feb 11
Posts: 1,319
Credit: 2,373,171,963
RAC: 8,182,254

Well first off, your DCF is incorrect because for some reason you have never benchmarked the system.

Yours is still at the BOINC default for a new system.  Go to the Manager Tools menu and Run CPU benchmarks to get an accurate assessment of your system's speed.

Second, your no-response errors are flukes.  Einstein GRP tasks have a 14-day deadline, yet yours were marked late in less than a day.  We occasionally see this on all hosts; nobody has come up with a reason for it, though.

Don't worry about it.  The rest of your tasks, like your invalids, are normal; everybody sees that low level of 1-3% invalids, simply due to architecture differences between cards.

Your turnaround time is very low, so you must have a low cache level set.  That is fine if you are comfortable with that.  The project seems very capable of supplying tasks on request for everybody, without anyone needing to hold a huge cache as insurance against improbable project upsets.

Also, if you run multiple projects at the same time on a host, keeping a low cache level on all projects makes it easier for BOINC to keep true to your project resource shares, without the REC mechanism distorting how much work any project does relative to the others.

GWGeorge007
GWGeorge007
Joined: 8 Jan 18
Posts: 370
Credit: 587,317,415
RAC: 2,911,705

These are my current benchmark results.  The screenshot of my computer's results was taken about 20 minutes later.

Tue 06 Jul 2021 05:00:14 PM CDT |  | Running CPU benchmarks
Tue 06 Jul 2021 05:00:14 PM CDT |  | Suspending computation - CPU benchmarks in progress
Tue 06 Jul 2021 05:00:45 PM CDT |  | Benchmark results:
Tue 06 Jul 2021 05:00:45 PM CDT |  | Number of CPUs: 28
Tue 06 Jul 2021 05:00:45 PM CDT |  | 6666 floating point MIPS (Whetstone) per CPU
Tue 06 Jul 2021 05:00:45 PM CDT |  | 21036 integer MIPS (Dhrystone) per CPU
Tue 06 Jul 2021 05:00:46 PM CDT |  | Resuming computation

...[EDIT]...

I think they're a tad better, with the Task Duration Correction Factor moving up; at this moment it is 0.215056.

George

Keith Myers
Keith Myers
Joined: 11 Feb 11
Posts: 1,319
Credit: 2,373,171,963
RAC: 8,182,254

Well, remember that Einstein awards fixed credits, so the DCF has no influence on credit awarded here.  It would make a difference at other projects that use the older standard BOINC credit system, where DCF is still in the codebase.

But it will make a difference in task scheduling and scheduler request allotment because now the scheduler has a more accurate assessment of how fast you burn through tasks.
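To make the scheduling effect concrete, here is a minimal sketch of how a DCF-style correction works: the client multiplies the project's raw runtime estimate by the host's DCF, so a benchmarked, accurate DCF means the scheduler requests a realistic amount of work.  The function name and numbers below are illustrative (the 0.215056 figure is from the benchmark post above), not actual BOINC internals.

```python
# Sketch of the Duration Correction Factor idea: the client scales the
# project's raw runtime estimate by the host's DCF to predict how long
# a task will actually take on this machine. Illustrative only; not
# BOINC's real internal code.

def corrected_estimate(raw_estimate_secs: float, dcf: float) -> float:
    """Runtime estimate the client uses for scheduling decisions."""
    return raw_estimate_secs * dcf

# A task the server estimates at 2 hours, on a host whose DCF settled
# near the value George reported after running benchmarks:
raw = 2 * 3600.0
print(corrected_estimate(raw, 0.215056) / 60)  # roughly 25.8 minutes
```

A DCF well below 1.0, as here, simply means the host finishes tasks much faster than the project's raw estimate assumes.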

You may have to make an adjustment to your caching levels especially if you run both GW and GRP sub-projects on the same hosts.

GWGeorge007
GWGeorge007
Joined: 8 Jan 18
Posts: 370
Credit: 587,317,415
RAC: 2,911,705

No, I am only running GRP (GPU) tasks for Einstein and BHspin v2 (CPU) tasks for Universe on this machine as of now.  I've put Milkyway on hold and may just eliminate it altogether, because the credits don't seem to add up to much compared to the other two.  Plus, I have to give up at least half of one GPU to run a single Milkyway task, which leaves me with less for Einstein.

Oh well, as always, thanks Keith.

George

Tom M
Tom M
Joined: 2 Feb 06
Posts: 1,118
Credit: 1,927,392,833
RAC: 4,747,402

GWGeorge007 wrote:

As of today, I've been using my 3950X with Windows 10 and Linux Mint in a dual-boot mode, switching roughly every week.  I'm curious whether what I am doing is okay?

Congratulations on making the leap.  Are you running U@H on the cpu?

Tom M

Over the hill?  What hill?  I don't REMEMBER any hill...
A Proud member of the O.F.A. (I've forgotten what that stands for.... ;)

GWGeorge007
GWGeorge007
Joined: 8 Jan 18
Posts: 370
Credit: 587,317,415
RAC: 2,911,705

Tom M wrote:

GWGeorge007 wrote:

As of today, I've been using my 3950X with Windows 10 and Linux Mint in a dual-boot mode, switching roughly every week.  I'm curious whether what I am doing is okay?

Congratulations on making the leap.  Are you running U@H on the cpu?

Tom M

Thanks!  Yes I am.  Also doing E@H on the GPU.  We'll see how it goes.

George

petri33
petri33
Joined: 4 Mar 20
Posts: 68
Credit: 1,042,707,071
RAC: 5,854,608

It is now the second day that my computer only gets tasks from about midnight to 4 or 5 am.  Then the server says to wait for 69,000 seconds...

https://einsteinathome.org/fi/host/12836077/log

mikey
mikey
Joined: 22 Jan 05
Posts: 7,602
Credit: 616,596,622
RAC: 11,250

petri33 wrote:

It is now the second day that my computer only gets tasks from about midnight to 4 or 5 am.  Then the server says to wait for 69,000 seconds...

https://einsteinathome.org/fi/host/12836077/log

Two reasons: your GPU is too fast, and they ran out of tasks:

2021-07-11 05:04:44.1289 [PID=27617] [debug]   [HOST#12836077] MSG(high) No work is available for Gamma-ray pulsar binary search #1 on GPUs
2021-07-11 05:04:44.1289 [PID=27617] [debug]   [HOST#12836077] MSG(high) (reached daily quota of 1376 tasks)

Each PC is limited so that everyone gets some tasks instead of one person hogging them all; and right now they just aren't making enough tasks to keep up with demand.

Tom M
Tom M
Joined: 2 Feb 06
Posts: 1,118
Credit: 1,927,392,833
RAC: 4,747,402

petri33 wrote:

It is now the second day that my computer only gets tasks from about midnight to 4 or 5 am.  Then the server says to wait for 69,000 seconds...

https://einsteinathome.org/fi/host/12836077/log

One of the workarounds is to increase your (simulated) CPU core count using <ncpus></ncpus> in the cc_config.xml file, under <options>.

You probably want to at least double your core count.  One of the top performers has a 20-core CPU but regularly "reports" 128 cores.  https://einsteinathome.org/host/12784895
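For reference, a minimal cc_config.xml using this option looks like the following; BOINC reads it from its data directory, and `<ncpus>` overrides the auto-detected logical CPU count (the value 32 here is just an example):

```xml
<cc_config>
  <options>
    <!-- Report 32 logical CPUs to the scheduler regardless of hardware -->
    <ncpus>32</ncpus>
  </options>
</cc_config>
```

After editing the file, use Options &gt; Read config files in the Manager (or restart the client) for the change to take effect.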

While his systems are currently offline, the above system has reached 12 million RAC on occasion.

Tom M

Over the hill?  What hill?  I don't REMEMBER any hill...
A Proud member of the O.F.A. (I've forgotten what that stands for.... ;)

Ian&Steve C.
Ian&Steve C.
Joined: 19 Jan 20
Posts: 880
Credit: 5,453,775,271
RAC: 31,963,109

Petri, your system does about 2000 tasks per day. You’re constantly twiddling too much and breaking speed limits!

So, as Tom and Mikey have mentioned, you’re hitting the limits. Your current system specs will allow you about 1400 tasks per day.

From what others have mentioned, each CPU “core” allows you 32 tasks, and each GPU allows you 256 tasks:

12 x CPU = 384
4 x GPU = 1024
total = 1408

You should set your ncpus to at least 32 CPUs to get you up into the 2000-task range.
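The quota arithmetic above can be sketched as follows.  Note the per-device multipliers of 32 and 256 are the figures quoted in this thread, not official project constants:

```python
# Daily task quota as described in this thread: each reported CPU core
# grants 32 tasks/day and each GPU grants 256. These multipliers are
# the figures quoted by posters above, not documented project values.

CPU_TASKS_PER_DAY = 32
GPU_TASKS_PER_DAY = 256

def daily_quota(ncpus: int, ngpus: int) -> int:
    """Maximum tasks per day for a host reporting ncpus CPUs and ngpus GPUs."""
    return ncpus * CPU_TASKS_PER_DAY + ngpus * GPU_TASKS_PER_DAY

print(daily_quota(12, 4))   # 1408 -- Petri's current limit
print(daily_quota(32, 4))   # 2048 -- after setting <ncpus>32</ncpus>
```

With 32 reported cores and 4 GPUs the quota comfortably covers a 2000-task day.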
