Observations on FGRBP1 1.18 for Windows

mmonnin
Joined: 29 May 16
Posts: 292
Credit: 3,442,846,540
RAC: 3,880,084

Darrell_3 wrote:

Another means to achieve 3 CPU-intensive tasks on a graphics card is to tell the scheduler that each WU uses only 0.66 CPU and 0.33 GPU, even though they will actually use [waste in a spin loop] whatever CPU is available to each.  Then the scheduler will start 6 tasks using 6*0.66 = 4 CPUs and 6*0.33 = 2 graphics cards [computations rounded up].
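The rounding described in that quote can be sketched as follows (0.66 and 0.33 are just the quoted example values, not anything the project prescribes):

```python
import math

# Hypothetical per-task reservations from the quoted example.
cpu_per_task = 0.66  # CPUs the scheduler is told each WU needs
gpu_per_task = 0.33  # GPUs the scheduler is told each WU needs
tasks = 6

# The scheduler rounds the totals up when reserving resources.
cpus_reserved = math.ceil(tasks * cpu_per_task)  # ceil(3.96) -> 4
gpus_reserved = math.ceil(tasks * gpu_per_task)  # ceil(1.98) -> 2

print(f"{cpus_reserved} CPUs, {gpus_reserved} GPUs")  # 4 CPUs, 2 GPUs
```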

I always do this. I ran 8x CPU tasks and 6x Asteroids tasks at once on a 3770K by setting the CPU usage to 0.1 in Asteroids' app_config. The Asteroids exe used what it needed, the CPU tasks just ran a bit slower, and there was no wasted CPU time.
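For reference, that approach looks roughly like the sketch below. The app name `period_search` is a guess for Asteroids@home — check `client_state.xml` for the real one — and the values are illustrative:

```xml
<!-- app_config.xml, placed in the project's folder inside the BOINC data
     directory. Reload with Options -> Read config files, or restart. -->
<app_config>
  <app>
    <name>period_search</name>  <!-- hypothetical; use the name from client_state.xml -->
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>  <!-- fraction of a GPU reserved per task -->
      <cpu_usage>0.1</cpu_usage>  <!-- tell the scheduler each task needs only 0.1 CPU -->
    </gpu_versions>
  </app>
</app_config>
```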

These OpenCL E@H tasks are different, though. Any CPU usage by another exe on the core dedicated to the E@H exe drops the GPU utilization way down. The E@H exes have to be kept separate from everything else, with a spare core left open. The wait cycle/spin loop must be in constant use.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

I am out of 1.18 work units on one machine running two GTX 750 Ti's, and will shortly be out of work on another. Apparently it is due to a lack of beta tasks. https://einsteinathome.org/content/no-more-gpu-work-get?page=1

This is all very interesting, but I am not inclined to run 1.17 for long.


Holmis
Joined: 4 Jan 05
Posts: 1,118
Credit: 1,055,935,564
RAC: 0

If you're having problems getting beta GPU work, try setting your "additional" cache setting to a very low number. That will cause BOINC to ask for work more often, upping your chances of getting new work.

I have mine set to 0.01 days and have little to no problem keeping my host working on 1.18 tasks.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,872
Credit: 117,917,187,965
RAC: 34,570,008

Holmis wrote:
... try setting your "additional" cache setting to a very low number ...

This is good advice because otherwise BOINC waits until the 'low water mark' (the 1st setting) is reached before trying to fill up to the 'high water mark' (1st setting plus additional).  For most people, I don't really see the point of having this work cache 'range'.  Just set what you want in the first setting and leave the other as 0.01 days.  BOINC will then always be trying to maintain the full value.
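The two settings map to the work-buffer options in the client. As a sketch, in `global_prefs_override.xml` (written when you use local preferences) the values mentioned above would look like this:

```xml
<!-- global_prefs_override.xml in the BOINC data directory,
     or set the same values via Options -> Computing preferences. -->
<global_preferences>
  <!-- "Store at least X days of work": the cache level to maintain -->
  <work_buf_min_days>0.8</work_buf_min_days>
  <!-- "Store up to an additional X days": keep this tiny so the client
       tops the cache up as soon as it dips below the minimum -->
  <work_buf_additional_days>0.01</work_buf_additional_days>
</global_preferences>
```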

I'm using around 0.8 days and 0.01 days for my two settings and I'm not seeing any problems getting sufficient work (yet, at least).  Sure, the client tends to ask a lot and get rejected a lot but suddenly there will be some successful requests and the cache fills.  Yesterday, I installed GPUs in two old Q6600 hosts that were shut down last December after crunching CPU tasks continuously since early 2008.   In both cases, the machines took a while to get the initial GPU tasks (running x2) but today, they both have full caches.  To kick start each one, I used 'update' to force a request each minute but once they had the first few tasks, they were left to fend for themselves.

Cheers,
Gary.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

Holmis wrote:

If you're having problems getting beta GPU work, try setting your "additional" cache setting to a very low number. That will cause BOINC to ask for work more often, upping your chances of getting new work.

I have mine set to 0.01 days and have little to no problem keeping my host working on 1.18 tasks.

I leave mine at the default of 0.1 + 0.5 days, which is usually no problem.  But work has started flowing again, so I think it was just a temporary shortage on the server.  I expect a lot of people did not notice, because they keep a larger cache.

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3,534,096,961
RAC: 1,447,958

I don't know what happened, but suddenly all my hosts take more time to finish v1.18:
Fury X x1: 450 -> 520 s
RX 480 x1: 660 -> 750 s
HD7950 x2: 1230 -> 1400 s
GTX 1050 Ti x1: 1470 -> 1540 s

Has anybody else observed similar behavior? Was there some change in the amount of work per task?

-----

Gavin
Joined: 21 Sep 10
Posts: 191
Credit: 40,644,337,738
RAC: 0

Mumak wrote:

I don't know what happened, but suddenly all my hosts take more time to finish v1.18:
Fury X x1: 450 -> 520 s
RX 480 x1: 660 -> 750 s
HD7950 x2: 1230 -> 1400 s
GTX 1050 Ti x1: 1470 -> 1540 s

Has anybody else observed similar behavior? Was there some change in the amount of work per task?

I can confirm that over the last 4-5 days I have observed runtime increases somewhere in the region of 30-60 seconds per task, so you are not alone :-)
Perhaps we are now sifting through data from a different frequency that's slightly more demanding?

TimeLord04
Joined: 8 Sep 06
Posts: 1,442
Credit: 72,378,840
RAC: 0

Jim1348 wrote:
Holmis wrote:

If you're having problems getting beta GPU work, try setting your "additional" cache setting to a very low number. That will cause BOINC to ask for work more often, upping your chances of getting new work.

I have mine set to 0.01 days and have little to no problem keeping my host working on 1.18 tasks.

I leave mine at the default of 0.1 + 0.5 days, which is usually no problem.  But work has started flowing again, so I think it was just a temporary shortage on the server.  I expect a lot of people did not notice, because they keep a larger cache.

Day 3 for me watching my Win XP Pro x64 system with an EVGA GTX-760 card chewing through work and NOT getting new work into the queue... The work queue is now down to approx. 50% of what it was...

Just changed Preferences to 0.5 and 0.01 and still no joy... Hit Update three times; no new work came in. The system will be out of work tonight or early tomorrow morning. :-(

[EDIT:]

In BOINC Preferences  --->  Computing Preferences, my original settings were 0.01 and 5. This was yielding "No work is available..." messages in the Event Log.

With the lower settings of 0.5 and 0.01, I got "No work sent. Job cache full." BUT I ONLY have 29 units in the queue at this moment. (8:55 AM - PST.)

TL

TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join SETI Refugees

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257,957,147
RAC: 0

TimeLord04 wrote:
Just changed Preferences to 0.5 and 0.01 and still no joy... Hit Update three times; no new work came in. The system will be out of work tonight or early tomorrow morning.

I wish we were given some explanation.  Is it temporary (maybe due to server limitations), or more long-term because there is not enough work?  People tend to assume the worst, which is sometimes accurate.

TimeLord04
Joined: 8 Sep 06
Posts: 1,442
Credit: 72,378,840
RAC: 0

Jim1348 wrote:
TimeLord04 wrote:
Just changed Preferences to 0.5 and 0.01 and still no joy... Hit Update three times; no new work came in. The system will be out of work tonight or early tomorrow morning.

I wish we were given some explanation.  Is it temporary (maybe due to server limitations), or more long-term because there is not enough work?  People tend to assume the worst, which is sometimes accurate.

Well, just played some more with the settings... Now at 5 and 0.01, and the queue is SLOOOOWLY filling up, getting one or two units per pull... So the work IS there; I just have to keep playing with the settings on the Windows machine to get it...

My Mac, on the other hand, is STILL set at 0.01 and 5 (my original settings) and has NO trouble keeping the queue full of 1.17 units... (The Mac has TWO EVGA GTX 750 Ti SC cards...) So I don't get it... Why would the Windows platform be stifled from getting work??? :-O

TL

[EDIT:]

Now up to 44 Units in queue.  9:25 AM - PST

TimeLord04
Have TARDIS, will travel...
Come along K-9!
Join SETI Refugees
