Another means to achieve 3 CPU-intensive tasks on a graphics card is to tell the scheduler that each WU uses only 0.66 CPU and 0.33 GPU, even though each will actually use [waste in a spin loop] whatever CPU is available to it. The scheduler will then start 6 tasks, using 6*0.66 = 4 CPUs and 6*0.33 = 2 graphics cards [computations rounded up].
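For anyone who wants to try this, it boils down to an app_config.xml in the project's folder under projects/ in the BOINC data directory. A minimal sketch only - the app name below is a placeholder, so substitute whatever GPU app name your own client_state.xml or Event Log reports:

<app_config>
   <app>
      <name>hsgamma_FGRPB1G</name>
      <gpu_versions>
         <gpu_usage>0.33</gpu_usage>
         <cpu_usage>0.66</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

gpu_usage 0.33 lets three tasks share one GPU, and cpu_usage 0.66 is what the scheduler budgets per task. Save the file and have the client re-read config files (or restart BOINC) for it to take effect.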
Darrell_3 wrote: Another means to achieve 3 CPU-intensive tasks on a graphics card ...
I always do this. I ran 8x CPU tasks and 6x Asteroids tasks at once on a 3770K by setting the CPU usage to 0.1 in Asteroids' app_config. The Asteroids exe used what it needed, the CPU tasks just ran a bit slower, and there was no wasted CPU time.
These OpenCL E@H tasks are different, though. Any CPU usage by another exe on the core dedicated to the E@H exe will drop GPU utilization way down. The E@H exes have to be kept separate from everything else, with a spare core left open, because the wait cycle/spin loop has to run constantly.
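As a rough sketch of that Asteroids setup (the app name here is an assumption - Asteroids@home's GPU app is commonly listed as period_search, so check your own client_state.xml - and gpu_usage is just an example value for one task per card):

<app_config>
   <app>
      <name>period_search</name>
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>
         <cpu_usage>0.1</cpu_usage>
      </gpu_versions>
   </app>
</app_config>

With cpu_usage at 0.1, six such tasks only book 0.6 of a core in BOINC's CPU budget, which is why the client was still willing to run the 8 CPU tasks alongside them.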
I am out of 1.18 work units on one machine running two GTX 750 Ti's, and will shortly be out of work on another. Apparently it is due to lack of betas. https://einsteinathome.org/content/no-more-gpu-work-get?page=1
This is all very interesting, but I am not inclined to run 1.17 for long.
If you're having problems getting beta GPU work, try setting your "additional" cache setting to a very low number, as that will cause BOINC to ask for work more often and thus increase your chance of getting new work.
I have mine set to 0.01 days and have little to no problem keeping my host working on 1.18 tasks.
Holmis wrote: ... try setting your "additional" cache setting to a very low number ...
This is good advice because otherwise BOINC waits until the 'low water mark' (the 1st setting) is reached before trying to fill up to the 'high water mark' (1st setting plus additional). For most people, I don't really see the point of having this work cache 'range'. Just set what you want in the first setting and leave the other as 0.01 days. BOINC will then always be trying to maintain the full value.
I'm using around 0.8 days and 0.01 days for my two settings and I'm not seeing any problems getting sufficient work (yet, at least). Sure, the client tends to ask a lot and get rejected a lot but suddenly there will be some successful requests and the cache fills. Yesterday, I installed GPUs in two old Q6600 hosts that were shut down last December after crunching CPU tasks continuously since early 2008. In both cases, the machines took a while to get the initial GPU tasks (running x2) but today, they both have full caches. To kick start each one, I used 'update' to force a request each minute but once they had the first few tasks, they were left to fend for themselves.
Cheers,
Gary.
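For reference, if you set these locally via the Manager's computing preferences rather than on the project website, the two values end up in global_prefs_override.xml in the BOINC data directory. A sketch with the numbers above (all other preference tags omitted):

<global_preferences>
   <work_buf_min_days>0.8</work_buf_min_days>
   <work_buf_additional_days>0.01</work_buf_additional_days>
</global_preferences>

work_buf_min_days is the "store at least X days of work" low-water mark and work_buf_additional_days the "up to an additional X days" value; the client asks for work when the cache drops below the first figure and tops it up to the sum of the two.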
Holmis wrote: If you're having problems getting beta GPU work try setting your "additional" cache setting to a very low number ...
I leave mine at the default of 0.1 + 0.5 days, which is usually no problem. But work has started flowing again, so I think it was just a temporary shortage on the server. I expect a lot of people did not notice, because they keep a larger cache.
I don't know what happened, but suddenly all my hosts take more time to finish v1.18:
Fury X x1: 450 -> 520 s
RX 480 x1: 660 -> 750 s
HD7950 x2: 1230 -> 1400 s
GTX 1050 Ti x1: 1470 -> 1540 s
Has anybody else observed similar behavior? Was there some change in the amount of work per task?
Mumak wrote: I don't know what happened, but suddenly all my hosts take more time to finish v1.18 ...
I can confirm that over the last maybe 4-5 days I have observed runtime increases somewhere in the region of 30-60 seconds per task. So you are not alone :-)
Perhaps we are now sifting through data from a different frequency that's slightly more demanding?
Jim1348 wrote: I leave mine at the default of 0.1 + 0.5 days, which is usually no problem ...
Day 3 for me watching my Win XP Pro x64 system with EVGA GTX-760 card chewing through work, and NOT getting new work in queue... Work queue now down approx. 50% of what it was...
Just changed Preferences to 0.5 and 0.01 and still no joy... Hit Update three times, no new work coming in. System will be out of work tonight, or early tomorrow morning.
[EDIT:]
In BOINC Preferences -> Computing Preferences, my original settings were 0.01 and 5. This was yielding "No work is available..." messages in the Event Log.
With the lower settings of 0.5 and 0.01, I got "No work sent. Job cache full." BUT, I ONLY have 29 Units in queue at this moment. (8:55 AM - PST.)
TL
TimeLord04 wrote: Just changed Preferences to 0.5 and 0.01 and still no joy ...
I wish we were given some explanation. Is it temporary (maybe due to server limitations), or more long-term because there is not enough work? People tend to assume the worst, which is sometimes accurate.
Jim1348 wrote: I wish we were given some explanation ...
Well, just played some more with settings... Now at 5 and 0.01, and the queue is SLOOOOWLY filling up, getting one or two units per pull... So the work IS there, I just have to keep playing with settings on the Windows machine to get it...
My Mac, on the other hand, is STILL set at 0.01 and 5 (my original settings) and has NO trouble keeping its queue full of 1.17 units... (The Mac has TWO EVGA GTX 750 Ti SC cards...) So I don't get it... Why would the Windows platform be stifled from getting work???
TL
[EDIT:]
Now up to 44 Units in queue. 9:25 AM - PST