MAGIC wrote:
YOW, I remember when my SC'd 660Ti was supposed to be the fastest... it takes 2hrs to run tasks x2.
Why don't you open the purse strings just a little :-). Just as a simple example, let's say about 90 bucks for something like this. It should drop that 2hrs for x2 down to something like 40mins - I actually get about 34 mins for x2 on an MSI R7 370 running in a Pentium dual core (G640) which used to have a GTX650 in it. Of course, if you were to spend a little more you could get something like an RX 480 - a lot cheaper in your country than in mine.

If you take a look at this host - currently #8 in the top hosts list - you'll see it has dual RX 480s. It's currently the highest placed dual-GPU host in the list. I saw an RX 480 on newegg for just 160 bucks after rebate. The cheapest RX 480 4GB here is the equivalent of close to US$250.
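As a rough sanity check, the arithmetic behind those estimates (assuming the quoted times are per x2 batch, i.e. two tasks sharing the GPU, and that daily output scales straight from batch time) works out like this:

```python
# Rough throughput comparison from the figures quoted above.
# "x2" means two tasks run concurrently on the GPU, so one
# completed x2 batch yields two finished tasks.
gtx660ti_x2_hours = 2.0        # 2 hrs per x2 batch on the SC'd 660Ti
r7_370_x2_minutes = 34.0       # observed per x2 batch on an MSI R7 370

gtx660ti_tasks_per_day = 24 / gtx660ti_x2_hours * 2        # 24.0
r7_370_tasks_per_day = 24 * 60 / r7_370_x2_minutes * 2     # ~84.7

speedup = (gtx660ti_x2_hours * 60) / r7_370_x2_minutes     # ~3.5x
print(gtx660ti_tasks_per_day, round(r7_370_tasks_per_day, 1), round(speedup, 1))
# 24.0 84.7 3.5
```

So even the quoted "40mins for 90 bucks" figure is conservative next to the 34-minute observation.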
MAGIC wrote:
And right now the server will only let me have 2 new tasks after 10 tries and mine are always set for 10 days worth.
Why are you trying for a 10 day cache? If a new, improved version of an app comes out, that's a long time to wait before you get to take advantage of it - quite apart from the risk of exceeding deadlines if a low crunch time estimate causes an over-fetch.
The difficulty in getting work is because the new app is a beta test app. Each workunit must be made up of a standard app task as a companion for a beta test task, so there's a limited sub-set of tasks the scheduler gets to choose from. If the scheduler has already allocated all the beta test copies from that sub-set, you just have to wait until more get added - room for more beta tasks is released as standard (non-beta) tasks get allocated. Because of the speed improvement, there's obviously quite a demand for beta tasks.
This will all disappear when the new app comes out of beta. I see exactly the same behaviour as you and I have a work cache around 0.5 days. My hosts aren't having any real problem getting work. They just have to ask a few extra times - no big deal.
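Purely as an illustration of the pairing constraint described above - the names and structure here are my own invention, not the project's actual scheduler code - the "run out of beta copies, ask again later" behaviour looks something like this:

```python
# Hypothetical sketch of the constraint described above: each workunit
# pairs one standard-app copy with one beta-test copy, so beta work
# dries up once every beta slot in the current sub-set is allocated.
# This is NOT the real scheduler code - just an illustration.
def assign_beta_task(workunits):
    """Return a workunit with a free beta slot, or None if the
    sub-set is exhausted and the client must ask again later."""
    for wu in workunits:
        if not wu["beta_assigned"]:
            wu["beta_assigned"] = True
            return wu
    return None

subset = [
    {"id": 1, "beta_assigned": True},   # beta copy already handed out
    {"id": 2, "beta_assigned": True},
    {"id": 3, "beta_assigned": False},  # one beta slot still free
]
print(assign_beta_task(subset))  # {'id': 3, 'beta_assigned': True}
print(assign_beta_task(subset))  # None - wait for the sub-set to refill
```

The second call coming back empty is the "2 new tasks after 10 tries" experience: the client just has to keep asking until standard allocations free up more beta slots.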
MAGIC wrote:
I won't have any luck here these days unless we get back to non-CPU GPU tasks again
We've never really had 'non-CPU' GPU tasks before :-). Even BRPx CUDA tasks had to steal a bit of CPU support from whatever else was running. With the latest app, the CPU component to run GPU tasks on the AMD GPUs I'm using is getting quite small - might even be smaller than what it used to be for BRP6 :-).
In the past, it often took quite a long time for the efficiency of an app to improve like this. The current rate of progress is quite breathtaking by comparison. All those staff or volunteers involved deserve a big vote of thanks for what has been a rapid and relatively painless transition.
It's been a number of years since we've had a GPU app that was as CPU hungry as the current ones. And while CPU load for the current app on AMD GPUs might be falling fast, on the nVidia side they're still eating a full CPU core each. Somewhere on one of the threads about the new app, I saw a claim that the nVidia OpenCL libraries were to blame, with any app built against them getting a CPU spin lock instead of the more efficient options that AMD OpenCL or nVidia CUDA get. The cynical part of me suspects that nVidia is deliberately gimping their OpenCL like this to encourage the use of CUDA instead.
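To illustrate why a spin lock pins a whole core - this is a generic busy-wait vs blocking-wait illustration in Python, not NVIDIA's actual library code:

```python
# Generic illustration (not NVIDIA's code): a spin-waiting thread burns
# CPU time for the entire wait, while a blocking wait would sleep in
# the OS and cost almost nothing until signalled.
import threading
import time

done = threading.Event()

def spin_wait():
    while not done.is_set():   # busy-poll: keeps one core occupied
        pass

spinner = threading.Thread(target=spin_wait)
start_cpu = time.process_time()
spinner.start()
time.sleep(0.2)                # stand-in for a 0.2s GPU kernel
done.set()                     # "kernel finished"
spinner.join()
spin_cpu = time.process_time() - start_cpu
print(f"CPU time burned while spinning: {spin_cpu:.2f}s")
# A blocking wait (done.wait()) over the same 0.2s would burn ~0s CPU.
```

A library that polls like `spin_wait` shows a full core of load however long the GPU kernel runs, which matches the reported behaviour.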
DanNeely wrote:
... on the nVidia side they're still eating a full CPU core each.
Yes, exactly. On the older ones I've tried - 550Ti, 650 and 750Ti - the CPU component is virtually equal to the full elapsed time. I don't know if that's also true for the latest and greatest in the NVIDIA range - I don't own any and I haven't bothered to check because they're too expensive for me. I'm more interested in something a bit less costly like the RX 480 (or 470 or 460), where an elapsed time of around 1340s only requires 96s of CPU support. I haven't done any real research yet but I know it's cheaper for me to buy an RX 480 than a GTX 1060 and I believe it will give a greater output as well. Once I've checked that, I'll probably buy one to measure output against power consumption.
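From those figures, the CPU support fraction is easy to quantify (just a quick check of the numbers quoted above):

```python
# CPU support as a fraction of elapsed time, using the figures above.
rx480_elapsed_s = 1340   # elapsed time for a task on an RX 480
rx480_cpu_s = 96         # CPU support time for that task

rx480_fraction = rx480_cpu_s / rx480_elapsed_s
print(f"RX 480 CPU fraction: {rx480_fraction:.1%}")  # RX 480 CPU fraction: 7.2%
# By contrast, the older NVIDIA cards mentioned (550Ti/650/750Ti) need
# CPU time roughly equal to elapsed time, i.e. a fraction near 100%.
```

That ~7% figure is why a cheap dual core can comfortably feed one or two AMD GPUs.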
DanNeely wrote:
... The cynical part of me suspects that nVidia is deliberately gimping their OpenCL like this to encourage the use of CUDA instead.
I have no idea of the real story but something like that sounds plausible. I'm sure all that NVIDIA really worries about is gaming performance as that's where their biggest sales and profits come from. I think it's quite fortunate that AMD cards that need to compete on price just happen to have the better OpenCL performance (or so it seems). It seems to be a win/win for non-gamers who just want to crunch :-).
Cheers,
Gary.
Gary Roberts wrote:
Of course, if you were to spend a little more you could get something like an RX 480 - a lot cheaper in your country than in mine. I saw an RX 480 on newegg for just 160 bucks after rebate.
Just bought one... Will post timings... It's all your fault, Gary.
I forgot to mention this was on a 4790K at 4.6GHz which, due to the app being so CPU intensive, might account for something.
Maybe I'll throw the card in my Xeon at 2.6GHz and see if there's any difference.
Quote: To get more than 6M
That's a nice one, it's got plenty of everything :)
Cheers
wes. wrote:
GTX 970: 1
YOW, I remember when my SC'd 660Ti was supposed to be the fastest... it takes 2hrs to run tasks x2.
And right now the server will only let me have 2 new tasks after 10 tries, and mine are always set for 10 days' worth.
I won't have any luck here these days unless we get back to non-CPU GPU tasks again.
MAGIC Quantum Mechanic
Story of my life, MAGIC.
Hopefully the new apps won't kill off older cards.
Wasn't there a statistics page that displayed the top video cards, or was that just on other BOINC projects?
To Magic:
You mean this? https://www.primegrid.com/gpu_list.php
But remember, as far as I understand you can't compare between projects - it's made for PrimeGrid.
A few months ago I compared the single precision you get for a given price (SP operations divided by buying price, list in Excel) with used prices on eBay, and decided on the AMD R9 290 with good cooling. You can get them on eBay for 120-150 Euro.
I don't know what the ratio is between single precision and double precision performance with the 1.18 app.
If there are better options or something is wrong, please let me know.
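The value comparison described above boils down to one division. The peak single-precision figure below is an approximate spec-sheet number for the R9 290 (my assumption, not from the post); the prices are the quoted eBay range:

```python
# Single-precision throughput per unit price, as described above.
# ~4850 GFLOPS peak SP for the R9 290 is an approximate spec-sheet
# figure (an assumption); 120-150 EUR is the eBay range quoted.
def sp_per_euro(peak_sp_gflops, price_eur):
    return peak_sp_gflops / price_eur

r9_290_gflops = 4850
low_eur, high_eur = 120, 150
print(f"{sp_per_euro(r9_290_gflops, high_eur):.1f} to "
      f"{sp_per_euro(r9_290_gflops, low_eur):.1f} GFLOPS per euro")
# 32.3 to 40.4 GFLOPS per euro
```

Running the same division over each card's spec and asking price reproduces the Excel list, though as noted, peak SP numbers don't map cleanly onto any one project's task times.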