I'm seeing a substantial speedup on my trusty old ATi 7870xt.
Running a single task I get:
1.17 about 1800s
1.18 about 1000s
Calculation time dropped to 55%
Woot! Thx for the new version :)
This is a card from the good old days when double precision performance wasn't as nerfed as it is on the newer cards.
You should try running two tasks concurrently. There should be a further speedup :-).
Cheers,
Gary.
I might try it for a short while, but I guess I'll eventually settle at 1 task, for a couple of reasons.
The card only has 2 gigs of RAM, so 2 tasks would probably fill it up completely, causing problems for everything else trying to use the GPU.
GPU utilisation already went up from 60% to 85%, so there might be little further gain.
The card runs in turbo mode (975 MHz) right now. At higher utilisation it's bound to drop to 925 MHz.
Also, this is my everyday-use PC, so I like having a bit of GPU power left :)
Cheers
Hans
P.S.: How many PCs do I need to buy for a 6,000,000 RAC?
2GB RAM is fine for x2. You should try it and see how responsive your normal work still is. You never know - it might still be OK :-).
To get more than 6M (actually more than 7M) you only need 3 of these :-).
The way its RAC has been rocketing up (probably on board one of these), you might even be able to do it with just two, in a few days' time :-).
Cheers,
Gary.
These CPU-GPU tasks seem to be CPU hogs.
My quad core with the 660Ti OC does X2 in 2 hours but my once FAST 3-core with the 560Ti OC takes 5 times as long.
It used to be close to the same as the 660Ti when these were pure GPU tasks before.
Got one error which I never remember seeing before: *The printer is out of paper*
https://einsteinathome.org/task/602955096
I haven't tried my 650Ti's or the 550Ti yet, but it looks like I'm in for some slow tasks when I do.
MAGIC Quantum Mechanic
Unfortunately, I know some crunchers who were turned off by this. They wanted their cpu cores to be used on other projects so they switched to a different gpu project altogether.
Is there anyone reading this that can answer whether the newer CPU-GPU programs are doing a "spin loop" on the CPU side? The SETI applications do that unless a control file is supplied telling the program to use a timed wait. This reduced the CPU usage per program from 100% to around 10%-20%, but it did slow the processing down a little.
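For readers unfamiliar with the distinction being asked about, here is a generic sketch of the two waiting strategies. This is illustrative Python, not the actual SETI or Einstein host code (the real applications poll a GPU driver for kernel completion, not a flag), but it shows why a spin loop pins a CPU core while a timed wait barely registers:

```python
import threading

def wait_spin(done: threading.Event) -> int:
    """Busy-wait: re-check the completion flag as fast as possible.
    Burns 100% of one CPU core for the whole wait."""
    polls = 0
    while not done.is_set():
        polls += 1
    return polls

def wait_timed(done: threading.Event, interval: float = 0.005) -> int:
    """Timed wait: block up to `interval` between checks.
    CPU use drops to near zero, at the cost of up to `interval`
    of added latency each time the GPU finishes."""
    polls = 0
    while not done.wait(interval):
        polls += 1
    return polls

def run(strategy):
    done = threading.Event()
    # Stand-in for a GPU kernel that completes after 50 ms.
    threading.Timer(0.05, done.set).start()
    return strategy(done)

if __name__ == "__main__":
    print("spin polls :", run(wait_spin))   # typically very large
    print("timed polls:", run(wait_timed))  # roughly 50 ms / 5 ms
```

The trade-off matches what Darrell describes: the timed wait frees the core for other projects, but each task finishes slightly later because completions are only noticed at the polling interval.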
Darrell_3 wrote: Is there anyone reading this that can answer whether the newer CPU-GPU programs are doing a "spin loop" on the CPU side?
I don't think the people who write this code have taken a position on this topic. But others posting here have cited references suggesting it is likely, and my own observations of how the application behaves under various load-mix changes make it seem likely to me as well.
The effort most likely to avoid this aspect of the current application would be a build on a CUDA base. Not long before Christmas, Bernd wrote something that suggested to me he thought that highly unlikely at the time. However, as some undesired behavior of the current (OpenCL) application version came to light, he more recently wrote something that made it seem much more likely.
I imagine it will depend on what competing priorities are in play, and, if an attempt is started, what difficulties turn up on the way. I hope for a CUDA build (55 or later) but am not counting on it.
GTX 1080 results on FGRBP1 1.18 in Windows 10:
1 task @ 9-10 minutes each = 144-160 tasks/day
2 tasks @ 16 minutes each = 180 tasks/day
3 tasks @ 24 minutes each = 180 tasks/day
4 tasks @ 32 minutes each = 180 tasks/day
I guess I'll be running 2 tasks from now on. [FGRBP1 1.17] had 3 tasks as the sweet spot (lost the data) with 4 tasks being the highest ppd but with diminishing returns.
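The tasks/day figures above follow from simple throughput arithmetic; as a sanity check (taking the quoted run times as exact):

```python
def tasks_per_day(concurrent: int, minutes_per_task: float) -> float:
    """Daily throughput when `concurrent` tasks each take
    `minutes_per_task` wall-clock minutes to finish."""
    return concurrent * 24 * 60 / minutes_per_task

# The GTX 1080 observations quoted above.
for n, minutes in [(1, 9), (1, 10), (2, 16), (3, 24), (4, 32)]:
    print(f"{n} task(s) @ {minutes} min -> "
          f"{tasks_per_day(n, minutes):.0f} tasks/day")
```

Running it reproduces the plateau: beyond 2X, each added task lengthens run times in exact proportion, so daily output stays flat at 180.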
wes. wrote: I guess I'll be running 2 tasks from now on.
I've not kept careful enough records to be quantitative, but my impression is that for the earlier FGRBP1 releases the gain in going from 1X to 2X was much larger than for the last couple of previous Einstein applications, and that this abruptly ended with Windows 1.18.
I've just been doing some overclock tinkering on my 1050, which crashed badly when I started running 1.18 at settings that had been in effect for the last couple of months. Checking my logs, I see that under comparable conditions my 2X productivity was just 8% higher than my 1X (even less improvement than you've reported on your 1080).