Safe GPU Overclocking

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1702989778
RAC: 0

(If a graphics card happens

(If a graphics card happens to break down, then as long as the customer doesn't return that card with a non-original BIOS, no manufacturer has ever been able to tell whether the card had been overclocked or not. When playing only with the normal software settings, all GPUs can be overclocked as much as they can take, and that will never invalidate the warranty.)

Der Mann mit der Ledertasche
Joined: 12 Dec 05
Posts: 151
Credit: 302594178
RAC: 0

Hi Folks, after some

Hi Folks,

after some experiments and testing I have come to the conclusion that, for my three OC cards, it makes no sense to tune them any further. Perhaps, if I spent more time on finer tuning, I would get a little bit of a speed-up. The only thing I've changed permanently is that these cards now run three GPU tasks at the same time. I will decide after a couple of weeks whether that was a good decision.

Thanks to all who have given me one or more pieces of advice, but I think that's no reason to "run angry"! ;-)

BR

DMmdL

Greetings from the North

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1704
Credit: 1069802775
RAC: 1275267

RE: Hi Folks, after some

Quote:

Hi Folks,

after some experiments and testing I have come to the conclusion that, for my three OC cards, it makes no sense to tune them any further. Perhaps, if I spent more time on finer tuning, I would get a little bit of a speed-up. The only thing I've changed permanently is that these cards now run three GPU tasks at the same time. I will decide after a couple of weeks whether that was a good decision.

Thanks to all who have given me one or more pieces of advice, but I think that's no reason to "run angry"! ;-)

BR

DMmdL

You won't be able to get those 750s to run any faster with the 2-core CPUs if you are running CPU tasks at the same time.

The GPU cards need at least one free CPU core, and with those 2-core CPUs they will run best if you ONLY run GPU X2 and leave both CPU cores free.

And the ones paired with a quad-core will run best running GPU X2 with 2 free CPU cores... at least one free.

I have a couple of 3-core hosts running GPU X2 along with vLHC CPU tasks OK, since those don't need as much CPU and RAM as the Einstein CPU tasks.

My 660Ti running GPU X2 here runs best if I leave 2 CPU cores free on that quad-core.

I have a 650Ti running OK at GPU X2 with just one CPU core free... but I don't use the CPU cores on that one.

Mine are all OC'd and SC'd, running 24/7 for over 3 to 4 years.

The main thing is to keep them running cool enough (mine are running fine @ low 60s °C).

I could be running even more GPUs here if I hadn't been running all the CERN CPU tasks for the last 5 years.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109968952424
RAC: 30401468

RE: The GPU cards need you

Quote:
The GPU cards need at least one free CPU core, and with those 2-core CPUs they will run best if you ONLY run GPU X2 and leave both CPU cores free.


You need to be a bit careful making emphatic blanket statements, because the true situation is that, "It depends ...". This is not at all a criticism, just a heads-up for others who might be reading.

Both CPU architecture and the type of GPU (as well as other factors) may have a significant influence on performance, so saying that you have to "leave those two CPU cores free" may turn out not to be the best advice. We should advise people to try it out on their particular setup and see what works best for them. As an example, I have a Pentium dual-core (G3258) with a 750Ti. It has run both 2x and 3x on the GPU, and at 2x there is no output benefit at all from running fewer than 2 CPU tasks. Any gain in GPU output (difficult to quantify) is outweighed completely by the loss of the CPU task(s). Even at 3x I can still run 2 CPU tasks. There is no significant gain, but no output loss either, just a bit more power used. If the aim is to minimise power consumption, then it's quite different yet again, with different optimal choices.

So, it's important to emphasise that people should think through their aims and to experiment with settings to see what best suits those aims. There is no blanket, 'one size fits all' best solution. The majority of volunteers probably just want a 'set and forget' experience. For them, 'leave everything at default' is probably the best advice for the 'performance' settings. If people get sufficiently involved to start adding GPUs and running concurrent tasks, we can make suggestions but urge them to experiment for themselves.

Cheers,
Gary.

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0

Tasks with Arecibo data

Tasks with Arecibo data complete successfully on my GeForce GTX 750 OC, while tasks with Parkes data tend to run for a long time and then error out. My PC has a 4-core A10-6700 and 24 GB RAM. I am a novice Windows 10 user and GPU cruncher, so all my parameters are at default.
Tullio

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7057194931
RAC: 1600768

RE: The majority of

Quote:
The majority of volunteers probably just want a 'set and forget' experience. For them, 'leave everything at default' is probably the best advice for the 'performance' settings.


While I strongly agree that urging people to decide what they care about, and then experiment to see what happens, is good advice for those minded to give things a try, I do think there is one setting so commonly helpful that it can be urged even for the "set and forget" group.

That is setting GPU tasks to run at what I like to call a multiplicity of two, or running 2X for short, and what the Einstein@home preferences term a "GPU utilization factor" of 0.5.

Across my own experience and the reports I have seen here, running 2X very nearly always beats running at 1X on all reasonable metrics, often rather substantially. On the other hand, further gain from going higher varies greatly from case to case, is commonly quite small, and is sometimes not present. Still, backing down by one CPU core is so commonly helpful in some way that trying that dimension should be pretty high on the short list of things to try. And if freeing one helps a lot, possibly more.

As I care a lot about power, I've followed that advice all the way down to zero CPU jobs, which for my hardware and the current applications I run gives a definite power efficiency advantage over running even one, and is surprisingly close to break-even on credit production.

I agree that the effect of reducing the number of CPUs BOINC is authorized to use varies rather widely based on application, specifics of the CPU, specifics of the GPU, and perhaps the phase of the moon for all I know.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109968952424
RAC: 30401468

RE: Tasks with Arecibo data

Quote:
Tasks with Arecibo data complete successfully on my Geforce GTX 750 OC, while tasks with Parkes data tend to go on for a long time and then error out.


Tullio,

Here is a link to all error tasks that show for that machine. There are twice as many BRP4G failures as BRP6 failures. If you look at how long some tasks ran before they errored out (68,641.20 seconds for BRP6, 35,011.61 seconds for BRP4G, 411,008.55 seconds for O1AST), it's pretty apparent that something is going on that is completely disrupting the performance of that machine. Are you running multiple projects in different virtual machines, or something like that?

Cheers,
Gary.

Bill592
Joined: 25 Feb 05
Posts: 786
Credit: 70825065
RAC: 0

RE: Tasks with Arecibo data

Quote:
Tasks with Arecibo data complete successfully on my GeForce GTX 750 OC, while tasks with Parkes data tend to run for a long time and then error out. My PC has a 4-core A10-6700 and 24 GB RAM. I am a novice Windows 10 user and GPU cruncher, so all my parameters are at default.
Tullio

Tullio,
Go into account settings -> computing preferences and set it to use "no more than 98%" of the CPUs.
That should free up one of your CPU cores and your GPU will run much better (probably :-) ).
It won't take effect until BOINC requests more data.
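
For reference, the same limit can also be set locally. This is only a minimal sketch of a global_prefs_override.xml file (the local preferences file in the BOINC data directory, which overrides the website settings if present); the 98% value is just the example from above:

<global_preferences>
    <!-- illustration only: use at most 98% of the CPUs,
         which on a 4-core host leaves one core free for the GPU -->
    <max_ncpus_pct>98.0</max_ncpus_pct>
</global_preferences>

The client needs to re-read its preferences (a restart will do it) before a local file like this takes effect.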

Bill

(You might have to do the same at your other projects, SETI etc. Worth trying anyway.)

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7057194931
RAC: 1600768

RE: It won't take effect

Quote:
It won't take effect until BOINC requests more data.


Actually, for that one a simple project update will do the trick. Changing GPU multiplicity requires a successful WU download to take effect, but most other preference adjustments take hold after an update, whether that update is forced by user request or comes about naturally in the course of normal activity.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109968952424
RAC: 30401468

RE: ... setting GPU tasks

Quote:
... setting GPU tasks to run at what I like to call a multiplicity of two, or running 2X for short, and what the Einstein@home preferences term a "GPU utilization factor" of 0.5.


Absolutely! We could all agree that this (almost universally, apart from low-end GPUs) would give better performance 'out of the box'. However, I don't think it should be the default behaviour. There may be lots of people with low-end GPUs who would be adversely affected without understanding why. There may also be people who suffer a hardware failure and could possibly make some sort of 'claim' that the project unfairly overloaded their equipment. It may be best to keep it as an 'opt-in' option (more prominently publicised) and let the user choose it if they wish.

The utilization factor of 0.5 would be a real bonus for volunteers with AMD GPUs, even those with a 'little less than mid-range' type of unit, but particularly for the higher end. The real benefit would be the automatic 'freeing' of a CPU core without the user needing to change the % of cores setting. If a single GPU task is running and there are CPU tasks on all cores, the performance may be quite woeful, as it was for the host in this particular thread, once GPU crunching got going.

Maybe the default for AMD GPUs should be set to "1 CPU + 1 AMD GPU" so that everybody would always have a 'free' core, and if you ran 2x you would get 2 free cores. This should work quite well for the 'non-tweakers' but could easily be overridden with an app_config.xml by people who wish to tweak; a sketch of such a file follows.
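
For anyone who does want to tweak, here is a minimal app_config.xml sketch along those lines. It goes in the project's folder inside the BOINC data directory; the app name below is only a placeholder, so check client_state.xml (or the task names on the website) for the real name on your host:

<app_config>
    <app>
        <name>einsteinbinary_BRP6</name> <!-- placeholder: use the app name from your own client_state.xml -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>   <!-- 0.5 of a GPU per task, i.e. two GPU tasks at once (2x) -->
            <cpu_usage>1.0</cpu_usage>   <!-- budget a full CPU core per GPU task, which 'frees' a core automatically -->
        </gpu_versions>
    </app>
</app_config>

The client picks this up after re-reading config files (the BOINC Manager has an option for that) or after a restart.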

Quote:
As I care a lot about power, I've followed that advice all the way down to zero CPU jobs, which for my hardware and the current applications I run gives a definite power efficiency advantage over running even one, and is surprisingly close to break-even on credit production.


Yes, for sure! Right now, there is probably significant new participation from people interested in GW detection. Until there is a reasonable GPU app, we don't particularly want to discourage people from running GW CPU tasks :-).

Cheers,
Gary.
