Problem with CasA beta platform

Manney
Manney
Joined: 19 Aug 06
Posts: 54
Credit: 7312627
RAC: 0
Topic 197543

I have an 8-core AMD CPU and a GTX 760 GPU.

When I'm running the new GPU version of these WUs, it starts running 8 standard WUs + 1 GPU WU.

The GPU WU uses up pretty much an entire core, so it's battling for CPU time with a standard unit. When this happens, the crunch time of the GPU unit gets pushed to over 8 hours.

When I suspend all but 7 standard WUs, this leaves 1 core for the GPU unit. The 7 cores then churn through the standard WUs as usual (1 WU per core every 14 hours) and the GPU churns out 1 WU every 30 minutes.

Can someone re-patch this beta version so it does this automatically?

Thanks.

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117246902002
RAC: 36198343

Problem with CasA beta platform

Quote:
Can someone re-patch this beta version so it does this automatically?


You have the ability to free up a core yourself by changing the number of CPUs to use to something less than 100%. No need to suspend tasks; just change the setting. You can do this under computing prefs on the website, or you can use local prefs in BOINC Manager. To leave one free core on an 8-core CPU, just set 88%. I think it would even work to free up one core if you set 99%.
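If you prefer editing files directly, the same setting lives in global_prefs_override.xml in the BOINC data directory (it's the file the Manager writes when you use local prefs). A minimal sketch of what I'd expect it to contain, for an 8-core box where you want 7 cores crunching; double-check it against what the Manager itself produces:

<global_preferences>
   <!-- use at most 88% of the CPUs: 7 of 8 cores = 87.5%, rounded up -->
   <max_ncpus_pct>88</max_ncpus_pct>
</global_preferences>

After saving it, have the client re-read preferences (or just restart BOINC) for it to take effect.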

Your other alternative (which is a bit more complicated) is to change the device requirements from 0.5 CPUs + 1.0 NVIDIA GPUs to 1.0 CPUs + 1.0 NVIDIA GPUs by using an app_config.xml file. The instructions for doing this are given in BOINC's Client Configuration documentation. Scroll down towards the bottom and look for the Application Configuration heading. Although this is more complicated, it has the advantage that all cores are still 'available for use' whenever you aren't running a GW GPU task.
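To make that concrete, here's roughly what such an app_config.xml, placed in the Einstein@Home project folder, would look like. Treat it as a sketch rather than a ready-made file: the app name below is my guess, so check the exact short name in your client_state.xml (or in the task properties) before using it.

<app_config>
   <app>
      <name>einstein_S6CasA</name>   <!-- assumed app name, verify in client_state.xml -->
      <gpu_versions>
         <gpu_usage>1.0</gpu_usage>  <!-- run one task per GPU -->
         <cpu_usage>1.0</cpu_usage>  <!-- reserve a full CPU core for that task -->
      </gpu_versions>
   </app>
</app_config>

Then tell BOINC to re-read its config files (or restart the client) and the tasks should show 1.0 CPUs + 1.0 NVIDIA GPUs.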

There is also a third possibility. I'm not running any GW GPU tasks but I see there is a 'GW GPU tasks utilization factor' in the project prefs. If you set this factor to 0.5 (to run 2 tasks concurrently) this would automatically reserve a full core for GPU support duties. It might be worth seeing how this affects the overall crunching efficiency :-).
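Just for reference, the utilization factor itself is set on the website, but the same two-tasks-per-GPU arrangement could also be sketched locally in app_config.xml (same caveat about the assumed app name as above):

<app_config>
   <app>
      <name>einstein_S6CasA</name>   <!-- assumed app name, verify in client_state.xml -->
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>  <!-- two GPU tasks run concurrently -->
         <cpu_usage>0.5</cpu_usage>  <!-- 2 x 0.5 = one full core reserved in total -->
      </gpu_versions>
   </app>
</app_config>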

EDIT: I've just noticed that Holmis has posted an example app_config.xml, showing how to do the second procedure above.

Cheers,
Gary.

mikey
mikey
Joined: 22 Jan 05
Posts: 12657
Credit: 1839054474
RAC: 4427

RE: RE: Can someone

Quote:
Quote:
Can someone re-patch this beta version so it does this automatically?

You have the ability to free up a core yourself by changing the number of CPUs to use to something less than 100%. No need to suspend tasks, just change the setting. You can do this under computing prefs on the website or you can use local prefs in BOINC Manager. To have one free core in an 8 core CPU, just set 88%. I think it would even work to free up one core if you set 99%.

Just to confirm your thoughts... yes, using 99% will work to use one less CPU core. On a 4-core PC anything down to 75% will have the same effect, but putting in 74% will not drop it to 2 cores in my quad-core example; it goes down from there. My laptop has an i7 that is hyper-threaded to 8 CPU cores; I have the percentage set at 63% and it is only using 5 cores to crunch with. If I set it to 64% it will use 6 cores to crunch with.

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117246902002
RAC: 36198343

Thanks for confirming that.

Thanks for confirming that. I thought I'd seen it mentioned before but I wasn't 100% certain. I tend to use the precise value, rounded up if necessary.

Cheers,
Gary.

Manney
Manney
Joined: 19 Aug 06
Posts: 54
Credit: 7312627
RAC: 0

Thanks for that, I edited the

Thanks for that, I edited the config files. Everything is running fine now.

Just out of curiosity, is the WU my GPU crunches in 30 minutes identical to the one my CPU takes 14 hours to process?

Because I'm receiving 390 credit points, I'm just wondering if both WUs have the same scientific value.

Holmis
Joined: 4 Jan 05
Posts: 1118
Credit: 1055935564
RAC: 0

Yes they are and that's the

Yes they are, and that's the beauty of GPU crunching: it's fast! =)

If you check out the list of valid S6CasA tasks from your machine and then click on any of the "work unit IDs" from a GPU task, you'll see that they are all being compared to a task that has been run on a CPU. In the beta phase the project only allows one beta task in each work unit, to make sure that the new app gives the same results as the old one.

Manney
Manney
Joined: 19 Aug 06
Posts: 54
Credit: 7312627
RAC: 0

Why then does the CPU cost

Why, then, does the CPU cost more than my GPU?

mikey
mikey
Joined: 22 Jan 05
Posts: 12657
Credit: 1839054474
RAC: 4427

RE: Why then does the CPU

Quote:
Why, then, does the CPU cost more than my GPU?

If it makes you feel bad, you could always get a GeForce GTX 780 Ti for $700.00. Your FX-8329 can be had for about $160. Your particular GPU, a GTX 760, is selling now for $250. I got all the above prices at newegg.com.

Logforme
Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

RE: Why then does the CPU

Quote:
Why, then, does the CPU cost more than my GPU?


Your CPU is a jack of all trades, master of none: it can do just about anything, but nothing very efficiently.
Your GPU is specialized for parallel computations. It requires that the computations be organized to fit the GPU, but if you do that, it is super efficient.

archae86
archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7212594931
RAC: 972293

RE: Why then does the CPU

Quote:
Why, then, does the CPU cost more than my GPU?


For quite a few years now, standard CPU designs have been far larger in component count and computational output than the optimum tradeoff size for performance per unit of complexity. In other words, if you made a wafer full of older-generation CPU designs, you would actually get more aggregate computational throughput than you do from the newer ones. If all problems were perfectly parallelized by commodity software, CPU design would have taken a different path, with far more cores per die, rather lower performance per core, but much higher total output.

But in real life many problems of interest are NOT perfectly parallelized by commodity code. It is closer to the truth to say that only a few problems are usefully made parallel by fully automatic means, and that some more are pushed into an approximation of that shape by talented programmers helped by complex, hard-to-understand software.

The graphics chips, on the other hand, evolved to fit a heavily constrained set of problems, many of which had a high degree of inherent parallelism, and for which the problem to be solved was, to some extent, shaped around efficient use of the resources employed to solve it. So they currently employ many more, much smaller "processors" per chip. For problems that suit them, they are much nearer the optimum "size of single processor". For problems that don't suit them, you can always just run them on a general-purpose CPU.

If GPUs were not better than CPUs at something, they would not exist.

Manney
Manney
Joined: 19 Aug 06
Posts: 54
Credit: 7312627
RAC: 0

I adjusted my CPU in this

I adjusted my CPU on this platform so that it dedicates an entire core to the GPU. It worked for a while, but now it's crunching 9 units at the same time, which means the GPU is no longer being fed correctly.

It still says 1 CPU + 1 GPU, as opposed to before the change when it said 0.5 CPU + 1 GPU.

Any fixes?
