ABP1 CUDA applications

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,018
Credit: 96,906,875
RAC: 146,388

RE: RE: I don't know why

Message 95657 in response to message 95656

Quote:
Quote:

I don't know why you got a value as low as 10.5K. Perhaps there is more variation in what the GPU use can save and perhaps that was a task with little or no GPU contention. I'm only guessing.

Indeed, this is curious. Not all WUs are alike — there is some data dependency in the WUs' runtime — but this is the most noticeable I've ever seen. I suggest waiting to see whether it validates, and then checking whether the wingman also spent a less-than-average runtime on it.


My humble suggestion is contention, yes, but which aspect of the GPU? GPU thread time per se? Memory on the graphics card? GPU/CPU bandwidth? Which option is selected for "Suspend GPU work while computer is in use"? .....

Those all vary widely. Then throw in a first person shooter and it all becomes moot. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,748
Credit: 26,058,504,501
RAC: 35,150,268

RE: ... Those all vary

Message 95658 in response to message 95657

Quote:
... Those all vary widely. Then throw in a first person shooter and it all becomes moot. :-)


It wouldn't even need to be a game, would it? I'd imagine an animated 3D screensaver kicking in while he's sleeping would have some effect ...?

Cheers,
Gary.

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3,515
Credit: 420,803,809
RAC: 233,671

The funny thing is that we

The funny thing is that we have here an outlier (the 10k sec job) on the fast end. I can understand outliers on the slow end, but if playing a first-person shooter accelerates ABP1 jobs, we are sending the wrong message to kids :-).

CU
Bikeman

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,018
Credit: 96,906,875
RAC: 146,388

RE: RE: ... Those all

Message 95660 in response to message 95658

Quote:
Quote:
... Those all vary widely. Then throw in a first person shooter and it all becomes moot. :-)

It wouldn't even need to be a game, would it? I'd imagine an animated 3D screensaver kicking in while he's sleeping would have some effect ...?


With my young lads - "Online Call of Duty 4" or some such - the GPU temp can go up 20+ degrees real quick, real easy. :-)

I think it's polygon rate related? [ thus scene complexity and rate of change thereof ]

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

hotze33
hotze33
Joined: 10 Nov 04
Posts: 100
Credit: 293,019,576
RAC: 941

@ Gary Roberts My system is a

@ Gary Roberts
My system is an Einstein-dedicated one. It has two 9800GX2 cards, so every CPU core has one GPU.
One 9800GX2 is in a PCIe x16 slot and one in a PCIe x4 slot. There was a difference in run time earlier: the GPU in the PCIe x4 slot took 15.5k s and the one in the PCIe x16 slot took 14.3k s. Now, for the 90 cr claimed units, I don't get times above 14.4k s. So there is some kind of speed-up.
100 cr claimed: 15.2k s
70 cr claimed: 10.7k s
92 cr claimed: 14k s
67 cr claimed: 10.1k s
All units were granted 250 cr.
The run times seem more consistent now. The last transition between the CUDA apps also gave me some weird CPU numbers.

In conclusion, I have to get a better mainboard with at least two PCIe x16 slots ;)

Tom95134
Tom95134
Joined: 21 Aug 07
Posts: 18
Credit: 2,498,832
RAC: 0

RE: RE: RE: RE: ...An

Message 95662 in response to message 95632

Quote:
Quote:
Quote:
Quote:
...And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!

i agree with that !!


No one told anyone to leave the project. XJR-Maniac was just quite rude without reason.

If you don't want Einstein CUDA tasks, deselect them in your preferences. Nothing easier than that.


Yes, I too am unsure how that deduction was made from Gary's post. :-)

In any case, the primary requirement for CUDA to yield a significant benefit is that the problem must lend itself to massive parallelism ( ideally thousands of threads, plus other restrictions ). This is a basic reason ( plus, of course, issues like compiler technology ) for the variable success of CUDA apps.

The development here at E@H is quite cautious, with a considerable user pool feeding back via beta testing. CUDA is no exception. While not always successful ( a failure outcome is within the definition of testing ), one hopes to be able to productively generalise beyond the test participants. One can opt out of CUDA if it doesn't fly well enough. In fact, that is likely to be a common response from those whose hardware is unsuitable for optimal CUDA use. Alas, as Oliver pointed out, without changing the BOINC code ( not under E@H control ) a default opt-out setting was/is not available.

Cheers, Mike.

My understanding is that you can opt out of CUDA work for E@H. The problem is how to do it without losing already-downloaded WUs.

Here is the approach I am taking:

#1. Stop allowing new E@H tasks (WUs) to be downloaded.

#2. When all E@H tasks have been completed and uploaded, change the E@H preferences to disallow CUDA tasks.

#3. Allow new E@H tasks again.

You may ask why I am taking these steps. Mainly because E@H does not make efficient use of the GPU. I can see this simply by watching the Elapsed Time and the "To Completion" time on the Tasks tab. SETI@Home shows a decrement of the To Completion time of about 10~15 seconds for every second of CPU time while E@H only decrements the "To Completion" value by about 1~2 seconds for every second of Elapsed time.

I look forward to running E@H tasks on the GPU in the future but currently it is not the best use of my hardware. My GPU is an NVIDIA card GeForce 8800 GTS 512 and the mobo is an Intel D975XBX2 with an Intel Core2 Quad 2.40 GHz.

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3,515
Credit: 420,803,809
RAC: 233,671

RE: Mainly because E@H

Message 95663 in response to message 95662

Quote:
Mainly because E@H does not make efficient use of the GPU. I can see this simply by watching the Elapsed Time and the "To Completion" time on the Tasks tab. SETI@Home shows a decrement of the To Completion time of about 10~15 seconds for every second of CPU time while E@H only decrements the "To Completion" value by about 1~2 seconds for every second of Elapsed time.

I'm not at all contesting your statement about the degree to which ABP1 CUDA currently uses the GPU, but you arrive at this conclusion for the wrong reason.

The rate at which the "time to completion" is diminishing is NOT a good indicator of app efficiency. When BOINC downloads a new workunit, it tries to predict the runtime of it, based on information that is embedded in the workunit itself and based on statistics BOINC gathered during earlier WUs for the same project.

So the rate at which the "time to completion" changes for a job in execution is just a measure of how good this runtime prediction was. If BOINC made a good guess, the value will diminish at a rate of 1:1 and, in the end, the predicted runtime will turn out to be about right.

If the initial guess was way too high, BOINC will sense during the execution of the task that the progress (in % as shown in the BOINC Manager) is actually faster than predicted, and it will slowly correct the time to completion downwards.

If the initial guess was way too low, you will even see the "Time to completion" going UP instead of down for some time during the execution of the task.
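This behaviour can be sketched numerically. Below is a toy Python model — not actual BOINC client code; the blending schedule (the factor of 3) is invented purely for illustration — of a displayed "to completion" value that mixes the initial runtime guess with the total implied by the task's reported progress:

```python
# Toy model of a BOINC-style "time to completion" display.
# NOT actual BOINC client code; the weighting schedule is invented
# for illustration only.

def time_to_completion(initial_estimate, elapsed, fraction_done):
    """Displayed seconds remaining: a blend of the client's initial
    runtime guess and the total runtime implied by reported progress."""
    if fraction_done <= 0.0:
        return initial_estimate
    projected_total = elapsed / fraction_done       # what progress implies
    w = min(1.0, 3 * fraction_done)                 # trust progress more over time
    estimate = (1 - w) * initial_estimate + w * projected_total
    return max(estimate - elapsed, 0.0)

# A 10,000 s task reporting honest progress (elapsed = fraction * 10000):

# Good initial guess: remaining time falls at exactly 1 s per elapsed second.
print(time_to_completion(10000, 1250, 0.125))   # 8750.0
print(time_to_completion(10000, 2500, 0.25))    # 7500.0

# Guess far too high (20k s predicted): remaining falls much faster than 1:1.
print(time_to_completion(20000, 1250, 0.125))   # 15000.0
print(time_to_completion(20000, 2500, 0.25))    # 10000.0

# Guess far too low (5k s predicted): remaining actually RISES for a while.
print(time_to_completion(5000, 1250, 0.125))    # 5625.0
print(time_to_completion(5000, 2500, 0.25))     # 6250.0
```

With an accurate guess the displayed value drops 1:1 with elapsed time; an overestimate drops several seconds per second (as Tom saw with S@H); an underestimate can climb before it falls, exactly as described above.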

CU
Bikeman

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 4,748
Credit: 26,058,504,501
RAC: 35,150,268

RE: @ Gary Roberts My

Message 95664 in response to message 95661

Quote:
@ Gary Roberts
My system is an Einstein-dedicated one. It has two 9800GX2 cards, so every CPU core has one GPU.

As your computers are hidden, it wasn't possible for me to see what hardware you had. You didn't comment on how many cards or that they were duals so I just made the (wrong) assumption of a single GPU. I did warn that I don't have any NVIDIA experience and that I was just guessing :-).

So just ignore my comments about possible contention.

Cheers,
Gary.

Tom95134
Tom95134
Joined: 21 Aug 07
Posts: 18
Credit: 2,498,832
RAC: 0

RE: I'm not at all

Message 95665 in response to message 95663

Quote:

I'm not at all contesting your statement about the degree to which ABP1 CUDA currently uses the GPU, but you arrive at this conclusion for the wrong reason.

The rate at which the "time to completion" is diminishing is NOT a good indicator of app efficiency. When BOINC downloads a new workunit, it tries to predict the runtime of it, based on information that is embedded in the workunit itself and based on statistics BOINC gathered during earlier WUs for the same project.

So the rate at which the "time to completion" changes for a job in execution is just a measure of how good this runtime prediction was. If BOINC made a good guess, the value will diminish at a rate of 1:1 and, in the end, the predicted runtime will turn out to be about right.

If the initial guess was way too high, BOINC will sense during the execution of the task that the progress (in % as shown in the BOINC Manager) is actually faster than predicted, and it will slowly correct the time to completion downwards.

If the initial guess was way too low, you will even see the "Time to completion" going UP instead of down for some time during the execution of the task.

CU
Bikeman

Bikeman,

Thanks for the information. You may be right about the reason for the slow decrementing of the To Completion time. It just seemed a good indicator to me, since the BOINC client is the same for both E@H and S@H. In any case, I'm flushing all the E@H tasks and will "disallow" GPU tasks for E@H until this gets resolved. I also hope that by then the E@H CUDA/GPU tasks will be shorter than 3 hours (a limit which appears to be what the people at S@H are using), so E@H plays better with others on the GPU.

Tom

Oliver Behnke
Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 768
Credit: 25,160,422
RAC: 0

RE: I also hope by then the

Message 95666 in response to message 95665

Quote:
I also hope by then the size of the E@H CUDA/GPU tasks will also be less than 3 hours (a limit which appears to be what the people at S@H are using) so E@H plays better with others on the GPU.

Our tests indicate that the upcoming ABP2 GPU tasks typically take ~0.6 hours per WU. This is still "just" a factor of 2-3 faster than the ABP2 CPU version, but after the ABP2 release we are going to concentrate on improving the GPU code.

Cheers,
Oliver
