ABP1 CUDA applications

cristipurdel
Joined: 19 Jul 07
Posts: 26
Credit: 11991887
RAC: 0

RE: Please note that even

Message 95597 in response to message 95595

Quote:

Please note that even today the CUDA app wouldn't actually require a full CPU. However, as soon as you tell the client to use less than 100%, it doesn't renice the process (reduce its priority) anymore. From our point of view it's better to have the process claiming one CPU at the lowest priority than using, say, 60% at normal priority.

Thanks for the info, very useful.

From my point of view, while I'm running BOINC on my Xeon + NVIDIA, I can run only 3 tasks from other projects + 1 Einstein GPU task. Instead of sending tasks which use 1.00 CPU + 1.00 NVIDIA, can you modify that to 0.99 & 0.99 so that another project can be freed to utilize the rest of the CPU core used on Einstein?

I hope I'm not mistaken, but when I used an app_info.xml I put 0.3 & 0.9, and I could run a total of 5 tasks at once; now I obviously run just 4.

If the CPU or GPU is idling some of the time, or all the time, don't give it 100%; take a little bit off, like 99% :)

I'm suggesting this so I don't have to write another app_info.xml, which I'd have to edit again when ABP2 comes out.
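
For reference, those fractions go into the <avg_ncpus> and <coproc>/<count> elements of app_info.xml. A minimal sketch, assuming the standard BOINC client schema - the app name, version number and file name below are illustrative placeholders, not necessarily the project's real ones:

    <app_info>
        <app>
            <name>einsteinbinary_ABP1</name>
        </app>
        <file_info>
            <name>einsteinbinary_ABP1_3.13_cuda.exe</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>einsteinbinary_ABP1</app_name>
            <version_num>313</version_num>
            <avg_ncpus>0.3</avg_ncpus>    <!-- claim 0.3 of one CPU core -->
            <max_ncpus>1</max_ncpus>
            <coproc>
                <type>CUDA</type>
                <count>0.9</count>        <!-- claim 0.9 of the GPU -->
            </coproc>
            <file_ref>
                <file_name>einsteinbinary_ABP1_3.13_cuda.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>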

Mar.io
Joined: 9 Oct 08
Posts: 3
Credit: 56552471
RAC: 4483

RE: - NVidia GPU with at

Quote:

- NVidia GPU with at least 450 MB of free memory
- Display Driver version 190.38 (& up), i.e. CUDA 2.3 capability
BM

1. Are 450 MB really necessary? I only have 256 MB...
2. Why is CUDA 2.2 not enough? That is what the laptop version of these drivers provides; therefore no laptop can use the GPU, or am I wrong?

Thanks
Mario

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 942
Credit: 25166626
RAC: 0

RE: 1. Are 450 MB really

Message 95599 in response to message 95598

Quote:

1. Are 450 MB really necessary? I only have 256 MB...

Yes, we really need that much memory.

Quote:

2. Why is CUDA 2.2 not enough? That is what the laptop version of these drivers provides; therefore no laptop can use the GPU, or am I wrong?

I think you are wrong, as there are no separate "laptop" versions AFAIK. You may just download the latest driver for your operating system. CUDA 2.3 is superior to 2.2 in a number of details...

Oliver

 

Einstein@Home Project

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 942
Credit: 25166626
RAC: 0

RE: Instead of sending

Message 95600 in response to message 95597

Quote:
Instead of sending tasks which use 1.00 CPU + 1.00 NVIDIA, can you modify that to 0.99 & 0.99 so that another project can be freed to utilize the rest of the CPU core used on Einstein?
I hope I'm not mistaken, but when I used an app_info.xml I put 0.3 & 0.9, and I could run a total of 5 tasks at once; now I obviously run just 4.
If the CPU or GPU is idling some of the time, or all the time, don't give it 100%; take a little bit off, like 99% :)

That's what I tried to explain: as soon as we assign less than 100% (and this includes 99%), BOINC will run the app at normal priority. Please keep in mind that BOINC is meant to run in the background, affecting the user as little as possible - and there are still volunteers out there running Einstein@Home on single- or dual-core CPUs...

Oliver

 

Einstein@Home Project

pelpolaris
Joined: 5 Nov 06
Posts: 2
Credit: 74690389
RAC: 1246

RE: RE: Do you have any

Message 95601 in response to message 95588

Quote:
Quote:
Do you have any idea what kind of "Linows" or "Windux" platforms are able to run the Einstein CUDA 23 app?

Not sure what you mean by that, but their standard app is working fine on this Mandriva 2010.0 Linux system.

I only meant something as common as Win32 and/or Win64; Linux 32 and/or Linux 64.

XJR-Maniac
Joined: 8 Dec 05
Posts: 3
Credit: 670148
RAC: 0

RE: RE: Now got my first

Message 95602 in response to message 95590

Quote:
Quote:
Now got my first (and probably last) ABP1 3.13 "CUDA" WU finished, in over 10 hours, on a Q9650/GeForce GTX260, where CPU time was 8.3 hours and GPU time around 2 hours. This means more than 8 hours of wasted GPU time! Does that make any sense?

You don't have any CUDA tasks on your Q9650. The list of tasks for that machine shows 3 completed ABP1 tasks, all of which took around 17k secs and none of which used a GPU for crunching. I decided to look at your other machines and I found the GPU-crunched ABP1 task on your Pentium D. There are no other ABP1 tasks still showing on that machine, so there's no way to do a comparison. Here is the list of tasks for your Pentium D with the GPU-crunched ABP1 task at the top. It's interesting to note that it took much the same time to crunch the ABP1 task (250 credits) as the two previous GW tasks (136 credits) - nearly double the credits for a tiny bit more crunch time.

Oh, I'm sorry for that. It seems I got a little bit confused. Of course you're right when you say that it's not fair to compare a Pentium D with a Q9650. I thought that the CUDA app had been running on the Q9650 machine. My fault.

Quote:
What is even more interesting is the apparent dramatic slowdown after Nov 20. The three earlier GW tasks took around 11k secs each, while the two after this date took 27k and 29k secs respectively. Now there is variability in GW crunch times, but there is usually a variation in credits to compensate - at least partially. Since all GW tasks were awarded the same credit, it's unusual to see such a huge variation in crunch time. Can you think of anything that might have happened to your machine after Nov 20? Something drastic like halving the CPU frequency might do it :-).

Regarding the dramatic slowdown, there is a very simple solution to that mystery: I had to swap the CPUs between those two machines, because I got a new mainboard that doesn't support the Pentium D anymore, while the machine that had the Q9650 before has a mainboard "old" enough to run that "old" Pentium D chip.

Quote:
Quote:
My last two ABP1 3.12 CPU-only WUs took less than 5 hours on a Q9650.

It's not really fair to compare a Pentium D to a Q9650 :-).

Quote:
Are the new WUs more complex or longer than the old ones, or is this just another bad joke?

There aren't any new tasks - just the same old tasks being crunched with a new program which is (performance-wise) much the same as the beta test app it replaces.

It might help if you realise that just because one project can make hugely efficient use of a GPU's parallelism, other projects may struggle to achieve anything like the same even after considerable effort has been expended. You might take that into account when firing off your criticism.

OK, but if you come to a point where it's obvious that it makes no sense to continue a project like this disastrous CUDA application, I think someone should have the courage to stop it, instead of firing it out to the public with might and main, just to be able to say "look here, we have a CUDA app, too!".

I always say that BOINC is not a one-man show. If you create a new application you should always keep in mind that there is a world outside your lab, and you have to share the resources that your volunteers are donating with other projects.

And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!

Megacruncher
Joined: 11 Feb 05
Posts: 7
Credit: 1270208414
RAC: 942511

Dearie me, keep it civil

Dearie me, keep it civil folks, we are all working for the good of humankind after all.

Can I just clarify something? I gather from the above that the CUDA application might only use the GPU for a small part of the time - 4% was mentioned. Does this mean that it can be working on Collatz etc. for the other 96%? Or is it not that simple, and the GPU needs to hang around waiting for Einstein?

If it is the case that my GPU will be in a state of enforced idleness, then I think I might choose to set it to work on something that keeps it a little busier.


cristipurdel
Joined: 19 Jul 07
Posts: 26
Credit: 11991887
RAC: 0

RE: Dearie me, keep it

Message 95604 in response to message 95603

Quote:

Dearie me, keep it civil folks, we are all working for the good of humankind after all.

Can I just clarify something? I gather from the above that the CUDA application might only use the GPU for a small part of the time - 4% was mentioned. Does this mean that it can be working on Collatz etc. for the other 96%? Or is it not that simple, and the GPU needs to hang around waiting for Einstein?

If it is the case that my GPU will be in a state of enforced idleness, then I think I might choose to set it to work on something that keeps it a little busier.


You can set your resources to something like x CPU + y GPU for Einstein, and (1-x) CPU + (1-y) GPU for Collatz, but that would require a lot of 'attention' to new version updates (re-tweaking each app_info.xml), and the BOINC processes would get 'normal' priority, which would most probably freeze your screen 'most of the time'.
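
Concretely, each project reads its own app_info.xml from its own directory under projects/, so the x/y split means maintaining two files. A sketch of just the resource lines, assuming x = 0.3 and y = 0.9 as in the earlier example (the Collatz directory name is an illustrative guess):

In projects/einstein.phys.uwm.edu/app_info.xml:

    <avg_ncpus>0.3</avg_ncpus>
    <coproc>
        <type>CUDA</type>
        <count>0.9</count>
    </coproc>

In projects/boinc.thesonntags.com_collatz/app_info.xml:

    <avg_ncpus>0.7</avg_ncpus>
    <coproc>
        <type>CUDA</type>
        <count>0.1</count>
    </coproc>

Keep in mind these numbers are scheduling claims, not enforcement: the apps still use whatever CPU/GPU they actually use, which is why the screen can freeze anyway.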

Michael Goetz
Joined: 11 Feb 05
Posts: 21
Credit: 3060502
RAC: 0

I got my first three

I got my first three Einstein CUDA tasks last night. The computer is a Core 2 Quad Q6600 (2.4 GHz) with a factory-overclocked GTX 280 GPU. Link to the machine's tasks is here: 1093664

As of the time I'm posting this, 5 ABP1 tasks have been crunched: 2 on CPU only and 3 on CPU/GPU. Since the credit given is 250 for both the CPU and the CPU/GPU tasks, I'm assuming (perhaps incorrectly) that the same amount of work is being done by both.

CPU only tasks:
148600607
148695929

CPU/GPU tasks:
148948845
148986070
149018218

The interesting part is that the CPU/GPU tasks ran only moderately faster than the CPU-only tasks, despite consuming the expensive GPU resource. Also of note is that the GPU was running approximately 10 degrees Celsius cooler than it does when running GPUGRID. (The fan was running significantly slower with the Einstein app too.) This, of course, indicates that the GPU is being utilized much, much less when running the Einstein app than it is with GPUGRID.

My conclusion is that the GPU version of the app is making very inefficient use of the GPU -- in fact, judging by the time comparisons, it appears to be doing less work on the GPU than on the CPU!

As such, at least on this computer, it appears to be a waste of computing hardware to run the CUDA version of this app.

Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686056871
RAC: 577489

RE: RE: .... Let's

Message 95606 in response to message 95594

Quote:
Quote:


....
Let's assume (it's just a simple example) that an app does computations consisting of two parts, A and B, where A has to be executed before B can start. Let's assume that on a CPU, A and B each take 50% of the runtime.

Now assume that only part B can easily be ported to GPU code, resulting in a (say) 25-fold speedup for this part B. A still has to be done on the CPU.

The result: if the total runtime was 1000 s before, it is now 520 s, almost doubling the performance. Only 20 of these 520 seconds will be spent on the GPU, or below 4%. So even small load factors on the GPU can result in reasonable speedups.
...
Bikeman


Are these the actual numbers for ABP1cuda23?
If not, could you post the actual speedup on your machine between the GPU and CPU versions?

Hi!

This was just an example, and more importantly there IS no single speedup factor for this kind of app. Relative performance compared with running on the CPU only will depend drastically on the combination of CPU and GPU. E.g. running on an AMD CPU with a rather powerful GPU will give you a better speedup, because the part that is offloaded to the GPU executes rather poorly on AMD CPUs (the CPU version is slower than on Intels), etc. On a fast Intel Core 2 or i7 the speedup will be smaller, even with powerful GPUs.

I'm currently travelling and have not yet managed to deactivate the app_info.xml on my only CUDA machine that is capable of running this app, so I can't provide numbers.

CU
Bikeman
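
The example quoted above is just Amdahl's law written out. With the numbers from the quote - a job of T = 1000 s split 50/50 between parts A and B (so the offloadable fraction is p = 0.5), and part B sped up s = 25-fold on the GPU; only the general formula is added here:

    S = \frac{T}{(1-p)T + pT/s} = \frac{1000}{500 + 500/25} = \frac{1000}{520} \approx 1.92

and the GPU is busy for 20/520 \approx 3.8\% of the wall-clock time, which is the "below 4%" figure from the quote.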
