ABP1 CUDA applications

Bryan
Joined: 12 Aug 07
Posts: 7
Credit: 870,833
RAC: 0

Well, it seems that my

Well, it seems that my piddling little 256MB Nvidia card (though big enough to run 2 big fat monitors) isn't enough to run the new app. Meanwhile, I'm not getting any CPU units either. Are all the new units CUDA only?

Michael Goetz
Joined: 11 Feb 05
Posts: 15
Credit: 170,028
RAC: 0

RE: Is this the way we

Message 95618 in response to message 95616

Quote:

Is this the way we know that time is wasted :

There are two ways you can tell the GPU isn't being used effectively.

Firstly, if you have a utility that displays the GPU temperature, that's a very good indication of how hard the GPU is working. My GPU runs MUCH cooler on E@H than on any other application.
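To put a number on that temperature check: modern NVIDIA drivers ship a command-line tool, nvidia-smi, that reports temperature and GPU utilization directly (a tool the posters here wouldn't have had in this form). A minimal Python sketch, assuming nvidia-smi is on the PATH:

```python
import subprocess

def parse_gpu_status(csv_line):
    """Parse one 'temperature.gpu, utilization.gpu' CSV line,
    e.g. '62, 35 %' -> (62, 35)."""
    temp, util = csv_line.split(",")
    return int(temp.strip()), int(util.strip().rstrip(" %"))

def read_gpu_status():
    """Query the first GPU's temperature (deg C) and utilization (%)
    via nvidia-smi; raises if the tool is not installed."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu",
         "--format=csv,noheader"],
        text=True)
    return parse_gpu_status(out.splitlines()[0])
```

A GPU sitting near idle temperature with single-digit utilization while a "GPU" task runs tells the same story as the cool-running observation above.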

Secondly, you can compare the run-times of CPU and CPU/GPU workunits of the same application (Einstein, SETI, and Milkyway all offer the same app in both CPU and GPU versions). Depending on your hardware, your GPU is usually 10 to 20 times faster than your CPU, so you should expect the GPU WUs to finish in only 5% to 10% of the time it takes the same CPU WU to finish (which represents a 1000% to 2000% speed increase).

On Einstein, everyone is seeing cold GPU temps and only about a 33% increase in speed -- while using hardware that's physically around 2000% faster.

Or, let me put it this way: If it normally takes your machine about 10 hours to complete an E@H WU, if it was making good use of the GPU, it would finish WUs in around 30 to 60 minutes, give or take.

To put things in perspective, if I run Milkyway@home on my CPU, it takes a bit under four hours to run. M@H pushes my GPU harder than any other app, as evidenced by higher temperatures. When I run M@H WUs on my GPU, they complete in slightly under 4 minutes. A 6000% increase, compared to the 33% increase I'm seeing on E@H.
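The percentages in this thread treat an N-times speed-up as N x 100% (so 20x faster reads as "2000%"). A quick sketch of that arithmetic, using the run times quoted above:

```python
def gpu_vs_cpu_percent(cpu_time, gpu_time):
    """GPU speed as a percentage of CPU speed for the same WU,
    using the thread's convention that 20x faster = 2000%.
    Both times must be in the same unit."""
    return cpu_time / gpu_time * 100.0

# M@H example above: ~4 hours on the CPU vs ~4 minutes on the GPU
mw = gpu_vs_cpu_percent(4 * 60, 4)    # 60x faster -> 6000.0
# A WU taking 10 hours on the CPU and 30 minutes on the GPU
eh = gpu_vs_cpu_percent(10 * 60, 30)  # 20x faster -> 2000.0
```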

Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

Grutte Pier [Wa Oars]~GP500
Joined: 18 May 09
Posts: 39
Credit: 5,612,852
RAC: 1,114

All we can say is, there is

All we can say is, there is a lot more potential for CUDA at Einstein@home.

And these are the first steps to moving on.
Without a start there is no motion.

Einstein@home should ask for more support from the wider community.
There are people here who can help with the development of a CUDA AND Stream client for this.

You could multiply the project's total TFLOPS of throughput with that.

Tom95134
Joined: 21 Aug 07
Posts: 18
Credit: 2,498,832
RAC: 0

First let me say I'm

First let me say I'm crunching CUDA numbers just fine for both SETI@Home and Einstein@Home.

I only have one comment. SETI@Home (which also feeds CUDA tasks) breaks their CUDA tasks down into much smaller "chunks" of up to about 30 minutes. The longer tasks coming from Einstein@Home capture the GPU for extended stretches, which essentially means it doesn't play well with others.

Right now SETI is going through some problems keeping tasks fed, but once those issues are sorted out I suspect the longer tasks from Einstein will mean that everyone who participates in both projects will either have to do some very fine tweaking of the percentages allocated to each project or stop accepting CUDA tasks from one of them. Keep in mind that you cannot tweak CPU project use separately from GPU project use.
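For reference, BOINC splits processing time in proportion to each project's resource share setting, and, as noted, that single share covers both CPU and GPU work. A hypothetical sketch of the split:

```python
def share_fractions(shares):
    """Fraction of crunching time each project receives, given its
    BOINC resource share value (hypothetical illustration)."""
    total = sum(shares.values())
    return {name: share / total for name, share in shares.items()}

# Equal shares: each project gets half the machine's time
split = share_fractions({"SETI@Home": 100, "Einstein@Home": 100})
```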

Tom95134
Joined: 21 Aug 07
Posts: 18
Credit: 2,498,832
RAC: 0

RE: RE: Is this the way

Message 95621 in response to message 95618

Quote:
Quote:

Is this the way we know that time is wasted :

There are two ways you can tell the GPU isn't being used effectively.

Firstly, if you have a utility that displays the GPU temperature, that's a very good indication of how hard the GPU is working. My GPU runs MUCH cooler on E@H than on any other application.

Secondly, you can compare the run-times of CPU and CPU/GPU workunits of the same application (Einstein, SETI, and Milkyway all offer the same app in both CPU and GPU versions). Depending on your hardware, your GPU is usually 10 to 20 times faster than your CPU, so you should expect the GPU WUs to finish in only 5% to 10% of the time it takes the same CPU WU to finish (which represents a 1000% to 2000% speed increase).

On Einstein, everyone is seeing cold GPU temps and only about a 33% increase in speed -- while using hardware that's physically around 2000% faster.

Or, let me put it this way: If it normally takes your machine about 10 hours to complete an E@H WU, if it was making good use of the GPU, it would finish WUs in around 30 to 60 minutes, give or take.

To put things in perspective, if I run Milkyway@home on my CPU, it takes a bit under four hours to run. M@H pushes my GPU harder than any other app, as evidenced by higher temperatures. When I run M@H WUs on my GPU, they complete in slightly under 4 minutes. A 6000% increase, compared to the 33% increase I'm seeing on E@H.

I agree that something needs to be done to optimize the CUDA tasks. Because of the long processing time for GPU-targeted tasks, it is critical that something be done about the size of the GPU tasks coming from Einstein@Home. If using the GPU is not that much more efficient, one might as well not offer CUDA for Einstein@Home at all.

Kty
Joined: 18 Oct 09
Posts: 21
Credit: 8,698,851
RAC: 0

Hello, One of my computers

Message 95622 in response to message 95621

Hello,

One of my computers is a laptop with an NVIDIA 9600M GS.
E@H refuses to send CUDA WUs to this laptop because display driver 190.38 is required.
After checking the NVIDIA web site, it appears that driver 190.38 is not for laptops or the mobile (M-series) graphics cards. The latest display driver available for the M series is 186.81.

So if you're using a laptop with an NVIDIA GPU, don't waste your time trying to run E@H CUDA WUs. It is not possible right now.

Lionel.

Ver Greeneyes
Joined: 26 Mar 09
Posts: 140
Credit: 9,562,235
RAC: 0

Lionel, try one of the

Message 95623 in response to message 95622

Lionel, try one of the drivers from LaptopVideo2Go. Be sure to read the instructions on how to use a modified INF; it's simple when you know how!

Michael Goetz
Joined: 11 Feb 05
Posts: 15
Credit: 170,028
RAC: 0

RE: I only have one

Message 95624 in response to message 95620

Quote:
I only have one comment. SETI@Home (which also feeds CUDA tasks) breaks their CUDA tasks down into much smaller "chunks", i.e., up to about 30 minutes. Because of the longer tasks coming from Einstein@Home this will capture the GPU which essentially means it doesn't play well with others.

The problem isn't the length of the WU. S@H's WUs are actually longer (i.e., more computation is done) than Einstein's. On my computer, SETI's WUs take about 10 hours vs. 6 hours for Einstein. The difference is that the very same work units, run on my GPU, drop to 10 minutes for SETI but only 4 hours for Einstein.

For both SETI and Einstein, the same WUs are sent to both CPU and GPU computers. If you look at the results for both projects, you'll see this -- for any of your GPU WUs, chances are that the other computer ran it on the CPU.

It's not the size of the WU that's the problem -- it's that the GPU is barely being used.

As for BOINC scheduling of short vs. long tasks on the GPU -- that *should* be fine. "Should" is the important word there. I don't actually know how well the BOINC client schedules GPU tasks. But how it's supposed to work is clear: if the resource shares are the same, two projects should each get 50% of the GPU time, regardless of the size of the WUs. More of the shorter WUs will run, as compared to the longer WUs.
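To illustrate why WU size shouldn't matter under that scheme: with a fixed slice of GPU time, shorter WUs simply complete more often. A rough sketch, assuming the GPU crunches around the clock:

```python
def wus_per_day(share_fraction, wu_minutes):
    """Approximate WUs a project completes per day on one GPU,
    given its fraction of GPU time and its WU length (rough sketch,
    ignoring scheduling overhead)."""
    return share_fraction * 24 * 60 / wu_minutes

# Equal 50% shares, around-the-clock crunching:
short = wus_per_day(0.5, 30)    # 30-minute WUs -> 24.0 per day
long = wus_per_day(0.5, 240)    # 4-hour WUs   ->  3.0 per day
```

Both projects get the same 12 hours of GPU time; only the count of completed WUs differs.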

Oh, I want to make something clear. While it's painfully obvious that the E@H application isn't utilizing the GPU very well, I am in no way implying that the people who wrote the application did a poor job. Some problems simply do not lend themselves to being solved on a massively parallel computer (i.e., a GPU). It may be that Einstein is one of those problems, and it just can't be run efficiently on a GPU.

Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

Walker
Joined: 29 Nov 05
Posts: 1
Credit: 634,284
RAC: 95

RE: OK, but if you come to

Message 95625 in response to message 95602

Quote:

OK, but if you come to a point where it's obvious that it makes no sense to continue a project like this disastrous CUDA application, I think someone should have the courage to stop it, instead of firing it out to the public with might and main, just to be able to say "look here, we have a CUDA app, too!".

I always say that BOINC is not a one-man show. If you create a new application, you should always keep in mind that there is a world outside your lab, and that you have to share the resources your volunteers donate with other projects.

And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!

I agree with that!!

Gundolf Jahn
Joined: 1 Mar 05
Posts: 1,079
Credit: 341,280
RAC: 0

RE: RE: ...And hey, don't

Message 95626 in response to message 95625

Quote:
Quote:
...And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!

I agree with that!!


No one told anyone to leave the project. XJR-Maniac was just quite rude without reason.

If you don't want Einstein CUDA tasks, deselect them in your preferences. Nothing easier than that.

Regards,
Gundolf

Computers aren't everything in life. (Just a little joke)
