Use Graphic Cards?

Pepperammi
Joined: 20 Feb 05
Posts: 131
Credit: 437,943
RAC: 0
Topic 191320

This has popped up a few times over the years: using a GPU to crunch data as well as the CPU. I remember someone was trying to write some code to make it work as a school project or something, but I haven't heard anything since.

I bring it up because, out of interest, I was reading some more detailed info about some of the newer graphics cards out there (I think mine is one), and they have a built-in hardware physics engine (Havok), mostly for taking the load off the CPU during games. Might this be useful in some of the BOINC projects?
I really don't know; I'm just going on the thought that physics engines can presumably crunch some good maths. Maybe it would be more useful for that particle collider project.
I suppose it'll take more than just making an app: a new BOINC client, etc.
Added thought: when Vista comes out, they're making a big fuss about DirectX 10 being far more direct to the hardware. Might that make this a more feasible idea?

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0

Use Graphic Cards?

It's not that newer gfx cards have a built-in 'physics' engine, but that their processors have become sufficiently general-purpose that they can run some non-graphics code, 'physics' processing among it. This is done in place of, not in addition to, additional graphics processing. The reason I'm using scare quotes around 'physics' is that the 'physics' processing being done in games is Newtonian particle-based simulation only. The enhanced capabilities of newer gfx cards may allow them to be used for additional DC processing, but AFAIK at present only Folding@Home is working on this. Assuming the Folding client is successful, some other projects will probably end up with them eventually, although initially they'll probably be self-hacked clients like what Akos has done with the current Einstein client.
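
To make 'Newtonian particle-based simulation' a bit more concrete, here's a rough sketch in plain C++ (not taken from any actual engine) of the kind of per-particle update a game 'physics' step boils down to:

#include <vector>

struct Particle {
    float x, y, z;    // position
    float vx, vy, vz; // velocity
};

// One explicit Euler step under constant gravity. Real engines add
// collisions and constraints, but the per-particle arithmetic stays
// this uniform, which is why it maps so well onto a GPU's many
// parallel units.
void step(std::vector<Particle>& particles, float dt) {
    const float g = -9.81f; // gravity along z
    for (Particle& p : particles) {
        p.vz += g * dt;     // integrate acceleration into velocity
        p.x  += p.vx * dt;  // integrate velocity into position
        p.y  += p.vy * dt;
        p.z  += p.vz * dt;
    }
}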

Elwood
Joined: 13 Feb 06
Posts: 8
Credit: 136,898
RAC: 0

I have a friend whose job

I have a friend whose job entails developing applications to run wholly in the NVidia 512MB graphics-card environment. Preliminary tests show pretty amazing potential in this arena, given enough time and talent to write the appropriate code, of course. I believe a machine with four graphics cards was able to render graphics faster than quad-processor boxes with pretty decent graphics cards of their own. IMO, it's only a matter of time before more code takes advantage of the incredible performance of some of these cards for more than just video.

Gecko
Joined: 5 Jun 06
Posts: 10
Credit: 61,039
RAC: 0

I've been following the

I've been following the subject w/ much interest for the last year.
Recently came across this:

http://gamma.cs.unc.edu/GPUFFTW/

You can keep up w/ the latest developments here:

http://www.gpgpu.org
http://www.gpgpu.org/forums/viewtopic.php?t=2021&highlight=fftw

To be honest, much of my interest in this subject is based on SETI's heavy use of FFTs. I'm new to Einstein and don't know much about the kind of processing that Einstein performs. Does Einstein spend a lot of time processing FFTs as well?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,305
Credit: 248,922,265
RAC: 33,990

RE: Does Einstein spend a

Message 36404 in response to message 36403

Quote:
Does Einstein spend a lot of time processing FFTs as well?


No, none at all.

The data is FFTed during preprocessing so that we can distribute the frequency bands to the users, but nothing like this is done in the apps.

BM

Pepperammi
Joined: 20 Feb 05
Posts: 131
Credit: 437,943
RAC: 0

RE: I've been following the

Message 36405 in response to message 36403

Quote:

I've been following the subject w/ much interest for the last year.
Recently came across this:

http://gamma.cs.unc.edu/GPUFFTW/

You can keep up w/ the latest developments here:

http://www.gpgpu.org
http://www.gpgpu.org/forums/viewtopic.php?t=2021&highlight=fftw

To be honest, much of my interest in this subject is based on SETI's heavy use of FFTs. I'm new to Einstein and don't know much about the kind of processing that Einstein performs. Does Einstein spend a lot of time processing FFTs as well?

http://gamma.cs.unc.edu/GPUFFTW/

Looks like they have created the program that would open up the GPU to be used. That's how I interpret the info on the site, but it seems that a new program needs to be written that uses those libraries, which will talk to the GPU. I might be wrong in my understanding as it's a bit over my head; I'll have a more thorough read. Results in some areas do seem amazing.
It appears to be open for people to have a bash at getting it to work now, though.

Anyone with enough know-how fancy giving it a go? I guess they already have and found loads of complications, like needing a new app specific to the GPU, while BOINC would need reprogramming to understand the GPU as a new asset and to make sure it used the new app on the GPU and the standard app on the CPU, and vice versa.

@ The Tick - Thanks for adding that. That's one of the sites I remember seeing some time ago but they hadn't gotten very far then.

Gecko
Joined: 5 Jun 06
Posts: 10
Credit: 61,039
RAC: 0

Yes, that's assuming that the

Yes, that's assuming that the kind of processors used on these GPUs can perform well w/ the kind of calculations that Einstein performs. Being new to Einstein, I'm not yet familiar w/ how well processor architecture lines up w/ Einstein's requirements.

From this read, it looks like the GPUs are wickedly fast w/ the kind of FFTs SETI performs: SP (single-precision) 1d complex powers of two. That wouldn't mean jack to Einstein, according to Bernd above... unless GPUs can excel at other types of processing relevant to Einstein. If so, imagine the power using Nvidia's new GeForce 7950 GX2, or better yet, two of them, which is possible according to Nvidia.
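
For anyone wondering what 'SP 1d complex powers of two' actually looks like in code, here's a minimal sketch using the ordinary CPU FFTW library (single-precision interface, link w/ -lfftw3f). This isn't SETI's or GPUFFTW's actual code, just an illustration of the kind of transform being discussed:

#include <fftw3.h>  // ordinary CPU FFTW, single-precision interface

int main() {
    const int N = 1 << 17;  // power-of-two transform length

    fftwf_complex* in  = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * N);
    fftwf_complex* out = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * N);

    // Plan a 1d single-precision complex-to-complex forward transform.
    fftwf_plan plan = fftwf_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    // ... fill 'in' w/ time-domain samples here ...

    fftwf_execute(plan);  // frequency-domain result lands in 'out'

    fftwf_destroy_plan(plan);
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}

GPUFFTW's pitch, as I read it, is that transforms of exactly this shape can be pushed onto the graphics card instead of the CPU.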

If there's a relevant application for this and other distributed-computing projects, GPUs could provide a huge leap, aka "supercomputer on a desk"... Hmm, where have I heard THAT before? LOL!
If any of our resident wizards were looking to chart new waters, this is probably the expeditionary-type project to challenge them, and if one COULD get BOINC and a project application to work properly, it would likely be the next "gotta have". It could also give almost anyone's PC substantially more project lifespan and performance upgradability, as they could upgrade GPUs from time to time. A GPU upgrade could make a major improvement and would not necessarily require a motherboard change each time, which most CPU upgrades will require if changing CPU families. Why pay the $$$ to stay w/in a CPU family (e.g. D 805 to D 950) and gain less performance than spending the same $$$ on a GPU (looking at this, of course, from the point of view of a single-purpose computer)?

Fun to think about, probably Everest to make work.
Bernd, are we dreaming or just crazy?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4,305
Credit: 248,922,265
RAC: 33,990

RE: Fun to think about,

Message 36407 in response to message 36406

Quote:
Fun to think about, probably Everest to make work.
Bernd, are we dreaming or just crazy?


As I wrote in the PS3 thread - it would mean actually rewriting parts of the analysis code with a completely new structure. Probably some man-months of work (I think even a bit more than for the PS3/Cell). It might be worth writing and optimizing for a large number (many hundreds or thousands) of identical systems, but not for a few dozen.

BM

Martin P.
Joined: 17 Feb 05
Posts: 162
Credit: 40,156,217
RAC: 0

Even if it was possible and

Even if it was possible and someone was willing to change the code, wouldn't this create major thermal problems in the computers? Are the GPUs' processors and coolers designed to run at full speed for such long periods? And what about noise levels when all the fans are running at full RPM?

DanNeely
Joined: 4 Sep 05
Posts: 1,364
Credit: 3,562,358,667
RAC: 0

RE: Even if it was possible

Message 36409 in response to message 36408

Quote:
Even if it was possible and someone was willing to change the code, wouldn't this create major thermal problems in the computers? Are the GPUs' processors and coolers designed to run at full speed for such long periods? And what about noise levels when all the fans are running at full RPM?

While gaming, GPUs are pegged at 100% anyway, and it doesn't take long to approach equilibrium thermal levels.

Pepperammi
Joined: 20 Feb 05
Posts: 131
Credit: 437,943
RAC: 0

RE: RE: Fun to think

Message 36410 in response to message 36407

Quote:
Quote:
Fun to think about, probably Everest to make work.
Bernd, are we dreaming or just crazy?

As I wrote in the PS3 thread - it would mean actually rewriting parts of the analysis code with a completely new structure. Probably some man-months of work (I think even a bit more than for the PS3/Cell). It might be worth writing and optimizing for a large number (many hundreds or thousands) of identical systems, but not for a few dozen.

BM

This shouldn't be too much of an issue. Writing the code, I admit, would be very difficult, but it should be easier than starting an app from scratch, since we have the standard code to use as a basis and to show what the rewritten/new app would need to be doing.

Is this relevant?

Quote:

http://gamma.cs.unc.edu/GPUFFTW/documentation.html (Building an application)
Setting up the library for use in your application is simple.

Build the gpufftw library
Make sure you link to glut32.lib and gpufftw.lib (or libgpufftw.a on linux)
The include folder included with the distribution should be in your include path
Once the above two things have been done, all you need to do is #include in your application


The site 'The Tick' linked to has code to allow programs to use more or less generic GeForce cards. They say it now works on all GeForce 6 and later cards, and they're working on adding ATI chipsets. My limited understanding reads it as something like a translator: send the instructions to the code they've written and it translates them for the GPU. I'm probably wrong.
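
Just to make the quoted build steps a bit more concrete, here's a rough skeleton of what an app using the library might look like. I don't have the actual header in front of me, so the include name and the commented-out calls are placeholders for whatever the GPUFFTW header really provides; the link requirements come straight from the documentation quoted above:

// Skeleton only: the real entry points come from the GPUFFTW header.
// Per the docs quoted above: link against gpufftw.lib and glut32.lib
// (libgpufftw.a on Linux) and put the distribution's include folder
// on the include path.
// #include <gpufftw.h>   // header name assumed - check the distribution

int main() {
    const int N = 1 << 20;            // power-of-two length, as the library targets
    float* data = new float[2 * N];   // interleaved complex samples (re, im)

    // ... fill 'data' with the signal to transform ...

    // Hypothetical call pattern (upload, transform on the GPU, read back);
    // substitute the actual functions declared in the GPUFFTW header:
    // gpufftw_plan p = gpufftw_plan_dft_1d(N, data, GPUFFTW_FORWARD);
    // gpufftw_execute(p);
    // gpufftw_destroy_plan(p);

    delete[] data;
    return 0;
}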

Consider that at least 35-40% of the PCs already crunching work for science projects will be using Nvidia cards (another 35-40% ATI, and the rest onboard graphics, of which some will be Nvidia or ATI too).
These are modest guesstimates.
That's a hell of a lot of PCs with extra assets that could be used for science - a hell of a lot more than trying to convert the few consoles out there. I say 'consoles' because it would still be more than converting a small fraction of the consoles (Xbox, Xbox 360 and PS3, which I hear still hasn't even fully finished development).

Plus, consider that GPUs are advancing at an incredible rate, that DirectX 10, released with Vista, will give more direct contact with the GPU, and that Vista apparently has a form of GPU multitasking ability? Nice.
Doing more looking, I saw that for some distributed science projects the GPU can already outperform the CPU! For those projects, maybe they should start considering writing apps for GPUs instead of CPUs, as the performance gap is only set to increase... better still if both the CPU and GPU are utilised.

Quote:
wouldn't this create major thermal problems in the computers?


You should be worried about the build quality of your computer if that's the case. They are supposed to be built to run steadily at full load and to be capable of cooling themselves properly, even on a warm day.

Sounds to me like nobody wants to take up the task - "too difficult". I think once the first attempt is made, that code can be used as a basis for other projects to work from. But the ball has to be given an initial push.

It's ambitious, I know, but I'm going to try to start learning to code myself. I don't have nearly the grey matter of the coders on some of these pages, but maybe I'll get far enough to give it a stab in the dark, and maybe that'll be enough to finally give this the starting push.
