On increasing the speed.

ragnar schroder
Joined: 31 Mar 05
Posts: 29
Credit: 6974655
RAC: 1634
Topic 189053

An idea: perhaps system components other than the CPU could be utilized to facilitate MUCH faster crunching?

Modern GPUs in particular are very powerful - some may even be able to do discrete FFTs on MB-size data sets in parallel. (I assume the FFT is where the Einstein app spends much of its time.)
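
(To be concrete about the transform I mean: the heart of it would be a call like the one below into FFTW, the standard CPU FFT library. Just a sketch - the size and data are placeholders, since I don't know the app's internals:)

    #include <fftw3.h>

    /* Sketch: one ~1M-point complex FFT - the kind of transform a GPU
     * or FFT card might offload. Size and contents are placeholders. */
    int main(void)
    {
        const int N = 1 << 20;  /* ~1M points: an "MB-size" data set */
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

        /* ... fill 'in' with the time series to be analyzed ... */

        fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        fftw_execute(p);   /* this call is where the CPU time would go */

        fftw_destroy_plan(p);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }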

There seem to be ways to harness this power from application-level software, since projects like
http://graphics.stanford.edu/projects/brookgpu/intro.html exist.

So maybe the Einstein application could be rewritten to optionally make use of GPUs and/or other components? Not to mention the specialized FFT cards that many tech/science people may already have - they go for under 1K.

Payoffs in terms of computing resources available to the project could be HUGE.

And out in geek land this may even be a piece-of-cake kind of thing.

Just a suggestion - it's always nice to suggest work that OTHERS should do :-)

Greetings, Ragnar Schroder

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

On increasing the speed.

> Payoffs in terms of computing resources available to the project could be
> HUGE.
>
> And out in geek land this may even be a piece-of-cake kind of thing.

Ragnar,

This is one of the more common proposals and on the surface it looks to be a "slam-dunk" ...

However, (pronounced "However COMMA") most GPUs are actually programmed in assembler for the GPU itself. That means that without higher-level language support (and I can debate whether C, or any of its derivatives, truly deserves the designation of a higher-level language, with me taking the quite reasonable position that it does not, and you taking the unreasonable position that it does) it would be pretty near impossible to make an executable that would be viable across several platforms.

Though I will admit that I am truly fascinated by the proposition of using the actual display memory as a data store for the intermediate results of the calculations - very much in the mode of the old O-scope-on-the-address-bus technique used to monitor many computers in the early days (for a description of this, see Kidder's "The Soul of a New Machine"; which reminds me, I haven't read that book in a couple of years my ownself) ...
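
(If anyone wants to tinker with that idea, the plumbing is at least simple - in OpenGL terms, and assuming a rendering context is already current, stashing and fetching is just a pair of calls. The 512x512 size below is made up:)

    #include <GL/gl.h>

    /* Sketch: treating display memory as a scratch store for
     * intermediate results. Assumes a GL context is current. */
    float scratch[512 * 512 * 4];   /* RGBA, one float per channel */

    void stash_and_fetch(void)
    {
        /* write intermediate results into display memory ... */
        glDrawPixels(512, 512, GL_RGBA, GL_FLOAT, scratch);

        /* ... and pull them back out later (readback was the
         * notoriously slow step on AGP-era buses) */
        glReadPixels(0, 0, 512, 512, GL_RGBA, GL_FLOAT, scratch);
    }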

So, the bottom line is that it is not likely to happen any time soon. And if it does, it would first hit projects like SETI@Home, which is open source, so someone like you could make the first translation of the source to a GPU executable ...

gravywavy
Joined: 22 Jan 05
Posts: 392
Credit: 68962
RAC: 0

Likewise the idea of using

Likewise, the idea of using FFT cards sounds good, and it would be a good proposal in a grid situation (I mean one where the project owned and paid for all the hardware), or one where a small number of collaborators were all putting in significant amounts of money to run the project.

The problem in public-donation distributed computing (DC) is that most people don't have an FFT board, and those few machines that do are almost certainly dedicated to running heavy number crunching for a stand-alone project. There will be very little spare capacity on existing FFT boards.

The idea of asking people to buy FFT boards again sounds fine, but misses the point of DC, which is to use up spare capacity on machines that already exist.
Few donors would part with 1K to boost their ratings. The few that did would improve their own standing in the 'top producers' lists, but the overall impact on the project would be small, at the cost of all that extra programming and all that extra testing.

The essence of DC is to use the machine you already have; from the project viewpoint that means you write code for the machines that people have already bought for other purposes, not for hardware you then expect them to go out and buy.

So: a technically good idea, but better suited to private collaborations than to public-donor projects, in my opinion.

~~gravywavy

Bruno G. Olsen & ESEA @ greenholt
Joined: 20 Dec 04
Posts: 115
Credit: 7668259
RAC: 0

are you talking about

are you talking about utilizing hardware for something it wasn't intended for?

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

> are you talking about

Message 11002 in response to message 11001

> are you talking about utilizing hardware for something it wasn't intended for?

Yes, and no ... :)

The use of a GPU to do signal processing and other operations similar to what we do with DC projects is nothing new. The killer is that the conversion/coding effort is huge, with little potential gain. I grant that I have 2 super-duper GPUs in my old game-playing machines, but it is far more common to have low-end GPU cards ...

Plus, each one has to be programmed for individually ... programs that run on ATI cards may not run at all on an nVidia card, and vice versa ...

Also, to muddy the waters more, we are basically keeping up with the demand even without the added benefit of the GPUs out there.

Heck, if we REALLY want to glut the market ... all we need to do is have companies like Dell add BOINC to their standard installs as an icon, like the ones for AOL and so forth. Many people just don't know about this aspect of computing; it is not on their radar screens at all ...

But, to finish: yes, the use is non-standard, but no, the GPU is not going to be used to do anything that it is not currently capable of doing ...

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 92

> Plus, each one has to be

Message 11003 in response to message 11002

> Plus, each one has to be programmed for individually ... programs that run on
> ATI cards may not run at all on an nVidia card, and vice versa ...

No, you'd just program the application in DirectX 3D for Windows and whatever the equivalent is for other OSes. It can live off one program, though; otherwise we'd have separate games for nVidia, ATI, Matrox and other cards.

The problem, imho, is that Dx is too big an API to program in.
Not only do you need the latest DirectX, you also need to download that whole DirectX version (32 MB minimum?) plus the application (another 16 MB minimum?), so that's bye-bye to all dial-up users with every major application change.
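
(For the curious, the shape of the trick is the same whichever API you use - sketched below with OpenGL rather than Dx only because it is shorter: pack the data into a texture, draw one screen-sized quad so the card touches every element, then read the pixels back. A real version would also bind a pixel shader to do the actual math; the run_one_pass name is just for illustration, and this skeleton assumes a context and an n-by-n viewport already exist:)

    #include <GL/gl.h>

    /* The generic GPGPU pattern, sketched with OpenGL (not Dx) calls.
     * A fragment program doing the real arithmetic would be bound
     * before this is called; without one the pass just copies data. */
    void run_one_pass(GLuint data_texture, int n, float *result)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, data_texture);

        /* one full-screen quad = the "kernel" runs once per texel */
        glBegin(GL_QUADS);
          glTexCoord2f(0, 0); glVertex2f(-1, -1);
          glTexCoord2f(1, 0); glVertex2f( 1, -1);
          glTexCoord2f(1, 1); glVertex2f( 1,  1);
          glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();

        /* copy the computed pixels back to main memory */
        glReadPixels(0, 0, n, n, GL_LUMINANCE, GL_FLOAT, result);
    }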

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

> No, you'd just program the

Message 11004 in response to message 11003

> No, you'd just program the application in DirectX 3D for Windows and whatever
> the language is for other OSes. It can live off one program though, else we'd
> have seperate games for nVidia, Ati, Matrox and other cards.

I am not sure that Direct-X is a viable "language" to use for scientific processing. I admit that I could be wrong ... but even if it is viable to program in Direct-X, I would not think the overhead would be low enough to make high performance possible.

Bruno G. Olsen & ESEA @ greenholt
Joined: 20 Dec 04
Posts: 115
Credit: 7668259
RAC: 0

Paul: well, I'm not

Paul:

well, I'm not exactly asking about capability but about intended use, but anyway ... ;) The reason I was asking is that I'm not sure I'm interested in my graphics card being used for anything but graphics - I bought it to do graphics, after all. In the same way, I don't want a graphics card that makes use of system memory. In the end I would never be sure how any component of my computer was performing, or how many resources I had for this and that - not to mention the possibility that a non-critical component could become critical, so that the breakdown of a normally non-critical component could have a massive impact on software performance or even functionality.

Hope you see my point, as I think it's pretty hard to explain in a foreign language :)

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 92

> I am not sure if DirectX is

Message 11006 in response to message 11004

> I am not sure that Direct-X is a viable "language" to use for scientific
> processing. I admit that I could be wrong ... but even if it is viable to
> program in Direct-X, I would not think the overhead would be low enough to
> make high performance possible.

Good point. It may not be a viable language to program in, but it would be the one to program with. You're addressing the GPU, so you need to use Dx to tell it what to do. Since, as Bruno said, it is only there to process video data, graphics etc., the program may end up either being big or burning out the GPU itself.

For the person who thought of this, again ...
Then there's the other problem: a lot of GPUs out there don't have a fan. They have a heatsink with special fins, and fins don't cool as well as a fan does!
And the people you'd want to take a look inside their case would break their warranty by opening it up and looking. You may not give a damn, but someone who just bought a $2500 system may.

Hence I doubt any project around will want to try this.

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

> well, I'm not exactly

Message 11007 in response to message 11005

> well, I'm not exactly asking about capability but about intended use, but
> anyway ... ;) The reason I was asking is that I'm not sure I'm interested in
> my graphics card being used for anything but graphics - I bought it to do
> graphics, after all. In the same way, I don't want a graphics card that makes
> use of system memory. In the end I would never be sure how any component of
> my computer was performing, or how many resources I had for this and that -
> not to mention the possibility that a non-critical component could become
> critical, so that the breakdown of a normally non-critical component could
> have a massive impact on software performance or even functionality.

Yes, the use of system memory allows less expensive video cards, at the huge cost of a very slow graphics system. And your other point is well taken too. I put high-end graphics cards in two of my older systems (EQ-1 and EQ-2) when I was playing EverQuest a lot; roughly every year to 18 months I would buy another $500-something card for each system to keep up with the speed.

> Hope you see my point, as I think it's pretty hard to explain in a foreign
> language :)

Yes, I see your point, and I have no problem reading and understanding your posts. Heck, English is my ONLY human language and I have trouble getting people to understand what I write ...

And as Ageless said, there is a high cost in programming. I did look at the API yesterday after my post and saw nothing there that encouraged me to think it was general enough to allow Direct-X to be used for general computing efforts.

Wurgl (speak^Wcrunching for Special: Off-Topic)
Joined: 11 Feb 05
Posts: 321
Credit: 140550008
RAC: 0

> Heck, English is my ONLY

Message 11008 in response to message 11007

> Heck, English is my ONLY human language ...

Human? Do you speak Klingon?

;^)
