The truth is that to get into GPU computing at all you need a card that, in general, costs more than $100 even at the low end. If you want to do serious work, well, right off you need to start thinking about a card in the $200+ range ... Domination requires a true commitment of cash ... :)
Sure, but you replace 60 regular computers with one GPU, so even $200+ is not too much.
Which just means that projects that were previously impractical now become practical. Take GPU Grid: it likely would not have been started several years ago because of the computational intensity of the work ... now it is practical ...
If we get a few more GPU projects, this will actually mean more resources for the non-GPU projects.
It seems that all of the different projects that use GPU-optimized code and run under BOINC have their code optimized for Intel and NVIDIA, except Milkyway@home. I wonder why? Is it that much harder to write a program that runs on a RADEON than on an NVIDIA card? Take a look at this page, the speed is enormous.
Nvidia ported over the Seti@Home application and helped add the code to BOINC. ATI/AMD wasn't interested at the time; they didn't have the manpower or the money, or whatever. They basically told BOINC and the projects to add support and port the applications over themselves, without so much as a peep of help. The documentation for porting applications to ATI's CAL interface isn't really much of a document; it's more trial and error.
The Milkyway application was made by a volunteer, a project participant.
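To give a feel for what such a port involves, here is a minimal sketch in C against the vendor-neutral OpenCL API that comes up further down the thread. None of this is the actual Seti or Milkyway code; the kernel and the names in it are invented for illustration. The point is that the CPU inner loop has to be rewritten as a kernel, and the data explicitly shipped to the card and back:

/* Toy port, for illustration only: the CPU loop
 *     for (i = 0; i < n; i++) out[i] = a[i] * a[i];
 * rewritten against the OpenCL C API. Works on any vendor whose
 * driver exposes OpenCL. Error checking trimmed for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void square(__global const float *a,"
    "                     __global float *out) {"
    "    int i = get_global_id(0);"
    "    out[i] = a[i] * a[i];"
    "}";

int main(void)
{
    enum { N = 1024 };
    float a[N], out[N];
    for (int i = 0; i < N; i++) a[i] = (float)i;

    /* Pick the first GPU the driver reports. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Ship the input to the card, compile the kernel, run, read back. */
    cl_mem d_a = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof a, a, NULL);
    cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                  sizeof out, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "square", NULL);
    clSetKernelArg(k, 0, sizeof d_a, &d_a);
    clSetKernelArg(k, 1, sizeof d_out, &d_out);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d_out, CL_TRUE, 0, sizeof out, out,
                        0, NULL, NULL);

    printf("out[10] = %.0f (expect 100)\n", out[10]);
    return 0;
}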
As for the speed differences and why most projects use CUDA, this forum thread on the GPUGRID message boards might explain it a little bit...
Why not use both, like Folding@home?
Hmm... It seems that they need help from volunteers both to write the code and to crunch.
Einstein@Home has also had good help from volunteers writing its optimized code, if I remember right. :)
Eventually we will get there ... Folding has had years to get where they are now.
OpenCL should make this easier, maybe ...
But more likely what will happen is that we will find OpenCL performs like the non-optimized applications, and that on projects with open source, ports to the vendor-specific APIs will produce applications even faster than the OpenCL ones.
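To make that concrete, here is a minimal sketch of what the portability half of the bargain looks like in C: the same enumeration code sees NVIDIA and ATI/AMD devices alike, while a tuned CUDA or CAL path has to be written once per vendor. The array bounds are arbitrary:

/* List every GPU the OpenCL drivers expose, regardless of vendor.
 * The fixed-size arrays (8 platforms/devices) are an arbitrary cap. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        char vendor[256];
        clGetPlatformInfo(plats[p], CL_PLATFORM_VENDOR,
                          sizeof vendor, vendor, NULL);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU,
                           8, devs, &ndev) != CL_SUCCESS)
            continue; /* platform has no GPUs (e.g. a CPU-only driver) */

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof name, name, NULL);
            printf("%s: %s\n", vendor, name);
        }
    }
    return 0;
}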
That's a pity, as the ATI HD 38xx and 48xx series seem to be good at double precision.
And what about the new 4770, with almost 1 TFLOPS?
I'm going to buy one or several for E@H and MW@H.
Unless you want it to play games, you might as well wait until E@H actually starts to issue GPU work. Last month there were statements that applications were in the works, and more recently that there were not ...
MW is having difficulty keeping up with demand, and there are a lot of complaints about that. If Travis is off coding a GPU application, that actually explains it a little bit. In the meantime I would wait and see what happens.
I have been waiting for this for several years already! =)
So what about double precision on the 4770?
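Rather than guessing from spec sheets, a host program can ask the driver directly. A minimal sketch in C, assuming an OpenCL-capable driver for the card: double support is advertised through the cl_khr_fp64 extension (or AMD's cl_amd_fp64 on ATI parts of that era):

/* Report whether the first GPU advertises double precision.
 * Checks the extension string; names are from the OpenCL spec
 * (cl_khr_fp64) and AMD's vendor extension (cl_amd_fp64). */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    char name[256], ext[4096];

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_NAME, sizeof name, name, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof ext, ext, NULL);

    if (strstr(ext, "cl_khr_fp64") || strstr(ext, "cl_amd_fp64"))
        printf("%s: double precision supported\n", name);
    else
        printf("%s: single precision only\n", name);
    return 0;
}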