I see that Einstein@Home can run on NVIDIA GPUs.
Hi!
Einstein@Home features two searches using two different apps: one for gravitational wave detection using LIGO detector data (the "GC1 app"), and one search for pulsars using Arecibo radio telescope data (the "ABP2 app").
Currently, one app (ABP2) has experimental support for GPU crunching (NVIDIA CUDA-enabled cards only, with at least 512 MB of RAM). This app uses the GPU for some parts of the computation (the Fast Fourier Transform, or FFT, for the curious) but handles the rest of the crunching on the CPU.
The CUDA-enabled ABP2 app allocates a full CPU core plus one GPU, so if you have, say, a quad-core CPU and one single-GPU video card, you can run three E@H CPU apps plus one ABP2 CUDA app in parallel.
The fact that only part of the computation is done on the GPU limits the speedup you can expect from this app. If you have a very fast CPU and a low-end video card (those selling for around $50, for example), the GPU app might even run slower than the CPU version!
The best results (in terms of a comparison between the CPU and GPU versions) can be expected on low- to mid-range AMD CPUs with medium- to top-range graphics cards. For example, I have an AMD X2 @ 2.5 GHz and a 9800 GT and get a speedup of approximately 2.
This app is just the first step; the next generation of the ABP2 CUDA app is currently under development and will put more load on the GPU and less on the CPU, further increasing the performance of the app. No release date has been announced yet.
Happy crunching
HB
Hello Bikeman,
thank you for the info.
I just took a look in the Einstein data folder and found -- no app_info.xml.
How and where can I make these settings? I could run that on my quad-core with a GTX 260 (the old type).
One app works with CUDA; three would be great! At the moment SETI is offline 3 days a week and the cache for the CUDA apps is much too small for three days, GPUGRID does not like my old GTX 260, MW runs much faster on my ATI cards, and there are no other real scientific apps (Collatz is something for a handful of math freaks; I wonder why they don't have apps for my phone ...), so my NVIDIA runs dry.
Regards
Alexander
Hi Alexander,
There is no need for an app_info.xml file; the ABP2 CUDA app is a stock app distributed automatically.
So your PC should download and run such units, unless:
- resource share and/or debt favors other projects on that PC at the moment
- available (i.e. free) RAM on the card is less than ca. 400 MB
- you disabled GPU crunching in the web preferences or the local manager settings
- you set E@H to "no more work".
From what I see in your host list, your graphics driver version and amount of memory should be sufficient, so it's not clear to me why that host doesn't request (and get) E@H work at the moment.
CU
HB
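A side note on the "local manager settings" point above: besides the web preferences, the BOINC client can also be told to ignore GPUs altogether via a cc_config.xml file in the BOINC data directory. A minimal sketch, assuming a 6.x client; no_gpus is a standard BOINC client option, not anything Einstein-specific:

<cc_config>
    <options>
        <!-- 1 = do not use any GPUs, even if they are present -->
        <no_gpus>1</no_gpus>
    </options>
</cc_config>

If that flag is set, no project on that host will get GPU work, so it is worth ruling out.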
Hi,
I did a project reset. After a lot of downloads I now have running:
3 SSE2 WUs and
1 CUDA23 WU.
I'd like it the other way round: 3 CUDA and one SSE2, or 4 CUDA, whatever is possible.
I asked about the XML file because some other apps use it to adjust the amount of GPU usage.
Any hint on how to do that?
Regards
Alexander
Hi!
You can only run one ABP2 CUDA task at a time (unless your PC has more than one NVIDIA GPU, but I don't think that's the case for your PC).
Greetings
HB
Hi,
this seems to be a waste of resources. How do other projects manage to regulate this? What's the secret behind these app_info.xml files?
Other projects allow you to adjust the CPU and GPU usage in this file without changing anything in the app file itself.
An example from AstroPulse:
    <coproc>
        <type>ATI</type>
        <count>0.25</count>
    </coproc>
This seems to be a function of BOINC, not of the app. If that is true, creating an appropriate app_info.xml could help.
Is there any information about this anywhere?
As you mentioned earlier, Einstein does not need an app_info file, but would one be recognised?
Regards,
Alexander
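A note on the fragment above: in an app_info.xml, the coproc element sits inside an app_version block, and a count below 1 tells BOINC to schedule several tasks per GPU at once (0.25 means a quarter of a GPU per task, i.e. four tasks in parallel). A minimal sketch of such a block, with the app name and version number as made-up placeholders rather than real AstroPulse values:

<app_version>
    <app_name>astropulse_example</app_name>      <!-- placeholder name -->
    <version_num>505</version_num>               <!-- placeholder version -->
    <avg_ncpus>0.25</avg_ncpus>                  <!-- fraction of a CPU core reserved per task -->
    <coproc>
        <type>ATI</type>
        <count>0.25</count>                      <!-- a quarter of a GPU per task: four tasks per GPU -->
    </coproc>
</app_version>

Whether running several tasks per GPU actually helps depends on each task fitting into the card's memory, which is the sticking point in the next reply.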
The cruel truth is that two ABP2 tasks just would not fit on your video card at the same time, memory-wise.
CU
HB
Hi,
I think I have to accept that.
But at least I can tell my children that I tried ;-))
Anyway, is there a list somewhere of the commands that are accepted by BOINC? Why do they not write ?
And a summary of how that file is organised and of its parameters?
BTW, many thanks for your effort.
Regards
Alexander
You're welcome.
The official BOINC wiki article on this can be found here: http://bolt.berkeley.edu/wiki/Anonymous_platform. If you need more info on specific tags, you'll find more in the discussion forums ==> Google is your friend :-)
CU
HBE
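To give a rough picture of what the wiki page describes: app_info.xml is a single file in the project directory that declares the application, the files it uses, and one or more app versions. A bare-bones sketch with all names invented for illustration; the authoritative tag list is on the anonymous platform page linked above:

<app_info>
    <app>
        <name>example_app</name>                 <!-- placeholder app name -->
    </app>
    <file_info>
        <name>example_app_1.00.exe</name>        <!-- placeholder executable -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>example_app</app_name>
        <version_num>100</version_num>
        <avg_ncpus>1</avg_ncpus>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>example_app_1.00.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>

Keep in mind that once a project runs under anonymous platform, the client no longer receives stock application updates for it, so this is only worth setting up if you really need the extra control.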
Hi,
I'm just checking what's going on at all the micro- and macrocosm projects that have GPU apps.
GPUGRID is working on OpenCL apps that will run on NVIDIA and on ATI cards that support OpenCL.
My (old type) GTX 260 lacks some features, which is why I cannot crunch GPUGRID right now. They are working on Fermi-enabled apps, but at the moment they only have beta WUs.
MW has no Fermi support; I have not checked SETI yet.
I found a guy who may buy my GTX 260; it's good enough for his purposes. But that raises the question of which GPU should be my next one.
Will the new Einstein app support Fermi or OpenCL? What are the requirements for future apps? Is double precision required? And as I've seen in an earlier post, memory could also be important.
It would be nice if someone could leave a note here.
Regards,
Alexander