Einstein@Home is beginning a new round of searching for radio pulsars in short-orbital-period binary systems.
This is accompanied by the release of a new application (called BRP3). The new application is particularly efficient on NVIDIA Graphics Processor Cards (up to a factor of 20 faster than the CPU-only application). In addition, when running on an NVIDIA GPU card, this new application makes very little use of the CPU (typically around 20% CPU use when the GPU is devoted to Einstein@Home).
The NVIDIA GPU application is initially available for Windows and Linux only. We hope to have a Macintosh version available soon. Due to limitations in the NVIDIA drivers, the Linux version still makes heavy use of the CPU. This will be fixed in Spring 2011, when a new version of the NVIDIA Driver is released. Many thanks to NVIDIA technical support for their assistance!
Because we have exhausted the backlog of data from Arecibo Observatory, this new application is being shipped with data from the Parkes Multibeam Pulsar Survey (from the Parkes Radio Telescope in Australia). In the next weeks we expect to also start using this new application on fresh Arecibo data taken with the latest 'Mock Spectrometer' back-end.
Questions, problems or bug reports related to this new application and search should be reported in this news item thread as a 'Comment'.
Bruce Allen
Director, Einstein@Home
Copyright © 2024 Einstein@Home. All rights reserved.
Comments
RE: We just installed
For BOINC to see CUDA GPUs on OS X you need to install the CUDA toolkit and the CUDA driver (in that order);
I believe it's the CUDA 3.2 version that you need to install.
Claggy
RE: For Boinc to see Cuda
What do you need the CUDA Toolkit for?
BM
RE: RE: For Boinc to see
I was thinking of compatibility across different projects that have Mac CUDA apps
(Collatz's Mac CUDA app errors out without the toolkit).
Claggy
RE: collatz's MAC Cuda app
Too bad. They should distribute the libraries (cudart, cufft, whatever they use) with the application, like all other projects I know of do. It's legal as long as they distribute the EULA.txt with the libs.
BM
We just installed a Quadro
We just installed a Quadro 4000, the latest drivers, CUDA drivers and the CUDA toolkit. Any word on whether Einstein@home supports this card on OS X 10.6.7?
Chris
I have a MacBook Pro with a
I have a MacBook Pro with a GeForce 9400M with 253 MB of RAM. I also have a Mac Mini with a GeForce 320M with 252 MB of RAM.
Both machines report the following error:
Wed Apr 13 13:51:05 2011 Einstein@Home Message from server: Your NVIDIA GPU has insufficient memory (need 300MB)
Anyone else receiving this message?
RE: I have a MacBook Pro
The BRP3 GPU application requires at least 300 MB of GPU memory, and sometimes more. Unfortunately your cards, with 252-253 MB, do not have enough memory to run the BRP3 GPU tasks.
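For illustration, the check behind that server message amounts to a simple comparison. This is just a sketch: the 300 MB figure comes from the post above, and the function name is hypothetical, not the actual scheduler code.

```python
MIN_GPU_MEM_MB = 300  # minimum GPU memory BRP3 requires, per the post above

def can_run_brp3(gpu_mem_mb):
    """Return True if a card with gpu_mem_mb of memory meets the BRP3 minimum."""
    return gpu_mem_mb >= MIN_GPU_MEM_MB

# The GeForce 9400M / 320M cards mentioned above report 252-253 MB:
print(can_run_brp3(253))   # False: triggers the "insufficient memory" message
print(can_run_brp3(1024))  # True: a 1 GB card qualifies
```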
RE: We just installed a
We don't support CUDA on 10.6.x until NVidia fixes the bug in the CUDA driver for that OS version (see here).
BM
RE: Another difference of
I have to revisit this thread because I'm considering buying a GPU for this project. It won't be a Tesla card, because of the price tag. :P It's just very hard to compare the options because of the differences in architecture etc.
It seems both the Tesla and Quadro series cards have full double-precision performance, while the GTX (consumer) cards are capped to 1/4 of the full speed. What about single precision, then: is it uncapped on all of these cards, and in some way comparable? And does Einstein@Home perform only single-precision calculations on GPUs? If so, the Quadro series cards would be much less useful.
Earlier it was said that the "1.5GB version of the 580 can run four tasks at once", so one task would use at most 375 MB of memory. For example, compare the Quadro 2000 (1 GB, 400 euros) and the Quadro 4000 (2 GB, 700 euros): the Quadro 2000 might be able to run only 2 tasks at once, while the 4000 would be guaranteed to run at least 5. In that sense the Quadro 2000 would be very poor in cost efficiency; am I right here?
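The tasks-per-card arithmetic in the post above can be worked through directly. This is a sketch under the post's own assumption (at most 375 MB per task, inferred from 1.5 GB / 4 tasks on a GTX 580); actual per-task usage varies.

```python
MAX_TASK_MB = 375  # upper bound per task inferred in the post: 1.5 GB / 4 tasks

def max_concurrent_tasks(card_mem_mb):
    """How many BRP3 tasks fit on a card, assuming 375 MB per task."""
    return card_mem_mb // MAX_TASK_MB

print(max_concurrent_tasks(1024))  # Quadro 2000 (1 GB): 2 tasks
print(max_concurrent_tasks(2048))  # Quadro 4000 (2 GB): 5 tasks
```

At 2 tasks per 400 euros versus 5 tasks per 700 euros, the Quadro 4000 does come out ahead in tasks per euro under this assumption, which supports the poster's cost-efficiency point.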
RE: However, does
We always strive to support all kinds of volunteer hardware and to make use of it as well as we possibly can. Therefore we try hard to use only single precision, because all GPUs support it. Requiring double precision would severely reduce a) the number of usable GPUs as well as b) overall application performance. It might not be possible to use only single precision at some point in the future, but for the time being single precision is sufficient.
Oliver
Einstein@Home Project
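The single- versus double-precision trade-off Oliver describes can be seen with a small stdlib-only sketch: a single-precision float carries about 7 significant decimal digits, so a tiny increment that a double keeps is simply rounded away.

```python
import struct

def to_f32(x):
    """Round a Python float (which is double precision) to single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 1.0 + 1e-8          # double precision keeps this tiny increment
print(x > 1.0)          # True
print(to_f32(x) > 1.0)  # False: the increment is lost in single precision
```

The flip side, as the post notes, is that consumer GPUs execute single-precision arithmetic far faster than double, so staying in single precision buys both broader hardware support and speed.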
RE: We always strive to
Now if we could just get an OpenCL or CAL app for ATI hardware, we would get a big boost in crunching! I'd throw my card at Einstein if I could, but right now it's just doing conjecture crunching.
RE: Now if we could just
Work in progress...
Einstein@Home Project
RE: Therefore we try hard
I very much appreciate being able to use a single-precision card at the moment.
Can I ask what might prompt the move to double-precision at E@H?
Thanks!
RE: Can I ask what might
Future search codes might require double-precision. But so far, single-precision is sufficient and there are no new algorithms/codes on the horizon that would change that.
Oliver
Einstein@Home Project
RE: RE: Now if we could
Hi. When the program will be published?
RE: RE: RE: Now if we
We'll publish an OpenCL app as soon as it's finished and tested to the extent that we can tell it does more good than harm.
Oliver is working full time on the application, and he is making good progress; however, a few problems still need to be solved. We are also actively working with the BOINC developers to get an OpenCL-aware BOINC client out the door, which doesn't exist yet either.
Seems that E@H is again on the bleeding edge of development.
BM
RE: We always strive to
Then why don't you build applications for older machines?
Maybe they don't crunch very fast, but there is an enormous number of them around here and all over the world.
RE: Than why don't you
What machines are you referring to? The GW application supports Linux and Windows machines back to the Pentium(TM) II, and PPC Macs that haven't been sold for, what, six years or so.
The relation between the effort required (maintenance time, electricity) and the benefit compared to modern machines gets worse with every year a computer ages.
The source code is freely available, as is a build script designed to work on most OSes. If you want to support a machine that we don't have a stock app for, you are very welcome to 'roll your own'.
BM
RE: RE: Than why don't
Yes, I'm talking about old P2s, P3s and P4s. Yes, the application is able to start there. But it consumes so much RAM that it becomes impossible to run E@H as a background task on a machine that otherwise still works well. The reasons to use these machines are:
1) They are still doing well and do their job.
2) Any new machine to replace an old one will cost much more than it can save through the difference in electricity bills.
3) If I buy a new machine, I will encourage a manufacturer to consume ever more of the Earth's far-from-endless resources (why aren't new computer cases compatible with older ones? Why can't new power supplies be used with new motherboards with lower power consumption? etc.)
So there's no need to support a different app type, but there is a reason for the app to check how much memory it may use in its current run.
RE: So, there's no need to
We're not wasting any memory in the app; in fact we try pretty hard to keep memory requirements as small as possible. From the files that you download (for the GW search) we pick only the few bins that we need. This requires a lot of I/O operations, but keeps memory requirements low.
The S6Bucket tasks should take around 100 MB. If your old machines don't have that much and you don't want to spend swap space on it, then maybe there's some other BOINC project that suits your machines better.
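The bin-picking approach BM describes can be sketched as follows. This is a toy illustration, not the actual E@H code: the file layout, the 8-byte float64 bins, and the function name are all assumptions.

```python
import struct

BIN_BYTES = 8  # hypothetical layout: one little-endian float64 per frequency bin

def read_bins(path, bin_indices):
    """Read only the requested frequency bins from a data file.

    Seeking to each bin costs extra I/O operations, but memory use stays
    proportional to len(bin_indices) rather than to the whole file size."""
    values = []
    with open(path, "rb") as f:
        for i in sorted(bin_indices):
            f.seek(i * BIN_BYTES)
            values.append(struct.unpack("<d", f.read(BIN_BYTES))[0])
    return values
```

The trade-off is exactly the one stated above: many small seeks and reads instead of one big load, in exchange for a footprint that fits in around 100 MB.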
BM
RE: RE: We always strive
I found I was wingman to this machine recently, which rather amused me.
Oh, I see, that new S6 search
Oh, I see, the new S6 search consumes less memory than the previous one. That again makes it possible to run it on those old machines, at least the ones that have more than 256 MB of RAM. Thank you!!! And please try your best to keep memory consumption as low as possible.
BTW, using a swap file there doesn't help, because of the terrible speed of the old hard drives in those machines.
RE: RE: RE: We always
But your wingman fails all the tasks it downloads, mostly because of a lack of memory.
RE: We're not wasting any
I would actually prefer to trade some RAM for reduced disk I/O (seeing as I have 8 gigs of RAM and I rarely use all of it). Would it be possible to add this as an 'Einstein@Home preference' on the website, or would that require significant refactoring of the application? I should mention that I'm talking about system RAM here, not GPU RAM.
I have 8 GB too on my 32-bit
I have 8 GB too, on my 32-bit Linux PAE system, but most of it is used as a disk cache, which already reduces disk I/O.
Tullio