Now that the new nVidia GPU reviews are out, does anybody know of some good information about their GPGPU capabilities? Currently I have no GPGPU capable of Einstein crunching. It is time to upgrade my GPU, and I decided it would be nice to get one that could be used for einstein too. I had in mind spending somewhere close to $200-$250 (U.S.). I know the new, less expensive nVidia is going to cost more than I would like to spend, but if the GPGPU gains are, by some defensible metric, well worth the extra money, then I may have a difficult decision. There doesn't seem to be any way of knowing when or if ATI cards will become einstein capable, which further adds to the difficulty of the decision, given that the ATI cards seem to be cheaper/faster. Should I:
1) Get new less expensive nVidia card
2) Get some other last gen. nVidia card (any recommendations for $200?)
3) Get a good ATI card and hope for einstein compatibility in near future
Thanks for any help
New nVidia GPGPUs worth it?
Probably a lot of people have been thinking similar thoughts to yours - I certainly have.
So I thought I'd be brave/silly enough to show my ignorance and make some possibly overly opinionated statements and see who shoots me down :). The views expressed are entirely my own.
* Buying a GPU just to crunch E@H is a waste at the moment because you need to tie up a full CPU core with the GPU for a nice but somewhat less than stellar performance gain. Whilst it is likely that further gains will be made in the future, the timeframe is uncertain, and hardware only gets cheaper the longer you delay. It could be many months or more before performance improves sufficiently to justify the purchase.
* Buying a GPU to crunch {other project of choice} has a lot more going for it. E@H is my project of choice so if I build a quad, I want all cores crunching E@H. However, I can easily justify to myself a few extra bucks to add a GPU that can crunch something of interest without tying up a full CPU core at the same time. I am interested in other physics/astronomy disciplines. If LHC ever had work, I might have a real problem in deciding my CPU project of choice.
* There is a project, Milkyway, that can use a GPU without tying up a CPU core, and it does use the GPU extremely efficiently. A task that would take an hour or three on one of the cores of a quad only takes a minute or two on a suitable GPU. The main disadvantage is that a GPU supporting double precision is required. If you were purchasing a GPU just to crunch something like MW, you would be mad to choose any of the nVidia offerings, including the latest. That will probably be true for quite a while to come. Of course there are other projects, outside the ones I've mentioned, where an nVidia GPU may be a good solution. I don't have knowledge of those and am not talking about them.
* It is rather unlikely that E@H will ever be willing or able to officially support ATI GPUs. Bernd has given that opinion a couple of times and listed a number of reasons. So it would be folly to buy the cheaper and superior ATI GPU thinking that you will be able to use it on E@H within its normal lifetime. However, it is possible that someone external to the project who is smart enough and dedicated enough (and some would say stupid enough) could grab the code, weave some magic incantations over it, and come up with a working version for ATI hardware with sufficient performance to justify the effort. One can but live in hope :-).
* If I were in your position, planning to upgrade a GPU, wanting to live within a budget and get the best bang-for-buck performance, I would first choose a project that I would be happy to support and then look at the particular requirements of that project. Being only interested in physics/astronomy limits me, but it may not limit you. Also, I would want to be sure that my investment would last for a while and be productive, so I would try to avoid projects that might have ongoing problems, limited uptime, work shortages, budget constraints, developer oddities or any other characteristics likely to lead to participant frustration. I'm not afraid of the occasional period of chaos but I would try to avoid it on a longer term basis :-).
* Once I had selected the project and had worked out some of the nitty-gritty of how the project actually functions, I would feel a bit more confident in selecting the GPU. In my case I chose to look at MW some time ago and I ran it for a while on some hosts that I had already retired from E@H. Once I had seen the evidence of how various GPUs performed, I decided that my best option was ATI HD4850s, which I could get for a little over US$100 each at the time. If I were doing it again now I'd probably still be trying to get 4850s on eBay or something like that. I've really been very happy with that model GPU. Because of the lack of credible competition, the latest ATI stuff seems more expensive than it needs to be. Hopefully that might change soon as there are some significant performance gains in the 5800 series ATI cards. Let's all pray for some decent nVidia competition.
Hopefully the above points might generate some counter arguments. It will be good to see what other people think.
Cheers,
Gary.
With a half year old GPU on
With a half-year-old GPU on an i7, a job is done only about 20% faster than a similar job using only one CPU core.
With a one-year-old GPU inside a notebook, a job is now done about 20% slower than a similar job on a CPU; this was different a few months ago ;o)
Interestingly, switching off GPU processing on the notebook results in it getting a different fraction of the Pulsar Search.
To get any noticeably useful effect, I think one has to use a new and expensive GPU - but you often get a better result spending the money on a better CPU with more cores for einstein@home.
However, this may change if the GPU processing is further optimised, and that will be soon enough to think about this issue again.
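To put rough numbers on that trade-off, here is a little back-of-the-envelope Python sketch. Only the "about 20% faster" figure comes from my machines; the quad-core count, the 10-hour task time, and the assumption that the GPU task monopolises one full CPU core are illustrative guesses on my part, not measurements of any particular app version.

# Back-of-the-envelope throughput comparison - illustrative numbers only.
# Assumptions (mine): a quad core, a CPU task takes 10 hours, and the GPU
# task needs one full CPU core; the "about 20% faster" figure is from above.
CORES = 4
CPU_TASK_HOURS = 10.0
GPU_SPEEDUP = 1.20

cpu_only = CORES * 24.0 / CPU_TASK_HOURS                # all cores on CPU tasks
gpu_task_hours = CPU_TASK_HOURS / GPU_SPEEDUP
mixed = (CORES - 1) * 24.0 / CPU_TASK_HOURS + 24.0 / gpu_task_hours

print("CPU only :", round(cpu_only, 2), "tasks/day")    # 9.6
print("CPU + GPU:", round(mixed, 2), "tasks/day")       # about 10.1

With these guesses the GPU host finishes only about 5% more work per day overall - the tied-up core eats most of the 20% per-task gain, which is why I would rather put the money into more CPU cores for now.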
RE: * There is a project,
I think that is still up in the air at this point (the latest offerings being the 470 and 480 cards that should appear in shops on the 12th). There are some rumors that double precision performance will be capped on these cards to make nVidia's professional product line more appealing - I hope these rumors are false. Either way, I don't think much is known about their performance for Milkyway at this point.
Thanks for the input guys.
Thanks for the input, guys. It doesn't sound like anybody has much more of an idea of what to do than I do. I noticed nobody actually made a recommendation of 1, 2, or 3. I happen to have an i7 like Olaf, and if a half-year-old nVidia gives a boost, I guess that's my best choice. In addition to my i7 and Q6600, I have an old D805 that I was considering retiring because of the electricity trade-off. Perhaps that overclocked D805 could feed a half-year-old nVidia, but that would defeat my purpose of upgrading the video card in the i7 that I use for day-to-day computing needs. Arrg!
I don't think anybody would argue much with anything you said, Gary. Most certainly, performance gets cheaper the longer you wait. I have waited about three years (HD 2600 XT). From the input so far (thanks to everyone, by the way), I will get an nVidia card that has been out for a while so I avoid the early-adopter tax. It is just so difficult to pass up the bang-for-buck ATI offers (like your HD 4850, Gary) at the moment, but that seems to be the state of affairs, since einstein is the only project I support or really ever plan to support (though I also agree with Gary that LHC is in the same science ballpark as einstein). I will put my faith in the software support for nVidia cards continuing to improve. I just worry that whoever develops the software will decide that only cards from a certain point forward will be fully supported, and mine will fall outside that support soon.
So, can anybody give me a particular nVidia model recommendation? Or did I totally miss everyone's point that, at this juncture, adding a crunching card to an i7 will only hurt performance?
This is a sucky state of affairs. Who can I blame for this situation? As always, it must be somebody else's fault I'm in this position.
With a mature, well written,
With a mature, well-written app (e.g. Milkyway, GPUGrid, or Collatz) the impact on your general performance will be minimal. The current E@H CUDA app is a proof of concept that only implements a single small part of the calculation on the GPU while remaining otherwise CPU-bound. Speaking bluntly, there's no real point spending any money to run Einstein on a GPU at present.
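One way to see why (my framing, not the developers'): it's basically Amdahl's law. If only a small fraction of each task actually runs on the GPU, the overall speedup is capped no matter how fast the card is. A quick Python sketch - the offloaded fractions and the 100x GPU factor below are pure guesses for illustration, not real numbers for the E@H app:

def overall_speedup(offloaded_fraction, gpu_speed_factor):
    # Whole-task speedup when only part of the work is accelerated.
    cpu_part = 1.0 - offloaded_fraction
    return 1.0 / (cpu_part + offloaded_fraction / gpu_speed_factor)

# Even with a GPU that is 100x faster on the offloaded part, a task that is
# 90% CPU-bound can never get more than ~1.1x faster overall.
for frac in (0.1, 0.3, 0.9):
    print("offload", frac, "-> at best", round(overall_speedup(frac, 100.0), 2), "x")

And on top of that, the CPU core stays tied up for the whole run, so the host-wide gain is even smaller.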
There's also no good reason to buy an nVidia GPU these days for either general gaming or general GPU computing.
ATI's 5xxx series spanks nVidia's 2xx series at gaming for a given price point. The gap for GPU apps is even bigger: a GTX 260 gets ~30k credits/day on MW/Collatz, while the equivalent ATI cards score, IIRC, ~250k/day.
The 4xx series benchmarks indicate that, at best, they're only marginally faster than the equivalent-level ATI cards, but they are significantly more expensive, draw significantly more power, and are consequently significantly louder. The only crunching benchmark I've seen was for Folding@home, where they were about 4x as fast as the 2xx series. Assuming the gap is as wide there as on BOINC projects, they're still not going to be good enough out of the box. (It's possible apps targeting their new features might perform significantly better.)
My recommendation would be to buy as much 5xxx series card as you can afford and run one of the three GPU projects I listed above. If you insist on nVidia, either get a 260 or wait a few more days to blow your money on a 470/480, and run one of the projects with a mature GPU app.
RE: There are some rumors
The rumours seem to be pretty strong as evidenced by threads like this on nVidia's own forums.
Cheers,
Gary.
I suppose it 'all depends'.
I suppose it 'all depends'. As usual. :-)
By that I mean it comes back to one's motive for BOINC/E@H involvement. The safe all purpose solution/reply is to have a given hardware setup that flies well in a practical/economic/pleasing sense for some other ( non-BOINC ) reason(s) entirely. Given that's in place to one's advantage, one can then slide in some BOINC work of suitable choice upon that.
However : Myself, probably yourself and no doubt quite a few others, are rather more involved than that. We've expanded from a casual ad-hoc donation of services to a hobby-like level. Thus we 'risk' an extra amount of material worth ( hardware, one's time, neuronal wear & tear, ISP connection/usage costs, electricity fees ..... ) over and above the non-BOINC quota. I think the answer then becomes different as addressed to this 'type' of contributor. After all, we are discussing this in Cruncher's Corner ! :-)
So : as I'm not wise enough to speak more generally, I'll confine myself to E@H alone from here on. Like LHC and other 'one of a kind' projects, it is its own prototype. Experimental from the get-go. Hence there is likely to be a 'happenstance effect' - we use something because it has fortunately been found to work, and not necessarily because a survey of the entire solution space resulted in an optimal choice. For effective searching ( say within our lifetimes as an upper bound! ) approximation is needed, though one hopes as an ongoing refinement yielding progressively better solutions. To a significant extent I've already seen this since my first foray here. Back then a suggestion of video cards solving scientific problems would have been met with some incredulity - 'what the heck?!' :-)
Thus the inclusion of GPU work has arisen to take advantage of matters developing well outside of the project. Expect this sort of thing to happen again.
The question asked, alas, comes down to 'what will the future hold?'. Which is why the replies are havering! Great question by the way. :-)
If asked, I advise people never to go into a casino with any more money than they would accept as a dead loss. Specifically if that's a zero amount, then don't go in. [ Please, this isn't a 'p*** off if you don't like it' comment, but a frank recognition of the uncertain nature of frontier research ]. To me that's part of the attraction of E@H, to others that may well be a downside. It's probably not good to be in the 'hobbyist' category here at E@H if one hasn't realised these aspects.
Also ( and I'm sorry if I seem to be belaboring the point, but like Gary and other mods I'm speaking to the lurker multitude as well here. My signature is partly playful upon this aspect ) one base reason for the LIGO data analysis pipeline ( in the continuous wave area at least ) being addressed via distributed computing is a lack of resources for a full technical solution to the problem ( finding the golden needles of gravitational wave signals in a haystack full of rubbish ). Whew! But that resource dearth applies to development as well. We just don't yet know what we don't yet know, so we have heuristics guiding the programming. :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Actually I think I know where
Actually I think I know where the future will go. ;-)
Fast forward 2015 ------>
"Hey, how's your GSPU going?"
"GSPU?"
"Yeah, your Gravity Signal Processing Unit!"
"Errr ... "
"They used to call them 'G-Force'"
"Ummm. Oh. Mine has got 'NVidia' written on it."
"And?"
"with ..... wait a moment .... 'sesi-hepto-octal quantum interlock' ......"
"Yup, that's the one. The Allen core."
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Wow, well now I get the idea
Wow, well now I get the idea that it is better to bet that einstein will never, within the lifetime of this contemplated video card, get any good out of any GPGPU, and therefore I agree the best way for me to go is to get the best ATI 5xxx card for the money.
Anybody care to change my mind again?
The thing is, though nVidia is all the bad things y'all are saying about them, they seem to be the ones that have a better handle on the whole GPGPU computing concept (that has been my perception from the reading I have done, at least), and that was the train I was hoping to hop onto. nVidia at least seems to be placing more importance on GPGPU computing than ATI, but I'm getting the idea around here that neither is ready for einstein prime time yet. Or is it the other way around, and einstein is not prime-time ready for GPGPU? The answer would seem to be both. I guess there just is not enough incentive yet for the software.
RE: The rumours seem to be
Yeah, I saw that a few hours after I posted. :( I'm very disappointed, and really torn about getting one of these cards now. Even if I admit to myself that I wouldn't be getting it just for its science computing capabilities, this artificial limitation leaves a very bad taste in my mouth. There's simply no way any ordinary consumer is going to buy a Tesla card, so I really think they're shooting themselves in the foot with this. Buying a Tesla should simply imply direct, lifetime support and maybe having all the shader clusters enabled. They may think this is a good business decision, but I've seen a little of just how much money some of the more competitive crunchers have - I think doing this is just going to lose them a percentage of their sales without actually helping Tesla in the slightest.