GPUGRID sends out WUs using a CUDA version that "matches" the hardware they're sent to.
Yes. This is relatively easy if you have the apps, because BOINC reports the driver version under Windows. It's a bit trickier under Linux, as something will always break under some distro... but for most systems this works.
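To make that matching idea concrete, here is a rough sketch of such a selection rule. This is not the actual BOINC scheduler or GPUGRID plan-class code; the plan-class names and minimum driver numbers are only illustrative guesses:

    // Hypothetical sketch: pick the newest CUDA app build that a host's reported
    // NVIDIA driver version can run. Thresholds are illustrative assumptions.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct AppBuild {
        std::string plan_class;  // e.g. "cuda55" or "cuda32"
        int min_driver;          // minimum driver version (major number) needed
    };

    std::string pick_app(int host_driver) {
        // Newest build first; fall back to the oldest one the driver still supports.
        const std::vector<AppBuild> builds = {
            {"cuda55", 319},     // assumption: CUDA 5.5 builds need roughly a 319.xx driver
            {"cuda32", 260},     // assumption: CUDA 3.2 builds need roughly a 260.xx driver
        };
        for (const auto& b : builds)
            if (host_driver >= b.min_driver) return b.plan_class;
        return "";               // no suitable GPU app for this host
    }

    int main() {
        std::printf("driver 331 -> %s\n", pick_app(331).c_str());  // cuda55
        std::printf("driver 295 -> %s\n", pick_app(295).c_str());  // cuda32
        return 0;
    }

The real matching is of course richer (compute capability, GPU RAM, OS), but the driver version is the piece that's easy to get on Windows and flaky on some Linux distros.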
I don't care if it works for most systems, I want it to work for MY systems. Einstein apps may not be as efficient as they could be, but they DO work. GPUGRID apps, on the other hand, may be highly efficient, but only when they actually work, and that's a big IF. They're always trying to adopt the latest and greatest features over there, but they're sacrificing stability in the process. I'd prefer the Einstein way any time.
Yes, it's good that we have choices, and it's good for the projects to choose an approach suitable for them. At Einstein it's "we need massive throughput and very reliable results". They're searching for needles in haystacks, and they don't particularly care how fast the analysis of individual WUs is done, as long as the entire package is being worked on.
At GPU-Grid, on the other hand, they need results back quickly, because they're iterating over time and need to generate the next WU from a previous result. This means crunchers that are too slow actually hinder project progress: they increase the time the researchers have to wait for the answers to their questions, which in turn increases the time it takes them to formulate new questions. Faster apps allow them to simulate bigger systems within acceptable time. And they're pushing the boundary here, simulating things which have never been done before. That's why they're always at the leading edge, which is sometimes more of a bleeding edge.
The CUDA 5.5 version was announced in Christmas week 2013. Isn't it time to test it, at least at Albert? Stability and reliability tests cannot be done without distributing the app.

Please read my comment if you haven't yet done so.
Oliver
"We do have a CUDA 5.5 version that could be released but up to now the incentive to do so wasn't given (to justify the effort involved)."
This sounds like there wouldn't be much of a performance improvement. I suppose you also tested with Kepler and Maxwell GPUs?
Yes, briefly at some point last year. While there was indeed some improvement on newer GPU generations, it didn't really justify deploying and maintaining a separate set of binaries. However, we discussed the situation again (which we do at least once a year) and more or less arrived at the conclusion that it might be about time to upgrade to CUDA 5.5, dropping CUDA 3.2. The performance benefits for the now much more numerous newer GPUs should outweigh the loss of the older GPUs that 5.5 no longer supports...
We're not yet sure when exactly we'll make the move across the board but it should be rather soonish (this year) I'd say.
HTH,
Oliver
I would like to suggest that you give people at least a partial list of similar (if possible) projects that CAN still use the older 3.2 standard, rather than just cut them loose. Maybe a thread that you can point them to, which could be updated as users find new projects. Obviously not NOW, but as you get closer to cutting them loose it could be opened and a link sent out to folks through the 'Notices' section of the BOINC Manager, for example.
Sorry, I don't think I can follow you: do you suggest that we should propose *other* projects to those volunteers who don't have CUDA 5.5-compatible GPUs? If so, then I can say that we unfortunately don't have the resources for that kind of research. But we will make sure our volunteers are notified about the change so as to avoid any confusion. If someone wants to compile a list of alternative projects, please feel free to do so.
Oliver, thanks for your answer. From my point of view this sounds reasonable. However, if you're going to make the cut, wouldn't it make more sense to go straight to CUDA 6.5? Assuming it works without new bugs, of course. I'm not aware of any chips which are 5.5-capable but not 6.5-capable.
The driver requirements would be more challenging, though. It shouldn't be a problem under Windows, but some Linux folks have real trouble getting recent nVidia drivers for their distro. Let alone Optimus support, which is a nightmare on Linux to begin with.
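For anyone who wants to check what their own box supports before such a cut-off, the small query below prints the highest CUDA version the installed driver can handle and each GPU's compute capability. It's a generic sketch using the standard CUDA runtime API, not part of any project's app:

    // Minimal host-side query of driver/runtime CUDA versions and GPU capability.
    // Build: nvcc check_cuda.cu -o check_cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int driver_ver = 0, runtime_ver = 0, ndev = 0;
        cudaDriverGetVersion(&driver_ver);    // highest CUDA version the installed driver supports
        cudaRuntimeGetVersion(&runtime_ver);  // CUDA version this binary was built against
        std::printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                    driver_ver / 1000, (driver_ver % 100) / 10,
                    runtime_ver / 1000, (runtime_ver % 100) / 10);

        cudaGetDeviceCount(&ndev);
        for (int i = 0; i < ndev; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("GPU %d: %s, compute capability %d.%d\n",
                        i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }

If the first line reports something below 5.5 (or 6.5), the driver rather than the GPU is the limiting factor, which is exactly the Linux situation described above.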
Quote:
do you suggest that we should propose *other* projects to those volunteers who don't have CUDA 5.5-compatible GPUs?
Yes, that's how I understand him. I'm sure the community could work this out... maybe just post a help request here when you're about to announce the change and see if helpful answers come up.
Quote:
However, if you're going to make the cut, wouldn't it make more sense to go straight to CUDA 6.5? Assuming it works without new bugs, of course. I'm not aware of any chips which are 5.5-capable but not 6.5-capable.
We haven't yet assessed those details. But sure, if the GPU compatibility is more or less the same, then, yes, 6.5 could be an option.
Quote:
The driver requirements would be more challenging, though. It shouldn't be a problem under Windows, but some Linux folks have real trouble getting recent nVidia drivers for their distro. Let alone Optimus support, which is a nightmare on Linux to begin with.
That's exactly one of the reasons why we're typically very conservative about raising the minimum requirement. Many of our contributing hosts are in fact institutional clusters running Linux.
Again, going from 3.2 to 5.5 should be feasible, but 6.5 seems too aggressive to me right now. Anyhow, none of this is carved in stone and we'll consider all options before making a move.
"We do have a CUDA 5.5
)
"We do have a CUDA 5.5 version that could be released but up to now the incentive to do so wasn't given (to justify the effort involved)."
This sounds like there wouldn't be much of a performance improvement. I suppose you also tested with Kepler and Maxwell GPUs?
MrS
Scanning for our furry friends since Jan 2002
Yes, briefly at some point
)
Yes, briefly at some point last year. While there was indeed some improvement on newer GPU generations it didn't really justify deploying and maintaining the separate set of binaries. However, we discussed the situation again (which we do at least once a year) and we sort of arrived at the conclusion that it might be about time to upgrade to CUDA 5.5, dropping CUDA 3.2. The performance benefits of the (now more) numerous newer GPUs should outweigh the loss of the older GPUs not supported by 5.5 anymore...
We're not yet sure when exactly we'll make the move across the board but it should be rather soonish (this year) I'd say.
HTH,
Oliver
Einstein@Home Project
RE: Yes, briefly at some
)
I would like to suggest that you give people at least a partial list of similar, if possible, project that CAN still use the older 3.2 standard rather than just cut them loose. Maybe a thread that you can point them too that could be updated as users find new projects. Obviously not NOW, but as you get closer to cutting them loose it could be opened and a link sent out to folks thru the 'Notices' section of the Boinc Manager, for example.
Sorry, I don't think I can
)
Sorry, I don't think I can follow you: do you suggest that we should propose *other* projects to those volunteers who don't have CUDA 5.5 compatible GPUs? If so, then I can say that we don't have the resources for that type of research unfortunately. But we will make sure our volunteers will get notified about that change as to avoid any confusion. If someone wants to compile list of alternative projects, please feel free to do so.
Oliver
Einstein@Home Project
Oliver, thanks for your
)
Oliver, thanks for your answer. From my point of view this sounds reasonable. However, if you're going to make the cut wouldn't it make more sense to go straight to CUDA 6.5? Assuming it works without new bugs, of course. I'm not aware of any chips which are 5.5 capable but not 6.5.
The requirements towards the driver would be more challenging, though. It shouldn't be a problem under Windows, but some Linux folks have real trouble getting recent nVidia drivers for their distro. Let alone Optimus support, which is a nightmare on Linux to being with.
Yes, that's how I understand him. I'm sure the community could work this out.. maybe just post a help request here when you're about to announce the change and see if helpful answers come up.
MrS
Scanning for our furry friends since Jan 2002
RE: However, if you're
)
We haven't yet assessed those details. But sure, if the GPU compatibility is more or less the same, then, yes, 6.5 could be an option.
That's exactly one of the reasons why we're typically very conservative about raising the minimum requirement. Many of our contributing volunteers are in fact institutional clusters that are running linux.
Again, going from 3.2 to 5.5 should be feasible but 6.5 seems too aggressive to me right now. Anyhow, none of this is carved into stone and we'll consider all options before make a move.
Oliver
Einstein@Home Project