One ought not forget that GPUs are really, really good for ... wait for it ... graphics! :-)
So the speed benefit elsewhere will only be recovered to the extent that the problem can be massively parallelised, pipelined and so on, including the cost of setting up memory and whatnot. If one thread genuinely needs the result from another, then it will block, no bones about it. Can one factorise the problem/code into independent, non-blocking threads?
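As a minimal illustration of that distinction, here is a hypothetical C++ sketch (not from any project's code): a reduction split into independent partial sums that never wait on each other, next to a recurrence where every step must block on the previous result.

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 1.0);

    // Independent decomposition: four partial sums that never wait
    // on each other, so they can run truly in parallel.
    const std::size_t quarter = data.size() / 4;
    std::vector<std::future<double>> parts;
    for (int i = 0; i < 4; ++i) {
        parts.push_back(std::async(std::launch::async, [&, i] {
            auto begin = data.begin() + i * quarter;
            return std::accumulate(begin, begin + quarter, 0.0);
        }));
    }
    double sum = 0.0;
    for (auto& p : parts) sum += p.get();  // single join at the end

    // Dependent recurrence: each step needs the previous result, so
    // any "thread" assigned to step i would simply block on step i-1.
    double x = 0.5;
    for (int i = 0; i < 100; ++i) x = 3.9 * x * (1.0 - x);

    std::cout << sum << ' ' << x << '\n';
}
```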
A top-fuel dragster may be fast, but it steers like a cow. Not your best choice for a trip to the shops [and note the earlier implication about integer speeds for a floating-point problem].
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Quote:
One ought not forget that GPUs are really, really good for ... graphics! .... Can one factorise the problem/code into independent, non-blocking threads?
Eventually, yes. What we need are GPUs like Larrabee: each SP is a P1 core, and while quite pokey compared to an i7, each is at least theoretically capable of running a WU on its own. Once that's the case and the GPU can run hundreds of cut-down WUs at a time, the logistics get somewhat simpler.
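By way of analogy only, a hedged C++ sketch of that coarse-grained model on an ordinary CPU: one complete, self-contained toy WU per hardware thread, with no communication between them (run_workunit is a made-up stand-in, not a real app function).

```cpp
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for a complete, self-contained work unit.
double run_workunit(int seed) {
    double acc = seed;
    for (int i = 0; i < 1'000'000; ++i) acc += 1.0 / (i + 1.0);
    return acc;
}

int main() {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;  // fallback if the count is unknown

    // One full WU per core; the workers never talk to each other.
    std::vector<std::thread> workers;
    std::vector<double> results(cores);
    for (unsigned c = 0; c < cores; ++c)
        workers.emplace_back([&, c] { results[c] = run_workunit(c); });
    for (auto& w : workers) w.join();

    for (unsigned c = 0; c < cores; ++c)
        std::printf("WU %u -> %f\n", c, results[c]);
}
```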
Quote:
Eventually, yes. What we need are GPUs like Larrabee .... Once that's the case and the GPU can run hundreds of cut-down WUs at a time, the logistics get somewhat simpler.
This is reminiscent of the purpose-built digital orreries (for planetary orbital prediction) that I first saw featured in Scientific American some years ago; the idea there was to gauge how the predictions diverge with the level of numerical precision (i.e. chaos).
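A miniature, hypothetical version of that experiment in C++: run the same chaotic map from identical starting conditions in float and double and watch the trajectories diverge (the logistic map stands in for an orbital integrator here).

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float  xf = 0.4f;   // identical starting conditions
    double xd = 0.4;
    for (int step = 1; step <= 60; ++step) {
        xf = 3.9f * xf * (1.0f - xf);   // logistic map, chaotic at r = 3.9
        xd = 3.9  * xd * (1.0  - xd);
        if (step % 10 == 0)
            std::printf("step %2d  divergence %.6f\n",
                        step, std::fabs(double(xf) - xd));
    }
}
```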
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Quote:
A top-fuel dragster may be fast, but it steers like a cow. Not your best choice for a trip to the shops.
... so are you saying I shouldn't be taking my dragster to pick up groceries?!? ;)
Here is another hand in the air for an OpenCL app. I too would much prefer to crunch actual science vs. finding numbers (based on my poor understanding of maths theory, finding more examples doesn't prove anything). My goal is not to get more credits but to do more science; it seems that a cross-GPU-enabled app will get more science done.
My sincere thanks to all the developers out there helping us to maximize our layman contributions to science!
Quote:
... so are you saying I shouldn't be taking my dragster to pick up groceries?!? ;)
:-) I've dreamt for some time about using, say, an Abrams M1A2 for local routes. :-)
[Especially shopping centre carparks. I'd be unlikely to be on the wrong end of road rage, cutting in or inappropriate finger gestures. Never leave home without an HE round up the tube.]
Quote:
Here is another hand in the air for an OpenCL app .... My sincere thanks to all the developers out there helping us to maximize our layman contributions to science!
Hear, hear!
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Quote:
I've seen, this is ...
They plan to do everything they can. But utilizing GPUs effectively isn't easy, so give them some time; things will only get better.
Just curious...
Is it normal for BOINC to report under the Messages tab, every other minute, that CUDA tasks are being restarted?
EDIT: Never mind, I think I've found the culprit: BOINC seems to be finicky about the DCF (duration correction factor) variation as CUDA tasks are downloaded.
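For illustration only, a toy C++ sketch of why a single per-project correction factor can swing when fast CUDA tasks and slower CPU tasks share it; the update rule below is an assumption made up for this sketch, not BOINC's actual code.

```cpp
#include <cstdio>

int main() {
    double dcf = 1.0;
    // Ratios of actual/estimated runtime: CUDA tasks finish far
    // sooner than estimated, CPU tasks roughly on time.
    double ratios[] = {0.1, 0.1, 1.0, 0.1, 1.0};
    for (double r : ratios) {
        if (r > dcf) dcf = r;                 // assumed: raise quickly
        else         dcf += 0.1 * (r - dcf);  // assumed: lower slowly
        std::printf("task ratio %.1f -> dcf %.2f\n", r, dcf);
    }
}
```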
I've got three of these long CUDA workunits that have error messages written all over them:
161525558
161525522
161525518
Any ideas...?
EDIT: The previous short CUDA workunits showed the same error messages, but they still came back valid.
Is the Spy-Hill workaround still needed?
Quote:
Is the Spy-Hill workaround still needed?
AFAIK, no. The Client uses only a single shared-memory segment to communicate with the Apps; the communication between an App and its Graphics should be implemented with memory-mapped files.
BM
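A hedged POSIX/C++ sketch of the memory-mapped-file idea above, not the actual BOINC interface: one program maps a file, writes some state and reads it back; in the real case the app and its graphics process would each map the same path (the path and the "fraction_done" payload here are made up for illustration).

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const char* path = "/tmp/app_graphics_demo";  // hypothetical name
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

    // Map the file; MAP_SHARED makes writes visible to any other
    // process that maps the same file.
    void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    // Writer side: publish some state for the graphics process.
    std::strcpy(static_cast<char*>(mem), "fraction_done=0.42");

    // Reader side (in reality, a second process mapping `path`).
    std::printf("shared: %s\n", static_cast<const char*>(mem));

    munmap(mem, 4096);
    close(fd);
    return 0;
}
```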