This might help:
https://einsteinathome.org/content/macpro51-ati-5770#comment-165035
Have a nice Sunday
I'm running some old GPU cards which still work nicely on the Gamma-ray tasks. Granted, they are not as fast as a modern card, but some tasks crunched is better than zero tasks crunched. My NVIDIA card is too old to crunch the newer gravitational wave tasks as it only has 1 GB of memory. If you want to crunch those, you need a card with more memory; I want to say at least 2 GB, but it might be higher. Last time I tried to crunch GW tasks, the NVIDIA card (GT 1030) failed, but the AMD (RX 560) still worked.
The information Gary provided in the previous link is helpful. There are also web sites which list GPU cards by crunching power. Those sites might also provide some insight.
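If you want to check what your own card actually reports before asking for GW work, a quick Python sketch along these lines should do it. It assumes you have the pyopencl package and a working OpenCL driver installed, and the 2 GB cutoff in it is just my guess from the above, not an official figure:

    # Minimal sketch: list OpenCL GPUs and flag ones that look too small for GW tasks.
    # Assumes the pyopencl package and a working OpenCL driver are installed.
    import pyopencl as cl

    GW_MIN_BYTES = 2 * 1024**3  # assumed ~2 GB minimum; the real cutoff may be higher

    for platform in cl.get_platforms():
        for dev in platform.get_devices(device_type=cl.device_type.GPU):
            mem_gb = dev.global_mem_size / 1024**3
            verdict = "probably enough" if dev.global_mem_size >= GW_MIN_BYTES else "likely too small"
            print(f"{dev.name.strip()}: {mem_gb:.1f} GB global memory -> {verdict}")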
In most configurations, electric power consumption is a material cost of contributing work to Einstein. This is true even if the system in question would still be running all the time were Einstein work not being done (as is true for my three), since the GPU, CPU, and memory all draw more power than they would otherwise.
It is yet more true if the system would otherwise be turned off, or not even exist.
It is especially true if the GPU is of a very old generation, as those produce far less useful Einstein output per watt-hour consumed than do the modern ones.
The details vary a lot by location and person. For example, some of us don't see a bill for our power consumption and think of it as free.
Still, I think the main "is it worth it" objection to using very old GPUs for Einstein is severe power inefficiency. Investing the money saved by not running the old configuration for a while in modern hardware (especially non-bleeding-edge models bought used) could often give higher Einstein output for less money and less energy consumption in the long run.
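To put some rough numbers on that, here's a back-of-envelope sketch in Python. Every figure in it (task times, power draws, electricity price) is a made-up placeholder just to illustrate the tasks-per-kWh comparison; plug in your own cards' numbers.

    # Back-of-envelope comparison of tasks per kWh and electricity cost per task for an
    # old versus a newer GPU. All numbers below are illustrative placeholders, not measurements.
    PRICE_PER_KWH = 0.30  # assumed electricity price per kWh in your local currency

    def tasks_per_kwh(task_minutes, watts):
        # Tasks completed per kilowatt-hour at a given per-task run time and power draw.
        kwh_per_task = watts * (task_minutes / 60) / 1000
        return 1 / kwh_per_task

    for label, task_minutes, watts in [
        ("very old GPU (guess)", 120, 60),   # e.g. 2 hours per task at 60 W
        ("newer GPU (guess)",     10, 120),  # e.g. 10 minutes per task at 120 W
    ]:
        tpk = tasks_per_kwh(task_minutes, watts)
        cost = PRICE_PER_KWH / tpk
        print(f"{label}: {tpk:.1f} tasks/kWh, about {cost:.3f} per task in electricity")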
https://www.techpowerup.com/gpu-specs/radeon-hd-4670.c234
That is a really small amount of video RAM.
Maybe running the CPU would be worth it?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor)
I still have an HD 4670, albeit the AGP version (one of the last GPUs released for AGP). That aside, it uses the TeraScale architecture, which predates even GCN. Even assuming you can still obtain drivers with OpenCL support for it (I only recall using it on Windows XP), GCN was where GPGPU on Radeon GPUs really took off. TeraScale was much more focused on graphics than on GPGPU, which I think is part of why those cards didn't consume nearly as much energy as their contemporary counterparts from Nvidia (and also why today's RDNA-based GPUs generally consume less power than their GCN predecessors, with correspondingly less emphasis on GPGPU).
Which is not to say that it isn't possible to use OpenCL applications with TeraScale-based GPUs; I distinctly remember running AMD's Cayman GPUs (HD 6000 series) with BOINC. But I also remember the substantial performance gains when migrating to GCN-based GPUs (Tahiti and later).
As for the question 'is it worth it?', as others have mentioned, there's a power consumption consideration involved. I haven't used my HD 4670 in years, and I suspect it wouldn't be worth the effort to dust it off just for GPGPU.
Soli Deo Gloria
That card has only 512 MB of onboard RAM, and most BOINC projects have set a 1 GB memory 'floor', so it isn't even worth trying at most projects.
For what it's worth, I found that iGPUs with at least 512 MiB of RAM allocated can still receive work, including from E@H.
Soli Deo Gloria