I've fitted Tom's card (PCI-Express x4, fanning out to 8 USB riser connections). I connected 6 GPUs to it, but the computer objected (Device Manager showed one as "cannot start this device"). I blame this on that particular GPU; I've often had one card refuse to be in the same machine as another.
So I moved that GPU back to the miner machine, and am running 5 on Tom's card.
However, something is weird: MSI Afterburner is not reading the cards correctly. Four of them show no usage, one shows 30% usage, and none show any temperature or clock speeds. GPU-Z shows everything but the clock speed OK, so I'm using that instead. I tried a couple of other programs, but they failed to work too.
However, folding@home is very happy! It sees all 5 cards, and they are all running at very high (97-99%) usage, even when I run 23 of the 24 CPU cores on Rosetta. Clearly it's sharing more than just the one PCI-E v2 lane the normal risers do.
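To put rough numbers on that: the sketch below assumes the splitter card's host link runs at PCIe 3.0 x4 and that an ordinary USB mining riser gives each GPU a dedicated PCIe 2.0 x1 link. Both generations are assumptions (they aren't stated above); the point is just a back-of-envelope comparison of per-GPU bandwidth.

```python
# Back-of-envelope PCIe bandwidth comparison under assumed link generations.
# Approximate usable throughput per lane after encoding overhead:
#   PCIe 2.0: ~500 MB/s per lane (5 GT/s, 8b/10b encoding)
#   PCIe 3.0: ~985 MB/s per lane (8 GT/s, 128b/130b encoding)

PCIE2_PER_LANE_MBS = 500
PCIE3_PER_LANE_MBS = 985

gpus = 5

# Ordinary USB riser: each GPU gets its own PCIe 2.0 x1 link.
dedicated_x1 = PCIE2_PER_LANE_MBS * 1

# Assumed splitter card: all GPUs share one PCIe 3.0 x4 host link.
shared_x4_total = PCIE3_PER_LANE_MBS * 4
shared_per_gpu = shared_x4_total / gpus

print(f"Dedicated PCIe 2.0 x1 per GPU:   {dedicated_x1} MB/s")
print(f"Shared PCIe 3.0 x4, total:       {shared_x4_total} MB/s")
print(f"Shared PCIe 3.0 x4, per GPU avg: {shared_per_gpu:.0f} MB/s")
```

Folding@home's GPU work units are compute-heavy rather than transfer-heavy, so a shared link like this is consistent with the 97-99% usage reported here.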
Tom - thank you very much indeed! I shall attempt more GPUs on it as I get others repaired; I have 5 on a shelf to take apart.
Update - the lack of temperature readings was due to not having the right driver installed. That machine used to have a Baffin and now has Tahitis. They seemed to work, but some things didn't.
Probably the driver problem was limiting it to 5 GPUs on Tom's card as well.
Why can't things just work?
Coupled with that, I'm arguing with Folding@home, which is actually worse than BOINC. It just asked for a 4.2 billion core CPU task on a 4 core machine. The server objected and gave it a 64 core task, then another. Needless to say it got very busy and the GPUs slowed right down. It allocates 1 core to each GPU, so I had 4 - 6 = -2 CPU cores left. But it couldn't understand -2: stored as an unsigned 32-bit number, -2 wraps around to roughly 4.2 billion, hence the absurd request. Programmers! Stop using variables which can't handle negative numbers! Yeah, it might run slightly faster, but when it screws up it really screws up.
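To spell out the wraparound: -2 stored in an unsigned 32-bit integer becomes 2^32 - 2 = 4,294,967,294, which is where the "4.2 billion core" request comes from. A minimal Python illustration with made-up numbers (not Folding@home's actual scheduler code):

```python
# How a negative "cores left" count turns into ~4.2 billion when it is
# forced into an unsigned 32-bit integer. Hypothetical values, not
# Folding@home's real scheduling logic.

cpu_cores = 4
gpus = 6
cores_reserved_per_gpu = 1   # one CPU core feeding each GPU

cores_left = cpu_cores - gpus * cores_reserved_per_gpu
print(cores_left)             # -2

# Reinterpret the signed result as a 32-bit unsigned value (mod 2**32),
# which is what storing it in a uint32 effectively does.
cores_left_as_uint32 = cores_left % 2**32
print(cores_left_as_uint32)   # 4294967294, i.e. ~4.2 billion
```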
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
No idea how this works, but you might try this on your AMD cards:
Hi, the system variable
CUDA_CACHE_MAXSIZE 4294967296
can enhance CUDA crunching times, please test it yourselves.
It came from a thread at MilkyWay
https://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=4982
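For reference: CUDA_CACHE_MAXSIZE sets the maximum size, in bytes, of the CUDA driver's JIT compilation cache, and 4294967296 is 4 GiB. It is read only by NVIDIA's CUDA driver, so it can do nothing on AMD cards, and whether enlarging it actually improves crunching times is unverified. A minimal sketch of setting it when launching a GPU application from Python; the application path is a placeholder:

```python
# Launch a GPU application with an enlarged CUDA JIT compilation cache.
# CUDA_CACHE_MAXSIZE is read by NVIDIA's CUDA driver only; it has no
# effect on AMD/OpenCL cards. The application path below is a placeholder.
import os
import subprocess

env = os.environ.copy()
env["CUDA_CACHE_MAXSIZE"] = str(4 * 1024**3)  # 4294967296 bytes = 4 GiB

subprocess.run(["./my_cuda_app"], env=env)
```

Setting it as a system-wide environment variable, as suggested in the MilkyWay thread, achieves the same thing for every process.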
They don't seem too sure it does anything.
And I can't find the AMD equivalent.
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
It IS only for AMD cards, or did you mean Nvidia equivalent?
It's not for AMD cards at all, where did you get that impression?
It's a CUDA variable. CUDA = Nvidia only.
But as you can read from the OP's reply, he's not sure if it even helps at all, he might have confused the change in performance with something else he did. It didn't do anything for my Titan V system on Linux.
I knew that, but the OP said it was for AMD cards, which is what I took to be correct when obviously it wasn't.
but they didn't say that. anywhere. ever. and if you look at their host, they have an Nvidia RTX 3060.
this is their post in its entirety:
"Hi, the system variable CUDA_CACHE_MAXSIZE 4294967296 can enhance CUDA crunching times, please test it yourselves."
no mention of AMD anywhere, only mention of CUDA, which is only for Nvidia.
the only one who said anything about AMD with this command was you.
OMG... sorry about that, I have no idea what I was thinking. Maybe a moderator can just delete the whole thread and avoid any confusion in the future.
https://www.ebay.com/itm/195567964690?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=euOjwGUQTI2&sssrc=2349624&ssuid=PAS42sWPTnG&var=&widget_ver=artemis&media=COPY
EPYC motherboard with two 7742 CPUs. Under $3000 USD.
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
It looks like, if you want to try for 4 GPUs plugged directly into the motherboard, my choices seem to be an MSI X570 Godlike or an EPYC ASRock single-CPU motherboard?
It looks like a Threadripper MB or an Intel-based MB might also do it. I am trying to re-use available parts (CPU, RAM, CPU cooler), which is why I am not seriously considering the TR and Intel solutions.
For people who don't know, I have sworn off risers and extender cables.
Respectfully,
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!
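A quick way to see why the shortlist above ends up at EPYC or Threadripper is to count PCIe lanes. The platform lane counts below are approximate and the x8-per-GPU target is an assumption; this is only a rough sketch:

```python
# Rough PCIe lane budget for 4 GPUs plugged directly into the motherboard.
# Lane counts are approximate platform-level figures, not exact board specs.

platforms = {
    "AM4 / X570 (mainstream Ryzen)": 20,   # usable CPU lanes for slots + NVMe
    "Threadripper (TRX40)":          64,
    "EPYC (SP3, single socket)":    128,
}

gpus = 4
lanes_per_gpu = 8   # assumed target; compute loads rarely need a full x16

needed = gpus * lanes_per_gpu
for name, lanes in platforms.items():
    verdict = "fits" if lanes >= needed else "needs lane sharing or a PCIe switch"
    print(f"{name}: {lanes} lanes for {needed} wanted -> {verdict}")
```

Boards like the X570 Godlike make four slots work by splitting the CPU's x16 and borrowing chipset lanes, so some cards end up at x4 or slower, whereas a single-socket EPYC board can feed every slot directly from the CPU.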