It is well documented that the high-frequency tasks fail on 3GB Nvidia cards. My question: my GTX 1660 Super, which has 6GB of memory, seems to run 2X without errors. That's only 3GB per task. Does anyone have a clue why that is?
Luck? The chances of your getting two large tasks at once are low.
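To put a rough number on that: if some fraction of tasks are the oversized (>3GB) kind and the two running tasks are drawn independently, the chance of both being large is roughly that fraction squared. The 10% figure below is an assumption for illustration, not a measured rate:

```python
# Sketch: probability that two independently drawn tasks are both
# the large (>3 GB) variety. The 10% fraction is an assumed value
# for illustration, not a measured Einstein@Home statistic.
p_large = 0.10          # assumed fraction of oversized tasks
p_both = p_large ** 2   # chance both GPU slots hold a large task
print(f"chance both tasks are large: {p_both:.1%}")  # 1.0%
```

Even at 10% oversized tasks, a 2X card would hit the bad combination only about 1% of the time, which matches "luck" holding for long stretches.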
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
Three GW tasks done, 10 waiting to run. Still running 2 Rosetta@home CPU tasks, six Gamma-ray CPU tasks and 2 LHC@home VirtualBox tasks, seemingly frozen after a reboot due to a Windows 10 Cumulative upgrade at night.
Tullio
VirtualBox hates being interrupted. You can't avoid reboots entirely, and although there are ways to stop those automatic Windows updates, Microsoft keeps working around them (whatever happened to asking permission before restarting MY machine?). But you can stop BOINC from interrupting the tasks: tick "leave applications in memory when suspended" and/or set "switch between applications" to an absurdly high number, like 1000000 minutes. Then they won't be interrupted and won't go wrong.
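For anyone who prefers a file over the GUI, the same two settings can be put in a global_prefs_override.xml in the BOINC data directory. This is a sketch; the element names are assumed from BOINC's preference set, so double-check them against your client's documentation:

```xml
<!-- Sketch of global_prefs_override.xml (BOINC data directory).
     Element names assumed; verify against your BOINC version.
     Reread config files or restart the client afterwards. -->
<global_preferences>
   <leave_apps_in_memory>1</leave_apps_in_memory>
   <cpu_scheduling_period_minutes>1000000</cpu_scheduling_period_minutes>
</global_preferences>
```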
You likely are not getting hit with two of the >3GB tasks at the same time. I think it's a little risky to run 2X on a 6GB card: on rare occasions you might get two of the ~3200MB tasks at once, and one of them might fail in that case. Just look at the memory used by each application on the GPU and you'll see.
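On an Nvidia card, one quick way to check per-process GPU memory use is nvidia-smi from a terminal; the query flags below are from recent driver releases, so older drivers may not support them:

```shell
# List each compute process on the GPU with its memory footprint.
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

# Or watch the full summary refresh every 2 seconds while tasks run.
nvidia-smi -l 2
```

If two Einstein tasks each show roughly 3200 MiB used, a 6GB card is right at the edge.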
If it's only occasionally it's no big deal. Especially if they crash right at the start and waste no time.
But hopefully the code will soon be fixed so we aren't given enormous tasks that don't fit!
There are many broken things in BOINC that need manual intervention. For example, I used to use the common method of allocating x CPU cores to each GPU task, so that when a certain number of Einstein tasks were running on the GPU, some CPU cores would be set aside for them. But that fails if you run LHC ATLAS tasks: since this machine has 6 cores, it hands me a 6-core ATLAS task even though 2 CPU cores are busy 24/7 assisting the GPU, and when the ATLAS task runs, the GPU slows to a crawl. I now set all GPU tasks to need zero CPU and just globally reduce the number of usable cores.
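The per-task CPU reservation described above (and the zero-CPU workaround) is usually done with an app_config.xml in the project's directory. This is a sketch; the app name below is an assumption, so use the name your client actually shows for the GW application:

```xml
<!-- Sketch of app_config.xml for the Einstein@Home project directory.
     The <name> value is assumed for illustration; check your client's
     event log or task list for the real application name. -->
<app_config>
   <app>
      <name>einstein_O2MD1</name>
      <gpu_versions>
         <gpu_usage>0.5</gpu_usage>  <!-- 0.5 GPU per task = run 2X -->
         <cpu_usage>0.0</cpu_usage>  <!-- reserve no CPU core per task -->
      </gpu_versions>
   </app>
</app_config>
```

With cpu_usage at 0, the scheduler no longer budgets cores for GPU tasks, so the global "use at most N% of CPUs" preference is the only thing holding cores back.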
Ian, that makes sense, but if true it seems to be so rare that it's a non-event. The computer has only 4 errors, and they were caused by a GTX 660 3GB card while I was troubleshooting a problem with the box.
What I forgot to mention is a Linux virtual machine on the Windows 10 host, running the openSUSE Tumbleweed development version, frequently updated, with the kernel now at 5.6.12. It is the only system I left in Science United, running mostly Milkyway@home and Asteroids@home tasks. Of course, being a virtual machine, it cannot use the GTX 1060 of its host.
Tullio
Just keep an eye on it. My observation so far is that I usually get batches of the same kind of task for a while: the same frequency, counting down the sequence number at the end. So I've seen times where I get a ton of those 3200MB tasks in a row on the same card. But I don't run any 6GB cards, or even 2 WU at a time. On my systems, while running 2X does increase GPU utilization, the tasks run at slower than half speed, so there's no point for me. Running 1X is the fastest configuration on my systems.
Nothing in this post has anything to do with Einstein GW tasks. Please stay on topic.
Sorry. But it explains why I cannot run GW GPU tasks on a Linux virtual machine on a Windows 10 host. The virtual machine does not see the GTX 1060 on the host, which has completed 34 GW tasks despite having only 3GB of video RAM.
Tullio
I think there is an "IOMMU" setting in your BIOS that affects whether your VM can "see" the video card. Maybe that would allow the VM to use it?
Tom M
A Proud member of the O.F.A. (Old Farts Association). Be well, do good work, and keep in touch.® (Garrison Keillor) I want some more patience. RIGHT NOW!