I have compared the GW app on both an older Core2 Quad and a newer i7-8700, using an NVIDIA GTX 970 in Ubuntu. The app is around 25% faster on the 8700, and it doesn't use a full CPU core on the newer CPUs when driving AMD GPUs. There isn't much difference with the GR app, but there is a large difference with the GW app between older and newer CPUs. A recent GW test on a 'new' AMD RX 570 on an i7-6700 showed CPU usage around 45% and GPU usage around 60-70%, with higher spikes. All this is under Ubuntu.
So still a fair way from 100%. I guess we need very modern CPUs for GW. Mine is 10 years old, yours is 5.
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
TBar wrote: I have compared the GW app on both an older Core2 Quad and a newer i7-8700 [...]
Since you also compared a non-AVX CPU to a newer AVX-capable CPU, that seems to support my hunch that AVX is a significant factor in getting the work done quickly.
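A quick way to check a given box is grep avx /proc/cpuinfo. As a minimal C sketch using GCC's feature-detection builtins (purely illustrative, nothing to do with the Einstein apps themselves):

    /* Prints whether the host CPU supports AVX/AVX2 (GCC/Clang builtins). */
    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();  /* initialize the CPU feature probe */
        printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
        printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
        return 0;
    }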
Two errors on May 14, five on May 16. In the latter, all wingmen errored out.
On May 14 only one out of five wingmen completed the task. Is there something wrong with the tasks?
Tullio
Nothing wrong with the tasks other than that they require a lot of GPU RAM, more than 3 GB of it.
I checked all of your failed tasks (using the link provided by Ian&Steve C. in message 177802) and your wingmen's, and they all fail with the following:
Transferring host memory to GPU failed: CL_MEM_OBJECT_ALLOCATION_FAILURE
That's what you get when running out of GPU RAM.
Some of your wingmen had 4 GB cards, but I suspect they tried to run multiple tasks at the same time and so ran out of GPU RAM.
Edit: A bit late, as Ian&Steve C. already replied in message 177811.
Note to self: Need to learn to look at the page number before posting!
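For anyone curious where that error comes from: CL_MEM_OBJECT_ALLOCATION_FAILURE is a standard OpenCL status code, and because buffer allocation is often deferred, it tends to surface on the first transfer into a buffer rather than when the buffer is created, which matches the "Transferring host memory to GPU failed" wording. A minimal sketch of the pattern (placeholder names, not the app's actual code):

    #include <stdio.h>
    #include <CL/cl.h>

    /* Upload host data to the GPU; returns NULL if the device is out of RAM. */
    cl_mem upload(cl_context ctx, cl_command_queue queue,
                  const void *host_data, size_t bytes)
    {
        cl_int err;
        /* Allocation is often deferred, so this can "succeed" even when
           the device is already short on memory... */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY, bytes, NULL, &err);
        if (err != CL_SUCCESS)
            return NULL;

        /* ...and the real out-of-VRAM error only shows up on first use. */
        err = clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, bytes, host_data,
                                   0, NULL, NULL);
        if (err != CL_SUCCESS) {
            /* err == -4 is CL_MEM_OBJECT_ALLOCATION_FAILURE: out of GPU RAM */
            fprintf(stderr, "Transferring host memory to GPU failed: %d\n", err);
            clReleaseMemObject(buf);
            return NULL;
        }
        return buf;
    }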
I have to wonder if there's something up with the coding for NVIDIA cards, because on my AMD cards, if they run out of VRAM they just use system RAM (admittedly more slowly, but the task completes correctly). I would have thought memory allocation was handled by the card and the driver, but maybe not. I can't believe NVIDIA would make cards and drivers that can't handle running out of RAM.
If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
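For what it's worth, OpenCL does give an application an explicit way to spill into system RAM: retry a failed device allocation as a host-resident (zero-copy) buffer, which the GPU then reads over the PCIe bus. Whether AMD's runtime effectively does something like this on its own while NVIDIA's just returns the allocation failure is a driver detail one can only guess at. A sketch of the explicit fallback (illustrative names, not the Einstein app's code):

    #include <CL/cl.h>

    /* Try normal device memory first; on failure, fall back to pinned
       host memory (much slower access, but the task can still finish). */
    cl_mem create_with_fallback(cl_context ctx, size_t bytes, cl_int *err)
    {
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, err);
        if (*err == CL_MEM_OBJECT_ALLOCATION_FAILURE ||
            *err == CL_OUT_OF_RESOURCES) {
            /* CL_MEM_ALLOC_HOST_PTR asks the runtime for host-accessible
               memory instead of dedicated VRAM. */
            buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                 bytes, NULL, err);
        }
        return buf;
    }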
Actually... NVIDIA has had something called "Unified Memory" for quite some time; that's why NV processes use a very large VM space. My guess is the app hasn't been coded to use it. The Einstein app also uses 100% of a CPU core for an NV non-Windows OpenCL task, which is not necessary. I compiled quite a few non-Windows OpenCL apps for SETI@Home, and none of them used 100% CPU in Linux or on the Mac. The only OpenCL apps at SETI that used 100% CPU were the Windows versions.
In other news, it appears Ubuntu has fixed the 20.04 bug that caused some apps, including BOINC, to suffer a D-Bus timeout when launched as a normal user. The recent 20.04 updates have fixed the problem; you no longer have to jump through hoops to keep BOINC from taking 25 seconds to launch after a clean install. Just install the current updates for 20.04 and the problem is solved.
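On the 100% CPU point: NVIDIA's OpenCL runtime busy-waits inside blocking calls like clFinish(), and that spin is what pins a core. A common workaround is to skip the blocking wait and instead poll the kernel's completion event, sleeping between polls. A rough sketch of the technique (illustrative names; not necessarily how the SETI builds did it):

    #include <time.h>
    #include <CL/cl.h>

    /* Wait for 'ev' to complete without spinning a CPU core. */
    void wait_without_spinning(cl_command_queue queue, cl_event ev)
    {
        cl_int status = CL_QUEUED;
        struct timespec ts = { 0, 1000000 };   /* sleep 1 ms between polls */

        clFlush(queue);                        /* make sure work is submitted */
        do {
            clGetEventInfo(ev, CL_EVENT_COMMAND_EXECUTION_STATUS,
                           sizeof(status), &status, NULL);
            if (status > CL_COMPLETE)          /* still queued or running */
                nanosleep(&ts, NULL);
        } while (status > CL_COMPLETE);        /* CL_COMPLETE is 0; <0 means error */
    }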
Have you offered your skills to the developers here at Einstein? I'm guessing they would love to schedule a conference call at some point, at least to discuss things. Better crunching means getting results quicker, and that means the science is done quicker too.
They need to talk to petri33 (I think he's been in touch with the people who matter); he's the mad scientist coder.
Having read the link you referenced above, I think I need to emphasize that you must reboot after installing the current updates for 20.04.
Three GW tasks done, 10 waiting to run. Still running two Rosetta@home CPU tasks, six Gamma-ray CPU tasks, and two LHC@home VirtualBox tasks; the latter seem frozen after a reboot caused by an overnight Windows 10 cumulative update.
Tullio
It is well documented that the high-frequency tasks fail on 3 GB NVIDIA cards. A question I have: my GTX 1660 Super, which has 6 GB of memory, seems to run 2X without errors. That's only 3 GB per task. I wonder if anyone has a clue why that is?
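One hedged guess: the peak memory demand (a bit over 3 GB, going by the 3 GB failures) is brief, and two staggered tasks rarely hit their peaks at the same instant, so the combined footprint stays inside 6 GB; meanwhile a 3 GB card never has its full 3 GB free, since the driver and display reserve a slice. You can see what OpenCL will actually hand out with a quick query (sketch only, no error checking; build with gcc query.c -lOpenCL):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id plat;
        cl_device_id dev;
        cl_ulong total = 0, max_alloc = 0;

        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        clGetDeviceInfo(dev, CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(total), &total, NULL);
        clGetDeviceInfo(dev, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, NULL);
        printf("global mem: %llu MB, max single alloc: %llu MB\n",
               (unsigned long long)(total >> 20),
               (unsigned long long)(max_alloc >> 20));
        return 0;
    }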