Bernd, I'm feeling courageous again after my machines settled back down to 1.17s for a couple of days, and I wanted to try these 1.19s again. So I switched the Beta back on in prefs, but I keep getting only 1.17s now. Has 1.19 been pulled again?
Thanks for your support!
Kai.
I, too, have been wondering; my Mac is ONLY getting non-Beta 1.17 units. I never did get the original 1.18 release units on the Mac, and have yet to see a 1.19.
The Win XP Pro x64 system with the GTX 760 card, on the other hand, has been picking up the 1.18 units since the 16th (I believe that's when they were released to us; I could be off by a day).
TimeLord04
I'm glad you found it understandable, and thanks for saying so. I always try to cater for all readers, whatever their level of understanding happens to be, so I do tend to be a bit on the verbose side in order to get the message across.
_AF_EDLS_GuL wrote:
Additionally, limiting the number of CPU cores used will stop VirtualBox multicore tasks from taking all 4 cores even when the GPU needs one of them, which is very limiting for the GPU.
Yes, for sure. I don't run such multicore tasks, so I haven't experienced that particular problem. I imagine it would be quite bad with NVIDIA GPUs in particular, as they really do seem to require a full CPU core per task for support.
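For anyone who wants to pin that core limit in a file rather than through the Manager's GUI, here is a minimal sketch of a global_prefs_override.xml (the file goes in the BOINC data directory; the 50% value is just an example matching the preference discussed in this thread):

<!-- global_prefs_override.xml: cap BOINC at half the CPU cores so
     one or two cores stay free to feed the GPU(s). -->
<global_preferences>
    <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>

The client picks it up after boinccmd --read_global_prefs_override, or via the equivalent read-prefs option in the Manager, without a restart.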
The speed of the CPU and having huge amounts of RAM don't seem to be all that critical. I get pretty much the same GPU task elapsed times for a given GPU even with 2009-vintage Core 2 Quads with only 4GB RAM, and a lot of my more modern hosts only have 8GB. However, make sure you use at least dual channel mode - 2 matched sticks. Maybe you need more RAM with Win10 :-).
Cheers,
Gary.
That's interesting; my GTX 770 is taking ~3x as long to complete GPU tasks after being moved from an i7-4770K to an older i7-930. CPU tasks on that host take about 50% longer than on my newer boxes, in line with the drop in CPU speed; FGRP GPU performance is catastrophically worse though. Running BRP tasks on that host with a GTX 560 saw a performance drop in line with the older GPU's lower performance level, and when the new tasks came in I assumed it was an issue with the app being poorly tuned for the old GPU. Buying a new GPU for my gaming box and passing the others down didn't help any though; the 930 remained glacially slow even after getting a faster card.
Gary Roberts wrote:
I always try to cater for all readers, whatever their level of understanding happens to be, so I do tend to be a bit on the verbose side in order to get the message across.
Thanks a lot, I do appreciate that.
DanNeely wrote:
That's interesting; my GTX 770 is taking ~3x as long to complete GPU tasks after being moved from an i7-4770K to an older i7-930. [...]
That makes me wonder where the bottleneck is for these computing tasks. Up to now my best guess was that it all depends on the card's GDDR5 memory and its clock speed. However, that new card in an old system baffles me, and I wonder what influence the mainboard chipset has when CPU speed has little relevance here. Doesn't data have to be shuffled between card and CPU through the PCIe lanes?
solling2 wrote:
[...] Doesn't data have to be shuffled between card and CPU through the PCIe lanes?
PCIe capacity is possible, I suppose - I know the early BRP tasks could take a 20-30% hit from an earlier PCIe generation and/or being in an 8x electrical slot - but the performance hit here is large enough that I'm somewhat skeptical of it this time (OTOH I haven't tried running the card in my 4770 or 4790 in an 8x slot to see what happens). With these apps needing a lot of CPU time, I'd assumed the issue actually was on the CPU side. If the CPU half of the app heavily used AVX instructions when available and fell back to SSE when not, then 50% slower from lower clock speeds combined with 100% slower from SSE (1.5 x 2 = 3) would be a very close fit to the actual ~3x slowdown I see. Gary's claiming not to see a hit with an even older CPU, though, does have me rather puzzled.
DanNeely wrote:
[...] Gary's claiming not to see a hit with an even older CPU, though, does have me rather puzzled.
There's something else going on here as well. I have two dual-Xeon machines, one with 5630s and one with 5670s, with identical R9 280Xs. Just the difference in CPU gives me a 10% decrease in GPU throughput on E@H. Could be the HT frequency, I don't know, but I was very surprised to see the difference, as I would expect these units to be almost entirely GPU-bound since they're not very memory hungry. Would be nice to find out where the bottleneck is in the end...
K.
DanNeely wrote:
Gary's claiming not to see a hit with an even older CPU, though, does have me rather puzzled.
Quite recently I purchased a number of new 2GB R7 370 cards from two different manufacturers, MSI and Gigabyte. The MSI cards have a slightly higher factory overclock. There are 5 of the Gigabyte cards in 4 separate hosts and 2 of the MSI cards in separate hosts. All of these cards in all hosts run GPU tasks 2x (<gpu_usage> 0.5). Some of them replaced GTX 650s and the rest were added to machines that previously had no discrete GPU. All hosts use app_config.xml to lower <cpu_usage> to 0.3, so that the CPU support comes from having the BOINC preference for # of cores set at 50%.
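For anyone who hasn't set one up, this is a minimal sketch of what that app_config.xml looks like. The <name> must match the app name your client_state.xml shows for the gamma-ray GPU search; hsgamma_FGRPB1G below is an assumption, so check your own file:

<!-- app_config.xml in the einstein.phys.uwm.edu project directory:
     run 2 tasks per GPU and reserve 0.3 of a CPU core for each. -->
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>  <!-- assumed name; verify in client_state.xml -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.3</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

BOINC reads it at client startup or after "Read config files" in the Manager.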
The 4 Gigabyte-endowed hosts have an AMD FX 6300 CPU, an Intel Q8400 quad, a G645 Pentium dual core, and an Intel i5 3570K. The last one is the one with 2 cards.
For each of these hosts respectively, here are links to pages of the most recently validated tasks - FX 6300 tasks - Q8400 tasks - G645 tasks - i5 3570K tasks. If you check all of these links you will see remarkably constant crunch times of about 45 to 46 minutes, irrespective of the amount of RAM (4GB to 16GB), the PCIe version (1.x to 3), the CPU speed (2.66GHz to 4.0GHz) or the PCIe lanes - the last host has both 16x and 4x slots, with very little difference in crunch times between the 2 cards.
The 2 MSI cards are doing remarkably better than the Gigabyte cards, and I don't understand why, since the clock speed limits are pretty close. The MSI cards seem to run right at whatever the limit is set to, but the Gigabyte cards seem to use a core speed somewhat below the limit. It's hard to say, because the tool I use to monitor values produces a single output and quits; there is no continuous display to show fluctuations or changes.
The MSI cards are in hosts with Pentium dual-core CPUs, a Sandy Bridge G640 and a faster Haswell G3258. The crunch times are remarkably close at around 33+ mins, at least 12 mins faster than what the Gigabyte cards are producing. They are also consuming more power. I'm wondering if the Gigabyte cards are running at some lower performance level or power state. I've spent quite a bit of time today looking for ways to tweak this. I've upped the core and memory speed limits to at least equal those of the MSI cards; however, the reported speeds whilst running don't change, nor does the crunch time. I don't normally bother with changing clock speeds, but the time difference between the two brands is so large that I'd like to know why.
Cheers,
Gary.
I've suddenly had a lot of tasks resulting in errors, either right away or at some point during computation. Has anyone else been seeing these, or is it just my machine?
https://einsteinathome.org/host/11669629/tasks/error
Matt_145 wrote:
I've suddenly had a lot of tasks resulting in errors, either right away or at some point during computation. Has anyone else been seeing these [...]
One possibility is that you may need to try turning your clocks down. I believe more than one of us has reported that a particular card's maximum stable clock rates are lower on this application than on other recent ones.
Gary Roberts wrote:
[...] They are also consuming more power. I'm wondering if the Gigabyte cards are running at some lower performance level or power state.
The card core clocks may account for that:
Gigabyte: specified at 950 MHz core, 975 MHz boost (150 W)
MSI: specified at 1030 MHz core, 1120 MHz boost (185 W)
No difference in memory: GDDR5 at 1400 MHz.
Tasks seem to like it fast. ;-)