Was probably because "BRP5 work generator (tmp)" was not running.
I'd say not as the work generators here at Einstein do switch on and off when needed and as I posted earlier in this thread there were over 5000 tasks ready to be sent out when we couldn't get any. I think something happened to the scheduler so that it couldn't send BRP5 tasks.
The work generators normally generate work up to a set amount then they turn off until some other lower level is reached at witch time they start up again.
As far as I could track it down the reason was that our feeder crashed at an unfortunate moment, and when it was (automatically) restarted it read a configuration it couldn't recover from by later re-reading the DB. I fixed the problems by restarting it a few times. This piece of (server) software is still a bit of a mystery to me.
I currently have 3 AMD 7970's (another on the way ;-) with 3GB's of vram each).
It should be entirely possible to run 12 tasks at a time with that amount of ram if you wished.
However, you would be wasting your time, even with an ultra high end CPU, motherboard and lots of GB's of system RAM the GPU itself will not be fast enough to process that amount of data in a timely fashion.
GPU load figures are more important here than available RAM/number of tasks.
If it helps, my 3GB cards run 3 tasks concurrently at up to 96% GPU load, 4 tasks or more do not increase GPU load but do significantly increase runtimes. CPU runtimes also increase and overall productivity falls dramatically.
Every system is different and the only real way to answer your question is for you to put the time in and do the testing to find your machines equilibrium.
It may be annoying knowing your card has so much available memory and its not being used, but as often is the case... less is more!
If it helps, my 3GB cards run 3 tasks concurrently at up to 96% GPU load, 4 tasks or more do not increase GPU load but do significantly increase runtimes. CPU runtimes also increase and overall productivity falls dramatically.
i have 7970 too, but my results are different
3 WUs at once: 94% gpu load
4 WUs at once: 97% gpu load and running time increase proportionally
now i'm running 10 WU at once with 99% gpu load and estimated runtime about 31000 secs (3100 secs per WU)
If it helps, my 3GB cards run 3 tasks concurrently at up to 96% GPU load, 4 tasks or more do not increase GPU load but do significantly increase runtimes. CPU runtimes also increase and overall productivity falls dramatically.
i have 7970 too, but my results are different
3 WUs at once: 94% gpu load
4 WUs at once: 97% gpu load and running time increase proportionally
now i'm running 10 WU at once with 99% gpu load and estimated runtime about 31000 secs (3100 secs per WU)
Was probably because "BRP5
)
Was probably because "BRP5 work generator (tmp)" was not running.
-----
I'd say not, as the work generators here at Einstein do switch on and off when needed, and as I posted earlier in this thread there were over 5000 tasks ready to be sent out when we couldn't get any. I think something happened to the scheduler so that it couldn't send BRP5 tasks.
The work generators normally generate work up to a set amount, then they turn off until some lower level is reached, at which time they start up again.
-----
As far as I could track it down, the reason was that our feeder crashed at an unfortunate moment, and when it was (automatically) restarted it read a configuration it couldn't recover from by later re-reading the DB. I fixed the problems by restarting it a few times. This piece of (server) software is still a bit of a mystery to me.
BM
-----
How many megabytes of GPU memory does one WU use? I want to know how many WUs I can run at once.
-----
According to GPU-Z on my AMD, about 192MB.
-----
My AMD uses about 300MB according to GPU-Z.
-----
GPU-Z shows I'm using about 540MB on my lowly GTX 260 (that's running two WUs at once).
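For a rough upper bound on concurrency from memory alone, you can just divide usable VRAM by a per-task footprint. The sketch below is a back-of-the-envelope estimate using the figures quoted above (roughly 190-300 MB per task); the 200 MB reserve and the exact footprints are assumptions, and as the next post points out, GPU load usually becomes the limit long before VRAM does.

```python
# Back-of-the-envelope: how many tasks fit in VRAM alone.
# Per-task footprints are the rough GPU-Z readings quoted in this thread;
# the 200 MB reserve for driver/desktop is an assumption.

def max_tasks_by_vram(total_vram_mb, per_task_mb, reserved_mb=200):
    """Number of tasks that fit after leaving some VRAM for the driver/desktop."""
    usable = total_vram_mb - reserved_mb
    return max(usable // per_task_mb, 0)

if __name__ == "__main__":
    for card, vram_mb, per_task_mb in [
        ("HD 7970, 3 GB", 3072, 300),   # ~300 MB/task reported above
        ("GTX 260, 896 MB", 896, 270),  # ~540 MB for two tasks reported above
    ]:
        n = max_tasks_by_vram(vram_mb, per_task_mb)
        print(f"{card}: about {n} tasks by VRAM alone")
```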
-----
I suppose it depends on your card!
I currently have 3 AMD 7970s (another one on the way ;-), each with 3 GB of VRAM.
It should be entirely possible to run 12 tasks at a time with that amount of RAM if you wished.
However, you would be wasting your time: even with an ultra high-end CPU, motherboard and lots of system RAM, the GPU itself will not be fast enough to process that amount of data in a timely fashion.
GPU load figures are more important here than available RAM or the number of tasks.
If it helps, my 3 GB cards run 3 tasks concurrently at up to 96% GPU load; 4 tasks or more do not increase GPU load but do significantly increase runtimes. CPU runtimes also increase and overall productivity falls dramatically.
Every system is different, and the only real way to answer your question is to put the time in and do the testing to find your machine's equilibrium.
It may be annoying knowing your card has so much available memory that's not being used, but as is often the case... less is more!
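One way to make "find your machine's equilibrium" concrete is to time a batch at each concurrency level and compare tasks per hour: adding tasks only helps while the wall-clock time grows slower than the task count. The runtimes below are hypothetical placeholders, not measurements from this thread; substitute your own timings.

```python
# Throughput vs. concurrency: more tasks only pay off while wall-clock
# batch time grows slower than the number of tasks running.
# The runtimes below are hypothetical placeholders -- use your own timings.

measured = {
    # concurrency: wall-clock seconds for one batch at that concurrency
    1: 4000,
    2: 4300,
    3: 4800,
    4: 6600,
}

for n, wall_secs in sorted(measured.items()):
    per_task = wall_secs / n            # effective seconds per task
    per_hour = 3600 / per_task          # tasks per hour per card
    print(f"{n} at once: {per_task:.0f} s/task, {per_hour:.2f} tasks/hour")
```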
-----
I have a 7970 too, but my results are different:
3 WUs at once: 94% GPU load
4 WUs at once: 97% GPU load, and runtimes increase proportionally
Now I'm running 10 WUs at once with 99% GPU load and an estimated runtime of about 31000 secs (3100 secs per WU).
-----
Linux vs Win7...
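Taking the 10-at-once figures above at face value (about 31000 s wall-clock for 10 WUs, i.e. roughly 3100 s effective per WU), the implied throughput works out as below. Whether that actually beats 3 or 4 at once depends on what the 3- or 4-task wall-clock times are on the same machine, which is where differences like Linux vs Win7 and driver versions come in.

```python
# Implied throughput from the figures quoted above:
# 10 WUs finishing in roughly 31000 s wall-clock.
concurrency = 10
wall_secs = 31000                        # estimate reported in the post above
per_task = wall_secs / concurrency       # ~3100 s effective per WU
tasks_per_hour = 3600 / per_task         # ~1.16 WUs/hour per card
print(f"~{per_task:.0f} s per WU, ~{tasks_per_hour:.2f} WUs/hour")
```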