...If you don't win the users, you won't get the project payback. That's the idea behind the suggestion.
You'd still have to get the GPU manufacturers to release updated APIs that cover their latest releases, and to keep those APIs up to date. In the case of Nvidia, they refuse to release a reference file that developers can use to tell how many cores each GPU has. Without that, requests like this go down the drain quickly: there isn't any way to detect the core count on all platforms, and whatever method there may have been isn't uniform enough between GPU generations.
These questions don't just crop up on project forums -- which can't do anything about it, as projects normally don't develop (for) BOINC, and it's BOINC that does the detection of your GPU -- but also on the BOINC forums and the BOINC email lists. The answer thus far is still the same: get the manufacturers to provide the needed information and have them update it every time they release new GPUs, and then maybe, perhaps, possibly, theoretically you've got a deal.
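As a minimal sketch of the gap (assuming CUDA and the standard runtime API; this is only an illustration, not how BOINC actually does its detection): the runtime reports the multiprocessor count and the compute capability, but the number of cores per multiprocessor has to come from a hand-maintained table, which is exactly the reference information that would need refreshing with every new GPU generation.

    /* Sketch: what the CUDA runtime exposes about an installed GPU.
     * The cores-per-multiprocessor table is hand-maintained and incomplete;
     * it is the piece the runtime does not provide, so it goes stale with
     * every new architecture. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Cores per multiprocessor by compute capability (illustrative only). */
    static int cores_per_mp(int major, int minor)
    {
        if (major == 1) return 8;                       /* Tesla  */
        if (major == 2) return (minor == 0) ? 32 : 48;  /* Fermi  */
        if (major == 3) return 192;                     /* Kepler */
        return -1;                                      /* table is stale */
    }

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess) return 1;

        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            int cpm = cores_per_mp(p.major, p.minor);
            printf("GPU %d: %s, %d MPs, compute %d.%d, ",
                   i, p.name, p.multiProcessorCount, p.major, p.minor);
            if (cpm > 0)
                printf("%d cores/MP (total %d)\n", cpm, cpm * p.multiProcessorCount);
            else
                printf("cores/MP unknown\n");
        }
        return 0;
    }

For the Fermi cards mentioned later in this thread the table happens to work (compute 2.0 gives 32 cores per multiprocessor, 2.1 gives 48), but anything newer than the entries listed falls straight into the "unknown" branch until someone updates the table by hand.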
Are there more crunchers noticing the increase in time with multiple WUs?
I read somewhere on the internet that every extra WU means a time increase of 100%, so 2 WUs take 200% of the single-WU time and 3 WUs take 300% of it, give or take a few minutes.
That would mean it's useless to run multiple WUs on a single GPU.
It looks like every WU wants to use the same memory location and is therefore waiting for the other.
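That claim is easy to put into numbers (a minimal sketch; the 3200 s single-WU runtime used here is the figure reported further down the thread): if every extra WU really adds 100% to the runtime, the number of WUs finished per day does not change at all, so any real gain has to come from the scaling being less than linear.

    /* Sketch: with perfectly linear scaling (n concurrent WUs take n times as
     * long each), daily throughput is flat no matter how many run at once. */
    #include <stdio.h>

    int main(void)
    {
        const double seconds_per_day = 24.0 * 60.0 * 60.0;  /* 86400 */
        const double single_wu_sec   = 3200.0;              /* example from this thread */

        for (int n = 1; n <= 3; ++n) {
            double runtime_each = single_wu_sec * n;               /* the claimed +100% per WU */
            double wu_per_day   = seconds_per_day / runtime_each * n;
            printf("%d concurrent: %.0f s each, %.1f WU/day\n",
                   n, runtime_each, wu_per_day);
        }
        return 0;
    }

Under that assumption every line prints 27.0 WU/day, which is exactly the "useless" case described above; the measurements in the replies below show whether the scaling really is that bad.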
Not on my machine. With one CUDA task the run time was 4700 s at a GPU load of 67%. With two CUDA tasks on one GPU the run time increased to 7000 s and the GPU load to 80%.
I think it depends on how much the GPU is already used. Milkyway used over 90% with a single task, so there isn't much left for a second one.
Last night the GPU wasn't used by anything other than E@H, and still the time was doubled.
I use MSI Afterburner to keep the shader clock from dropping below 810 MHz, and PL to assign a CPU core to every WU, to keep the speed up as much as possible.
One WU running (3200 s) means a GPU usage of around 42%, but running two didn't increase that percentage.
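For reference, "assigning a core to every WU" just means setting the CPU affinity of each task's process, which is what an affinity tool such as Process Lasso automates (assuming that is what "PL" refers to here). A minimal Windows sketch, with the PID supplied by hand as a placeholder:

    /* Sketch (Windows): pin a running process to one CPU core by PID.
     * Affinity tools do essentially this; in practice the PID would be
     * the GPU task's process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <windows.h>

    int main(int argc, char *argv[])
    {
        if (argc < 3) {
            printf("usage: setaffinity <pid> <core>\n");
            return 1;
        }
        DWORD pid  = (DWORD)atoi(argv[1]);
        int   core = atoi(argv[2]);

        HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
        if (!h) { printf("OpenProcess failed\n"); return 1; }

        DWORD_PTR mask = (DWORD_PTR)1 << core;   /* one set bit = one allowed core */
        if (SetProcessAffinityMask(h, mask))
            printf("pid %lu pinned to core %d\n", (unsigned long)pid, core);
        else
            printf("SetProcessAffinityMask failed\n");

        CloseHandle(h);
        return 0;
    }

Whether pinning actually helps depends on how much CPU time the task's support thread needs; it mainly keeps the CPU-side work from being bounced between already busy cores.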
Running multiple tasks does increase the GPU load. For example, a single WU running at 0.5 CPU + 1 GPU on a GTX570 finishes in about 3600 sec, but 3 WUs running together at 0.5 CPU + 0.33 GPU finish in about 6200 sec each.
Let's calculate and compare the results: a day is 24 hours * 60 min * 60 sec = 86400 sec.
86400 / 3600 = 24 WU per day * 500 = 12,000 cr per day
OR
86400 / 6200 * 3 = almost 42 WU per day * 500 = 21,000 cr per day
That's quite a difference. :)
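A small self-contained check of those figures (the 500 credits per WU is the value used in the post above):

    /* Sketch: daily throughput and credit from the GTX570 figures above. */
    #include <stdio.h>

    static void report(const char *label, double runtime_sec, int concurrent,
                       double credit_per_wu)
    {
        const double seconds_per_day = 86400.0;
        double wu_per_day = seconds_per_day / runtime_sec * concurrent;
        printf("%s: %.1f WU/day, %.0f credits/day\n",
               label, wu_per_day, wu_per_day * credit_per_wu);
    }

    int main(void)
    {
        report("1 WU at a time, 3600 s",       3600.0, 1, 500.0);  /* 24.0 WU, 12000 cr  */
        report("3 WUs at a time, 6200 s each", 6200.0, 3, 500.0);  /* 41.8 WU, ~20900 cr */
        return 0;
    }

So about 24 versus about 42 WU per day, roughly a 75% increase in output for this card, which is where the 21,000 credits per day comes from.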
Well, as stated before, 1 WU took 53 min/WU.
That's 1440/53 = 27.17 WUs/day = 13,585 credits/day.
Doing two simultaneously took 104 min/WU.
That's 1440/104 = 13.85 WUs/day x 2 = 27.7 WUs/day = 13,850 credits/day.
That's not a lot of difference (3.71 WUs/week = 1,855 credits/week).
I'll give 3 at the same time a try. They all use the same CPU core; the other cores are doing Malaria.
I'm really sorry that I can't help you. My last test showed:
1 task on the GPU = 3,982.58 sec
2 tasks on the GPU = 6,731.75 sec each
So 69% higher run time, but 100% more credits.
Because it has worked here since the first day I tried it with the old ABP CUDA app, I don't know where the problem could be in your setup. Sorry.
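The same arithmetic as before, applied to these two runtimes, reproduces both the 69% figure and the roughly 4 WU/day improvement mentioned in the reply below:

    /* Sketch: per-day gain from the 3,982.58 s / 6,731.75 s measurements. */
    #include <stdio.h>

    int main(void)
    {
        const double day    = 86400.0;
        const double single = 3982.58, dual = 6731.75;

        printf("run time increase: %.0f%%\n", (dual / single - 1.0) * 100.0); /* ~69% */

        double wu_single = day / single;       /* ~21.7 WU/day */
        double wu_dual   = day / dual * 2.0;   /* ~25.7 WU/day */
        printf("%.1f vs %.1f WU/day (+%.1f)\n", wu_single, wu_dual, wu_dual - wu_single);
        return 0;
    }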
No need to be sorry, all info is welcome.
Your figures show an improvement of 4 WUs/day, which is nice.
I had hoped my GTX 460 FTW would shine, but too bad.
I ran three units together (1.05): about 75% GPU load, 900-1000 MB of RAM, and 0.20 CPU per GPU task ...
????
On my GTX 470, by my calculations, 3 units take about 1h30 ... between 4800 and 5440 seconds.