Interpretation of statistics ...
Nice artwork!
Well, if all goes well (and if you happen to have GPUs crunching), that little fella will have to do some more climbing to new peaks soon, because a test of some new optimized BRP6 apps is about to start. If you want to help with the testing, make sure "Run beta/test application versions?" is set to "yes" in the project settings ( http://einstein.phys.uwm.edu/prefs.php?subset=project ). We expect that the speed-up will be quite noticeable for NVIDIA cards and somewhat less, but still significant, for AMD/ATI cards.
Cheers
HBE
Yeah, "invitation" accepted, "Run beta/test application versions?" set to "yes" for one PC :-)
NB: Six NVIDIA GPUs are working in 3 PCs (2 x GTX670/2GB, 2 x GTX560Ti/1GB, 2 x GTX460/1GB), and one GTX460/1GB in the fourth PC. GPU utilization is about 95%, 24/7.
I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me [Danny Hillis, Long Now]
RE: We expect that the
For NVIDIA, is there any change to the version of CUDA? I was wondering if the larger improvement might have had something to do with that :-).
For both types, do you think driver version might have any influence on the degree of improvement seen? For those who have found best performance with a particular driver, is that likely to change with the new app?
Cheers,
Gary.
RE: RE: We expect that
Our plan is like this:
1) Bring out an improved version that still uses CUDA 3.2
2) If nothing is fundamentally broken by the new version, also bring out the same code with CUDA 5.5. The CUDA 3.2 version should then go out only to those CUDA hosts that do not support 5.5 or higher.
3) After some time, evaluate how much we would lose (in hosts and overall computing power) if we dropped support for CUDA 3.2. The hope is that by then, the contribution from CUDA 3.2-only hosts would be very, very small indeed. In that case we would stop supporting CUDA 3.2 for any new development.
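The version gating in step 2 can be sketched as a simple comparison. This is a hypothetical illustration; the function name and build labels are made up, and this is not the actual Einstein@Home scheduler code:

```python
def pick_brp6_build(host_cuda: tuple[int, int]) -> str:
    """Pick which BRP6 build a host should receive.

    host_cuda is the highest CUDA version the host's driver supports,
    given as a (major, minor) tuple. Hosts supporting CUDA 5.5 or
    higher get the new build; everyone else falls back to the
    CUDA 3.2 build. (Hypothetical helper for illustration only.)
    """
    # Tuple comparison orders by major version first, then minor.
    return "BRP6-cuda55" if host_cuda >= (5, 5) else "BRP6-cuda32"
```

So a host reporting CUDA 6.0 would get the new build, while a host stuck at CUDA 3.2 would keep receiving the old one.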
Good question. I would not rule out that several things might indeed change: a) the 'best' driver version to use, and b) the optimal 'GPU utilization factor'. But I expect that any speed-up you get by choosing a different driver or changing the utilization factor will be smaller than the speed-up you get from the optimized app itself.
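For readers unfamiliar with it: the 'GPU utilization factor' project preference tells BOINC what fraction of a GPU a single task is assumed to occupy, so a factor of 0.5 lets two tasks share one card. A rough sketch of that relationship (a hypothetical helper, assuming the usual "tasks = 1/factor, rounded down" behaviour):

```python
def concurrent_tasks_per_gpu(utilization_factor: float) -> int:
    """How many tasks would run side by side on one GPU for a given
    per-task utilization factor (assumed to be 1/factor, rounded
    down). Hypothetical helper for illustration, not BOINC code."""
    if not 0.0 < utilization_factor <= 1.0:
        raise ValueError("utilization factor must be in (0, 1]")
    return int(1.0 / utilization_factor)
```

A factor of 1.0 runs one task per GPU, 0.5 runs two, and 0.33 runs three.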
My feeling is that this optimization will narrow somewhat the gap that has opened at E@H between NVIDIA and AMD cards. If you look at the 'top computers' statistics, there is a clear dominance of AMD-GPU-driven hosts. I don't see this in other comparable BOINC projects.
HB
RE: Yeah, "invitation"
Impressive!
For the Beta-test it will be crucial that we get results from a wide variety of cards.
Thanks
HBE
RE: For the Beta-test it
So I "opened the doors" for all three PCs: GTX 460/560Ti/670
I'll keep a fire blanket ready ...
I know I am a part of a story that starts long before I can remember and continues long beyond when anyone will remember me [Danny Hillis, Long Now]
RE: Our plan is like this:
Seems like a very good plan - incremental steps so that you have a much easier time identifying any breakage :-).
I have only one small problem, and it's not really something to be too bothered about. When the FGRP4 1.05 beta app came out last November, I arranged to free up a 'venue' and put a number of hosts into that venue with beta test apps allowed. There was no issue with the test app, and although I didn't really see a significant performance improvement, I decided quite recently that there was enough reason to run that app on all my hosts. The easiest way for me to do that was to allow beta test apps in all venues.

I use all venues, so I can't put all hosts into a single beta-test-enabled venue just to run FGRP4 beta. I don't want the new BRP6 app to go automatically to every GPU-enabled host without some initial testing, so I'll have to turn off beta test apps for all venues except the one I'll use to test BRP6 beta. The hosts currently running 1.05 beta will simply revert to 1.04, I imagine.
The other alternative would be for 1.05 beta to become the default app. Can you advise if there are any plans to do this before FGRP4 finishes? For people with quite a few disparate hosts, it can become quite difficult juggling the limited number of venues. Having beta tests going simultaneously in different searches makes it even more difficult.
Cheers,
Gary.
RE: it will be crucial that
I thought that a wide variety of cards means something like running a GTX 8800 and a GT750Ti simultaneously in one host under Windows XP.
RE: RE: it will be
WinXP - honestly? This system is hopefully dying out soon. Don't ride a dead horse.
Om mani padme hum.
RE: RE: RE: it will be
With all due respect,
It's real easy to get trapped into thinking you need the latest and greatest gear, but there is a ton of old gear on this project doing some darn good work.
Gary Roberts comes to mind. He is right up there at the top for total credits here at Einstein and he's doing it with gear that in some cases is positively ancient. (Like over 10 years old if I remember correctly.) I can only dream about matching his numbers.
Also, that old nag might be the only thing that person can afford :-)
I would love to see someone with an old boat anchor and a RAC of 63 nail down the first GW find.
Just sayin'
Phil
I thought I was wrong once, but I was mistaken.