I will probably order the MSI GTX660 TF OC next week :)
I have the EVGA GTX 660Ti SC running CUDA X2 on one of my quads, which is also running T4T X2 and LHC at the same time, and on this sunny day the temp is 68 C, which is the normal temp under load.
I have been following that thread "openCL Benchmarks" but I came away with the feeling that the 670 is good bang for the buck. This thread kind of makes me wonder.
But I've already ordered one (from your favorite supplier, Tiger Direct) and I'll have some hard numbers starting next week.
Joe
Which version did you order, Joe?
The OC 670?
It is over $100 more than I paid for the 660Ti SC, and I still like the 550Ti OC for the price and speed the way I run them. The 660 is about 8 mins faster running X2 for me.
I do have another quad core with an older nVidia card that I will upgrade to run CUDA tasks, so I have to decide which card to get.
If they let me get another 550Ti OC for $100 or less with free shipping I may just go that way, since I would also have to get a new PSU and add some RAM.
My 550 ti is performing very well. So far it may be the best bang for the buck card I have.
I'm in the pleasant situation at work where we might start using these things so I'm trying to get as much experience as I can both in using them and writing code that uses them. So I have yet to buy 2 of the same type.
I'm fearful of overclocking. Perhaps out of ignorance, but I don't feel I have the tools to figure out how much overclocking is too much. I suppose these OC cards get enough burn-in by the manufacturers that we don't have to worry, but so far I've shied away from them.
GPU overclocking does not bring a significant decrease in unit duration.
I overclocked three different GTX460s from (650-675) MHz to (800-820) MHz, and the duration changed from approx. 50 minutes to approx. 45 (it depends on each card and PC configuration) for 2 parallel units.
Do not worry about overclocking as long as you are not in a hurry and you keep the temperature under control. But do not expect a huge performance increase.
I was pretty scared of flashing the GPU BIOS with new parameters (MHz, voltage), but then I did it so many times... it is just a piece of cake now :D
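Shafa's numbers make the point if you do the arithmetic: roughly a 22% core-clock bump buys only about a 10% shorter run time, so the app can't be purely clock-bound. A quick sketch (the midpoint clocks are my own rounding of the quoted ranges):

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100.0

# midpoints of the quoted ranges: (650-675) MHz and (800-820) MHz
clock_gain = pct_change(662.5, 810.0)
# approx. 50 min -> approx. 45 min for 2 parallel units
runtime_gain = pct_change(50.0, 45.0)

print(f"core clock: {clock_gain:+.1f}%")   # about +22%
print(f"run time:   {runtime_gain:+.1f}%")  # -10%
```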
This is the way to test and see how long they will last for sure (I never play video games myself).
I really can't say how long they last compared to a stock video card, but the main thing is watching the temp and trying to keep it as cool as possible. I only do that by having the PSU outside the box and running with the side panel off and a basic room fan blowing on them.
And you probably know about the different stages of OC'ing these cards.
My 550Ti is OC'd, but my 660Ti is superclocked, up from the basic OC.
I have been checking the temp on that one once in a while: during the day it is 68 C, and just now when I checked, the night temp was down to 63 C.
(Of course I register the cards so they will be replaced under warranty if they do die.)
My 550 ti is performing very well. So far it may be the best bang for the buck card I have.
I'm in the pleasant situation at work where we might start using these things so I'm trying to get as much experience as I can both in using them and writing code that uses them. So I have yet to buy 2 of the same type.
I'm fearful of overclocking. Perhaps out of ignorance, but I don't feel I have the tools to figure out how much overclocking is too much. I suppose these OC cards get enough burn-in by the manufacturers that we don't have to worry, but so far I've shied away from them.
Joe
Well, your new GB GTX 670 does cost about $100 more than the EVGA 660 SuperClocked I got, but the triple fans on your 670 will keep it running cooler, Joe.
Like Shafa said, I wouldn't worry about the OC'ing. So far my superclocked card is running fine, and as I mentioned, just register your card after you install it: the registration records that yours is OC'd, and the warranty is 36 months, so you should have no problems.
I agree about the 550Ti, since we can actually buy 3 or 4 of the OC version for the price of the 660 or 670, and my 550 OC only takes about 8 more minutes to complete CUDA X2 compared to my 660 SC.
Now, the GeForce 610M in the laptop I am on does take close to 4 hrs to run CUDA X2, but it also runs T4T X2 and LHC X5 all at the same time, runs cool, and I have run it 24/7 for almost 4 months, so I'm glad I got this one (from TigerD of course).
Let us know how your 670 does and how many tasks you run at the same time.
-Samson-
Hmmm... Can you use the GTX 660 with older drivers, or does it have to be the latest one? There have been reports about performance degradation with the latest drivers.
Cheers
HB
You need 306.23 at a minimum.
I have just installed one and it's running through its first BRP CUDA work units. My average elapsed time on a GTX560Ti is 46 minutes (running 2 at a time); the current estimate looks like it's going to come in around 35 mins for the first pair. No idea if these are "average" WUs; I need many more samples to draw a conclusion, but it's looking good so far.
Mine is a Palit brand factory-OC'ed one. Not as fast as the EVGA, but then I was trying to keep the watts down, not to mention the heat.
Update:
Came in at 34.5 min. Another 2 WUs running now.
BOINC blog
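Since both tasks in a pair finish together when running 2 at a time, the elapsed pair times convert directly to tasks per hour; a quick sketch comparing the 560Ti's 46 min against the new card's 34.5 min:

```python
def tasks_per_hour(concurrent, elapsed_min):
    """Throughput when `concurrent` tasks finish together after `elapsed_min`."""
    return concurrent / elapsed_min * 60.0

gtx560ti = tasks_per_hour(2, 46.0)
gtx660 = tasks_per_hour(2, 34.5)
print(f"GTX560Ti: {gtx560ti:.2f} tasks/h")  # about 2.6
print(f"GTX660:   {gtx660:.2f} tasks/h")    # about 3.5
```

That's roughly a third more throughput, before accounting for power draw.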
Looking at results using Binary Radio Pulsar Search (Arecibo) v1.28 (BRP4cuda32) from this host, I'm worried by the fact that 3 wingmen are needed to get a canonical result. Over 90% of the errors are from the same CUDA app. I'm using a GTX480 doing 2 per GPU.
I wonder if maybe people are running too many WUs at a time on their GPU(s). A SETI enthusiast has come up with a test app which determines how many WUs per GPU is most efficient.
OCing a GPU can quickly increase instability and heat, or even cause a thermal shutdown. For instance, on SETI the GTX560 Ti has become infamous because of too many errors. And some of the 600-series NVidia cards are erroring out: GTX670, 660Ti, 650Ti, 645. Issues with some GPUs using the 3xx.xx driver series and up (CUDA 5.0) are also reported, such as increased runtime and CPU time, and unfortunately errors.
IMHO, running more than 1 BRP task per GPU needs careful monitoring when starting to run >1 per GPU. And check if it's more efficient compared to running 1 per GPU.
Having fewer than 240 CUDA cores is almost a boundary, making 2 per GPU less effective or producing (only) errors!
Better be safe than sorry!
Good point; I'd like to see more discussion on this topic.
I have found 2 tasks per GPU to be a significant improvement in throughput, while maintaining adequate cooling and no change in error rates (which are not zero). I'm now experimenting with 3 and should have decent numbers in a couple of weeks.
My [minimal] understanding of how multiple tasks work on nVidia GPUs is that only one kernel runs at any one time, but memory transfers are overlapped with processing.
I don't know of a way to monitor how busy we're keeping the memory bus or the processing units, at least not on Linux. But I believe it's generally true that feeding data to the GPU is a major bottleneck, so I expect minimal differences between 2 and 3 tasks.
As far as temperatures, I've been using full towers with enough fans to cool a pottery kiln, and my GPU temps are in the 50s when idle and the mid 70s when fully loaded. CPU temps are in the 40s when idle, sometimes lower, and I manage the utilization to keep them under 80.
I don't have any hard numbers on target temps; these are based on reading and guessing. What do others use for their targets?
Joe
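Joe's picture (one kernel at a time, transfers hidden behind other tasks' kernels) can be turned into a back-of-the-envelope model. The `chunk_period` helper and the timings below are purely illustrative, not how the BRP app actually schedules work:

```python
def chunk_period(n_tasks, xfer, kern):
    """Steady-state time between chunk completions.

    Each chunk needs `xfer` time on the bus and `kern` time on the GPU;
    kernels serialize, but a transfer can overlap another task's kernel.
    """
    if n_tasks == 1:
        return xfer + kern  # nothing to hide the transfer behind
    # with 2+ tasks in flight, the busier resource sets the pace
    return max(xfer, kern)

for n in (1, 2, 3):
    print(n, chunk_period(n, xfer=3.0, kern=4.0))
# 1 -> 2 tasks helps a lot (7.0 -> 4.0); 2 -> 3 changes nothing,
# which matches the "minimal difference between 2 and 3" expectation
```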
I totally agree with that; running 2 apps increases output significantly (running a HD5850, a HD6950 and a GTX550). I found that running 3 WUs together sometimes results in a driver reset, not necessarily together with failing WUs. The increase in speed is minimal when running 3 WUs, but the noise level increases.
I also found a significant difference when running the card in an x8 slot compared to the standard x16 slot. Bandwidth is very important here.
I'm not sure about 'only one kernel' at a time; that would lead to very different calculation times, but the two WUs finish within a 2-minute window, even when running a whole day.
So maybe someone with a deeper understanding can bring some light into this grey area?
Regards,
Alexander
@Gavin
You must have a driver problem.
I went for the Gigabyte, not overclocked (http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=3734599&tab=7&SRCCODE=WEBLET03ORDER&cm_mmc=Email-_-WebletMain-_-WEBLET03ORDER-_-Deals)