Tesla K40 & System Upgrade Thoughts?

Tony
Joined: 5 Sep 13
Posts: 7
Credit: 3186733
RAC: 0
Topic 197449

Good morning team.

I am currently running the system configuration listed below. I have just acquired a third GTX 770 OC, but although I have heaps of room in the case, I don't believe I can fit all three cards in SLI on the current board. So for starters I will upgrade to a much larger motherboard and then run all the cards in SLI. I also have a few important questions. Firstly, I'm seriously considering the Nvidia Tesla K40; I don't know how I would bankroll it, but that's another matter. Could I drop this card into the computer outside of SLI, and if so, would it make a serious dent in my BOINC projects? Would the K40 be the last word in BOINC GPU upgrades? Would I need to do any additional programming to get BOINC to work with the K40?

Would I be better off simply putting 5k into another system?

Lastly, I'm open to cost-effective ideas to take this build to another level. As an entry-level BOINC operator, I'm grasping at straws with respect to bumping up this system without building another computer.

I know you all work very hard and I greatly appreciate all your assistance in advance. Best Regards,

Intel Core i7-4770
Corsair Obsidian 900D
Seasonic XP-1000 Platinum 1000W power supply
ASUS Sabertooth Z87 (TUF series, Intel Z87)
G.Skill 16GB RAM kit, DDR3-1866
G.Skill 16GB RAM kit, DDR3-1866
Intel 335 SATA3 SSD, 240GB
WD Black SATA 1TB 7200rpm HDD
WD Black SATA 1TB 7200rpm HDD
Gigabyte GTX 770 OC 4GB
Gigabyte GTX 770 OC 4GB
Samsung 22x DVD-RW
Corsair H60 CPU cooler
Netgear WNDA4100 Wireless N900 dual-band USB adapter
Windows 8 64-bit

Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0


I don't run any multi-GPU boxes, but one thing comes to mind that you might want to consider: PCI Express lanes.

A motherboard (or CPU/chipset) has only a limited number of lanes to assign to PCI Express devices; the more devices connected, the fewer lanes each one gets.
For instance, a single GPU gets 16 lanes, two GPUs get 8 lanes each, and with three or four GPUs each card may get only 4 lanes.
Lanes are important because they determine the maximum transfer speed between the GPU and the CPU/chipset, and E@H is an application that does a lot of transfer between CPU and GPU.
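
To put rough numbers on it, here is a back-of-the-envelope sketch in Python; the per-lane figures are the commonly published PCIe payload rates, so treat the results as approximations rather than measured bandwidth:

# Approximate usable bandwidth per PCIe lane, in GB/s
# (PCIe 2.0 ~0.5 GB/s per lane, PCIe 3.0 ~0.985 GB/s per lane).
PER_LANE_GBS = {"2.0": 0.5, "3.0": 0.985}

def per_gpu_bandwidth(gen, lanes_per_gpu):
    # Rough one-direction bandwidth available to each GPU.
    return PER_LANE_GBS[gen] * lanes_per_gpu

# Typical splits of the 16 CPU lanes on a mainstream (Z87-class) board:
for gpus, lanes in [(1, 16), (2, 8), (3, 4)]:
    print(f"{gpus} GPU(s) at x{lanes}: ~{per_gpu_bandwidth('3.0', lanes):.1f} GB/s each")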

I have no idea if this is a real problem in practice; maybe your fast system can feed a fast GPU with just 4 lanes. But it's something to consider. Shame to starve a shiny new GPU :)

Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 740030628
RAC: 556

Hello,

For Einstein@Home GPU applications, the number of PCI-E lanes available per card is important.

An optimal hardware configuration would be the X79 platform with an Ivy Bridge-E processor such as the Intel i7-4820K. This series of processors has 40 PCI-E 3.0 lanes, and with a supporting motherboard your cards can operate at PCI-E 3.0 x16/x16/x8. I ran this type of triple-card configuration for quite some time with an older 3930K processor before reorganizing my hardware to better support FGRP3.
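
If you want to see what each card is actually negotiating, nvidia-smi can report the current PCI-E generation and link width per GPU. A small Python wrapper, assuming a reasonably recent NVIDIA driver whose nvidia-smi supports these query fields:

import subprocess

# Ask the NVIDIA driver for each GPU's current PCIe generation and lane count.
query = "name,pcie.link.gen.current,pcie.link.width.current"
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=" + query, "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    print(line)  # e.g. "GeForce GTX 770, 3, 16"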

I do not have experience with the Tesla K40. If you are running other applications or working on CUDA development that can take advantage of this particular GPU, specifically in the area of double-precision calculations, then it could be a good choice for you. If you are buying specifically for Einstein@Home, the AMD 7970 or R9 280X is an excellent choice for this project and will cost much less than the K40. Einstein@Home GPU applications do not require double-precision floating-point support.
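
To illustrate why the K40 is overkill here, a rough comparison of vendor-quoted peak throughput; the numbers below are approximate boost-clock figures from public spec sheets, so double-check them before basing a purchase on them. For Einstein@Home, the single-precision column is the one that matters:

# Approximate peak throughput in TFLOPS (single, double), from public spec sheets.
cards = {
    "Tesla K40": (4.29, 1.43),  # GK110, roughly 1/3 double-precision ratio
    "GTX 770":   (3.2,  0.13),  # GK104, roughly 1/24 ratio
    "HD 7970":   (3.79, 0.95),  # Tahiti, roughly 1/4 ratio
    "R9 280X":   (4.1,  1.02),  # Tahiti, roughly 1/4 ratio
}
for name, (sp, dp) in cards.items():
    print(f"{name:<10} single ~{sp:.2f} TFLOPS, double ~{dp:.2f} TFLOPS")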

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 536497666
RAC: 190130

Tony, you posted the same request over at GPU-Grid. That in itself is not bad, but when discussing such a serious configuration it's important to know which project or project-mix to aim for. What do you want your system to run?

(I'm also posting this at GPU-Grid; I'm sure anyone reading there wants to know this too.)

MrS

Scanning for our furry friends since Jan 2002

Tony
Joined: 5 Sep 13
Posts: 7
Credit: 3186733
RAC: 0

Those are some outstandingly useful thoughts. I am currently working on the projects listed at the end of this post, and as you can see I'm a relatively humble player in the online grid-computing game. I greatly appreciate your comments and thoughts, and I appreciate your toleration of my ongoing ignorance.

With that being said, I already have the third GTX 770, so heat management is going to be an issue even though I have one of the largest cases on the market. It was outstanding to get the feedback on the K40; I just assumed it would be the be-all and end-all of BOINC processing, and I was obviously incorrect.

So, given my current system specs and case dimensions, I was thinking of upgrading to the ASUS P9X79-E WS motherboard and then dropping all three cards into the system.

A final thought: is there any BOINC advantage to running in SLI? Can I run two or three cards in SLI and a fourth card out of SLI, or two cards in SLI and a third card out of SLI? Lastly, can I run two or three 770 GPUs in SLI and a 780 out of SLI?

I think once the motherboard is upgraded and the third card is dropped in, that will be about as far as I can take this system without being an expert on the subject.

GPUGRID 1,178,400 10 Mar 2014
MilkyWay@home 4,039,660 4 Mar 2009
Einstein@Home 2,257,269 5 Sep 2013
Asteroids@home 1,798,080 10 Aug 2013

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4812603666
RAC: 78632

There is no advantage to SLI; these projects don't want it.
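
BOINC sees each card as a separate device whether or not SLI is enabled. One related note if you end up mixing models (say, the two 770s plus a 780): by default the client may only use the most capable GPU, and the standard fix is the use_all_gpus option in cc_config.xml. A minimal sketch that writes that option; the path assumes a default Windows BOINC data directory, so adjust it to yours, and merge the option into an existing cc_config.xml rather than overwriting one you already have:

# Minimal cc_config.xml enabling all installed GPUs.
cc_config = """<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
"""

# Default BOINC data directory on Windows; change if yours differs.
with open(r"C:\ProgramData\BOINC\cc_config.xml", "w") as f:
    f.write(cc_config)

Then restart the client, or use "Read config files" from the Advanced menu in BOINC Manager, for it to take effect.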

My suggestion for the kind of money you are considering dropping is to get four 780Ti cards that vent out of the back of the case.

I am running two 470s and two 560Tis like that in a single case, very little room between them, and for Einstein they aren't even beginning to get too hot.

But my case is a Rosewill Blackhawk Ultra and I have two external front fans, two internal fans blowing at the cards, a fan blowing up on the cards, a fan blowing up out of the case, two power supplies sucking air from the case out the back, as well as a 120mm fan blowing out of the case.

In other words, there is no lack of "cool" air.

GPUGRID is another story. That's the only project I've ever run that seems to want to cook whatever cards, in whatever multiple configurations I have them in. I had to quit crunching GPUGRID because I was burning everything up.

The other thing I'm doing, which you may not find necessary, is running GPU tasks only (here and at SETI@Home).

Task Manager does show that I have some room to run a CPU task, but I find any CPU task slows the GPU tasks and the rig is overall less productive. Plus, the case temperatures rise. I'm guessing the slowdown has to do with swapping things from RAM. Your mileage with a good Intel CPU will vary, not might.
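
A middle ground, if you do want some CPU work, is to have BOINC budget a full core for each GPU task so the GPU apps are never starved; that is done per project with an app_config.xml in the project's folder. A sketch, where "example_app" is a placeholder; use the application names shown in your BOINC event log or on the project's applications page:

# app_config.xml budgeting one full CPU core per GPU task.
app_config = """<app_config>
  <app>
    <name>example_app</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
"""

# Save it as app_config.xml inside the project's folder under the BOINC data
# directory (e.g. projects\einstein.phys.uwm.edu), then re-read config files.
print(app_config)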

That computer is currently #7 in the statistics. As far as its RAC is concerned, I'd have to know which thing it was running on which days and I haven't looked that closely.

A computer with four 780Tis would smoke mine.

Now, having said all of that, if you'll look at the list of the top computers, they are almost all running AMD "Tahiti" cards here at Einstein.

If you want to stick with NVIDIA cards and you've got some time to wait, the new Maxwell cards promise to be more productive, more efficient, and therefore cooler when they finally hit the market. I almost bought some AMD cards, but decided to see what the Maxwell NVIDIA cards look like before I spend any more money replacing older GPUs.

I know a person has to eventually "jump on" somewhere, but I'm choosing to wait for a miracle before I start getting a new generation of cards.

I hope all of that is food for thought.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752607592
RAC: 1507896

I refer the honourable gentleman to the answer I've just given at MilkyWay.

http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3504
