Maxwell 2

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4869283227
RAC: 175224

RE: I'd appreciate a

Quote:

I'd appreciate a little guidance regarding the two graphics cards in one box matter. I know lots of people do this. Should I just expect that when I power back up after adding the second card that BOINC will see the second card and start using it?

I'm going to avoid the temptation to predict and just tell you what I do whether it seems necessary or not.

When I install a new card, any new card, identical or not, I'll let it boot, then I'll reboot and do a new, clean driver installation.

I've adopted that habit because I noticed the video cards' audio "seems" to want to use resources that might already be in use, or not shared, or something. (This is vague because the installation routines don't tell me why they seem to be installing audio drivers where they already existed, etc.)

On more than one computer I've noticed conflicts between the audio for the video card and USB drivers. *Sometimes* that goes unremarked by the OS, but things stop working or driver installations fail. That might be hardware / driver dependent, so I'm not saying you will see the same behavior.

So, after I install a GPU I boot, wait, reboot, do a *clean* install of the video drivers, and reboot again. It may not be necessary, but I feel better about the odds of everything working.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

I highly recommend a clean

I highly recommend a clean install too, especially when two (or more) cards are involved. If anything can go wrong in that case, it will. By "clean", I mean using a driver cleaner. Originally I used Driver Sweeper, which may still be available, and then Driver Fusion. Most recently I have used Display Driver Uninstaller, which forces you to reboot into Safe Mode and is more complete, though I think any of them will do.

If you have a recent version of BOINC (i.e., version 7), that should be enough. But on some cards you had to use a configuration file, as follows:


<cc_config>
   <options>
      <use_all_gpus>1</use_all_gpus>
   </options>
</cc_config>

You just copy that into a text editor (e.g., Notepad), use "Save As" (not "Save") to save it as "cc_config.xml", and then move it to the BOINC data folder. That, more or less, should do it. It probably helps if the cards are matched, but that is not as critical in BOINC as with some other software clients.
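For reference, the data folder is the one that holds files like client_state.xml, not the program folder; typical defaults are C:\ProgramData\BOINC on Windows and /var/lib/boinc-client on Linux, though your installation may differ. Once the file is in place, either restart BOINC or tell the running client to re-read it:

boinccmd --read_cc_config

(or use the Manager's "read config files" menu item, if your version has one).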

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4869283227
RAC: 175224

RE: Running more than 1

Quote:

Running more than 1 work unit per card doesn't improve any efficiency in those cards.

I would never argue with you about the results you see on your cards. I don't have the same cards as you. I'm not sure we have even one card in common.

I do want to be certain I understand what you are telling me, though.

What "type" of work were you doing and how many at a time where you trying to run?

Since we have motherboards and processors in common I'd also like to know if you were running any CPU work at the same time.

My experience is not the same as yours, so I want to be sure I'm understanding that we are talking about the same thing.

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7282331708
RAC: 2039759

Jim1348 wrote:If you have a

Jim1348 wrote:

If you have a recent version of BOINC (i.e., version 7), that should be enough.

As I was running a recent BOINC, I paid no attention to this advice on the initial installation of my dual-GPU test configuration (one 750 Ti plus one 660, in a box with two PCIe 3.0 x16 slots).

I failed on the first try: the startup messages flagged one of my two GPUs as "ignored by config". The other ran just fine, and the one BOINC chose to run, presumably as the "higher capability" card because it listed compute capability 5.0, was the 750 Ti (the 660 listed compute capability 3.0).

I did have a recent BOINC (7.3.11), but I also did have a cc_config.xml, without the use_all_gpus line.

Simply adding that line to the options section of my existing cc_config.xml and restarting BOINC made all well.

Performance remains to be seen, but after half an hour of run time the machine is still up, which gives me good hope that the power supply in this box will be sufficient for either of my intended final configurations (pulling out the old 660 and putting either a modern 750 or a 970 in its place).

Separately, my conclusion after a couple of days of running and tweaking on a setup with just a superclocked 750 Ti in this box is that I can't get meaningfully more performance out of it than I get out of a plain-Jane 750 in another box. This is running Einstein Perseus Arm Survey. As that is my application of interest, I don't currently expect to procure any more 750 Ti's, but am likely to buy between one and three more 750s. I don't offer this as blanket advice to others, as people with a gaming interest may like the extra performance the Ti offers, and other current or future applications may make more gainful use of the extra Ti resources than Einstein Perseus currently appears to.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2983337081
RAC: 743272

'Recent' version of BOINC is

'Recent' version of BOINC is a relative term. According to the client configuration documentation, <use_all_gpus> has been available since v6.6.25.

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

RE: Simply adding that line

Quote:
Simply adding that line to the options section of my existing cc_config.xml and restarting boinc made all well.


Good. You have discovered the basic rule: If it doesn't work without it, you put it in.

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Hey Tbret, I never (well

Hey Tbret,

I never (well, almost never) run any work on the CPUs. I found out a long time ago that it causes problems with the 750s, so I've just gotten in the habit of not running any. I ran a Binary Pulsar search on all 3 of the cards: 2 were running 1 Arecibo apiece and 1 was running Perseus. I then tried running 2 Arecibo and 2 Perseus on them. The only thing I saw at the end was a doubling of the run times. I asked Juan about it and he told me that it has to do with the PCIe slot more than the crunching ability of the GPU. Those results seem to confirm what he told me. If you want, I can try it again, this time with only Perseus, since I seem to have gotten more than a few. I'll let you know how it goes.
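For anyone who wants to try the two-at-a-time test themselves, the usual mechanism is an app_config.xml placed in the Einstein@Home project folder under the BOINC data directory; something along these lines, where the app name is only illustrative (check the event log or client_state.xml for the exact name your tasks report):

<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>  <!-- illustrative app name; substitute the one your tasks actually use -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>      <!-- each task claims half a GPU, so two run per card -->
      <cpu_usage>0.5</cpu_usage>      <!-- CPU fraction reserved per task to feed the GPU -->
    </gpu_versions>
  </app>
</app_config>

After saving it, restart BOINC or use the read-config-files option so the client picks it up.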

Zalster

Tom*
Joined: 9 Oct 11
Posts: 54
Credit: 366729484
RAC: 0

If anyone would know about

If anyone would know about PCI-E limitations here it would be Juan.

tbret
Joined: 12 Mar 05
Posts: 2115
Credit: 4869283227
RAC: 175224

RE: Hey Tbret, I never

Quote:

Hey Tbret,

I never (well, almost never) run any work on the CPUs. I found out a long time ago that it causes problems with the 750s, so I've just gotten in the habit of not running any. I ran a Binary Pulsar search on all 3 of the cards: 2 were running 1 Arecibo apiece and 1 was running Perseus. I then tried running 2 Arecibo and 2 Perseus on them. The only thing I saw at the end was a doubling of the run times. I asked Juan about it and he told me that it has to do with the PCIe slot more than the crunching ability of the GPU. Those results seem to confirm what he told me. If you want, I can try it again, this time with only Perseus, since I seem to have gotten more than a few. I'll let you know how it goes.

Zalster

I thought we were comparing incomparable things, and we were. I'm having better results running 2 BRP4s at a time, but Perseus I have to run one at a time.

The deal with the PCIe is simply that a 690 wants more traffic than a single PCIe slot on a quad-core i5 system can deliver.

As far as I know Juan hasn't tried running a 690 over here in a long time (not that anything has changed in that regard).

juan BFP
Joined: 18 Nov 11
Posts: 839
Credit: 421443712
RAC: 0

RE: As far as I know Juan

Quote:
As far as I know Juan hasn't tried running a 690 over here in a long time (not that anything has changed in that regard).


Yes, in the past my experience showed the bottleneck is not the GPU itself, it's the PCIe transfer capacity, which is seriously compromised in the case of the 690 (actually a 2x GPU model).

Due to the lack of SETI work, I will power up some 690s here tomorrow, so let's see if something changes (I don't believe it will).

For now: no CPU work and 2 WUs running at a time on each card, with all cores free to feed the GPUs. That's the same configuration I use for AP work. But IIRC, one WU at a time is the limit on the 2x690 hosts when they're powered by the few-core i5s.
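If anyone wants to see what PCIe link each card is actually negotiating while crunching, nvidia-smi can report it (query fields as in reasonably recent drivers; older ones may not support this form):

nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv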
