Help! Are my GPU cards starting to fail?

Keith Myers
Joined: 11 Feb 11
Posts: 4701
Credit: 17545593058
RAC: 6412212


Yes looks correct assuming there is just the one card in the system.

Mikey's suggestion of assigning the system to a venue other than the default or Generic one is also valid.

But if the system is also running other projects, then that can be more complicated.

 

 

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828126138
RAC: 206032


Keith Myers wrote:

But if the system is also running other projects, then that can be more complicated. 

It's a project-by-project setting, not a global one; i.e., I can put PC 1 in the Generic venue on project A and in the Work venue on project B. I do it all the time as I adjust what I crunch for and how much of it I do.

Most of my projects have the Generic venue set as the default with a zero resource share, but a few that I have been crunching for a LONG time have the Generic venue set with a 25% or higher resource share and Home as the default venue with a zero resource share. I use 100% as the top resource share because it's easy for me; I know others use 1000%, but in the end it's the same, just a percentage of the total. I tend to use higher percentages for projects that don't often have work, so I get some work units when they do have them, and then keep a zero or very low resource share project that normally crunches when there is no other work to be had.
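To illustrate that only the ratios matter, here is a minimal Python sketch; the project names and share values are made-up examples, not anyone's actual settings:

# Rough sketch: BOINC splits processing time in proportion to each
# project's resource share divided by the sum of all shares.
# Project names and share values below are made-up examples.
shares = {"Project A": 100, "Project B": 25, "Project C": 0}

total = sum(shares.values())
for project, share in shares.items():
    fraction = share / total if total else 0.0
    print(f"{project}: {fraction:.0%} of processing time")

# Scaling every share by 10 (1000 / 250 / 0) gives exactly the same split,
# which is why using 100% or 1000% as the top share makes no difference.
# A share of 0 marks a backup project: it only gets work when no other
# attached project has any.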

Since PrimeGrid instituted multi-tasking the way they did, through the website, I use almost all of their venue settings, as I have many different CPU makes and models and am trying to tweak the settings to get the most out of my contribution from each PC.

Keith Myers
Joined: 11 Feb 11
Posts: 4701
Credit: 17545593058
RAC: 6412212


Aye, but there's the rub . . . . .  your cache size is global and applies to all venues.

I never could get Einstein to behave with regard to its cache size when my hosts all run multiple projects concurrently.  Venues never solved the issue for me.
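To see why a single global cache setting causes trouble across projects, here is a simplified back-of-the-envelope sketch in Python; it is not the client's actual work-fetch algorithm, and the project names, shares, and buffer values are made-up examples:

# Simplified estimate of how a global work buffer drives work requests.
# Not the real BOINC scheduler -- just the rough idea that the buffer
# (in days) applies across every attached project, weighted by share.
buffer_days = 0.5 + 0.25           # "store at least" + "additional" days (example values)
wanted_secs = buffer_days * 86400  # seconds of queued work the client wants on hand

shares = {"Einstein@Home": 100, "GPUGrid": 50}
total = sum(shares.values())

for project, share in shares.items():
    request = wanted_secs * share / total
    print(f"{project}: request roughly {request / 3600:.1f} hours of work")

# Because buffer_days is one global value, raising it to keep one
# project's queue topped up inflates the requests sent to every other
# project as well -- which is where the overfetching comes from.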

There never is a one-size-fits-all solution with BOINC.

 

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828126138
RAC: 206032


Keith Myers wrote:

Aye, but there's the rub . . . . .  your cache size is global and applies to all venues.

I never could get Einstein to behave with regard to its cache size when my hosts all run multiple projects concurrently.  Venues never solved the issue for me.

There never is a one-size-fits-all solution with BOINC.

I agree!!

I tend to run one project at a time, one on the CPU and a different one on the GPU, on each PC, so it sorta works for me.

Keith Myers
Joined: 11 Feb 11
Posts: 4701
Credit: 17545593058
RAC: 6412212


I've always run multiple projects on each host.  I came close to running just one about a year ago on a host that I dedicated to Einstein for about 8 months, but then eventually added GPUGrid to it.

Thank goodness for the Pandora Box client the GPUUG members use.  That finally enabled me to run all my projects exactly as I want them with enough controls to cover all permutations.

No more overfetching . . . ever.

 

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1695
Credit: 1042587676
RAC: 1386129


I would just test that card by running Gamma-ray pulsar binary search #1 on GPUs, since it isn't a heat problem at all.

And 2 GB works with those easily.

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828126138
RAC: 206032


Keith Myers wrote:

I've always run multiple projects on each host.  I came close to running just one about a year ago on a host that I dedicated to Einstein for about 8 months, but then eventually added GPUGrid to it.

Thank goodness for the Pandora Box client the GPUUG members use.  That finally enabled me to run all my projects exactly as I want them with enough controls to cover all permutations.

No more overfetching . . . ever.

I think it matters that I physically have more PCs than you do, as it gives me more options to put a GPU, or CPU, anywhere I choose without harming another project's RAC. Believe me, there are problems too, the electric bill for one.

Keith Myers
Joined: 11 Feb 11
Posts: 4701
Credit: 17545593058
RAC: 6412212


Yes, the power bill was a killer this month. Had to come up with $6600 to settle up with PG&E.

Dropped from five PCs down to two to get it under control. It's looking to be much better in the coming months, when the solar generation mostly covers the PCs' usage and only the A/C ends up costing.  But then again, I wouldn't need as much A/C if I didn't run the computers. My power bills should be about $400 less each month from now on.  I can handle that.

 

mikey
Joined: 22 Jan 05
Posts: 11889
Credit: 1828126138
RAC: 206032


Keith Myers wrote:

Yes, the power bill was a killer this month. Had to come up with $6600 to settle up with PG&E.

Dropped from five PCs down to two to get it under control. It's looking to be much better in the coming months, when the solar generation mostly covers the PCs' usage and only the A/C ends up costing.  But then again, I wouldn't need as much A/C if I didn't run the computers. My power bills should be about $400 less each month from now on.  I can handle that.

Fortunately I don't have to pay my bill in lump sums; mine is monthly, based on how much I use. I'm usually in the $600+ range every month, but 3 weeks ago I shut off my GPUs, and my wife says it's running less this month so far. I did run them for 24 hours a couple of days ago to keep the RACs up and will keep doing that as needed, but I'm going to try to keep them off as long as the temps stay above 80F here. That means September or October, but we'll see how much it really helps when the bill comes in near the end of the month. I DID increase the number of PCs though; I added 3 more boxes to the bunch for a total of 38 more CPU cores. I have one more I'm still working on that has a 12-core CPU; it's in a 'mining rig' setup, so it takes up a lot of space.

Do you guys get rolling blackouts? If so, Generac, and others, now sell a 'power wall' that connects to your solar panels and acts as a battery backup for your home if the power goes out. I wish that had been around near me when I bought my generator a few years ago, as I would have done that instead!! It's modular, so you can add battery packs as needed depending on how much power you need when the grid is out. With your panels you can charge them for free.

Keith Myers
Joined: 11 Feb 11
Posts: 4701
Credit: 17545593058
RAC: 6412212


Quote:

Do you guys get rolling blackouts? If so, Generac, and others, now sell a 'power wall' that connects to your solar panels and acts as a battery backup for your home if the power goes out. I wish that had been around near me when I bought my generator a few years ago, as I would have done that instead!! It's modular, so you can add battery packs as needed depending on how much power you need when the grid is out. With your panels you can charge them for free.

I avoided all the rolling blackouts last year.  They were all around me, but my town escaped them. I got two quotes for more solar and Powerwall storage and was going to proceed, until the news of Seti shutting down pretty much shot that idea down.  I would have had to replace both the house and garage roofs first before proceeding.

I still want a Powerwall or two, but it's pointless until I have enough excess solar generation to charge them up during the day. It irks me that the city building code prevents me from adding another ground-mount array; the required setback from the alley behind the property rules it out.

I sized the arrays to cover the initial two BOINC computers when I first got solar.  But the power consumption of the two current computers is about 150% of those initial 2013 machines. So it's not quite even-steven now.

 
