Several questions about hardware components

juan BFP
juan BFP
Joined: 18 Nov 11
Posts: 839
Credit: 421443712
RAC: 0

RE: RE: Now i finally

Quote:
Quote:

Now I finally understand, but that is bad. I can't change the MB (the X79 MBs are too expensive here - I hate tax-hungry countries), so my only option is to switch the 590/690 GPUs back to SETI ...

Actually you have more options - for example, just spread your 690/670 cards between relatively slow computers. Leave one video card per motherboard, so you get full PCIe speed. Of course, it will increase the size of your zoo, but you can use cheap old-fashioned MBs instead of expensive modern ones. I'm sure you have some in mothballs.


That's another idea; I was thinking in that direction too. I actually have a lot more hosts that could run BOINC and the 690s, but the problem is internal politics: some users simply don't agree to share their resources with others. (Some medieval-times thinking, of course.)

I could "force" them to do that, but that's not a good political way to do things, you know.

Besides, of course, there is the power bill: two hosts with one GPU each use a lot more power than a single host with 2x 690, and the power bill is already high - you can imagine why.
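To put rough numbers on that point, here is a minimal sketch of why splitting GPUs across more hosts raises the bill: each extra host adds its own motherboard/CPU/PSU overhead on top of the GPU draw. All wattages below are hypothetical round figures for illustration, not measurements of any real 690 system.

```python
# Rough power comparison: consolidating GPUs saves the per-host overhead.
# All wattages are hypothetical placeholders, not measured values.

HOST_OVERHEAD_W = 150   # assumed draw of board + CPU + drives per host
GPU_LOAD_W = 300        # assumed draw of one dual-GPU card under load

def total_power(num_hosts, gpus_per_host):
    """Total wall draw for a farm of identical hosts."""
    return num_hosts * (HOST_OVERHEAD_W + gpus_per_host * GPU_LOAD_W)

one_big_host = total_power(1, 2)     # one host, two cards: 750 W
two_small_hosts = total_power(2, 1)  # two hosts, one card each: 900 W
```

With these placeholder numbers, the split configuration draws 150 W more around the clock - exactly the overhead of the second host.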

But I really believe my best option is to build smaller 2-GPU hosts (690+670 in each one) by just swapping the GPUs around. All the PSUs already have the capacity to easily run that configuration.


Mike Davis
Mike Davis
Joined: 3 Apr 05
Posts: 12
Credit: 3026924
RAC: 0

600 dollars for a MB? That's a

600 dollars for a MB? That's a hell of a lot of tax, and I hate to think what your 690s cost there!... Mine is only a cheap EVGA X79 SLI board... it cost USD 150 including delivery when it was on special offer.

juan BFP
juan BFP
Joined: 18 Nov 11
Posts: 839
Credit: 421443712
RAC: 0

RE: 600 dollars for a MB?

Quote:
600 dollars for a MB? That's a hell of a lot of tax, and I hate to think what your 690s cost there!... Mine is only a cheap EVGA X79 SLI board... it cost USD 150 including delivery when it was on special offer.


A single 690 costs more than US$2000 here... What model of MB do you use?


Neil Newell
Neil Newell
Joined: 20 Nov 12
Posts: 176
Credit: 169699457
RAC: 0

RE: I believe something

Quote:


I believe something else is happening: the Kepler code is very different from the Fermi code. I don't know if you are familiar with the optimized SETI apps built by Jason.

Until the arrival of the Kepler version, the Kepler GPUs were slow compared with the Fermis, but now, after he delivered a Kepler-oriented build, the equation has changed: the Keplers (especially the 690) are faster than the equivalent 580/590, with far lower power draw. An increase in output of more than 20% came just from the first set of "optimizations" to the code.

It must be a difficult problem for the programmers, because some codes are easier to parallelize (i.e. make suitable for a GPU) than others; as I understand it, the E@H GPU code uses a lot of memory bandwidth as well as a lot of GPU cycles. So every program is different, and what works for SETI may not work here (maybe someone with knowledge of both can comment?).

So imagine the programmers working really hard for months and months to make E@H faster on the Kepler GK104 - and then the GK110 is released! (Like next month, maybe.)
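The bandwidth-versus-cycles point can be sketched with a toy "roofline" estimate: a kernel's attainable throughput is capped by whichever is lower, the card's compute peak or its memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs per byte moved). All figures below are made-up placeholders, not the specs of any real card or of the actual E@H kernels.

```python
# Toy roofline model: why a bandwidth-hungry code behaves differently
# from a compute-bound one. All numbers are hypothetical placeholders.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Attainable throughput = min(compute roof, bandwidth * intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

PEAK = 3000.0   # assumed peak single-precision GFLOP/s of the card
BW = 192.0      # assumed memory bandwidth in GB/s

# High arithmetic intensity: hits the compute roof (3000 GFLOP/s).
compute_bound = attainable_gflops(PEAK, BW, 50.0)
# Low arithmetic intensity: stuck at the bandwidth roof (384 GFLOP/s).
memory_bound = attainable_gflops(PEAK, BW, 2.0)
```

Under this sketch, a bandwidth-limited code like the one described barely benefits from extra compute units, which is consistent with tuning effort for one project not carrying over to another.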


juan BFP
juan BFP
Joined: 18 Nov 11
Posts: 839
Credit: 421443712
RAC: 0

RE: RE: I believe

Quote:
Quote:


I believe something else is happening: the Kepler code is very different from the Fermi code. I don't know if you are familiar with the optimized SETI apps built by Jason.

Until the arrival of the Kepler version, the Kepler GPUs were slow compared with the Fermis, but now, after he delivered a Kepler-oriented build, the equation has changed: the Keplers (especially the 690) are faster than the equivalent 580/590, with far lower power draw. An increase in output of more than 20% came just from the first set of "optimizations" to the code.

It must be a difficult problem for the programmers, because some codes are easier to parallelize (i.e. make suitable for a GPU) than others; as I understand it, the E@H GPU code uses a lot of memory bandwidth as well as a lot of GPU cycles. So every program is different, and what works for SETI may not work here (maybe someone with knowledge of both can comment?).

So imagine the programmers working really hard for months and months to make E@H faster on the Kepler GK104 - and then the GK110 is released! (Like next month, maybe.)


I understand, but as far as I know the SETI optimized code has had more than two years of development, starting even before the Fermis launched, and it was only ported to the Keplers a few months ago. So if no one starts the development now, then by the time more and more 690s - or the new generation of 700-series NVIDIAs - are widely available, it could be too late to start thinking about it. As usual, the software is always trying to catch up with the hardware...

But one thing is important: like me (I have more than 10x 690s - not all running E@H, and certainly with that limitation I won't consider using them on E@H), there are quite a few people with several 690s who would like to use them on E@H, and given the power of these boards, that is compute capacity that can't be ignored.

It could be a particular case, but we don't need big CPUs to do our jobs, so it is difficult to justify buying the X79 MB models, while it is easy to order new GPUs to fill the many slots already available on our hosts. Actually, I just ordered a new 690 this week to replace the last 560 still working here; I expect it to arrive in a month due to customs problems. After that I have a few 580/590s to upgrade...

So imagine, in a few months, when we and others start to deploy the 700 models... the problem will grow proportionally.

Of course, I know it is not an easy task...


Mike Davis
Mike Davis
Joined: 3 Apr 05
Posts: 12
Credit: 3026924
RAC: 0

RE: RE: 600 dollars for a

Quote:
Quote:
600 dollars for a MB? That's a hell of a lot of tax, and I hate to think what your 690s cost there!... Mine is only a cheap EVGA X79 SLI board... it cost USD 150 including delivery when it was on special offer.

A single 690 costs more than US$2000 here... What model of MB do you use?

It's an EVGA X79 SLI, so not an expensive one, and it was a super-cheap deal at the time. It has been solid so far, though I have read some bad reviews of them, so who knows! I just needed a new MB at the time, as my old processor/MB combo decided to die on me. It will more than do for me for the next while, but it only has 4 RAM slots and apparently doesn't like overclocking much - though I haven't bothered to try yet, to be honest.

http://www.overclockers.co.uk/showproduct.php?prodid=MB-040-EA <---- It was £99.99 when I bought it from there, and with the CPU I got for a decent price somewhere else, it actually worked out cheaper than a socket 1155 build would have with an MB of a similar spec but fewer PCIe lanes :)

wetnoodle
wetnoodle
Joined: 11 Oct 12
Posts: 9
Credit: 186836
RAC: 0

Hi, folks! This is my

Hi, folks!

This is my first post on these forums ever ... hope I don't make too many mistakes.

I can attest to the importance of dust build-up and CPU cooling.

I am a smoker, and live in a dusty environment, and those are a double whammy for a computer's cooling system. Tobacco smoke (or any other kind of smoke, for that matter) forms a sticky film on solid surfaces that attracts and holds dust ... but dust doesn't need smoke as an excuse to build up on CPU heat sinks. I have a family member who doesn't allow smoking inside his house, but his clothes-dryer vent doesn't work properly, so he has a build-up of lint in the air to befoul his computer.

As a guess, I would say that if your computer is more than a year old, then you need to keep an eye on your CPU's heat sink to ensure it isn't blocked by dust.

The first time I encountered this problem, it drove me nuts because I didn't know what was wrong with my machine. It kept freezing up every few minutes and then running again every few minutes -- no error messages or software failures, it just worked, and then froze, and then worked, and then froze again. In the end it turned out that the CPU was overheating and shutting itself down to prevent damage. I finally figured it out and blew the dust out of the heat sink, and then everything was fine again.
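The protect-and-resume behaviour Rick describes can be sketched as a toy thermal loop: the CPU heats under constant load, trips a protection threshold and "freezes", cools while idle, then resumes. All temperatures and heating/cooling rates below are made-up illustration values, not real chip specifications.

```python
# Toy simulation of the freeze/run cycle: a CPU with a clogged heat sink
# repeatedly trips thermal protection. All constants are illustrative.

TRIP_C = 95          # assumed protection (shutdown/throttle) threshold
RESUME_C = 80        # assumed temperature at which work resumes
AMBIENT_C = 40       # assumed starting temperature
HEAT_PER_TICK = 3    # degrees gained per tick while working
COOL_PER_TICK = 5    # degrees lost per tick while frozen

def simulate(ticks, start_c=AMBIENT_C):
    """Return how many times thermal protection trips in `ticks` steps."""
    temp, working, freezes = start_c, True, 0
    for _ in range(ticks):
        if working:
            temp += HEAT_PER_TICK
            if temp >= TRIP_C:
                working = False   # protection kicks in: machine "freezes"
                freezes += 1
        else:
            temp -= COOL_PER_TICK
            if temp <= RESUME_C:
                working = True    # cooled off: machine runs again
    return freezes
```

With these placeholder rates the machine settles into a regular work/freeze/cool rhythm, which matches the "worked, then froze, then worked" pattern with no software error in sight.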

In like manner, my current machine was getting a build-up of dust that I could see in the heat sink, so before it became a critical problem, I cleaned it out. BOINC keeps my CPU usage at 100% constantly, and the average CPU temp went from around 50 degrees C before, to 45 after the cleaning.

I found a neat little software app for monitoring CPU temperature. It puts a number on the Windows taskbar so you can see the temp at all times. You can find it at http://www.alcpu.com/CoreTemp/, if you're interested. It is free and works really well. It is unobtrusive and uses very little memory.

Well, that's it for my first post. Best wishes to everyone!
Rick

Mike Hewson
Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 323648071
RAC: 183495

Well done Rick! ;-) As you

Well done Rick! ;-)

As you have attested, it is indeed a perennial cause of 'weird errors' here at E@H, and an often underestimated factor. Interestingly, you don't necessarily need visible 'dirt' in the house; it is in the nature of very small particles to be attracted to electrical devices. A charge is typically induced on metal conducting surfaces by interaction with the atmosphere (less so in moist air, which dissipates it), and dust particles can be polarised in the presence of electric fields. Some industrial smoke stacks exploit this deliberately with purpose-built electrostatic dust precipitators. I have found that cases with fine filters in front of the intake fans dramatically reduce build-up on the heat sinks etc. But I blow the lot out around once a month anyway.

We have all, at some time or other, been afflicted by the sneaky Dust Bunnies. :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
