The coming distributed processing explosion

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 769
Credit: 187,482,696
RAC: 175,149

RE: I mean I want to see

Message 99648 in response to message 99647

Quote:
I mean I want to see what I get as an RAC before attempting to increase its performance. That way I hope to get a feel for what the improvement is worth. I also intend to look into cost/performance eventually, as in: does a $100 card do most of what two cards in CrossFire do, or not? In other words, to find the point of diminishing returns.

Keep it simple and forget RAC. Just take a look at how long an average WU takes for a given amount of credits, then divide 24*3600 by this value and multiply the result by the credits. That's the maximum RAC you could get if everything ran 24/7 without problems.

Regarding the value of cards of different speeds: as I said here, it highly depends on the project, and evaluating GPU performance using the current Einstein CUDA V2 app is pointless, as performance doesn't depend much on GPU speed anyway and the app is not going to stay forever. Otherwise, compare the basic features (compute capability) and theoretical FLOPs. Shaving 10€ off a card for half the FLOPs is rubbish, whereas paying 50% more for 30% more performance is borderline worthwhile and depends on your priorities.

Quote:
I meant: do not register. I have nVidia chips on the mobo of one machine, but the test of the computer replies "no GPUs" and thus they are not used.

You need a halfway-modern nVidia driver (180+), at least an 8000-series GPU (the integrated ones use different naming schemes, though) and a modern BOINC client (6.10 preferred). Furthermore, you must not install BOINC in service mode and must not access the machine via remote desktop.

MrS

Scanning for our furry friends since Jan 2002

Bill592
Bill592
Joined: 25 Feb 05
Posts: 786
Credit: 70,825,065
RAC: 0

RE: RE: Not long ago

Message 99649 in response to message 99603

Quote:
Quote:
Not long ago Einstein had a similar problem due to too many new crunchers. They invested in a new server and outsourced some work to another PC.

Err ... do tell?

Cheers, Mike.

I believe Herr Machenschalk is working on this at the Secret
MPG underground base near Peenemünde )

Bill

Matt Giwer
Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6,891,649
RAC: 0

RE: RE: I mean I want to

Message 99650 in response to message 99648

Quote:
Quote:
I mean I want to see what I get as an RAC before attempting to increase its performance. That way I hope to get a feel for what the improvement is worth. I also intend to look into cost/performance eventually, as in: does a $100 card do most of what two cards in CrossFire do, or not? In other words, to find the point of diminishing returns.

Keep it simple and forget RAC. Just take a look at how long an average WU takes for a given amount of credits, then divide 24*3600 by this value and multiply the result by the credits. That's the maximum RAC you could get if everything ran 24/7 without problems.

I do not see how to do this. I started with seti and their servers still show all the flakiness of any other pioneer. After watching their outages on my return some ten months ago I increased my number of days of work from 3 to 5 to 10. So my return times are in days. Is there another measure in the XML files I should be looking at?

Quote:
Regarding the value of cards of different speeds: as I said here, it highly depends on the project, and evaluating GPU performance using the current Einstein CUDA V2 app is pointless, as performance doesn't depend much on GPU speed anyway and the app is not going to stay forever. Otherwise, compare the basic features (compute capability) and theoretical FLOPs. Shaving 10€ off a card for half the FLOPs is rubbish, whereas paying 50% more for 30% more performance is borderline worthwhile and depends on your priorities.

I see that. I just want to try to quantify it. I am not a gamer for good reason: I am an easy addict. I swore off games back on the Atari 800 (except for some I developed for publication in COMPUTE!), mainly after developing calluses on the palm of my left hand from holding the joystick. So a gaming card is purely a matter of the BOINC hobby.

But with my earlier discussion of getting ahead of the curve by buying refurbed computers, I am weighing adding a quad-core AMD, Phenom or Athlon, for a touch under $500, against a video card for the same price. Is a video card faster than a quad for the same price? I have not seen an answer to that.

All I am looking for is if I am going to spend money on this hobby where is the most bang for the buck.

BTW: I have two quads and a three year old single core at the moment. I am debating keeping the single core on line vice replacing it with another quad.

Quote:
Quote:
I meant: do not register. I have nVidia chips on the mobo of one machine, but the test of the computer replies "no GPUs" and thus they are not used.

You need a halfway-modern nVidia driver (180+), at least an 8000-series GPU (the integrated ones use different naming schemes, though) and a modern BOINC client (6.10 preferred). Furthermore, you must not install BOINC in service mode and must not access the machine via remote desktop.

MrS

I know a driver is there in some form, as gnome (a Linux thing) lets me do graphics-intensive effects on the machine with the chips but not on the machines without them. If by service mode you mean started at boot time: I find BOINC for Linux does not do a damned thing with a boot start, as it does not know where my SETI files are.

If nVidia chips on the mobo are something new, then embedded accelerated graphics are worth the programmers looking into, as they are becoming as common on the mobo as ethernet and audio have in the past.

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 769
Credit: 187,482,696
RAC: 175,149

RE: I do not see how to do

Message 99651 in response to message 99650

Quote:
I do not see how to do this. I started with seti and their servers still show all the flakiness of any other pioneer. After watching their outages on my return some ten months ago I increased my number of days of work from 3 to 5 to 10. So my return times are in days. Is there another measure in the XML files I should be looking at?

In the task list of your Phenom over here at Einstein I see that 250.91-credit WUs take from 36 to 39 ks. One could calculate an average over 10 (or better 20) WUs, but here I'll let my gut average it to 37.5 ks. That's 24*3600/37500 = 2.3 WUs per core and day. Overall maximum RAC will be 4 * 2.3 * 251 = 2300, running only Global Correlations S5 search #1 v1.05 tasks. The 80-credit WUs of the ABP search change the credits/time a bit, but the goal is only to get a quick reasonable estimate. For your Athlon II I deduce a value of RAC 3850 and see running times comparable to 2.4 GHz Core 2 CPUs. Is your Phenom crunching in power-saving mode? Typing "cat /proc/cpuinfo" should be able to tell you, if I remember correctly.
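The estimate above can be reproduced in a few lines (a minimal sketch; the 37.5 ks average and 251 credits are the rough figures from this post, not fresh measurements):

```python
def max_rac(seconds_per_wu, credits_per_wu, cores=1):
    """Maximum RAC if every core crunched 24/7 without problems."""
    wus_per_core_day = 24 * 3600 / seconds_per_wu
    return cores * wus_per_core_day * credits_per_wu

# Phenom quad on ~37.5 ks, 251-credit S5 GC WUs
print(round(max_rac(37_500, 251, cores=4)))  # 2313, i.e. the ~2300 above
```

The same function, fed with single-WU times from your own task list, gives a per-host ceiling; real RAC will sit below it because of downtime and mixed WU types.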

Quote:
I see that. I just want to try to quantify it.

That's a good idea. Which project do you want?
For pure "credits for the buck", Milkyway on ATIs is almost impossible to beat. Collatz on ATIs comes close, but their science is not exactly most people's favorite. At Milkyway my ATI HD4870 (bought for 160€ + 25€ cooling at the beginning of 2009) could get an RAC of 133.8k, if I didn't disturb it every now and then and if the server didn't crash every now and then (in which case it switches to Collatz as a backup project). That does answer some of your questions, doesn't it? ;)
It's overclocked to 830 MHz, up from the stock 750 MHz (1:1 scaling), and the memory is underclocked by about 50% to save ~30W. Not all ATIs run Milkyway, though. With nVidia you'll certainly get fewer credits/€ but probably more interesting project choices.

Quote:
BTW: I have two quads and a three year old single core at the moment. I am debating keeping the single core on line vice replacing it with another quad.

So you probably don't have many PCIe slots. That means you'll want to buy fewer, faster GPUs. With GPUs the next generation will always be just around the corner, and you'll be tempted to buy again. But then you'll need a new home for your "old" card (which is still too new to throw away), so you'll be glad if you've got a spare PCIe slot left.

Regarding the nVidia onboard GPUs: sorry, I was assuming you're running Windows. Forget about the service thing under Linux. Here you've got other problems. In fact, I've been trying for half a year to use a low-end GPU in some box, but just couldn't get it to work. But I know BOINC would detect it just fine if I did the right thing.

MrS

Scanning for our furry friends since Jan 2002

Matt Giwer
Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6,891,649
RAC: 0

RE: RE: I do not see how

Message 99652 in response to message 99651

Quote:
Quote:
I do not see how to do this. I started with seti and their servers still show all the flakiness of any other pioneer. After watching their outages on my return some ten months ago I increased my number of days of work from 3 to 5 to 10. So my return times are in days. Is there another measure in the XML files I should be looking at?

In the task list of your Phenom over here at Einstein I see that 250.91-credit WUs take from 36 to 39 ks. One could calculate an average over 10 (or better 20) WUs, but here I'll let my gut average it to 37.5 ks. That's 24*3600/37500 = 2.3 WUs per core and day. Overall maximum RAC will be 4 * 2.3 * 251 = 2300, running only Global Correlations S5 search #1 v1.05 tasks. The 80-credit WUs of the ABP search change the credits/time a bit, but the goal is only to get a quick reasonable estimate. For your Athlon II I deduce a value of RAC 3850 and see running times comparable to 2.4 GHz Core 2 CPUs. Is your Phenom crunching in power-saving mode? Typing "cat /proc/cpuinfo" should be able to tell you, if I remember correctly.

Tried that -- a nice info thing. Thanks. The only difference is the clock, 1.8 vs. 2.6 GHz, which should explain the gap. The L3 cache on the Phenom does not appear to contribute much, as performance figures are proportional to the clock.

Quote:
Quote:
I see that. I just want to try to quantify it.

That's a good idea. Which project do you want?
For pure "credits for the buck", Milkyway on ATIs is almost impossible to beat. Collatz on ATIs comes close, but their science is not exactly most people's favorite. At Milkyway my ATI HD4870 (bought for 160€ + 25€ cooling at the beginning of 2009) could get an RAC of 133.8k, if I didn't disturb it every now and then and if the server didn't crash every now and then (in which case it switches to Collatz as a backup project). That does answer some of your questions, doesn't it? ;)
It's overclocked to 830 MHz, up from the stock 750 MHz (1:1 scaling), and the memory is underclocked by about 50% to save ~30W. Not all ATIs run Milkyway, though. With nVidia you'll certainly get fewer credits/€ but probably more interesting project choices.

Points are nice for bragging rights, but I want something worth doing and to my interests. That means more or less pure physics, admitting that an SF interest got me started. That also means I do not really care about primes or any other pure math(s) issues.

The reason for the slow Phenom is that it is 10 months old and bought as a refurb unit. It was to run my TV. I was originally looking for an off-lease to install the old nVidia card if it were not fast enough to drive the TV. But it has been good enough. Activating boinc was an afterthought.

Which reminds me why I dropped out of boinc for two years -- I became obsessed with it as I do with games. Call me Monk. But now that I am back in

Quote:
Quote:
BTW: I have two quads and a three year old single core at the moment. I am debating keeping the single core on line vice replacing it with another quad.

So you probably don't have many PCIe slots. That means you'll want to buy fewer faster GPUs. Because with GPUs the next generation will always be around and you'll be tempted to buy again. But then you'll need a new house for your "old" card (which is still too new to throw it away), so you'll be glad if you've got a spare PCIe slot left.

You got it! The temptation to buy above need and then regretting it. Were I a gamer, nothing would be too much or a waste: if performance were not as expected for BOINC, it would always make for better gaming. But for me it crunches faster or it doesn't.

Quote:
Regarding the nVidia onboard GPUs: sorry, I was implying you're running Windows.

No one ever went broke underestimating ...

No blame.

Quote:

Forget about the service-thing under Linux. Here you've got other problems. In fact, I've been trying since a half year to use a low end GPU in some box, but just couldn't get it to work. But I know BOINC would detect it just fine if I just did the right thing.

MrS

If only I did the right thing. Sometimes I entertain the heresy that Windows users are right about us Linux types.

I still am surprised by the absence of a cost/performance trade-off on GPU cards. As you see, I speculate the reason is gaming, so the only cost consideration is for the game, and the more expensive the better. Not denigrating gaming, far from it, but I can't seem to google relevant information on GPU performance as an alternative processor. It was easier to find a programming language, which I am not going to tackle any time soon.

One might hope lurkers here would think about their experience and report cost instead of just model numbers. It seems everyone agrees on using TigerDirect as the cost standard; I find that odd in its own right.

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 769
Credit: 187,482,696
RAC: 175,149

RE: That means more or less

Message 99653 in response to message 99652

Quote:

That means more or less pure physics, admitting that an SF interest got me started. That also means I do not really care about primes or any other pure math(s) issues.

I still am surprised by the absence of a cost/performance trade-off on GPU cards.

... but I can't seem to google relevant information on GPU performance as an alternative processor.
One might hope lurkers here might think about their experience and report cost instead of model number. It seems everyone agrees with using tigerdirect as the cost standard. I find that odd in its own right.

Milkyway is an astronomy project with some physics relevance. They're not directly analyzing astronomical data like Seti and Einstein, though; they're modelling the Milky Way's structure. And since I know the GPU performance over there, I'll use this as an example. It is typical for GPU projects in the way that performance scales with theoretical FLOPs, even more so than for some other projects. And it's untypical in the way that nVidia hardware really sucks here; it's not built for such a task.

Name     | Price | TDP  | Peak FLOPs  | Maximum RAC
HD4770   | 110€  |  80W |  960 GFLOPs |  79.1k
HD4850   |  80€  | 114W | 1000 GFLOPs |  82.3k
HD4870   | 115€  | 157W | 1200 GFLOPs |  98.8k
HD4890   | 190€  | 190W | 1360 GFLOPs | 112k
HD4870X2 | 320€  | 286W | 2400 GFLOPs | 198k
HD5830   | 165€  | 175W | 1800 GFLOPs | 148k
HD5850   | 225€  | 170W | 2100 GFLOPs | 173k
HD5870   | 315€  | 188W | 2720 GFLOPs | 224k
HD5970   | 510€  | 294W | 4640 GFLOPs | 382k

I've based performance on my own card running an optimal software configuration 24/7. Prices are from Geizhals Germany as of today. The 4770, 4890 and 4870X2 are basically discontinued. The HD5830 is not a good deal. The 4850 and 4870 are nice if you can get them used. Buying new is OK from a performance/price standpoint, but their power efficiency is lower than the new cards', so you'll pay more for them in the long term. Generally Milkyway uses the cards very efficiently, which means they draw quite some power (running CUDA Einstein V2, power consumption should be near idle levels), so assuming TDP power draw is not too wrong. The real value is likely to be a bit smaller, but the rough order of cards remains. All cards can run at >99% performance at significantly reduced memory clocks (e.g. the 4870 can save ~30W here by running at 400-500 MHz mem instead of 900). The 5830 is more similar in power draw to the 5870, whereas the 5850 tends to draw more like 150W rather than 170. Also, the 4870 and 4890 are very close in real-world power draw. Oh, and almost all cards not on this list can't run Milkyway.

Easy, isn't it? Take the FLOPs, search for the current price and keep power consumption (= running cost) in mind, and you'd be good to go here. For Collatz (= math) the relations between these cards are similar. The differences: generally lower power consumption, a bit fewer credits, and almost any modern ATI can run it. I can't give you such a chart for Einstein (pointless, as it largely depends on CPU speed) or SETI, though, as I've been out of SETI for a long time.
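The table can be reduced to the two numbers that matter, credits per euro and credits per watt (a sketch using only the figures quoted in the post; the prices and RAC values are the 2010 estimates given above, not current data):

```python
# (name, price_eur, tdp_w, max_rac) from the table above
cards = [
    ("HD4770",   110,  80,  79_100),
    ("HD4850",    80, 114,  82_300),
    ("HD4870",   115, 157,  98_800),
    ("HD4890",   190, 190, 112_000),
    ("HD4870X2", 320, 286, 198_000),
    ("HD5830",   165, 175, 148_000),
    ("HD5850",   225, 170, 173_000),
    ("HD5870",   315, 188, 224_000),
    ("HD5970",   510, 294, 382_000),
]

# Rank by RAC per euro of purchase price; also show RAC per watt of TDP.
for name, price, tdp, rac in sorted(cards, key=lambda c: c[3] / c[1], reverse=True):
    print(f"{name:9s} {rac / price:7.0f} RAC/eur  {rac / tdp:7.0f} RAC/W")
```

By the RAC-per-euro metric the HD4850 comes out on top, matching the "4850 and 4870 are nice" advice, while the HD5000 series generally wins on RAC per watt.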

MrS

Scanning for our furry friends since Jan 2002

Matt Giwer
Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6,891,649
RAC: 0

RE: . ... Easy, isn't it?

Message 99654 in response to message 99653

Quote:

.
...
Easy, isn't it? Take the FLOPs, search for the current price and keep power consumption (= running cost) in mind, and you'd be good to go here. For Collatz (= math) the relations between these cards are similar. The differences: generally lower power consumption, a bit fewer credits, and almost any modern ATI can run it. I can't give you such a chart for Einstein (pointless, as it largely depends on CPU speed) or SETI, though, as I've been out of SETI for a long time.

MrS

It will take me a while to digest this by searching on the cards and such but this appears to be exactly what I have been looking for. Thank you very much for your time in this.

But as to power consumption: if one does not have to buy a new power supply, then "free" heat in winter against added A/C in summer may or may not be a cost, depending upon your local climate. The fraction of loss depends on the heating/cooling cost, which is "too hard" for this discussion. I am in Florida, USA, so I opt for lower heat, but back in DC I am confident it would have been neutral in annual cost.

But you have produced the important part of it and as you cite a vendor in Germany you should love extra heat compared to my marginal dislike of it. Climate here is 6 months full time A/C with maybe one month on either side in the "don't care" category. Back in DC I would have had two months on either side as "don't care" months so the actual months it mattered would be problematic.

To summarize, forget power consumption. It is too hard for us mere mortals.

mikey
mikey
Joined: 22 Jan 05
Posts: 6,601
Credit: 605,771,820
RAC: 854,875

RE: But you have produced

Message 99656 in response to message 99654

Quote:

But you have produced the important part of it and as you cite a vendor in Germany you should love extra heat compared to my marginal dislike of it. Climate here is 6 months full time A/C with maybe one month on either side in the "don't care" category. Back in DC I would have had two months on either side as "don't care" months so the actual months it mattered would be problematic.

To summarize, forget power consumption. It is too hard for us mere mortals.

I live in Northern Va, just South of Dale City/Woodbridge, and the cost of the a/c is not cheap! I have 9 gpu's crunching right now and my electric bill is around $500.00 per month. I have a 2 story, not counting the basement, Colonial style home that is all electric, but have no kids at home, just the wife and I. Gpu's are NOT cheap to run and the heat generated is a consideration!! I have most of my gpu's in my basement, and being a basement the airflow is HORRIBLE, but my basement is right now in the low 80's while it is in the 70's most days outside right now, so about a 10°F increase, and that is with the a/c on!! Now when it is in the 90's outside my basement did go up into the mid to upper 80's, but the a/c kept it below 90 down there, that and a few floor fans blowing the air around. As I said it is a basement so the cooling options are poor at best.

Mad_Max
Mad_Max
Joined: 2 Jan 10
Posts: 147
Credit: 1,745,227,217
RAC: 565,572

RE: Yup. Assuming the E@H

Message 99657 in response to message 99631

Quote:


Yup. Assuming the E@H available total of ~320 TFLOPS is presently allocated 50:50 b/w GW and ABP units, then we only need 160 * (49/271) ~ 28.9 TFLOPS to keep pace with present Arecibo data production. Mind you, the total E@H TFLOPS estimate is based on overall project RAC, so it doesn't separate CPU FLOPS vs GPU FLOPS per se. Still a good ballpark figure, though. It's impressive that 'only 1000 modest' video cards can give E@H 10% of its computational capacity. But beware: while speed is one thing, correctness is still King ...

Cheers, Mike.


I think the correct allocation is ~ 70/30 for GW and ABP work.
So the requirement will be even less: likely 15-20 TFLOPS (~2M cr/day) would suffice to keep up in "realtime" with the current Arecibo data flow, after finishing crunching the backlog of old data.

But as Bernd wrote above, Arecibo is not the only radio telescope in the world. Moreover, it covers only a small fraction of the sky in its survey (the fraction most "populated" by pulsars, but pulsars exist in other areas too) :)
So after that computing power is freed up, I hope we can start processing data from other telescopes (while continuing to handle the flow of new data from Arecibo, of course).
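The keep-pace arithmetic traded in these two posts can be checked directly (a sketch; the 320 TFLOPS total and the 49/271 beams-per-day ratio are the figures quoted earlier in the thread):

```python
TOTAL_TFLOPS = 320          # rough E@H total quoted in the thread
KEEP_PACE_RATIO = 49 / 271  # fraction of ABP capacity needed to match Arecibo output

def abp_requirement(abp_share):
    """TFLOPS needed to keep pace with Arecibo, given ABP's share of the project."""
    return TOTAL_TFLOPS * abp_share * KEEP_PACE_RATIO

print(f"{abp_requirement(0.5):.1f}")  # 28.9 TFLOPS at a 50:50 split (Mike's figure)
print(f"{abp_requirement(0.3):.1f}")  # 17.4 TFLOPS at 70/30, inside Mad_Max's 15-20 range
```

Both posters' numbers follow from the same two inputs; only the assumed GW/ABP split differs.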

tolafoph
tolafoph
Joined: 14 Sep 07
Posts: 122
Credit: 74,576,962
RAC: 21,860

RE: I think correct

Quote:
I think the correct allocation is ~ 70/30 for GW and ABP work.

It used to be 70/30, but Bruce Allen said after the discovery of the pulsar that he would change it to 50/50.
On the Einstein@Home Arecibo Binary Radio Pulsar Search Progress Page you can see a jump from 150-200 beams/day in the first half of the year to 250-300 beams/day now, which fits the assumption that the ratio was changed to 50% GW and 50% ABP.
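A quick consistency check of that jump (a sketch; the beams/day ranges are the ones read off the progress page as quoted above):

```python
# If ABP beam throughput scales with ABP's share of project resources,
# moving from a 30% to a 50% share should multiply beams/day by 5/3.
old_share, new_share = 0.30, 0.50
old_beams = (150, 200)  # beams/day before the change, from the progress page

expected = tuple(round(b * new_share / old_share) for b in old_beams)
print(expected)  # (250, 333), overlapping the observed 250-300 beams/day
```

The predicted range brackets the observed one, so the beam counts are consistent with the 70/30 to 50/50 switch.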
