The coming distributed processing explosion

Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6891649
RAC: 0
Topic 195315

I am not trying to start a bragging war here. This is a heads-up for planning purposes.

In the good old days of seti@home, and the modestly non-bad old days of boinc, the number of participants grew at roughly the same rate as the number of computer users. This was only modified by the growth of new projects matching the interests of existing users.

In the years before 2007 I kept old machines around and online doing seti@home, so I was one of those rare birds, a single user with more than one home computer. Three was my max, as those were the days of rapid increases in processor speed and the "fourth" would have been so old that being too slow to justify the electricity was a consideration.

My how things have changed. After two years out of boinc and with a single computer, I bought a computer to drive my new flat screen and do mythtv kinds of things with an external encoder feeding a USB port. I was looking for a cheap off-lease but slow computer from Tigerdirect into which I would insert an nVidia card I had lying around. While looking for one, a refurbished HP quad core got my attention.

To make a long story short, I got it and went from 1 to 5 "hosts" overnight. In those good old days a five-fold increase in the number of new users would have taken years.

To make it even shorter, I just bought my second refurbished quad core to be my working console, and am debating keeping the old single core online even though it is the fastest of the three.

In less than a year I have gone from 1 to 9 hosts.

I am not bragging, because both of the quads were refurbs and not intended to be anything to brag about in terms of performance. In fact reviewers generally slam them all around in comparison to the equivalents at street prices, though not at refurb prices.

Yes I am more than a little bit of a nerd but ...

I know of no reliable statistics on the number of years it takes the average user to replace a computer. But if replacing today, most people will be buying at least a dual-core computer. From the ads I see, real soon single cores will be for the curio cabinet next to the Atari 800 and the Lisa.

This kind of rapid growth in hosts, that is cores, is in progress and will accelerate as normal replacement occurs. This is a growth rate we have not seen since the mid-80s, when IBM got behind the PC and made it mandatory for business. As the whole boinc idea did not even have a name in 1999 when seti@home started, the distributed processing community has never seen this kind of explosive growth.

What does it matter? The first is usually the farthest behind. Seti@home is so poor in computer resources back at Berkeley that it suspends interaction with users for two to three days a week. Not much planning ahead there. Is UWM planning to stay ahead of the curve? Can UWM produce four times the data to be processed three years from now?

Processing speed got ahead of S@H around 2004, when they started reprocessing old data for narrower frequency bins. It was only the 13-channel receiver at Arecibo that produced enough data to stay ahead of the game. It also added pulse analysis. I note that many projects, including einstein, are adding sub-projects just so they have data to pass out.

The bottom line here is to plan ahead, not only for local plant hardware to deal with multi-cores replacing single cores, but also for producing enough data to be analyzed.

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0


The Allen Telescope Array is getting 1 GB/s of data. How they will cope with it is a question. Cloud computing, anyone?
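
Taking the quoted 1 GB/s at face value, a quick back-of-the-envelope sketch of what that adds up to:

/* datarate.c - what a sustained 1 GB/s of raw data adds up to. */
#include <stdio.h>

int main(void)
{
    const double gb_per_s = 1.0;                        /* quoted ATA figure */
    double tb_per_day  = gb_per_s * 86400.0 / 1000.0;   /* ~86 TB per day    */
    double pb_per_year = tb_per_day * 365.0 / 1000.0;   /* ~32 PB per year   */

    printf("%.1f GB/s = %.0f TB/day = ~%.0f PB/year\n",
           gb_per_s, tb_per_day, pb_per_year);
    return 0;
}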
Tullio

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6534
Credit: 284700169
RAC: 113726


Interesting thoughts. There may well be a cusp nearby. The history of technology (and computing is no exception) has shown that successive quantitative changes can lead to qualitative jumps. E@H, and BOINC generally, is, I think, currently populated largely by nerds or nerd variants, meaning that this type of activity, distributed computing, will naturally attract such people. Which is all well and good, more power etc .... :-)

Thus from the point of view of scientists looking to engage free resources (not otherwise affordable) to further their research, collectively we are 'low hanging fruit' and juicy to boot. We like quad cores, fancy cooling, shaving core cycles, and buffing our computer cases with special polish (well, I use ArmourAll). Many contributors are not bothered by arcana in the least, and willingly and lucidly give detailed feedback to mutual benefit. Apart from the odd participant who seems to want to turn a pizza shop into a hairdressing salon, all works well.

When LIGO cranks up the interferometers in their Advanced configurations, the data stream will be intense, and simultaneously way more scientifically precious and fruitful too. All being well. And yes, to a point we are cooling our heels a bit at present while we await such engineering upgrades. ABP is a case in point, and delightfully so. The recent pulsar discovery is a prime example of the qualitative change I mentioned, as without E@H the pathway to that would not have been travelled (for some time at least). Meaning that E@H already existed for another reason, placing one foot before another, and ABP tapped that. Hence your question:

Quote:
Is UWM planning to stay ahead of the curve? Can UWM produce four times the data to be processed three years from now?


is very relevant. Maybe one of our admins/devs could comment on this?

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

John Clark
Joined: 4 May 07
Posts: 1087
Credit: 3143193
RAC: 0


A variant on this theme, mainly related to older computers, is upgrading these systems with fast GPUs. Refurbished quads are good, and fast GPUs are another way to keep older computers producing competitive output at a non-wallet-killing price.

Shih-Tzu are clever, cuddly, playful and rule!! Jack Russell are feisty!

Matt Giwer
Joined: 12 Dec 05
Posts: 144
Credit: 6891649
RAC: 0


Message 99591 in response to message 99590

Quote:
A variant on this theme, mainly related to older computers, is upgrading these systems with fast GPUs. Refurbished quads are good, and fast GPUs are another way to keep older computers producing competitive output at a non-wallet-killing price.

If I might add ...

I have always bought computers behind the curve. The hottest new machines (not meaning gaming machines) become fairly priced about 9-10 months after introduction. That is when I used to buy. I never saw the point of paying a premium for a box that would leave me hungry again in 9 months.

Since discovering refurbed machines I get them maybe six months behind the curve. I get the performance a few months earlier.

In the long term sense it still takes about three years before there are enough improvements worth buying.

As to GPUs: I did a quick try on my newest machine and had a problem, so I took out the old nVidia card. The motherboard has CUDA-capable nVidia chips, but the test the computer runs on startup does not recognize them. Now that it is up and running I will have to try the card again and see if I can get it recognized.

What I have not come across is a rule of thumb to estimate the performance improvement, to see how much effort it is worth to get it working. Any suggestions? Seems to me that if the CUDA stuff is up to its reputation, even the cheapest card is worth the investment.
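
One quick way to see whether the onboard nVidia chips (or a plugged-in card) are visible to CUDA at all is to enumerate the devices through the CUDA runtime API. A minimal sketch in C, assuming the CUDA toolkit is installed (compile with nvcc); the compute capability it reports is also what decides which GPU applications a project can send you:

/* devquery.c - list CUDA devices and their compute capability.
   Minimal sketch, not a full diagnostic; compile with: nvcc devquery.c -o devquery */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        /* No device, no driver, or a driver/toolkit mismatch */
        printf("CUDA not available: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) found\n", count);
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  %d: %s, compute capability %d.%d, %lu MB\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }
    return 0;
}

If nothing shows up here, no BOINC project will see the GPU either, so the driver is the first thing to sort out.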

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0


In SETI@home, CUDA-equipped PCs crunch WUs so fast that they overload the servers. The result: a server shutdown 3 days a week, during which I cannot upload my CPU results. I hope this does not happen in Einstein.
Tullio

Alex
Joined: 1 Mar 05
Posts: 451
Credit: 500127910
RAC: 214437


Message 99593 in response to message 99591

Quote:

What I have not come across is a rule of thumb to estimate the performance improvement to see how much effort it is worth to get it working. Any suggestions? Seems to me if the CUDA stuff is up to its reputation even the cheapest card is worth the investment.

Hi,
about one year ago I focused my interest on GPU processing. I joined mw@h and noticed that there are people with more than 100,000 credits per day. This was one reason for me to buy an HD3800 graphics card. The other reason was the need for a second monitor on my main PC.
It was really tricky to install the app, but afterwards I got ~20,000 credits granted per day.
The next upgrade was an HD4850, which is good for ~50,000 credits.
Now I have one machine with two ATI cards and one machine with nVidia (GTX260-192).
And clearly I was looking for projects that make use of my GPUs.
What I've learned: ATI (with its CAL or Stream application) has three main projects: MW, Collatz and DNETC.
nVidia, with its CUDA application, has more friends. And it's much more complicated, since CUDA is not CUDA.
This picture from the GPUGRID forum gives you some info:
http://koschmider.de/pics/desktop_chips.png
There is CUDA compute capability 1.0, 1.1, 1.2, 1.3, 2.0, 2.1 and so on.
More info here: http://www.gpugrid.net/forum_thread.php?id=1150

But that's all outdated. Since AMD and nVidia defined a common programming standard called OpenCL (CL stands for Computing Language), developers have started writing OpenCL apps. The first ones are available at GPUGRID.
But not all cards are OpenCL compatible. Most Fermi cards are; from AMD only the HD58xx series is compatible.
Speed can be compared very well with the MW apps: they always deliver the same amount of data to crunch, and you always get 213 credits for a correct result.
On an average modern CPU it takes ~2.5 hours (not really sure, since my last CPU WU was one year ago), my GTX260 needs ~17 min, my HD4870 needs 286 sec (4:46) and my HD5830 202 sec (3:22).
You can find a lot of information about nVidia cards, their capabilities and their problems at GPUGRID. They are specialists for that type of GPU.
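
For the rule-of-thumb question above, a throw-away calculation of what those Milkyway times imply, taking the ~2.5 hour CPU figure at face value (the exact ratios will of course vary with the CPU and the application):

/* speedup.c - rough GPU-vs-CPU speedup implied by the MW run times quoted above. */
#include <stdio.h>

int main(void)
{
    const double cpu_s   = 2.5 * 3600.0;                 /* ~2.5 h on an average CPU */
    const char  *name[]  = { "GTX260", "HD4870", "HD5830" };
    const double gpu_s[] = { 17.0 * 60.0, 286.0, 202.0 };

    for (int i = 0; i < 3; ++i)
        printf("%-7s %5.0f s  ->  roughly %2.0fx faster than the CPU\n",
               name[i], gpu_s[i], cpu_s / gpu_s[i]);
    return 0;
}

So on that particular workload even a mid-range card of that era buys one to two orders of magnitude over a single CPU core.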

Hope that helps.

Alexander

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0


First SETI Classic and then BOINC were built on the idea of exploiting "unused cycles". Now people are building home supercomputers just to run BOINC projects and get more credits. The whole structure of BOINC must be changed and adapted to this new situation, in the direction of volunteer cloud computing. See the London talks by David Anderson and Ben Segal, whose slides are available on the BOINC site. And, as far as I understand, the SETI Institute and SetiQuest are following the same path (see www.setiquest.org).
Tullio

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 536524330
RAC: 186769


Message 99595 in response to message 99594

Quote:
First SETI Classic and then BOINC were built on the idea of exploiting "unused cycles". Now people are building home supercomputers just to run BOINC projects and get more credits. The whole structure of BOINC must be changed and adapted to this new situation, in the direction of volunteer cloud computing.

That's not new. People were building farms of overclocked 700 MHz Semprons back when the 1 GHz P3 was the price-king-of-the-hill, and they were able to outperform it in SETI Classic for ~100€ a build (scavenged HDD & PSU, no case etc.). It's just that the difference between a value rig and a high-end rig has increased dramatically over the last few years. Many more cores, more expensive CPUs, several GPUs, 1 kW PSUs... all this makes those rigs look more impressive. And it makes the financial commitment to crunching more serious.

I don't think it's going to be a trend that will overwhelm all projects, though:
- by now we've got many projects to choose from (it's much more of a problem for GPU projects)
- electricity is not going to get cheaper anytime soon, while the power draw of high-performance hardware is not going to drop either (limit: ~130 W for CPUs, 300 W for GPUs; see the rough cost sketch after this list)
- as hardware becomes more capable, more people will be served by a laptop or an even smaller device rather than a full-scale performance desktop. These smaller devices are going to become faster, but they are not going to overwhelm the projects.
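
To put a rough number on the electricity point, a back-of-the-envelope sketch; the 0.20 per kWh rate is only an assumed illustrative figure, not a quote:

/* powercost.c - rough yearly electricity cost of a 24/7 cruncher. */
#include <stdio.h>

int main(void)
{
    const double watts        = 130.0 + 300.0;    /* one CPU plus one high-end GPU, flat out */
    const double eur_per_kwh  = 0.20;             /* assumed electricity rate                */
    const double hours_per_yr = 24.0 * 365.0;

    double kwh  = watts / 1000.0 * hours_per_yr;  /* ~3770 kWh per year */
    double cost = kwh * eur_per_kwh;              /* ~750 EUR per year  */

    printf("%.0f W around the clock = %.0f kWh/year = ~%.0f EUR/year\n",
           watts, kwh, cost);
    return 0;
}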

MrS

Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 536524330
RAC: 186769


Message 99596 in response to message 99593

OT @ Alex: your ATIs are rather slow at Milkyway. My 4870 is clocked at 805 MHz core and 462 MHz memory and crunches through the WUs in 175 s. I'm using an app_info.xml with the option b-1; everything else is stock (running 1 WU at a time). It does make the system a bit sluggish, but for me the RAC is worth it ;)

MrS

Scanning for our furry friends since Jan 2002

tullio
Joined: 22 Jan 05
Posts: 2118
Credit: 61407735
RAC: 0


Message 99597 in response to message 99595

SETI@home is shutting down 3 days a week to cope with the uploads from fast CPUs and GPUs. So people are stocking up on hundreds of WUs just to survive the down periods, taking advantage of the long SETI deadlines. Meanwhile my CPU must wait and wait to have its single result uploaded.
Tullio
