A way to match your PC's processing power with others for quicker results?

DigitalDingus
Joined: 15 Oct 06
Posts: 15
Credit: 1,105,785
RAC: 0
Topic 193931

I was wondering why I had over 3,200 pending credits today (more like 10K points once all the formulas are computed), and then got to wondering whether the pending credits are due to other computers that are not as fast.

Has anyone thought of an app for E@H that "knows" the processing power of your particular machine ID and pairs you only with machines similar to yours, so that pending credits don't "pend" as long?

tullio
Joined: 22 Jan 05
Posts: 2,038
Credit: 39,702,008
RAC: 15,192

A way to match your PC's processing power with others for quicker results?

Quote:

I was wondering why I had over 3,200 pending credits today (more like 10K points once all the formulas are computed), and then got to wondering whether the pending credits are due to other computers that are not as fast.

Has anyone thought of an app for E@H that "knows" the processing power of your particular machine ID and pairs you only with machines similar to yours, so that pending credits don't "pend" as long?


My RAC at QMC@home is rising rapidly and will soon be #1 among the 6 projects I am taking part in. Reason: the quorum at QMC is one. At SETI, by contrast, deadlines are oversized and credits stay pending, especially since SETI started sending out Astropulse WUs, which take 115 hours on my AMD Opteron 1210 running Linux at 1.8 GHz. Other CPU and OS combinations take even longer.
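As an illustrative sketch (hypothetical numbers, not real project data), the effect of quorum size on pending time can be written out like this:

```python
# Why quorum matters for pending credit: with a quorum of 1 (as at
# QMC@home) credit is granted as soon as you report; with a quorum of
# 2 it waits for the slowest quorum member. Numbers are made up.

def pending_days(my_turnaround, wingman_turnarounds):
    """Days my credit stays pending after I report: the gap between
    my report and the last quorum member's report."""
    if not wingman_turnarounds:  # quorum of 1: nothing to wait for
        return 0.0
    return max(0.0, max(wingman_turnarounds) - my_turnaround)

print(pending_days(1.0, []))     # quorum of 1 -> 0.0 days pending
print(pending_days(1.0, [5.0]))  # slow wingman -> 4.0 days pending
```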
Tullio

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,210
Credit: 43,481,151,820
RAC: 44,290,897

RE: ... wondering about if

Quote:
... wondering about if the pending credits are due to other computers who are not as fast.

Not necessarily. Work it out for yourself. Would you rather be paired up with an old clunker of a host that takes 3 days to crunch a single task but whose owner has set the work cache to be just 0.1 days, or alternatively, with an overclocked screamer that can churn out 4 tasks every 6 hours but whose owner has set an 8 day cache? You'll get a much more rapid turnaround with the old clunker under those conditions.
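The comparison works out as a rough back-of-the-envelope sketch (the cache sizes and crunch times are the hypothetical ones above):

```python
# Illustrative comparison, not real E@H hosts: turnaround is roughly
# the time a task waits behind the owner's work cache plus the time
# the host takes to crunch it.

def turnaround_days(cache_days, crunch_days_per_task):
    """Rough expected turnaround: a freshly downloaded task sits
    behind about cache_days of queued work before crunching starts."""
    return cache_days + crunch_days_per_task

# "Old clunker": 3 days per task, but only a 0.1-day cache.
clunker = turnaround_days(0.1, 3.0)

# "Overclocked screamer": 4 tasks every 6 hours (0.0625 days/task),
# but an 8-day cache keeps new tasks waiting.
screamer = turnaround_days(8.0, 6.0 / 24 / 4)

print(f"clunker:  {clunker:.2f} days")   # ~3.1 days
print(f"screamer: {screamer:.2f} days")  # ~8.1 days
```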

Quote:
Has anyone thought of an app for E@H which "knows" your processing power for your particular machine ID and sends only machines which are similar to yours so the credits which are pending don't "pend" as long?

IMO, that would be a complete waste of precious server resources. There is no problem with having a few pendings. If you want to limit the pendings a bit, the simplest way is to extend your cache size by a day or two so that tasks get to "mature" for a bit before you start crunching them. That way your quorum partners will be the ones complaining about pendings rather than your goodself :-).

Cheers,
Gary.

Gundolf Jahn
Joined: 1 Mar 05
Posts: 1,079
Credit: 341,280
RAC: 0

RE: IMO, that would be a

Message 85618 in response to message 85617

Quote:
IMO, that would be a complete waste of precious server resources. There is no problem with having a few pendings...


That is only right if one of the precious server resources, namely disk space, is plentiful. If not, as seen at SETI lately, you could save disk space by pairing up machines with comparable turnaround times, as discussed in this message thread currently going on in their Number Crunching forum.

Gruß,
Gundolf

Computer sind nicht alles im Leben. (Kleiner Scherz)

tullio
Joined: 22 Jan 05
Posts: 2,038
Credit: 39,702,008
RAC: 15,192

RE: RE: IMO, that would

Message 85619 in response to message 85618

Quote:
Quote:
IMO, that would be a complete waste of precious server resources. There is no problem with having a few pendings...

That is only right if one of the precious server resources, namely disk space, is plentiful. If not, as seen at SETI lately, you could save disk space by pairing up machines with comparable turnaround times, as discussed in this message thread currently going on in their Number Crunching forum.

Gruß,
Gundolf


Disk space is cheap today. I just bought a 160 GB HITACHI SATA disk, made in China, for 47.5 euros and it works beautifully on my SUN WS running Linux. But bigger disks are also available at even fewer euros or dollars per GB.
Tullio

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6,125
Credit: 126,930,764
RAC: 13,477

RE: Disk space is cheap

Message 85620 in response to message 85619

Quote:
Disk space is cheap today. I just bought a 160 GB HITACHI SATA disk, made in China, for 47.5 euros and it works beautifully on my SUN WS running Linux. But bigger disks are also available at even fewer euros or dollars per GB.


Yup, I recently got a Seagate 1TB ( well 1000GB to be 'exact' ) at ~ $0.20 / GB ( AUD ). The Aussie dollar is reasonably strong at present, so I snapped it up.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter. Blaise Pascal

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3,516
Credit: 455,876,996
RAC: 45,674

The primary concern with the

The primary concern for the scheduler now is (and I think has to be) to distribute work in a way that lets users re-use data files for different workunits as often as possible, to avoid frequent downloads of data files. Yes, bandwidth is cheap nowadays as well, but there are still users who pay by volume or have low-bandwidth connections.

So if you also try to optimize wingman selection by the turnaround time of the hosts, you further narrow down the scheduler's choices, and in the end you will have even more "unsent" WUs that take quite a long time to find a "suitable" wingman.
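As a toy illustration (made-up hosts and thresholds), adding a turnaround criterion on top of locality scheduling shrinks the scheduler's candidate pool:

```python
# Toy model of the argument above: each extra matching criterion
# shrinks the set of hosts a workunit can be sent to, leaving more
# WUs "unsent" while they wait. Hosts and numbers are invented.

hosts = [
    # (host_id, has_datafile, turnaround_days)
    (1, True,  1.0),
    (2, True,  9.0),
    (3, False, 1.5),
    (4, True,  4.0),
    (5, False, 0.5),
]

# Locality scheduling only: any host that already has the data file.
by_locality = [h for h in hosts if h[1]]

# Locality AND similar turnaround (within 2 days of a 1-day target):
by_both = [h for h in by_locality if abs(h[2] - 1.0) <= 2.0]

print(len(by_locality))  # 3 candidate hosts
print(len(by_both))      # 1 candidate host
```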

CU
Bikeman

tullio
Joined: 22 Jan 05
Posts: 2,038
Credit: 39,702,008
RAC: 15,192

I have few WUs in the

I have few WUs in the "pending credits" category in Einstein compared to what I have in SETI, where Astropulse units have long deadlines, and if the wingman does not finish on time they are resent again and again. No problem in QMC@home, where the quorum is one, or in CPDN. LHC sends too few WUs to create problems, and they are also very short. Cheers.
Tullio

John Clark
Joined: 4 May 07
Posts: 1,087
Credit: 3,143,193
RAC: 0

Unfortunately pending and

Unfortunately, pendings and wingmen can cause problems.

I have a slow host whose last 2 days of output are stuck at pending, waiting for wingmen.

Eight of the 9 WUs in pending belong to that host, which was quick 6 years ago.

Shih-Tzu are clever, cuddly, playful and rule!! Jack Russell are feisty!

Alinator
Joined: 8 May 05
Posts: 927
Credit: 9,352,143
RAC: 0

RE: The primary concern

Message 85624 in response to message 85621

Quote:

The primary concern for the scheduler now is (and I think has to be) to distribute work in a way that lets users re-use data files for different workunits as often as possible, to avoid frequent downloads of data files. Yes, bandwidth is cheap nowadays as well, but there are still users who pay by volume or have low-bandwidth connections.

So if you also try to optimize wingman selection by the turnaround time of the hosts, you further narrow down the scheduler's choices, and in the end you will have even more "unsent" WUs that take quite a long time to find a "suitable" wingman.

CU
Bikeman

Agreed, trying to pair up hosts by speed would be a complete waste of project-side effort, for a simple reason that folks almost always fail to take into account.

Regardless of how fast a host is, if it runs enough projects (or has heavily biased resource shares), processing is deferred for the tasks with the lowest share and/or 'tightness' factor.

Therefore, since EAH is fairly generous with deadlines, EAH work tends to get put off until it is getting close to the deadline before it runs. As a result, you could have a situation where even state-of-the-art machines take almost the full deadline to complete a task, even though they could run it in a fraction of that time if devoted solely (or at an even share) to EAH.
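A rough sketch with assumed numbers (a 6-hour task and a 5% resource share; BOINC's actual scheduling, including deadline "panic" mode, is more complicated):

```python
# Why a fast host can still take nearly the full deadline: a small
# resource share stretches a short crunch time across many days.
# Numbers are assumed for illustration.

def elapsed_days(crunch_days, resource_share_fraction):
    """Wall-clock days to finish when the host devotes only a
    fraction of its time to this project (no deadline pressure)."""
    return crunch_days / resource_share_fraction

fast_host_crunch = 0.25  # 6 hours of pure crunching

print(elapsed_days(fast_host_crunch, 1.00))  # sole project: 0.25 days
print(elapsed_days(fast_host_crunch, 0.05))  # 5% share: ~5 days
```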

Regarding SAH, keep in mind that there are typically over 3 million tasks outstanding at any given time, and the typical reaction of a large percentage of people to any backend trouble there is to bump their cache settings to the maximum and leave them there. They could eliminate their storage 'problem' entirely by cutting back on how much work they let folks keep sitting around in caches, just waiting for a host system crash to trash it and delay matters further. ;-)

I seem to recall that decision being made here not too long ago. :-)

Alinator

Odd-Rod
Joined: 15 Mar 05
Posts: 38
Credit: 4,270,708
RAC: 308

RE: EAH tends to get put

Message 85625 in response to message 85624

Quote:
EAH work tends to get put off until it is getting close to the deadline before it runs.

I can confirm that this is what seems to happen with BOINC (it is, in fact, a BOINC issue) - at least, that's what I've noticed. Not necessarily a good thing, because an unexpected host outage could cause a missed deadline.

Let me also mention that I'm detaching a weak/slow host from EAH after yet another error. Unfortunately there was also a lot of disk thrashing on that host whenever Einstein crunched. So if I was your wingman, please accept my apologies for extending your pending time - at least I can say it won't happen again!

Don't worry, my other hosts are staying.
Rod
