I was wondering why I had over 3,200 pending credits today (which means more like 10K points once all the formulas are computed) and then got to wondering whether the pending credits are due to other computers that are not as fast.
Has anyone thought of an app for E@H which "knows" the processing power of your particular machine ID and pairs you only with machines similar to yours, so that credits which are pending don't "pend" as long?
A way to match your PC's processing power with others for quicker credits
My RAC at QMC@home is rising rapidly and will soon be #1 among the 6 projects I am taking part in. Reason: the quorum at QMC is one. At SETI, by contrast, deadlines are oversized and credits stay pending, especially since SETI started sending out Astropulse WUs which take 115 hours on my AMD Opteron 1210 running Linux at 1.8 GHz. But other CPUs and OSes take even longer.
Tullio
RE: ... wondering about if
Not necessarily. Work it out for yourself. Would you rather be paired up with an old clunker of a host that takes 3 days to crunch a single task but whose owner has set the work cache to be just 0.1 days, or alternatively, with an overclocked screamer that can churn out 4 tasks every 6 hours but whose owner has set an 8-day cache? You'll get a much more rapid turn-around with the old clunker under those conditions.
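To put rough numbers on that (a toy model only: assume a freshly downloaded task sits behind roughly one cache's worth of queued work before it starts, which glosses over how the BOINC client really schedules things):

def estimated_turnaround(crunch_days_per_task, cache_days):
    # Toy model: a new task waits behind ~one cache's worth of queued work,
    # then takes crunch_days_per_task to run and report.
    return cache_days + crunch_days_per_task

# "Old clunker": 3 days per task, 0.1-day cache
print(estimated_turnaround(3.0, 0.1))           # ~3.1 days

# "Overclocked screamer": 4 tasks every 6 hours, 8-day cache
print(estimated_turnaround(6.0 / 24 / 4, 8.0))  # ~8.06 days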
IMO, that would be a complete waste of precious server resources. There is no problem with having a few pendings. If you want to limit the pendings a bit, the simplest way is to extend your cache size by a day or two so that tasks get to "mature" for a bit before you start crunching them. That way your quorum partners will be the ones complaining about pendings rather than your good self :-).
Cheers,
Gary.
RE: IMO, that would be a
That is only right if one of the precious server resources, namely disk space, is plentiful. If not, as seen at SETI lately, you could save disk space by pairing up machines with comparable turn-around times, as discussed in a message thread currently going on in Number Crunching over there.
Regards,
Gundolf
Computers are not everything in life. (Just a little joke.)
RE: RE: IMO, that would
Disk space is cheap today. I just bought a 160 GB Hitachi SATA disk, made in China, for 47.5 euros and it works beautifully on my SUN WS running Linux. But bigger disks are also available for fewer euros or dollars per GB.
Tullio
RE: Disk space is cheap
Yup, I recently got a Seagate 1 TB (well, 1000 GB to be 'exact') at ~$0.20/GB (AUD). The Aussie dollar is reasonably strong at present, so I snapped it up.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
The primary concern with the
The primary concern of the scheduler now is (and I think has to be) to distribute work in such a way that users can re-use data files for different workunits as often as possible, to avoid frequent downloads of data files. Yes, bandwidth is cheap as well nowadays, but there are still users who pay by volume or have low-bandwidth connections.
So if you also try to optimize wingman selection on the turn-around time of the hosts, you further narrow down the scheduler's choices, and in the end you will get even more "unsent" WUs that take quite a long time to find a "suitable" wingman.
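To illustrate the squeeze (a made-up example: the host names, data-file name and numbers below are invented, and this is not the actual Einstein@Home scheduler logic):

# Hypothetical candidate hosts: (name, data files already on hand, avg turn-around in days)
hosts = [
    ("A", {"h1_0450.20"}, 2.0),
    ("B", {"h1_0450.20"}, 9.5),
    ("C", {"h1_0812.55"}, 2.5),
    ("D", set(),          1.5),
]

needed_file = "h1_0450.20"   # data file this workunit needs (invented name)
first_turnaround = 2.0       # turn-around of the host that got the first copy

# Locality scheduling alone: any host that already has the file qualifies.
by_locality = [name for name, files, t in hosts if needed_file in files]

# Locality plus a turn-around match (say, within 2 days) leaves even fewer
# choices, so more workunits sit "unsent" waiting for a suitable wingman.
by_locality_and_speed = [name for name, files, t in hosts
                         if needed_file in files and abs(t - first_turnaround) <= 2.0]

print(by_locality)            # ['A', 'B']
print(by_locality_and_speed)  # ['A']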
CU
Bikeman
I have few WUs in the
I have few WUs in the "pending credits" category in Einstein compared to what I have in SETI, where Astropulse units have long deadlines and, if the wingman does not finish on time, they are resent again and again. No problem in QMC@home, where the quorum is one, or in CPDN. LHC sends too few WUs to create problems, and they are also very short. Cheers.
Tullio
Unfortunately pending and
Unfortunately, pendings and wingmen can cause problems.
I have a slow host whose last 2 days of output are stuck at pending, waiting for wingmen.
Eight of the 9 pending WUs belong to that host, which was quick 6 years ago.
Shih-Tzu are clever, cuddly, playful and rule!! Jack Russell are feisty!
RE: The primary concern
Agreed, trying to pair up hosts by speed would be a complete waste of project-side effort, for a simple reason which folks almost always fail to take into account.
Regardless of how fast a host is, if it runs enough projects (or has heavily biased resource shares), then processing is deferred for the tasks with the lowest share and/or 'tightness' factor.
Therefore, since EAH is fairly generous with its deadlines, EAH work tends to be put off until it is getting close to the deadline before it runs. As a result you can have the situation where current state-of-the-art machines still take almost the full deadline to complete a task, even though they could run it in a fraction of that time if devoted solely (or at an even share) to EAH.
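A back-of-the-envelope sketch of that effect (the numbers are invented, and it assumes CPU time is simply split in proportion to resource share, which is a simplification of the real BOINC client):

task_hours_dedicated = 10.0   # hypothetical: hours a fast host needs if it ran EAH alone
eah_resource_share = 0.10     # EAH gets only 10% of CPU time on this host
deadline_days = 14.0          # assumed deadline; EAH deadlines are fairly generous

# Wall-clock time when CPU time is split by resource share
wall_clock_days = task_hours_dedicated / eah_resource_share / 24.0
print(round(wall_clock_days, 1), "days of a", deadline_days, "day deadline")
# ~4.2 days -- and on top of that the client may defer the task until deadline
# pressure forces it to run, so the result is often reported close to the deadline.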
Regarding SAH, keep in mind that there are typically over 3 million tasks outstanding at any given time, and the typical reaction of a large percentage of people to any back-end trouble there is to bump their cache settings to the maximum and leave them there. The project could eliminate its storage 'problem' entirely by cutting back on how much work it lets folks keep sitting around in caches, just waiting for a host system crash to trash it and delay matters further. ;-)
I seem to recall that decision being made here not too long ago. :-)
Alinator
RE: EAH tends to get put
I can confirm that this is what seems to happen with BOINC (it is, in fact, a BOINC issue) - at least, that's what I've noticed. Perhaps not a good thing, because an unexpected host outage could cause a missed deadline.
Let me also mention that I'm detaching a weak/slow host from EAH after yet another error. Unfortunately there was also a lot of disk thrashing on that host when it crunched Einstein. So if I was your wingman, please accept my apologies for extending your pending time - at least I can say it won't happen again!
Don't worry, my other hosts are staying.
Rod