My WUs only allocated to one other host

Gordon Grant
Joined: 19 Sep 05
Posts: 15
Credit: 283586
RAC: 0
Topic 190011

All four of the WUs I have received in the last couple of days have been allocated to only one other host, and remain so. This seems very odd, as though I am in lockstep with two other hosts which are refusing new work. Anyone seen this sort of thing before?

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5854
Credit: 111307961533
RAC: 34937469

RE: My WUs only allocated to one other host

Quote:
All four of the WUs I have received in the last couple of days have been allocated to only one other host. This seems very odd, as though I am in lockstep with two other hosts which are refusing new work. Anyone seen this sort of thing before?

Yes, many, many times :). It's just the way the system works. The gap can even be several days before the full quota of 4 crunchers is on board. You and the other host are the "frontrunners". Eventually the data files will be sent to 2 more hosts and they will start crunching. Just be patient - all is well.

Cheers,
Gary.

Gordon Grant
Joined: 19 Sep 05
Posts: 15
Credit: 283586
RAC: 0

Well, it's not quite as

Message 18256 in response to message 18255

Well, it's not quite as simple as that, I feel. ALL of my new WUs suffer from this problem and there is no sign of any further hosts being added to any of them. I don't see this problem when I inspect the machines of other crunchers.

Even more oddly, the ONE host is always the SAME host (machine #41082). I decided to try an experiment to see if I could shake this problem off. I aborted the three 'waiting to run' WUs. Eventually I was given three replacements and they ALL have the same problem, apart from the one for which I am the only host so far.

I'm building a long queue of pending credits and all my current work seems set to join this queue indefinitely. Perhaps I should stop crunching for a few weeks, and try again?

Michael Roycraft
Joined: 10 Mar 05
Posts: 846
Credit: 157718
RAC: 0

RE: Well, it's not quite as

Message 18257 in response to message 18256

Quote:

Well, it's not quite as simple as that, I feel. ALL of my new WUs suffer from this problem and there is no sign of any further hosts being added to any of them. I don't see this problem when I inspect the machines of other crunchers.

Even more oddly, the ONE host is always the SAME host (machine #41082). I decided to try an experiment to see if I could shake this problem off. I aborted the three 'waiting to run' WUs. Eventually I was given three replacements and they ALL have the same problem, apart from the one for which I am the only host so far.

I'm building a long queue of pending credits and all my current work seems set to join this queue indefinitely. Perhaps I should stop crunching for a few weeks, and try again?

My advice is to continue crunching and have some patience; you will find that everything sorts itself out. I've seen posts where a person is the SOLE cruncher on several consecutive WUs, lasting for several days. Eventually the server assigns these WUs to their full quota of 4 hosts - or even more, if there is a client error on one or more of them - until a quorum of at least 3 validated results is formed.
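To make that concrete, here is a minimal sketch of the quorum logic in Python. The initial replication of 4 and the quorum of 3 come from this thread; the 10% error rate and all the names are illustrative assumptions, not actual BOINC server code:

import random

INITIAL_REPLICATION = 4   # results created per workunit (per this thread)
MIN_QUORUM = 3            # validated results needed before credit is granted
ERROR_RATE = 0.10         # assumed chance of a client error (illustrative)

def returns_valid():
    # Pretend a host crunches the result; most come back valid.
    return random.random() > ERROR_RATE

# The workunit is first sent out to its initial quota of hosts.
issued = INITIAL_REPLICATION
valid = sum(1 for _ in range(INITIAL_REPLICATION) if returns_valid())

# Each client error causes one extra result to be issued, until the
# quorum forms and every valid host is granted its pending credit.
while valid < MIN_QUORUM:
    issued += 1
    if returns_valid():
        valid += 1

print(f"quorum of {MIN_QUORUM} reached after issuing {issued} results")

Until that last line is reached, every completed result just sits in the "pending" column - which is exactly the queue being described here.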

re: credits pending indefinitely - it never happens. For example, about 2 months ago, my results list had grown to over 60 WUs, 36 of which were completed with credits pending (over 2400 credits)! I was invited out for an extended boat ride down the coast, and left my computer running with Einstein crunching. Upon my return 10 days later, my machine had crunched another 40+ WUs, but the number of WUs pending credit had shrunk to 8. Nearly all of the previously and newly completed WUs pending credit had reached the quorum and been granted their credit, and my total credit had increased by a whopping 5000! Patience, my friend.

It comes down to this - are you in it for the short term, near-instant gratification, or are you more committed to contributing to science and seeing longer-term results? Only you can decide this.

I hope this helps. Respects.

microcraft
"The arc of history is long, but it bends toward justice" - MLK

C
Joined: 9 Feb 05
Posts: 94
Credit: 189446
RAC: 0

The reason this happens is

The reason this happens is that a large chunk of data is downloaded to your machine, and "pieces" of it are then converted into individual WUs on your machine. It crunches one of these and sends in the results, then crunches another portion of the big block. You appear to be in lockstep with a single other computer because you both received the large block of data at about the same time, and the server just hasn't sent it out to others yet. I've had this happen several times before, with some WUs sitting all by themselves for 3 or 4 days on just my computer. As Gary and Michael said - just give it time; someone else will be issued the block of data, crunch some WUs, and you'll suddenly receive a large pile of credits all at once.
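For what it's worth, here is a rough Python sketch of that pairing effect. The host names, the quota of 4, and the idea that the scheduler simply prefers hosts that already hold the big data file are illustrative assumptions based on the description above, not the project's actual scheduler:

import itertools

# Hosts that have already downloaded the large data file.
have_file = ["your_host", "host_41082"]

# Workunits cut from that single data file; each needs 4 hosts in total.
QUOTA = 4
workunits = {f"wu_{n}": [] for n in range(1, 5)}

def assign(wu, newly_arrived_hosts=()):
    # Prefer hosts that already hold the file, then fill the quota as
    # the server sends the file to fresh hosts (possibly days later).
    for host in itertools.chain(have_file, newly_arrived_hosts):
        if len(workunits[wu]) < QUOTA and host not in workunits[wu]:
            workunits[wu].append(host)

for wu in workunits:                  # frontrunner phase: only 2 hosts
    assign(wu)
print(workunits)                      # every WU shows the same pair

for wu in workunits:                  # later: the file reaches 2 more hosts
    assign(wu, ["host_A", "host_B"])
print(workunits)                      # each WU fills out to its quota of 4

The first print shows every workunit stuck with the same two hosts - the lockstep effect - and the second shows them all filling out at once when new hosts finally get the data file.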

C

Gordon Grant
Joined: 19 Sep 05
Posts: 15
Credit: 283586
RAC: 0

RE: The reason this happens

Message 18259 in response to message 18258

Quote:

The reason this happens is that a large chunk of data is downloaded to your machine, and "pieces" of it are then converted into individual WUs on your machine. It crunches one of these and sends in the results, then crunches another portion of the big block. You appear to be in lockstep with a single other computer because you both received the large block of data at about the same time, and the server just hasn't sent it out to others yet. I've had this happen several times before, with some WUs sitting all by themselves for 3 or 4 days on just my computer. As Gary and Michael said - just give it time; someone else will be issued the block of data, crunch some WUs, and you'll suddenly receive a large pile of credits all at once.

C


Thanks, that makes perfect sense.
