Hi!
I was looking at the results of my computer "425794" to see which other computers had computed the same work units... when I noticed with surprise that the users were not "random".
For example, considering only the last 40 results, the computer "97034" of user "Oki" worked on about 14 units together with my PC.
The computer "387767" of Rockytop (14/40) and the computer "416689" of Chiana (13/40) did the same.
The majority of the computers I found were "recurring computers".
Why does this happen?
Is it an artifact of the scheduling agent or is it a sort of balancing system?
Companions
The results you process are "sliced" from a large data file. With a fast machine you could easily have 10-15 results all from that one large data file. Whoever else gets the same large data file is going to see the same behaviour. If you don't see the sort of behaviour you mention then you should get worried :).
Towards the end of a "run" you often get just the odd result or two from each data file as the dregs are being cleaned up. Then you won't have "constant companions" :).
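If it helps to picture it, here's a toy sketch in Python of why the pairings recur. This is not the real Einstein@Home scheduler, just an illustration under one assumption: each workunit sliced from a given large data file is shared between two of the few hosts currently holding that file (the host IDs are just the ones from this thread):

    import random
    from collections import Counter

    # Assumption: only these few hosts hold the same large data file,
    # and each workunit from it is sent to two of them.
    HOSTS = ["425794", "97034", "387767", "416689", "404698"]
    SLICES_PER_FILE = 40  # assumed number of results per data file

    pairs = Counter()
    for _ in range(SLICES_PER_FILE):
        a, b = random.sample(HOSTS, 2)      # two hosts share this workunit
        pairs[tuple(sorted((a, b)))] += 1   # count how often each pair meets

    for (h1, h2), n in pairs.most_common(3):
        print(f"hosts {h1} and {h2} were companions on {n} workunits")

Because the pool of candidate hosts for one file is so small, the same pairs keep meeting, which is exactly the "recurring computers" effect.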
Cheers,
Gary.
I mean... for example:
I received this workunit yesterday, and it was also sent to computer 404698 of user "Waru".
http://einsteinathome.org/workunit/2490874
The same happened also with a workunit I received on 12 October:
http://einsteinathome.org/workunit/2335008
Do they both come from the same data file?
I would also like to stress that my computers are set to "Connect to network about every" 0.1 days, so they download only one unit at a time. It looks odd, doesn't it?
RE: http://einstein.phys.uw
Yes. Look at the WU name. Both begin with "l1_1445.0__1445".
Not really. It is done to save bandwidth and to help volunteers with low-bandwidth internet connections.
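To make the name check concrete, a couple of lines of Python do it. The naming scheme assumed here is "<datafile>__<rest>", and the suffix in the example name is invented; only the "l1_1445.0" prefix comes from this thread:

    # Two results come from the same large data file exactly when the
    # part of the workunit name before "__" matches.
    def data_file_of(wu_name: str) -> str:
        return wu_name.split("__", 1)[0]

    print(data_file_of("l1_1445.0__1445.1_example_0"))  # -> l1_1445.0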
Team Linux Users Everywhere
RE: Does they come both
Yes. In your BOINC folder, look in projects/einstein.phys.uwm.edu for a 6,236KB file called "l1_1445.0". That's the large data file from which you "slice" your consecutive results. Each time you request more work, you are actually sent instructions on how to take the next "slice" from the large data file.
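If you'd rather not hunt for those files by eye, something along these lines would list them (the path is relative to your BOINC directory, and the 1 MB cut-off is an arbitrary guess; adjust both for your setup):

    import os

    # Assumptions: script run from the BOINC directory; anything over
    # ~1 MB in the project folder is treated as a large data file.
    PROJECT_DIR = os.path.join("projects", "einstein.phys.uwm.edu")
    MIN_SIZE = 1024 * 1024

    for name in sorted(os.listdir(PROJECT_DIR)):
        path = os.path.join(PROJECT_DIR, name)
        if os.path.isfile(path) and os.path.getsize(path) >= MIN_SIZE:
            print(f"{name}: {os.path.getsize(path) // 1024} KB")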
On the website click on each of your result IDs. Each one with a name that starts with the above string was sliced from that large data file.
Having it at 0.1 is great, since you won't have excess work lying around going stale. However, it's got nothing to do with the fact that you will get a string of results sliced from the same large data file until that file is used up. It would work pretty much the same whether you got 10 results in one hit or 10 results one at a time. In both cases the data you process is sitting on your machine the whole time.
Cheers,
Gary.
Oh... that really makes sense. Sorry for the stupid question. :)
Thanks a lot for the fast answers, and good computing!
If you look in the Wiki, and the Einstein FAQ therein, we give an example and explain the WHY ... :)
Read up on it and report back ... you did not know you would be getting homework ... did you?
:)