Six hours ago I put a modest 2GHz P4 into the Einstein fray with Akos's S41.07. It's done 16 WUs. In another six hours it'll be out of work. Is this a record?
I got a whole bunch of those super-short WUs (15 minutes or less on akosf's app) one night. I think my Pentium D shredded through its daily quota overnight--in less than 8 hours. Now that machine is splitting Einstein time with CPDN, so I don't usually go through the quota now. Plus, I doubt it's all that likely to hit that many short ones at once again.
... Plus, I doubt it's all that likely to hit that many short ones at once again.
I think it actually is very likely that if you get one short result, you will get more of them.
It comes from the WU data pool concept of Einstein. Each WU is part of a "Super-WU"; once you have downloaded such a WU data pool, the server-side scheduler will try to give you as many WUs from that pool as possible to reduce downloads.
From what I have seen so far, a WU pool containing short results always has a bunch of them.
Most pools don't seem to have short results at all though.
("pool" might not be the right word for it - but it works similarly to a pool)
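To make that idea concrete, here is a minimal sketch of the pool/locality behaviour in Python. It is not the real Einstein@Home/BOINC scheduler code; `DataPool` and `pick_work` are invented names, and the WU names only mimic the r1_<frequency>_<sequence> pattern quoted later in this thread.

```python
# Toy model of the "pool" / locality behaviour described above: once a host
# holds the data file behind a pool, the scheduler keeps handing out results
# from that same pool until it runs dry. NOT the real scheduler code; all
# names here are invented for illustration.

class DataPool:
    def __init__(self, data_file, results):
        self.data_file = data_file   # frequency-band data file the WUs need
        self.unsent = list(results)  # results still waiting to be sent

def pick_work(host_files, pools, n_requested):
    """Prefer results whose data file the host already has (locality)."""
    assigned = []
    # First pass: pools whose file the host already holds -> no new download.
    for pool in pools:
        if pool.data_file in host_files:
            while pool.unsent and len(assigned) < n_requested:
                assigned.append(pool.unsent.pop(0))
    # Second pass: only when those run dry, start a new pool (= new download).
    for pool in pools:
        if len(assigned) >= n_requested:
            break
        if pool.unsent:
            host_files.add(pool.data_file)   # host must fetch this file first
            while pool.unsent and len(assigned) < n_requested:
                assigned.append(pool.unsent.pop(0))
    return assigned

# A pool built from a low-frequency file tends to contain short results,
# so once you are "stuck" to that file you keep getting them:
pools = [DataPool("r1_0234.5", [f"r1_0234.5_{i}" for i in range(613, 603, -1)]),
         DataPool("r1_0812.0", [f"r1_0812.0_{i}" for i in range(613, 603, -1)])]
host_files = {"r1_0234.5"}
print(pick_work(host_files, pools, 4))   # four WUs, all from the r1_0234.5 pool
```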
This is probably covered in some other postings, but to sum it up again:
- The data files you download contain detector data from a frequency band that is half a Hertz wide. The frequency is contained in the filename, and the filename in turn is part of the name of the Workunits that require this file.
- The scheduler has a feature called "locality scheduling" to avoid unnecessary downloads, i.e. it will try to assign work to your host that only requires the files you already have. Thus you will get "work" for the same frequency band that is covered by the same file.
- The lower the frequency, the coarser we can make the grid of sky positions to cover the sky, i.e. the fewer calculations we have to do. At higher frequencies we distribute the frequencies and sky positions between the Workunits so that they have more or less the same length. However, this doesn't work anymore below a certain frequency, resulting in shorter workunits there. The boundary is at about 300 Hz, i.e. if the WU name has a number below 300, it's a short one (see the sketch after this list).
- Currently the only way to get rid of the short WUs once you have one is to either just wait until all the WUs referring to that file are through, or to reset the project.
- In the next run we will also have 'long' and 'short' Workunits. To deal with that we plan to modify the scheduler so that it prefers to give the shorter WUs to the slowest machines.
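As a rough illustration of the 300 Hz rule in the list above, the snippet below pulls the frequency out of a WU name and flags it as short when the band lies below that boundary. The r1_<frequency>_<sequence> name pattern is assumed from an example quoted further down in this thread, so treat the exact parsing as illustrative rather than the official naming scheme.

```python
# Hypothetical helper: the frequency band is encoded in the WU name, and per
# the explanation above anything below roughly 300 Hz is a short workunit.
# Name format r1_<frequency>_<sequence> is an assumption for this sketch.

SHORT_WU_BOUNDARY_HZ = 300.0   # approximate boundary mentioned above

def wu_frequency(wu_name: str) -> float:
    """Pull the frequency band (in Hz) out of a name like 'r1_0234.5_612'."""
    return float(wu_name.split("_")[1])

def is_short_wu(wu_name: str) -> bool:
    return wu_frequency(wu_name) < SHORT_WU_BOUNDARY_HZ

for name in ("r1_0234.5_612", "r1_0812.0_98"):
    print(name, "short" if is_short_wu(name) else "long")
# r1_0234.5_612 short
# r1_0812.0_98 long
```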
An observation and suggestion: it seems to me that the longer the connect time, the closer together in sequence the work units are received. If I use a smaller connect time, the units are spaced further apart (r1_XXXX.X_612, _609, _605, _601, _599, _594) as opposed to _613, _611, _610, _608, _607, _605. By that logic, if one has a smaller connect time (such as .1), they won't receive as many small units overall. Of course the setting depends on what is best for the user's computer and needs (laptop, etc.). I normally keep 2 days in queue, but I have bumped it up to 5 and beyond and seen a whole slew (20+) of workunits that were sequential (_100, _99, _98, _97, _96, _95), with me being the first host they were assigned to.
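Just to illustrate the pattern being described (a toy simulation only, not how the real scheduler is implemented): if sequence numbers for one data file are handed out in order to whichever host asks next, then a host asking for a big batch in one go sees a consecutive run, while a host asking for one result at a time, interleaved with other hosts, sees gaps.

```python
# Toy simulation of the observation above. Purely illustrative; the request
# sizes and interleaving are made-up assumptions.
from itertools import count

seq = count(613, -1)                    # next sequence number to hand out

def request(n):
    """Give the asking host the next n sequence numbers."""
    return [next(seq) for _ in range(n)]

big_batch = request(6)                  # host with a large cache asks once
small_batches = []
for _ in range(6):                      # host with a tiny cache asks often,
    small_batches.extend(request(1))    # with other hosts grabbing numbers
    request(2)                          # in between (work taken by others)

print(big_batch)       # [613, 612, 611, 610, 609, 608] - one consecutive run
print(small_batches)   # [607, 604, 601, 598, 595, 592] - spaced out
```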
- In the next run we will also have 'long' and 'short' Workunits. To deal with that we plan to modify the scheduler so that it prefers to give the shorter WUs to the slowest machines.
That's a good one :-)
I've always suggested making the scheduler a bit smarter, and I like this plan.
Welcome back, Gary...we've missed your patient, knowledgeable and incisive responses....Cheers, Rog.