> > Hit the enter key a little too quick. On my one testbed computer running
> > two projects the resources only control time spent on each project. The
> > most recently updated cache setting is used across the board for both
> > projects. The cache for each project doesn't see the other; they both use
> > the most recently updated setting, which may make the total cache extend
> > beyond the deadline(s). There, think I got it right this time. But then
> > you can run projects independently, on the same computer, to get around
> > this as I mentioned. Haven't tried yet.
>
> Yes, cache settings propagate to all projects you are attached to, and
> resource share is a per-project setting. I'm not talking about the cache per
> se, but about the resource share's effect on the cache. Nor am I talking
> about the resource share's effect on how long a particular project runs for.
>
> Let me try again...
>
> Some have brought up in this post (and others) that resource share doesn't
> influence the scheduler on their machines. I see something very different.
>
> Using LHC and Pirates as a test bed is ideal right now because they aren't
> currently giving me any work, and I have no WUs from them in my queue to
> influence the amount of work requested. (and each update will request
> work)
>
> (Tests using CC ver. 4.19, with my current cache setting of 2 days)
>
> Pirates resource share 50 - Updating project asks for 10,173 seconds of work.
> Pirates resource share 100 - Project now asks for 19,156 seconds of work.
> Pirates resource share 1000 - Project now asks for 93,045 seconds of work.
>
> LHC resource share 50 - Updating project asks for 10,173 seconds of work.
> LHC resource share 100 - Project now asks for 19,150 seconds of work.
> LHC resource share 1000 - Project now asks for 93,013 seconds of work.
>
> Resource share is indeed influencing the amount of work being requested
> by the scheduler. Both projects are asking for almost exactly the same amount
> of work with the same share, and both ask for more with a higher share.
>
> So back to my original question. How come some are saying that resource share
> does not influence the scheduler? Apparently it works differently on various
> systems?
>
> In a nutshell, is this a bug?
>
> It seems to me if this is the case, yes, it can very definitely FUBAR how
> projects interact on some systems.
>
> Hopefully I explained it well enough that people can follow me this time,
> otherwise, I give up...
>
It may very well be a difference of CC versions. I am running 4.62 on Windows systems. If the people reporting that it DOES affect the amount of work downloaded are running 4.19, that would nicely explain the difference. If that is the case, I would think that is a bug in 4.62.
This is an interesting discussion of resource share. I was just thinking (dangerous, I know) -- on my systems, everything has the default resource share of 100. This results in a system running 5 projects getting a resource share of 20 for each. Suppose I set the resource share for each project to 20 (what it actually gets). Would each project request the same amount of work? What if I set them all to 1000? In other words, is the amount of work requested dependent on the *requested* (or absolute) resource share or the resultant (or relative) resource share? Assume I want them all to be the same.
A good experiment if I get the time to play with it, or if someone else feels like it...
> This is an interesting discussion of resource share. I was just thinking
> (dangerous, I know) -- on my systems, everything has the default resource
> share of 100. This results in a system running 5 projects getting a resource
> share of 20 for each. Suppose I set the resource share for each project to 20
> (what it actually gets). Would each project request the same amount of work?
> What if I set them all to 1000? In other words, is the amount of work
> requested dependent on the *requested* (or absolute) resource share or the
> resultant (or relative) resource share? Assume I want them all to be the
> same.
Well, I have three projects to test with that either aren't currently active or have no work to give: BOINC beta, LHC, and Pirates. Moving their resource shares up and down simultaneously always requests the exact same amount from each project. If I move them up and down separately, the numbers are a few seconds off. (similar to my earlier post showing LHC and Pirates at different shares)
And like the tests I've already done, the lower the resource share, the less work is requested.
Obviously, with all of them set to 1000, they don't each ask for as much work as a single project at 1000 would, because of the percentage split.
Now, while there were four other projects whose shares I wasn't adjusting, I have no reason to think it wouldn't react the same way if I were adjusting all projects. I believe they would all request the same amount given equal shares.
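The numbers behave as if the request is cache time multiplied by a project's share divided by the sum of all shares. That formula is my reading of 4.19's behavior from these tests, not anything out of the BOINC source, but if it's right, only the relative split matters. A quick sketch with made-up shares and cache:

    #!/bin/sh
    # Inferred 4.19 rule: request_i = cache_days * 86400 * share_i / sum(shares).
    # Five projects, same relative split at two absolute scales.
    cache_days=2
    for shares in "20 20 20 20 20" "1000 1000 1000 1000 1000"; do
        echo "shares: $shares"
        echo "$shares" | awk -v d="$cache_days" '{
            for (i = 1; i <= NF; i++) total += $i
            for (i = 1; i <= NF; i++)
                printf "  project %d requests ~%.0f s\n", i, d * 86400 * $i / total
        }'
    done

Both runs print the same ~34,560 seconds per project, which matches what I see on the live client: equal shares give equal requests, regardless of the absolute numbers.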
It seems to me that this is where a lot of people are getting messed up. The general consensus seems to be that resource share only affects how long a project runs, and people are wantonly futzing with it, not realizing it is also wonking the scheduler. I see a lot of "Why am I getting so much work if I'm only set to connect every day?" type posts. I wouldn't be surprised if they have the resource share bumped up on that project and it causes them to overshoot. It gets especially hairy if they decide they want a few more WUs on the projects with lower shares, so they increase the "connect to" value, then freak out because the project with the higher resource share is now asking for way more work than any other project.
I'd still like to hear other people's resource share experiences and which CC they are using, because it still seems like something isn't right somewhere. JM7 is using 4.62, and resource share doesn't scale the work requested; I'm using 4.19, and it very definitely scales the requests by share.
Hmm, I'm using 4.19. I've never set any project higher than any other (i.e. all have always been set to 100 and still are), and they all always request too much work. I used to have my cache set to 2 days (whatever that means), and I would get more than a week's work for each project responding (currently 4 out of 5, say, not counting Pirates). I have reduced the cache setting to one day, and we'll see how that works out.
I *DID* notice that my CPDN cache setting was 5 days, and would NOT track the others, so I had to manually set that also to the same thing. I have an unresolved problem with CPDN -- I can't set my email to the same as all other BOINC projects -- apparently they either can't send me the confirming email or my ISP is blocking their server, but that's a different issue I'm still trying to resolve. But anyway, it almost doesn't matter with CPDN since ANY work from them takes at least a month.
> Hmm, I'm using 4.19. I've never set any project higher than any other, (i.e.
> all have always been set to 100 and still are), and they all always request
> too much work. I used to have my cache set to 2 days (whatever that means),
> and I would get more than a week's work for each project responding (currently
> 4 out of 5, say, not counting Pirates). I have reduced the cache setting to
> one day, and we'll see how that works out.
I wasn't saying that you won't get more work than you expect with a resource share of 100. Simply that you have a much better chance of that happening if you set a project higher than 100.
And the big problem isn't really too much work from a single project, but the fact that every project you attach to tries to download a full queue. If you set your "connect to" at 2 days, all projects will ask for 2 days each; five projects asking for 2 days apiece can mean 10 days of work sitting on a single-CPU box. This can easily put some systems into the red.
> I *DID* notice that my CPDN cache setting was 5 days, and would NOT track the
> others, so I had to manually set that also to the same thing. I have an
> unresolved problem with CPDN -- I can't set my email to the same as all other
> BOINC projects -- apparently they either can't send me the confirming email or
> my ISP is blocking their server, but that's a different issue I'm still trying
> to resolve. But anyway, it almost doesn't matter with CPDN since ANY work
> from them takes at least a month.
Since CPDN doesn't contact the server very often, it can take a while to update the global settings.
> (Tests using CC ver. 4.19, with my current cache setting of 2 days)
>
> Pirates resource share 50 - Updating project asks for 10,173 seconds of work.
> Pirates resource share 100 - Project now asks for 19,156 seconds of work.
> Pirates resource share 1000 - Project now asks for 93,045 seconds of work.
>
> LHC resource share 50 - Updating project asks for 10,173 seconds of work.
> LHC resource share 100 - Project now asks for 19,150 seconds of work.
> LHC resource share 1000 - Project now asks for 93,013 seconds of work.
Without knowing what percentage of crunching time this resource share gives, I cannot tell if the amounts in seconds on your PC are reasonable or not.
Looking at my system, BOINC 4.19, and a current request for work on Predictor:
Resource share is 40, which is 12.5%, with a 1 day cache.
It is requesting:
May run out of work in 1.00 days; requesting more
Requesting 12775 seconds of work
12775 seconds is in fact 14.78% of the computer's available time, or a 27.5 hour day!
If this error is being compounded on longer cache settings ....
You can't get a partial WU, so the next whole one will be downloaded. This is based on the project's "target time for completion" of the WU.
We can see that the BOINC client is demanding too much work, and this is compounded by the WU rounding up.
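That arithmetic is easy to check mechanically. A small sketch; the 320 total share is inferred from the stated 12.5%, so treat it as an assumption:

    # Compare the expected request (12.5% of one day) with what was asked for.
    awk 'BEGIN {
        share = 40; total = 320            # total inferred from the stated 12.5%
        expected  = 86400 * share / total  # 1 day cache
        requested = 12775
        printf "expected  %5.0f s (%.1f%% of a day)\n", expected, 100 * expected / 86400
        printf "requested %5d s (%.1f%% of a day)\n", requested, 100 * requested / 86400
        printf "overshoot %.1f%%\n", 100 * (requested / expected - 1)
    }'

That prints an expected 10,800 seconds against the 12,775 requested, an overshoot of roughly 18% before the whole-WU rounding even enters into it.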
> Without knowing what percentage of crunching time this resource share gives, I
> cannot tell if the amounts in seconds on your PC are reasonable or not.
In the case of LHC, the 1000 is definitely not reasonable, but that's not really the point of my posting that data. I'm simply showing that resource share does cause BOINC to ask for more work from the scheduler. (at least using CC 4.19)
The point I've been trying to make is that since some are not seeing this behavior on their systems, something is obviously messed up somewhere, and it should definitely be looked at. Apparently, as JM7 states, 4.62 does not take resource share into account when requesting work. I'm curious whether it is only 4.62, or if others are seeing the same thing using other CCs. (such as 4.19)
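For what it's worth, my own numbers don't scale linearly with the raw share either; they fit the share-divided-by-total rule from earlier in the thread. Here's a sketch fitting the LHC/Pirates figures; the 800 used for the combined share of everything I wasn't adjusting is an assumed figure for illustration:

    # Model: request = cache_seconds * share / (share + others)
    awk 'BEGIN {
        others = 800              # assumed combined share of the other projects
        cache  = 2 * 86400        # 2 day cache, in seconds
        n = split("50 100 1000", s, " ")
        for (i = 1; i <= n; i++)
            printf "share %4d -> predicted %6.0f s\n", s[i], cache * s[i] / (s[i] + others)
    }'

That predicts roughly 10,165 / 19,200 / 96,000 seconds against the observed 10,173 / 19,156 / 93,045, so the request looks proportional to the relative share, not the absolute number.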
Hi,
To control the % of the resources exactly, I use the following figures:
CPDN:500
E@H:300
P@H:80
LHC:80
Seti:40
which gives: 50%, 30%, 8%, 8%, 4% :o)
With a 0.1 day cache on one machine (the slow one) and a 4 day cache (to have 1 week of work to crunch) on the second machine.
I noticed that CC 4.62 is downloading more WUs than CC 4.19.
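Checked with a throwaway one-liner (using the figures above):

    echo "CPDN 500  E@H 300  P@H 80  LHC 80  Seti 40" |
    awk '{ for (i = 2; i <= NF; i += 2) total += $i
           for (i = 1; i < NF; i += 2) printf "%s %.0f%%  ", $i, 100 * $(i + 1) / total
           print "" }'

which prints CPDN 50%  E@H 30%  P@H 8%  LHC 8%  Seti 4%.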
> Hi,
> To control the % of the resources exactly, I use the following figures:
> CPDN:500
> E@H:300
> P@H:80
> LHC:80
> Seti:40
> which gives: 50%, 30%, 8%, 8%, 4% :o)
> With a 0.1 day cache on one machine (the slow one) and a 4 day cache (to
> have 1 week of work to crunch) on the second machine.
> I noticed that CC 4.62 is downloading more WUs than CC 4.19.
Which would make sense if 4.62 isn't taking resource share into account when requesting more: all projects are downloading WUs as if their share were 100.
Whereas with your 4.19 install you should see fewer, because CPDN won't actually download any more WUs with its higher share; the WUs are so long as to make it irrelevant, so it's effectively 100 as far as the scheduler is concerned. E@H will most likely only give you one or two WUs more than if it were set at 100, due to its longer run times compared with the rest of the projects. However, P@H, LHC, and S@H will all be requesting fewer WUs with their lower resource shares.
So it sounds like 4.19 is behaving, and 4.62 is hosed...
An experiment to resolve the cache/deadline issue when running multiple projects on one computer. I made separate directories for each project, installed the BOINC CLI in each, and attached each install to just one project; in my case S@H and E@H. I used Webmin on my Linux system to create three cron jobs to turn the projects on and off. To turn off a project, cron just issues killall boinc_4.19_i686-pc-linux-gnu. Killing BOINC also kills the project client, which is a child process, so I don't have to keep track of the project client's version number.
Cron wouldn't directly start the projects for some reason; I got an "another instance of BOINC running" error message (permissions/lockfile issue?), but top and ps didn't show anything running. So I used a couple of short bash scripts: #!/bin/sh, then cd /path to project directory/, then ./boinc_4.19_i686-pc-linux-gnu & (sketched below). I made the scripts executable, then used cron to start each project via its bash script. Two scripts are needed because each project lives in a separate directory.
The setup has been running for two days without problems. Advantages: complete control of the run time for each project (run S@H from 9am to 5:59pm and E@H from 6pm to 8:59am, when E@H is less likely to be interrupted), and complete control of the cache for each project, so it can be adjusted for the deadline. I noticed that when running multiple projects from a single BOINC client, the most recent cache setting migrates to all the project webpages; since I have other computers running only S@H, those were affected too, so now I can keep the cache limit I want. Some disadvantages: more manual overhead is involved in setting up the projects, and if a project runs out of WUs, BOINC won't automatically switch over to another project, though having independent control of the cache makes that less likely. The setup works on Linux, and should also work on a recent Mac since that's *nix based; I don't know whether Windows has a similar cron facility. I'll migrate my other computers to this when E@H gets out of beta, and will likely discard it when BOINC has a cache/deadline check built in, or when a problem shows up. So far so good.
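For reference, the launcher script looks something like this; the paths, script names, and directory layout here are examples rather than my exact setup:

    #!/bin/sh
    # start_seti.sh -- launch this install's client from its own directory,
    # so the two copies don't see each other's lockfiles.
    cd /home/boinc/seti || exit 1
    ./boinc_4.19_i686-pc-linux-gnu &

with a matching start_einstein.sh in the other project's directory, and crontab entries along these lines:

    # stop the running client just before each handover, then start the other
    59 8  * * * killall boinc_4.19_i686-pc-linux-gnu
    0  9  * * * /home/boinc/start_seti.sh
    59 17 * * * killall boinc_4.19_i686-pc-linux-gnu
    0  18 * * * /home/boinc/start_einstein.sh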