BOINC and Einstein particularly don't play by the rules ...

Keith Myers
Joined: 11 Feb 11
Posts: 4,835
Credit: 18,006,730,973
RAC: 3,653,857

RE: ... it would have been

Quote:
... it would have been courteous of the moderators to at least have left a message in the original thread that I was subscribed to that my posts had been moved to a separate thread. I was surprised to say the least to see my posts deleted without comment.
The standard procedure has a "courteous" component and I used it. When messages are moderated, the reasons why and any comments from the moderator are sent in an email to your registered email address. I took the trouble to include comments as to exactly why the post was moderated. The automated part of the message tells you where the post was moved to. It was NOT "deleted without comment". Have you checked your email?

Yes, I do check my mail several times a day. No message received from you or the forum, neither in my private forum inbox nor in my regular email. I checked the SPAM folder too. Nothing received from you or the forum. Are you sure you sent any notification that the posts were moved to a separate thread?

Cheers, Keith

 

Keith Myers
Joined: 11 Feb 11
Posts: 4,835
Credit: 18,006,730,973
RAC: 3,653,857

RE: RE: I ask again ...

Quote:
Quote:
I ask again ... what purpose are the venues provided for? Why do they even show separate disk space allocations if Default overrides them?

I think your error is in supposing that the venues are separately assignable by project.

Nope.

The model (BOINC's, not Einstein's) is that you assign a particular host to a particular location (aka venue) which then applies across all projects.

An extra detail is that one can override the location's computing preferences for a single host by setting local preferences on the host itself. This frees you from the limitation of there being only four distinct locations, but it does not give you separate control by project either.

I know about local preferences overriding everything else. I learned that back in the beginning when I first started crunching.

Thanks for explaining how venues work. So I ask again: what good are they, and for what purpose would one use them? Given how they actually work, I can't see any real functionality in them.

Cheers, Keith

 

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5,869
Credit: 113,267,283,410
RAC: 36,862,071

RE: Thanks for explaining

Quote:
Thanks for explaining how venues work. So I ask again: what good are they, and for what purpose would one use them? Given how they actually work, I can't see any real functionality in them.


They are very useful when you have several computers with quite different capabilities. They are also useful for allowing certain host(s) to participate in one set of science runs and other host(s) in a different set. You can use them to set quite different regimes for the project-specific BOINC preferences. You certainly don't need venues if you only have a single machine attached, or if you want multiple machines to do exactly the same things.

Here is a list of some things you can do on a per host basis by putting different hosts in different venues. Please realise that some things might best be described as 'aspirational goals', in that BOINC may not yet have matured to a sufficient degree for all things to work exactly as intended or perhaps as the user might expect :-).

* give one (or more) hosts a different resource share from the others.
* control whether particular hosts do CPU tasks or not.
* control which GPU type is used if the host has different brands of GPU and you want just one, or a subset, of the types.
* control precisely which science run(s) a host will receive tasks for.
* control which hosts are allowed to participate in beta tests.
* for GPU tasks, control the task concurrency by setting different GPU utilization factors for different venues.

I have used the last one in the list to set x2, x3, x4 concurrencies on different hosts, but I found it too cumbersome when playing around with different settings on a regular basis. I now have an app_config.xml file on every GPU host, and I find editing that and getting BOINC to re-read the config files to be much more convenient.
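In case it helps anyone trying this, a minimal app_config.xml for that purpose looks something like the sketch below. The app name shown is just an example; the real names for your host are listed in client_state.xml. A gpu_usage of 0.5 gives x2 concurrency (two tasks share one GPU):

```xml
<!-- app_config.xml, placed in the project's directory under the BOINC data directory -->
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>  <!-- example app name; check client_state.xml for yours -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>   <!-- 0.5 => two tasks per GPU (x2 concurrency) -->
            <cpu_usage>1.0</cpu_usage>   <!-- one CPU core budgeted per GPU task -->
        </gpu_versions>
    </app>
</app_config>
```

After editing, Options -> Read config files in the Manager makes BOINC pick up the change without restarting the client.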

I did find the venues mechanism convenient when I wanted to allow just a few hosts to join the BRP6 beta test when it first started.

Cheers,
Gary.

Keith Myers
Joined: 11 Feb 11
Posts: 4,835
Credit: 18,006,730,973
RAC: 3,653,857

RE: RE: Thanks for

Quote:
Quote:
Thanks for explaining how venues work. So I ask again: what good are they, and for what purpose would one use them? Given how they actually work, I can't see any real functionality in them.

They are very useful when you have several computers with quite different capabilities. They are also useful for allowing certain host(s) to participate in one set of science runs and other host(s) in a different set. You can use them to set quite different regimes for the project-specific BOINC preferences. You certainly don't need venues if you only have a single machine attached, or if you want multiple machines to do exactly the same things.

Here is a list of some things you can do on a per host basis by putting different hosts in different venues. Please realise that some things might best be described as 'aspirational goals', in that BOINC may not yet have matured to a sufficient degree for all things to work exactly as intended or perhaps as the user might expect :-).

* give one (or more) hosts a different resource share from the others.
* control whether particular hosts do CPU tasks or not.
* control which GPU type is used if the host has different brands of GPU and you want just one, or a subset, of the types.
* control precisely which science run(s) a host will receive tasks for.
* control which hosts are allowed to participate in beta tests.
* for GPU tasks, control the task concurrency by setting different GPU utilization factors for different venues.

I have used the last one in the list to set x2, x3, x4 concurrencies on different hosts, but I found it too cumbersome when playing around with different settings on a regular basis. I now have an app_config.xml file on every GPU host, and I find editing that and getting BOINC to re-read the config files to be much more convenient.

I did find the venues mechanism convenient when I wanted to allow just a few hosts to join the BRP6 beta test when it first started.

I agree that setting up an app_config is much easier. I might concede it would be slightly easier to restrict a host to just one type of GPU via venue settings rather than by editing an app_config and app_info, but only slightly. In my opinion and usage, I can see no real value in the venue mechanism. Everything it attempts to do can be accomplished with simple text editing. In my case, I wish I had never seen venues and been confused by them.

Cheers, Keith

 

mikey
Joined: 22 Jan 05
Posts: 12,296
Credit: 1,838,258,422
RAC: 11,313

RE: I agree that setting

Quote:


I agree that setting up an app_config is much easier. I might concede it would be slightly easier to restrict a host to just one type of GPU via venue settings rather than by editing an app_config and app_info, but only slightly. In my opinion and usage, I can see no real value in the venue mechanism. Everything it attempts to do can be accomplished with simple text editing. In my case, I wish I had never seen venues and been confused by them.

Cheers, Keith

I will throw out something for you to consider then... I have 11 PCs crunching here at my home, with various combinations of CPU cores, GPU types and capabilities. Venues let me tweak a group of them without having to go to each PC and modify a file.

For instance, I can let 3 PCs run CPU and GPU units, 3 PCs run CPU-only units, and still other PCs run GPU-only units, all by just changing them to a different venue. No modifying files and potentially getting it all wrong; just click, click and it's done.

Keith Myers
Joined: 11 Feb 11
Posts: 4,835
Credit: 18,006,730,973
RAC: 3,653,857

Hmmm, in your specific

Hmmm, in your specific scenario, I can see the benefit. For someone with a large server farm with different hardware using venues has definite advantages. For me, with just two identical servers, not so much. So, I guess I shouldn't come down so hard on the concept of venues. They can be useful. I just wish the mechanics of using them were easier for me to understand.

Cheers, Keith

 

Elektra*
Joined: 4 Sep 05
Posts: 948
Credit: 1,124,049
RAC: 0

RE: RE: So experts, just

Quote:
Quote:
So experts, just what BOINC setting will allow me to download E@H work that won't overcome my system and run me into inevitable deadlines?

The pair of settings which together govern how much work BOINC on your host requests (often informally called the queue size) are:
"Computer is connected to the Internet about every n.nn days"
"Maintain enough work for an additional m.mm days"

If you use a low enough value for the sum of these two (generic BOINC, not Einstein-specific) settings, then your downloaded Einstein work will neither overcome your system nor run you into deadline problems, unless your system is too tied up with work from other projects to attend to Einstein work.
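For reference, those two web preferences can also be set on the host itself via a local global_prefs_override.xml; a minimal sketch (the 0.25 and 0.5 values are just placeholders):

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory -->
<global_preferences>
    <!-- "Computer is connected to the Internet about every n.nn days" -->
    <work_buf_min_days>0.25</work_buf_min_days>
    <!-- "Maintain enough work for an additional m.mm days" -->
    <work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>
```

The Manager's advanced view can then re-read the local preferences so the change takes effect without restarting the client.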

Unfortunately, Einstein and some other projects tend to overwhelm the system and run it into deadline trouble by massively underestimating the runtimes of their tasks. If you ask for 4 days of work, you'll get work for 15 days on an unadjusted host. And when your client recognises that the jobs take much longer than estimated, it switches into panic mode and probably can't finish all tasks before their deadlines, especially if work from other projects is cached as well.
So you have to watch your task queue, especially after attaching new projects, updating your BOINC client, or when a project offers new applications or changes them. What is the estimated runtime of a job at the beginning of computation, and how does it develop as the job progresses? If it turns out that the effective runtime is much longer than estimated, what happens? Does the client adjust the estimates for the remaining tasks proportionally? And when new tasks are downloaded, has the project adjusted their estimated runtimes?

Einstein and POGS have a DCF of ~3.5 and ~3 respectively on my devices, which means the effective task duration is massively underestimated by the projects. But my BOINC 6 clients had learned to deal with that: when I want to fetch 7 days of work to bunker during a team challenge and set the queue size to 7 days, I get work for 7 days. With BOINC 7, at least with POGS, I would get work for 21 days and have to abandon tasks worth 14 days of crunching time because they won't finish before deadline. And unfortunately the clients won't learn; even after weeks, every new job is underestimated by a factor of 3. And unfortunately DCF is disabled by most projects, which set a tag in client_state.xml. So I have to compute a DCF myself by dividing the effective runtime of a finished task by its estimated runtime at the beginning of computation, and keep that in mind when setting my work cache parameters (2.3 days to get work for 7 days). This is unacceptable for me, so I've reverted to BOINC 6. But that may only be a solution for me, as I crunch only CPU tasks and don't have to deal with graphics cards.

So I wish you much fun setting your task cache parameters and your resource shares!

Love, Michi

archae86
Joined: 6 Dec 05
Posts: 3,152
Credit: 7,132,074,931
RAC: 533,981

Elektra* wrote:Einstein and

Elektra* wrote:
Einstein and POGS have a DCF of ~3.5 and ~3 respectively on my devices, which means the effective task duration is massively underestimated by the projects.


Your mileage may vary. I have three hosts on which BOINC runs exclusively supporting the Einstein project, with most of the work being GPU work. As there was an application change with an appreciable run-time effect about a week ago, one might expect the DCFs not to have settled down yet, but on the three hosts the reported DCF currently ranges from 0.83 to 1.17.

The (BOINC) system is far from perfect, but it is far from true that Einstein consistently "massively underestimates" task durations.

Keith Myers
Joined: 11 Feb 11
Posts: 4,835
Credit: 18,006,730,973
RAC: 3,653,857

RE: Elektra* wrote:Einstein

Quote:
Elektra* wrote:
Einstein and POGS have a DCF of ~3.5 and ~3 respectively on my devices, which means the effective task duration is massively underestimated by the projects.

Your mileage may vary. I have three hosts on which BOINC runs exclusively supporting the Einstein project, with most of the work being GPU work. As there was an application change with an appreciable run-time effect about a week ago, one might expect the DCFs not to have settled down yet, but on the three hosts the reported DCF currently ranges from 0.83 to 1.17.

The (BOINC) system is far from perfect, but it is far from true that Einstein consistently "massively underestimates" task durations.

Not surprised to see someone else having issues with the amount of work that Einstein downloads if unrestricted. I've beaten this dead horse into the ground. I just set NNT and then ask for a handful of tasks when the cache drops to zero or close enough. It is also interesting to set dcf_debug and read the logfiles to see just how wild the gyrations in the DCF estimate are for Einstein. I wish Einstein would just set dcf=1 like all my other projects and cap the number of onboard tasks at a time. That works splendidly for SETI and MW. I choose to process all my projects on all my GPUs. Time to revert from the beta app to stock for Einstein; that experiment was a bust.
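For anyone wanting to try that, dcf_debug is one of the client's log flags and goes in cc_config.xml in the BOINC data directory; a minimal sketch:

```xml
<cc_config>
    <log_flags>
        <dcf_debug>1</dcf_debug>  <!-- log changes to the duration correction factor -->
    </log_flags>
</cc_config>
```

Reload it with Options -> Read config files in the Manager, or boinccmd --read_cc_config from a terminal; the DCF messages then show up in the event log.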

 
