Forcing BOINC to do multiple WUs at once

Jordan Wilberding
Joined: 19 Feb 05
Posts: 162
Credit: 715,454
RAC: 0
Topic 188905

How do I force BOINC to do multiple WUs at once? I am trying to see if I can run multiple processes on one host and have my openMosix cluster migrate the processes properly.

Thanks!

such things just should not be writ so please destroy this if you wish to live 'tis better in ignorance to dwell than to go screaming into the abyss worse than hell

Divide Overflow
Joined: 9 Feb 05
Posts: 91
Credit: 183,220
RAC: 0

Forcing BOINC to do multiple WUs at once

> How do I force boinc to do multiple WU's at once. I am trying to see if I can
> run multiple processes on one host, and have my openmosix cluster migrate the
> processes properly.

AFAIK you can’t “force” BOINC to run multiple WUs at the same time. You can use a multiprocessor system, or enable Hyper-Threading on Intel CPUs that support it, and BOINC will automatically run a WU on each logical processor. BOINC does not support client-side cluster processing.

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5,385,205
RAC: 0

> How do I force boinc to do

> How do I force boinc to do multiple WU's at once. I am trying to see if I can
> run multiple processes on one host, and have my openmosix cluster migrate the
> processes properly.
>
> Thanks!

Even if you have set "Use at most x processors" to a number greater than 1, if BOINC has not "seen" any additional processors then there is no way at this time to make this happen.

What you could do is take the current BOINC code, add this feature, and then petition to have the extension added to the baseline ...
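For what it's worth, later versions of the core client did gain a cc_config.xml option, <ncpus>, which tells the client to assume more processors than it really has, which is exactly the kind of extension being asked about here. Assuming a client new enough to read cc_config.xml, a minimal sketch of writing that file from Python might look like the following; the data-directory path is only a placeholder.

```python
# Sketch only: write a cc_config.xml telling a (later) BOINC client to
# assume 4 CPUs, so it schedules 4 tasks at once. The data directory below
# is a placeholder -- adjust it for your installation, then restart the
# client or have it re-read its config file.
from pathlib import Path

data_dir = Path("/var/lib/boinc")  # placeholder; wherever the client keeps its data files

cc_config = """<cc_config>
  <options>
    <ncpus>4</ncpus>
  </options>
</cc_config>
"""

(data_dir / "cc_config.xml").write_text(cc_config)
print("Wrote", data_dir / "cc_config.xml")
```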

Jordan Wilberding
Joined: 19 Feb 05
Posts: 162
Credit: 715,454
RAC: 0

Would it be possible to run

Would it be possible to run multiple instances of BOINC from different directories? Or does it check for other instances via IPC or something?

Thanks!

such things just should not be writ so please destroy this if you wish to live 'tis better in ignorance to dwell than to go screaming into the abyss worse than hell

gravywavy
Joined: 22 Jan 05
Posts: 392
Credit: 68,962
RAC: 0

> Would it be possible to run

Message 10288 in response to message 10287

> Would it be possible to run multiple instances of boinc from different
> directories? Or does it check for other instances via IPC or something?
>
> Thanks!
>
>

I have seen error messages to the effect that BOINC is already running. This suggests that your workaround will fail, but it is still worth a try.
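If you do want to try the separate-directories idea, a minimal sketch (Python, purely illustrative) is below: the core client treats its current working directory as its data directory, so the trick is simply to start each copy from its own folder. The directory names and the path to the client binary are placeholders, and the client's lock file may well stop the second copy with exactly the "already running" message mentioned above.

```python
# Purely illustrative: try starting two copies of the BOINC core client,
# each from its own directory. Paths are placeholders; the client's own
# lock file may still refuse to let a second copy run.
import subprocess
from pathlib import Path

client = "/path/to/boinc_client"              # placeholder for the client binary
instances = [Path.home() / "boinc1", Path.home() / "boinc2"]

procs = []
for d in instances:
    d.mkdir(parents=True, exist_ok=True)
    # The client uses its working directory as its data directory,
    # so each copy keeps its own project files and state here.
    procs.append(subprocess.Popen([client], cwd=d))

for p in procs:
    p.wait()
```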

Four dirty ways around this come to mind:

1) Run the classic and BOINC versions of the software. SETI and CPDN both have non-BOINC editions, so you could get up to three threads going that way.

2) Run two BOINCs on a Windows XP machine from separate XP user accounts (and from separate folders), making sure you switch users rather than log out when swapping from one to another. This just might be enough to fool BOINC into not noticing the other process. I'd hope, however, that this idea will fail: if it worked it would raise all sorts of evil-hacker possibilities... Even if it did work, there would hopefully be a patch in a later Windows update to stop it.

3) On a Linux box, run one BOINC natively under Linux and another, Windows, version under WINE.

4) Run virtual PC software, so that each virtual machine runs its own BOINC on the same real PC. Maybe someone else can post relevant links for where to get virtual PC software. My guess is that any such attempt might cost more in overhead than the CPU time it saved.

My experience

The only one I have tried is option 1.

I'd been running Zetagrid (www.zetagrid.net - not a BOINC project) and wanted to start running CPDN. They ran in parallel happily for several months, each getting a reasonable share of the CPU.

Then I dropped down to just CPDN, and shortly afterwards decided I did not want to run a project with such long WUs, so I switched to E@H. I waited until close to the end of a CPDN classic WU, then installed E@H, and the two overlapped for about a day.

In fact, what happened is that classic CPDN got more than 99% of the machine cycles during its last few hours of crunching, despite apparently being on the same "low" priority as the BOINC client. The BOINC E@H task only reached its fifth minute of elapsed CPU once the CPDN classic client started its disk-intensive end-of-run processing.

This was a surprise to me, as classic CPDN and Zetagrid had shared the CPU roughly equally. I really don't know why some projects share the CPU more fairly than others when all of them run at the same low OS priority.

Even though CPDN hogged the CPU, the handover worked perfectly, with CPDN dropping from 96% to 0% and BOINC rising from 0% to 96% in about 8 seconds (two task-meter steps). What it would not have done is share the CPU if I had carried on with classic CPDN.

Until you know whether two projects will share the CPU when running concurrently, you could use the scheduling facility in both clients to give each client a few hours a day to itself, then let them overlap for part of the day to see how they get on. For example, telling one project to run 0200-2200 and the other 0000-1400 and 1800-0000 gives each project four hours a day to itself plus sixteen hours when both are allowed to run. That would give you the chance to notice whether sole possession of the CPU made a transient (or long-term) difference. If need be, you could adjust the scheduling later on to make sure both projects get their WUs in by the deadlines.
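To make that arithmetic concrete, here is a small Python check (the hour ranges are the ones from the example above) counting how many hours each project gets to itself and how many hours they overlap:

```python
# Count exclusive and shared hours for two daily run schedules.
# Schedule A: 0200-2200; schedule B: 0000-1400 and 1800-2400 (i.e. 1800-0000).
def running(schedule, hour):
    return any(start <= hour < end for start, end in schedule)

a = [(2, 22)]
b = [(0, 14), (18, 24)]

only_a = sum(1 for h in range(24) if running(a, h) and not running(b, h))
only_b = sum(1 for h in range(24) if running(b, h) and not running(a, h))
both   = sum(1 for h in range(24) if running(a, h) and running(b, h))

print(only_a, only_b, both)   # -> 4 4 16
```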

For myself, though, I am not interested enough to put in the effort to try it. Good luck, and I hope these thoughts are useful to somebody; if so, I'd be interested to know how you get on.

~~gravywavy


Jordan Wilberding
Joined: 19 Feb 05
Posts: 162
Credit: 715,454
RAC: 0

All of these are great ideas,

Message 10289 in response to message 10288

All of these are great ideas, but the only really viable one would be running Virtual PC or VMware, which would eat up too many resources. As I said before, I have 11 computers on my cluster, and 11 instances of VMware is just too much :)

I do have another idea I am going to try, though: User-mode Linux. It seems like it just might do the trick, unless I am missing something in my thinking, which is very possible :)

Thanks!

such things just should not be writ so please destroy this if you wish to live 'tis better in ignorance to dwell than to go screaming into the abyss worse than hell
