Cross-posted to the cpdn help forum.
My machine is an A64x2. Several days ago I added a single cpdn work unit to my project list.
I have 3 other projects, all with equal resource share. Of these, LHC doesn't have any work, and simap is suspended on this PC (I'm running it on an older box and only added it to this one for stats display). My 4th project is Einstein. It and cpdn are running at 100% on alternate cores.
The problem is that since I started the cpdn unit, BOINC refuses to download any work for Einstein because my machine is overcommitted, and my cache has dropped from its normal level of several days to 8 hrs. Einstein itself is running normally and has work available. Once my cache runs out, my second core will be idle. Short of aborting the cpdn unit and going back to 100% Einstein, how can I get BOINC to provide work for both of my cores?
Resource share problem since adding cpdn to my active projects.
That's why I definitely stopped CPDN on my hosts.
Also because Carl couldn't find out why my hosts couldn't last more than a minute without crashing the model when they switched to hash .13 and .14, when they did fine with the previous .12.
Also because I no longer accepted CPDN using my hard drives to store its gigs of work units.
Sorry if this doesn't help you solve your problem ... unless you detach from CPDN.
Download only one CPDN model,
Download only one CPDN model, then set No new work and give CPDN a very high resource share. Now the CPDN model always runs on one of the cores and the other core is free for other projects.
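If you prefer to script the No new work part, here's a minimal sketch using the boinccmd RPC tool (named boinc_cmd in the 5.x series); the project URL is just an example, substitute the one shown in your client's project list:

    # Minimal sketch: flag CPDN "no new work" so only the one model stays.
    # Assumes the boinccmd tool is on PATH (boinc_cmd in the 5.x series);
    # the project URL below is an assumption, use your own.
    import subprocess

    CPDN_URL = "http://climateprediction.net/"  # assumed URL

    # Ask the core client to finish current tasks but fetch no more.
    subprocess.run(["boinccmd", "--project", CPDN_URL, "nomorework"], check=True)

The very high resource share still has to be set in CPDN's web preferences; there's no local setting for it.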
RE: Download only one CPDN
I only have a single model. I hit the no new work option as soon as the first was downloaded to avoid a second showing up.
RE: RE: Download only one
Check your global preferences on CPDN to make sure you are still set to use both cores. BOINC takes its global preference setting from the most recently updated project and I believe they all default to using one CPU per host.
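If you want to verify what the client actually received, the merged preferences end up in global_prefs.xml in the BOINC data directory. A rough sketch for checking the CPU limit (the element name varies between client versions, so this tries a couple of likely names):

    # Rough sketch: report the CPU limit from BOINC's global preferences.
    # Assumes global_prefs.xml is in the current (data) directory; the
    # element names are assumptions and vary by client version.
    import xml.etree.ElementTree as ET

    def cpu_limit(path="global_prefs.xml"):
        root = ET.parse(path).getroot()
        for tag in ("max_ncpus", "max_cpus"):  # names seen in different versions
            node = root.find(".//" + tag)
            if node is not None:
                return int(float(node.text))
        return None  # no limit element found, i.e. use all CPUs

    print("CPU limit in prefs:", cpu_limit())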
-- Tony
Check the Long Term Debt of
Check the Long Term Debt of LHC.
If it's not suspended, it will (depending on your BOINC version) reach very high values and eventually reduce your cache to effectively 0 (which makes CPDN run a lot of the time, as it cannot run dry until the work unit is finished).
Suspending LHC is the only way to stop it accumulating LTD and triggering the BOINC scheduling LTD bug that kills the cache size. Also, once it is suspended, its existing LTD must be reset to a value below ~100s (e.g. by resetting the project as often as required), otherwise the bug will still persist.
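To picture why that happens, here is a toy model of the long-term-debt bookkeeping (just an illustration of the idea, not BOINC's actual scheduler code): each period, every non-suspended project is owed CPU time in proportion to its resource share; running pays the debt down, while a project that never has work only accumulates.

    # Toy model: LTD accrual for three equal-share projects, one of which
    # (LHC) never has work. Illustrative only, not BOINC's scheduler code.
    def simulate(periods=100):
        projects = {
            "einstein": {"share": 1.0, "has_work": True,  "ltd": 0.0},
            "cpdn":     {"share": 1.0, "has_work": True,  "ltd": 0.0},
            "lhc":      {"share": 1.0, "has_work": False, "ltd": 0.0},
        }
        total = sum(p["share"] for p in projects.values())
        for _ in range(periods):
            runnable = [p for p in projects.values() if p["has_work"]]
            for p in projects.values():
                owed = p["share"] / total                 # fair share of the period
                got = 1.0 / len(runnable) if p["has_work"] else 0.0
                p["ltd"] += owed - got
        return {name: round(p["ltd"], 1) for name, p in projects.items()}

    print(simulate())  # lhc's debt climbs steadily; suspending it stops the accrual

A debt that only ever grows is exactly what feeds the cache-killing bug described above.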
My preferences for all 4
My preferences for all 4 projects are 4 processors.
LHC hasn't had any work since I joined a week ago. Long-term debt is at 125505, short-term at 0. I'm running BOINC 5.4.9 and BoincStudio 0.5.5.
Once I bottomed out last night, I downloaded ~2.5 days of work for one core, and appear to have had both cores running Einstein continually since then. My cache size is set at 5 days.
RE: My preferences for all
Dan, you should update your BoincStudio version ;)
http://forum.boincstudio.boinc.fr
If you replace the 5.4.9 boinc.exe with the BS core, you will be able to set the cache directly from BS (bypassing your BOINC preferences).
Setting a low cache for CPDN may, I think, solve your problem.
Is there a more recent
Is there a more recent version than 0.5.5 available somewhere? That version rarely lasted more than 24 hrs before locking up, and the CRCs and dates of the development build I downloaded are identical to the release version.
RE: Is there a more recent
0.5.5 gets updated automatically from the developer's site (currently offline).
I placed it here for you:
http://blackholesun.neuf.fr/BoincStudio/bs_current_linux.rar
http://blackholesun.neuf.fr/BoincStudio/bs_current_win32.rar
Is this a onetime thing I can
Is this a one-time thing I can do and then shut the BoincStudio app down? The last time I was using v0.5.5, I averaged having to end-task BoincStudio two or three times a day because it locked up.