23 hours for a WU???

Moorkens
Joined: 9 Oct 05
Posts: 4
Credit: 19,920,995
RAC: 786
Topic 191896

I noticed a few strange things:
About three weeks ago a WU took about 24,000 secs on average. Then it went up to around 44,000 secs, but the latest one was over 80,000 secs.
This is on a Mac Mini 1.42 GHz G4 with no changes to software or set-up. A WU of 80,000 secs gets the same amount of credit as one of 44,000 secs. Can this be right? Have they changed the software or the credit policy?

Greetz,

T

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2,143
Credit: 2,919,935,315
RAC: 952,963

23 hours for a WU???

Quote:
I noticed a few strange things:
About three weeks ago a WU took about 24,000 secs on average. Then it went up to around 44,000 secs, but the latest one was over 80,000 secs.
This is on a Mac Mini 1.42 GHz G4 with no changes to software or set-up. A WU of 80,000 secs gets the same amount of credit as one of 44,000 secs. Can this be right? Have they changed the software or the credit policy?


No, there hasn't been a change to the credit policy recently. A new version of the program for your machine was released on 3 September: you have the new version, and it should have made the work substantially quicker.

In your most recent (80,000-second) result, there is a section in the middle where it says:

2006-10-02 10:23:37.1622 [normal]: Start of BOINC application 'einstein_S5R1_4.26_powerpc-apple-darwin'.
2006-10-02 10:23:37.1702 [normal]: Started search at lalDebugLevel = 0
2006-10-02 10:23:41.1964 [normal]: Found checkpoint-file 'Fstat.out.ckp'
Failed to read checkpoint-counters from 'Fstat.out.ckp'!
2006-10-02 10:23:41.1975 [normal]: No usable checkpoint found, starting from beginning.

- in other words, there was a glitch towards the end, and it had to start again right back at the beginning. That probably accounts for the double time, and hopefully it's a one-off and won't happen again.
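
For what it's worth, those messages come from the standard checkpoint-or-restart logic: try to read the saved counters back in, and if that fails for any reason, throw the checkpoint away and begin from step zero. Here is a minimal sketch of the idea (hypothetical Python for illustration only; the real Einstein app is C, and the file contents and helper names here are made up):

import os

CKPT_FILE = "Fstat.out.ckp"    # file name taken from the log above

def read_checkpoint():
    """Return the number of completed steps, or None if the file is missing or unreadable."""
    try:
        with open(CKPT_FILE) as f:
            return int(f.readline())
    except (OSError, ValueError):
        # Missing file or garbled contents -> "No usable checkpoint found"
        return None

def write_checkpoint(steps_done):
    # Write to a temporary file and rename it into place, so an interrupted
    # write cannot leave a half-written (and therefore unreadable) checkpoint.
    tmp = CKPT_FILE + ".tmp"
    with open(tmp, "w") as f:
        f.write(f"{steps_done}\n")
    os.replace(tmp, CKPT_FILE)

def do_one_step(step):
    pass    # placeholder for the real computation

def run(total_steps):
    start = read_checkpoint()
    if start is None:
        print("No usable checkpoint found, starting from beginning.")
        start = 0
    for step in range(start, total_steps):
        do_one_step(step)
        if (step + 1) % 100 == 0:
            write_checkpoint(step + 1)    # record how many steps are complete

If anything goes wrong between writing that file and reading it back (a crash mid-write, a disk hiccup, something else touching the file), the only safe thing the app can do is what the log shows: discard the checkpoint and start over.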

It looks as if your machine is running continuously, but Einstein is stopping and starting every hour - I presume you are using BOINC to run a second project. You may find that you get fewer errors like this if you change the setting "Leave applications in memory while suspended?" to "Yes". It's on the page of information for 'Your account' on this website: follow the 'General Preferences' link, and it's the fourth item down.

After you've made and saved the change on the website, go back to the BOINC program running on your machine, select Einstein on the projects tab, and click the update button to tell the program to read the new settings from the website.
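
Side note: later versions of the BOINC client can also take these preferences from a local file, global_prefs_override.xml, in the BOINC data directory, which overrides the website settings for that machine only. Assuming a client version that supports local overrides (which the 2006-era clients in this thread may not), the entry for this setting would look something like this:

<global_preferences>
    <leave_apps_in_memory>1</leave_apps_in_memory>
</global_preferences>

The website route is still the simplest, though, and keeps the setting in sync across all your machines.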

Moorkens
Joined: 9 Oct 05
Posts: 4
Credit: 19,920,995
RAC: 786

Thanks for that, Richard, now

Thanks for that, Richard, now it makes sense!

Is there an option that makes BOINC finish a WU from one project before starting the other?

Cheers,

Tom

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2,143
Credit: 2,919,935,315
RAC: 952,963

RE: Is there an option that

Message 47459 in response to message 47458

Quote:
Is there an option that makes BOINC finish a WU from one project before starting the other?


AFAIK, no, there isn't.

You can change the time slice it spends on each project, using

"Switch between applications every (recommended: 60 minutes)"

- the setting immediately below the one I pointed you to earlier - but BOINC is designed to work in a sharing, collaborative sort of way, and giving exclusivity to just one project (even temporarily until a WU finishes) rather goes against the grain.
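
If you ever use the local global_prefs_override.xml file mentioned earlier, the matching tag for this time-slice setting is cpu_scheduling_period_minutes (same caveat about client version; 120 is just an example value):

<global_preferences>
    <cpu_scheduling_period_minutes>120</cpu_scheduling_period_minutes>
</global_preferences>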

Jim Milks
Joined: 19 Jun 06
Posts: 116
Credit: 529,852
RAC: 0

RE: I noticed a few strange

Quote:

I noticed a few strange things:
About three weeks ago a WU took about 24,000 secs on average. Then it went up to around 44,000 secs, but the latest one was over 80,000 secs.
This is on a Mac Mini 1.42 GHz G4 with no changes to software or set-up. A WU of 80,000 secs gets the same amount of credit as one of 44,000 secs. Can this be right? Have they changed the software or the credit policy?

Greetz,

T

Had the same thing happen on a Windows PC: it normally took ~50,000 sec for a long workunit, then shot up to >150,000 sec per long workunit. After clearing out all the spyware/malware/viruses, installing the latest updates, antivirus software, anti-spyware/malware tools, resetting the project, etc., the time per workunit still hadn't decreased. So I took that PC off Einstein and attached it to Rosetta. It's doing fine crunching for Rosetta.

My PowerPC iMac and Intel MacBook are still attached to Einstein. The iMac is doing fine and so was the MacBook before it went down with a logic board problem last Thursday. To make it worse, Apple is out of spare parts for it, so who knows when I'll get it back. And I've got to analyze data and write grant proposals this quarter. Oh, well.

Jim Milks

Annika
Joined: 8 Aug 06
Posts: 720
Credit: 494,410
RAC: 0

That's awful, Jim, I really

That's awful, Jim, I really hope they manage to fix it quickly. I kinda know how you feel, as I have to do without my laptop atm; it's travelling all through Germany to have the sound card repaired ;-) Less than a week now, and I'm missing it already.
@Richard: I'm not really convinced of this "leave WUs in memory" thing. Wouldn't that slow down the computer if you needed the memory for other stuff? Or do the WUs get stored in the swap space? Atm I'm not using this feature, to avoid BOINC slowing down my PC too much, and it has never caused any problems so far, but since everyone says the other way is safer, I'd like to know how much performance it costs...

Pooh Bear 27
Joined: 20 Mar 05
Posts: 1,376
Credit: 20,312,671
RAC: 0

I use the leave in memory,

I use the leave in memory option and do not have any issues. Since there are settings in the software to allow a certain amount of virtual memory, I am betting that when a unit is not being crunched it goes there, leaving as much active memory as possible available for other things. I have not had a slowdown problem. Of course, I am not a gamer, so I do not have large gaming programs, but I do run Office, some smaller games, chat programs, web browsing (several windows), e-mail, etc., all simultaneously, even on machines with only 512 MB of memory.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2,143
Credit: 2,919,935,315
RAC: 952,963

RE: @Richard: I'm not

Message 47463 in response to message 47461

Quote:
@Richard: I'm not really convinced of this "leave WUs in memory" thing. Wouldn't that slow down the computer if you needed the memory for other stuff? Or do the WUs get stored in the swap space? Atm I'm not using this feature, to avoid BOINC slowing down my PC too much, and it has never caused any problems so far, but since everyone says the other way is safer, I'd like to know how much performance it costs...


Hi Annika. All I can say is that I've run it this way on my main machine (Windows XP, 512MB RAM), and I've never noticed any problems, even with three WUs kept in memory at the same time (one Einstein, one standard SETI, and a second SETI being fast-tracked by EDF). I think BOINC apps are just like any other app: any page of memory which hasn't recently been accessed is available to be swapped out to disk by the operating system, and will be re-fetched transparently the next time it's needed. That's exactly the same process as would happen if you try to juggle e-mail, database, photo editor and accounts on the same system. If it turns out to be too slow, the same options apply: reset things to the status quo ante, close a program when you've finished using it, or go buy more RAM!

I can't speak from personal experience of the memory behaviour of *nix or Apple/Darwin, but I would expect the general principle would be the same.

The real question is, why did Moorkens' machine write an 'Fstat.out.ckp' file which it subsequently failed to read back in? Hardware problem? AV program messing it up? Bug in the app? I doubt we'll ever know, but since it can fail that way, I'd prefer to avoid letting it fail!

Moorkens
Joined: 9 Oct 05
Posts: 4
Credit: 19,920,995
RAC: 786

Hi all, I tend to give the

Hi all,

I tend to give the Mini a beating once in a while: Boincing, Photoshop, Painter, iTuning, all at the same time, plus some surfing thrown in to top it off. With 512 MB it's all a bit too much! It's a bit slower then (I/O to disk), but it still does what it has to do. I'll keep an eye out for similar errors. The new WU is about 45,000 secs.

Jim, good luck with the Macbook!

Richard, I switched on the "leave in memory" option and increased the amount of time per project to 120 mins. It runs fine now.

Odysseus
Joined: 17 Dec 05
Posts: 372
Credit: 20,293,883
RAC: 8,810

RE: […] If it turns out

Message 47465 in response to message 47463

Quote:

[…] If it turns out to be too slow, the same options apply: reset things to the status quo ante, close a program when you've finished using it, or go buy more RAM!

I can't speak from personal experience of the memory behaviour of *nix or Apple/Darwin, but I would expect the general principle would be the same.


In general, I think it’s safe to say that if you have adequate RAM for the work you’re doing (and the number of apps you run simultaneously), the “leave in memory” setting won’t make an appreciable difference. I have it set to “yes” on my aging G4/733 at work, which runs BOINC constantly, while I may have Adobe InDesign, Photoshop, Acrobat Pro, & Illustrator, QuarkXPress, assorted utilities and Safari all going at the same time … but I have 2 GB of RAM in that system.

It’s possible that leaving apps in memory can cause or exacerbate a bug, concerning Mac OS shared memory, that seems to afflict mainly dual- and quadruple-CPU systems running numerous projects. It bit my dual-core G5 recently, so I’ve been reading up on it a bit … so far the SpyHill (Pirates@home) solution seems to be working.

Dronak
Joined: 21 Mar 05
Posts: 28
Credit: 10,402,879
RAC: 0

Apparently I had the same

Apparently I had the same problem on my last work unit. :( I've turned on the 'leave applications in memory' option and will see if that helps. I may increase the application switch time later if this alone doesn't work. But I have another question. I can understand how the error would increase the computing time, but the current work unit in my queue has an estimated time of just over 1 day. Shouldn't the initial estimate still be 12-15 hours or so? It shouldn't assume there will be an error that doubles the computing time. I thought the estimated time was basically based on the size of the work unit and nothing else. Is there something else going on here that I'm missing? Thanks.

P.S. -- I'm running on a Windows machine.
