Task xxx exited with a DLL initialization error.
If this happens repeatedly you may need to reboot your computer.
This latest one happened while I was away from the computer and nothing else CPU-intensive was happening.
If I play a game or do something highly CPU-intensive, I always shut down BOINC first.
For one of my hosts, a Q6600 quad running Windows XP and BOINC 5.10.30, the pattern seems to be that the first scheduler request for work after a reboot immediately (as in within a second) kills off all currently running science app tasks (SETI and Einstein), including any which are memory resident but not currently running. They then restart successfully, though the alarming status and messages remain visible until they complete.
This does not depend on whether the network is up, or the scheduler request is in any way successful.
After that "first after reboot" flurry, I think I don't see this at all. Possibly just killing and restarting boincmgr has the same effect--I don't do that often enough to have noticed.
Although I am posting in a 4.36 thread, my guess is that this has nothing to do with the science app.
Until recently I was using the optimised 4.26 version, but swapped my dual P3 and dual Prestonia Xeon to the 4.36 version.
Crumbs - the code is quick on both rigs compared with both the stock and the 4.26 versions.
I calculate the results are coming off 48% faster on my dual P3, and 40% faster on the dual Xeon.
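As a sanity check on percentages like these, a throughput gain can be derived from average crunch times. A minimal sketch (the times below are hypothetical, not measurements from these posts):

```python
def throughput_speedup(old_secs, new_secs):
    """Percentage increase in results per unit time when the average
    crunch time drops from old_secs to new_secs."""
    return (old_secs / new_secs - 1.0) * 100.0

old_time = 30000.0          # hypothetical crunch time with the old app (s)
new_time = old_time / 1.48  # time that would yield a 48% throughput gain
print(round(throughput_speedup(old_time, new_time)))  # -> 48
```

Note that "48% faster" here means 48% more results per day, not a 48% reduction in crunch time (that would be roughly a 32% time reduction).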
When I have reached the SETI cumulative targets I want, I will swap my 2 Intel Quads to Einstein.
John, over the w/e I upped my Pentium D 3.2 GHz to BOINC 5.10.45 and the 4.36 version. The first 2 units have been crunched in HALF the previous time, and RAC is already up from 600 to 700. This latest app really motors!
Waiting for Godot & salvation :-)
Why do doctors have to practice?
You'd think they'd have got it right by now
Remember that crunch time for any specific WU in a specific data pack will vary based on whether it's near the peak or in the trough of the cycle. Peak WUs are slower than trough WUs. Also, when making comparisons, be sure that the WUs being compared are from the same data pack or you're comparing apples and oranges.
Nevertheless, 4.36 is an enormous improvement over the stock app.
Anyone here notice any problems with new optimized app?
I have been having a problem of late with download speeds.
From an average of 14,000 down, with PowerBoost, to an average of 4,500 now.
Everything checked out with Comcast.
Then I noticed something.
My computers as well as my son's laptop are all connected to my router.
My 2 computers that have the optimised client have the slow download speeds.
The 2 that don't are still reaching regular speeds.
Has anyone else noticed this?
And if this is the case, is there a way to compensate for the speed loss?
Thanks, Dan
Nope.
I've got a large number of machines running both the Windows and Linux optimised apps and I've not seen anything unusual with download speeds.
I presume you are talking about the downloading of the large data files used when a task is crunching, rather than the tasks themselves. Tasks are so small that you would be hard pressed to actually see them downloading.
At the moment, many hosts are going through a transition to new higher frequency large data files and you may be seeing more frequent downloading of lots of data files. I'm seeing this on quite a few of my hosts but there is no problem with download speeds and it's certainly not related to the science app in any way.
Basically I was referring to a desktop that I use while Einstein is running. The overall speed for regular computer use such as surfing has gone down and is much slower, as confirmed by any speed test. The app has affected this area while Einstein is crunching. The client itself is great.
OK, I thought you were referring to downloads of project files.
Just to reiterate, the science app (whether stock or optimised) does not use your internet connection. The BOINC client does that when necessary. So it's highly unlikely that any slowdown of surfing speeds could be linked to the use of an optimised app. If you are seeing a slowdown on some hosts and not on others, when all hosts are actually crunching, it sounds like something else may be causing it.
If it turns out that you think BOINC is slowing performance on all hosts, you could always change your preferences to allow BOINC to run only when a host is idle. My experience is that BOINC is very good at getting out of the way when you need your computer to be responsive. Even on older and slower hardware, I've never felt the need to limit BOINC in this way on machines I use.
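For reference, the "run only when idle" preference described above can also be expressed in a local preferences override file. A sketch of the relevant fragment, with tag names taken from BOINC's global preferences format (verify against your client version before editing by hand; setting this via the website or BOINC Manager is safer):

```xml
<!-- global_prefs_override.xml: only the settings relevant here -->
<global_preferences>
   <run_if_user_active>0</run_if_user_active>  <!-- crunch only when the host is idle -->
   <idle_time_to_run>3</idle_time_to_run>      <!-- minutes without input before crunching starts -->
</global_preferences>
```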
Here lately I've noticed a big change in times associated with the optimized client. I started out in the mid 25,000 bracket and worked my way down to 22 or 23,000 seconds to finish a WU. Now suddenly I'm seeing 28, 29, 30,000+ seconds to finish work units, pushing me down to the level where my X2 3800+ is running. I've also noticed there have been fluctuations in the number of sky points this system has been getting assigned. It was around 1205, then 1202 and now 1203.
I'm running XP Pro 32-bit and running 2 instances of Einstein. Currently I have 43 services running; I uninstalled McAfee and dropped from 54. I'm currently using less than 500 MB of memory out of 2 GB. I'm not sure what caused this drop in performance, but if anyone has any ideas I'd certainly appreciate it.
All this is to be expected. It is just the way that S5R3 crunching works. There is a cyclic pattern to crunch times and based on pure chance you sometimes draw tasks that crunch quickly (eg 22Ksecs for your host) and you sometimes draw tasks that crunch slowly (eg 31Ksecs for your host). Everybody sees the same type of cyclic patterns and it has been discussed quite heavily in several threads over the last couple of months. As an example, this thread discusses the topic quite extensively and there are some posts which give links to other threads where further information is available.
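The cyclic pattern described above can be sketched with a toy model. This is purely illustrative (the project's actual crunch-time behaviour is not a simple cosine, and the cycle length here is an assumption); the fast/slow values match the ~22-31 ksec range mentioned in this thread:

```python
import math

FAST, SLOW = 22000.0, 31000.0  # crunch times seen in this thread (seconds)
PERIOD = 120                   # assumed cycle length in sequence numbers

def estimated_crunch_secs(seq_no):
    """Toy cyclic model: crunch time oscillates between FAST and SLOW
    as the task sequence number moves through one cycle."""
    mid = (FAST + SLOW) / 2.0
    amp = (SLOW - FAST) / 2.0
    return mid + amp * math.cos(2 * math.pi * seq_no / PERIOD)

print(round(estimated_crunch_secs(0)))           # peak of the cycle -> 31000
print(round(estimated_crunch_secs(PERIOD // 2))) # trough of the cycle -> 22000
```

The point of the model is just that two tasks from the same host and data file can legitimately differ by 40% or more in crunch time depending on where their sequence numbers fall in the cycle.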
In your particular case you have just recently transitioned to new large data files around the 1015.xx frequency mark and the run of sequence numbers you have received lately (like _628, _626, _625, etc) are pretty much right at some of the slowest crunch times you will see. As you work your way down the seq#s from there, you will see improving crunch times and by the time you get down to about the _510 to _530 range, your crunch times will be fast again.
Quote:
I've also noticed there have been fluctuations in the number of sky points this system has been getting assigned. It was around 1205, then 1202 and now 1203.
Small fluctuations like this are also perfectly normal and of no real consequence. The skypoints for tasks seem to average around 1200 +/- 10 at most, ie less than +/- 1% variation maximum. This will have no significant effect on crunch time.
Thanks for the reply and information. I'd recently lost a whole drive of data and, rather than put a stale backup on the drive, I opted to clean install everything. At first the processing speed was as it was before the crash, and then it suddenly started going upward. I was concerned there was something else going on here. At least now I can relax knowing this is normal behavior.