I enabled the CPU Beta test app versions again, with the exception of the Mac OS X 32-bit versions (PPC & x86), which are apparently broken.
The 64-bit Mac OS X app seems somewhat faster than the established version 1.05. I wonder whether a similar speedup is visible in the Linux app, too?
And of course I'd like to get some reports on the progress counting - does it still happen that the progress stays at 100% for hours, or is that problem gone?
For some small-number statistics on Linux x64, I nominate this host: http://einsteinathome.org/host/8749294/tasks&appid=24
v1.05 took typically 20000 - 24000s, and v1.06 takes 20000 - 25000s. It's been marching up through the frequency bands pretty quickly, which might limit the ability to do a really fair comparison, but there's certainly no significant speed-up.
I never noticed it hang at 100% for more than a minute, and it still doesn't: can't help you there.
BM wrote: And of course I'd like to get some report on the progress counting - does it still happen that the progress stays at 100% for hours, or is that problem gone?
BM
I've done some observing on a Win 7 x64 system with an i7 3770K.
At the start of a task, the first real progress indication comes when the task has written its first checkpoint. This can take quite some time; for the tasks currently running it has taken 48, 48, 46, 48 and 52 minutes. During this time, newer versions of BOINC will slowly increase the progress percentage (~0.007%/second) so that the task appears to be making progress; then, when the first checkpoint is written and the real percentage done is reported, the display resets to that value. On a few tasks that I've observed, this had the effect that the percentage done climbed to ~22-23% and then reset down to 7.333%. It then stays there until the next checkpoint (almost 50 min later), when it updates to 15.666%.
Is it possible to report the percentage done every time an "f2dot" is completed?
That would mean an update roughly every 5 minutes for the tasks observed above - much better than the almost 50 minutes between updates with the current app.
At the end of one task that I observed, the percentage done went from 90.666% to 99.000%, stayed there for a bit over 3 minutes, then went to 100%, and about 8 seconds later the task finished. So nothing strange there.
On a random selection of my completed S6 CPU beta tasks, the final statistics recalculation takes about 3-3.5 minutes.
Otherwise the beta app appears to run as it should and to validate, so no problems there.
Here is what I observed (so far) when running the beta:
The task started and BOINC estimated about 20 minutes of runtime, but it took about 2 or 3 minutes for the GPU to actually start working. GPU-Z reported a GPU usage of about 92 to 96%. The memory controller load reported by GPU-Z was somewhat strange: it stayed at about 73% for about 40 seconds, dropped to zero for about 30 seconds, and then jumped back to about 73%. The app also seems to use much more CPU than it should: Task Manager says it uses 25% of my CPU power, which is one (virtual) core.
After about 15 minutes, the progress jumped from zero to 6.692%, and after another 15 minutes to 14.384%.
Also, after about 18 minutes or so, the estimated runtime started counting upwards.
Here is my system:
Lenovo laptop
CPU: i5-3210M (with integrated HD 4000)
GPU: NVIDIA 610M
Currently running only Einstein, with one BRP4 on the IGPU and one beta task on the 610M.
I'll let it run for now and will report if any problems occur.
Everything works, no problems so far. However, I think that not enough credits are awarded for the tasks. My system achieves about 260 credits per hour for BRP4s and only about 150 for the beta.
My numbers show a similar difference:
S6CasA:
GTX 650 Ti - 568 credits/hour
GTX 660 - 760 credits/hour
BRP4G:
GTX 650 Ti - 839 credits/hour
GTX 660 - 1263 credits/hour
This is on a WinXP machine with an E8400 Core2 Duo CPU (one core for each GPU). And the BRP4G takes only 17% of the CPU time, whereas the S6CasA takes over 90%.
But I don't see that as a problem or necessarily something to be fixed. Not all projects need to give the same points.
Quote: Everything works, no problems so far. However, I think that not enough credits are awarded for the tasks. My system achieves about 260 credits per hour for BRP4s and only about 150 for the beta.
As far as credit goes, one must remember that the GPU S6CasA units are out there for beta testing; as such, you should consider yourself lucky that your tasks are completing and you're getting any credits at all ;-)
I'm sure that once testing is over and a fully stable app is released, the credit allocation will be reviewed and adjusted if necessary.
Bernd and HB (Bikeman) are best placed to advise in that regard, when the time comes.
If BRP4 works better for you, stick with BRP4. It's your choice, and the beauty is, you can always change your mind later when or if the credit allocation changes. :-)
Gavin.
On Einstein@Home we (currently) grant credit based on "scientific value", i.e. "same credit for same work(unit)", regardless of whether a task was computed on a CPU or a GPU. The credit values are adjusted based on the runtimes of the CPU versions, as the variation is much smaller there.
The GPU app versions differ greatly in efficiency (speedup compared to the CPU version), depending e.g. on development progress (how much of the computation has been ported to the GPU) and on properties of the algorithm (after all, these are three totally different codes). And they behave very differently on different cards. The S6CasA app is almost solely limited by GPU memory bandwidth, while BRP4G is much more limited by single-precision floating-point performance.
My guess is that in terms of credit/h the comparison of S6CasA vs. BRP4G will look quite different on ATI cards. In the case of NVIDIA, I'm also not sure how much e.g. the driver affects the performance ratio between CUDA and OpenCL apps.
Hello,
Would it be dangerous (as in getting many more validate errors or invalid tasks) if I changed the GPU utilization factor of the GW apps from 1.0 to 0.5 on my Gigabyte Radeon R9 280X (3 GiB GDDR5 memory)?
Decreasing the GPU utilization factor for BRP didn't do me any good. In fact it made most workunits fail validation or come back invalid...
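For reference, the client-side way to try the same thing per app is an app_config.xml in the project directory (a utilization factor of 0.5 corresponds to a gpu_usage of 0.5, i.e. two tasks sharing one GPU). This is only a sketch: the app name below is an assumption, so copy the exact name from client_state.xml before using it.

```xml
<app_config>
  <app>
    <!-- App name is an assumption; use the exact <app_name> from client_state.xml -->
    <name>einstein_S6CasA</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 = two tasks share one GPU -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```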
Gavin wrote: As far as credit goes, one must remember that the GPU S6CasA units are out there for beta testing; as such, you should consider yourself lucky that your tasks are completing and you're getting any credits at all ;-)
That's what I do. I tried out some tasks to see whether they work on my system or not ;) I just noticed the difference in credits, and because credits are important to many users, I reported it.
I will stick with BRPs for now. Not because of the credits, but because they checkpoint more frequently which is favorable for slower systems.