We're proud to announce the official release of our first application for AMD/ATI graphics cards (GPU) and accelerated processing units (APU). Please use this thread to discuss this release.
Minimum Requirements:
* Windows or Linux
* BOINC Client 7.0.27 (download)
* ATI GPU or APU (OpenCL 1.1 compliant, equivalent to Radeon HD 5xxx)
* 512 MB video memory
* Catalyst Driver 12.x (don't install the APP SDK!)
Notes:
* Please be aware that, given the vast variety of hardware and software run by our volunteers, our previous tests could only cover a limited range of configurations. There might still be smaller issues, in particular with validation. Please bear with us; we'll do our best to improve the application over the coming weeks and months.
* The same applies to application performance. This is our first official release. It might not be on par with our CUDA application just yet. We still have ideas on how to improve application performance and we're going to introduce them in due course.
* Tip: performance might improve if you set BOINC not to use all of your CPU cores (e.g. all but one); see the preferences sketch at the end of this post.
* Support for Apple Mac OS X is currently targeted for OS X Mountain Lion (10.8).
Known Issues:
* When running this application (all versions are 32-bit) on 64-bit systems you might encounter the following error message (error number 255/-1):
[ERROR] Failed to get OpenCL platform/device info from BOINC (error: -1)
If that happens, please download the latest AMD/ATI Catalyst driver, reinstall it and reboot your computer.
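Regarding the CPU-core tip above, here is a minimal sketch of a global_prefs_override.xml (placed in the BOINC data directory) that leaves one core of an eight-core machine free for the GPU application; the 87.5% figure is only an example, and the same effect can be achieved through the computing preferences in the BOINC Manager or on the project website:

<global_preferences>
    <!-- use at most 87.5% of the processors: 7 of 8 cores for BOINC -->
    <max_ncpus_pct>87.5</max_ncpus_pct>
</global_preferences>

After saving the file, select "Read local prefs file" in the BOINC Manager's Advanced menu (or restart the client) for the change to take effect.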
I am running two machines, one with Windows 64-bit and the other with Ubuntu 12.04. After upgrading Ubuntu my BOINC just quit working. A version of BOINC (7.0.24) came with Ubuntu; I want to install version 7.0.27 and I'm not sure how to do it. I just started using Ubuntu and am still learning. I have not been able to remove the older version of BOINC and install the new one. Can anyone assist me?
However, a large part of the CPU efficiency is contributed by the GPU driver and the way it yields CPU resources to the OS (other processes) while doing GPU computations. Bottom line: you should probably never compare CUDA/NVIDIA to OpenCL/ATI directly. The same applies to comparing actual performance, since NVIDIA and ATI GPUs have significantly different architectures. Another reason is that, in our experience so far, ATI GPUs cover a much wider range of relative performance than NVIDIA GPUs. So your mileage may vary significantly compared to other volunteers, depending on the actual ATI GPU model you use.
Clarification requested:
If we're not to compare Nvidia CUDA to ATI OpenCL, then why do you do this on the result files?
My first task was run on BRP4cuda32nv301 v1.25 vs atiOpenCL v1.24
Shouldn't you as a project then use some form of hardware redundancy, and give out work (and compare) only CUDA + CUDA and CUDA + CPU plus ATIOpenCL + ATIOpenCL and ATIOpenCL + CPU, and NOT CUDA + ATIOpenCL and ATIOpenCL + CUDA?
I was referring to CPU efficiency, driver efficiency and relative overall performance only, not the (numerical) task results. We do cross-platform validation of task results.
Oliver
Einstein@Home Project
Please, can anyone explain what is meant by
"the "dangerous" option of 0.5 (2 tasks at once)"
and
"app_info.xml file - a factor of 1 runs 1 task, a factor of 0.5 runs 2 simultaneous tasks, a factor of 0.33 runs 3 simultaneous tasks, and so on and so forth..."?
And what about OpenCL on CUDA (NVIDIA) cards - I'm going to upgrade to a GTX 680 - will it be able to compute too, or will I have to use a FireGL?
A GTX 680 should work fine. Now, regarding the GPU utilization factor: not too long ago there was no GPU utilization factor parameter for us to play with in our Einstein@Home web preferences. The only way to run more than one GPU task at a time was to place an app_info.xml file in the Einstein@Home data directory. In this file, there was a section of code that looked like this:
    <count>n</count>
...where "n" is the GPU utilization factor. when set to a value of 1, the BRP4 GPU apps would run only 1 GPU task at a time. when set to 0.5, the apps would run 2 GPU tasks simultaneously. when set to 0.33, the apps would run 3 GPU tasks simultaneously, and so on and so forth...
The developers recently added the GPU utilization factor to our Einstein@Home web preferences specifically so we would no longer have to place an app_info.xml file in the E@H data directory and edit it whenever we wanted to make changes. It's similar to the concept the SETI@Home optimized-app developers followed when they introduced the Lunatics installer: before the installer was around to handle the entire install process for you, you had to manually place the appropriate files in the appropriate directories in order to "install" an optimized app. Long story short, the developers are simply trying to reduce the possibility of human error.
...where "n" is the GPU utilization factor. when set to a value of 1, the BRP4 GPU apps would run only 1 GPU task at a time. when set to 0.5, the apps would run 2 GPU tasks simultaneously. when set to 0.33, the apps would run 3 GPU tasks simultaneously, and so on and so forth...
Thank you for a good explanation. Could you please specify a little:
I have a factor of 0.5 by default for a Radeon 5750. Is this good or bad? I didn't change anything.
...where "n" is the GPU utilization factor. when set to a value of 1, the BRP4 GPU apps would run only 1 GPU task at a time. when set to 0.5, the apps would run 2 GPU tasks simultaneously. when set to 0.33, the apps would run 3 GPU tasks simultaneously, and so on and so forth...
Thank you for a good explanation. Could u please speciafy a little:
I have a factor of 0,5 by default for Radeon 5750. Is this good or bad? I didn't change anything.
Hmm... the default GPU utilization factor for all of my various machines under my Einstein@Home web preferences was 1, so I kind of assumed that everyone's GPUs would default to running only 1 GPU task at a time. Either way, you should be fine with a factor of 0.5 (running 2 GPU tasks simultaneously) provided your HD 5750 is a 1 GB model. You see, some HD 5750s were also made with only 512 MB of VRAM onboard. Considering each BRP4 ATI task consumes approx. 355 MB of VRAM, two tasks need roughly 710 MB, so an HD 5750 with only 512 MB of VRAM can only handle 1 task at a time efficiently. If you tried to run 2 tasks simultaneously on a 512 MB card, you'd be over-committing the VRAM. While I'm fairly confident that it would work and not cause compute errors, the VRAM bottleneck would probably significantly increase your GPU task run times and take a serious toll on your GPU's compute efficiency. Long story short, if you have a 512 MB HD 5750, change your GPU utilization factor to 1 (and run only 1 GPU task at a time). If you have a 1 GB HD 5750, leave your GPU utilization factor at 0.5 (and run 2 GPU tasks simultaneously).
Thanks, this was the missing piece of information.
Linux versions are now available from the Berkeley download link:
- boinc_7.0.28_x86_64-pc-linux-gnu.sh
- boinc_7.0.28_i686-pc-linux-gnu.sh
RE: ...where "n" is the
)
Thank you for a good explanation. Could u please speciafy a little:
I have a factor of 0,5 by default for Radeon 5750. Is this good or bad? I didn't change anything.
RE: RE: ...where "n" is
)
hmm...the default GPU utilization factor for all of my various machines under my Einstein@Home web preferences was 1, so i kind of assumed that everyone's GPUs would default to running only 1 GPU task at a time. either way, you should be fine with a factor of 0.5 (running 2 GPU tasks simultaneously) provided your HD 5750 is a 1GB model. you see, they also made some HD 5750 with only 512MB of VRAM onboard. considering each BRP4 ATI task consumes approx. 355MB of VRAM, an HD 5750 w/ only 512MB of VRAM would only be able to handle 1 task at a time efficiently. if you tried to run 2 tasks simultaneously on a 512MB card, you'd be over-utilizing the VRAM. while i'm fairly confident that it would work and not cause compute errors, the VRAM bottlneck would probably significantly increase your GPU task run times and take a serious toll on your GPU's compute efficiency. long story short, if you have a 512MB HD 5750, change your GPU utilization factor to 1 (and run only 1 GPU tasks at a time). if you have a 1GB HD 5750, leave your GPU utilization factor at 0.5 (and run 2 GPU tasks simultaneously).