Hi,
I know you have had time to build and try and ....
So did I.
You may already know that SETI@Home might not interest some of its (two) GPU developers any more. I'm the other GPU guy.
I'd really appreciate any "how to" information, and especially advice on how to apply one or two of the skills and tricks I've learned to your project.
How old is the (2009) GPU build? Has anyone tried to optimize it? Has anyone succeeded in compiling it in recent years? Is there an active development community (a group, or individuals) somewhere?
At SETI@Home I was able to make the software run 3-4x faster than the other optimized software.
I do NVIDIA. I can read OpenCL and the compiled files for AMD and CUDA.
I need glasses and I have a high blood pressure and a short fuse!
Thanks for
any help
--
Me.
Good luck getting help! My attempt at a build failed at the final link, as mentioned here:
https://einsteinathome.org/content/clbuildprogramfailure-02mdf-gw-opencl-ati?page=1#comment-175693
Later in the thread there was a post saying the ROCm problem was fixed by a minor change:
https://einsteinathome.org/content/clbuildprogramfailure-02mdf-gw-opencl-ati?page=2#comment-175741
I tend to doubt that, and may do a second download to see if anything changed.
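If it helps anyone doing the same comparison: a quick way to tell whether a re-downloaded zip actually differs is to compare checksums. A minimal sketch in Python (the file names are just examples, not anything the project publishes):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the old and new downloads, e.g.:
# sha256_of("brp-src-release.zip") == sha256_of("brp-src-release-new.zip")
```

If the two digests match, the server is still shipping byte-for-byte the same archive.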
I have dropped a PM to Bernd pointing him to this thread. Hopefully you’ll be able to sort something out.
BOINC blog
OK!
Nothing seems to have changed in the last few years.
Bernd was mentioned then too; I got help, and got one of the Einstein applications (I do not remember its name) to compile with CUDA 6 or 6.5, if I remember correctly.
My faint recollection is that Bernd told me how to set up an anonymous platform xml file for this project.
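For anyone who hasn't used it: BOINC's anonymous platform mechanism works by placing an app_info.xml in the project's data directory, telling the client to run your own binary in place of the stock one. A minimal sketch - the app name, file name and version number below are placeholders, not the project's real values; the names must match what your client_state.xml shows for the host:

```xml
<app_info>
    <app>
        <name>einsteinbinary_BRP4</name>
    </app>
    <file_info>
        <name>my_optimized_app</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>einsteinbinary_BRP4</app_name>
        <version_num>100</version_num>
        <file_ref>
            <file_name>my_optimized_app</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

With that in place the scheduler still sends tasks for the named app, but the client launches your binary for them.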
And @JSTATESON, thanks; I'll read the links you provided.
I just checked the download link again and it is the same as the one I downloaded before they fixed the ROCm problem.
https://einstein.phys.uwm.edu/download/brp-src-release.zip
Changes in the zip are dated 4/24/2018.
They are not updating the zip that the URL points to, and I did not bother registering for an account (their private GitHub?), so possibly there are updates to the sources elsewhere.
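As an aside, the member timestamps inside a zip are easy to inspect programmatically; a small Python sketch (the zip path is whatever you saved the download as):

```python
import zipfile
from datetime import datetime

def newest_member_date(zip_path):
    """Return the most recent modification timestamp among a zip's members."""
    with zipfile.ZipFile(zip_path) as zf:
        return max(datetime(*info.date_time) for info in zf.infolist())

# e.g. newest_member_date("brp-src-release.zip")
```

If that date never moves past 2018, the archive behind the URL really is frozen.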
The script was, IMHO, very well done and appeared capable of building all versions. I made a note of the edits I had to make; a few were required due to changes at the handful of third-party sites the script pulls from. I assume a later version, if available, would not have this problem.
The BRP project has been at a near standstill for a few years. They've processed all the data to the point of deeply diminishing returns and have shut down the PC applications for it. ARM builds (Android and RPi) are still running - at a tiny fraction of the rate when it had PC GPU apps - because none of the other applications have been ported to that platform.
PC GPUs today are running either the Fermi application - which has a much larger speedup relative to CPU than BRP ever did - or the GW application. The GW app is only partially ported to GPU, and is heavily bottlenecked by the amount of computation still running on the CPU. Both it and the Fermi app are OpenCL only.
From what I can tell this project does not want outside engineering help.
I've seen a few sassy replies to users (who I'm assuming are not engineers) with sentiment along the lines of "everything is fine, don't complain, the source code is there for you to be part of the solution https://einsteinathome.org/application-source-code-and-license".
The problem is that this documentation seems never to have been updated. I can usually get far enough to build a binary without a lot of hand-holding, but this is a struggle. It would be nice, at the very least, to spend 10 minutes and write a very simple README on how E@H builds their app binaries. I don't care if it's totally specific to their own workflow and arch - just how they would build the current binaries delivered via BOINC as of today.
I'm not trying to be a jerk; I've just noticed similar themes in "open source" software projects maintained by scientists/academia. I get that the science is the primary goal, but you can get a lot by doing very little. The astrophysicist doesn't necessarily need to be the one optimizing OpenCL. I can't tell if I'm being private-sector naive and just don't understand the landscape. It just feels like a lot of these projects were handed off to a TA in a windowless basement room at CERN, then dumped on the org's intranet Subversion repo, and that's just the way it is. It might seem low value to the science, but even a small amount of pseudo-participation can generate engagement that leads to a healthy amount of public awareness, which eventually comes back to the big-picture science.
Even for simple, trivial things: I would love to keep your bash build script maintained, or update a how-to guide, but that won't work if we need VPN access to LIGO.
rant over
xoxoxooxxoxoxoox
FWIW I thought I'd add a few contextual comments about what "open source" probably means for this project. These are my personal* thoughts only, gleaned after years of observation. Please slap me around a little if I am wrong-headed here. :-)
- while there is an invitation to build, modify, port etc the GW and BRP code :
..... I think the main reason is to be transparent about the analytic methods used to obtain scientific results. This helps the wider 'scientific community' : presumably showing everyone/anyone the workings builds confidence in the validity of any science output. In theory at least, a putative discovery could be replicated by any interested party, via open access to the relevant data set(s) and the methodology used upon them. Just crank it through any available/ported Turing Machine**.
- LIGO is a science collaboration of several thousand members. To be an official part of that requires signing off on a lot of understandings about how to professionally operate : assuming that club will have you as a member. This is right and proper, for it serves the purpose of herding all the talented cats in various desired directions ie. the collaborative bits. The E@H contributors are mostly "citizen scientists" who by definition aren't assumed to have signed up for LIGO membership per se.
- the BRP aspect was not a LIGO thing in origin. Our fearless leader ( Prof. Bruce Allen ) came up with it as a way to keep us occupied while the LIGO measuring machines, the interferometers, were being upgraded. He knew there would be a multi-year hiatus in activity otherwise. The project may have withered. Good call IMHO : it kept interest in the project going, plus it nailed some dozens of new pulsars onto the wall. Plus there is a technical analogy with GW signal processing, and the ( continuous wave ) GW signals sought here at E@H are presumed to come from pulsars too***.
- the GW aspect is definitely a LIGO thing. There is some detailed management by LIGO happening here. For instance, IIRC, the size of the quorums - the minimum number of machines operating asynchronously to process a particular task - is specified by LIGO. E@H forms one part of a sequence of operations on the LIGO-obtained data; there is pre-processing and post-processing going on.
- there's no promise that I have seen to use any modified/augmented code, just an understanding that whatever a volunteer does will be examined. But let us recall Akos Fekete, who delightfully waded into the topic during 2006 and wound up doing some consulting with E@H ( I wonder where he is now ). Not to forget Heinz-Bernd Eggenstein ( Bikeman ), who got a job**** at AEI !
Cheers, Mike.
* To be precise about who I am : my offline life is as a ( ageing/senior/rustic ) primary care medical practitioner, wannabe part-time programmer and holder of a Bachelor's degree in physics/math from the previous millennium. So in particular I have no special status here beyond the social engineering role of forum mod. I serve such at the pleasure of the project's admins ie. even I must also behave .... ;-)
** One wonders what Babbage would think of our efforts here at E@H !
*** What a grand thing it would be if one of the BRP discoveries was also detected on a GW channel. IIRC the GW frequency ought to be twice the EM frequency.
**** In his words, he "swapped Dilbert-space for Hilbert-Space" !
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Thanks for the explanation! I went back and reviewed that BRP zip. When I first looked at it I spotted makefiles for CUDA and OpenCL for Windows and Linux. The Binary Radio Pulsar app does not use CUDA, nor even OpenCL for Nvidia & AMD, so I assumed, mistakenly, that those included makefiles for Nvidia, AMD, Windows, etc., were for other projects, not just BRP. That would explain why I had a problem building the OpenCL app for AMD: there never was one, other than planning for the possibility.
As I mentioned earlier, I found the code, and especially the script, to be well written and understandable. A readme would have been nice. I have written enough spaghetti code to be able to recognize it, and I did not see any "WTF moments" or "FIXME! FIXME!", unlike Milkyway & the BOINC clients. This code is available (some of it) even if not up-to-date. The sources did not have to be stolen, unlike the Climategate code, which was full of programmer comments such as "[projected temperatures] ... past 1960 ... will be artificially adjusted to look closer to the real temperatures".
As Pietri mentioned, SETI@Home end users made considerable improvements to the SETI code, and IMHO Pietri was a major player in that improvement.
Quite right.
A good example of what may happen with opacity laid upon the data pipeline : it saps confidence in results. NB the pre- and post-processing of E@H data is described in the particular published papers that refer to it, often in excruciating detail, or by reference to prior published works. I reckon that's pretty much mandatory in a new field like GW astronomy, where instrumental and other noise greatly outweighs the signal of interest.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
A perfectly sensible set of comments. Thanks Mike, for taking the time to put that all together.
Perhaps we also need to remind people about what the user forums are all about. The main use is for the exchange (in good faith) of basic information between volunteers. If there was any user involvement in developing optimised apps, there would be a sticky thread announcing it loud and clear, just as there was in the Akos Fekete days.
People can post information related to the project that might be of use or interest to others. People can request information related to the project, but everybody should be fully aware that, unless the response is coming from a member of Staff, it should always be treated as another volunteer's opinion and therefore non-authoritative. If you want to rant about something, all you are probably achieving is annoying other volunteers.
Of course, there are forums like Technical News where there will be announcements by the Staff that affect all volunteers. Occasionally, if there is a significant problem with a particular app, the Devs may choose to comment on how the issue is being handled, in any 'Problem' thread that has arisen. These days, there is little general interaction with Staff via the general message boards. They just don't have the time.
So with all that in mind, here is what the OP should have done: first, ask other volunteers exactly what apps exist that might benefit from optimisation, what languages are being used, and whether there are any avenues available for volunteers to contribute to app development. The short answers would have been: the GW GPU app, OpenCL, and none, respectively. Had I been giving that answer I would have added something along the following lines:-
"Put 'AEI Hannover' into a search engine and browse the AEI, MPG Hannover website. You will find a full listing of Staff with their contact details, together with a general office email address. If you believe you have the necessary skill-set to contribute to the development of the GW GPU app, then you should put in the time and effort to compile a fully documented list of your previous history and achievements and any pertinent information about what you are offering to do if given access to the codebase. It's a serious business so make a serious attempt to showcase what you are able to bring to the table. You should send those details and any supporting documentation to the general email address and marked to the attention of Bruce Allen. You should expect to go through a fairly exhaustive vetting procedure, including the signing of NDAs, before being given access to the code. I have no idea if such an approach would succeed but it certainly would be much better than a frivolous forum post."
On a somewhat lighter note, if anybody reading this does have the necessary time, desire and skills to contribute, I would genuinely encourage you to put your credentials forward. It would really be great and very much appreciated by all volunteers if the GW GPU app could be enhanced so as to use the GPU more and the CPU less. I imagine it won't be easy, but would really be worthwhile, if achievable. Such a person would become a celebrated hero, just as Akos Fekete was in his day :-).
Cheers,
Gary.