GPU computing under Xorg server

bigmike206
Joined: 16 Aug 17
Posts: 3
Credit: 10206142
RAC: 1032
Topic 225139

I recently added a GPU to a headless Linux box.  What I've learned is that the Xserver must be up and running when BOINC launches so that the GPU will be properly detected and utilized.  My question is: does the Xserver have to stay up while BOINC is crunching?  In other words, after BOINC is up, can I safely kill the Xserver (to conserve resources), or will that compromise GPU computing functionality?

TIA

Edit: For the record, this is an ATI GPU (OpenCL 1.2).

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5842
Credit: 109393790067
RAC: 35834153

bigmike206 wrote:
... does the Xserver have to stay up while BOINC is crunching?

I don't know for sure but the answer may well be 'yes'.  If you need Xorg running to detect your GPU hardware, what's the problem with allowing it to continue?  Why not just log out and remove the peripherals after you have launched BOINC?  That's what I do, and from that point on I monitor and control all my hosts over an ssh connection from my main machine.
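
In case it helps, "monitor and control over ssh" just means ordinary boinccmd calls; something like this (hostname and password here are placeholders, and the second variant needs gui_rpc_auth.cfg / remote_hosts.cfg set up on the remote client):

    # run boinccmd on the remote host itself
    ssh gary@crunch01 boinccmd --get_tasks

    # or talk to the remote client's RPC port from this machine
    boinccmd --host crunch01 --passwd 'rpc-password' --get_project_status

Nothing fancy; the same mechanism works for suspending and resuming projects, forcing updates, and so on.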

I just logged in over ssh to a machine of mine that's been running continuously for 320 days.  A ps shows Xorg running with an accumulated time of just over 2 hours.  That's about 25 secs per day on average.  Even if you could do without X running, you wouldn't save very much.
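
For the record, the ps invocation was just something like this (exact output columns vary a little between ps versions):

    # accumulated CPU time (TIME) vs elapsed wall time (ELAPSED) for Xorg
    ps -C Xorg -o pid,etime,time,args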

All my machines have a full KDE5 desktop.  It's very convenient to hook up peripherals and work on a machine directly (fix problems, edit configs, perform updates, etc.) when it becomes necessary to do so.

Cheers,
Gary.

bigmike206
Joined: 16 Aug 17
Posts: 3
Credit: 10206142
RAC: 1032

Could we get a mod to delete the previous post?  Three paragraphs, of which only the first three words actually address my question:

Gary Roberts wrote:
I don't know ...

Thanks.

bigmike206
Joined: 16 Aug 17
Posts: 3
Credit: 10206142
RAC: 1032

Some more testing and investigation has revealed the answer: no, you cannot shut down the Xserver while crunching GPU units.  Doing so causes a "Computation error" almost immediately.  However, by investigating this behavior, I learned that this has nothing to do with BOINC itself.
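
If anyone wants to reproduce it, my test was roughly this (assumes a systemd distro, where display-manager.service is the usual alias for whatever display manager is actually running):

    # with a GPU task in progress, take the X server down
    sudo systemctl stop display-manager

    # then watch the task states reported by the client
    boinccmd --get_tasks | grep -i -e name -e state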

Just as an experiment, I decided to recompile the BOINC client without X.  To my surprise, it had no effect on the behavior.  The GPU was properly detected and GPU work units crunched along merrily--even with no X support compiled into the client.  So the reliance on having a running X server must lie elsewhere.
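
For the curious, the rebuild was along these lines (switch names may differ between BOINC versions; this was from a git checkout, and most of the X/GUI dependencies come in via the manager, which I left out):

    # client-only build: no manager, no server components
    ./_autosetup
    ./configure --disable-server --disable-manager --enable-client
    make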

What seems to be happening is that the project binaries communicate with the GPU/driver via Unix domain sockets.  These sockets are provided by the running Xserver.  If the Xserver is down (or goes down mid-task), the project executable can't find any valid sockets to connect to, throwing the "Computation error."  (For something so basic, I would have expected the app to fail more gracefully--something along the lines of "No valid sockets found, computation suspended/deferred."  Just a thought.)
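
You can watch this yourself while a GPU task is running (the PID below is a placeholder for the project app's PID; socket paths may vary by distro):

    # the X server's listening sockets live here
    ls -l /tmp/.X11-unix/

    # list the unix-domain sockets the science app holds open
    sudo ss -xp | grep 'pid=<app-pid>'

When X goes away, those sockets go with it, which matches how quickly the task errors out.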

Anyway, I now have a much better understanding of how the different processes work together: the BOINC client, the project app, the GPU driver, and the Xserver.  BOINC itself is entirely agnostic regarding X, and the Xserver provides the "pipeline" that allows the project app to talk to the GPU.  (And, as a bonus, I was able to compile slightly trimmer versions of the BOINC binaries by leaving out the X support.) :-)

PS. While I believe this analysis to be sound, there may be someone out there with a more intimate knowledge of the project app source code who knows otherwise.  If so, I look forward to being straightened out. ;-)
