Write your own Einstein@home screensaver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

Making large strides along the 'conversion' path as described. :-)

Here's a wee interesting issue that I ought to place on record in this 'blog'. It's perhaps easiest introduced with the relevant code 'solution' I have enacted. From a framework header file :

[pre].
.
.
#ifndef SDL_MAIN_HANDLED
#define SDL_MAIN_HANDLED
#endif

#include "SDL.h"
.
.
.[/pre]
and main.cpp ( program entry point ) :
[pre]
.
.
.
// Put these first up here, in order to handle any failure(s)
// ( hopefully cleaner for debug ).
// This circumvents potential later failure of SDL_Init(), as we are
// not using SDL_main() as the program entry point. The SDL2 Wiki entries
// ( as of 05 Jan 2014 ) are :
//
// Under 'Initialization and Shutdown' : "It should be noted that on some
// operating systems, SDL_Init() will fail if SDL_main() has not been defined
// as the entry point for the program. Calling SDL_SetMainReady() prior to
// SDL_Init() will circumvent this failure condition, however, users should be
// careful when calling SDL_SetMainReady() as improper initialization may cause
// crashes and hard to diagnose problems."
//
// Under 'SDL_SetMainReady' : "This function is defined in SDL_main.h, along
// with the preprocessor rule to redefine main() as SDL_main(). Thus to ensure
// that your main() function will not be changed it is necessary to define
// SDL_MAIN_HANDLED before including SDL.h."
//
// Err ..... some each-way bet here ??? :-O
// NB : FWIW This order of usage ( as recommended in the SDL2 Wiki )
// contradicts the statement that SDL_Init() must be called before
// using any other SDL function ! :-0
SDL_SetMainReady();
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER) != 0) {
    std::stringstream init_error;
    init_error << "\nUnable to initialize SDL: "
               << ErrorHandler::check_SDL2_Error()
               << std::endl;
    ErrorHandler::record(init_error.str(), ErrorHandler::FATAL);
    }

// At this point SDL initialisation will have succeeded, so make sure
// that ( for whatever 'normal' exit modes later occur ) SDL will be
// cleaned up.
atexit(SDL_Quit);
.
.
.[/pre]
I understand the SDL2 Wiki is a draft version with a number of stubs ( crucially for SDL_main() here ), and I can't find any info to further qualify those contingent phrases 'some operating systems' and 'improper initialization may cause crashes'. Nor why SDL might need to fiddle with my main.cpp at all, as IIRC SDL 1.x had no interest in that.

If anyone has any ideas here, please speak up. :-)

Cheers, Mike.

( edit ) BTW I don't see this as a show stopper, but a quirk to be watched and tested as development proceeds.


Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

(A) My research indicates a 'consensus' that SDL_main etc. is crap library design, i.e. SDL_Init() ought to manage all library initialisation without any such dependency. Period.

[ Seems somewhat of a fudge across differing startup behaviours and assumptions with the various operating systems. Perhaps a legacy of SDL 1.x, which did in fact fiddle with my main.cpp but I never noticed. As ever, testing will tell .... ]

(B) I may remove the 'recycling' aspect of AbstractGraphicsEngine::initialise() as allegedly SDL2 obviates context re-acquisition on window resize with win32.

(C) Other than that, I appear to have successfully transitioned all the framework code to SDL2. So I'll now proceed to fashion an 'ogl_utility' group of classes to encapsulate the forward compatibility shader stuff ....

Cheers, Mike.


Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 34

Message 78292 in response to message 78291

Quote:
(A) My research indicates a 'consensus' that SDL_main etc. is crap library design, i.e. SDL_Init() ought to manage all library initialisation without any such dependency. Period.

You got my support on that, if it helps :-)

Quote:

(B) I may remove the 'recycling' aspect of AbstractGraphicsEngine::initialise() as allegedly SDL2 obviates context re-acquisition on window resize with win32.

Sure, go ahead! If it's not needed anymore, get rid of it.

Quote:

(C) Other than that, I appear to have successfully transitioned all the framework code to SDL2. So I'll now proceed to fashion an 'ogl_utility' group of classes to encapsulate the forward compatibility shader stuff ....

Sounds great. You should consider pushing your changes upstream to OGLFT, or at least try to contact the maintainers. I did this previously and they were grateful for any useful contribution.

Cheers,
Oliver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

Message 78293 in response to message 78292

Quote:
Sounds great. You should consider pushing your changes upstream to OGLFT, or at least try to contact the maintainers. I did this previously and they were grateful for any useful contribution.


Sorry, that's 'ogl' as in OpenGL, not OGLFT. I'm using SDL2's new font drawing facilities ( 'SDL_ttf' etc., which still uses FreeType on the back-end ), which makes OGLFT quite redundant. Even worse for OGLFT, SDL_ttf functionality exceeds it IMHO. OGLFT uses display lists, which don't exist in OpenGL forward compatible contexts. Alas, there's no real need for me to have a crack at morphing/upgrading OGLFT now. In a perfect world ..... :-)
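
For what it's worth, here's a minimal sketch of that SDL_ttf route ( the font file name is a hypothetical stand-in, and error checks are elided ) : render a string to an SDL_Surface via FreeType, whose pixels can then be uploaded as an OpenGL texture :
[pre]#include "SDL_ttf.h"

TTF_Init();
TTF_Font* font = TTF_OpenFont("LiberationSans-Regular.ttf", 24);
SDL_Color white = {255, 255, 255, 255};

// FreeType does the rasterising, yielding an SDL_Surface whose
// pixels may then be uploaded as an OpenGL texture.
SDL_Surface* text = TTF_RenderText_Blended(font, "Einstein@Home", white);

SDL_FreeSurface(text);
TTF_CloseFont(font);
TTF_Quit();[/pre]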

'ogl_utility' will round up all the dull/dirty/dangerous aspects of writing/loading/compiling/linking/using OpenGL Shading Language ( GLSL ) code to be called at runtime. This is the dynamic front-end writing the back-end bit ( WABBEY ). That'll be in a new source sub-directory ( sibling to 'framework', 'orc', 'starsphere' and 'solarsystem' ). I might consider having the option of the makefile etc. building it into its own library also.

Aside : I can make the rendered star sizes vary with their visual magnitude now ..... FWIW that sort of thing will be an advantage of writing the shaders. :-)

Cheers, Mike.


Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 34

Message 78294 in response to message 78293

Quote:

Sorry, that's 'ogl' as in OpenGL, not OGLFT. I'm using SDL2's new font drawing facilities ( 'SDL_ttf' etc., which still uses FreeType on the back-end ), which makes OGLFT quite redundant. Even worse for OGLFT, SDL_ttf functionality exceeds it IMHO. OGLFT uses display lists, which don't exist in OpenGL forward compatible contexts. Alas, there's no real need for me to have a crack at morphing/upgrading OGLFT now. In a perfect world ..... :-)


Yep, I remember now. You wrote about that earlier... If anything you could probably have a crack at removing OGLFT as well :-)

Quote:

Aside : I can make the rendered star sizes vary with their visual magnitude now ..... FWIW that sort of thing will be an advantage of writing the shaders. :-)


Nice :-)

Oliver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

Arcane question : my research would seem to indicate that gcc 4.6+* honours recent C++ standards, which require an std::vector implementation to yield contiguous** elements. Can anyone either confirm or refute that ?? :-)

Cheers, Mike.

* i.e. the compiler used for the native Linux screensaver builds.

** This allows the often convenient technique of returning a pointer to the first element, in order to then access the entire sequence without explicit use of an iterator.

( edit ) Of course that still leaves the same issue to be decided for cross compilation also ....

( edit ) Motivation : I would like to store shader source code as a Resource instance, the data within which is accessed as 'const std::vector<unsigned char>*', but a GLSL shader object wants to be given a 'const GLchar*'.

[ Currently 'GLchar' is a typedef for 'char' but may not always be so, that being the point of using a typedef. 'char' is distinct from 'signed char' and 'unsigned char' even though it will have binary equivalence to one or the other. In any case a forced conversion of 'GLchar' to 'char' isn't a worry as the Resource returned type is merely a convenient byte-wise fragmenting without interior semantics per element. ]

Now it would be great to simply pass a shader object the Resource's pointer to its interior via a suitable cast. Otherwise a copying rigamarole would probably be required.
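
To make the idea concrete, a minimal sketch ( the 'resource' object and its data() accessor are hypothetical stand-ins ) relying upon that contiguity guarantee :
[pre]// Resource hands back its bytes as a pointer to a std::vector.
const std::vector<unsigned char>* bytes = resource->data();

// Contiguity means the address of the first element addresses the whole
// sequence. NB a reinterpret_cast is needed for the pointer conversion,
// as static_cast won't convert between unrelated pointer types.
const GLchar* source = reinterpret_cast<const GLchar*>(&(*bytes)[0]);
GLint length = static_cast<GLint>(bytes->size());[/pre]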


Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

OpenGL ES 2.0 Buffers In a Nutshell

An OpenGL buffer is an un-typed, linear-addressed block of data, byte-wise granular. Typically it will be instantiated server side*, i.e. in memory on the video card, or whatever hardware the video subsystem of the device happens to be. OpenGL ES ( ES = embedded systems, implying fewer resources ) buffers have reduced capability compared to those of full contexts. Think mobile phones, iPads et al.

To distinguish buffer instances, and to have a name to refer to in common between user code and the API ( meaning an OpenGL context or state machine instance ), an opaque handle is required. Opaque means user code only 'knows of' a buffer by that handle, and must present that handle to some API call in order to manipulate the buffer. As it turns out a handle is currently enacted as an unsigned integer. The value zero is never returned by an OpenGL context, and hence may be used to semantically satisfy the idea of 'no name', 'no handle yet assigned' or 'unbind'.

To reserve a single name for use :

[pre]GLuint buffer_name;

glGenBuffers(1, &buffer_name);[/pre]

so you give the API the address of where you want the handle remembered, and you will get one assigned. Look at the value of buffer_name after the call**. If you want to reserve more than one name at a time, then put the desired number of names as the first argument, while of course giving the address of an array of integers, to store the returned values, as the second argument. ( Note the API makes no promise that multiple names will be consecutive integers, by the way. )
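
A minimal sketch of the multi-name case :
[pre]GLuint buffer_names[3];

// Reserve three buffer names in one call ; each array element
// receives a distinct handle.
glGenBuffers(3, buffer_names);[/pre]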

Now to do anything whatsoever with a buffer you have to 'bind' it. This means that you specify that a particular named buffer is connected to a 'target', for the purposes of subsequent buffer related API calls.

[pre]glBindBuffer(target, buffer_name);[/pre]

where, for the level of OpenGL I'll be dealing with [ ES 2.0 ], target is one of the symbolic constants*** that flag subsequent usage : GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER. This may help the context to optimise, but GL_ARRAY_BUFFER is otherwise a declaration that the buffer is intended to hold vertex data, while GL_ELEMENT_ARRAY_BUFFER indicates an intent to store indices. Those indices pick out elements within another buffer, i.e. one bound using GL_ARRAY_BUFFER.

While a combination of two related buffers ( vertices and indices ) seems overly complicated, you don't have to avail yourself of that construct. But it is incredibly useful for geometric objects where you want non-repetition of vertex attributes ( saving space ), plus the flexibility to refer to distinct (sub)sequences of vertices. Take a sphere for instance, represented as a polygonal mesh of points equidistant from a single central point ( that's how spheres are defined ). You may want to draw longitudinal great circles, and then draw latitudinal slices. Or some other scheme like a buckyball. All from the same vertex attribute set.

We have yet to tell the API how big a buffer we want and other usage hints. For this purpose use ( after binding as above ) :

[pre]glBufferData(target, size, data_pointer, purpose);[/pre]

where target is as before, meaning that we are now specifying the properties of the buffer we nominated earlier when binding. size is self evident, being a count of bytes, and the main thing is to get it right and not allow fence-post/off-by-one type errors. data_pointer may be NULL, in which case it is ignored. If it is a non-NULL address then data beginning at that user address space location is loaded in. purpose is one of the symbolic constants GL_STREAM_DRAW, GL_STATIC_DRAW, or GL_DYNAMIC_DRAW. DRAW means the contents are involved in creating geometric primitives. STREAM means the buffer is written once and read from 'a few' times, STATIC means written once and read 'many' times, and DYNAMIC means both written and read 'many' times. That suggests the back end may be able to position the buffer on the card to suit an expected pattern of accesses, but I don't think anything blows up if you breach the original promise. Efficiency according to available resources ....
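
Pulling the calls so far together, here's a minimal sketch of the two-buffer vertex-plus-index arrangement described above ( a single triangle, for brevity ) :
[pre]// Three vertices of a triangle ( x, y, z each ) : written once,
// read many times, hence GL_STATIC_DRAW.
GLfloat vertices[] = { 0.0f,  1.0f, 0.0f,
                      -1.0f, -1.0f, 0.0f,
                       1.0f, -1.0f, 0.0f };

// Indices picking out a (sub)sequence of those vertices.
GLushort indices[] = { 0, 1, 2 };

GLuint vbo;
GLuint ibo;

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);[/pre]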

If NULL was used, and you want to specify buffer contents****, or you want to update the buffer contents with new values, then :

[pre]glBufferSubData(target, offset, size, data);[/pre]

target is as before. data is an address in user code memory to load from, offset is how far into the buffer to start writing values ( the first byte is at offset zero ), and size is how many bytes to transfer. The user is responsible for ensuring that a given combination of offset and size does not cause a write beyond the end of the buffer. One can fill the whole buffer or only part thereof, as you please.
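
A quick sketch of a partial update ( 'fresh_values' being a hypothetical array in user memory ) :
[pre]// Overwrite 32 bytes of the currently bound vertex buffer, starting
// 16 bytes in. offset + size must stay within the allocation made
// earlier by glBufferData().
glBufferSubData(GL_ARRAY_BUFFER, 16, 32, fresh_values);[/pre]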

At some point you will want to make a given named buffer not current. Either :

[pre]glBindBuffer(target, 0);[/pre]

OR

[pre]glBindBuffer(target, some_other_buffer_name);[/pre]

will do the trick.

Note that glMapBuffer(), glUnmapBuffer(), glCopyBufferSubData() and glClearBufferSubData() are not around in ES.

When you want to discard a buffer and its name, call :

[pre]glDeleteBuffers(1, &buffer_name);[/pre]

... mutatis mutandis for multiple buffers. It is appropriate to help the context by doing so, releasing any limited resources for other uses.
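
Mirroring the earlier multi-name sketch :
[pre]// Releases all three names, and any associated data stores, at once.
glDeleteBuffers(3, buffer_names);[/pre]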

Note that none of the above specifies any structure or detailed meaning of the buffer contents whatsoever. We have only talked of sizes, positions, generalities of usage to assist optimisation, and acquiring and releasing. Many places in the OpenGL pipeline interact with said buffers, and the syntax and semantics are specific to circumstance.

Cheers, Mike.

* Like any Application Programming Interface you ought not really assume any detail about the back end, other than that which is deliberately exposed by the interface standard. All conforming interface instances should act identically, except where the interface specification allows variance. Hence the word 'typically' as the interface was designed to enable efficient mechanisms like placing data close to the point of use. It would be hard to imagine why anyone would do the following : have all OpenGL features done on CPU code alone and simply write to a hardware framebuffer ( like the CPU address space mapped video memory of old ). But it could be done that way and be perfectly adherent to a given OpenGL specification.

** Like all API calls there are failure paths if this can't be satisfied. The handle pool is quite large ( effectively the full range of an unsigned integer ), so it would be a pretty drastic problem at the back end if a name could not be supplied.

*** These are defined in some header file. The precise numerical value isn't important. That's the point of representing a number by a symbol .... a good symbol name is important though.

**** One can arrange matters to have the context itself place data in the buffer, e.g. writing in values produced somewhere during pipeline operation.

( edit ) Note that for ES 2.0 only two targets exist and thus a maximum of two buffers may be bound to a context at any given time.

( edit ) I am rather happy that I originally parked a lot of this behind class interfaces. OOP rocks & rules !!! :-)


robertmiles
Joined: 8 Oct 09
Posts: 127
Credit: 29370881
RAC: 20156

Message 78297 in response to message 78296

Quote:
* Like any Application Programming Interface you ought not really assume any detail about the back end, other than that which is deliberately exposed by the interface standard. All conforming interface instances should act identically, except where the interface specification allows variance. Hence the word 'typically' as the interface was designed to enable efficient mechanisms like placing data close to the point of use. It would be hard to imagine why anyone would do the following : have all OpenGL features done on CPU code alone and simply write to a hardware framebuffer ( like the CPU address space mapped video memory of old ). But it could be done that way and be perfectly adherent to a given OpenGL specification.

Note that it's often easier to debug such a program in the CPU version than it is in the GPU version. That's the main use I have seen for a CPU version of such code.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

FWIW : here is an absolutely wicked solution relating to the problem outlined previously ( thanks stackoverflow !! ) :
[pre]
#include <string>

// Adapts a std::string to the 'const char**' that glShaderSource() expects.
struct StringHelper {
    const char* p;
    StringHelper(const std::string& s) : p(s.c_str()) {}
    operator const char**() { return &p; }
};[/pre]
- construct a StringHelper instance with a const std::string reference

- that initialises an interior pointer, set to the std::string's c-string representation.

- the address of that interior pointer is later returned when a double de-reference operation is required.

IF it is utilised so :

[pre]glShaderSource(shader_handle, 1, StringHelper(shader_source), NULL);[/pre]

then there is NO opportunity to alter the std::string's contents, which would break the 'const GLchar**' promise, while the shader is loading the GLSL code ( glShaderSource() copies the source strings in any case ). Plus the StringHelper instance is an un-named temporary which evaporates in short order .....

Awesome !! :-) :-)

The other brilliant bit is that if OpenGL decides to change the type underlying GLchar, then all I need do is come back to StringHelper and apply a suitable cast within. Gorgeous !! :-)

Cheers, Mike.

( edit ) Gosh, you could plonk this in as an inner/private class, or inline it for that matter. The actual 'calculation' it does is trivial. Basically it alters the semantic viewpoint of an object whose state you don't actually change, for the sake of keeping the compiler quiet and happy. It's a form of type cast.


Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 317437463
RAC: 371175

Buffer Addendum :

Quote:
.... Note that for ES 2.0 only two targets exist and thus a maximum of two buffers may be bound to a context at any given time ....


Not quite. While you can think of a target as a 'buffer binding point', they are not exactly equivalent. This is because a target ( read : position within the state machine, or point along the pipeline ) may have several buffers bound at once. For instance the vertex input end of the pipeline may have a vertex array and an index array. Textures are a special type of buffer ( technically a Texture object ) and can be thought of as attaching down at the other end, where fragments are processed. The language is a wee bit loose, but the circumstances normally make it clearer. It's the difference between 'buffer' and 'Buffer'. :-)

Cheers, Mike.

( edit ) Indeed you can 'cheat' by binding to a target ( for the purpose of allocating server memory and populating it with data values, but not immediately using it with the pipeline ), unbinding it and then re-binding to an entirely different target. Of course you may run the risk of some conceptual incongruity in that ..... typically the binary values do have wide & unrelated interpretations .... caveat developer. But it could make simple sense, for instance rendering a landscape height map in 3D with shading to hint at/indicate elevations.
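
A minimal sketch of that manoeuvre ( 'height_data' is a hypothetical array of values ) :
[pre]GLuint buf;
glGenBuffers(1, &buf);

// Allocate and populate server-side storage via one target ....
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(height_data), height_data, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

// .... then later re-bind the very same buffer to a different target,
// its contents now interpreted as indices.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buf);[/pre]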

