Write your own Einstein@home screensaver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

No, this project ain't dead. Only sleeping .... :-) :-)

My youngest son* reports that SDL version 2.0 was emitted a few months ago. This is true. Its major advantage is in managing video contexts and the acquisition of OpenGL state machines, basically a better solution than mine in that :

- it deals with swaps to/from full screen and windowed modes without losing OpenGL contexts.

- it covers iOS ( uses UIKit plus OpenGL ES 2.0 ) and Android ( uses JNI + OpenGL ES 1.1 through 2.0 ) in addition to Windows ( uses Win32 API + OpenGL/Direct3D ), Linux ( uses X11 + OpenGL ) and Mac OS X ( uses Cocoa + OpenGL ).

The issue of whether a given video driver does or does not support a backwards compatible context ( pre OpenGL v3.2 ) is still live. However further research of mine indicates that virtually all 'modern' drivers will provide such a context upon request, though still with the likely exception of many from Intel**.
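For a flavour of how one asks for such a context via SDL 2.0, here's a minimal sketch ( my own illustration, not project code; the window title, dimensions and version numbers are just placeholders ) :

[pre]#include <SDL.h>

int main(int argc, char** argv) {
    SDL_Init(SDL_INIT_VIDEO);

    // Request a backwards compatible ( compatibility profile ) context.
    // Whether the driver actually honours this is, as noted, another matter.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
                        SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);

    SDL_Window* window = SDL_CreateWindow("Starsphere",
                                          SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED,
                                          800, 600,
                                          SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    // ... render, swap with SDL_GL_SwapWindow(window), handle events ...

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}[/pre]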

You may recall that the original screensaver support code framework provided by E@H used an earlier SDL version, and then the industry changed a lot as previously noted in this thread, so SDL has now been updated to cope with said difficulties.

So, I'll review the new SDL offering. As per Baldrick : I may have a cunning plan ..... :-)

Cheers, Mike.

* Who ( if you'll pardon a proud father for mentioning ) has just finished his IT degree with distinction! :-)

** Why am I not surprised ??

( edit ) I ought to add that SDL has been spruced up largely by the devs at Valve. That's because they use it for their products, and it's neat that they've made their efforts available, essentially as open source.

( edit ) Note that OpenGL ES v1.x basically maps to OpenGL v1.5 ie. fixed function pipeline model. OpenGL ES v2.x maps to OpenGL v2.0 but you must provide and use shaders ( via OpenGL Shading Language ), and thus OpenGL ES 2.x is not backwards compatible with OpenGL ES 1.x .... got that ?? :-) :-)

[ OpenGL 2.0 allows shaders to be used but doesn't require them. ]

So OpenGL ES 1.x vs OpenGL ES 2.x is nearly the same dichotomy as an OpenGL backwards compatible context vs an OpenGL forward compatible context. The base distinction is whether the underlying hardware contains actual devices that may readily behave as the programmable shader units defined in the API.
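To make the distinction concrete, under ES 2.0 you must supply at minimum a pair like the following ( a bare-bones sketch; the attribute/uniform names are purely illustrative ), whereas ES 1.x / OpenGL 1.5 did the equivalent work for you in fixed function :

[pre]// --- vertex shader ( GLSL ES ) ---
attribute vec4 a_position;
attribute vec4 a_color;
uniform mat4 u_mvp;            // you supply the transform as well!
varying vec4 v_color;
void main() {
    v_color = a_color;
    gl_Position = u_mvp * a_position;
}

// --- fragment shader ( GLSL ES ) ---
precision mediump float;
varying vec4 v_color;
void main() {
    gl_FragColor = v_color;
}[/pre]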

Don't worry if you're confused, because you're in good company if so. Just join the queue.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 23

Message 78281 in response to message 78280

Keep it up, Mike! And congrats to your son - I guess he's a real chip off the old block :-)

Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

Message 78282 in response to message 78281

Quote:
Keep it up, Mike! And congrats to your son - I guess he's a real chip off the old block :-)


Thank you! And yes he is, alas ..... :-) :-)

Now here's a thing or two which may solve a number of current issues pretty well simultaneously, simplifying development and also somewhat future-proofing the code :

- Plan for more target development paths ie. include iOS ( iPads et al ) and Android ( phones ... ). I already have a top level build script which creates any/all directories and other dependencies ( one "ROOT" per target ), copies one's latest and greatest source efforts and builds using a single target applicable build script. Expand that, initially by stubbing, if only to make 'apps' rather than screensavers. Heck, we're already cross-compiling anyway! :-)

- Accept usage of 'forward compatible contexts' ie. suck it up Mike and learn to use shaders. My son has just done all that stuff, and he reckons it's not too hard*. This would mean that all targets that SDL can reach could be made subject to a common OpenGL coding subset at source level : iOS and Android ( OpenGL ES 2.0 ), Windows, Linux and Mac OS X ( OpenGL 3.2+ ).

- I've already written a heap of code using Vertex Buffer Objects, which is the ultra blessed mode of data transfer/storage to/on the video card, and so that will be fine. VBOs are the paradigm of choice as per the Khronos Group ( a minimal setup sketch follows this list ).

- Display lists are deprecated/gone, which only really affects OGLFT. However I've discovered SDL_ttf, with which one may access an in-memory font ( placed there by your ResourceCompiler et al ), slap it on to an SDL_Surface in blended mode ( ie. alpha set to suitable transparency ), and then you're off and running to Render City. Slight issue with kerning : there isn't any! :-)
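Regarding the VBO point above, the minimal setup sketch goes something like this ( illustrative identifiers only ) :

[pre]// One vertex shown : x, y, z then r, g, b ( illustration only ).
GLfloat star_data[] = { 0.0f, 0.0f, 0.0f,   1.0f, 1.0f, 1.0f };

GLuint vbo = 0;
glGenBuffers(1, &vbo);                          // create a buffer object name
glBindBuffer(GL_ARRAY_BUFFER, vbo);             // bind it to the vertex data target
glBufferData(GL_ARRAY_BUFFER, sizeof(star_data),
             star_data, GL_STATIC_DRAW);        // ship the data ( most likely ) to the card
glBindBuffer(GL_ARRAY_BUFFER, 0);               // unbind until render setup time[/pre]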

I can reap the benefits of a structured/layered approach with encapsulation, as the above changes will be confined to specific areas eg. code that uses objects of my Renderable class will never know of the implementation changes ( upgrading to shader use etc ) to the public methods.

I can and will bring the good old Starsphere up to speed with this, a new/modern version with precisely the same appearance and behaviour. Indeed I'll make that my first product to emit under such a new schema, as it is relatively simple to change and test.

Cheers, Mike.

* What has happened to OpenGL is that it has morphed from a graphic designer's view ( high level rendering ideas and constructs ) toward a hardware programmer's cross-platform interface to on-card shader units. For instance much of the previous OpenGL Utility Library stuff is out, given that it relies on the older style eg. quad primitives have been removed. OpenGL has deprecated/removed so much that it is now largely a parameter passing structure for pipelined and parallel processors. I'm not saying that this is bad, just that the transformation is quite significant compared to circa 2002. Sure the control of visual output is stronger, but at the cost of the application programmer now bearing the burden of writing the geometric transforms for viewing and the sometimes tricky lighting calculations. You still get primitive assembly, rasterisation, clipping and viewport work done for you though. I guess they had to follow the professional developer market rather than hobbyists like myself. The novitiate learning curve is rather steeper than the original API .... you need much more under-the-hood knowledge, which to my mind somewhat defeats the abstractive purpose of an API. Well, that's my two cents.

( edit ) NB : SDL_ttf runs on top of FreeType, so no change there ! :-)

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

OK, my lad has tested SDL_ttf with OpenGL code : works a treat ! A font choice yields character glyphs which are mapped ( with transparency ) to an SDL_Surface; said surface is then transformed to wherever you want ( it is actually mapped as a texture during the fragment shader stage ). In 3D that'll give an object in the model space ( perspective projection ), or if using an orthographic projection you'll get HUD-like behaviour on the near frustum face.
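In rough code terms the path is as follows ( a sketch only : the real thing would pull the font from memory via the ResourceCompiler rather than a file path, and all error checking is omitted ) :

[pre]TTF_Init();
TTF_Font* font = TTF_OpenFont("SomeFont.ttf", 24);     // placeholder font/size
SDL_Color white = { 255, 255, 255, 255 };

// Blended mode : the alpha channel carries the anti-aliasing/transparency.
SDL_Surface* raw = TTF_RenderText_Blended(font, "Einstein@Home", white);

// Normalise the pixel layout so the bytes match GL_RGBA ordering.
SDL_Surface* text = SDL_ConvertSurfaceFormat(raw, SDL_PIXELFORMAT_ABGR8888, 0);

GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, text->w, text->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, text->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

SDL_FreeSurface(text);
SDL_FreeSurface(raw);
TTF_CloseFont(font);[/pre]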

Now for a flavour of the 'new' vs 'old' approaches. A typical backwards compatible approach to rendering some series of colored points ( eg. Starsphere ) :

[pre]glNewList(Stars, GL_COMPILE);     // compile a display list, the old fixed-function way
glBegin(GL_POINTS);
{ // Some iterative construct over the stars
   glColor3f(r, g, b);            // per-star color ...
   glVertex3f(x, y, z);           // ... and position
}
glEnd();
glEndList();
.
.
.
glCallList(Stars);                // later, once per rendering frame
.
.
.
[/pre]
.... and now the new paradigm, mentioning all the function calls ( listed in typical order of use ) that one must invoke to get any similar output ( where glDrawArrays() finally triggers the rendering after all else is set up ) :
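Roughly, in code ( a sketch of my own with made-up identifiers, not a complete program ) :

[pre]GLuint vao = 0, vbo = 0;

glGenVertexArrays(1, &vao);                 // a VAO to manage the vertex fetch state
glBindVertexArray(vao);

glGenBuffers(1, &vbo);                      // a buffer object for the vertex data
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(star_data), star_data, GL_STATIC_DRAW);

// Describe how the shader's input attribute maps onto the buffer contents.
glVertexAttribPointer(position_attrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(position_attrib);

// Per frame :
glUseProgram(shader_program);               // the compiled & linked GLSL 'program'
glBindVertexArray(vao);
glDrawArrays(GL_POINTS, 0, star_count);     // finally triggers the rendering[/pre]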

... being only the vertex fetching end of the pipeline. The vertex shader must specify, say, any 3D viewing transforms in detail ( 4x4 matrices ). To do color one could grab lumps of RGBA values on a per vertex basis, say from the buffer object, then pass those unmodified through the vertex shader to be handled down the other end of the line by a suitable fragment shader. There's a comparable amount of complexity down that end, rather more so if textures are to be applied. For textures the application programmer must write a 'sampler' to pluck texels from a given buffer containing a texture map ( relying upon texture co-ordinates likewise drawn per vertex, and passed unmodified through the vertex shader on down the line ). The application programmer must write those shaders; there is no lazy default.

So I was not kidding when I said the abstractive level of the OpenGL API has radically changed ! :-) :-)

Now of course a keen C++ programmer would grab everything contained within the vertex fetch box ( blue dashed outline ) and abstract*. Likewise one would subsume the detail of the vertex and fragment shader code ( written in the OpenGL Shading Language, GLSL ) : compilation, attachment and linkage into a GLSL 'program'.
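That housekeeping boils down to something like the following ( a bare-bones sketch : error and log checking omitted, source strings assumed to be in hand ) :

[pre]GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertex_source, NULL);      // vertex_source : null terminated string
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragment_source, NULL);
glCompileShader(fs);

GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);

// The shader objects may be flagged for deletion once linked into the program.
glDeleteShader(vs);
glDeleteShader(fs);[/pre]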

Cheers, Mike.

* Effectively rolling your own higher level API.

( edit ) I have conflated some definitions, alas. Strictly speaking an OpenGL buffer is neither typed nor attached until bound to a target, thus a vertex array object may pull vertex attributes from a buffer object but not inevitably so. Thus by 'vertex buffer object' I mean a buffer bound to a vertex array object as illustrated in the diagram ( which is BTW not the only way to configure the 'front' end ). Generally buffers are server-side objects ( typically an allocation within the video card's memory ) which can be bound at a number of points in the pipeline. It is quite possible with some targets ( read : pipeline stage ) to have several simultaneously current/active buffer bindings; in such instances there is a methodology ( layout & location ) to distinguish each from another within the relevant shader code.
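By way of illustration, within the shader that distinguishing looks something like this ( noting that explicit locations on vertex inputs need GLSL 3.30+ or the ARB_explicit_attrib_location extension; under 1.50 one uses glBindAttribLocation()/glGetAttribLocation() instead ) :

[pre]layout(location = 0) in vec4 in_position;   // fed from one attribute/buffer setup
layout(location = 1) in vec4 in_colour;     // fed from another[/pre]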

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

Right. I've been investigating shaders. The pipeline requires a vertex shader and a fragment shader if you want graphics to appear upon some visible client area ( whole screen, some part of an OS window etc ). There are other optional shader types ( compute, tessellation control, tessellation evaluation and geometry ) which I'm not interested in. Well, at least not any time soon. :-)

[ The term shader BTW is a tad misleading : most of them don't determine, directly or indirectly, the shade of color for a pixel. I guess that name is something historical. ]

Generally I want to emit graphics primitives ( points, lines and triangles ) with a color per vertex, color interpolation upon areas ( including textures ), plus normals to provide for lighting. I've started by looking at methodologies for the 'front' end ie. the vertex shader. There are a number of possibilities for providing the pipeline with data from an application using the OpenGL forward compatible interface. Let's call them 'use cases' for want of a better term.

The simplest :

.... all vertex shader instances must assign a value to the specially named variable gl_Position ( a four component position vector with floating point entries ). If you simply set the fourth component of gl_Position to unity ( 1 ) then it'll behave like an ordinary 3D vector and you can ignore all the projective geometry guff. The vertex shader is, ultimately, an array of zero/null terminated strings that are input at run time to a GLSL compiler then linker etc. In this instance the code within the shader will generate its own vertex attributes. However there must be a bound Vertex Array Object to manage the input to the shader, and trigger any processing. A glDrawArrays() call will instigate actual activity of the pipeline; in a 'real' program this would be once per rendering/animation cycle/frame.
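A bare-bones sketch of that simplest case ( desktop GLSL 1.50 shown, the shader inventing a hard coded triangle; 'program' is assumed to be an already compiled and linked GLSL program ) :

[pre]// --- vertex shader source ---
#version 150 core
const vec3 corners[3] = vec3[3](vec3(-0.5, -0.5, 0.0),
                                vec3( 0.5, -0.5, 0.0),
                                vec3( 0.0,  0.5, 0.0));
void main() {
    gl_Position = vec4(corners[gl_VertexID], 1.0);   // fourth component set to unity
}

// --- client side ---
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);              // a bound VAO must exist even with no attributes
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 3);    // instigates the pipeline[/pre]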

Here we'll pull one vertex at a time :

... same deal as above with the addition of a client side array. Well, it may have only one entry, as we provide a pointer through glVertexAttrib*() before we invoke glDrawArrays(). Whatever loop contains those two calls must update, if at all, the pointer into said client array. The '*' on glVertexAttrib() is a placeholder for several variants describing different operand types eg. glVertexAttrib4fv().
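Sketched ( illustrative names again; note that the attribute's array is disabled here, so the pipeline uses the 'current' value set by glVertexAttrib4fv() ) :

[pre]GLfloat positions[][4] = { { 0.0f, 0.0f, 0.0f, 1.0f },
                           { 0.1f, 0.2f, 0.0f, 1.0f } };    // the client side array

for (size_t i = 0; i < sizeof(positions)/sizeof(positions[0]); ++i) {
    glVertexAttrib4fv(position_attrib, positions[i]);   // pointer into the client array
    glDrawArrays(GL_POINTS, 0, 1);                       // one vertex per call
}[/pre]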

Now we pull a series of vertices :

... where we have created and populated a buffer object ( ie. most likely in memory on the video card/subsystem ) and will now pull some number of vertices from it for each call of glDrawArrays(). Thus far glDrawArrays() has been used which is an un-indexed command.

By extension one can select a given vertex, or series of, by providing yet another array which contains indices into the first :

... where glDrawElements() signals the use of indexing. One call to that will trigger some number of 'shovel-fulls' of distinct vertex data into the pipeline, not necessarily in the order of storage within the buffer object that contains the vertex info. Phew, I think that pretty well covers all the behaviours that I want ! :-):-)
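A sketch of the indexed case ( the earlier vertex buffer and attribute setup are assumed to still be in place ) :

[pre]GLushort indices[] = { 0, 2, 1, 3 };     // pick vertices in whatever order you like

GLuint ibo = 0;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Per frame : one call shovels the selected vertices down the pipeline.
glDrawElements(GL_POINTS, 4, GL_UNSIGNED_SHORT, 0);[/pre]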

In fact I've already written ( and encapsulated ) very similar code for the parts of the diagrams to the left of the shader. Stay tuned ... :-)

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

Ah. Some words to the wise, a nod's as good as a wink etc :

This is weird. You are writing on both sides of an interface. Yes you are. You are using the client side of an interface to write detailed server side contents/behaviour. And if you don't do that, it won't happen*.

We've looked at how to throw in vertex data at the front end. Got that ? Well suppose you want ( and who wouldn't ? ) the vertices to be transformed to some viewpoint : appropriately rotated, translated, scaled and projected ( either in a perspective or an orthographic manner ). You'll likely be morphing that viewpoint in some regard from frame to frame in an animation. At a minimum you have to provide a 'model-view' matrix on a per-frame basis. For argument's sake suppose you've done the math of the transform correctly. The gl_Position variable emitted from the vertex shader exists in what's called 'clip space', because from then on you get clipping of objects ( ie. against the inside of the projection volume, called a 'frustum' - a truncated pyramid - for perspective or a rectangular box for orthographic ) then stuffing what's left into some (sub-)section of an OS window 'canvas'. What follows is somewhat abbreviated. A vertex shader might look like this :
[pre]#version 150 core

uniform mat4 model_view_projection;

in vec4 in_pos;

void main()
{
   gl_Position = model_view_projection * in_pos;
}[/pre]
.... where did model_view_projection come from, and what's uniform ? Uniform is easy : it means the value doesn't vary per vertex. As for model_view_projection, you perform something like the following in your client code :

[pre]glm::mat4 ortho_proj;      // projection ...
glm::mat4 view_matrix;     // ... view ...
glm::mat4 model_matrix;    // ... and model matrices, filled in per frame

osd_program->uniformMatrix4f(osd_mvp_uniform, ortho_proj * view_matrix * model_matrix);[/pre]
... here 'glm::mat4' accesses the 'glm' namespace ( a helper library of GL-centric code which mirrors the GLSL 'mat4' type as a C++ type ) to define a 4x4 matrix. Now uniformMatrix4f() is a member function of a shader program ( a C++ user-defined type ) of which osd_program is an instance ( well, to be exact, a pointer to one ) :

[pre]void ShaderProgram::uniformMatrix4f(GLuint location, const glm::mat4& mat)
{
glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(mat));
}[/pre]
... thus loading the value of 'location' with that of 'osd_mvp_uniform' so that the transform matrix value ( client-side code ) appears where model_view_projection is ( server-side code ). Got that ? :-)
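( In case you're wondering where 'osd_mvp_uniform' itself came from : typically one asks the linked program for it once at setup time, along these lines, where 'program_handle' is illustrative )

[pre]GLint osd_mvp_uniform = glGetUniformLocation(program_handle, "model_view_projection");[/pre]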

Believe it or not, this is one of the simpler methods to achieve this ( per my son, whose code I am demonstrating ). This is rather like having a valet park a car for you when you drop it off, then another valet takes the keys and shifts it somewhere else again. But between valets the car is only known as 'that which is currently parked in spot X'. Levels of indirection, same object under different aliases either side of a transfer zone.

Isn't this a hoot !! :-)

Cheers, Mike.

* and they have the nerve to call it an Application Programming Interface .... sigh. It should be renamed Write A Bloody Back End Yourself ( WABBEY ). What I smell is a waft from the back end of a Committee Camel. :-) :-)

( edit ) Thus 'osd_mvp_uniform' and 'location' and 'model_view_projection' effectively refer to the same car parking space. I hope this analogy works ....

( edit ) What comes to mind now : I might use a more professionally written library for the matrix/vector stuff. There is 'glm' as above, but I also know of a 'vmath' library ( free ) produced by one of the guys that wrote the 6th edition of the 'OpenGL SuperBible'. It is header only ( vmath.h ), it is templated for a number of types, mostly inlined, and has all that transform manipulation guff safely squared away. Looks like a smooth move. I know I've written my own Vector3D class, which has worked fine to date, but it seems I need to go up a notch here. I can't be bothered writing a matrix class, pleasant though that would be ..... :-)

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

OpenGL Transforms Short List Ready Reckoner ( for OpenGL 3.2 'onwards' and OpenGL ES 2.0+ ) :

No in-built language transform help : so either DIY or get a good library ( preferred ). The 'glu' facilities are also out of play, as GLU was a layer on top of prior GL code standards which are now removed. Also there are no matrix stacks, no stack selection, no pushing & no popping.

GLSL has matrix and vector types : fortunately 'first class' in a style of expected syntax and semantics. GLSL is much like C in appearance and behaviour. This implies that when passing data into shaders and pipelines the 'external' type representation ( client code ) must match the 'internal' type representation ( GLSL within shader ), or be suitably transformed.

Right handed Cartesian axes : ( untransformed/default 'camera' setup ) X-axis increases to the right of view, Y-axis increases to the top of view, BUT the Z-axis increases towards the viewing position ie. objects further away have more negative Z coordinates.

OpenGL distance units are arbitrary : there is no scaling to any physical device characteristics. Specifically sizes on a rendered view will be resolution independent. Compare two monitors of the same physical dimensions, then any given OpenGL code will give the same object sizes on screen for both. The higher resolution monitor will have more pixels representing a given object visible on screen ( ie. that is what higher resolution means ).

You must emit gl_Position from the vertex shader on each pipeline use : otherwise the results are 'undefined'. This may include the program behaving as you expect it 'ought', but in all likelihood not. A classic example of 'absence of evidence is not evidence of absence' .....

gl_Position is a 4-component vector : the fourth entry is the w coordinate giving the homogeneous representation. This is so named as equations representing geometric entities like lines and surfaces will have equal degrees - numerical powers - for each grouping of factors, when a homogeneous system is used. Note that in math terms, going from (x, y, z) to (x, y, z, w) is a change of 'space'. They ain't the same cow and, being four dimensional, an object specified in homogeneous coordinates is hard to visualise! My 'simpler' mental handle is that w is an attribute in addition to the usual x, y and z values for indicating positions in 3D space.

The projection transform : is represented by a 4x4 matrix, and is the last transform to be applied before gl_Position is emitted. That transform may alter the w value.

- for an orthographic projection that one might use for a HUD or a technical drawing ( 'plan' view or even isometric ) w won't be changed.

- for a perspective projection a new w value serves the role of remembering what factor needs to be applied later ( during perspective division ) to scale objects so that they foreshorten, meaning that normally we humans expect a fixed size object to diminish in subtended angle as it is placed further from our eye.

For preference set w = 1 when specifying vertex positions prior to the transform operations. But if you can't restrain yourself from fiddling with it then beware, as the later 'division by w' that brings you back to 3D space means :

- w = 0 means 'at infinity'.

- a negative value will effectively flip the direction sense of all axes.

- an ( absolute ) value less than unity ( 1 ) will effectively move a vertex further away & objects built from same will seem smaller in a perspective viewpoint ( due to the usual effect of later transforms ).

- an ( absolute ) value greater than unity ( 1 ) will effectively move a vertex closer in & objects built from same will seem larger in a perspective viewpoint ( due to the usual effect of later transforms ).

Don't be silly, get a decent library to do all this matrix/transform crap for you : just understand the library's interface, understand the generalities of transformations, then cook on from there. Carefully decide upon the order in which to apply transforms. As a general statement : if you change the order the results will be different. Some interchanges 'work' in that there is no change in outcome if you swap ( that's called commutation ) but it's best not to rely on that.

As there is some ambiguity about how one thinks about transforms, I prefer the following 'algorithm' ( a code sketch follows the list ) :

- construct an object ( specify a series of vertices grouped into primitives ) while thinking in terms of 'local' coordinates. So one point of said object could be given (0, 0, 0) as an origin, with all other vertices referred to that.

- because of the paradigm that OpenGL works under this construction effectively takes place at/around the (0, 0, 0) origin of the 'world space'.

- if desired scale the object to bigger/smaller, and I include 'skewing' here ( different scaling along different axes ).

- if desired rotate the object.

- then translate the object to some desired location.

- apply the model/view transform. In OpenGL going from 'model space' to 'view space' can be lumped into a single matrix. Here lies the ambiguity mentioned above. Other 3D graphics platforms/products do bung in two extra explicit steps, from model space to world space plus from world space to view space. The spaces are named so that those of us who use several products can correlate the meaning of the transforms across said products. If you only use OpenGL this will rather confuse, as any mention of a world space is superfluous ! Or you can do what I do, which is to simply say that all objects are constructed at an origin (0, 0, 0) and then morphed, translated etc. On the math side it is a matter of using either a single 4x4 matrix ( OpenGL ) or two separate 4x4 matrices applied sequentially to a 4x1 vector. The single matrix is the product of the other two, or if you like the single matrix has been factorised into the other two. Phew !! :-)

- finally apply the projective/orthographic projection. Now you are in 'clip space'.
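Here is that recipe sketched with glm ( the values are made up, and note that modern glm wants angles in radians, hence glm::radians() ) :

[pre]#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Construct at the origin, then scale, rotate and translate ( the model matrix ) :
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, -5.0f))
                * glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 1.0f, 0.0f))
                * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));

// Then the view ( 'camera' ) and the projection :
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 20.0f),    // eye
                             glm::vec3(0.0f, 0.0f, 0.0f),     // looking at
                             glm::vec3(0.0f, 1.0f, 0.0f));    // up
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f/600.0f, 0.1f, 100.0f);

// Note the right-to-left order of application when it's all rolled together :
glm::mat4 mvp = projection * view * model;[/pre]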

Values in the depth buffer are determined by the vertices' z-components : this isn't necessarily a linear effect. Indeed a perspective transform will typically have about half of the depth buffer value range coming from the nearest 15% to 20% of the frustum volume. The advantage of this for simulating 3D ( which is what we're doing in perspective mode ) is that the close-in stuff gets better depth disambiguation where it matters, for an improved 3D look. The human visual system has much the same character. Of course one can stuff about with this if you want to go for funny effects.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 23

Message 78287 in response to message 78286

Nice write-ups Mike!

Thanks,
Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6590
Credit: 319157617
RAC: 413281

Message 78288 in response to message 78287

Thank you Oliver and thanks for reading ! :-)

I'd just be older and wiser than I used to be ..... plus I'm getting some tuition from a young bloke that I know. :-)

I'm now reading the latest Khronos official 'difference specification' for OpenGL ES vs OpenGL ie. what functional subsets you lose in going with ES ( ES is contained within the full 'desktop' edition ). Thus far nothing of real note for our purposes, fortunately. Looks like OpenGL ES 2.0+ is the way to go. We can get all that we want and more, and with the newest SDL a five way build system ( hopefully ).

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 23

Message 78289 in response to message 78288

Sounds like fun :-) Merry Christmas and my best wishes for 2014!

Oliver

Einstein@Home Project
