Write your own Einstein@home screensaver

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Just some notes for the record ..... hard lessons not immediately evident from texts. :-) :-)

As OpenGL is a state machine, any calls prior to establishing the rendering context ( ie. the machine is initialised ) will not be honoured! Soooo .... DON'T put OpenGL code in constructors as a rule ( I was wanting to be clever and put display list definitions in them ). Depending on where a class object is visible/declared in your code, it may be constructed before or after ( various C++ rules apply ) the GL machine's startup. [ yeah, go debug that sucker ..... :-( ]
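By way of illustration only - the class and names here are made up, not anything in the actual E@H code - the pattern I mean is to keep the constructor GL-free and defer the state machine work to a method you only call once the context exists :

[pre]#include <GL/gl.h>

class Axes {
   public:
      // Constructor : NO OpenGL calls here, the context may not exist yet.
      Axes() : list_ID(0), prepared(false) {
      }

      // Call this only AFTER the rendering context has been created.
      void prepare() {
         list_ID = glGenLists(1);
         glNewList(list_ID, GL_COMPILE);
            glBegin(GL_LINES);
               glVertex3f(0.0f, 0.0f, 0.0f);
               glVertex3f(1.0f, 0.0f, 0.0f);
            glEnd();
         glEndList();
         prepared = true;
      }

      // Safe to call every frame - does nothing until prepare() has run.
      void render() const {
         if(prepared) {
            glCallList(list_ID);
         }
      }

   private:
      GLuint list_ID;
      bool prepared;
};[/pre]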

When making texture objects use Oliver's method as he mentioned earlier, ie. the first step is to create an SDL_Surface from a Resource, and in the second step you use the SDL_Surface you just created and turn it into an OpenGL texture. It works a treat! :-)

Also when using OpenGL texture objects ( yep, I finally got it to work! ) you need at least TWO calls to glBindTexture(). The first establishes an identified spot in memory ( hopefully something fast or on the card ) which you then further define with an image, sizes, formats et al. The later calls occur with each rendering instance ( ie. once per frame ) to indicate ( when texture co-ordinates are generated with vertices for example ) the image to be applied.
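In bare-bones form ( an illustrative fragment only - width, height and pixels are assumed to have come from an image you've already loaded, say via an SDL_Surface ) :

[pre]// Setup, once the context exists : create and define the texture object.
GLuint texture_ID;
glGenTextures(1, &texture_ID);
glBindTexture(GL_TEXTURE_2D, texture_ID);             // first bind : creates/selects the object
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height,
             0, GL_RGB, GL_UNSIGNED_BYTE, pixels);    // define the image for THAT object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Per frame : make it the 'current' texture before emitting textured geometry.
glBindTexture(GL_TEXTURE_2D, texture_ID);
// ... glBegin() / glTexCoord2f() / glVertex3f() / glEnd(), or glCallList() ...[/pre]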

Some simple forms - a sphere especially - can be created using GLU's quadrics objects. Really simple when you eventually work out how .... ;-)
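For the record, the bare minimum I mean ( a sketch only, using a unit sphere ) :

[pre]// A textured, smoothly-lit sphere via GLU quadrics.
GLUquadric* sphere = gluNewQuadric();
gluQuadricDrawStyle(sphere, GLU_FILL);    // solid fill, not wireframe/points
gluQuadricNormals(sphere, GLU_SMOOTH);    // per-vertex normals for lighting
gluQuadricTexture(sphere, GL_TRUE);       // generate texture co-ordinates too
gluSphere(sphere, 1.0, 36, 18);           // radius, slices ( longitude ), stacks ( latitude )
gluDeleteQuadric(sphere);[/pre]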

The way to create objects in the scene is to construct at the origin, then translate and/or rotate ( careful of the order! ) to a final position - while saving and restoring the transform state via glPushMatrix() and glPopMatrix() with glMatrixMode(GL_MODELVIEW) selected. It also helps to bone up on 3D rotation rules ie. non-commutation.
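A sketch of the idiom ( drawObjectAtOrigin() being a stand-in for whatever actually emits the vertices ) - and note that the transform written closest to the drawing call is the one applied to the vertices first, hence the 'careful of the order' :

[pre]glMatrixMode(GL_MODELVIEW);

glPushMatrix();                          // remember the current transform
   glTranslatef(10.0f, 0.0f, 0.0f);      // ... then slide the result out along +x
   glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   // spin the object about z first ...
   drawObjectAtOrigin();                 // geometry modelled about ( 0, 0, 0 )
glPopMatrix();                           // restore : the next object starts afresh[/pre]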

[ I've created a little spacecraft like movement/model inside the scene ( drive around via the keyboard ) for inspecting results, which I reckon I'll leave in as a feature. Now for some fog, lighting, blending ...... Sun, ecliptic, search markers, constellation labels, a new HUD style with a scrolling-marquee/ticker-tape ...... etc .... a feature-itis meter perhaps .... :-) ]

Cheers, Mike.

( edit ) Any hope of gaining the top row numeric keys? That'd mean an expansion of the AbstractGraphicsEngine::KeyBoardKey enumeration .... what's the go with the way they're hardcoded ( successive bit shifting to the left )? A constraint there?

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

Message 78059 in response to message 78058

Hi Mike!

Quote:
Also when using OpenGL texture objects ( yep, I finally got it to work! ) you need at least TWO calls to glBindTexture(). The first establishes an identified spot in memory ( hopefully something fast or on the card ) which you then further define with an image, sizes, formats et al. The later calls occur with each rendering instance ( ie. once per frame ) to indicate ( when texture co-ordinates are generated with vertices for example ) the image to be applied.

I'm not sure that this is entirely correct. You need to load the texture (image) into memory and bind it to the GL context. This is usually done by the library/API that provides the context (e.g. SDL, Qt, GLUT). Later on you call glBindTexture() to apply it to a given object. The first call, though, should not be called glBindTexture(), but something non-OpenGL, like simply bindTexture().

BTW, if you're interested how this was done, check out the following repo and have a look at the isolated branch:

[pre]git://git.aei.uni-hannover.de/public/brevilo/pulsatingscience.git[/pre]
That thingie might give a few more clues about using OpenGL...

Quote:
Any hope of gaining the top row numeric keys? That'd mean an expansion of the AbstractGraphicsEngine::KeyBoardKey enumeration .... what's the go with the way they're hardcoded ( successive bit shifting to the left )? A constraint there?

I'm open to anything that's useful :-) Add those keys to the list of those supported, get rid of the list entirely (without introducing major drawbacks), come up with something even smarter...

Cheers,
Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Message 78060 in response to message 78059

Quote:

Hi Mike!

Quote:
Also when using OpenGL texture objects ( yep, I finally got it to work! ) you need at least TWO calls to glBindTexture(). The first establishes an identified spot in memory ( hopefully something fast or on the card ) which you then further define with an image, sizes, formats et al. The later calls occur with each rendering instance ( ie. once per frame ) to indicate ( when texture co-ordinates are generated with vertices for example ) the image to be applied.

I'm not sure that this is entirely correct. You need to load the texture (image) into memory and bind it to the GL context. This is usually done by the library/API that provides the context (e.g. SDL, Qt, GLUT). Later on you call glBindTexture() to apply it to a given object. The first call, though, should not be called glBindTexture(), but something non-OpenGL, like simply bindTexture().


Sorry, I've poorly worded that. The relevant code is :

[pre]/***************************************************************************
* Copyright (C) 2010 by Mike Hewson *
* *
* *
* This file is part of Einstein@Home. *
* *
* Einstein@Home is free software: you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published *
* by the Free Software Foundation, version 2 of the License. *
* *
* Einstein@Home is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with Einstein@Home. If not, see <http://www.gnu.org/licenses/>. *
* *
***************************************************************************/
#include "Earth.h"

const GLfloat Earth::EARTH_GRID_RADIUS(EARTH_RADIUS);
const GLint Earth::EARTH_GRID_SLICES(36);
const GLint Earth::EARTH_GRID_STACKS(18);

const GLfloat Earth::EARTH_RGB_RED(0.2f);
const GLfloat Earth::EARTH_RGB_GREEN(0.8f);
const GLfloat Earth::EARTH_RGB_BLUE(0.95f);

const char* Earth::TEXTURE_FILE_NAME("worldmap.bmp");

Earth::Earth() : globe(NULL),
surface(NULL),
textureID(OPEN_GL_NO_TEXTURE_ID),
texture_format(0),
nOfColors(0) {
}

Earth::~Earth() {
}

void Earth::prepare() {
// We'd like textures.
glEnable(GL_TEXTURE_2D);

// Ensure the image on file gets loaded into the SDL_Surface object.
if(loadImageForTexture()) {

// Have OpenGL generate a single texture object handle for us.
glGenTextures(1, &textureID);

std::cout << "Earth::prepare() - acquired texture object ID " << textureID << std::endl;

// Bind the texture object ( first use, so this also creates it ), then
// load the image data from the SDL_Surface into it.
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, // a 2D texture
0, // base detail level ( no mipmaps )
nOfColors, // number of color components
surface->w, // width in pixels
surface->h, // height in pixels
0, // no border
texture_format, // the bitmap format type as discovered above
GL_UNSIGNED_BYTE, // how we are packing the pixels
surface->pixels); // the actual image data from an SDL surface

// Set the texture's stretching properties
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Do we wrap the map? NO!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

// How it maps when texels and fragments/pixels areas don't match.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

// Use the known display list handle.
glNewList(get_render_ID(), GL_COMPILE);
// You want to paste the image on, with no show-through of what's beneath.
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glBindTexture(GL_TEXTURE_2D, textureID);

// What are you sticking onto?
globe = gluNewQuadric();
gluQuadricDrawStyle(globe, GLU_FILL);
gluQuadricNormals(globe, GLU_SMOOTH);
gluQuadricTexture(globe, GL_TRUE);

glColor3f(Earth::EARTH_RGB_RED,
Earth::EARTH_RGB_GREEN,
Earth::EARTH_RGB_BLUE);
glPushMatrix();
glRotatef(-90.0f, 1.0f, 0.0f, 0.0f);
gluSphere(globe,
Earth::EARTH_GRID_RADIUS,
Earth::EARTH_GRID_SLICES,
Earth::EARTH_GRID_STACKS);
glPopMatrix();
// Unbind the texture? Hmm ......
glEndList();
gluDeleteQuadric(globe);
}
else {
// In this case when glCallList() is later invoked it'll be empty. No biggie.
std::cout << "Earth::prepare() - texture image did not load!" << std::endl;
}
}

bool Earth::loadImageForTexture() {
bool returnFlag = true;

// Ask SDL to load the image file into an SDL_Surface.
surface = SDL_LoadBMP(Earth::TEXTURE_FILE_NAME);
if(surface != NULL) {
// Check that the image's width is a power of 2.
if((surface->w & (surface->w - 1)) != 0) {
std::cout << "warning : the image's width is not a power of 2" << std::endl;
}

// Also check that the image's height is a power of 2.
if((surface->h & (surface->h - 1)) != 0) {
std::cout << "warning : the image's height is not a power of 2" << std::endl;
}

// Get the number of channels in the SDL surface.
nOfColors = surface->format->BytesPerPixel;
std::cout << "number of colors is " << nOfColors << std::endl;

if(nOfColors == 4) { // contains an alpha channel
if(surface->format->Rmask == 0x000000ff) {
std::cout << "format is RGBA" << std::endl;
texture_format = GL_RGBA;
}
else {
std::cout << "format is BGRA" << std::endl;
texture_format = GL_BGRA;
}
}
else if(nOfColors == 3) { // no alpha channel
if(surface->format->Rmask == 0x000000ff) {
std::cout << "format is RGB" << std::endl;
texture_format = GL_RGB;
}
else {
std::cout << "format is BGR" << std::endl;
texture_format = GL_BGR;
}
}
else {
std::cout << "warning : the image is not truecolor ... it has "
<< nOfColors << " colors." << std::endl;
// TODO this error should not go unhandled
}
}
else {
std::cout << "Earth::loadImageForTexture() - SDL could not load "
<< Earth::TEXTURE_FILE_NAME
<< " image!" << std::endl;
returnFlag = false;
}
return returnFlag;
}[/pre]
.... what I meant was that one ought to distinguish between the first introduction of the chosen image to the context, and then ( in my case ) the frame-by-frame choice of texture.

BTW the above routines are only called once each. 'Earth', like all of my scene components, is subclassed from 'Renderable', which invokes the glCallList() function to draw it. This way I separate preparation from use, and in 'Renderable' I have lazy evaluation to ensure preparation is done before rendering, ie. only if it's asked for! One can toggle scene elements in and out of display - axes etc. Note that prepare() is pure virtual in 'Renderable'.

[pre]void Renderable::draw() {
// Is this scene element to be shown at the moment?
if(active_flag == Renderable::ACTIVE) {
// Yes, so are the relevant preparations needed?
if(prepare_flag == Renderable::NOT_READY) {
// Yes they are, so get a display list ID
assignRenderID();
// and so prepare.
prepare();
// Remember that you've done this.
prepare_flag = Renderable::READY;
}
// Now the element can be inserted into the scene.
render();
}
}[/pre]
Anyway moving on, from the 'Red Book' :

Quote:
When the texture object is subsequently bound once again, its data becomes the current texture state. ( The state of the previously bound texture is replaced )


I was pointing this out because the texts I've read don't quite say it right ( in my view !!?? ) : you can have only one image waiting to be plastered onto some geometric construct at any given time. glBindTexture() truly has slightly different semantics ( a ) the first time, to then tell OpenGL, via subsequent glTexImage2D() and glTexParameteri() calls, the ( probably once-off ) definitions of the other texture object fields, and ( b ) within the display list, to tell OpenGL to swap this particular texture into the 'current' slot just prior to slapping it onto a sphere, in this case. I'll be using more than one texture .... and 'texture' as a word choice can get confusing, because one might mean the image, the C++ language object, the notion of a texture etc. I think I've got that pretty right. Gulp. :-)

General note : the brilliant idea of the glNewList() ...... glEndList() construct is that all ( OpenGL sub- ) expressions within it that can be evaluated ( careful ! ) are reduced to 'server side' hard values, thus potentially not requiring a reload/recalculation from the client side.
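For instance, a minimal sketch of the compile-once / call-many pattern :

[pre]// Once, at preparation time : compile the ( fixed ) commands into a display list.
GLuint list_ID = glGenLists(1);
glNewList(list_ID, GL_COMPILE);        // GL_COMPILE : evaluate and store, don't draw yet
   glBegin(GL_TRIANGLES);
      glColor3f(0.2f, 0.8f, 0.95f);
      glVertex3f(0.0f, 0.0f, 0.0f);
      glVertex3f(1.0f, 0.0f, 0.0f);
      glVertex3f(0.0f, 1.0f, 0.0f);
   glEnd();
glEndList();

// Then every frame : replay the stored 'server side' result with one call.
glCallList(list_ID);[/pre]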

Quote:

BTW, if you're interested how this was done, check out the following repo and have a look at the isolated branch:

[pre]git://git.aei.uni-hannover.de/public/brevilo/pulsatingscience.git[/pre]
That thingie might give a few more clues about using OpenGL...


Thanks! I'll sure check that out. :-)

Quote:
I'm open to anything that's useful :-) Add those keys to the list of those supported, get rid of the list entirely (without introducing major drawbacks), come up with something even smarter...


Ok. I'll get the laser pointer, the whipper snipper and The Lil' Banger Chemistry Set For WhizzKids out and see what I can finagle ... :-):-)

Cheers, Mike.

( edit ) For the record, that was my effort upon reading this ( Red Book p411 ):

Quote:
void glBindTexture(GLenum target, GLuint textureName);
glBindTexture() does three things. When using the textureName of an unsigned integer other than zero for the first time, a new texture object is created and assigned that name. When binding to a previously created texture object, that texture object becomes active. When binding to a textureName value of zero, OpenGL stops using texture objects and returns to the unnamed default texture.

When a texture object is initially bound (that is, created), it assumes the dimensionality of target, which is GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_CUBE_MAP, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, or GL_TEXTURE_BUFFER. Immediately on its initial binding, the state of the texture object is equivalent to the state of the default target dimensionality at the initialization of OpenGL. In this initial state, texture properties such as minification and magnification filters, wrapping modes, border color, and texture priority are set to their default values.

( edit ) Red Book == 'OpenGL Programming Guide Fifth Edition The Official Guide to Learning OpenGL, Version 2' ( 2006, Addison Wesley ISBN 0321335732 )

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

Message 78061 in response to message 78060

Ok, got it. Looks good to me!

Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Now here's a terrific little gem from the Red Book ( I add these sorts of things to this thread while they're hot in my mind ). Shadows. Not shading - that's a rather complicated area of discussion in OpenGL referring to surfaces and lighting models etc - but shadows. By this I mean a light source, an object, and some plane behind it ( with respect to the light position ) having areas which ought to be given a different color depending on the object occulting/intersecting some line from the source through the object to the plane. ( See p658 - 9, which I describe here. )

Already OpenGL has matrix representations for all manner of 3D transformations. These matrices are the blood in the arteries of the rendering engine. Now if one can project a 3D object ( the 'real' thing ) onto a 2D plane ( where the 'shadow' is ) via a matrix, then one can simply slot this into the engine for a given object. Then voila! - you have your region ( generically a planar polygon ) to color specially as 'the shadow'. So the plan is : construct the 'shadow' matrix for a given scene element, multiply it into the standard transform sequence, then color the result to please.

The Red Book has kindly done the math, giving a complex-looking but completely deterministic solution for the matrix elements for a given/arbitrary scene arrangement of light/object/plane. As a side project I reckon I'll try to construct some ( parameterised ) code to enact this, taking the desired scene as input ( light position and shadow plane ), accepting an array of vertices ( the 'object' ) and hence outputting another vertex array ( the 'shadow' ).
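A sketch of what I have in mind - the function name and usage are mine alone, not anything already in the codebase, and the math is just M = ( P.L ) I - L P^T for a plane P = ( a, b, c, d ) and a homogeneous light position L :

[pre]#include <GL/gl.h>

// Build the planar 'shadow' projection matrix M = ( P.L ) I - L P^T for the
// plane P = ( a, b, c, d ) ( ie. ax + by + cz + d = 0 ) and the homogeneous
// light position L ( w = 1 for a local light, w = 0 for a directional one ).
// The result is column-major, ready to hand straight to glMultMatrixf().
void shadowMatrix(GLfloat m[16], const GLfloat plane[4], const GLfloat light[4]) {
   GLfloat dot = plane[0]*light[0] + plane[1]*light[1] +
                 plane[2]*light[2] + plane[3]*light[3];

   for(int col = 0; col < 4; ++col) {
      for(int row = 0; row < 4; ++row) {
         m[col*4 + row] = ((row == col) ? dot : 0.0f) - light[row]*plane[col];
      }
   }
}

// Per shadow-casting object, with the camera already in the modelview matrix :
//
//    GLfloat shadow[16];
//    GLfloat ground[4] = { 0.0f, 1.0f, 0.0f, 0.0f };      // the y = 0 plane
//    GLfloat sun[4]    = { 50.0f, 100.0f, 30.0f, 1.0f };  // light position
//    shadowMatrix(shadow, ground, sun);
//
//    glPushMatrix();
//       glMultMatrixf(shadow);       // squash whatever is drawn next onto the plane
//       glColor3f(0.1f, 0.1f, 0.1f); // 'shadow' coloring of choice
//       drawObject();                // the same geometry, re-drawn as its shadow
//    glPopMatrix();[/pre]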

This can also be put to another use : reflections. By this I mean the light shining onto a reflective surface and then onto a plane. You see this when sunlight hits windows on the sunny side of the street, which throws bright areas onto the shadowed side of the street. Same idea really, but you brighten the color of these areas with respect to adjacent surrounds, rather than darken. One can easily slip in a reflection matrix ( it's a legitimate 3D transform ) to reposition the light - technically creating a virtual image of the 'real' light ( like one's face in the mirror ) - about some plane contained on/about/within the reflecting object.
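The simplest special case - reflection about a coordinate plane - is just a scale by -1 along its normal, remembering that the polygon winding ( and hence face culling ) flips. A sketch, with drawObject() as a stand-in :

[pre]// Reflection about the z = 0 plane : draw the scene element ( or the light marker ) mirrored.
glPushMatrix();
   glScalef(1.0f, 1.0f, -1.0f);   // the reflection 'matrix' for this special case
   glFrontFace(GL_CW);            // polygon winding flips under a reflection
   drawObject();                  // stand-in for whatever emits the geometry
   glFrontFace(GL_CCW);
glPopMatrix();[/pre]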

Why do I think this is a treasure ( sophisticated math kerfuffle aside )? Even the most basic OpenGL implementation is going to be doing this fairly efficiently, and almost certainly in hardware ( optimised software emulation at worst ), as that is how the matrix representations are handled anyway. Any C++ code won't need any time-expensive trig function calls, for instance - just linear algebra, first order mainly with a few quadratic terms at worst. So that makes this a cheap way to get good depth cues into a frame render. Especially so if the specific light-source/occulting-object/shadow-plane set is fixed within one's 'world model', for instance 'motionless' entities with an infinitely far away light position. You can pre-compute much of that ( ie. once ) outside of the frame re-draw code ( which runs many times ).

Cheers, Mike.

( edit ) I ought to add that although at run-time one can interrogate the OpenGL engine as to what version it is, and maybe branch or set features accordingly, that only specifies which API specification it is supposed to ( or will ) honour. There is no standardised API method to query whether functionality is enacted in hardware ( "accelerated" ) or as software emulation. There are plenty of video card vendor specific methods, but since I have a more generic target ( and I am in my right mind ) I won't pursue those. For instance most recent cards have z-buffering ( for the depth test ) in hardware, but it may not be so.
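For the record, the sort of interrogation I mean is only this ( indicative fragment ) :

[pre]// After the context is up : what the implementation CLAIMS to honour,
// but not HOW ( hardware vs. software ) it does so.
const GLubyte* version  = glGetString(GL_VERSION);
const GLubyte* vendor   = glGetString(GL_VENDOR);
const GLubyte* renderer = glGetString(GL_RENDERER);

std::cout << "OpenGL version  : " << version  << std::endl;
std::cout << "OpenGL vendor   : " << vendor   << std::endl;
std::cout << "OpenGL renderer : " << renderer << std::endl;
// The renderer string is the closest hint one gets - the likes of
// 'Software Rasterizer' or a Mesa software renderer suggest no acceleration.[/pre]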

So along these lines I've chosen to keep generally to the lowest common denominator of functionality, in terms of the sophistication of later versions of OpenGL. The performance price for more advanced features, when enacted primarily in software with no devoted hardware facility for same, could well be too high. [ That is, the latest OpenGL version and drivers on a crappy video card. ] I think shaders can come into this category ( penalty ), as can, say, vertex arrays ( supposedly a 'server side' enhancement ), which would be the next logical step up from display lists for streamlining vertex definitions.

For example consider the translucency of a polygon - seeing fragments at greater depth ( behind ). You can do this with blending ( alpha values .... ) but I'm told that runs like a dead whippet if not hardware based - and every frame to boot. What I have discovered ( Red Book p644 - 5 ) is 'cheesy translucency' ( in the Swiss cheese sense of a slice having holes ). One defines 'stipple bitmasks' for when a polygon is painted ( GL_FILL ) : such a mask gives no paint if the bit is 0 and whatever fill color you've chosen when the bit is 1 ( also think of a window/door fly-wire screen, or garden shade cloth ). You decide an overall transparency level for a given bitmask by constructing some amount of 1-bits in a 1024 bit length ( 128 byte ) array that you hand to OpenGL via glPolygonStipple(). My current exploration is with randomly generated but sparse ( of 1-bits ) patterns that can be calculated once and not with each frame render. One aspect of the stipple is that it is screen aligned and not object aligned, so distinct polygons have aligned stipples - a bit unnatural for the purpose. A per-polygon random stipple mask avoids that, with the added bonus that overlapping patterns from distinct polygon fills will be 'additive' in a rough binary, but clamped, sense - a sort of two-state discrete-level version of alpha summation.
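A sketch of the mask generation I'm playing with ( makeRandomStipple() is my own illustrative helper, nothing standard ) :

[pre]#include <cstdlib>
#include <GL/gl.h>

// Fill a 32 x 32 ( 128 byte ) stipple mask at random, with roughly the given
// fraction of 1-bits. Meant to be computed once ( per polygon ), not per frame.
void makeRandomStipple(GLubyte mask[128], float density) {
   for(int index = 0; index < 128; ++index) {
      GLubyte bits = 0;
      for(int bit = 0; bit < 8; ++bit) {
         if(static_cast<float>(std::rand()) / RAND_MAX < density) {
            bits |= static_cast<GLubyte>(1u << bit);
         }
      }
      mask[index] = bits;
   }
}

// Usage, when rendering the 'translucent' polygon :
//
//    GLubyte stipple[128];
//    makeRandomStipple(stipple, 0.3f);          // ~30% coverage
//    glEnable(GL_POLYGON_STIPPLE);
//    glPolygonStipple(stipple);
//    // ... draw the filled polygon(s) ...
//    glDisable(GL_POLYGON_STIPPLE);[/pre]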

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

Message 78063 in response to message 78062

Hm, wouldn't this give you "just" hard-edged shadows/reflections without shading (gradients, feathered edges)...? While this is certainly a nice feature, it lacks the realistic feel proper shading provides - and people nowadays expect nothing less than that...

Regarding the red book: have you had the chance to check out the blue book (OpenGL Superbible) as well? I'm curious how they compare.

Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Message 78064 in response to message 78063

Quote:
Hm, wouldn't this give you "just" hard-edged shadows/reflections without shading (gradients, feathered edges)...? While this is certainly a nice feature, it lacks the realistic feel proper shading provides - and people nowadays expect nothing less than that...


Agreed, and this is certainly to be tested. It may well look crappy. But at least the indicated transform method hits the correct ( realistic looking ) perspective calculations within the 3D 'world model' to locate the area to which one should apply some to-be-determined coloring method. The stippling for ( cheesy ) translucency can also apply to any 'shadow' polygon. One can even overlay stipple on stipple ( on stipple .... ) to give gradations of density and color change across a polygon's nearside face ( ie. by clever construction of stipples ) - sort of relying on the human eye/brain to do the blurring for you, similar to newspaper photos that use halftone/color-subtractive tricks. Why I pursue this line is the, possibly mistaken, thought that stipple rendering per se is probably/basically a lightning-fast, straightforward .... err .... bit-blit (?) into the frame buffer once the specific screen co-ordinate area is determined. Which is a guess, as the stipple is screen boundary aligned and not polygon boundary aligned. A 'moving' stippled polygon shows the stipple 'passing by underneath it' - much like peeking out of a train window watching the scenery pass by - rather than travelling along with it.

To be more helpful : I've been led to this area of thought by attempting to enact a 3D vector/stroke font. This is not the HUD - which is effectively pasted onto the nearside clipping/frustum plane by direct pixel buffer inserts that bypass the pipeline - I'll do that with an OOP-ey version of the current technique ( OGLFT .... ). What I'm getting at is writing as scene elements which appear so because they change perspective as you scoot around inside the 'world'. You place them and render them like other objects. Say like the stuff you see in some Monty Python film opening titles. You know, a big 'Life of Brian' or 'The Meaning of Life' stack of masonry blocks ( originally I was thinking of Cuisenaire rods, a childhood obsession of mine ). I'm thinking of applying this to the inside of the celestial sphere to label constellations, say, or the north and south poles of the Sun and Earth. One could have an observatory name near an object representing a LIGO observatory just outwards of the Earth's covering texture - suitably placed to make it look right ( eg. Hanford appears stuck on the globe just inland of the Pacific northwest US coast, thus rotates properly as you animate ). Heck, even align this text alongside some vector pointing from a LIGO observatory ( ie. what is the current WU doing as per refreshBOINCInformation()? Or refreshLocalBOINCInformation()? ) to the sky target under current attention. The text could describe some WU dependent information .... whatever. You can scale to please, even apply shear and other transforms to 'cheaply' mimic ITALICS perhaps. A denser stipple pattern gets you BOLD. Little shifts and scalings get you sub- and super- scripts. Features which could have a lot of use, I reckon. Well, that's the plan. Make the right tools and a nice sculpture becomes easier. If I can nail this it'll look quite slick - but greedily I want all platforms to benefit without too much penalty, as we do have work units to crunch after all! So I don't want to make an all singing and all dancing ( however beautiful ) screen saver, if it sinks someone's RAC to the ocean floor. Because then it'll be turned off. ;-(

Check out this screenshot from my development rig ( actually one and a bit screens as I have two monitors ). It's a bit 'fish-eye' at the edges because of a 45 degree field of view :

There you can see in the foreground the 'quote', 'hash', 'dollar' and 'percent' characters writ large. It shows the random stippling technique I described earlier. One can vary the font 'depth' along the z-axis. I'm working on lower case as well, but to look good you have to stuff about with 'descenders' ( the lower ends of 'p' and 'q' etc ) and such-like font characteristics, which I'm also reading up on to gauge how fancy I ought ( or ought not ) get in this area. The 'space' character is a doddle! :-) In the background are some Cartesian axes ( a blue stippled line going to the 'minus Z' end of the z-axis, and a red stippled line going from the origin just out of left-side shot passing toward the 'plus X' axis end out of lower-side shot ). Also in the far background you can see ( you are INSIDE the celestial sphere, seeing what I've pasted there like posters on a bedroom wall ) :

- purple pulsars. The E@H/PALFA/Colvin/Gebhardt one I have special plans for.

- constellations with color coding of spectral types wrt individual stars. Star brightness or 'visual magnitude' determines dot radius - I haven't yet worked out if there is a useful equivalent concept for pulsars to apply likewise. Constellation shape suggested by the linking interrupted/stippled aqua lines.

- a fairly faint grey-ish wire frame to, if you like, apply a co-ordinate system. Right ascension and declination with - you guessed - the vector font labels at suitable intervals.

The right side of the screen shot is a bit of the terminal window showing my std::cout messages for me to keep track of behaviour and debug. The writing mostly refers to commands to my 'little spaceship' dynamic model of changing viewpoint.

Quote:
Regarding the red book: have you had the chance to check out the blue book (OpenGL Superbible) as well? I'm curious how they compare.


Yeah, I've got that book, but I haven't been game to open it yet! :-) :-)
So I ought break the ice on that and research alternatives. I could be re-inventing the horse. So thanks for the hint, I'll be bold and get back to you on that .... so far it's been a tad like drinking from a fire hose! ;0)

Now that I'm thinking about it, it's probably getting closer to when I should set up a public repo for this - GNU licensing, collaboration and all. See if anyone else can be kick-started ( don't be shy, and I can delegate ... ). I can most likely host it on my own domain, no sweat. Is 'git' the go ( latest Ubuntu )? Email me if you like, as that might be too specific/boring for the boards. :-) :-)

Again, thanks for your genuinely helpful attention. :-)

Cheers, Mike.

( edit ) As I've been OOP-ing, the sequence of strokes/vectors to define a given character's 'character' is quite independent of the appearance of the strokes themselves. You could pop in code that would render a line/stroke as Mickey Mouse toys with linked hands. Cuisenaire rods. LIGO logos as decals. Whatever.

( edit ) Also if one seizes the nettle and goes for 'true' OOP in C++, not merely 'C with classes', then you get all these 'conceptual stubs' to come back to. Expand. Vary. Experiment. So if a good idea is some base class, then a better idea becomes a derived class from it : you get the base plus the extra bits.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171438
RAC: 39

Message 78065 in response to message 78064

Quote:
What I'm getting at is writing as scene elements which appear so because they change perspective as you scoot around inside the 'world'. You place them and render them like other objects. Say like the stuff you see in some Monty Python film opening titles.

Just to remind you, OGLFT supports extruded font rendering (by instantiating a "Solid" face) and those text objects don't have to be projected onto the HUD, like they are now. You can embed them into the scene as you please... However, it's certainly not as flexible as a custom font engine!

Quote:

If I can nail this it'll look quite slick - but greedily I want all platforms to benefit without too much penalty, as we do have work units to crunch after all! So I don't want to make an all singing and all dancing ( however beautiful ) screen saver, if it sinks someone's RAC to the ocean floor. Because then it'll be turned off. ;-(

Sad but true :-)

Quote:

Checkout this screenshot from my development rig ( actually one and a bit screens as I have two monitors ). It's a bit 'fish-eye' at the edges because of a 45 degree field of view :

Hm, a Donnie Darko lover or just a Blender fan?

Quote:

- purple pulsars. The E@H/PALFA/Colvin/Gebhardt one I have special plans for.

Yeah baby... I'm curious about that one...

Quote:

- a fairly faint grey-ish wire frame to, if you like, apply a co-ordinate system. Right ascension and declination with - you guessed - the vector font labels at suitable intervals.

Hm, the current "globe" (press "g") feature should give this, right? Apart from the labels of course.

Quote:

Yeah, I've got that book, but I haven't been game to open it yet! :-) :-)
So I ought break the ice on that and research alternatives. I could be re-inventing the horse. So thanks for the hint, I'll be bold and get back to you on that .... so far it's been a tad like drinking from a fire hose! ;0)

Great, looking forward to getting your personal review ;-)

Quote:

it's probably getting closer to when I should set up a public repo for this - GNU licensing, collaboration and all. See if anyone else can be kick-started ( don't be shy, and I can delegate ... ). I can most likely host it on my own domain, no sweat. Is 'git' the go ( latest Ubuntu )?

git would be just great as, for me, there are no justified alternatives ;-) You can either host this yourself, use GitHub (free for sanely sized OSS projects) or we could offer to host the repo alongside our own ones...

Quote:

Again, thanks for your genuinely helpful attention. :-)

My pleasure, as always!

Oliver

Einstein@Home Project

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Well it seems probable that I've successfully created a public repo for my efforts here. When I buff my code to something more presentable, I'll add it in.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 316465377
RAC: 349191

Message 78067 in response to message 78065

Quote:
Just to remind you, OGLFT supports extruded font rendering (by instantiating a "Solid" face) and those text objects don't have to be projected onto the HUD, like they are now. You can embed them into the scene as you please... However, it's certainly not as flexible as a custom font engine!


I didn't know that; I will use it, plus the custom stuff.

Quote:
Hm, a Donnie Darko lover or just a Blender fan?


As I had to look both of those up, let's put that down to Morphic Resonance .... :-)

Quote:
Yeah baby... I'm curious about that one...


Well I have to hold something back! :-)

Quote:
Hm, the current "globe" (press "g") feature should give this, right? Apart from the labels of course.


Yeah, I've kept much of the Starsphere stuff, concepts at least, just OOP-ed it a fair bit.

Quote:
git would be just great as, for me, there are no justified alternatives ;-) You can either host this yourself, use GitHub (free for sanely sized OSS projects) or we could offer to host the repo alongside our own ones...


Done. For general info : GitHub is free if you're doing open source; you only pay if it's 'private'. However, beware their suggestion to have the ( SSH ) key passphrase incorporated into your system startup on Linux ( ie. changing your .bashrc and/or .profile ). I got an endless logon loop for some reason. I finally fixed that by using a live CD boot-up, making a user under that with precisely the same details ( name/password ) as what's on my hard-disk Ubuntu version. Then I could access the .bashrc/.profile files to delete the changes. Otherwise, you see, you are stuffed, as with the live CD alone you can't access ( not even read ) that which you don't have permissions for! I'd call it a Catch-44 .......

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
