The Resource class can publicly give a const vector* which would do nicely for that purpose
Via Resource::data()
Quote:
( edit ) Yeah that'll work : if you want the original vanilla ResourceCompiler just use that. If you want, say, ImageResourceCompiler then call that instead .... it could inspect the filename extensions ( in resources.orc ) for SDL_Surface manage-able files and process those, and still have it call the superclass for the rest anyway.
( edit ) Naturally, I'd leave ResourceCompiler private data as such, it's the functions I'd be converting to protected !!!
Hm, this would produce a new (binary) tool one needs to call. Also, the proposed name is a bit misleading since the current ResourceCompiler is also used for images - as long as you need to preserve the image file as a whole (incl. header etc.).
How about this: I'd move the decision on how to handle the data to run-time, not compile-time - literally. Thus I wouldn't change ResourceCompiler but Resource, and have it return different data than data() does today. You could implement a new method called, say, Resource::pixelData(enum format) that doesn't return the raw file content but a processed variant, i.e. stripping everything but the pixel data, based on the format you specified.
Hm, this would produce a new (binary) tool one needs to call. Also, the proposed name is a bit misleading since the current ResourceCompiler is also used for images - as long as you need to preserve the image file as a whole (incl. header etc.).
How about this: I'd move the decision on how to handle the data to run-time, not compile-time - literally. Thus I wouldn't change ResourceCompiler but Resource, and have it return different data than data() does today. You could implement a new method called, say, Resource::pixelData(enum format) that doesn't return the raw file content but a processed variant, i.e. stripping everything but the pixel data, based on the format you specified.
Of course! Now here's me making things too complicated. Again. :-)
That's way easier. I'll write a Resource::pixelData(enum format), while under the hood I'll take advantage of SDL's header-handling ability. Thanks.
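For what it's worth, here's the sort of thing I have in mind - purely a sketch, with the enum dispatch elided and SDL doing the header parsing ( the function name and parameters below are just placeholders, not the framework's actual API ) :
[pre]#include <vector>
#include <SDL/SDL.h>

// Sketch only : 'raw' stands for what Resource::data() hands back today.
std::vector<unsigned char> pixelDataSketch(const std::vector<unsigned char>& raw /*, enum format */) {
    std::vector<unsigned char> pixels;
    if(raw.empty()) return pixels;

    // Let SDL parse the image header straight out of memory ( BMP shown here ).
    SDL_RWops* rw = SDL_RWFromConstMem(&raw[0], (int)raw.size());
    SDL_Surface* surface = SDL_LoadBMP_RW(rw, 1);   // 1 = SDL frees the RWops for us
    if(surface) {
        const unsigned char* p = (const unsigned char*)surface->pixels;
        pixels.assign(p, p + surface->pitch * surface->h);   // pixel payload only, no header
        SDL_FreeSurface(surface);
    }
    return pixels;
}[/pre]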
BTW : I've just now managed to display image and text on the HUD via my new mini content-management system/hierarchy! It works basically as expected ( needs some tuning ) encompassing choice of layouts, containers, justification, resizing : most of which can be altered dynamically. Especially the content. For images it's simply a matter of switching/substituting elements in/out of rendering lists ( ie. an std::vector of the relevant pointer type ) OR in the case of text writing-over/swapping-in an existing string. Hence an event ( eg. seeing an E@H discovery on screen's starfield ) can trigger content change. :-)
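Roughly the kind of manipulation I mean ( the names below are invented for illustration, not my actual classes ) :
[pre]#include <algorithm>
#include <string>
#include <vector>

struct HUDImage {};   // stand-in for whatever the real image element type is

int main() {
    std::vector<HUDImage*> renderList;           // the per-frame rendering list
    HUDImage idleLogo, discoveryBadge;
    renderList.push_back(&idleLogo);

    // on an event : substitute one image element for another ...
    std::replace(renderList.begin(), renderList.end(), &idleLogo, &discoveryBadge);

    // ... or simply write over an existing text string in place
    std::string statusLine = "searching ...";
    statusLine = "E@H discovery on the starfield!";
    return 0;
}[/pre]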
Actually I've just realised I haven't implemented the search pointers - as per Starsphere - and I'd like to keep continuity with that theme.
Cheers, Mike.
( edit ) I don't own a Mac but : I believe I can assume it has at least 12 function keys, plus a 'numeric keypad' on the right in the 'usual' design? Just thinking of a re-map to something a bit more user-friendly/self-evident than I'm doing for debugging.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
BTW : I've just now managed to display image and text on the HUD via my new mini content-management system/hierarchy! It works basically as expected ( needs some tuning ) encompassing choice of layouts, containers, justification, resizing : most of which can be altered dynamically. Especially the content. For images it's simply a matter of switching/substituting elements in/out of rendering lists ( ie. an std::vector of the relevant pointer type ) OR in the case of text writing-over/swapping-in an existing string. Hence an event ( eg. seeing an E@H discovery on screen's starfield ) can trigger content change. :-)
Cool!
Quote:
( edit ) I don't own a Mac but : I believe I can assume it has at least 12 function keys, plus a 'numeric keypad' on the right in the 'usual' design? Just thinking of a re-map to something a bit more user-friendly/self-evident than I'm doing for debugging.
Yep, they all have 12 function keys; the full-sized keyboards even have 19. However, only the full-sized keyboards have a numeric pad. Unfortunately the default for desktop and laptop Macs nowadays is NOT the full-sized keyboard.
HTH,
Oliver
PS: I really appreciate our style of teamwork Mike, honestly.
Yep, they all have 12 function keys; the full-sized keyboards even have 19. However, only the full-sized keyboards have a numeric pad. Unfortunately the default for desktop and laptop Macs nowadays is NOT the full-sized keyboard.
Ah. That's still do-able. For 'driving' the 'craft' around in space I want to put 'translation' keys on the left side and 'rotation' keys on the right side. I have found it confusing otherwise, hence dividing that b/w the two hands. These capabilities interact in the sense that each translation command is with respect to directions defined by the instantaneous attitude of the craft. To encompass both the PC-101 layout and any possible Mac type I can dual-trigger the rotations - so a right roll, for example, could be triggered by pressing the ':' key OR the keypad '6'. Simply fold those cases together in that big switch construct in keyboardPressEvent() of the AbstractGraphicsEngine. Thus if a Mac keyboard doesn't have a numeric keypad then the relevant keycodes just won't arrive. The left side will essentially have the 'wasdx' pattern of control commonly found in first-person shooters.
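In code terms, something like this inside that switch ( I'm showing raw SDL keysyms purely for illustration - the framework may well hand the engine its own key codes, and rollRight() stands in for the actual craft call ) :
[pre]#include <SDL/SDL.h>

void rollRight();   // placeholder for the real craft-rotation routine

void keyboardPressEventSketch(SDLKey keyPressed) {
    switch(keyPressed) {
        // fold the two possible 'roll right' triggers into one action :
        case SDLK_SEMICOLON:   // the ';'/':' key on a PC-101 layout
        case SDLK_KP6:         // keypad '6', if the keyboard has one
            rollRight();
            break;
        default:
            break;
    }
}[/pre]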
Quote:
PS: I really appreciate our style of teamwork Mike, honestly.
Same here. It's a pleasure! :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
A higher level design question. May I run the bleedin' obvious past you Oliver and Bernd ? :-) :-)
- I have implemented my own variety of AbstractGraphicsEngine ie. SolarSystem.
- SolarSystem is abstract ( it declares pure virtual functions ) ie. requiring some firm function definitions in derived classes.
- so I have SolarSystemS5R3 and SolarSystemRadio that are derived from SolarSystem, giving a definition to those virtual functions.
- I have generic/common functionality ( ie. regardless of the exact WU type ) performed in SolarSystem.
- I have WU type specific behaviour in SolarSystemS5R3 and SolarSystemRadio ( with evidently a nod at extending this set someday by, say, adding SolarSystemGamma for FGRP ).
- thus during execution : a derived class function, say initialize(), calls the base class version of the same name and then adds its own specifics ( see the sketch following this list ).
- so far, so good, and this follows the general pattern of Starsphere.
- now I'm at liberty not to follow the Starsphere hierarchy in detail ( provided I satisfy the AbstractGraphicsEngine interface ) as I have a rather different approach to the whole rendering mechanism. But let's keep with the generic vs. specific flavour.
- so I can, for example, use SolarSystem to render the 'basic 3D universe' - currently via a Simulation instance member within it.
- I can pop extra 3D elements into that if I want via a virtual function call, say from within Simulation when it has finished its rendering of the commonly viewable elements. This would possibly cover a Fermi satellite orbiting the Earth ( FGRP WU's ), or sitting on the Earth I'd have radio telescopes like Arecibo & Parkes ( BRP WU's ) and the various IFO's ( GW WU's ).
- I do this so as to retain a single instance of Simulation - efficiency, hiding, logical coherence etc.
- currently the Simulation class also 'owns' the HUD. I want to keep this as I'd like easy correspondence b/w the 3D viewable scene and the HUD content. Also this centralizes the invocation of certain OpenGL calls - especially biggies like glMatrixMode().
- so now, mutatis mutandis, I do the same for the HUD as regards content. SolarSystem does some common HUD work - even if that is not a lot - and calls a virtual function in a derived class to get specifics.
- would this be a reasonable approach to 'stay in design theme' with Starsphere, retaining malleability/extensibility et al ?
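( To make the 'derived calls base' item above concrete, a rough skeleton - the real signatures follow AbstractGraphicsEngine, and the bodies here are invented : )
[pre]class Resource;

class SolarSystem {
public:
    virtual ~SolarSystem() {}
    virtual void initialize(int width, int height, const Resource* font, bool recycle) {
        // generic scene + HUD setup common to all WU types
    }
    // ... plus the pure virtual hooks the derived classes must define
};

class SolarSystemS5R3 : public SolarSystem {
public:
    virtual void initialize(int width, int height, const Resource* font, bool recycle) {
        SolarSystem::initialize(width, height, font, recycle);   // the common part first
        // ... then the S5R3 ( GW search ) specifics : IFO models, GW HUD items etc.
    }
};[/pre]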
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Update. Ah, the joys of multi-platform coding! :-) :-)
The framework calls SDL_SetVideoMode() within the WindowManager() routine upon a window resize request :
[pre]else if (event.type == SDL_VIDEORESIZE) {
    m_CurrentWidth = m_WindowedWidth = event.resize.w;
    m_CurrentHeight = m_WindowedHeight = event.resize.h;
    // update video mode
    m_DisplaySurface = SDL_SetVideoMode(m_CurrentWidth,
                                        m_CurrentHeight,
                                        m_DesktopBitsPerPixel,
                                        m_VideoModeFlags);
    // notify our observers (currently exactly one, hence front())
    // (windoze needs to be reinitialized instead of just resized, oh well)
    /// \todo Can we determine the host OS? On X11 a resize() is sufficient!
    eventObservers.front()->initialize(m_CurrentWidth, m_CurrentHeight, 0, true);[/pre]
where the 'eventObserver' is one's AbstractGraphicsEngine instance, hence its initialize() routine is the one called. If the last parameter is true, as above, then we are 'recycling' in that we've had a context before, thus the third parameter ( being 0 ) is a NULL pointer and is ignored. If the last parameter is false then the initialize() routine stores the third parameter for later :
[pre]void SolarSystem::initialize(const int width, const int height, const Resource* font, const bool recycle) {
    // check whether we initialize the first time or have to recycle (required for windoze)
    if(!recycle) {
        // store the font resource pointer
        if(font) {
            spaceFontResource = font;
        }
    }[/pre]
That third parameter is the font resource ( pointer ) as defined in the main() routine upon program entry ( ie. once only ) :
[pre]// prepare rendering
graphics->initialize(window.windowWidth(), window.windowHeight(), fontResource);[/pre]
... the 'fourth' parameter has a default value ( ie. if not provided ) of false :
[pre]virtual void initialize(const int width, const int height, const Resource *font, const bool recycle = false);[/pre]
Now one needs to assume ( because you don't know which platform you're running on ) that SDL_SetVideoMode() causes one to lose the OpenGL context/state and acquire a new one, hence requiring all the work to 'reassemble' the OpenGL components. From the relevant SDL Wiki entry :
Quote:
User note 2: Also note that, in Windows, setting the video mode resets the current OpenGL context. You must execute again the OpenGL initialization code (set the clear color or the shade model, or reload textures, for example) after calling SDL_SetVideoMode. In Linux, however, it works fine, and the initialization code only needs to be executed after the first call to SDL_SetVideoMode (although there is no harm in executing the initialization code after each call to SDL_SetVideoMode, for example for a multiplatform application).
Plus Windows may/can/does cause the OGLFT instances also to become invalid ( as they in turn rely upon textures-on-quads, display lists etc ). You can't programmatically determine the host OS at runtime via SDL, as the point of the SDL interface is to isolate you from that. So far so good, you might say ....
... except when I forget that one of my HUD subclasses has an OpenGL resource allocation of some sort! At runtime you get a segment violation type error on resize - the coding equivalent of hitting a tree in a car - which is a nuisance to debug. Took me a couple of hours to sort through that, but we're all good now .... :-)
So this is why AbstractGraphicsEngine::initialize() is called first on resize, which then calls AbstractGraphicsEngine::resize(). In my code this then causes the various OpenGL elements to re-initialise themselves etc.
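( As an aside, regarding the \todo in that snippet : one can at least special-case the platform at compile time via the compiler's predefined macros - that's nothing to do with SDL, and whether a given platform really keeps its GL context across SDL_SetVideoMode() still has to be established per platform. A sketch : )
[pre]// Compile-time platform check via predefined compiler macros ( not SDL, not runtime ) :
inline bool builtForWindows() {
#ifdef _WIN32
    return true;    // assume the GL context is lost on SDL_SetVideoMode()
#else
    return false;   // eg. on X11 a plain resize() might suffice
#endif
}[/pre]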
Cheers, Mike.
( edit ) The specific point in mentioning this is not just for others who may suffer this pitfall, but to note that SDL isn't quite platform independent ....
( edit ) Also, while the re-acquisition of OpenGL resources with resize won't really be an overall performance issue - unless one intends to resize the window all day long - it does explain a noticeable pause in the graphics evolution when such is performed.
( edit ) Plus there is a related issue that I'm investigating : the window contents very occasionally just go completely white, from corner to corner, on a resize. Then if you resize again you get your correct/expected view back. I haven't been able to reliably reproduce this behaviour. Even on Linux. Hmmmm .....
( edit ) There's another related problem - again occasional, not reproducible, and on Linux - when a resize gives the 'globes' in my simulation ( Earth and Sun currently ) not their correct texture, but what appears to be a screenshot of the desktop just prior to the resize. It looks weird having a desktop snapshot wrapped upon a sphere! I'm thinking this is a driver issue, probably related to handling the mapping b/w system and video RAM. It's not a killer as a further resize restores the expected texturing.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
- would this be a reasonable approach to 'stay in design theme' with Starsphere, retaining malleability/extensibility et al ?
I think so, yes. I mean the only thing the framework cares about is its interfaces (i.e. AbstractGraphicsEngine in this case). How the actual engine is composed should be treated in a proper black-box, don't-care approach. It's always useful to (sub)structure such an engine where appropriate, of course. If you identify entities that might serve an even more generic purpose, move 'em up in the hierarchy. We could even move sidewards, out of the vertical inheritance scheme, and put some generic stuff into classes whose instances can then be used via composition - sometimes a "has a" relationship is much better/more suitable than an "is a" relationship, right? The usual PDCA cycle, here in the guise of design, implement, check, refactor :-)
Shame on me that I didn't find the time to look at your latest code. I was just thinking: something I always wanted to do is to display some help text as an overlay when the user presses "h" (not in screensaver mode). I'd display a layer with 50% opacity in front of the current HUD (logo, info, spectrum) such that every "regular" screen element is reduced in brightness by 50%. Then in front of that, centered on the screen, we could display some usage and copyright info as well as the authors' names. The generic part of this could be done high up in the generic parts of the model; maybe just the texts could be handled (or just augmented) in the specific derived classes... Should be trivial to do. Thoughts?
I think so, yes. I mean the only thing the framework cares about is its interfaces (i.e. AbstractGraphicsEngine in this case). How the actual engine is composed should be treated in a proper black-box, don't-care approach. It's always useful to (sub)structure such an engine where appropriate, of course. If you identify entities that might serve an even more generic purpose, move 'em up in the hierarchy.
Yup. Got that.
Quote:
We could even move sidewards, out of the vertical inheritance scheme, and put some generic stuff into classes whose instances can then be used via composition - sometimes a "has a" relationship is much better/more suitable than an "is a" relationship, right?
Yup. Got that.
[ I've even been adventurous and mixed those together. In my HUD hierarchy I have a container class which subclasses to specifics, but a container can also contain another container within! This gives two hierarchies (a) OOP inheritance and (b) composition. Which is why it took so long to write - separating "is a" and "has a" in my mind on a per instance basis :-) :-) ]
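In skeleton form - real class names and content aside - it's the classic composite arrangement :
[pre]#include <cstddef>
#include <vector>

class HUDItem {                        // "is a" : common base for anything drawable on the HUD
public:
    virtual ~HUDItem() {}
    virtual void render() = 0;
};

class HUDContainer : public HUDItem {  // a container IS a HUD item ...
public:
    virtual void render() {
        for(size_t i = 0; i < items.size(); ++i) {
            items[i]->render();        // ... and HAS items, possibly other containers
        }
    }
    void add(HUDItem* item) { items.push_back(item); }
private:
    std::vector<HUDItem*> items;       // composition : the second hierarchy
};[/pre]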
Quote:
The usual PDCA cycle, here in the guise of design, implement, check, refactor :-)
Is that what it's called ? :-) :-)
Quote:
Shame on me that I didn't find the time to look at your latest code.
You have bigger fish to fry. Catch as catch can. :-)
Ooops! I haven't updated the repo for a while, I've been waiting for my implementation to settle out. I'll do that ASAP.
Quote:
I was just thinking: something I always wanted to do is to display some help text as an overlay when the user presses "h" (not in screensaver mode). I'd display a layer with 50% opacity in front of the current HUD (logo, info, spectrum) such that every "regular" screen element is reduced in brightness by 50%. Then in front of that, centered on the screen, we could display some usage and copyright info as well as the authors' names. The generic part of this could be done high up in the generic parts of the model; maybe just the texts could be handled (or just augmented) in the specific derived classes... Should be trivial to do. Thoughts?
Nice one! :-)
Yup, do-able. The 50% opacity is simply a quad of screen dimensions with an alpha of 0.5 placed upon the near frustum face ( a sketch of that quad follows the list below ). The copyright display could simply be another HUD instance. Rendering order per frame is then :
- generic 3D scene ( 'solarsystem' in my case )
- specific 3D scene ( FermiLAT or radiotelescopes or IFOs in my schema )
- generic HUD
- specific HUD ( depending on WU, power spectrums, whatever )
- opacity layer
- generic copyright HUD
- specific copyright HUD
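The opacity layer itself is only a handful of lines of fixed-function OpenGL. Assuming the orthographic projection is already set up in screen units ( screen_width/screen_height below are placeholders ) :
[pre]glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 0.0f, 0.5f);                   // black at 50% opacity
glBegin(GL_QUADS);
    glVertex3f(0.0f,         0.0f,          0.0f);
    glVertex3f(screen_width, 0.0f,          0.0f);
    glVertex3f(screen_width, screen_height, 0.0f);
    glVertex3f(0.0f,         screen_height, 0.0f);
glEnd();
glDisable(GL_BLEND);[/pre]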
Now in my HUD scheme you could alter text in two ways : (a) remove/insert entire 'HUDTextLine' objects from their containers or, even more simply, (b) edit the HUDTextLine's content 'in place' via HUDTextLine::setText() and/or HUDTextLine::getText(). The second option uses std::string objects and hence is subject to whatever you want there : so for example, read the current text value, edit it, and re-insert it. I've got these things to 'automagically' resize themselves upon alterations AND signal their enclosing container ( if any ) to reassess its own sizing - onwards and upwards in a containment hierarchy. So - depending upon constraints from other contained items - a small text change could cause an entire HUD to re-adjust itself. Where/when you do these text manipulations in some other code structure is a separate issue, provided you have a way to get at the correct pointers! :-)
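For example ( HUDTextLine::setText()/getText() as above, the rest is invented ) :
[pre]// 'statusLine' would be a HUDTextLine* disclosed by whoever owns it :
std::string text = statusLine->getText();    // read the current content
text += "  ( new E@H candidate! )";          // edit it ...
statusLine->setText(text);                   // ... and put it back; the line then
                                             // resizes itself and nudges its container[/pre]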
Cheers, Mike.
( edit ) I should add that it is perfectly OK to have z offsets on near-frustum-face rendering. So you could, say, render the first HUD at -0.5, the opaque layer at 0.0, and the second HUD at +0.5. You just have to not disable the depth buffer, have the orthographic projection's near/far clipping planes properly set, and translate your respective HUD elements to the desired z offset. That would be fine with my schema : I could simply add a z-offset member to the HUDContainer class and, as this class 'grants' the positions to its contained elements, come positioning time it will pass on that offset. You could even dynamically re-arrange the offsets at runtime - I'm not sure why you would in this case - but like shuffling cards you could bring to bear any depth ordering you like.
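Concretely, the ortho setup just has to bracket whatever offsets you intend to use, eg. ( screen_width/screen_height again placeholders ) :
[pre]glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screen_width, 0.0, screen_height, -1.0, +1.0);   // near/far enclose -0.5 ... +0.5
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -0.5f);   // the first HUD layer sits behind the dimming quad
// ... render it, then translate to 0.0 for the quad and +0.5 for the front HUD[/pre]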
I've pretty well done this for objects in the 3D scene, where the various elements at about the celestial sphere radius ( pulsars, supernovae, constellations, grid .... ) are individually set slightly closer or further away than that radius. This seems to look better as a lot of the time there is less z-order contention to resolve ie. when do you deem two objects to be such that one is nearer/further/the same distance as the other?
Note for others : make your absolute z-order range as small as reasonably workable, because the z-buffer has a fixed underlying representation. Meaning that you only get a certain 'granularity', and the deeper your clipping range, the larger the 'z quanta' - so that potentially more distinct objects will be judged as equidistant for z-order purposes. That issue shows up as flickering/occultation of adjacent elements with slight changes in viewing position, as the OpenGL machine tries to decide which to place in front of the other.
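To put a rough number on that for the orthographic case ( where depth is linear; nearPlane/farPlane below are placeholders for your own clip values ) :
[pre]// Back-of-envelope only : with a 24-bit depth buffer,
const double depthRange = farPlane - nearPlane;     // your clip range, eg. 2.0 for the HUD above
const double quantum    = depthRange / 16777216.0;  // 2^24 representable depth values
// doubling depthRange doubles the quantum, ie. halves the resolvable z separation[/pre]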
My current z-order range ( 3D scene ) is mildly over the celestial sphere diameter; that way one can scoot around just on the outer side of the sphere and still see the sphere's inner face over on the far side from your viewpoint. But if you try to wander too far off, my Craft code will send you back inwards .... it also stops you going inside the Earth and the Sun. :-) :-0
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Forgot to mention : I'm real close to a test version release to demonstrate the basic widgets I've developed .... maybe next week! :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal