
Laval B

Member Since 30 Jul 2010
Offline Last Active Today, 02:20 PM

#5316261 C++ - win32: can i change the window controls font?

Posted by Laval B on Today, 12:55 PM

As explained in https://msdn.microsoft.com/en-us/library/windows/desktop/ms644950(v=vs.85).aspx, how to interpret the return value of SendMessage depends on the message that was sent. For many window messages, it returns 0 when the target didn't process the message. That doesn't necessarily mean an error occurred; it often just means the target window doesn't handle that kind of message.


In general, you have to consult the documentation for the specific message to know how to interpret the return value of SendMessage.
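
To make that concrete, here is a small illustration (C++/Win32, with a hypothetical control handle hCtrl): three common messages whose return values mean completely different things, so a 0 return can only be judged against each message's own documentation.

#include <windows.h>

void QueryControl(HWND hCtrl)
{
    // WM_GETFONT: the return value is the HFONT the control draws with,
    // or NULL if it uses the system font -- NULL is not an error here.
    HFONT hFont = reinterpret_cast<HFONT>(SendMessage(hCtrl, WM_GETFONT, 0, 0));

    // WM_SETTEXT: the return value is TRUE if the text was set, so for this
    // particular message a 0/FALSE return really does indicate a failure.
    LRESULT textSet = SendMessage(hCtrl, WM_SETTEXT, 0,
                                  reinterpret_cast<LPARAM>(TEXT("Hello")));

    // WM_SETFONT: the documentation says the return value is not used,
    // so checking it tells you nothing about success or failure.
    SendMessage(hCtrl, WM_SETFONT, reinterpret_cast<WPARAM>(hFont), TRUE);

    (void)textSet;
}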

#5316242 C++ - win32: can i change the window controls font?

Posted by Laval B on Today, 11:38 AM

It is impossible to answer your question with so few details; we need more context. I suggest you read the Remarks section of the MSDN page about the WM_SETFONT message:




Have you tried calling GetLastError to determine why it is failing, if it really is failing? To get details about the error code returned by GetLastError, you can use Error Lookup in Visual Studio's Tools menu.
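
As a rough example of the kind of checking I mean (hDlg and controlId are hypothetical): WM_SETFONT itself reports nothing, so any real error handling has to happen around the calls that can actually fail.

#include <windows.h>

void SetControlFont(HWND hDlg, int controlId)
{
    HFONT hFont = CreateFontW(-16, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                              DEFAULT_CHARSET, OUT_DEFAULT_PRECIS,
                              CLIP_DEFAULT_PRECIS, CLEARTYPE_QUALITY,
                              DEFAULT_PITCH | FF_DONTCARE, L"Segoe UI");
    if (!hFont)
    {
        // CreateFontW returns NULL on failure; GetLastError may hold more
        // detail, which Error Lookup can decode.
        DWORD err = GetLastError();
        (void)err;
        return;
    }

    HWND hCtrl = GetDlgItem(hDlg, controlId);
    if (hCtrl)
        SendMessageW(hCtrl, WM_SETFONT, reinterpret_cast<WPARAM>(hFont), TRUE);

    // Note: the HFONT must stay alive for as long as the control uses it,
    // and should eventually be released with DeleteObject.
}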

#5314425 Audio System

Posted by Laval B on 09 October 2016 - 10:57 AM

Hello everyone.


I have started the development of an audio system that would be mostly oriented toward games, be they 3D or 2D. So far I have been mostly concerned with the algorithms for 3D sound, i.e. generating a multichannel (one channel per speaker) PCM buffer from multiple single-channel sources at different locations relative to a listener, just like OpenAL, FMOD and other libraries do.


I have also been working on the real-time mixing aspect of these sounds (of course). It's going very well so far. I still need to implement Doppler shift as well as distance attenuation, and possibly HRTF and binaural filters. This part of the development is very interesting, and I'm also learning a lot about SSE and AVX instructions.


Just as a note, the system will use XAudio2 on Windows and probably ALSA on Unix-like operating systems (OS X/Linux). I don't know yet about mobile, or even whether I will eventually port it to mobile (which would be great). One of my design goals is to use the platform-specific audio API minimally, only to send pre-processed samples to the device, so porting will be easier. If it ever becomes decent enough, I might make it an open-source project, but I'm not there yet.


Lately, I have started thinking about how the system would communicate with the host application, and the more I think about it, the less sure I am. That's what I would like to discuss. My concerns are all related to multithreading.


So far, the application would deal with four classes (a rough sketch of their interfaces follows the list):

  1. AudioSystem is the class used for initialization/shutdown of the system and is the main API for resource management and updates.
  2. AudioSource represents the configuration of a sound in the scene, i.e. position, speed, orientation, area of effect, etc.
  3. AudioBuffer represents the data of a sound. An AudioBuffer can of course be shared by multiple sources.
  4. Listener represents the point in the scene from which the sounds are heard. So far it has a position, speed and orientation.
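
Here is a rough sketch of what I have in mind for these four classes (all member names are just placeholders, nothing is final):

#include <algorithm>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// Shared, immutable PCM data; several sources may reference the same buffer.
class AudioBuffer
{
public:
    AudioBuffer(std::vector<float> samples, uint32_t sampleRate)
        : m_samples(std::move(samples)), m_sampleRate(sampleRate) {}
    const std::vector<float>& Samples() const { return m_samples; }
    uint32_t SampleRate() const { return m_sampleRate; }
private:
    std::vector<float> m_samples;
    uint32_t m_sampleRate;
};

// Per-instance playback state: where the sound is and how it moves.
struct AudioSource
{
    std::shared_ptr<AudioBuffer> buffer;
    Vec3 position, velocity, orientation;
    bool paused = false;
};

// The point in the scene from which the sounds are heard.
struct Listener
{
    Vec3 position, velocity, orientation;
};

// Owns the source list, the listener and (eventually) the mixing thread.
class AudioSystem
{
public:
    std::shared_ptr<AudioSource> CreateSource(std::shared_ptr<AudioBuffer> buffer)
    {
        auto source = std::make_shared<AudioSource>();
        source->buffer = std::move(buffer);
        m_sources.push_back(source);
        return source;
    }
    void RemoveSource(const std::shared_ptr<AudioSource>& source)
    {
        m_sources.erase(std::remove(m_sources.begin(), m_sources.end(), source),
                        m_sources.end());
    }
    Listener& GetListener() { return m_listener; }
private:
    std::vector<std::shared_ptr<AudioSource>> m_sources;
    Listener m_listener;
};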

The system basically has a mixing thread that cycles through the list of sources and prepares buffers for the API to consume when it needs them.


A typical real-time application would likely have (or at least I have to assume it would) multiple threads preparing/updating the scene, and each of these threads would update sources; that is the part I'm not sure how to handle. The operations performed by the application are the following:

  1. Add/remove sources from the list (or set the status to paused/stopped).
  2. Update the parameters of sources like speed, orientation and position.
  3. Update the listener's parameters (position, speed, orientation).

I'm trying to think of an approach that would not impair the performance of either the application or the mixing thread. I have thought about using two lists: one is the "committed" list the mixing thread works on, the other is the list the application works on, and then I could "atomically" swap the two... or something like that. I'm not sure about locking; it could be fine if done properly, I guess. It is clear to me that the update of the list must be done as a transaction, only once "per frame", rather than as multiple updates during the composition of the frame, just like the graphics APIs do.


With the atomic swap of lists, I'm afraid I could lose updates if one side is too fast, so I guess I would need to queue these updates...
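
Something like the following sketch is what I mean by queuing the updates (names are placeholders): application threads push parameter changes, and the mixing thread takes the whole batch once per iteration, so a frame's updates are applied as a single transaction.

#include <functional>
#include <mutex>
#include <utility>
#include <vector>

class UpdateQueue
{
public:
    // Called from any application thread.
    void Push(std::function<void()> command)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_pending.push_back(std::move(command));
    }

    // Called once per mix iteration by the mixing thread: takes the whole
    // batch under the lock, then applies it without holding the lock.
    void Drain()
    {
        std::vector<std::function<void()>> batch;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            batch.swap(m_pending);
        }
        for (auto& cmd : batch)
            cmd();
    }

private:
    std::mutex m_mutex;
    std::vector<std::function<void()>> m_pending;
};

The application side would push something like updateQueue.Push([source, pos]{ source->position = pos; }); and the mixer would call Drain() at the start of each mix iteration.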


Well, that is basically what I would like to discuss. I'm open to ideas and suggestions.

#5289064 Vulkan Resources

Posted by Laval B on 28 April 2016 - 04:05 AM






I found these tutorials; they are a nice way to get started with Vulkan. There is a PDF version with detailed explanations and sample code.

#5248057 Terrain LOD

Posted by Laval B on 21 August 2015 - 09:05 AM

I made one or two posts on the subject on this forum a few years ago; I'll see if I can dig them out.


Thank you, I am very interested in the approach.



Culling is done via the quad-tree.

With proper memory management, you can jump directly to adjacent cells, as well as up and down LOD levels.
When beginning the search through the quad-tree, perform a standard parent-child iteration over the tree, but select child nodes in order of closest-to-camera first.
This allows you to find the chunk closest to the camera and its appropriate LOD level.

From here, change your search method so that it branches out and away from that chunk, and for each node you pass a minimum and maximum search depth. For each neighboring node, you may only go up or down one level in the tree.

As for proper memory management, this method becomes possible if you lay out all nodes sequentially in RAM and hold an array of pointers to the start of each level. To go from Node [X, Y] to its parent, simply go up one level (use the array of pointers to the starts of each level) and index into that array of nodes [X>>1, Y>>1]. Nodes on a level will be organized like a bitmap, so you can also easily go left, right, up, and down from any given node on the same level, allowing you to traverse from any node directly to any other node.

You would organize your quad-tree this way for best cache usage anyway.

L. Spiro



Thank you very much for the detailed explanation.
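
If I understood the layout correctly, it boils down to something like this sketch (all names are hypothetical): every level is stored contiguously, with an array of per-level start offsets, so parent and neighbour lookups become plain index arithmetic, and each node can carry the min/max heights used for culling.

#include <cstdint>
#include <vector>

struct TerrainNode
{
    float minHeight = 0.0f;   // min/max height of the covered area, for culling
    float maxHeight = 0.0f;
};

class FlatQuadTree
{
public:
    explicit FlatQuadTree(uint32_t levelCount)
    {
        size_t total = 0;
        m_levelStart.resize(levelCount);
        for (uint32_t l = 0; l < levelCount; ++l)
        {
            m_levelStart[l] = total;
            total += size_t(1u << l) * (1u << l);   // level l is 2^l x 2^l nodes
        }
        m_nodes.resize(total);
    }

    // Nodes of a level are laid out like a bitmap: row y, column x.
    TerrainNode& Node(uint32_t level, uint32_t x, uint32_t y)
    {
        return m_nodes[m_levelStart[level] + y * (1u << level) + x];
    }

    // The parent of (x, y) lives at (x >> 1, y >> 1) one level up.
    TerrainNode& Parent(uint32_t level, uint32_t x, uint32_t y)
    {
        return Node(level - 1, x >> 1, y >> 1);
    }

    // Same-level neighbours are +/-1 in x or y (caller checks the bounds).
    TerrainNode& Neighbour(uint32_t level, uint32_t x, uint32_t y, int dx, int dy)
    {
        return Node(level, uint32_t(int(x) + dx), uint32_t(int(y) + dy));
    }

private:
    std::vector<TerrainNode> m_nodes;        // all levels, stored sequentially
    std::vector<size_t>      m_levelStart;   // offset of each level's first node
};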

#5247403 Terrain LOD

Posted by Laval B on 18 August 2015 - 09:50 AM

If you're using DirectX 11-grade graphics cards, doing the LOD tessellation in the Hull/Domain shader stages is fairly neat, and avoids some of the difficulties of having to patch up T-junctions between LOD levels.  I've played around with it both this way, and more the way you are describing:

Not the greatest screenshot, but this is from my hardware-tessellated version. I'm sure that I'm not doing things in the most optimal way, but on the hardware I was using (GTX 560), the hardware-tessellated version was considerably faster.



Yes, hardware tessellation is something I need to investigate more. I have toyed a bit with a terrain tessellation demo that was published in GPU Pro 4 and it looked very neat indeed. The performance was very good on my development machine (GTX 780 Ti x2). When I tried it on a smaller card (Quadro 600), which has the DX11 feature level but far fewer cores, it ran fine but very close to the minimum frame time with only the terrain.


I'm thinking tessellation might be good for smaller tasks like morphing in character animation when the character is close to the camera. 


Tessellation is indeed interesting for very dynamic surfaces like characters' faces and water simulation. Like I said, I need to get a good grip on it and on the associated cost.


I may also have to support DX10-level hardware, so I will need a fallback.


Thank you for your post, I will read your references when I get back home.

#5247348 Terrain LOD

Posted by Laval B on 18 August 2015 - 04:01 AM

Yeah... some of these ideas are not very good. You only load the terrain once, and that's at level initialization time. 


The terrain can be very large and will not necessarily be entirely loaded at once. With a structure like this, the index buffers of the leaves can be reused, because chunks are loaded into the leaves.


The same applies to vertex normals. There's no reason why you can't just get this info at load time and store it in a custom vertex object for each terrain vertex. In fact, if you're going to be creating a height map texture for your terrain, you can also create a normal map texture for your terrain.


With modern hardware, which is the target platform, it is faster to do a bit more calculation than to transfer larger amounts of data. This is especially true when loading out-of-core data in the background.



You're making 3D terrain. You don't use quad trees for 3D environments, you use octrees.


The quadtree is the data structure used for culling and LOD selection. Each node will contain the maximum and minimum height of the terrain for the area covered by the node.

#5199766 Codebase with multiple back-ends

Posted by Laval B on 23 December 2014 - 04:17 PM

I think a simple but somewhat less manageable approach is just to use templates and typedefs:


In a way, that's pretty much what I do... I put the various implementations into different files (.h and .cpp), and I use include files that pull in the proper headers for the platform/configuration I want to compile. It's less messy that way because there isn't a bunch of conditional #ifdefs all over the place. It's not necessarily possible, nor desirable, to put all the code inline.


The real problem is not the code but the configurations and the project/make files that need to be set up on a per-configuration and per-platform basis. Remember that not everything is just a matter of code; there are also linker settings and library paths.
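
Here is a tiny, self-contained sketch of what that organization looks like (the class and file names are made up): the only preprocessor check lives in one dispatch header, and the rest of the code only ever sees the typedef.

#include <cstdio>

// Normally declared in its own D3D11Device.h/.cpp pair.
class D3D11Device
{
public:
    void Initialize() { std::puts("D3D11 back end"); }
};

// Normally declared in its own GLDevice.h/.cpp pair.
class GLDevice
{
public:
    void Initialize() { std::puts("OpenGL back end"); }
};

// Normally GraphicsDevice.h -- the only file that knows about platforms.
#if defined(_WIN32)
    typedef D3D11Device GraphicsDevice;
#else
    typedef GLDevice GraphicsDevice;
#endif

int main()
{
    GraphicsDevice device;   // the rest of the code never names a back end
    device.Initialize();
}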

#5112774 Qt and OpenGL

Posted by Laval B on 28 November 2013 - 09:33 AM

You are rendering in the update thread. At least when using QtQuick the thread that is doing all the rendering (and is the only one allowed to) is always a different thread from the normal update thread.


The only places I'm calling OpenGL functions are in initializeGL, paintGL and resizeGL, as recommended in the documentation.




You are polluting the GL state. At least QtQuick is a bit picky about anyone changing the GL state. Make sure any buffers bound (especially vertex and index buffers) are restored to their original values when you are done rendering. Also, the cull mode (whether or not it is enabled, and the cull face) needs to be preserved. Saving these states allowed me to integrate the rather monolithic renderer I have to work with into Qt.


I'm not sure when I need to save and restore these states. Everything flickers when I'm resizing something, even the scrollbars of the docking windows. The problem is related to the OpenGL widget though, since it doesn't happen if I replace it with a standard widget like QTextEditor.
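
For reference, this is roughly what the save/restore suggested above would look like around my rendering call (DrawMyScene is a placeholder, and I'm assuming the GL 3.x function loader the engine already uses is in place):

void DrawMyScene();   // placeholder for the engine's rendering entry point

void RenderInsideQt()
{
    // Save the pieces of state the renderer is about to change.
    GLint prevArrayBuffer = 0, prevElementBuffer = 0, prevCullFaceMode = 0;
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &prevArrayBuffer);
    glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &prevElementBuffer);
    glGetIntegerv(GL_CULL_FACE_MODE, &prevCullFaceMode);
    GLboolean cullWasEnabled = glIsEnabled(GL_CULL_FACE);

    DrawMyScene();

    // Put everything back before handing control to Qt.
    glBindBuffer(GL_ARRAY_BUFFER, GLuint(prevArrayBuffer));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, GLuint(prevElementBuffer));
    glCullFace(GLenum(prevCullFaceMode));
    if (cullWasEnabled)
        glEnable(GL_CULL_FACE);
    else
        glDisable(GL_CULL_FACE);
}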

#5112744 Qt and OpenGL

Posted by Laval B on 28 November 2013 - 08:11 AM

Have you tried using qglClearColor? Or calling glClearColor before clearing? Qt probably calls glClearColor by itself before going in the paintGL method.


qglClearColor doesn't change anything. According to the documentation, the only functions that are called by the framework are makeCurrent and swapBuffers. I have also tried calling makeCurrent in renderGL, initializeGL and resizeGL to make sure the right context was used, and it didn't change anything. According to the documentation, you only need to call makeCurrent if you call GL functions outside of those (overridden) functions.



Check exactly which Qt version you have downloaded. The 5.x generation of Qt is so far available in two versions: one using real OpenGL and one using OpenGL ES via a DirectX emulator. If you are using the emulated OpenGL you need to make sure all your OpenGL is including the emulator headers (somewhere in Qt 3rdparty folder) and you cannot just link to the usual OpenGL libraries.


I'm using Qt 5.1.1 for Windows 64-bit VS2012. There is a version called Qt 5.1.1 for Windows 64-bit VS2012 OpenGL, which is not the one I'm using. Is that what you are referring to?


An emulated framework limited to OpenGL ES is definitely not what I need. The engine uses the core profile, version 3.3 and above.

#5102175 Behavior of assignment with user data in Lua

Posted by Laval B on 17 October 2013 - 10:56 AM

Nevermind, it's a stupid question. I'll just implement a clone method.

#5101561 lightuser data in Lua

Posted by Laval B on 15 October 2013 - 09:38 AM


Is there a way to do this with light userdata?
Pretty sure, no. I ended up using the syntax that apatriarca suggests. Here's another example of the same syntax being used due to the choice to use light userdata:




Thank you, I was pretty sure it wasn't possible; I asked just in case. The method suggested by apatriarca is what we are using: the functions are exposed via a call to luaL_newlib and lua_setglobal so they appear in a "namespace" or library.
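
For anyone finding this later, here is a trimmed-down sketch of that registration (Logger and the function/table names are placeholders for our host objects):

#include <lua.hpp>
#include <cstdio>

struct Logger { /* host-owned object that outlives the scripts */ };

static int l_output(lua_State* L)
{
    Logger* logger = static_cast<Logger*>(lua_touserdata(L, 1));  // light userdata
    const char* msg = luaL_checkstring(L, 2);
    (void)logger;
    std::printf("%s\n", msg);   // forward to the host logger in the real code
    return 0;                   // no values returned to Lua
}

static const luaL_Reg loggerLib[] = {
    { "output", l_output },
    { nullptr,  nullptr }
};

void RegisterLoggerLib(lua_State* L)
{
    luaL_newlib(L, loggerLib);        // table containing the functions
    lua_setglobal(L, "logger_lib");   // script side: logger_lib.output(logger, msg)
}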

#5101522 lightuser data in Lua

Posted by Laval B on 15 October 2013 - 06:39 AM

Hello everyone.


An application we are working on needs some parts to be scripted. The scripts will only act as callbacks to control some parts of the execution of certain transactions. To do so, the application must expose a few of its objects to the scripts. The lifetime of these objects is fully controlled by the host and they will outlive the execution of the scripts, so the logical choice seems to be light userdata.


We have exposed these objects using functions and light userdata, and it does work. However, light userdata values don't have individual metatables; they all share the same one, and therefore the same metamethods. It would be great, though, if we could associate specific methods with each exposed object (as can be done with full userdata) so that the script could use object-oriented notation on those objects.


At the moment, we have to use a notation like this:

function f(user, logger)
   setName(user, "name")
   setGroup(user, "group")

   output(logger, "User is set.")
end


instead of 

function f(user, logger)
   user:setName("name")
   user:setGroup("group")

   logger:output("User is set.")
end


Is there a way to do this with light userdata?

#5075319 Uniform blocks and Uniform buffer objects

Posted by Laval B on 04 July 2013 - 03:10 PM

OK, I think I figured it out. I can't use only one binding point; I need one for each uniform block. But I can use a single UBO and bind a different part of it to each uniform block using glBindBufferRange.


I still have one question though: must I call glGetUniformBlockIndex and glUniformBlockBinding after the program has been linked?
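
Here is a sketch of what I ended up with (program, the block names and the sizes are placeholders, and the offsets have to respect GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT). As far as I can tell, glGetUniformBlockIndex only returns valid indices once the program has been linked, since the block indices are assigned at link time.

void SetupUniformBlocks(GLuint program)   // program must already be linked
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 512, nullptr, GL_DYNAMIC_DRAW);

    // Query the block indices (assigned when the program is linked).
    GLuint perFrameIdx  = glGetUniformBlockIndex(program, "PerFrame");
    GLuint perObjectIdx = glGetUniformBlockIndex(program, "PerObject");

    // Give each block its own binding point...
    glUniformBlockBinding(program, perFrameIdx,  0);
    glUniformBlockBinding(program, perObjectIdx, 1);

    // ...and bind a different range of the same UBO to each binding point.
    glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo, 0,   256);
    glBindBufferRange(GL_UNIFORM_BUFFER, 1, ubo, 256, 256);
}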

#5068236 States managed by a VAO

Posted by Laval B on 08 June 2013 - 09:11 AM

Thank you for the clear answer. :)


It makes sense indeed; I didn't pay attention when I read the glVertexAttribPointer/glEnableVertexAttribArray part and got confused by this passage:


<array> is the vertex array object name. The resulting vertex array object is a new state vector, comprising all the state values listed in tables 6.6 (except for the CLIENT_ACTIVE_TEXTURE selector state), 6.7, and 6.8 (except for the ARRAY_BUFFER_BINDING state).


Ref http://www.opengl.org/registry/specs/ARB/vertex_array_object.txt


I forgot that vertex attributes are not necessarily in the same VBO (over 95% of my VBOs are filled with interleaved data).


Now, the index buffer will get bound when binding the VAO. I guess I can still bind the index buffer separately after binding the VAO (which will update the VAO's state every time, though). If I do this, I will have to make sure to bind index buffer 0 when I render a VAO with no index buffer. Would that be a problem? You see, my code base pretty much separates the vertex data (and the vertex attribute specification) from the index data.
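
To make sure I understand the rule, here is the situation in a small sketch (the handles are placeholders, GL loader assumed): the GL_ELEMENT_ARRAY_BUFFER binding is recorded inside the VAO, while the GL_ARRAY_BUFFER binding itself is not.

void BuildVao(GLuint& vao, GLuint vbo, GLuint ibo)
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // The GL_ARRAY_BUFFER binding itself is NOT VAO state; only the attribute
    // set up by glVertexAttribPointer (which captures the currently bound VBO)
    // is stored in the VAO.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // The element array binding IS VAO state: it is remembered by the VAO.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    glBindVertexArray(0);   // unbind first so later binds don't edit this VAO
}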