Martin Perry
In part one, we familiarized ourselves with the positioning and sizing of individual GUI parts. Now it's time to render them on the screen. This part is shorter than the previous two, because there is not so much to tell. You can look at the previous chapters: Part I - Positioning, Part II - Control logic, Part III - Rendering.

This time you will need some kind of API for the main rendering, but that is not part of GUI design as such. In the end, you don't need to use any sophisticated graphics API at all; you can render your GUI using primitives and bitmaps in your favourite language (Java, C#, etc.). However, what I describe next assumes the use of a graphics API. My samples use OpenGL and GLSL, but a port to DirectX should be straightforward.

You have two choices for rendering your GUI. First, you can render everything as geometry in each frame on top of your scene. Second, you can render the GUI into a texture and then blend that texture with your scene. In most cases this will be slower because of the blending step. What is the same in both cases is the rendering of the individual elements of your system.

Basic rendering

To keep things as easy as possible, we start with the simple approach where each element is rendered separately. This is not very performance-friendly if you have a lot of elements, but it will do for now. Plus, for a static GUI used in a main menu rather than in the actual game, it can be a sufficient solution. You may notice warnings in performance utilities that you are rendering too many small primitives. If your framerate is high enough and you don't care about things like power consumption, you can leave things as they are. Power consumption matters more on mobile devices, where battery lifetime is important. Fewer draw calls are cheaper and put less strain on your battery; plus your device won't be hot as hell. In modern APIs, the best way to render things is to use shaders.
Shaders offer great control - you can blend textures with colors, use mask textures to create patterns, etc. We use one shader that can handle every type of element. The following shader samples are written in GLSL. They use an older notation for compatibility with OpenGL ES 2.0 (almost every mobile device on the market supports this API). This vertex shader assumes that you have already converted your geometry into screen space (see the first part of the tutorial, where the [-1, 1] coordinate range was mentioned).

attribute vec3 POSITION;
attribute vec2 TEXCOORD0;

varying vec2 vTexCoord;

void main()
{
    gl_Position = vec4(POSITION.xyz, 1.0);
    vTexCoord = TEXCOORD0;
}

In the pixel (fragment) shader, I sample a texture and combine it with a color using a simple blending equation. This way, you can create differently colored elements and use a grayscale texture as a pattern mask.

uniform sampler2D guiElementTexture;
uniform vec4 guiElementColor;

varying vec2 vTexCoord;

void main()
{
    vec4 texColor = texture2D(guiElementTexture, vTexCoord);
    vec4 finalColor = (vec4(guiElementColor.rgb, 1) * guiElementColor.a);
    finalColor += (vec4(texColor.rgb, 1) * (1.0 - guiElementColor.a));
    finalColor.a = texColor.a;
    gl_FragColor = finalColor;
}

That is all you need for rendering the basic elements of your GUI.

Font rendering

For fonts I have chosen this basic renderer instead of an advanced one. If your texts are dynamic (changing very often - score, time), this solution may be faster. The speed of rendering also depends on the text length. For small captions, like "New Game", "Continue" or "Score: 0", it will be enough. Problems may (and probably will) occur with long texts like tutorials, credits etc. If you have more than 100 draw calls every frame, your frame rate will probably drop significantly. This cannot be stated exactly; it depends on your hardware, driver optimization and other factors.
The best way is to try :-) From my experience, there is a major frame drop when rendering 80+ letters, but on the other hand, the screen could be static and the user probably won't notice the difference between 60 and 20 fps.

For classic GUI elements, you have used textures that change for every element. For fonts, that would be overkill and a major slowdown of your application. Of course, in some cases (debugging), it may be fine to use this brute-force way. Instead, we will use something called a texture atlas. That is nothing more than a single texture that holds all possible textures (in our case, letters). Look at the picture below if you don't know what I mean :-) Of course, this texture alone is useless without knowing where each letter is located. This information is usually stored in a separate file that contains the coordinates of each letter.

The second problem is resolution. Fonts provided and generated by FreeType are rasterized from vector representations with respect to the requested font size, so they are sharp every time. By using a font texture, you may end up with good-looking fonts at small resolutions and blurry ones at a high resolution. You need to find a trade-off between texture size and font size. Plus, keep in mind that most GPUs (especially mobile ones) have a maximum texture size of 4096x4096. On the other hand, using this resolution for fonts is overkill. Most of the time I have used 512x512 or 256x256 for rendering fonts of size 20. It looks good even on a Retina iPad.

Example of font texture atlas

I have created this texture myself using the FreeType library and my own atlas creator. FreeType has no support for generating these textures, so you have to write it yourself. It may sound complicated, but it is not, and you can use the same code for packing other GUI textures as well. I will give some implementation details in part IV of the tutorial.
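As a small illustration of the coordinates file just described, the snippet below sketches how a letter's pixel rectangle inside the atlas can be converted into the [0, 1] texture coordinates used for rendering. The struct and function names are my own, purely illustrative, and not part of the article's code.

```cpp
#include <cassert>

// Hypothetical per-letter record, as it might be loaded from the
// companion coordinates file: a pixel rectangle inside the atlas.
struct GlyphRect { int x, y, w, h; };
struct GlyphUv   { float u0, v0, u1, v1; };

// Convert the pixel rectangle into normalized [0, 1] UV coordinates
// by dividing by the atlas dimensions.
GlyphUv AtlasRectToUv(const GlyphRect& r, int atlasW, int atlasH)
{
    GlyphUv uv;
    uv.u0 = (float)r.x / atlasW;
    uv.v0 = (float)r.y / atlasH;
    uv.u1 = (float)(r.x + r.w) / atlasW;
    uv.v1 = (float)(r.y + r.h) / atlasH;
    return uv;
}
```

These UVs are what gets passed to the quad instead of the full [0, 1] range used with one-texture-per-letter rendering.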
Every font letter is represented by a single quad without real geometry. This quad is created only from its texture coordinates. The position and the "real texture coordinates" for the letter are passed from the main application and differ for each letter. I have mentioned "real texture coordinates". What are they? You have a font texture atlas, and those are the coordinates of the letter within this atlas.

The following code sample shows the brute-force variant. There is some speed-up, achieved by caching already generated letters. This can cause problems if you generate too many textures and exceed some API limits. For example, if you have a long text and render it with several font faces, you can easily generate hundreds of very small textures.

//calculate "scaling"
float sx = 2.0f / screen_width;
float sy = 2.0f / screen_height;

//Map input position from [0, 1] to screen space [-1, 1]
x = MyMathUtils::MapRange(0, 1, -1, 1, x);
y = -MyMathUtils::MapRange(0, 1, -1, 1, y); //-1 is to put origin to bottom left corner of the letter

//wText is UTF-8 since FreeType expects this
for (int i = 0; i < wText.GetLength(); i++)
{
    unsigned long c = FT_Get_Char_Index(this->fontFace, wText.GetCharAt(i));
    FT_Error error = FT_Load_Glyph(this->fontFace, c, FT_LOAD_RENDER);
    if (error)
    {
        Logger::LogWarning("Character %c not found.", wText.GetCharAt(i));
        continue;
    }

    FT_GlyphSlot glyph = this->fontFace->glyph;

    //build texture name according to font face and letter
    MyStringAnsi textureName = "Font_Renderer_Texture_";
    textureName += this->fontFace->family_name;
    textureName += "_";
    textureName += wText.GetCharAt(i);

    if (!MyGraphics::G_TexturePool::GetInstance()->ExistTexture(textureName))
    {
        //upload new letter only if it doesn't exist yet
        //some kind of cache to improve performance :-)
        MyGraphics::G_TexturePool::GetInstance()->AddTexture2D(textureName, //name of texture within pool
            glyph->bitmap.buffer,                     //buffer with raw texture data
            glyph->bitmap.width * glyph->bitmap.rows, //buffer byte size
            MyGraphics::A8,                           //only grayscale texture
            glyph->bitmap.width, glyph->bitmap.rows); //width / height of texture
    }

    //calculate letter position within screen
    float x2 = x + glyph->bitmap_left * sx;
    float y2 = -y - glyph->bitmap_top * sy;

    //calculate letter size within screen
    float w = glyph->bitmap.width * sx;
    float h = glyph->bitmap.rows * sy;

    this->fontQuad->GetEffect()->SetVector4("cornersData", Vector4(x2, y2, w, h));
    this->fontQuad->GetEffect()->SetVector4("fontColor", fontColor);
    this->fontQuad->GetEffect()->SetTexture("fontTexture", textureName);
    this->fontQuad->Render();

    //advance start position to the next letter
    //(FreeType advance values are in 26.6 fixed-point format, hence the >> 6)
    x += (glyph->advance.x >> 6) * sx;
    y += (glyph->advance.y >> 6) * sy;
}

Changing this code to work with a texture atlas is quite easy. What you need is an additional file with the coordinates of the letters within the atlas. For each letter, those coordinates are passed along with the letter's position and size. The texture is set only once and stays the same until you change the font type. The rest of the code, however, remains the same.

As you can see from the code, the texture bitmap (glyph->bitmap.buffer) is part of the glyph provided by FreeType. Even if you don't use it, it is still generated and takes some time to create. If your texts are static, you can "cache" them: store everything generated by FreeType during the first run (or in some init step) and then, at runtime, just use the precreated data and don't call any FreeType functions at all. I use this most of the time, and there are no performance impacts or problems with font rendering.

Advanced rendering

So far, only basic rendering has been presented. Many of you probably knew all that, and there was nothing surprising. Well, there will probably be no surprises in this section either. If you have many elements and want to render them as fast as possible, rendering each of them separately may not be enough. For this reason, I have used a "baked" solution.
I have created a single geometry buffer that holds the geometry of all elements on the screen, so I can draw them with a single draw call. The problem is that you need a single shader, while the elements may differ. For this purpose, I have used one shader that can handle "everything", with a unified graphics representation for each element. This means that for some elements, parts of the representation will be unused. You may fill those with anything you like, usually zeros. This representation with unused parts results in "larger" geometry data, but it is not such a massive overhead: your GUI should still be cheap on memory while drawing faster. That is the trade-off.

What we need to pass as geometry for every element:

POSITION - divided into two parts: XYZ coordinates and W for the element index.
TEXCOORD0 - two sets of texture coordinates
TEXCOORD1 - two sets of texture coordinates
TEXCOORD2 - color
TEXCOORD3 - an additional set of texture coordinates and reserved space to keep the attribute padded to a vec4

Why do we need several sets of texture coordinates? That is simple. We have baked the entire GUI into one geometry representation. We don't know which texture belongs to which element, and we have only a limited set of textures accessible from a fragment shader. If you put two and two together, you arrive at one solution for textures: we create another texture atlas, built from the separate textures of every "baked" element. From what we have already discovered about elements, we know that they can have more than one texture. That is precisely why we have multiple texture coordinates "baked" into the geometry representation. The first set is used for the default texture, the second for the "hovered" texture, the next for the clicked one, etc. You may choose your own representation.
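The attribute list above can be sketched as a C++ vertex struct. This is my own illustrative layout, not code from the article's attachment; it only shows how the unified representation pads every attribute to a vec4, giving a fixed 80-byte vertex.

```cpp
#include <cassert>

// Plain 4-float vector matching a GLSL vec4 attribute.
struct Vec4 { float x, y, z, w; };

// One vertex of the "baked" GUI geometry. Every attribute is a full
// vec4, so unused slots (e.g. TEXCOORD3.zw) are simply padding.
struct BakedVertex
{
    Vec4 position;   // xyz = position, w = element index into stateIndex[]
    Vec4 texCoord0;  // xy = default-state UVs, zw = hovered-state UVs
    Vec4 texCoord1;  // xy = third state UVs,  zw = fourth state UVs
    Vec4 color;      // rgba element color
    Vec4 texCoord3;  // xy = fifth state UVs,  zw = unused padding
};

// 5 vec4 attributes * 16 bytes = 80 bytes per vertex, tightly packed.
static_assert(sizeof(BakedVertex) == 5 * sizeof(Vec4),
              "vertex must stay tightly packed");
```

Keeping the struct tightly packed means the vertex buffer stride is simply sizeof(BakedVertex), which makes setting up the attribute pointers straightforward.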
In the vertex shader, we choose the correct texture coordinates according to the element's current state and send them to the fragment shader. The current element state is passed from the main application in an integer array, where each number corresponds to a certain state, with -1 for an invisible element (which won't be rendered). We don't pass this data every frame, but only when the state of an element has changed; only then do we update all the states of the "baked" elements. I have limited the maximum number of elements to 64 per single draw call, but you can decrease or increase this number (be careful with increasing it, since you may hit the GPU's uniform size limits). The index into this array is passed as the W component of POSITION. The full vertex and fragment shaders can be seen in the following code snippets.

//Vertex buffer content
attribute vec4 POSITION;  //pos (xyz), index (w)
attribute vec4 TEXCOORD0; //T0 (xy), T1 (zw)
attribute vec4 TEXCOORD1; //T2 (xy), T3 (zw)
attribute vec4 TEXCOORD2; //color
attribute vec4 TEXCOORD3; //T4 (xy), unused (zw)

//User provided input
uniform int stateIndex[64]; //64 = max number of elements baked in one buffer

//Output
varying vec2 vTexCoord;
varying vec4 vColor;

void main()
{
    gl_Position = vec4(POSITION.xyz, 1.0);
    int index = stateIndex[int(POSITION.w)];

    if (index == -1) //not visible
    {
        gl_Position = vec4(0, 0, 0, 0);
        index = 0;
    }

    if (index == 0) vTexCoord = TEXCOORD0.xy;
    if (index == 1) vTexCoord = TEXCOORD0.zw;
    if (index == 2) vTexCoord = TEXCOORD1.xy;
    if (index == 3) vTexCoord = TEXCOORD1.zw;
    if (index == 4) vTexCoord = TEXCOORD3.xy;

    vColor = TEXCOORD2;
}

Note: In the vertex shader, you can spot the "ugly" if sequence. If I replaced this code with an if-else, or even a switch, the GLSL optimizer for the ES version somehow stripped my code and it stopped working. This was the only solution that worked for me.
varying vec2 vTexCoord;
varying vec4 vColor;

uniform sampler2D guiBakedTexture;

void main()
{
    vec4 texColor = texture2D(guiBakedTexture, vTexCoord);

    vec4 finalColor = (vec4(vColor.rgb, 1) * vColor.a) + (vec4(texColor.rgb, 1) * (1.0 - vColor.a));
    finalColor.a = texColor.a;

    gl_FragColor = finalColor;
}

Conclusion

Rendering a GUI is not a complicated thing to do. If you are familiar with the basic concepts of rendering and you know how an API works, you will have no problem rendering everything. You do need to be careful with text rendering, since there can be significant bottlenecks if you choose the wrong approach.

Next time, in part IV, some tips & tricks will be presented. There will be simple texture atlas creation, an example of a user-friendly GUI layout with XML, details regarding touch controls and maybe more :-). The catch is that I currently don't have much time, so there may be a longer delay before part IV sees the light of day :-)

Article Update Log

19 May 2014: Initial release
I have always been sort of terrified of adding sound to my games. I even considered making the game without sound and ran a poll about it. The results were approximately 60:40 in favor of sound. Bottom line: you should ship your games with sound.

While working with sound, you obviously have to make sure that it plays together with the main loop and is correctly synced. For this, threads will probably be your first thought. However, you can go without them as well, though with some disadvantages (mentioned later).

The first important question is: "What library to use?" There are many libraries around and only some of them are free. I was also looking for a universal library that runs on desktop and on mobile devices as well. After some googling, I found OpenAL. It's supported on iOS and on desktop (Windows, Linux, Mac) as well. OpenAL is a C library with an API similar to the one used by OpenGL: in OpenGL, all functions start with a gl prefix; in OpenAL there is an al prefix. If you read other articles, you may come across the "alut" library. That is something similar to "glut", but I am not going to use it.

For OpenAL on Windows, you have to use a forked version of the library. OpenAL was created by Creative. By now it is an outdated library, no longer updated by Creative (the latest API version is 1.1 from 2005). Luckily, there is OpenAL Soft (a fork of the original OpenAL) that uses the same API as the original. You can find the source and Windows precompiled libraries here. On Apple devices running iOS, the situation is far better. OpenAL is directly supported by Apple; you don't need to install anything, just add references to your project from Apple's libraries. See the Apple manual.

One of OpenAL's biggest disadvantages is that there is no direct support for Android. Android uses OpenSL (or something like that :-)). With a little digging, you can find "ports" of OpenAL for Android.
What they do is map every OpenAL function call to an OpenSL call, so they are basically wrappers. One of them can be found here (GitHub). It uses the previously mentioned OpenAL Soft, only built with different flags. However, I have never tested it, so I don't know if and how well it works.

After library selection, you have to choose the sound formats you want to play. The popular MP3 is not the best choice: the decoder is a little messy and there are patents lying around. OGG is a better choice: the decoder is easy to use and open, and OGG files are often smaller than MP3s with the same settings. It is also a good decision to support uncompressed WAV.

Sound engine design

Let's start with the sound engine design and what exactly you need to get it working. As mentioned before, you will need the OpenAL library. OpenAL is a C library; I wanted an object-oriented wrapper for easier manipulation. I have used C++, but a similar design can be used in other languages as well (of course, you will need an OpenAL bridge from C to your language). Apart from OpenAL, you will also need thread support. I have used the pthread library (Windows version). If you are targeting C++11, you can also go with native thread support. For OGG decompression, you will need the OGG Vorbis library (download the parts libogg and libvorbis). WAV files aren't used very often, more for debugging, but it's good to support that format too. Simple WAV decompression is easy to write from scratch, so I have used my own solution instead of a 3rd-party library.

My design consists of two basic classes, one interface (pure virtual class), and one class for every supported audio format (OGG, WAV...):

SoundManager - main class, using the singleton pattern. A singleton is a good choice here, since you will probably have only one OpenAL instance initialized. This class is used for controlling and updating all sounds. References to all SoundObjects are held there.
SoundObject - our main sound class, which is accessible and has methods such as Play, Pause, Rewind, Update...
ISoundFileWrapper - interface (pure virtual class) for different file formats, declaring methods for decompression, filling buffers, etc.
Wrapper_OGG - class that implements ISoundFileWrapper, for decompression of OGG files
Wrapper_WAV - class that implements ISoundFileWrapper, for decompression of WAV files

OpenAL Initialization

The code described in this section can be found in the class SoundManager. The full source with headers is in the article attachment. We start with a code snippet for OpenAL initialization.

alGetError();

ALCdevice * deviceAL = alcOpenDevice(NULL);

if (deviceAL == NULL)
{
    LogError("Failed to init OpenAL device.");
    return;
}

ALCcontext * contextAL = alcCreateContext(deviceAL, NULL);
AL_CHECK( alcMakeContextCurrent(contextAL) );

Once initialized, we won't need the device and context variables any more, only in the destruction phase; OpenAL holds its initialized state internally. You may see AL_CHECK around the alcMakeContextCurrent function. This is a macro I use to check for OpenAL errors in debug mode. You can see its code in the following snippet.

const char * GetOpenALErrorString(int errID)
{
    if (errID == AL_NO_ERROR) return "";
    if (errID == AL_INVALID_NAME) return "Invalid name";
    if (errID == AL_INVALID_ENUM) return "Invalid enum";
    if (errID == AL_INVALID_VALUE) return "Invalid value";
    if (errID == AL_INVALID_OPERATION) return "Invalid operation";
    if (errID == AL_OUT_OF_MEMORY) return "Out of memory";

    return "Unknown error";
}

inline void CheckOpenALError(const char* stmt, const char* fname, int line)
{
    ALenum err = alGetError();
    if (err != AL_NO_ERROR)
    {
        LogError("OpenAL error %08x, (%s) at %s:%i - for %s", err, GetOpenALErrorString(err), fname, line, stmt);
    }
}

#ifndef AL_CHECK
    #ifdef _DEBUG
        #define AL_CHECK(stmt) do { \
            stmt; \
            CheckOpenALError(#stmt, __FILE__, __LINE__); \
        } while (0)
    #else
        #define AL_CHECK(stmt) stmt
    #endif
#endif

I use this macro for every OpenAL call everywhere in my code.

The next things you need to initialize are sources and buffers. You could create them later, when they are really needed; I have created some of them up front, and if more are needed, they can always be added later. Buffers are what you probably think: they hold the uncompressed data that is played by OpenAL. A source is basically a sound that is played; it reads data from the buffers associated with it. There are limits on the number of buffers and sources; the exact value depends on your system. I have chosen to pregenerate 512 buffers and 16 sources (which means I can play 16 sounds at once).

for (int i = 0; i < 512; i++)
{
    SoundBuffer buffer;
    AL_CHECK( alGenBuffers((ALuint)1, &buffer.refID) );
    this->buffers.push_back(buffer);
}

for (int i = 0; i < 16; i++)
{
    SoundSource source;
    AL_CHECK( alGenSources((ALuint)1, &source.refID) );
    this->sources.push_back(source);
}

You may notice that the alGen* functions take a second parameter, a pointer to an unsigned int, which is the id of the created buffer or source. I have wrapped this into a simple struct that holds the id and a boolean indicator of whether it is free or used by a sound. I keep a list of all sources and buffers. Apart from this list, I have a second one that holds only those resources that are free (not connected to any sound).
for (uint32 i = 0; i < this->buffers.size(); i++)
{
    this->freeBuffers.push_back(&this->buffers[i]);
}

for (uint32 i = 0; i < this->sources.size(); i++)
{
    this->freeSources.push_back(&this->sources[i]);
}

If you are using threads, you will need to initialize them as well. The code for this can be found in the source attached to this article. Now you have prepared all you need to start adding sounds to your engine.

Sound playback logic

Before we get to details and code, it is important to understand how sounds are managed and played. There are two ways to play sounds. In the first one, you load the whole sound data into a single buffer and just play it. It's an easy and fast way to listen to something. As usual with simple solutions, there is a problem: uncompressed files are much bigger than compressed ones. Imagine you have more than one sound; the size of all the buffers can easily exceed your free memory.

Luckily, there is a second approach: load only a small portion of the file into a single buffer, play it, then load another portion. That sounds good, right? Well, not entirely. Done naively, you may hear pauses at the end of each buffer's playback, just before the next buffer is filled and played. We solve this by keeping more than one buffer filled at a time: fill several buffers (I am using three), play the first one, and when its content finishes, play the second one immediately while refilling the finished buffer with new data. We cycle this until we reach the end of the sound.

The number of buffers used may vary, depending on your needs. If your sound engine is updated from a separate thread, the count is not much of a problem; you may choose almost any number of buffers and it will be just fine. However, if you run the update in your main engine loop (no threads involved), you may have problems with a low buffer count. Why? Imagine you have a Windows application.
Now you drag the window around your desktop. On Windows (I have not tested this on other systems), this causes the main thread to be suspended and wait. The sound will keep playing (because OpenAL itself has its own thread to play sounds), but only while there are buffers in the queue that can be played. If you exhaust all of them, the sound will stop, because your main thread is blocked and the buffers are no longer updated.

Each buffer has a byte size (we will set this size during sound creation, see the next section). To compute the duration of the sound in a buffer, you can use this equation:

duration = BUFFER_SIZE / (sound.freqency * sound.channels * sound.bitsPerChannel / 8) (eq. 1)

Note: If you want to calculate the current playback time, you have to take the buffers into account, and it's not that straightforward. We will take a look at this in one of the later sections.

OK, enough theory, let's see some real code. All the interesting stuff can be found in the class SoundObject. This class is responsible for the management of a single sound (play, update, pause, stop, rewind, etc.).

Creating sound

Before we can play anything, we need to initialize the sound. For now, I will skip the sound decompression part and just use the ISoundFileWrapper interface methods without background knowledge. First of all, we obtain free buffers from our SoundManager (notice that we are using a singleton call on SoundManager to get its instance). We need as many free buffers as we want to have preloaded. Those free buffers are put into an array in our sound object.

#define PRELOAD_BUFFERS_COUNT 3

....

for (uint32 i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    SoundBuffer * buf = SoundManager::GetInstance()->GetFreeBuffer();
    if (buf == NULL)
    {
        MyUtils::Logger::LogWarning("Not enough free sound-buffers");
        continue;
    }
    this->buffers[i] = buf;
}

Next, we need to get the sound info from our file (or memory, depending on where your sound is stored).
In that information, we need to have at least:

struct SoundInfo
{
    int freqency;       //sound frequency (eg. 44100 Hz)
    int channels;       //number of channels (eg. Stereo = 2)
    int bitrate;        //sound bitrate
    int bitsPerChannel; //number of bits per channel (eg. 16 for 2-channel stereo)
};

As a next step, we fill those buffers with initial data. We could do this later as well, but it must always happen before we start playing the sound. Now, do you remember how we generated buffers in the initialization section? They had no size set. That changes now. We decompress data from the input file/memory using the ISoundFileWrapper interface methods. The single-buffer size is passed to the constructor and used in the DecompressStream method. The loop flag is used to enable/disable continuous playback. If we enable looping, then after the end of the file is reached, the rest of the buffer is filled with the content of the file reset to its initial position.

bool SoundObject::PreloadBuffer(int bufferID)
{
    std::vector<char> decompressBuffer;
    this->soundFileWrapper->DecompressStream(decompressBuffer, this->settings.loop);

    if (decompressBuffer.size() == 0)
    {
        //nothing more to read
        return false;
    }

    //now we fill loaded data to our buffer
    AL_CHECK( alBufferData(bufferID, this->sound.format, &decompressBuffer[0], static_cast<ALsizei>(decompressBuffer.size()), this->sound.freqency) );

    return true;
}

Playing the sound

Once we have prepared everything, we can finally play our sound. Each sound has three states - PLAYING, PAUSED, and STOPPED. In the STOPPED state, the sound is reset to the default configuration; the next time we play this sound, it will start from the beginning. Before we can actually play the sound, we need to obtain a free source from our SoundManager.

this->source = SoundManager::GetInstance()->GetFreeSource();

If there is no free source, we can't play the sound. It is important to release the source from the sound once the sound has stopped or finished playing.
Do not release the source from a paused sound, or you will lose the progress and the settings.

Next, we set some additional properties for the source. We need to do this every time after the source is bound to the sound, because a single source can be attached to a different sound after it has been released, and that sound can have different settings. I am using these properties, but you can set other information as well. For the complete list of possibilities, see the OpenAL guide (page 8).

AL_CHECK( alSourcef(this->source->refID, AL_PITCH, this->settings.pitch) );
AL_CHECK( alSourcef(this->source->refID, AL_GAIN, this->settings.gain) );
AL_CHECK( alSource3f(this->source->refID, AL_POSITION, this->settings.pos.X, this->settings.pos.Y, this->settings.pos.Z) );
AL_CHECK( alSource3f(this->source->refID, AL_VELOCITY, this->settings.velocity.X, this->settings.velocity.Y, this->settings.velocity.Z) );

There is an important thing: we have to set AL_LOOPING to false. If we set this flag to true, we would end up looping a single buffer. Since we are using multiple buffers, we manage looping ourselves.

AL_CHECK( alSourcei(this->source->refID, AL_LOOPING, false) );

Before we actually start playback, the buffers need to be added to the source's buffer queue. This queue is processed and played during playback.

this->remainBuffers = 0;
for (int i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    if (this->buffers[i] == NULL)
    {
        continue; //buffer not used, do not add it to the queue
    }

    AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &this->buffers[i]->refID) );
    this->remainBuffers++;
}

Finally, we can start the sound playback:

AL_CHECK( alSourcePlay(this->source->refID) );
this->state = PLAYING;

For now, our sound should be playing, and we should hear something (if not, there may be a problem :-)). If we do nothing more, our sound will end after some time, depending on the size of our buffers.
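Equation 1 from earlier can be written directly as a small helper. This is my own illustrative function, not part of the article's SoundObject class; the parameter names mirror the SoundInfo fields.

```cpp
#include <cassert>

// Equation 1 in code form: how many seconds of audio fit in one buffer.
// bytesPerSecond = frequency * channels * (bitsPerChannel / 8).
// For 16-bit stereo at 44100 Hz, one second takes 44100 * 2 * 2 = 176400 bytes.
double BufferDurationSeconds(int bufferSizeBytes, int frequency,
                             int channels, int bitsPerChannel)
{
    double bytesPerSecond = frequency * channels * (bitsPerChannel / 8.0);
    return bufferSizeBytes / bytesPerSecond;
}
```

With three preloaded buffers, multiplying this value by the buffer count gives roughly how long the sound survives without an Update call, which is exactly the "dragged window" scenario described above.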
We can calculate the length of playback using the equation given earlier, multiplied by our buffer count. To ensure continuous playback, we have to update our buffers manually; OpenAL won't do this for us automatically. This is where the threads or the main engine loop come into play. The update code is called from a separate thread, or from the main engine loop in every iteration. This is probably one of the most important parts of the code.

void SoundObject::Update()
{
    if (this->state != PLAYING)
    {
        //sound is not playing (PAUSED / STOPPED) - do not update
        return;
    }

    int buffersProcessed = 0;
    AL_CHECK( alGetSourcei(this->source->refID, AL_BUFFERS_PROCESSED, &buffersProcessed) );

    // check to see if we have a buffer to deQ
    if (buffersProcessed > 0)
    {
        if (buffersProcessed > 1)
        {
            //we have processed more than 1 buffer since last call of Update method
            //we should probably reload more buffers than just the one (not supported yet)
            MyUtils::Logger::LogInfo("Processed more than 1 buffer since last Update");
        }

        // remove the buffer from the source
        uint32 bufferID;
        AL_CHECK( alSourceUnqueueBuffers(this->source->refID, 1, &bufferID) );

        // fill the buffer up and reQ!
        // if we cant fill it up then we are finished
        // in which case we dont need to re-Q
        // return NO if we dont have more buffers to Q

        if (this->state == STOPPED)
        {
            //put it back - sound is not playing anymore
            AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &bufferID) );
            return;
        }

        //call method to load data to buffer
        //see method in section - Creating sound
        if (this->PreloadBuffer(bufferID) == false)
        {
            this->remainBuffers--;
        }

        //put the newly filled buffer back (at the end of the queue)
        AL_CHECK( alSourceQueueBuffers(this->source->refID, 1, &bufferID) );
    }

    if (this->remainBuffers <= 0)
    {
        this->Stop();
    }
}

The last thing that needs to be covered is stopping the sound playback.
If the sound is stopped, we need to release its source and reset everything to the default configuration (preload the buffers with the beginning of the sound data again). I had a problem here: if I just removed the buffers from the source's queue, refilled them and put them back into the queue on the next playback, there was an annoying glitch at the beginning of the sound. I solved this by releasing the buffers from the sound and acquiring them again.

AL_CHECK( alSourceStop(this->source->refID) );

//Remove buffers from queue
for (int i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    if (this->buffers[i] == NULL)
    {
        continue;
    }
    AL_CHECK( alSourceUnqueueBuffers(this->source->refID, 1, &this->buffers[i]->refID) );
}

//Free the source
SoundManager::GetInstance()->FreeSource(this->source);

this->soundFileWrapper->ResetStream();

//solving the "glitch" in the sound - release buffers and acquire them again
for (uint32 i = 0; i < PRELOAD_BUFFERS_COUNT; i++)
{
    SoundManager::GetInstance()->FreeBuffer(this->buffers[i]);

    SoundBuffer * buf = SoundManager::GetInstance()->GetFreeBuffer();
    if (buf == NULL)
    {
        MyUtils::Logger::LogWarning("Not enough free sound-buffers");
        continue;
    }
    this->buffers[i] = buf;
}

//Preload data again
...

Inside the SoundManager::GetInstance()->FreeBuffer method, I delete and regenerate the buffer to avoid the glitch in the sound. Maybe it's not the correct solution, but it was the only one that worked for me.

AL_CHECK( alDeleteBuffers(1, &buffer->refID) );
AL_CHECK( alGenBuffers(1, &buffer->refID) );

Additional sound info

During playback, we often need some other information; the most important is probably the playback time. OpenAL doesn't offer a solution for this task (at least not directly, and not with more than one buffer at play), so we have to calculate the time ourselves. For this, we need information from OpenAL and from the file. Since we are using buffers, this is a little problematic: the position in the file doesn't correspond directly to the currently playing sound.
"File time" is not in synchronization with the "playback time". Second problem is caused by looping. At some point, "file time" is again at the beginning (eg. 00:00), but the playback time is somewhere at the end of the sound (eg. 16:20 from total length of 16:30). We have take in mind all of this. First of all, we need to get time of the remaining buffers (that is a sum of all of the buffers that hasn't been played yet). From a sound file, we get the current time (for an uncompressed sound, it is a pointer into the file indicating current position). This time is, hovewer, not correct. It is a time containing all preloaded buffers, even the ones that haven't been played yet (and that is our problem). We subtract buffered time from file time. It will give us "correct" time, at least, in most of the cases. As always, there are some "special cases" (very often called problems or any other not suitable words), that can cause some headache. I have already mentioned one of them - looping sound. If you are playing sound in a loop, you are listening to sound from a buffer that contains data from the end of a file, but a pointer in the data may already be at the beginning. This will give you negative time. You can solve this by taking the duration of an entire sound and subtract the absolute value of a negative time from it. Another problem may be caused if a file is not looping or is short enough, to be kept in buffers only. For this, you take the duration of an entire sound and substract prebuffered time from it. The time we have calculated so far, is not the final one yet. It is a time of a currently playing buffer's start. To get current time, we have to add a current buffer time offset. For this, we have to use OpenAL to get buffer offset. Let's see, if we wouldn't use multiple buffers and have the whole sound in a big one, this would give us a correct time of a playback and no other tricks would be needed. 
As always, you can review what has been written in the code snippet to get a better understanding of the problem (or in case you don't understand my attempt to explain it :-)). The total time of the sound is obtained from the opened sound file via an ISoundFileWrapper interface method.

```cpp
//Get duration of remaining buffers
float preBufferTime = this->GetBufferedTime(this->remainBuffers);

//get current time of file stream
//this stream is "in the future" because of buffered data
//duration of buffers MUST be removed from the time
float time = this->soundFileWrapper->GetTime() - preBufferTime;

if (this->remainBuffers < PRELOAD_BUFFERS_COUNT)
{
    //file has already been read entirely
    //we are currently "playing" sound from the cache only
    //and there is no loop active
    time = this->soundFileWrapper->GetTotalTime() - preBufferTime;
}

if (time < 0)
{
    //file has already been read entirely
    //we are currently "playing" sound from the last loop cycle
    //but the file stream is already in the next loop
    //because of the cache delay
    //Sign of "+" => "- abs(time)" rewritten to "+ time"
    time = this->soundFileWrapper->GetTotalTime() + time;
}

//add current buffer play time to the time from the file stream
float result;
AL_CHECK( alGetSourcef(this->source->refID, AL_SEC_OFFSET, &result) );
time += result; //time in seconds
```

Sound file formats

Now seems to be a good time to look at actual sound files. I have used OGG and WAV. I have also added support for RAW data, which is basically the same as WAV without headers. WAV and RAW data are helpful during debugging, or if you have some external decompressor that gives you uncompressed RAW data instead of compressed data.

OGG

Decompression of OGG files is straightforward with the vorbis library. You just have to use its functions, which provide the full functionality for you. You can find the whole code in the class WrapperOGG. The most interesting part of this code is the main loop for filling OpenAL buffers. We have an OGG_BUFFER_SIZE variable. I have used a size of 2048 bytes.
Beware, this value is not the same as the OpenAL buffer size! This value indicates how many bytes we read from the ogg file in a single call. Those reads are then appended to our OpenAL buffer. The size of our OpenAL buffer is stored in the variable minDecompressLengthAtOnce. If we reach or overflow (should not happen) this value, we stop reading and return. minDecompressLengthAtOnce % OGG_BUFFER_SIZE must be 0! Otherwise there will be a problem, because we would read more data than the buffer can hold and our sound would skip some parts. Of course, we could update the pointers or move them "back" to read the missing data again, but why? A simple solution with a modulo test is enough and produces cleaner code. There is no need for crazy buffer sizes like 757 or 11243 bytes.

```cpp
int endian = 0; // 0 for Little-Endian, 1 for Big-Endian
int bitStream;
long bytes;

do
{
    do
    {
        // Read up to a buffer's worth of decoded sound data
        bytes = ov_read(this->ov, this->bufArray, OGG_BUFFER_SIZE, endian, 2, 1, &bitStream);

        if (bytes < 0)
        {
            MyUtils::Logger::LogError("OGG stream ov_read error - returned %i", bytes);
            continue;
        }

        // Append data to the end of the buffer
        decompressBuffer.insert(decompressBuffer.end(), this->bufArray, this->bufArray + bytes);

        if (static_cast<int>(decompressBuffer.size()) >= this->minDecompressLengthAtOnce)
        {
            //buffer has been filled
            return;
        }
    } while (bytes > 0);

    if (inLoop)
    {
        //we are in a loop - we have reached the end of the file => go back to the beginning
        this->ResetStream();
    }

    if (this->minDecompressLengthAtOnce == INT_MAX)
    {
        //read entire file in a single call
        return;
    }

} while (inLoop);
```

WAV

Processing a WAV file yourself may be seen as useless by many people ("I can download a library somewhere"). In some ways it is, and they are correct. On the other hand, by doing this you will get a slightly better understanding of how things work under the hood. In the future, you can use this knowledge to write streaming of any kind of uncompressed data.
The solution should be very similar to this one. First, you have to calculate the duration of your sound, using the equation we have already seen:

duration = RAW_FILE_SIZE / (sound.frequency * sound.channels * sound.bitsPerChannel / 8)   (eq. 1)
RAW_FILE_SIZE = WAV_FILE_SIZE - WAV_HEADERS_SIZE

In the code snippet below, you can see the same functionality as in the OGG section sample. Again, we use modulo for WAV_BUFFER_SIZE (this time, however, it is possible to avoid this, but why use a different approach?).

```cpp
bool eof = false;
int curBufSize = 0;

do
{
    do
    {
        curBufSize = 0;

        while (curBufSize < WAV_BUFFER_SIZE)
        {
            uint64 remainToRead = WAV_BUFFER_SIZE - curBufSize;

            //note: this condition was truncated in the original article;
            //from context, a new chunk header is read once the current chunk is exhausted
            if (this->curChunk.size <= 0)
            {
                this->ReadData(&this->curChunk, sizeof(WAV_CHUNK));
            }

            // Check for .WAV data chunk
            if ((this->curChunk.id[0] == 'd') && (this->curChunk.id[1] == 'a')
             && (this->curChunk.id[2] == 't') && (this->curChunk.id[3] == 'a'))
            {
                //how much data we can read in the current chunk
                uint64 readSize = std::min(this->curChunk.size, remainToRead);

                this->ReadData(this->bufArray + curBufSize, readSize);
                curBufSize += readSize;          //buffer filled from (0...curBufSize)
                this->curChunk.size -= readSize; //how much remains to read in the current chunk
            }
            else
            {
                //not a "data" chunk - advance stream
                this->Seek(this->curChunk.size, SEEK_POS::CURRENT);
            }

            if (this->t.processedSize >= this->t.fileSize)
            {
                eof = true;
                break;
            }
        }

        // Append to the end of the buffer
        decompressBuffer.insert(decompressBuffer.end(), this->bufArray, this->bufArray + curBufSize);

        if (static_cast<int>(decompressBuffer.size()) >= this->minProcesssLengthAtOnce)
        {
            return;
        }
    } while (!eof);

    if (inLoop)
    {
        this->ResetStream();
    }

    if (this->minProcesssLengthAtOnce == INT_MAX)
    {
        return;
    }

} while (inLoop);
```

Conclusion and Attached Code Info

The code in the attachment cannot be used directly (download - build - run and use). In my engine, I am using a VFS (virtual file system) to handle file manipulation. I left it in the code, because the code is built around it.
Removing it would have required some changes I don't have time for :-) In some places, you may find some math structures and functions (e.g. Vector3, Clamp) or utilities (logging system). All of these are easy to understand from the function or structure name. I am also using my own implementation of String (MyStringAnsi), but again, method names and usage are easy to understand from the code. Even without knowledge of the mentioned files, you can study the code and use it to learn some tricks. It is not difficult to update or rewrite the code to suit your needs. If you have any problems, you can leave me a note in the article discussion, or contact me directly via email: info (at) perry.cz.

Article Update Log
25 Aug 2014: Error correction
19 Aug 2014: Initial release
In the first part of the GUI tutorial (link), we have seen the positioning and dimension system. You can also look at the other chapters:
- Part I - Positioning
- Part II - Control logic
- Part III - Rendering

Today, before rendering, we will spend some time familiarizing ourselves with the basic element types used in this tutorial. Of course, feel free to design anything you like. The controls mentioned in this part are something of a standard that every GUI should have. They are:
- Panel - usually not rendered, used only to group elements with similar functionality. You can easily move or hide all of its content.
- Button - what else to say, a button is just a plain old button
- Checkbox - similar in basic principle to a button, but with more states. We all probably know it.
- Image - can be used for icons, image visualization etc.
- TextCaption - for text rendering

Control logic

The control logic is maintained in one class. This class takes care of state changes and contains a reference to the actual control mechanism - a mouse or touch controller. So far, only single-touch input is handled. If we wanted a multi-touch GUI control, it would be more complicated. We would need to solve problems and actions where one finger is down and another is "moving" across the screen. What happens if a moving finger crosses an element that is already active? What if we release the first finger and keep only the second, which arrived on our element during the movement? Those questions can be answered by observing how existing GUI systems behave, but what if there are multiple systems and every one of them behaves differently? Which one is more correct? Due to all those questions, I have disabled multi-touch support. For a main menu and other similar screens, that is usually OK. Problems can be caused by the main game. If we are creating, for example, a racing game, we need multi-touch support: one finger controls the pedals, another the steering, and a third one maybe the shifting.
For these types of situations, we need multi-touch support. But that will not be described here, since I have not used it so far. I have it in mind, and I believe the described system can easily be upgraded to support it. For each element, we need to test the position of our control point (mouse, finger) against the element. We use the positions calculated in the previous article. Since every element is basically a 2D AABB (axis-aligned bounding box), we can use simple interval testing in the X and Y axes. Note that we test only visible elements. If a point is inside an invisible element, we usually discard it and continue. We need to solve one more thing: if elements are nested inside each other, which one will receive the action? I have used simple depth testing. The screen, as the parent of all other elements, has depth 0. Every child within the screen has depth = parentDepth + offset, and so on, recursively for children of children. The element with the greatest depth that contains the point is the one "with focus". We will use this naming convention in later parts. I have three basic states for the user controller:
- CONTROL_NONE - no control button is pressed
- CONTROL_OVER - controller is over, but no button is pressed
- CONTROL_CLICK - controller is over and a button is pressed

This maps 1:1 to a mouse controller. For fingers and touch control in general, the CONTROL_OVER state has no real meaning. To keep things simple and portable, we preserve this state and handle it in the code logic with a few conditional sections. For this I have used the preprocessor (#ifdef), but it could also be decided at runtime with a simple if branch. Once we identify the element with the current focus, we need to do several things. First of all, compare the last and the currently focused elements. I will explain this idea with commented code.
```cpp
if (last != NULL) //there was some focused element last time
{
    //update state of the last focused element to the "no control" state
    UpdateElementState(CONTROL_NONE, last);
}

if (control->IsPressed())
{
    //test current state of control (mouse / finger)
    //if control is down, do not trigger state change for mouse over
    return;
}

if (act != NULL)
{
    //set state of current element to control (mouse / finger) over
    //if control is a mouse - this will change the state to HOVERED; with a finger
    //it will go directly to the same state as mouse down
    UpdateElementState(CONTROL_OVER, act);
}
```

If the last and the currently focused elements are the same, we need a different chain of responses.

```cpp
if (act == NULL)
{
    //no selected element - no clicking on it => do nothing
    return;
}

if (control->IsPressed())
{
    //control (mouse / finger) is down - send state to element
    UpdateElementState(CONTROL_CLICK, act);
}

if (control->IsReleased())
{
    //control (mouse / finger) is released - send state to element
    UpdateElementState(CONTROL_OVER, act);
}
```

In the above code, the tests against NULL are important, since NULL represents the case where no element is focused at the moment. Also, control states are sent on every update, so we need to figure out how to convert them into element states and how to call the triggers correctly. Element state changes and trigger actions are specific to the different element types; I will summarize them in the following sections. To fire the triggers, I have used delegates from the FastDelegate library / header (Member Function Pointers and the Fastest Possible C++ Delegates). This library is very easy to use and is perfectly portable (iOS, Android, Win...). In C++11 there are some built-in solutions, but I would rather stick with this library. For each element that needs triggers, I add them via Set functions. If the associated action is triggered, the delegate is called. Instead of this, you could use function pointers. The problem with them is usually associated with classes and member functions.
With delegates, you will have easy-to-maintain code, and you can associate delegates with classic functions or member functions. In both cases, the code remains the same. The only difference is in the delegate creation (for this, see the article on CodeProject - link above). In C#, you have delegates in the core language, so there is no problem at all. In Java, there is probably also some solution, but I am not Java-positive, so I don't know :-) Other languages will have similar functionality as well.

Elements

First, there is a good reason to create an abstract element that every other element will extend. In that abstract element, you will have position, dimensions, color and some other useful things. The specialized functionality will be coded in separate classes that extend this abstract class.

1. Panel & Image

Neither of them has any real functionality. A panel exists simply for grouping elements together, and an image for showing images. That's all. Basically, both of them are very similar. You can choose a background color or set a texture. The reason why I have created two different elements is better code readability, and Image has some additional background functionality, like helper methods for render target visualization (used in debugging shadow maps, deferred rendering etc.).

1.1 Control logic

Well... here it is really simple. I use no states for these two elements. Of course, feel free to add some.

2. Button

One of the two more interesting elements I am going to investigate in detail. A button is recommended as the first element you should code when creating a GUI. You can try various scenarios on it - showing a texture, changing a texture, control interaction, rendering etc.
Other elements are basically just a modified button :-) Our button has three states:
- non active - classic default state
- hovered - valid only for mouse control; indicates that the mouse pointer is over the element, but no mouse button is pressed. This state is not used for touch control
- active - the button has been clicked, or the mouse / finger has been pressed on top of it

You could add more states, but those three are all you need for basic effects and a nice-looking button. You should have at least two different textures for each button: one for the default state and one for the action state. There is often no need to separate the active and hovered states; they can look the same. On a touch controller there is no hovered state at all, so there is no difference. Closely related to state changes are triggers. Those are actions that occur when a button changes from one state to another, or while it is in some state. You can think of many possible actions (if you can't, a good source of inspiration is, for example, the C# properties of a UI button). I have used only a limited set of triggers. My basic ones are:
- onDown - mouse or finger has been pressed on the button
- onClick - click is generated after releasing the pressed control (with some additional prerequisites)
- onHover - valid only for mouse control. Mouse is on the button, but not pressed
- onUp - mouse or finger has been released on the button (it can be seen as onClick without the additional prerequisites)
- whileDown - called while the mouse or finger is pressed on the button
- whileHover - called while the mouse is over the button, but not pressed

I have almost never seen "while" triggers elsewhere. In my opinion, they are good for repeating actions, like the throttle pedal in a touch-based racing game - you are holding it most of the time. Sometimes, you need functionality similar to a checkbox with a button. A typical case is a "play / pause" button in a media player.
Once you hit the button, an action is triggered and the icon is changed as well. You can either use a real checkbox or alter the button a little bit (which is what I am doing). In the trigger action code, you simply change the icon set used for the button. See the sample code below, where I am using a button as a checkbox to enable / disable sound.

```cpp
void OnClickAction(GUISystem::GUIElement * el)
{
    //emulate checkbox behaviour with a button
    if (this->sound_on)
    {
        //sound is currently on - we are turning it off
        //change icon set
        GUISystem::GUIButtonTextures t;
        t.textureName = "soundoff";        //default texture
        t.textureNameClicked = "soundon";  //on click
        t.textureNameHover = "soundon";    //on mouse over
        el->GetButton()->SetTextures(t);
    }
    else
    {
        //sound is currently off - we are turning it on
        //change icon set
        GUISystem::GUIButtonTextures t;
        t.textureName = "soundon";         //default texture
        t.textureNameClicked = "soundoff"; //on click
        t.textureNameHover = "soundoff";   //on mouse over
        el->GetButton()->SetTextures(t);
    }

    //do some other actions needed to enable / disable sound
}
```

2.1. Control logic

The control logic of a button seems relatively simple given the already mentioned states - there are only three basic ones. However, the main code is a bit more complex. I have divided the implementation into two parts. The first is a "message" sent to the button from the controller class on a state change (it is not really a message, just a function call, but it can be seen as a message). The second part handles the state change and the trigger calls based on the received "message"; this part is coded directly inside the button class implementation. First part, inside the control class, that sends the "messages":

```cpp
if (ctrl == CONTROL_OVER) //element has focus from mouse
{
#ifdef __TOUCH_CONTROL__
    //touch control has no CONTROL_OVER state!
    //CONTROL_OVER => element has been touched => CONTROL_CLICK
    //convert it to CONTROL_CLICK
    ctrl = CONTROL_CLICK;
#else
    //should not occur for touch control
    if (btn->GetState() == BTN_STATE_CLICKED) //last state was clicked
    {
        btn->SetState(BTN_STATE_NON_ACTIVE); //trigger actions for onRelease
        btn->SetState(BTN_STATE_OVER);       //hover it - mouse stays on top of element after click
                                             //that is important, otherwise it would look odd
    }
    else
    {
        btn->SetState(BTN_STATE_OVER); //hover element
    }
#endif
}

if (ctrl == CONTROL_CLICK) //element has focus from mouse and a mouse button is pressed
{
    btn->SetState(BTN_STATE_CLICKED); //trigger actions for onClick
}

if (ctrl == CONTROL_NONE) //element has no mouse focus
{
#ifndef __TOUCH_CONTROL__
    btn->SetState(BTN_STATE_OVER); //deactivate (on over)
#endif

    if (control->IsPressed())
    {
        btn->SetState(BTN_STATE_DUMMY); //deactivate - use dummy state to prevent some actions
                                        //associated with releasing control (mostly used in touch control)
    }

    btn->SetState(BTN_STATE_NON_ACTIVE); //deactivate
}
```

The second part is coded inside the button and handles the received "messages". The touch control difference is also covered (a button should never receive a hover state). Of course, sometimes you want to preserve the hover state to port your application and keep the same functionality. In that case, the hover trigger is often called together with onDown.
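The mouse-versus-touch difference described above can be distilled into a tiny pure function. This is only a sketch with illustrative names: the engine decides the branch at compile time with #ifdef __TOUCH_CONTROL__, while here it is a runtime flag so the mapping can be tested in isolation.

```cpp
#include <cassert>

//controller states, as in the article
enum Control  { CONTROL_NONE, CONTROL_OVER, CONTROL_CLICK };
//basic button states, as in the article
enum BtnState { BTN_STATE_NON_ACTIVE, BTN_STATE_OVER, BTN_STATE_CLICKED };

//which button state a controller "message" maps to
BtnState MapControlToButton(Control ctrl, bool isTouch)
{
    if (ctrl == CONTROL_OVER)
    {
        //touch input has no hover - a touch goes straight to "clicked"
        return isTouch ? BTN_STATE_CLICKED : BTN_STATE_OVER;
    }
    if (ctrl == CONTROL_CLICK)
    {
        return BTN_STATE_CLICKED;
    }
    return BTN_STATE_NON_ACTIVE;
}
```

The full controller code above does more than this (it also replays intermediate states such as the click-then-hover sequence), but the core mapping is exactly this small.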
```cpp
if (this->actState == newState)
{
    //call repeat triggers
    if ((this->hasBeenDown) && (this->actState == BTN_STATE_CLICKED))
    {
        //call whileDown trigger
    }

    if (this->actState == BTN_STATE_OVER)
    {
        //call whileHover trigger
    }

    return;
}

//handle state change

if (newState == BTN_STATE_DUMMY)
{
    //dummy state to "erase" states safely without firing the trigger
    //delegates associated with the action
    //dummy = NON_ACTIVE state
    this->actState = BTN_STATE_NON_ACTIVE;
    return;
}

//was not active => now mouse over
if ((this->actState == BTN_STATE_NON_ACTIVE) && (newState == BTN_STATE_OVER))
{
    //trigger onHover
}

//was clicked => now non active
if ((this->actState == BTN_STATE_CLICKED) && (newState == BTN_STATE_NON_ACTIVE))
{
    if (this->hasBeenDown)
    {
        //trigger onClick
    }
    else
    {
        //trigger onUp
    }
}

#ifdef __TOUCH_CONTROL__
//no hover state on touch control => go directly from NON_ACTIVE to CLICKED
if ((this->actState == BTN_STATE_NON_ACTIVE) && (newState == BTN_STATE_CLICKED))
#else
//go from mouse OVER state to CLICKED
if ((this->actState == BTN_STATE_OVER) && (newState == BTN_STATE_CLICKED))
#endif
{
    this->hasBeenDown = true;
    //trigger onDown
}
else
{
    this->hasBeenDown = false;
}

this->actState = newState;
```

The code I have shown is almost everything that handles the button control.

3. Checkbox

The second complex element is the checkbox. Its functionality is similar to a button's, but it has more states. I will not describe the state changes and their handling in as much detail as I did for the button - it is very similar, and you can learn from the button code and extend it. Plus, it would take up more space. Our checkbox has six states:
- non active - classic default state
- hovered - valid only for mouse control; indicates that the mouse pointer is over the element, but no mouse button is pressed. This state is not used in touch controls
- clicked - state right after it has been clicked => in the next "frame" the state will be checked
- checked - checkbox is checked.
We go to this state after the clicked state.
- checked + hovered - the checked state needs its own hover state. That makes sense, since the icon is usually also different
- checked + clicked - state right after it has been clicked in the checked state => the next "frame" state will be non active

You will need two different sets of textures, one for the unchecked and one for the checked states. As for triggers, you can use the same ones as for a button, plus two additional ones:
- onCheck - state of the checkbox has been changed to checked
- onUncheck - state of the checkbox has been changed to unchecked

"While" triggers can also be combined with the check state, like whileChecked. However, I don't see a real use for this at the moment.

3.1. Control logic

The control logic is, in its basic sense, similar to a button's. You only need to handle more states. If you are lazy, you can even discard the checkbox altogether and simulate its behavior with a simple button. You put code into the onClick trigger action that changes the button's textures: one set of textures for the non-checked states and a second set for the checked states, swapped whenever the state flips. This only affects the visual appearance of the element; you will have no special triggers like onCheck, but you can emulate those with button triggers and some temporary variables.

4. Text caption

Text caption is a very simple element. It has no specific texture, but contains words and letters. It's used only for small captions, so it can be added on top of a button to create a caption. If you need longer texts, you have to add some special functionality. This basic element is only for very simple texts (one line, no wrap if the text is too long etc.). More advanced text elements should support multiple lines, auto-wrap of text that is too long, padding, or anything else you can think of.

4.1. Control logic

Text caption has no control logic.
Its only purpose is to show you some text :-)

Discussion

In the second part of our "tutorial", I have covered the basic elements that you will need most of the time. Without those, no GUI can be complete. I have shown more details for the button, since a checkbox is very similar and can be emulated with a simple button and some temporary variables (and some magic, of course). If you think something could be done better or is not accurate, feel free to post a comment. In the attachment, you can download the source code (C++) for the described functionality. The code is not usable as-is, because of dependencies on the rest of my engine. In future parts, we will investigate rendering and some tips & tricks.

Article Update Log
22 May 2014: Added links to other parts of tutorial
10 May 2014: Added missing code, added description of checkbox emulation with button
9 May 2014: Initial release
If you are writing your own game, sooner or later you will need some sort of user interface (or graphical user interface = GUI). There are some existing libraries lying around. Probably the most famous is CEGUI, which is also easy to integrate into the OGRE 3D engine. Another good-looking system was LibRocket, but its development stopped in 2011 (you can still download and use it without any problem). There are a lot of discussion threads that deal with the eternal problem "What GUI should I use?". Not so long ago I faced the very same problem. I needed a library written in C++ that would be able to coexist with OpenGL and DirectX. Another requirement was support for classic (mouse) control as well as touch input. CEGUI seemed to be a good choice, but the problem is its complexity and not-so-great touch support (from what I was told). Most of the time, I just need a simple button, checkbox, text caption and "icon" or image. With those, you can build almost anything. As you may know, a GUI is not just what you can see. There is also functionality, like what happens if I click on a button, or if I move the mouse over an element, etc. If you choose to write your own system, you will also have to implement those, and take into account that some of them apply only to mouse control and some only to touch (like more than one touch at a time). Writing a GUI from scratch is not an easy thing to do. I am not going to create some kind of super-trooper complex system that handles everything. Our GUI will be used for static game menu controls and in-game menus. You can show scores, lives and other info with this system. An example of what can be created with this simple system can be seen in figures 1 and 2. Those are actual screenshots from engines and games I have created with it. Our GUI will be created in C++ and will work with either OpenGL or DirectX. We are not using any API-specific things.
Apart from plain C++, we will also need some libraries that make things easier for us. One of the best "libraries" (it's really just a single header file) - FastDelegate - will be used for function pointers (we will use these for triggers). For rendering fonts, I have decided to use the FreeType2 library. It's a little more complex, but it's worth it. That is pretty much all you need. If you want, you can also add support for various image formats (jpg, png...). For simplicity, I am using tga (my own parser) and png (via the not-so-fast but easy-to-use LodePNG library). Ok. You have read some opening sentences and it's time to dig into the details. You can also look at the following chapters:
- Part I - Positioning
- Part II - Control logic
- Part III - Rendering

Coordinate System and Positioning

The most important thing in every GUI system is the positioning of elements on your screen. Graphics APIs use screen-space coordinates in the range [-1, 1]. Well... that's fine, but not really convenient when we are designing a GUI. So how do we determine the position of an element? One option is an absolute system, where we set the real pixel coordinates of an element. Simple, but not really usable. If we change resolution, our entire beautifully-designed layout goes to hell. Hm... the second attempt is a relative system. That is much better, but still not 100%. Let's say we want some control elements at the bottom of the screen with a small offset. If relative coordinates are used, the offset will differ depending on resolution, which is not what we usually want. What I used in the end is a combination of both approaches: you can set a position in relative and absolute coordinates. That is somewhat similar to the solution used in CEGUI. The steps described above apply to the entire screen. During GUI design, however, very few elements are positioned against the screen directly. Most of the time, some kind of panel is created and other elements are placed on this panel. Why?
That's simple. If we move the panel, all elements on it move along with it. If we hide the panel, all elements hide with it. And so on. What I have written so far about positioning applies to this kind of situation as well. Again, a combination of relative and absolute positioning is used, but this time the relative starting point is not the [0,0] corner of the entire screen, but the [0,0] of our "panel". This [0,0] point already has some coordinates on screen, but those are not interesting for us. A picture is worth a thousand words, so here it is:

Figure 3: Position of elements. Black color represents the screen (main window) and the position of elements within it. Panel is positioned within Screen. The green element is inside Panel; its positions are within Panel.

That, along with hard offsets, is the main reason why I internally store every position in pixels and not in relative [0, 1] coordinates (or simply in percents). I calculate the pixel position once, and I don't need several recalculations of percents based on an element's parent and real pixel offsets. The disadvantage is that if the screen size changes, the entire GUI needs to be reloaded. But let's be honest, how often do we change the resolution of a graphics application? If we are using a graphics API (either OpenGL or DirectX), rendering is done in screen space and not in pixels. Screen space is similar to percents, but has the range [-1, 1]. Conversion to screen-space coordinates is done as the last step, just before uploading data to the GPU. The pipeline of transforming points to screen-space coordinates is shown in the following three equations. An input pixel is converted to a point in the range [0, 1] by simply dividing the pixel position by the screen resolution. From a point in [0, 1], we map it to [-1, 1].

pixel = [a, b]
point = [pixel(a) / width, pixel(b) / height]
screen_space = 2 * point - 1

If we want to use the GUI without a graphics API, let's say do it in Java, C#...
and draw elements into an image, you can simply stick with pixels.

Anchor System

All good? Good. Things will get a little more interesting from now on. A good practice in GUI design is to use anchors. If you have ever created a GUI, you know what anchors are: if you want your element to stick to some part of the screen no matter the screen size, anchors are the way to do it. I have decided to use a similar but slightly different system. Every element has its own origin. This can be one of the four corners (top left - TL, top right - TR, bottom left - BL, bottom right - BR) or its center - C. The position you enter is then relative to this origin. The default origin is TL.

Figure 4: Anchors of screen elements

Let's say you want your element to always stick to the bottom right corner of your screen. You could simulate this position with a TL origin and the element's size, but a better solution is to go the other way: position your element in a system with a changed origin and convert it to the TL origin later (see code). This has one advantage: the user definition of the GUI stays unified (see the XML snippet) and it is easier to maintain.

All In One

In the following code, you can see the full calculation and transformation from user input (e.g., from the above XML) into the internal element coordinate system, which uses pixels. First, we calculate the pixel position of the corner provided by the GUI user. We also need to calculate the element's width and height (element proportions will be discussed in the next section). For this, we need the proportions of the parent - its size and the pixel coordinates of its TL corner.
float x = parentProportions.topLeft.X;
x += pos.x * parentProportions.width;
x += pos.offsetX;

float y = parentProportions.topLeft.Y;
y += pos.y * parentProportions.height;
y += pos.offsetY;

float w = parentProportions.width;
w *= dim.w;
w += dim.pixelW;

float h = parentProportions.height;
h *= dim.h;
h += dim.pixelH;

For now, we have calculated the pixel position of our reference corner. However, the internal storage of our system must be unified, so everything is converted to a system with [0,0] in the top left corner.

//change position based on origin
if (pos.origin == TL)
{
    //do nothing - top left is default
}
else if (pos.origin == TR)
{
    x = parentProportions.botRight.X - (x - parentProportions.topLeft.X); //swap x coordinate
    x -= w; //put x back to top left
}
else if (pos.origin == BL)
{
    y = parentProportions.botRight.Y - (y - parentProportions.topLeft.Y); //swap y coordinate
    y -= h; //put y back to top left
}
else if (pos.origin == BR)
{
    x = parentProportions.botRight.X - (x - parentProportions.topLeft.X); //swap x coordinate
    y = parentProportions.botRight.Y - (y - parentProportions.topLeft.Y); //swap y coordinate
    x -= w; //put x back to top left
    y -= h; //put y back to top left
}
else if (pos.origin == C)
{
    //calculate center of parent element
    x = x + (parentProportions.botRight.X - parentProportions.topLeft.X) * 0.5f;
    y = y + (parentProportions.botRight.Y - parentProportions.topLeft.Y) * 0.5f;
    x -= (w * 0.5f); //put x back to top left
    y -= (h * 0.5f); //put y back to top left
}

//this may overflow from the parent element's proportions
proportions.topLeft = MyMath::Vector2(x, y);
proportions.botRight = MyMath::Vector2(x + w, y + h);
proportions.width = w;
proportions.height = h;

With the above code, you can easily position elements in each corner of a parent element with almost the same user code. We use float instead of int for pixel coordinates. This is OK, because at the end we transform them to screen space coordinates anyway.
Proportions

Once we have established the position of an element, we also need to know its size. As you may remember, we already needed proportions for calculating the element's position, but now we will discuss the topic a bit more. Proportions are very similar to positioning: we again use relative and absolute measures. Relative numbers give us the size in percents of the parent, and the pixel offset is, well, a pixel offset. We must keep one important thing in mind - the aspect ratio (AR). We want our elements to keep it at all times. It would not be nice if our icon looked correct on one system and deformed on another. We can prevent this by specifying only one dimension (width or height) and the relevant aspect ratio for that dimension. See the difference in the example below:

a) - create element of size 10% of parent W
b) - create element of size 10% of parent W and H

Both of them will create an element with the same width. Choice a) will always have the correct AR, while choice b) will always have the same size relative to its parent element.

While working with relative sizes, it is also a good idea to set some kind of maximal element size in pixels. We want some elements to be as big as possible on small screens, but it's not necessary to have them oversized on big screens. A typical example is a phone versus a tablet. There is no need for an element to be extremely big (e.g., occupy 100x100 pixels) on a tablet; 50x50 can be enough. But on smaller screens, it should take as much space as possible, according to the relative size from the user input.

Fonts

Special care must be taken with fonts. Positioning and proportions differ a little from classic GUI elements. First of all, for font positioning it is often good to put the origin in the center. That way, we can very easily center text inside parent elements, for example buttons. As mentioned before, to recalculate the position from the used system into a system with a TL origin, we need to know the element's size.
Figure 5: Origin in center of parent element for centered font positioning

That is the tricky part. When dealing with text, we set only the height; the width will depend on various factors - the used font, the font size, the printed text etc. Of course, we could calculate the size manually and use it later, but that is not correct. During runtime, the text can change (for instance, the score in our game) and what then? A better approach is to recalculate the position whenever the text changes (change of text, font, font size etc.).

As I mentioned, for fonts I am using the FreeType library. This library can generate a single image for each character of the used font. It doesn't matter if we have pregenerated those images into font atlas textures or if we create them on the fly. To calculate the proportions of text, we don't really need the actual images, only their sizes. The problem is the size of the whole text we want to display. This must be calculated by iterating over every character and accumulating the proportions and spaces for each of them. We must also take care of new lines.

There is one thing you need to count on when dealing with text. See the image and its caption below. Someone may think the answer to the "Why?" question is obvious, but I didn't realize it at design and coding time, and it caused me a lot of headaches.

Figure 6: Text rendered with settings: origin is TL (top left), height is set to be 100% of parent height. You may notice the text is not filling the whole area. Why?

The answer is really simple. There are diacritic marks in the text that are counted in the total size. There should also be space for descenders, but they are not used by the capitals in the font I have used. Everything you need to take care of can be seen in this picture:

Figure 7: Font metrics

Discussion

This will be all for now. I have described the basics of positioning and sizing of GUI elements that I have used in my design. There are probably better or more complex ways to do it.
The one used here is easy, and I have not run into any problems using it. I have written a simple C# application to speed up GUI design. It uses the basics described here (but no fonts). You can place elements, change their size and image, and drag them around to see their positions. You can download the source of the application and try it for yourself, but take it as an "alpha" version; I wrote it for fast prototyping during one evening. In future parts (don't worry, I have already written them and am only doing the finishing touches) I will focus on the basic types of elements, controls and rendering. That is, after all, one of the most important parts of a GUI :-)

Article Update Log
22 May 2014: Added links to following parts
3 May 2014: Initial release
I have been writing a DirectX / OpenGL rendering engine recently. As you may know, DirectX is by default associated with a left-handed coordinate system (LH) and OpenGL with a right-handed system (RH). You can compare both of them in the article title image to the right. You can also look at those two systems in another way: if you want to look in the positive direction, in LH you have Y as the UP axis, and in RH you have Z as the UP axis. If you don't see it, rotate the RH system in the image. Today, in the era of shaders, you can use either convention with both APIs, but you need to take care of a few things.

I have calculated both versions of the matrices for both systems. I am tired of remembering everything and/or calculating it all over again, so I have created this document, where I summarize the needed combinations and some tips & tricks. This is not meant to be a tutorial on "How projection works" or "Where those values come from". It is for people who are tired of looking up how to convert one system to another, or one API to another. Or it is for those who don't care "why", but are happy to copy & paste my equations (however, don't blame me if something is wrong).

The RH system has become a kind of standard in computer graphics. However, for my personal purposes, the LH system seems more logical to visualise. In my engine, I wanted to leave the decision to the user, so in the end my system supports both orientations.

If we look more closely at DirectX and OpenGL, we can see one important difference in the projection. No matter whether we use an LH or RH system, in DirectX depth is mapped to the interval [0, 1], while in OpenGL to [-1, 1]. What does that mean? If we take the near clipping plane of a camera, it will always be mapped to 0 in DirectX, but in OpenGL it is more complicated. For an LH system, near will map to 1, but for RH it will become -1 (see graphs 5 and 6 in a later section).
Of course, we can use the DirectX mapping in OpenGL (but not the other way around); in that case, however, we are throwing away half of the depth buffer precision. We will discuss this more closely in the following sections. Personally, I think that whoever invented the OpenGL depth coordinates must have had a twisted sense of humour. DirectX's solution is far better and easier to understand.

The matrix order used in this article is row based. All operations are done in the order vector · matrix (as we can see in (1)), with indexing from (2).

(1) (2)

For a column based matrix, the order of operations is reversed - matrix · vector (as we can see in (3)). You also need to transpose the elements of the matrix, as you can see from the example.

(3)

In the times of the fixed function pipeline, this was more problematic than today. In the era of shaders, we can use whatever system and layout we want, and just change the order of operations or read values from different positions in the matrices.

World to View transformation

In every transformation pipeline, we first need to transform geometry from world coordinates to view (camera) space. After that, we can do the projection transformation. The view matrix must use the same system as the final projection, so it must be LH or RH. This section is mentioned only for completeness, so you know how to transform a point. There will be no additional details on the view transformation. The view matrix has the same layout for both systems (4)

(4)

The differences are in the base vectors and in the calculation of the last row elements. You can see them in table 1.
        LH                   RH
look    |wLook - eye|        |eye - wLook|
right   |wUp x look|         |wUp x look|
up      |look x right|       |look x right|
A       -dot(right, eye)     dot(right, eye)
B       -dot(up, eye)        dot(up, eye)
C       -dot(look, eye)      dot(look, eye)

Table 1: View vector calculation. wLook is the camera lookAt target, eye is the camera position and wUp is the camera up vector - usually [0,1,0]. "x" stands for a vector (cross) product.

Perspective projection

For "3D world" rendering, you will probably use a perspective projection. Most of the time (in maybe 90% of cases) you will need only a simplified perspective matrix, with a symmetric viewing volume. The pattern for such a projection matrix can be seen in (5). As you can see, this pattern is symmetric. For column and row major matrices, this simplified pattern is the same, but the values of D and E are transposed. Be aware of this; it can cause some headaches if you do it the other way and don't notice it.

(5)

Now, how does the projection work? We have input data in view space coordinates, and from those we need to map points onto our screen. Since our screen is 2D (even if we have a so-called 3D display), we need to map each point onto it. We take a simple example:

(6) (7)

where x, y, z, w is an input point (w is the homogenous coordinate; if we want to "debug" on paper, the best way is to choose this value as 1.0). The division by (D · z) is performed automatically after the vertex shader stage. From equations (6) we get the coordinates of a point on the 2D screen. You may see that those values are not pixel coordinates (like [756, 653]); they are in the range [-1, 1] for both axes (in DirectX as well as in OpenGL). From equation (7) we get the depth of the pixel, in the range [0, 1] for DirectX and [-1, 1] for OpenGL.
This value is used in the depth buffer for recognizing closer and more distant objects. Later on, we will show what the depth values look like. Those +1 / -1 values that you obtain after the projection are known as normalized device coordinates (NDC). They form a cube where the X and Y axes are in the interval [-1, 1] for both DirectX and OpenGL. The Z axis is more tricky: for DirectX, you have the interval [0, 1], and for OpenGL [-1, 1] (see figure 2). As you can see, NDC is an LH system, no matter what input system you have chosen. Everything inside this cube is visible on the screen. The screen is taken as the cube face at Z = 0 (DirectX), Z = 1 (OpenGL LH) or Z = -1 (OpenGL RH). What you see on your screen is basically the content of the NDC cube pressed onto a single plane.

Figure 2: OpenGL (left) and DirectX (right) NDC

We summarize the computations for the LH / RH systems and for DirectX and OpenGL in two different tables. Those values differ for LH / RH systems and, of course, for the API used. In the following sections, you can spot the differences. If you are interested in where those values come from, look elsewhere (for example, the OpenGL matrices are explained here: Link). There are plenty of resources, and it would be pointless to go over it all again here.

DirectX

Table 2: Projection matrix calculation for DirectX. Input parameters are: fovY - field of view in the Y direction, AR - aspect ratio of the screen, n - Z-value of the near clipping plane, f - Z-value of the far clipping plane

Changing only the values in the projection matrix won't work as expected. If we render the same scene with the same DirectX device settings, we end up with flipped scene geometry for one of those matrices. This is caused by the depth comparison in the depth buffer. Changing these settings takes a little more code in DirectX than in OpenGL. You need to call the functions in code snippet 1 with the values from table 3.

deviceContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
....
depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
device->CreateDepthStencilState(&depthStencilDesc, &depthStencilState);
deviceContext->OMSetDepthStencilState(depthStencilState, 1);

Code 1: Code snippet settings for LH DirectX rendering

                            LH                             RH
D3D11_CLEAR_DEPTH           1.0                            0.0
depthStencilDesc.DepthFunc  D3D11_COMPARISON_LESS_EQUAL    D3D11_COMPARISON_GREATER_EQUAL

Table 3: DirectX settings for both systems

OpenGL

Table 4: Projection matrix calculation for OpenGL. Input parameters are: fovY - field of view in the Y direction, AR - aspect ratio of the screen, n - Z-value of the near clipping plane, f - Z-value of the far clipping plane

Again, changing only the values in the projection matrix won't work as expected. If we render the same scene with the same OpenGL device settings, we end up with flipped scene geometry for one of those matrices. This is caused by the depth comparison in the depth buffer. We need to change two things, as shown in table 5.

LH                       RH
glClearDepth(0)          glClearDepth(1)
glDepthFunc(GL_GEQUAL)   glDepthFunc(GL_LEQUAL)

Table 5: OpenGL settings for both systems

Conclusion

If you set the depth comparison and depth buffer clear values incorrectly, most of the time you will end up with a result like the one in figure 3. The correct scene should look like figure 4.

Figure 3: Incorrectly set depth function and clear value for the current projection
Figure 4: Correctly set depth function and clear value for the current projection

Using equation (7), we can calculate the projected depth for any input value. If we do this for values in the interval [near, far], we get the following results (see figures 5 and 6). Notice the x-axis of the second graph. For the RH system, we need to change the sign of near to -near in order to obtain the same results as for the LH system.
In plain language, that means that for LH we are looking in the positive Z direction and for RH in the negative Z direction. In both cases, the viewer is located at the origin.

Figure 5: Projected depth with the DirectX and OpenGL LH matrices (values used for the calculation: near = 0.1, far = 1.0)
Figure 6: Projected depth with the DirectX and OpenGL RH matrices (values used for the calculation: near = -0.1, far = -1.0)

From the above graphs, we can see that for distances near the camera there is good precision in the depth buffer. On the other hand, for larger distances the precision is limited. That is not always desirable. One possible solution is to keep your near and far distances as close together as possible. There will be fewer problems if you use the interval [0.1, 10] instead of [0.1, 100]. This is not always possible if we want to render large 3D world environments. This issue can, however, be solved, as we show in the next section.

Depth precision

As mentioned before, using a classic perspective projection gives us limited depth precision. The bigger the distance from the viewer, the lower the precision. This problem is often noticeable as flickering pixels in the distance. We can partially solve this with a logarithmic depth. We decrease the precision for the near surroundings, but we get an almost linear distribution throughout the depth range. One disadvantage is that the logarithm is not defined for negative input. Triangles that are partially visible and have some points behind the viewer (on the negative Z axis) won't be calculated correctly. Shader programs usually won't crash on a negative logarithm, but the result is undefined. There are two possible solutions for this problem: you either tessellate your scene into triangles so small that the problem won't matter, or you write the depth in a pixel shader. Writing depth in a pixel shader brings the disadvantage of disabling early depth testing for the geometry before rasterization.
There can be some performance impact, but you can limit it by doing this trick only for the near geometry that could be affected. That way, you will need a condition in your shader, or different shaders based on the geometry's distance from the viewer. If you use this modification, be aware of one thing: the depth from the vertex shader has the range [-1, 1], but gl_FragDepth has the range [0, 1]. This is again OpenGL-only, since DirectX has depth in [0, 1] all the time. For a more detailed explanation, you can read the excellent article on the Outerra blog (Link). The equations in their solution use an RH system (they aimed primarily at OpenGL). So, once again, we show the same equation for the LH and RH systems. Both versions are in table 6; this time only for OpenGL, since in DirectX the problem can be solved, as proposed in the article, by swapping near and far.

LH: gl_Position.z = (-2.0) * log((-gl_Position.z) * C + 1.0) / log(far * C + 1.0) + 1.0
RH: gl_Position.z = (2.0) * log((gl_Position.z) * C + 1.0) / log(far * C + 1.0) - 1.0

Table 6: Calculation of the new Z coordinate for the depth using log. C is the linearization constant (default value 1.0), far is the camera far plane distance, and gl_Position is the output value of the vertex shader (in perspective projection). You MUST remember to multiply gl_Position.z by gl_Position.w before returning it from the shader.

If you have read the Outerra article and looked at my equations, you may notice that I used gl_Position.z in the logarithm calculations instead of W. I don't know if it is a mistake on Outerra's side, but with W I got nearly the same results for the RH system (as with Z), while LH was totally messed up. Plus, W is already the linearized depth (the distance of the point from the viewer), so the first visible point has W = near and the last one has W = far. If we plot the classic vs. the logarithmic depth using the equations from table 6, we end up with the two following graphs. The red curve is the same as in the previous chapter; the green one is our logarithmic depth.
Figure 7: Projected depth with the classic perspective and with the logarithmic one in LH (values used for the calculation: near = 0.1, far = 1.0, C = 1.0)
Figure 8: Projected depth with the classic perspective and with the logarithmic one in RH (values used for the calculation: near = 0.1, far = 1.0, C = 1.0)

You can observe the effect of both projections (the classic and the logarithmic one) in this video (rendered with an LH projection in OpenGL):

Oblique projection

The last projection-related section will be a little different. So far, we have discussed the perspective projection and its depth precision. In this section, another important technique will be converted to the LH and RH systems and to OpenGL / DirectX. An oblique projection is not some kind of special projection that makes everything shiny. It is the classic perspective projection, only with a modified near clipping plane. The clipping planes of a classic projection are near and far; here, we replace near to get a different effect. This kind of projection is mostly used for rendering water reflection textures. Of course, we can set a user clipping plane manually in OpenGL or DirectX, but that won't work in the mobile version (OpenGL ES) or the web version (WebGL), and in DirectX we would need a different set of shaders. Bottom line: the solution with a user clipping plane is possible, but not as clean as an oblique projection.

First, we need to precompute some data. For clipping, we obviously need a clipping plane, and we need it in our current projective space coordinates. This can be achieved by transforming the plane vector with the transposed inverse of the view matrix (we assume that the world matrix is the identity).

Matrix4x4 tmp = Matrix4x4::Invert(viewMatrix);
tmp.Transpose();
Vector4 clipPlane = Vector4::Transform(clipPlane, tmp);

Now we calculate the clip-space corner point opposite the clipping plane:

float xSign = (clipPlane.X > 0) ? 1.0f : ((clipPlane.X < 0) ? -1.0f : 0.0f);
float ySign = (clipPlane.Y > 0) ? 1.0f : ((clipPlane.Y < 0) ?
-1.0f : 0.0f);
Vector4 q = Vector4(xSign, ySign, 1.0f, 1.0f);

Then we transform q into camera space by multiplying it with the inverse of the projection matrix. In the simplified calculations below, the inverse projection matrix has already been applied.

DirectX

In DirectX, we need to be careful, because the original article uses the OpenGL projection space with the Z coordinate in the range [-1, 1]. This is not possible in DirectX, so we need to change the equations and recalculate them with Z in the range [0, 1]. The following solution is valid for the LH system:

q.X = q.X / projection[0][0];
q.Y = q.Y / projection[1][1];
q.Z = 1.0f;
q.W = (1.0f - projection[2][2]) / projection[3][2];

float a = q.Z / Vector4::Dot(clipPlane, q);
Vector4 m3 = a * clipPlane;

OpenGL

The following equations can be simplified if we know the handedness of our system. Since we want a universal solution, I have used the full representation, which is independent of the used system.

q.X = q.X / projection[0][0];
q.Y = q.Y / projection[1][1];
q.Z = 1.0f / projection[2][3];
q.W = (1.0f / projection[3][2]) - (projection[2][2] / (projection[2][3] * projection[3][2]));

float a = (2.0f * projection[2][3] * q.Z) / Vector4::Dot(clipPlane, q);
Vector4 m3 = clipPlane * a;
m3.Z = m3.Z + 1.0f;

In the calculation of m3.Z, we can directly add the value +1.0. If we write separate equations for the LH and RH systems, we can see why:

LH: m3.Z = m3.Z + projection[2][3]; //([2][3] = +1)
RH: m3.Z = m3.Z - projection[2][3]; //([2][3] = -1)

Final matrix composition

The final composition of the projection matrix is easy: replace the third column with our calculated vector.

Matrix4x4 res = projection;
res[0][2] = m3.X;
res[1][2] = m3.Y;
res[2][2] = m3.Z;
res[3][2] = m3.W;

Attachment

I have added an Excel file with the projection matrices. You can experiment for yourself by changing near and far, or any other parameters, and see the differences in depth. This is the same file I used for creating the posted graphs.
References

OpenGL Projection Matrix - http://www.songho.ca/opengl/gl_projectionmatrix.html
Logarithmic Depth Buffer - http://outerra.blogspot.sk/2009/08/logarithmic-z-buffer.html
http://www.terathon.com/gdc07_lengyel.pdf
DX9 LH - Perspective matrix layout and calculation - http://msdn.microsoft.com/en-us/library/microsoft.directx_sdk.matrix.xmmatrixperspectivefovlh%28v=vs.85%29.aspx
A New Perspective on Viewing - http://www.codeproject.com/Articles/42848/A-New-Perspective-on-Viewing/
Lengyel, Eric. "Oblique View Frustum Depth Projection and Clipping". Journal of Game Development, Vol. 1, No. 2 (2005), Charles River Media, pp. 5-16. (http://www.terathon.com/code/oblique.html)