Moving Beyond OpenGL 1.1 for Windows

Graphics and GPU Programming

Author's note: Keep in mind that although this article is being published on GameDev.net in April 2003, it was written in April 2002, and some things have changed in the past year, notably the release of the OpenGL 1.4 specification, and the standardization of pixel and vertex shaders. I've chosen not to update the article to reflect these changes because I wanted to keep the text consistent with what was published in the book, and because they really make no difference as far as the purpose of the article is concerned.

This article originally appeared in the book Game Programming Tricks of the Trade, 2002, Premier Press. Many members of the GameDev.net community, including several GDNet staff members, contributed to the book, so you're encouraged to check it out.

[size="5"]Trick #10 from Game Programming Tricks of the Trade, Premier Press

Once you've been programming with OpenGL for Windows for a while, you'll probably notice something: the headers and libraries you're using are old. Dig around in the gl.h header, and you'll see this:

#define GL_VERSION_1_1 1
This means that you're using OpenGL 1.1, which was released in 1996. In the world of graphics, that's ancient! If you've been paying attention, you know that the current OpenGL specification is at 1.3 (at least at the time of this writing). OpenGL 1.4 should be released later this year, with 2.0 following soon after. Obviously, you need to update your OpenGL headers and libraries to something more recent.

As it turns out, the most recent headers and libraries for Windows correspond to ... OpenGL 1.1. That's right, the files you already have are the most recent ones available.

[size="5"]What You Will Learn

• Explain in greater detail why you need to take some extra steps to use anything beyond OpenGL 1.1.
• Explain OpenGL's extension mechanism, and how it can be used to access OpenGL 1.2 and 1.3 functionality.
• Give you an overview of the new options available in OpenGL 1.2 and 1.3, as well as a look at some of the most useful extensions.
• Give you some tips for using extensions while ensuring that your game will run well on a wide range of systems.
• Provide a demo showing you how to use the techniques described.
[size="5"]The Problem

If you're new to OpenGL or have only ever needed the functionality offered in OpenGL 1.1, you may be confused about what the problem is, so let's clarify.

To develop for a given version of OpenGL on Windows, you need three things. First, you need a set of libraries (e.g. opengl32.lib and possibly others such as glu32.lib) and headers (e.g. gl.h, and so on) corresponding to the version you'd like to use. These headers and libraries contain the OpenGL functions, constants, and other things you need to be able to compile and link an OpenGL application. Second, the system you intend to run the application on needs to have an OpenGL dynamic link library (OpenGL32.dll), or OpenGL runtime library. The runtime needs to be for either the same or a more recent version of OpenGL as the headers and libraries you're using. Ideally, you will also have a third component, called an Installable Client Driver (ICD). An ICD is provided by the video card drivers to allow for hardware acceleration of OpenGL features, as well as possible enhancements provided by the graphics vendor.

So, let's look at these three things and see why you have to jump through a few hoops to use anything newer than OpenGL 1.1:
• Headers and libraries. As I mentioned in the introduction, the latest version of the OpenGL headers and libraries available from Microsoft correspond to version 1.1. If you look around on the Internet, you may come across another OpenGL implementation for Windows created by Silicon Graphics. SGI's implementation also corresponds to OpenGL 1.1. Unfortunately, this implementation is no longer supported by SGI. In addition, the Microsoft implementation is based upon it, so you really gain nothing by using it. Where does that leave us?

Well, there is reason to hope that someone will release up-to-date libraries. Although, to my knowledge, no one has committed to doing so, several parties have discussed it. Microsoft is the obvious candidate, and despite years of promising and not delivering, it appears that they have taken an interest in the recently proposed OpenGL 2.0. Whether or not that interest will lead to action remains to be seen, but given the large number of graphics workstations running Windows NT and Windows 2000, it's not beyond the realm of possibility.

Besides Microsoft, there has apparently been discussion among the members of OpenGL's Architectural Review Board (ARB) to provide their own implementation of the headers and libraries. At present, though, this is still in the discussion stage, so it may be a while before we see anything come of it.
• The runtime. Most versions of Windows (the first release of Windows 95 being the exception) come with a 1.1 runtime. Fortunately, this isn't really as important as the other elements. All that the runtime does is guarantee a baseline level of functionality, and allow you to interface with the ICD.
• The ICD. This is the one area where you're okay. Most hardware vendors (including NVIDIA and ATI) have been keeping up with the latest OpenGL standard. For them to be able to advertise that their drivers are compliant with the OpenGL 1.3 standard, they have to support everything included in the 1.3 specification (though not necessarily in hardware). The cool thing about this is that the ICD contains the code to do everything in newer versions of OpenGL, and you can take advantage of that. The thing that's important to note here is that although the headers and libraries available don't directly allow you to access newer OpenGL features, the features do exist in the video card drivers. You just need to find a way to access those features in your code. You do that by using OpenGL's extension mechanism.

[size="5"]OpenGL Extensions

As you're aware, the graphics industry has been moving at an alarmingly rapid pace for many years now. Today, consumer-level video cards include features that were only available on professional video cards (costing thousands of dollars) a few years ago. Any viable graphics API has to take these advances into account, and provide some means to keep up with them. OpenGL does this through extensions.

If a graphics vendor adds a new hardware feature that they want OpenGL programmers to be able to take advantage of, they simply need to add support for it in their ICD, and then provide developers with documentation about how to use the extension. This is oversimplifying a bit, but it's close enough for our purposes. As an OpenGL programmer, you can then access the extension through a common interface shared by all extensions. You'll learn how to do that in the "Using Extensions" section, but for now, let's look at how extensions are identified, and what they consist of.

[size="3"]Extension Names

Every OpenGL extension has a name by which it can be precisely and uniquely identified. This is important, because hardware vendors will frequently introduce extensions with similar functionality but very different semantics and usage. You need to be able to distinguish between them. For example, both NVIDIA and ATI provide extensions for programmable vertex and pixel shaders, but they bear little resemblance to each other. So, if you wanted to use pixel shaders in your program, it wouldn't be enough to find out if the hardware supported pixel shaders. You'd have to be able to specifically ask whether NVIDIA's or ATI's version is supported, and handle each appropriately.

All OpenGL extensions use the following naming convention:

PREFIX_extension_name
The "PREFIX" is there to help avoid naming conflicts. It also helps identify the developer of the extension or, as in the case of EXT and ARB, its level of promotion. Table 1 lists most of the prefixes currently in use. The "extension_name" identifies the extension. Note that the name cannot contain any spaces. Some example extension names are [font="Courier New"][color="#000080"]ARB_multitexture[/color][/font], [font="Courier New"][color="#000080"]EXT_bgra[/color][/font], [font="Courier New"][color="#000080"]NV_vertex_program[/color][/font], and [font="Courier New"][color="#000080"]ATI_fragment_shader[/color][/font].

Table 1 - OpenGL Extension Prefixes

ARB - Extension approved by OpenGL's Architectural Review Board (first introduced with OpenGL 1.2)
EXT - Extension agreed upon by more than one OpenGL vendor
3DFX - 3dfx Interactive
APPLE - Apple Computer
ATI - ATI Technologies
ATIX - ATI Technologies (experimental)
HP - Hewlett-Packard
INTEL - Intel Corporation
IBM - International Business Machines
KTX - Kinetix
NV - NVIDIA Corporation
MESA - http://www.mesa3d.org
OML - OpenML
SGI - Silicon Graphics
SGIS - Silicon Graphics (specialized)
SGIX - Silicon Graphics (experimental)
SUN - Sun Microsystems
SUNX - Sun Microsystems (experimental)
WIN - Microsoft
[bquote]Caution: Some extensions share a name, but have a different prefix. These extensions are generally not interchangeable, as they may use entirely different semantics. For example, [font="Courier New"][color="#000080"]ARB_texture_env_combine[/color][/font] is not the same thing as [font="Courier New"][color="#000080"]EXT_texture_env_combine[/color][/font]. Rather than making assumptions, be sure to consult the extension specifications when you're unsure.[/bquote]
[size="3"]What an Extension Includes

You now know what an extension is, and how extensions are named. Next, let's turn our attention to the relevant components of an extension. There are four parts of an extension that you need to deal with.

Name Strings
Each extension defines a name string, which you can use to determine whether or not the OpenGL implementation supports it. By passing [font="Courier New"][color="#000080"]GL_EXTENSIONS[/color][/font] to the [font="Courier New"][color="#000080"]glGetString()[/color][/font] function, you can get a space-delimited buffer containing all the extension name strings supported by the implementation.

Name strings are generally the name of the extension preceded by another prefix. For core OpenGL name strings, this is always GL_ (e.g. GL_EXT_texture_compression). When the name string is tied to a particular windowing system, the prefix will reflect which system that is (e.g. Win32 uses WGL_).
[bquote]Some extensions may define more than one name string. This would be the case if the extension provided both core OpenGL functionality and functionality specific to the windowing system.[/bquote]
Functions
Many (but not all) extensions introduce one or more new functions to OpenGL. To use these functions, you'll have to obtain their entry point, which requires that you know the name of the function. This process is described in detail in the "Using Extensions" section.

The functions defined by the extension follow the naming convention used by the rest of OpenGL, namely [font="Courier New"][color="#000080"]glFunctionName()[/color][/font], with the addition of a suffix using the same letters as the extension name's prefix. For example, the [font="Courier New"][color="#000080"]NV_fence[/color][/font] extension includes the functions [font="Courier New"][color="#000080"]glGetFencesNV()[/color][/font], [font="Courier New"][color="#000080"]glSetFenceNV()[/color][/font], [font="Courier New"][color="#000080"]glTestFenceNV()[/color][/font], and so on.

Enumerants
An extension may define one or more enumerants. In some extensions, these enumerants are intended for use in the new functions defined by the extension (which may be able to use existing enumerants as well). In other cases, they are intended for use in standard OpenGL functions, thereby adding new options to them. For example, the [font="Courier New"][color="#000080"]ARB_texture_env_add[/color][/font] extension defines a new enumerant, [font="Courier New"][color="#000080"]GL_ADD[/color][/font]. This enumerant can be passed as the [font="Courier New"][color="#000080"]params[/color][/font] parameter of the various [font="Courier New"][color="#000080"]glTexEnv()[/color][/font] functions when the pname parameter is [font="Courier New"][color="#000080"]GL_TEXTURE_ENV_MODE[/color][/font].

The new enumerants follow the normal OpenGL naming convention (i.e. [font="Courier New"][color="#000080"]GL_WHATEVER[/color][/font]), except that they are suffixed by the letters used in the extension name's prefix, such as [font="Courier New"][color="#000080"]GL_VERTEX_SOURCE_ATI[/color][/font].

Using new enumerants is much simpler than using new functions. Usually, you will just need to include a header defining the enumerant, which you can get from your hardware vendor or from SGI. Alternatively, you can define the enumerant yourself if you know the integer value it uses. This value can be obtained from the extension's documentation.
[bquote]Extensions don't need to define both functions and enumerants (though many do), but they usually include at least one of the two. The few cases where that's not true are when existing functions or enumerants are combined in new ways.[/bquote]
Dependencies
Very few extensions stand completely alone. Some require the presence of other extensions, while others take this a step further and modify or extend the usage of other extensions. When you begin using a new extension, you need to be sure to read the specification and understand the extension's dependencies.

Speaking of documentation, you're probably wondering where you can get it, so let's talk about that next.

[size="3"]Extension Documentation

Although vendors may (and usually do) provide documentation for their extensions in many forms, there is one piece of documentation that is absolutely essential: the specification. Specifications are generally written as plain text files, and include a broad range of information about the extension, such as its name, version, number, dependencies, new functions and enumerants, issues, and modifications/additions to the OpenGL specification.

The specifications are intended for use by developers of OpenGL hardware or ICDs, and as such, are of limited use to game developers. They'll tell you what the extension does, but not why you'd want to use it, or how to use it. For that reason, I'm not going to go over the details of the specification format. If you're interested, Mark Kilgard has written an excellent article about it which you can read at OpenGL.org. [[alink='ref']1[/alink]]

As new extensions are released, their specifications are listed in the OpenGL Extension Registry, which you can find at the following URL:

http://oss.sgi.com/p...ample/registry/

This registry is updated regularly, so it's a great way to keep up with the newest additions to OpenGL.

For more detailed descriptions of new extensions, your best bet is the websites of the leading hardware vendors. In particular, NVIDIA [[alink='ref']2[/alink]] and ATI [[alink='ref']3[/alink]] both provide a wealth of information, including white papers, Power Point presentations, and demos.
[bquote]Extensions that are promoted to be a part of the core OpenGL specification may be removed from the extension registry. To obtain information about these, you'll have to refer to the latest OpenGL specification. [[alink='ref']4[/alink]][/bquote]

[size="5"]Using Extensions

Finally, it's time to learn what you need to do to use an extension. In general, there are only a couple of steps you need to take:
• determine whether or not the extension is supported
• obtain the entry point of any of the extension's functions that you want to use
• define any enumerants you're going to use

Let's look at each of these steps in greater detail.
[bquote]Caution: Before checking for extension availability and obtaining pointers to functions, you MUST have a current rendering context. In addition, the entry points are specific to each rendering context, so if you're using more than one, you'll have to obtain a separate entry point for each.[/bquote]
[size="3"]Querying the Name String

In order to find out whether or not a specific extension is available, first get the list of all the name strings supported by the OpenGL implementation. To do this, you just need to call [font="Courier New"][color="#000080"]glGetString()[/color][/font] using [font="Courier New"][color="#000080"]GL_EXTENSIONS[/color][/font], like so:

char* extensionsList = (char*) glGetString(GL_EXTENSIONS);
After this executes, [font="Courier New"][color="#000080"]extensionsList[/color][/font] points to a null-terminated buffer containing the name strings of all the extensions available to you. These name strings are separated by spaces, including a space after the last name string.
[bquote]I'm casting the value returned by [font="Courier New"][color="#000080"]glGetString()[/color][/font] because the function actually returns an array of unsigned chars. Since most of the string manipulation functions I'll be using require signed chars, I do the cast once now instead of doing it many times later.[/bquote]
To find out whether or not the extension you're looking for is supported, you'll need to search this buffer to see if it includes the extension's name string. I'm not going to go into great detail about how to parse the buffer, since there are many ways to do so, and it's something that at this stage in your programming career, you should be able to do without much effort. One thing you need to watch out for, though, is accidentally matching a substring. For example, if you're trying to use the [font="Courier New"][color="#000080"]EXT_texture_env[/color][/font] extension, and the implementation doesn't support it, but it does support [font="Courier New"][color="#000080"]EXT_texture_env_dot3[/color][/font], then calling something like:

strstr(extensionsList, "GL_EXT_texture_env");
is going to give you positive results, making you think that the [font="Courier New"][color="#000080"]EXT_texture_env[/color][/font] extension is supported, when it's really not. The [font="Courier New"][color="#000080"]CheckExtension()[/color][/font] function in the demo program included with this article shows one way to avoid this problem.
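A minimal version of such a check might look like the following sketch (not the demo's exact code). The idea is to accept a candidate match only when it is bounded by spaces or the ends of the buffer:

```c
#include <string.h>

/* Return 1 if `name` appears as a complete, space-delimited token in
   `extensions` (the buffer from glGetString(GL_EXTENSIONS)), 0 otherwise. */
int CheckExtension(const char *name, const char *extensions)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL)
    {
        /* Match only if bounded by the buffer's start/end or spaces, so
           EXT_texture_env doesn't match inside EXT_texture_env_dot3. */
        int startsOk = (p == extensions) || (p[-1] == ' ');
        int endsOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startsOk && endsOk)
            return 1;
        p += len;
    }
    return 0;
}
```

With this, asking for "GL_EXT_texture_env" correctly fails when only "GL_EXT_texture_env_dot3" is present.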

[size="3"]Obtaining the Function's Entry Point

Because of the way Microsoft handles its OpenGL implementation, calling a new function provided by an extension requires that you request a function pointer to the entry point from the ICD. This isn't as bad as it sounds.

First of all, you need to declare a function pointer. If you've worked with function pointers before, you know that they can be pretty ugly. If not, here's an example:

void (APIENTRY * pglCopyTexSubImage3DEXT) (GLenum, GLint, GLint, GLint, GLint, GLint, GLint, GLsizei, GLsizei) = NULL;
[bquote]Update 4/24/03: For the book, and initially here, I used the function name (i.e. glCopyTexSubImage3DEXT) as the pointer name. A reader pointed out to me that on a number of operating systems (e.g. Linux) this can cause serious problems, so it should be avoided. Thanks, Ian![/bquote]
Now that we have the function pointer, we can attempt to assign an entry point to it. This is done using the function [font="Courier New"][color="#000080"]wglGetProcAddress()[/color][/font]:

PROC wglGetProcAddress( LPCSTR lpszProcName );
The only parameter is the name of the function you want to get the address of. The return value is the entry point of the function if it exists, or [font="Courier New"][color="#000080"]NULL[/color][/font] otherwise. Since the value returned is essentially a generic pointer, you need to cast it to the appropriate function pointer type.

Let's look at an example, using the function pointer we declared above:

pglCopyTexSubImage3DEXT = (void (APIENTRY *) (GLenum, GLint, GLint, GLint, GLint, GLint, GLint, GLsizei, GLsizei)) wglGetProcAddress("glCopyTexSubImage3DEXT");
And you thought the function pointer declaration was ugly.

You can make life easier on yourself by using [font="Courier New"][color="#000080"]typedefs[/color][/font]. In fact, you can obtain a header called "glext.h" which contains [font="Courier New"][color="#000080"]typedefs[/color][/font] for most of the extensions out there. This header can usually be obtained from your favorite hardware vendor (for example, NVIDIA includes it in their OpenGL SDK), or from SGI at the following URL:

http://oss.sgi.com/p...ple/ABI/glext.h

Using this header, the code above becomes:

PFNGLCOPYTEXSUBIMAGE3DEXTPROC pglCopyTexSubImage3DEXT = NULL;
pglCopyTexSubImage3DEXT = (PFNGLCOPYTEXSUBIMAGE3DEXTPROC) wglGetProcAddress("glCopyTexSubImage3DEXT");
Isn't that a lot better?

As long as [font="Courier New"][color="#000080"]wglGetProcAddress()[/color][/font] doesn't return NULL, you can then freely use the function pointer as if it were a normal OpenGL function.

[size="3"]Declaring Enumerants

To use new enumerants defined by an extension, all you have to do is define the enumerant to be the appropriate integer value. You can find this value in the extension specification. For example, the specification for the [font="Courier New"][color="#000080"]EXT_texture_lod_bias[/color][/font] extension says that [font="Courier New"][color="#000080"]GL_TEXTURE_LOD_BIAS_EXT[/color][/font] should have a value of 0x8501, so somewhere, probably in a header (or possibly even in gl.h), you'd have the following:

[font="Courier New"]#define GL_TEXTURE_LOD_BIAS_EXT 0x8501[/font]
Rather than defining all these values yourself, you can use the glext.h header, mentioned in the last section, since it contains all of them for you. Most OpenGL programmers I know use this header, so don't hesitate to use it yourself and save some typing time.
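If you do define a value yourself, it's worth guarding the definition so that your code still compiles cleanly if glext.h (which defines the same constant) ends up included elsewhere:

```c
/* The value 0x8501 comes from the EXT_texture_lod_bias specification.
   The guard lets this coexist with a glext.h that defines it too. */
#ifndef GL_TEXTURE_LOD_BIAS_EXT
#define GL_TEXTURE_LOD_BIAS_EXT 0x8501
#endif
```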

[size="3"]Win32 Specifics

In addition to the standard extensions covered so far, there are some extensions specific to Windows. These provide additions closely tied to the windowing system and the way it interacts with OpenGL, such as additional options related to pixel formats. These extensions are easily identified by their use of "WGL" instead of "GL" in their names. The name strings for these extensions normally aren't included in the buffer returned by [font="Courier New"][color="#000080"]glGetString(GL_EXTENSIONS)[/color][/font], although a few are. To get all of the Windows-specific extensions, you'll have to use another function, [font="Courier New"][color="#000080"]wglGetExtensionsStringARB()[/color][/font]. As the ARB suffix indicates, it's an extension itself ([font="Courier New"][color="#000080"]ARB_extensions_string[/color][/font]), so you'll have to get the address of it yourself using [font="Courier New"][color="#000080"]wglGetProcAddress()[/color][/font]. Note that for some reason, some ICDs identify this as [font="Courier New"][color="#000080"]wglGetExtensionsStringEXT()[/color][/font] instead, so if you fail to get a pointer to one, try the other. The format of this function is as follows:

const char* wglGetExtensionsStringARB(HDC hdc);
[bquote]Caution: Normally, it's good practice to check for an extension by examining the buffer returned by glGetString() before trying to obtain function entry points. However, it's not strictly necessary to do so. If you try to get the entry point for a non-existent function, [font="Courier New"][color="#000080"]wglGetProcAddress()[/color][/font] will return NULL, and you can simply test for that. The reason I'm mentioning this is because to use [font="Courier New"][color="#000080"]wglGetExtensionsStringARB()[/color][/font], that's exactly what you have to do. It appears that with most ICDs, the name string for this extension, [font="Courier New"][color="#000080"]WGL_ARB_extensions_string[/color][/font], doesn't appear in the buffer returned by [font="Courier New"][color="#000080"]glGetString()[/color][/font]. Instead, it is included in the buffer returned by [font="Courier New"][color="#000080"]wglGetExtensionsStringARB()[/color][/font]! Go figure.[/bquote]
Its sole parameter is the handle to your rendering context. The function returns a buffer similar to that returned by [font="Courier New"][color="#000080"]glGetString(GL_EXTENSIONS)[/color][/font], with the only difference being that it only contains the names of WGL extensions.
[bquote]Some WGL extension string names included in the buffer returned by [font="Courier New"][color="#000080"]wglGetExtensionsStringARB()[/color][/font] may also appear in the buffer returned by [font="Courier New"][color="#000080"]glGetString()[/color][/font]. This is due to the fact that those extensions existed before the creation of the [font="Courier New"][color="#000080"]ARB_extensions_string[/color][/font] extension, and so their name strings appear in both places to avoid breaking existing software.[/bquote]
Just as there is a glext.h header for core OpenGL extensions, so is there a wglext.h for WGL extensions. You can find it at the following link:

http://oss.sgi.com/p...le/ABI/wglext.h

[size="5"]Extensions and OpenGL 1.2 and 1.3, and the Future

Back at the beginning of this article, I said that OpenGL 1.2 and 1.3 features can be accessed using the extensions mechanism, which I've spent the last several pages explaining. The question, then, is how you go about doing that. The answer, as you may have guessed, is to treat 1.2 and 1.3 features as extensions. When it comes right down to it, that's really what they are, since nearly every feature that has been added to OpenGL originated as an extension. The only real difference between 1.2 and 1.3 features and "normal" extensions is that the former tend to be more widely supported in hardware, because, after all, they are part of the standard.
[bquote] Sometimes, an extension that has been added to the OpenGL 1.2 or 1.3 core specification will undergo slight changes, causing the semantics and/or behavior to be somewhat different from what is documented in the extension's specification. You should check the latest OpenGL specification to find out about these changes.[/bquote]
The next update to OpenGL will probably be 1.4. It will most likely continue the trend of promoting successful extensions to become part of the standard, and you should be able to continue to use the extension mechanism to access those features. After that, OpenGL 2.0 will hopefully make its appearance, introducing some radical changes to the standard. Once 2.0 is released, new headers and libraries may be released as well, possibly provided by the ARB members. These will make it easier to use new features.

[size="5"]What You Get

As you can see, using OpenGL 1.2 and 1.3, and extensions in general, isn't a terribly difficult process, but it does take some extra effort. You may be wondering what you can gain by using them, so let's take a closer look. The following sections list the features added by OpenGL 1.2 and 1.3, as well as some of the more useful extensions currently available. With each feature, I've included the extension you can use to access it.

[size="3"]OpenGL 1.2

3D Textures allow you to do some really cool volumetric effects. Unfortunately, they require a significant amount of memory. To give you an idea, a single 256x256x256 16-bit texture will use 32 MB! For this reason, hardware support for them is relatively limited, and because they are also slower than 2D textures, they may not always provide the best solution. They can, however, be useful if used judiciously. 3D textures correspond to the [font="Courier New"][color="#000080"]EXT_texture3D[/color][/font] extension.
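That 32 MB figure is just width x height x depth x bytes per texel; a quick sanity check (the helper function is mine, purely for illustration):

```c
/* Memory footprint of an uncompressed 3D texture, ignoring mipmaps. */
unsigned long Texture3DBytes(unsigned long width, unsigned long height,
                             unsigned long depth, unsigned long bytesPerTexel)
{
    return width * height * depth * bytesPerTexel;
}

/* 256 * 256 * 256 texels * 2 bytes (16 bits) = 33,554,432 bytes = 32 MB. */
```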

BGRA Pixel Formats make it easier to work with file formats which use blue-green-red color component ordering rather than red-green-blue. Bitmaps and Targas are two examples that fall in this category. BGRA pixel formats correspond to the [font="Courier New"][color="#000080"]EXT_bgra[/color][/font] extension.
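To appreciate what EXT_bgra saves you, here's the kind of per-pixel swizzle you'd otherwise have to run over a 24-bit bitmap before uploading it with GL_RGB (a sketch; the function name is made up for illustration):

```c
/* Swap the blue and red components of a tightly packed 24-bit BGR image
   in place, producing RGB component order. With EXT_bgra, you can skip
   this pass entirely and hand the data to OpenGL as-is. */
void SwapBgrToRgb(unsigned char *pixels, unsigned long pixelCount)
{
    unsigned long i;
    for (i = 0; i < pixelCount; ++i)
    {
        unsigned char tmp  = pixels[i * 3 + 0]; /* save blue */
        pixels[i * 3 + 0]  = pixels[i * 3 + 2]; /* red into slot 0 */
        pixels[i * 3 + 2]  = tmp;               /* blue into slot 2 */
    }
}
```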

Packed Pixel Formats provide support for packed pixels in host memory, allowing you to completely represent a pixel using a single unsigned byte, short, or int. Packed pixel formats correspond to the [font="Courier New"][color="#000080"]EXT_packed_pixels[/color][/font] extension, with some additions for reversed component order.
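As an illustration, one common layout packs 5 bits of red, 6 of green, and 5 of blue into a single unsigned short, red in the high bits (the arrangement OpenGL 1.2 exposes as GL_UNSIGNED_SHORT_5_6_5). A hypothetical helper showing the bit arithmetic:

```c
/* Pack 5-bit red, 6-bit green, and 5-bit blue components into one
   16-bit value, red occupying the most significant bits. */
unsigned short Pack565(unsigned short r, unsigned short g, unsigned short b)
{
    return (unsigned short)(((r & 0x1F) << 11) |
                            ((g & 0x3F) << 5)  |
                             (b & 0x1F));
}
```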

Normally, since texture mapping happens after lighting, modulating a texture with a lit surface will "wash out" specular highlights. To help avoid this effect, the Separate Specular Color feature has been added. This causes OpenGL to track the specular color separately and apply it after texture mapping. Separate specular color corresponds to the [font="Courier New"][color="#000080"]EXT_separate_specular_color[/color][/font] extension.

Texture Coordinate Edge Clamping addresses a problem with filtering at the edges of textures. When you select [font="Courier New"][color="#000080"]GL_CLAMP[/color][/font] as your texture wrap mode and use a linear filtering mode, the border will get sampled along with edge texels. Texture coordinate edge clamping causes only the texels which are actually part of the texture to be sampled. This corresponds to the [font="Courier New"][color="#000080"]SGIS_texture_edge_clamp[/color][/font] extension (which normally shows up as [font="Courier New"][color="#000080"]EXT_texture_edge_clamp[/color][/font] in the [font="Courier New"][color="#000080"]GL_EXTENSIONS[/color][/font] string).

Normal Rescaling allows you to automatically scale normals by a value you specify, which can be faster than renormalization in some cases, although it requires uniform scaling to be useful. This corresponds to the [font="Courier New"][color="#000080"]EXT_rescale_normal[/color][/font] extension.

Texture LOD Control allows you to specify certain parameters related to the texture level of detail used in mipmapping to avoid popping in certain situations. It can also be used to increase texture transfer performance, since the extension can be used to upload only the mipmap levels visible in the current frame, instead of uploading the entire mipmap hierarchy. This matches the [font="Courier New"][color="#000080"]SGIS_texture_lod[/color][/font] extension.

The Draw Element Range feature adds a new function to be used with vertex arrays. [font="Courier New"][color="#000080"]glDrawRangeElements()[/color][/font] is similar to [font="Courier New"][color="#000080"]glDrawElements()[/color][/font], but it lets you indicate the range of indices within the arrays that you are using, allowing the hardware to process the data more efficiently. This corresponds to the [font="Courier New"][color="#000080"]EXT_draw_range_elements[/color][/font] extension.

The Imaging Subset is not fully present in all OpenGL implementations, since it's primarily intended for image processing applications. It's actually a collection of several extensions. The following are the ones that may be of interest to game developers.
• [font="Courier New"][color="#000080"]EXT_blend_color[/color][/font] allows you to specify a constant color which is used to define blend weighting factors.
• [font="Courier New"][color="#000080"]SGI_color_matrix[/color][/font] introduces a new matrix stack to the pixel pipeline, causing the RGBA components of each pixel to be multiplied by a 4x4 matrix.
• [font="Courier New"][color="#000080"]EXT_blend_subtract[/color][/font] gives you two ways to use the difference between two blended surfaces (rather than the sum).
• [font="Courier New"][color="#000080"]EXT_blend_minmax[/color][/font] lets you keep either the minimum or maximum color components of the source and destination colors.

[size="3"]OpenGL 1.3

The Multitexturing extension was promoted to ARB status with OpenGL 1.2.1 (the only real change in that release), and in 1.3, it was made part of the standard. Multitexturing allows you to apply more than one texture to a surface in a single pass, which is useful for many techniques, such as lightmapping and detail texturing. It was promoted from the [font="Courier New"][color="#000080"]ARB_multitexture[/color][/font] extension.

Texture Compression allows you to either provide OpenGL with precompressed data for your textures, or to have the driver compress the data for you. The advantage of doing so is that you save both texture memory and bandwidth, thereby improving performance. Compressed textures were promoted from the [font="Courier New"][color="#000080"]ARB_texture_compression[/color][/font] extension.

Cube Map Textures provide a new type of texture consisting of six two-dimensional textures in the shape of a cube. Texture coordinates act like a vector from the center of the cube, indicating which face and which texels to use. Cube mapping is useful in environment mapping and texture-based diffuse lighting. It is also important for pixel-perfect dot3 bumpmapping, as a normalization lookup for interpolated fragment normals. It was promoted from the [font="Courier New"][color="#000080"]ARB_texture_cube_map[/color][/font] extension.
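The face-selection rule is simple to sketch: the hardware picks the face on the axis with the largest absolute component of the direction vector. A hypothetical C helper illustrating the idea, with face indices following the GL_TEXTURE_CUBE_MAP_POSITIVE_X through NEGATIVE_Z ordering:

```c
#include <math.h>

/* Select the cube map face a direction vector hits: the face on the
 * axis with the largest absolute component. Returns 0..5, matching
 * the order +X, -X, +Y, -Y, +Z, -Z. Illustrative sketch, not a GL call. */
int cube_face(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1; /* +X / -X */
    if (ay >= az)             return y >= 0.0f ? 2 : 3; /* +Y / -Y */
    return z >= 0.0f ? 4 : 5;                           /* +Z / -Z */
}
```

The remaining two components, divided by the major axis, become the 2D coordinates within the chosen face.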

Multisampling allows for automatic antialiasing by sampling all geometry several times for each pixel. When it's supported, an extra buffer is created which contains color, depth, and stencil values. Multisampling is, of course, expensive, and you need to be sure to request a rendering context that supports it. It was promoted from the [font="Courier New"][color="#000080"]ARB_multisample[/color][/font] extension.

The Texture Add Environment Mode adds a new enumerant which can be passed to [font="Courier New"][color="#000080"]glTexEnv()[/color][/font]. It causes the texture to be additively combined with the incoming fragment. This was promoted from the [font="Courier New"][color="#000080"]ARB_texture_env_add[/color][/font] extension.

Texture Combine Environment Modes add a lot of new options for the way textures are combined. In addition to the texture color and the incoming fragment, you can also include a constant texture color and the results of the previous texture environment stage as parameters. These parameters can be combined using passthrough, multiplication, addition, biased addition, subtraction, and linear interpolation. You can select combiner operations for the RGB and alpha components separately. You can also scale the final result. As you can see, this addition gives you a great deal of flexibility. Texture combine environment modes were promoted from the [font="Courier New"][color="#000080"]ARB_texture_env_combine[/color][/font] extension.
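As one concrete example of these combiners, the GL_INTERPOLATE operation blends two arguments using a third as the per-component weight. A sketch of the math in plain C (illustrative only, not a GL call):

```c
/* The GL_INTERPOLATE texture combiner operation:
 *   result = arg0 * arg2 + arg1 * (1 - arg2), per component.
 * arg2 is typically a constant color or an alpha value used as the
 * blend weight. Illustrative sketch of the math. */
void combine_interpolate(const float a0[3], const float a1[3],
                         const float a2[3], float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = a0[i] * a2[i] + a1[i] * (1.0f - a2[i]);
}
```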

The Texture Dot3 Environment Mode adds a new enumerant to the texture combine environment modes. The dot3 environment mode allows you to take the dot product of two specified components and place the results in the RGB or RGBA components of the output color. This can be used for per-pixel lighting or bump mapping. The dot3 environment mode was promoted from the [font="Courier New"][color="#000080"]ARB_texture_env_dot3[/color][/font] extension.

Texture Border Clamp is similar to texture edge clamp, except that it causes texture coordinates that straddle the edge to sample from border texels only, rather than from edge texels. This was promoted from the [font="Courier New"][color="#000080"]ARB_texture_border_clamp[/color][/font] extension.

Transpose Matrices allow you to pass row major matrices to OpenGL, which normally uses column major matrices. This is useful not only because row major is how C stores two dimensional arrays, but also because it is how Direct3D stores matrices, which saves conversion work when you're writing a rendering engine that uses both APIs. This addition only adds to the interface; it does not change the way OpenGL works internally. Transpose matrices were promoted from the [font="Courier New"][color="#000080"]ARB_transpose_matrix[/color][/font] extension.
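The conversion these entry points save you from writing is just a 4x4 transpose. A minimal sketch (the helper name is made up for illustration):

```c
/* Transpose a row major 4x4 matrix (C / Direct3D layout) into the
 * column major array OpenGL expects -- effectively what
 * glLoadTransposeMatrixf() does for you. Hypothetical helper. */
void transpose4(const float row_major[16], float col_major[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            col_major[c * 4 + r] = row_major[r * 4 + c];
}
```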

[size="3"]Useful Extensions

At the time of writing, there are 269 extensions listed in the Extension Registry. Even if I focused on the ones actually being used, I couldn't hope to cover them all, even briefly. Instead, I'll focus on a few that seem to be the most important for use in games.

Shaders
If you're unfamiliar with shaders, then a quick overview is in order. Vertex shaders allow you to customize the geometry transformation pipeline. Pixel shaders work later in the pipeline, and allow you to control how the final pixel color is determined. Together, the two provide incredible functionality. I recommend that you download NVIDIA's Effects Browser to see examples of the things you can do with shaders.

Using shaders can be somewhat problematic right now due to the fact that NVIDIA and ATI both handle them very differently. If you want your game to take advantage of shaders, you'll have to write a lot of special case code to use each vendor's method. At the ARB's last several meetings, this has been a major discussion point. There is a great deal of pressure to create a common shader interface. In fact, it is at the core of 3D Labs' OpenGL 2.0 proposal. Hopefully, the 1.4 specification will address this issue, but the ARB seems to be split as to whether a common shader interface should be a necessary component of 1.4.

Compiled Vertex Arrays
The [font="Courier New"][color="#000080"]EXT_compiled_vertex_array[/color][/font] extension adds two functions which allow you to lock and unlock your vertex arrays. When the vertex arrays are locked, OpenGL assumes that their contents will not be changed. This allows OpenGL to make certain optimizations, such as caching the results of vertex transformation. This is especially useful if your data contains large numbers of shared vertices, or if you are using multipass rendering. When a vertex needs to be transformed, the cache is checked to see if the results of the transformation are already available. If they are, the cached results are used instead of recalculating the transformation.

The benefits gained by using CVAs depend on the data set, the video card, and the drivers. Although you generally won't see a decrease in performance when using CVAs, it's quite possible that you won't see much of an increase either. In any case, the fact that they are fairly widely supported makes them worth looking into.

WGL Extensions
There are a number of extensions available that add to the way Windows interfaces with OpenGL. Here are some of the main ones.
• [font="Courier New"][color="#000080"]ARB_pixel_format[/color][/font] augments the standard pixel format functions (i.e. [font="Courier New"][color="#000080"]DescribePixelFormat[/color][/font], [font="Courier New"][color="#000080"]ChoosePixelFormat[/color][/font], [font="Courier New"][color="#000080"]SetPixelFormat[/color][/font], and [font="Courier New"][color="#000080"]GetPixelFormat[/color][/font]), giving you more control over which pixel format is used. The functions allow you to query individual pixel format attributes, and allow for the addition of new attributes that are not included in the pixel format descriptor structure. Many other WGL extensions are dependent on this extension.
• [font="Courier New"][color="#000080"]ARB_pbuffer[/color][/font] adds pixel buffers, which are off-screen (non-visible) rendering buffers. On most cards, these buffers are in video memory, and the operation is hardware accelerated. They are often useful for creating dynamic textures, especially when used with the render texture extension.
• [font="Courier New"][color="#000080"]ARB_render_texture[/color][/font] depends on the pbuffer extension. It is specifically designed to provide buffers that can be rendered to and be used as texture data. These buffers are the perfect solution for dynamic texturing.
• [font="Courier New"][color="#000080"]ARB_buffer_region[/color][/font] allows you to save portions of the color, depth, or stencil buffers to either system or video memory. This region can then be quickly restored to the OpenGL window.

Fences and Ranges
NVIDIA has created two extensions, [font="Courier New"][color="#000080"]NV_fence[/color][/font] and [font="Courier New"][color="#000080"]NV_vertex_array_range[/color][/font], that can make video cards based on NVIDIA chipsets use vertex data much more efficiently than they normally would.

On NVIDIA hardware, the vertex array range extension is currently the fastest way to transfer data from the application to the GPU. Its speed comes from the fact that it allows the developer to allocate and access memory that normally can only be accessed by the GPU.

Although not directly related to the vertex array range extension, the fence extension can help make it even more efficient. When a fence is added to the OpenGL command stream, it can then be queried at any time. Usually, it is queried to determine whether or not it has been completed yet. In addition, you can force the application to wait for the fence to be completed. Fences can be used with vertex array range when there is not enough memory to hold all of your vertex data at once. In this situation, you can fill up available memory, insert a fence, and when the fence has completed, repeat the process.

Shadows
There are two extensions, [font="Courier New"][color="#000080"]SGIX_shadow[/color][/font] and [font="Courier New"][color="#000080"]SGIX_depth_texture[/color][/font], which work together to allow for hardware-accelerated shadow mapping techniques. The main reason I mention these is that there are currently proposals in place to promote these extensions to ARB status. In addition, NVIDIA is recommending that they be included in the OpenGL 1.4 core specification. Because they may change somewhat if they are promoted, I won't go into detail about how these extensions work. They may prove to be a very attractive alternative to the stencil shadow techniques presently in use.

[size="5"]Writing Well-Behaved Programs Using Extensions

Something you need to be very aware of when using any extension is that it is highly likely that someone will run your program on a system that does not support that extension. It's your responsibility to make sure that when this happens, your program behaves intelligently, rather than crashing or rendering garbage to the screen. In this section, you'll learn several methods to help ensure that your program gets the best possible results on all systems. The focus is on two areas: how to select which extensions to use, and how to respond when an extension you're using isn't supported.

[size="3"]Choosing Extensions

The most important thing you can do to ensure that your program runs on as many systems as possible is to choose your extensions wisely. The following are some factors you should consider.

Do you really need the extension?
A quick look at the Extension Registry will reveal that there are a lot of different extensions available, and new ones are being introduced on a regular basis. It's tempting to try many of them out just to see what they do. If you're coding a demo, there's nothing wrong with this, but if you're creating a game that will be distributed to a lot of people, you need to ask yourself whether or not the extension is really needed. Does it make your game run faster? Does it make your game use less video memory? Does it improve the visual quality of your game? Will using it reduce your development time? If the answer to any of these is yes, then the extension is probably a good candidate for inclusion in your product. On the other hand, if it offers no significant benefit, you may want to avoid it altogether.

What level of promotion is the extension at?
Extensions with higher promotion levels tend to be more widely supported. Any former extension that has been made part of the core 1.2 or 1.3 specification will be supported in compliant implementations, so they are the safest to use (1.2 more so than 1.3 since it's been around for longer). ARB-approved extensions (the ones that use the [font="Courier New"][color="#000080"]ARB[/color][/font] prefix) aren't required to be supported in compliant implementations, but they are expected to be widely supported, so they're the next safest. Extensions using the [font="Courier New"][color="#000080"]EXT[/color][/font] prefix are supported by two or more hardware vendors, and are thus moderately safe to use. Finally, vendor specific extensions are the most dangerous. Using them generally requires that you write a lot of special case code. However, they often offer significant benefits, so they should not be ignored. You just have to be especially careful when using them.
[bquote]There are times when a vendor-specific extension can be completely replaced by an EXT or ARB extension. In this case, the latter should always be favored.[/bquote]
If your target audience is hardcore gamers, you can expect that they are going to have newer hardware that will support many, if not all, of the latest extensions, so you can feel safer using them. Moreover, they will probably expect you to use the latest extensions; they want your game to take advantage of all those features they paid so much money for!

If, on the other hand, you're targeting casual game players, you'll probably want to use very few extensions, if any.

When will your game be done?
As mentioned earlier, the graphics industry moves at an extremely quick pace. An extension that is only supported on cutting-edge cards today may enjoy widespread support in two years. Then again, it may become entirely obsolete, either because it is something that consumers don't want, or because it gets replaced by another extension. If your ship date is far enough in the future, you may be able to risk using brand new extensions to enhance your game's graphics. On the other hand, if your game is close to shipping, or if you don't want to risk possible rewrites later on, you're better off sticking with extensions that are already well-supported.

[size="3"]What To Do When an Extension Isn't Supported

First of all, let's make one thing very clear. Before you use any extension, you need to check to see if it is supported on the user's system. If it's not, you need to do something about it. What that "something" is depends on a number of things, as we'll discuss here, but you really need to have some kind of contingency plan. I've seen OpenGL code that just assumes that the needed extensions will be there. This can lead to blank screens, unexpected rendering effects, and even crashes. Here are some of the possible methods you can use when you find that an extension isn't supported.
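Checking for an extension means searching the space separated string returned by glGetString(GL_EXTENSIONS). A naive strstr() is not quite safe, because one extension name can appear as a prefix of another (for example, "GL_EXT_texture" inside "GL_EXT_texture3D"). Here is a sketch of how such a check might be written; the function name is hypothetical, though the demo's CheckExtension() presumably does something similar:

```c
#include <string.h>

/* Check for an extension name within a space separated extension
 * string, the way glGetString(GL_EXTENSIONS) returns it. A match
 * must start at the beginning of the string or after a space, and
 * end at a space or the end of the string, so that one extension
 * name can't falsely match inside another. Illustrative sketch. */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions || p[-1] == ' ');
        int ends   = (p[len] == ' ' || p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;   /* partial match; keep searching */
    }
    return 0;
}
```

In a real program you would call this once per extension at startup, right after creating the rendering context, and cache the results in flags.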

Don't Use the Extension
If the extension is non-critical, or if there is simply no alternate way to accomplish the same thing, you may be able to get away with just not using it at all. For example, compiled vertex arrays ([font="Courier New"][color="#000080"]EXT_compiled_vertex_array[/color][/font]) offer potential speed enhancements when using vertex arrays. The speed gains usually aren't big enough to make or break your program, though, so if they aren't supported, you can use a flag or other means to tell your program to not attempt to use them.

Try Similar Extensions
Because of the way that extensions evolve, it's possible that the extension you're trying to use is present under an older name (for example, most ARB extensions used to be EXT extensions, and vendor specific extensions before that). Or, if you're using a vendor-specific extension, there may be extensions from other vendors that do close to the same thing. The biggest drawback to this solution is that it requires a lot of special case code.

Find an Alternate Way
Many extensions were introduced as more efficient ways to do things which could already be done using only core OpenGL features. If you're willing to put in the effort, you can deal with the absence of these extensions by doing things the "old way". For instance, most things that can be done with multitexturing can be done using multipass rendering and alpha blending. Besides the additional code you have to add to handle this, your game will run slower because it has to make multiple passes through the geometry. That's better than not being able to run the game at all, and arguably better than simply dumping multitexturing and sacrificing visual quality.
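To see why the multipass fallback works, consider the modulate combine: a single multitextured pass computes fragment = tex0 * tex1, and the same product can be produced by drawing the geometry twice, blending the second pass with glBlendFunc(GL_ZERO, GL_SRC_COLOR), which multiplies what is already in the framebuffer by the incoming color. A sketch of the equivalence, with the framebuffer reduced to a single float per channel (illustrative C, not GL calls):

```c
/* Single pass: multitexture with GL_MODULATE combines the two
 * texture samples by multiplication. */
float modulate_single_pass(float tex0, float tex1)
{
    return tex0 * tex1;
}

/* Two passes: pass 1 writes tex0 into the framebuffer; pass 2
 * redraws the geometry sampling tex1, blended with
 * glBlendFunc(GL_ZERO, GL_SRC_COLOR), i.e. dst = dst * src. */
float modulate_two_pass(float tex0, float tex1)
{
    float framebuffer = tex0;          /* pass 1 */
    framebuffer = tex1 * framebuffer;  /* pass 2 blend */
    return framebuffer;
}
```

The results are identical per channel; the cost is transforming and rasterizing the geometry twice, which is exactly the performance penalty described above.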

Exit Gracefully
In some cases, you may decide that an extension is essential to your program, possibly because there is no other way to do the things you want to do, or because providing a backup plan would require more time and effort than you're willing to invest. When this happens, you should cause your program to exit normally, with a message telling the user what they need to be able to play your game. Note that if you choose to go this route, you should make sure that the hardware requirements listed on the product clearly state what is needed, or your customers will hate you.

[size="5"]The Demo

I've created a simple demo (see attached resource file) to show you some extensions in action. As you can see from Figure 1, the demo itself is fairly simple, nothing more than a light moving above a textured surface, casting a light on it using a lightmap. The demo isn't interactive at all. I kept it simple because I wanted to be able to focus on the extension mechanism.

Figure 1: Basic lightmapping (click to enlarge)

The demo uses seven different extensions. Some of them aren't strictly necessary, but I wanted to include enough to get the point across. Table 2 lists all of the extensions in use, and how they are used.

Table 2 - Extensions used in the demo

• [font="Courier New"][color="#000080"]ARB_multitexture[/color][/font] - The floor in this demo is a single quad with two textures applied to it: one for the bricks, and the other for the lightmap, which is updated with the light's position. The textures are combined using modulation.
• [font="Courier New"][color="#000080"]EXT_point_parameters[/color][/font] - When used, this extension causes point primitives to change size depending on their distance from the eye. You can set attenuation factors to determine how much the size changes, as well as define maximum and minimum sizes, and even specify that the points become partially transparent if they go below a certain threshold. The yellow light in the demo takes advantage of this extension. The effect is subtle, but you should be able to notice it changing size.
• [font="Courier New"][color="#000080"]EXT_swap_control[/color][/font] - Most OpenGL drivers allow the user to specify whether or not screen redraws should wait for the monitor's vertical refresh, or vertical sync. If this is enabled, your game's framerate will be limited to whatever the monitor refresh rate is set to. This extension allows you to programmatically disable vsync to avoid this limitation.
• [font="Courier New"][color="#000080"]EXT_bgra[/color][/font] - Since the demo uses Targas for textures, this extension allows it to use their data directly, without having to swap the red and blue components before creating the textures.
• [font="Courier New"][color="#000080"]ARB_texture_compression[/color][/font] - Since the demo only uses two textures, it won't gain much by using texture compression, but since it's easy, I used it anyway. I allow the drivers to compress the data for me, rather than doing so myself beforehand.
• [font="Courier New"][color="#000080"]EXT_texture_edge_clamp[/color][/font] - Again, this extension wasn't strictly necessary, but the demo shows how easy it is to use.
• [font="Courier New"][color="#000080"]SGIS_generate_mipmap[/color][/font] - GLU provides a function, [font="Courier New"][color="#000080"]gluBuild2DMipmaps()[/color][/font], that allows you to specify just the base level of a mipmap chain and automatically generates the other levels for you. This extension performs essentially the same function, with a couple of exceptions. One, it is a little more efficient. Two, it will cause all of the mipmap levels to be regenerated automatically whenever you change the base level. This can be useful when using dynamic textures.

The full source code to the demo is included on the CD, but there are a couple of functions that I want to look at.

The first is [font="Courier New"][color="#000080"]InitializeExtensions()[/color][/font]. This function is called at startup, right after the rendering context is created. It verifies that the extensions used are supported, and gets the function entry points that are needed.

bool InitializeExtensions()
{
  if (CheckExtension("GL_ARB_multitexture"))
  {
    glMultiTexCoord2f = (PFNGLMULTITEXCOORD2FARBPROC)
      wglGetProcAddress("glMultiTexCoord2fARB");
    glActiveTexture = (PFNGLACTIVETEXTUREARBPROC)
      wglGetProcAddress("glActiveTextureARB");
    glClientActiveTexture = (PFNGLCLIENTACTIVETEXTUREARBPROC)
      wglGetProcAddress("glClientActiveTextureARB");
  }
  else
  {
    MessageBox(g_hwnd, "This program requires multitexturing, which "
               "is not supported by your hardware", "ERROR", MB_OK);
    return false;
  }

  if (CheckExtension("GL_EXT_point_parameters"))
  {
    glPointParameterfvEXT = (PFNGLPOINTPARAMETERFVEXTPROC)
      wglGetProcAddress("glPointParameterfvEXT");
  }

  if (CheckExtension("WGL_EXT_swap_control"))
  {
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)
      wglGetProcAddress("wglSwapIntervalEXT");
  }

  if (!CheckExtension("GL_EXT_bgra"))
  {
    MessageBox(g_hwnd, "This program requires the BGRA pixel storage "
               "format, which is not supported by your hardware", "ERROR", MB_OK);
    return false;
  }

  g_useTextureCompression = CheckExtension("GL_ARB_texture_compression");
  g_useEdgeClamp = CheckExtension("GL_EXT_texture_edge_clamp");
  g_useSGISMipmapGeneration = CheckExtension("GL_SGIS_generate_mipmap");

  return true;
}
As you can see, there are two extensions that the demo requires: multitexturing and BGRA pixel formats. Although I could have provided alternate ways to do both of these things, doing so would have unnecessarily complicated the program.

Report Article

## User Feedback

There are no comments to display.

## Create an account

Register a new account

• 0
• 0
• 32
• 0
• 1

• 12
• 16
• 26
• 10
• 44
• ### Similar Content

• I have been having difficulty with many lights and deferred shading in opengl. Some users here helped me but I'm still unsuccessful. ( i posted in stackoverflow but i dont like how the site work, i prefer here)
I'm at a time trying to add lights to my scene but unfortunately it's to no avail.
Following the learnopengl deferred shading tutorial, several lights are shown but in the final screen quad shader, and I wanted to render my lights independently.
At the end of the session the author indicates how to do it, which is adding beads as "light source", as shown below:

How could I accumulate all the lights in my main scene? Why is not working ?
That is, each light was rendered on the ball along with the framebuffer.
A snippet of my code:
Vertex:
layout (location = 0) in vec3 aPos; layout (location = 1) in vec2 aTexCoords; uniform mat4 projection; uniform mat4 view; uniform mat4 model; out vec2 TexCoords; void main() { TexCoords = aTexCoords; gl_Position = projection * view * model * vec4(aPos, 1.0); } //////////////// FRAGMENT ////////////// out vec4 FragColor; in vec2 TexCoords; uniform sampler2D gPosition; uniform sampler2D gNormal; uniform sampler2D gAlbedoSpec; struct Light { vec3 Position; vec3 Color; float Linear; float Quadratic; }; uniform Light light; uniform vec3 viewPos; void main() { // retrieve data from gbuffer vec3 FragPos = texture(gPosition, TexCoords).rgb; vec3 Normal = texture(gNormal, TexCoords).rgb; vec3 Diffuse = texture(gAlbedoSpec, TexCoords).rgb; float Specular = texture(gAlbedoSpec, TexCoords).a; // then calculate lighting as usual vec3 lighting = Diffuse * 0.5; // hard-coded ambient component vec3 viewDir = normalize(viewPos - FragPos); // diffuse vec3 lightDir = normalize(light.Position - FragPos); vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Diffuse * light.Color; // specular vec3 halfwayDir = normalize(lightDir + viewDir); float spec = pow(max(dot(Normal, halfwayDir), 0.0), 16.0); vec3 specular = light.Color * spec * Specular; // attenuation float distance = length(light.Position - FragPos); float attenuation = 1.0 / (1.0 + light.Linear * distance + light.Quadratic * distance * distance); diffuse *= attenuation; specular *= attenuation; lighting += diffuse + specular; FragColor = vec4(lighting, 1.0); }
Has anyone experienced this and could guide me in this situation? I can not make the lights pile up to have a final result with all the lights.
For more details, I put the code here.
How could I accumulate all the lights in my main scene?
Please, I've been trying for a long time, have patience.

Thank u

• Hello!
My texture problems just don't want to stop keep coming...
After a lot of discussions here with you guys, I've learned a lot about textures and digital images and I fixed my bugs. But right now I'm making an animation system and this happened.

Now if you see, the first animation (bomb) is ok. But the second and the third (which are arrows changing direction) are being render weird (They get the GL_REPEAT effect).
In order to be sure, I only rendered (without using my animation system or anything else i created in my project, just using simple opengl rendering code) the textures that are causing this issue and this is the result (all these textures have exactly 115x93 resolution)

I will attach all the images which I'm using.
giphy-27 and giphy-28 are rendering just fine.
All the others not.They give me an effect like GL_REPEAT which I use in my code. This is why I'm getting this result? But my texture coordinates are inside the range of -1 and 1 so why?
My Texture Code:
#include "Texture.h" #include "STB_IMAGE/stb_image.h" #include "GLCall.h" #include "EngineError.h" #include "Logger.h" Texture::Texture(std::string path, int unit) { //Try to load the image. unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0); //Image loaded successfully. if (data) { //Generate the texture and bind it. GLCall(glGenTextures(1, &m_id)); GLCall(glActiveTexture(GL_TEXTURE0 + unit)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); //Not Transparent texture. if (m_channels == 3) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data)); } //Transparent texture. else if (m_channels == 4) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data)); } //This image is not supported. else { std::string err = "The Image: " + path; err += " , is using " + m_channels; err += " channels which are not supported."; throw VampEngine::EngineError(err); } //Texture Filters. GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)); //Generate mipmaps. GLCall(glGenerateMipmap(GL_TEXTURE_2D)); } //Loading Failed. else throw VampEngine::EngineError("There was an error loading image \ (Myabe the image format is not supported): " + path); //Unbind the texture. GLCall(glBindTexture(GL_TEXTURE_2D, 0)); //Free the image data. stbi_image_free(data); } Texture::~Texture() { GLCall(glDeleteTextures(1, &m_id)); } void Texture::Bind(int unit) { GLCall(glActiveTexture(GL_TEXTURE0 + unit)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); }
My Render Code:
#include "Renderer.h" #include "glcall.h" #include "shader.h" Renderer::Renderer() { //Vertices. float vertices[] = { //Positions Texture Coordinates. 0.0f, 0.0f, 0.0f, 0.0f, //Left Bottom. 0.0f, 1.0f, 0.0f, 1.0f, //Left Top. 1.0f, 1.0f, 1.0f, 1.0f, //Right Top. 1.0f, 0.0f, 1.0f, 0.0f //Right Bottom. }; //Indices. unsigned int indices[] = { 0, 1, 2, //Left Up Triangle. 0, 3, 2 //Right Down Triangle. }; //Create and bind a Vertex Array. GLCall(glGenVertexArrays(1, &VAO)); GLCall(glBindVertexArray(VAO)); //Create and bind a Vertex Buffer. GLCall(glGenBuffers(1, &VBO)); GLCall(glBindBuffer(GL_ARRAY_BUFFER, VBO)); //Create and bind an Index Buffer. GLCall(glGenBuffers(1, &EBO)); GLCall(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO)); //Transfer the data to the VBO and EBO. GLCall(glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW)); GLCall(glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW)); //Enable and create the attribute for both Positions and Texture Coordinates. GLCall(glEnableVertexAttribArray(0)); GLCall(glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, (void *)0)); //Create the shader program. m_shader = new Shader("Shaders/sprite_vertex.glsl", "Shaders/sprite_fragment.glsl"); } Renderer::~Renderer() { //Clean Up. GLCall(glDeleteVertexArrays(1, &VAO)); GLCall(glDeleteBuffers(1, &VBO)); GLCall(glDeleteBuffers(1, &EBO)); delete m_shader; } void Renderer::RenderElements(glm::mat4 model) { //Create the projection matrix. glm::mat4 proj = glm::ortho(0.0f, 600.0f, 600.0f, 0.0f, -1.0f, 1.0f); //Set the texture unit to be used. m_shader->SetUniform1i("diffuse", 0); //Set the transformation matrices. m_shader->SetUniformMat4f("model", model); m_shader->SetUniformMat4f("proj", proj); //Draw Call. GLCall(glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL)); }
#version 330 core layout(location = 0) in vec4 aData; uniform mat4 model; uniform mat4 proj; out vec2 TexCoord; void main() { gl_Position = proj * model * vec4(aData.xy, 0.0f, 1.0); TexCoord = aData.zw; }
#version 330 core out vec4 Color; in vec2 TexCoord; uniform sampler2D diffuse; void main() { Color = texture(diffuse, TexCoord); }

• Hello!
For those who don't know me I have started a quite amount of threads about textures in opengl. I was encountering bugs like the texture was not appearing correctly (even that my code and shaders where fine) or I was getting access violation in memory when I was uploading a texture into the gpu. Mostly I thought that these might be AMD's bugs because when someone was running my code he was getting a nice result. Then someone told me "Some drivers implementations are more forgiven than others, so it might happen that your driver does not forgive that easily. This might be the reason that other can see the output you where expecting". I did not believe him and move on.
Then Mr. @Hodgman gave me the light. He explained me somethings about images and what channels are (I had no clue) and with some research from my perspective I learned how digital images work in theory and what channels are. Then by also reading this article about image formats I also learned some more stuff.
The question now is, if for example I want to upload a PNG to the gpu, am I 100% that I can use 4 channels? Or even that the image is a PNG it might not contain all 4 channels (rgba). So I need somehow to retrieve that information so my code below will be able to tell the driver how to read the data based on the channels.
I'm asking this just to know how to properly write the code below (with capitals are the variables which I want you to tell me how to specify)
stbi_set_flip_vertically_on_load(1); //Try to load the image. unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, HOW_MANY_CHANNELS_TO_USE); //Image loaded successfully. if (data) { //Generate the texture and bind it. GLCall(glGenTextures(1, &m_id)); GLCall(glActiveTexture(GL_TEXTURE0 + unit)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); GLCall(glTexImage2D(GL_TEXTURE_2D, 0, WHAT_FORMAT_FOR_THE_TEXTURE, m_width, m_height, 0, WHAT_FORMAT_FOR_THE_DATA, GL_UNSIGNED_BYTE, data)); } So back to my question. If I'm loading a PNG, and tell stbi_load to use 4 channels and then into glTexImage2D,  WHAT_FORMAT_FOR_THE_DATA = RGBA will I be sure that the driver will properly read the data without getting an access violation?
I want to write a code that no matter the image file, it will always be able to read the data correctly and upload them to the GPU.
Like 100% of the tutorials and guides about openGL out there (even one which I purchased from Udemy) where not explaining all these stuff and this is why I was experiencing all these bugs and got stuck for months!

Also some documentation you might need to know about stbi_load to help me more:
// Limitations: // - no 12-bit-per-channel JPEG // - no JPEGs with arithmetic coding // - GIF always returns *comp=4 // // Basic usage (see HDR discussion below for HDR usage): // int x,y,n; // unsigned char *data = stbi_load(filename, &x, &y, &n, 0); // // ... process data if not NULL ... // // ... x = width, y = height, n = # 8-bit components per pixel ... // // ... replace '0' with '1'..'4' to force that many components per pixel // // ... but 'n' will always be the number that it would have been if you said 0 // stbi_image_free(data)

• Hello!

I was trying to load some textures and I was getting an access violation in atioglxx.dll.
stb_image, which I'm using to load the PNG file into memory, was not reporting any errors.

I found this post on the internet explaining that it is a bug in the AMD driver.
I fixed the problem by changing the image file I was using. The image that was causing the issue was generated by this online converter from GIF to PNGs.

Does anyone know more about it?

Thank you.
• By lGuy
Hi, I've recently been trying to implement screen space reflections into my engine, however, it is extremely buggy. I'm using this tutorial : http://imanolfotia.com/blog/update/2017/03/11/ScreenSpaceReflections.html
The reflections look decent when I am close to the ground (first image), however when I get further away from the ground (the surface that is reflecting stuff), the reflections become blocky and strange (second image).
I have a feeling it has something to do with the fact that the further the rays travel in view space, the more spread out the samples get, so the reflected image is less detailed, hence the blockiness. However, I'm really not sure about this, and even if that is the cause, I don't know how to fix it.
It would be great if anyone had any suggestions around how to debug or sort this thing out. Thanks.
Here is the code for the ray casting
vec4 ray_cast(inout vec3 direction, inout vec3 hit_coord, out float depth_difference, out bool success)
{
    vec3 original_coord = hit_coord;
    direction *= 0.2;

    vec4 projected_coord;
    float sampled_depth;

    for (int i = 0; i < 20; ++i)
    {
        hit_coord += direction;

        projected_coord = projection_matrix * vec4(hit_coord, 1.0);
        projected_coord.xy /= projected_coord.w;
        projected_coord.xy = projected_coord.xy * 0.5 + 0.5;

        // view_positions store the view space coordinates of the objects
        sampled_depth = textureLod(view_positions, projected_coord.xy, 2).z;

        if (sampled_depth > 1000.0)
            continue;

        depth_difference = hit_coord.z - sampled_depth;

        if ((direction.z - depth_difference) < 1.2)
        {
            if (depth_difference <= 0)
            {
                // binary search for more detailed sample
                vec4 result = vec4(binary_search(direction, hit_coord, depth_difference), 1.0);
                success = true;
                return result;
            }
        }
    }

    return vec4(projected_coord.xy, sampled_depth, 0.0);
}

Here is the code just before this gets called
float ddepth;
vec3 jitt = mix(vec3(0.0), vec3(hash33(view_position)), 0.5);
vec3 ray_dir = reflect(normalize(view_position), normalize(view_normal));
ray_dir = ray_dir * max(0.2, -view_position.z);

/* ray cast */
bool hit = false; // the out bool 'success' parameter from ray_cast's signature
vec4 coords = ray_cast(ray_dir, view_position, ddepth, hit);
