
Tocs1001

Members
  • Content count: 142
  • Joined
  • Last visited

Community Reputation: 695 Good

About Tocs1001

  • Rank: Member
  1. The wiki says "A forward compatible context must fully remove deprecated features in the version that it returns; you should never actually use this." It's actually bolded on the wiki, so I took it at face value. Perhaps the wiki is overzealous. It seems like it would be a good idea not to include deprecated features; however, I suppose if somewhere down the line features I'm using became deprecated and I asked for the latest context without deprecated features, my program would suddenly break. That seems like a far-out case though. Interestingly enough, if I set major to 1 and minor to 0, I get a 3.3 context. I tried updating the drivers, same results.
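     A minimal sketch of the two kinds of requests being discussed (illustrative only, not code from these posts; tokens are the usual ones from WGL_ARB_create_context / WGL_ARB_create_context_profile in wglext.h):

     // Core profile request: deprecated features are absent because the profile says so.
     int coreAttribs[] =
     {
         WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
         WGL_CONTEXT_MINOR_VERSION_ARB, 3,
         WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
         0
     };

     // Forward-compatible request: additionally asks the driver to remove anything marked
     // deprecated in the requested version (the flag the wiki advises against using).
     int fwdCompatAttribs[] =
     {
         WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
         WGL_CONTEXT_MINOR_VERSION_ARB, 3,
         WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
         WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
         0
     };

     // Either array is then passed as the last argument of wglCreateContextAttribsARB.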
  2. Interesting thought. Alas, 4.3 fails as well. Here's where the code's at now.

     void DisplayWindowsError()
     {
         LPVOID lpMsgBuf;
         DWORD dw = GetLastError();
         FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, dw, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPTSTR)&lpMsgBuf, 0, NULL);
         if (lpMsgBuf)
         {
             std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
             LocalFree(lpMsgBuf);
         }
         else
         {
             std::cout << "Windows Error:" << dw << ": unknown " << std::hex << dw << std::endl;
         }
     }

     GraphicsContext::GraphicsContext(ContextTarget &target)
         : Target(target)
     {
         PIXELFORMATDESCRIPTOR pfd =            // pfd tells Windows how we want things to be
         {
             sizeof(PIXELFORMATDESCRIPTOR),     // Size of this pixel format descriptor
             1,                                 // Version number
             PFD_DRAW_TO_WINDOW |               // Format must support window
             PFD_SUPPORT_OPENGL |               // Format must support OpenGL
             PFD_DOUBLEBUFFER,                  // Must support double buffering
             PFD_TYPE_RGBA,                     // Request an RGBA format
             32,                                // Select our color depth
             0, 0, 0, 0, 0, 0,                  // Color bits ignored
             0,                                 // No alpha buffer
             0,                                 // Shift bit ignored
             0,                                 // No accumulation buffer
             0, 0, 0, 0,                        // Accumulation bits ignored
             24,                                // 24-bit Z-buffer (depth buffer)
             8,                                 // 8-bit stencil buffer
             0,                                 // No auxiliary buffer
             PFD_MAIN_PLANE,                    // Main drawing layer
             0,                                 // Reserved
             0, 0, 0                            // Layer masks ignored
         };

         PixelFormat = 1;
         if (!(PixelFormat = ChoosePixelFormat(target.GetHDC(), &pfd)))
         {
             DisplayWindowsError();
             cout << "Failed to choose pixel format." << endl;
         }
         if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
         {
             //DestroyGameWindow(); //Insert Error
             DisplayWindowsError();
             cout << "Failed to set pixel format." << endl;
         }

         HGLRC temp;
         temp = wglCreateContext(target.GetHDC());
         if (!temp)
         {
             //DestroyGameWindow(); //Insert Error
             cout << "Failed to create context" << endl;
         }
         DisplayWindowsError();

         if (!wglMakeCurrent(target.GetHDC(), temp))
         {
             //DestroyGameWindow();
             cout << "Failed to make current." << endl;
             GLErrorCheck();
         }
         DisplayWindowsError();

         GLenum err = glewInit();
         if (err != GLEW_OK)
         {
             char *error = (char *)glewGetErrorString(err);
             cout << "GLEW INIT FAIL: " << error << endl;
         }

         int contextattribs[] =
         {
             WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
             WGL_CONTEXT_MINOR_VERSION_ARB, 3,
             WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
     #ifdef _DEBUG
             WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
     #endif
             0
         };

         int pfattribs[] =
         {
             WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
             WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
             WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
             WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
             WGL_COLOR_BITS_ARB, 32,
             WGL_DEPTH_BITS_ARB, 24,
             WGL_STENCIL_BITS_ARB, 8,
             //WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
             0
         };

         if (wglewIsSupported("WGL_ARB_create_context") == 1)
         {
             unsigned int formatcount;
             if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
             {
                 std::cout << "Failed to find a matching pixel format" << std::endl;
                 DisplayWindowsError();
             }
             if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
             {
                 DisplayWindowsError();
                 std::cout << "Failed to set pixelformat" << std::endl;
             }
             hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
             if (!hRC)
             {
                 DisplayWindowsError();
                 std::cout << "Failed to create context." << std::endl;
             }
             wglMakeCurrent(nullptr, nullptr);
             DisplayWindowsError();
             wglDeleteContext(temp);
             DisplayWindowsError();
             GLErrorCheck();
             MakeCurrent();
         }
         else
         {
             cout << "Failed to create context again..." << endl;
         }

     #ifdef _DEBUG
         glEnable(GL_DEBUG_OUTPUT);
         glDebugMessageCallback(dbgcallback, nullptr);
     #endif

         char *shadeversion = (char *)glGetString(GL_SHADING_LANGUAGE_VERSION); //GLErrorCheck;
         char *version = (char *)glGetString(GL_VERSION); //GLErrorCheck;
         std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

         glViewport(0, 0, Target.GetWidth(), Target.GetHeight());
         GLErrorCheck();

         SetClearColor(Color(0, 0, 0, 0));
         SetClearDepth(1000.0f);
         //EnableDepthBuffering();
         //DisableDepthTest();
         NormalBlending();
         glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get abstracted
         GLErrorCheck();
         //glLoadIdentity();
     }

     Outputs:

     Windows Error:0: The operation completed successfully.
     Windows Error:0: The operation completed successfully.
     Windows Error:3221692565: unknown c0072095
     Failed to create context.

     Maybe my drivers need updating or something...
  3. I noticed 3221692565 is 0xC0072095. According to the NVidia create context spec, 0x2095 is ERROR_INVALID_VERSION_ARB. So I bumped the version down to 3.3 and it successfully creates a context, which raises more questions, because I should be able to create a 4.2 context. I need a 4.2 context because of shader_storage_buffers and other things.
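     A small illustrative sketch (not from the original code) of decoding that failure: the observed value 0xC0072095 carries the extension's error token in its low 16 bits, so masking it makes the failure readable. The token values below are the ones listed in the WGL_ARB_create_context / _profile specs and in wglext.h.

     #include <windows.h>
     #include <iostream>

     #ifndef ERROR_INVALID_VERSION_ARB
     #define ERROR_INVALID_VERSION_ARB 0x2095
     #endif
     #ifndef ERROR_INVALID_PROFILE_ARB
     #define ERROR_INVALID_PROFILE_ARB 0x2096
     #endif

     // Call right after wglCreateContextAttribsARB() returns null.
     void DisplayContextCreationError()
     {
         DWORD dw = GetLastError();
         switch (dw & 0xFFFF)              // low word holds the extension's error token
         {
         case ERROR_INVALID_VERSION_ARB:
             std::cout << "Invalid/unsupported context version requested." << std::endl;
             break;
         case ERROR_INVALID_PROFILE_ARB:
             std::cout << "Invalid profile bits requested." << std::endl;
             break;
         default:
             std::cout << "Context creation failed, code 0x" << std::hex << dw << std::endl;
             break;
         }
     }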
  4. I suppose there isn't really a great reason not to use GLFW, other than that I do my own window creation as well. I have a little UI framework that the context creation hooks into too. I suppose I could fiddle with GLFW to get it all to work in harmony, but I've been using my own context creation for about a year now, and it only recently decided to die on my laptop. Is there any reason I can't use my own context creation? It's part "not invented here" syndrome and part curiosity as to what I'm doing wrong. Perhaps I'll look at GLFW's source for hints.
  5. Hmm, I placed it into the contextattribs[] array in the new code and I'm still getting the same result. Thanks for the help.
  6. Something weird has happened. I had context creation working just fine: it worked on my desktop and my laptop, and I was happily plugging along doing graphics programming. One day I attempted to build my code on my laptop again and suddenly context creation started to fail. I don't know what has changed.

     My original context creation code looks like this: https://gist.github.com/LordTocs/f227528a729986df9643

     It's sloppy and has next to no error handling, but it at least worked. It still works on my desktop and fails on my laptop. Specifically, wglCreateContextAttribsARB fails.

     I started to modify the code in an attempt to figure out what was wrong. I added some GetLastError() printouts in the hope that I was doing something silly and it would tell me what was wrong, using FormatMessage() to turn error codes into readable strings. Instead of a usable error I was greeted with GetLastError() returning 3221692565, which FormatMessage() had no idea what to do with. A quick cursory internet search led me to a single result on the OpenGL forums, which didn't yield any answers.

     After some reading I was told not to create a forward compatible context, and that I should use wglChoosePixelFormatARB to get the appropriate pixel format. Thinking this was the issue, I tried to use this function; it didn't help.

     So now I'm left with this code that doesn't work and I'm very confused.

     void DisplayWindowsError()
     {
         LPVOID lpMsgBuf;
         DWORD dw = GetLastError();
         FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                       NULL, dw, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPTSTR)&lpMsgBuf, 0, NULL);
         if (lpMsgBuf)
         {
             std::cout << "Windows Error:" << dw << ": " << (char *)lpMsgBuf << std::endl;
             LocalFree(lpMsgBuf);
         }
         else
         {
             std::cout << "Windows Error:" << dw << ": unknown" << std::endl;
         }
     }

     GraphicsContext::GraphicsContext(ContextTarget &target)
         : Target(target)
     {
         PIXELFORMATDESCRIPTOR pfd =            // pfd tells Windows how we want things to be
         {
             sizeof(PIXELFORMATDESCRIPTOR),     // Size of this pixel format descriptor
             1,                                 // Version number
             PFD_DRAW_TO_WINDOW |               // Format must support window
             PFD_SUPPORT_OPENGL |               // Format must support OpenGL
             PFD_DOUBLEBUFFER,                  // Must support double buffering
             PFD_TYPE_RGBA,                     // Request an RGBA format
             32,                                // Select our color depth
             0, 0, 0, 0, 0, 0,                  // Color bits ignored
             0,                                 // No alpha buffer
             0,                                 // Shift bit ignored
             0,                                 // No accumulation buffer
             0, 0, 0, 0,                        // Accumulation bits ignored
             24,                                // 24-bit Z-buffer (depth buffer)
             8,                                 // 8-bit stencil buffer
             0,                                 // No auxiliary buffer
             PFD_MAIN_PLANE,                    // Main drawing layer
             0,                                 // Reserved
             0, 0, 0                            // Layer masks ignored
         };

         PixelFormat = 1;
         if (!(PixelFormat = ChoosePixelFormat(target.GetHDC(), &pfd)))
         {
             DisplayWindowsError();
             cout << "Failed to choose pixel format." << endl;
         }
         if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
         {
             //DestroyGameWindow(); //Insert Error
             DisplayWindowsError();
             cout << "Failed to set pixel format." << endl;
         }

         HGLRC temp;
         temp = wglCreateContext(target.GetHDC());
         if (!temp)
         {
             //DestroyGameWindow(); //Insert Error
             cout << "Failed to create context" << endl;
         }
         DisplayWindowsError();

         if (!wglMakeCurrent(target.GetHDC(), temp))
         {
             //DestroyGameWindow();
             cout << "Failed to make current." << endl;
             GLErrorCheck();
         }
         DisplayWindowsError();

         GLenum err = glewInit();
         if (err != GLEW_OK)
         {
             char *error = (char *)glewGetErrorString(err);
             cout << "GLEW INIT FAIL: " << error << endl;
         }

         int contextattribs[] =
         {
             WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
             WGL_CONTEXT_MINOR_VERSION_ARB, 2,
     #ifdef _DEBUG
             WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
     #endif
             0
         };

         int pfattribs[] =
         {
             WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
             WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
             WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
             WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
             WGL_COLOR_BITS_ARB, 32,
             WGL_DEPTH_BITS_ARB, 24,
             WGL_STENCIL_BITS_ARB, 8,
             0
         };

         if (wglewIsSupported("WGL_ARB_create_context") == 1)
         {
             unsigned int formatcount;
             if (!wglChoosePixelFormatARB(target.GetHDC(), pfattribs, nullptr, 1, (int *)&PixelFormat, &formatcount))
             {
                 std::cout << "Failed to find a matching pixel format" << std::endl;
                 DisplayWindowsError();
             }
             if (!SetPixelFormat(target.GetHDC(), PixelFormat, &pfd))
             {
                 DisplayWindowsError();
                 std::cout << "Failed to set pixelformat" << std::endl;
             }
             hRC = wglCreateContextAttribsARB(Target.GetHDC(), nullptr, contextattribs);
             if (!hRC)
             {
                 DisplayWindowsError();
                 std::cout << "Failed to create context." << std::endl;
             }
             wglMakeCurrent(nullptr, nullptr);
             DisplayWindowsError();
             wglDeleteContext(temp);
             DisplayWindowsError();
             GLErrorCheck();
             MakeCurrent();
         }
         else
         {
             cout << "Failed to create context again..." << endl;
         }

     #ifdef _DEBUG
         glEnable(GL_DEBUG_OUTPUT);
         glDebugMessageCallback(dbgcallback, nullptr);
     #endif

         char *shadeversion = (char *)glGetString(GL_SHADING_LANGUAGE_VERSION); //GLErrorCheck;
         char *version = (char *)glGetString(GL_VERSION); //GLErrorCheck;
         std::cout << "Version: " << version << std::endl << "Shading Version: " << shadeversion << std::endl;

         glViewport(0, 0, Target.GetWidth(), Target.GetHeight());
         GLErrorCheck();

         SetClearColor(Color(0, 0, 0, 0));
         SetClearDepth(1000.0f);
         //EnableDepthBuffering();
         //DisableDepthTest();
         NormalBlending();
         glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST); //Doesn't get abstracted
         GLErrorCheck();
         //glLoadIdentity();
     }

     Gist mirror: https://gist.github.com/LordTocs/9266d8c8f7e3eb9a498e

     If anyone knows what I'm doing wrong I'd love to know. Thanks.
  7.     Wait, did you donate $300, or did you get something for it? Cause $300 was the price of the DK1... if you got one of those then that sounds like your transaction was complete and you got what you paid for. I don't expect nVidia to give me a discount every time they release a new graphics card.

     While it's true I did get a kit for my 300, the original plan wasn't even to have a DK2. The original plan was that if you paid 300 dollars you got the dev kit that was proper for developing software for the consumer version, which DK1 is not suited for. The difference is that with NVidia you're getting a complete product: you pay the money and you get the item. You don't pay the money up front, wait for them to develop the card while possibly changing some of the details, and then end up requiring a second card for proper use.

     So while I did get a really cool dev kit which has been fun to use and make things for, and one could argue my "transaction was complete", I still feel a little underappreciated, mostly because DK2 wasn't in the picture when I paid that 300 dollars, and now I need an additional 350 dollars (+ 22 shipping) to get what I originally expected: a dev kit suited for developing for the consumer version. I would have appreciated a little "sorry for the bump in the roadmap" bonus.

     However, more to your point, I'm not an "investor", so they don't really have an obligation to pander to my interests. It's just how I felt, regardless of whether it was justified.
  8. I was already a little peeved they didn't give any sort of bonus to the original Kickstarter backers. I went in for 300 bucks on day 2, and for DK2 they didn't give us a discount (understandable pre-acquisition), but they could have given us priority ordering or some other low-cost bonus for being an early supporter. I felt a little underappreciated. Now I feel a lot underappreciated. I didn't want my contribution to go to Facebook, a company that focuses on monetization and microtransactions as a fundamental part of games. I wanted VR to be about the experience and not about squeezing the last dime out of people by manipulating them.

     It seems like Palmer's incentive to sell is so they can use the money to manufacture custom parts. I can't see Facebook not wanting some sort of required integration. Perhaps some sort of required Facebook Credit interaction, or some sort of Facebook login. They hinted at a "VR Launcher" but I can't see myself wanting that either. I liked the simplicity of the dev kit. You didn't need special software running in the background or some strange interface that felt like the bloat-ware you get with printers. It was just a simple library you could integrate into your game, and you got this great experience.

     It's not that Facebook doesn't give any "pros" to Oculus, but the potential "cons" are entirely too scary for me.
  9. I added a new "input type" for vertex attributes because I needed them for particle effects. Individual particle information is stored in a VBO, so it has to be passed in through the vertex shader. So if my vertex shader has

     out vec4 VertexColor;

     in my material file I can type

     Color vertex_input("VertexColor")

     and the generator will create

     in vec4 VertexColor;
     vec4 Color;

     and

     Color = VertexColor;

     Which adds an extra step, but I could add a check to see if the desired name and the input name are the same and, if so, only generate the input variable. It's pretty easy with my implementation to add new ways of accepting input, since it just involves deriving from a base input class and writing a handful of methods.
  10. For the longest time I've struggled with how I wanted to handle materials in my graphics framework. When searching around for existing solutions I found basically two things:

      A: Shaders with strict inputs: a single shader that has specific inputs that are textures, floats, etc.
      B: Node-based shaders: crazy flexible graphical editors for materials constructed from graphs of various building-block elements.

      'A' wasn't flexible enough for me, and 'B' seemed like something way too big and time-consuming to properly create myself. So I decided on something somewhat in between... I built a GLSL preprocessor: a templated shader generator. It takes in GLSL code with extra markup. Instead of specifying inputs as strictly typed GLSL uniform variables, I tell the generator the input names and the desired final data type (float, vec2, vec3, vec4). Then a set of inputs is given to the generator and it creates a shader that samples the textures, looks up values, and makes sure there's a variable with the requested name that contains the appropriate value. It's easier to show what I mean...

      Templates

      Here's my Blinn-Phong material shader template:

      #version 140
      #include "shaders/brdf/blinnphong.hglsl"
      #include "shaders/brdf/NormalMapping.hglsl"

      in vec2 TextureCoordinate;
      in vec3 GeometryNormal;
      in vec3 GeometryTangent;

      void ShadePrep ()
      {
      }

      vec4 ConstantShading(vec4 AmbientLight)
      {
          vec4 result = AmbientLight * DiffuseColor;
          result += Emissive;
          return result;
      }

      vec4 Shade (vec3 LightDir, vec3 ViewDir, vec3 LightColor, float Attenuation)
      {
          vec3 normal = normalize(GeometryNormal);
          normal = NormalMapping(normal, normalize(GeometryTangent), NormalMap);
          return BlinnPhong (LightDir, ViewDir, LightColor, Attenuation, normal, DiffuseColor, SpecularColor, SpecularPower, SpecularIntensity);
      }

      A few markup tags drive the generator:

      • Specifies input names and desired types to the generator. Some inputs can be optional; the generator won't raise an error if these aren't supplied.
      • Tells the generator where to define extra variables it may need. While technically redundant, because the extra variables could be placed where the tag is, I wrote it with the tag and didn't bother to remove it.
      • Tells the generator where to put the code that's needed to get the proper final value, such as sampling a texture.
      • Lets the generator do different things based on the types of input you supply. In the above shader I add an extra line of code to transform the normal if a NormalMap texture is supplied. I also apply an Emissive term if the shader is supplied one.

      Input Sets

      Sets can be constructed in code or loaded from a file. The file contains text like this:

      DiffuseColor tex("sword/diffuse.png")
      SpecularPower 6.0
      SpecularIntensity tex("sword/specular.png").r
      SpecularColor color(255,255,255)
      NormalMap tex("sword/normal.png")
      Emissive tex("sword/glow.png")

      It's pretty easy to tell what this does. The generator takes the input set and generates a shader which can utilize it. You might notice I specify a swizzle for SpecularIntensity: you can pick different channels out of a texture for a certain input, and if you specify the same texture twice it's smart enough to only sample it once and swizzle the sample in the shader.

      When I plug those inputs in, this is what it generates (I fixed the whitespace up though...):

      #version 140
      #include "shaders/brdf/blinnphong.hglsl"
      #include "shaders/brdf/NormalMapping.hglsl"

      uniform sampler2D Texture_0;
      uniform vec4 SpecularColor;
      uniform float SpecularPower;
      uniform sampler2D Texture_4;
      uniform sampler2D Texture_2;
      uniform sampler2D Texture_1;

      in vec2 TextureCoordinate;
      in vec3 GeometryNormal;
      in vec3 GeometryTangent;

      vec4 DiffuseColor;
      vec4 Emissive;
      vec3 NormalMap;
      float SpecularIntensity;

      void ShadePrep ()
      {
          vec4 Sample0 = texture2D(Texture_0, TextureCoordinate);
          DiffuseColor = Sample0.xyzw;
          vec4 Sample1 = texture2D(Texture_1, TextureCoordinate);
          Emissive = Sample1.xyzw;
          vec4 Sample2 = texture2D(Texture_2, TextureCoordinate);
          NormalMap = Sample2.xyz;
          vec4 Sample4 = texture2D(Texture_4, TextureCoordinate);
          SpecularIntensity = Sample4.x;
      }

      vec4 ConstantShading(vec4 AmbientLight)
      {
          vec4 result = AmbientLight * DiffuseColor;
          result += Emissive;
          return result;
      }

      vec4 Shade (vec3 LightDir, vec3 ViewDir, vec3 LightColor, float Attenuation)
      {
          vec3 normal = normalize(GeometryNormal);
          normal = NormalMapping(normal, normalize(GeometryTangent), NormalMap);
          return BlinnPhong (LightDir, ViewDir, LightColor, Attenuation, normal, DiffuseColor, SpecularColor, SpecularPower, SpecularIntensity);
      }

      These are just simple inputs, but you can do more interesting things with it as well. For particle systems I can connect the material inputs to values passed in from the particle data, for instance getting an index into an array of different particle textures. If you wanted an animated texture, you could have an input type represent all the frames and switch between them. Additionally, the input sets generate a hash code (probably) unique to the generated shader, so if you have similar input sets that use the same generated shader, it's only created once.

      I was also hoping to cache the compiled shader binaries. However, even in the latest OpenGL the spec says that glShaderBinary must be supported but there doesn't have to exist a format to save them in, so it's pretty useless and disappointing. It is possible to cache the linked programs, though.

      It was fairly easy to implement, allows decent flexibility, and cuts down on the time I have to spend writing little differences in shaders. There are obviously a lot of improvements I could make (as with anything) but I'm getting a lot of mileage out of its current state. What do you think? I'd love some feedback.

      Also, Obligatory Progress Screenshot:
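      On caching the linked programs: a minimal sketch of what that could look like with ARB_get_program_binary (illustrative only; file I/O and error handling omitted, and the blob you get back is driver-specific, so a fallback to recompiling from source is still needed):

      #include <GL/glew.h>
      #include <vector>

      // Retrieve a linked program's binary so it can be written to disk.
      // Requires GL 4.1 or ARB_get_program_binary. Ideally set
      // GL_PROGRAM_BINARY_RETRIEVABLE_HINT via glProgramParameteri before linking.
      std::vector<char> GetProgramBlob(GLuint program, GLenum &formatOut)
      {
          GLint length = 0;
          glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

          std::vector<char> blob(length);
          glGetProgramBinary(program, length, nullptr, &formatOut, blob.data());
          return blob;
      }

      // On later runs: feed the blob back instead of compiling/linking from source.
      bool LoadProgramBlob(GLuint program, GLenum format, const std::vector<char> &blob)
      {
          glProgramBinary(program, format, blob.data(), (GLsizei)blob.size());

          GLint linked = GL_FALSE;
          glGetProgramiv(program, GL_LINK_STATUS, &linked);
          return linked == GL_TRUE;        // if the driver rejects it, rebuild from source
      }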
  11. I know this topic is 14 days old, but I started working on the subsurface scattering that uses these maps and needed a way to generate them. I found that XNormal has a built-in option for this type of map. It's called "Translucency" rather than "Local Thickness", and you won't need to flip your normals and such. It also has a C++ library if you want to integrate baking into some sort of tool you're building, and it's free.

      Edit: I didn't read carefully enough; XNormal was already suggested.

      Edit: Here's the translucency map generated from XNormal used in the shader...
  12. Sounds like you're maybe talking about instancing? glDrawElementsInstanced() will draw the same model several times. In order to position things differently you need to do one of two things:

      A) Pass each transform and model-specific data in via a Uniform Buffer Object
      B) Pass each transform and model-specific data in via a VBO with a vertex attribute divisor (a sketch of this is below)

      Either way, you can draw many models (all the same) with a single draw call at different transformations and such.
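      A rough sketch of option B (illustrative only; the attribute locations, struct layout, and names here are made up for the example):

      #include <vector>
      #include <GL/glew.h>

      struct InstanceData { float transform[16]; };   // one mat4 per instance

      void SetupInstanceBuffer(GLuint vao, const std::vector<InstanceData> &instances)
      {
          GLuint instanceVBO;
          glGenBuffers(1, &instanceVBO);
          glBindVertexArray(vao);
          glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
          glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(InstanceData),
                       instances.data(), GL_STATIC_DRAW);

          // A mat4 attribute occupies four consecutive attribute locations (here 4..7).
          for (int i = 0; i < 4; ++i)
          {
              glEnableVertexAttribArray(4 + i);
              glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE, sizeof(InstanceData),
                                    (void *)(sizeof(float) * 4 * i));
              glVertexAttribDivisor(4 + i, 1);   // advance once per instance, not per vertex
          }
      }

      // Later: draw 'instanceCount' copies of the same indexed mesh in one call.
      // glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, instanceCount);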
  13. Well, the loops aren't entirely slow. They're broken up by 32x32 tile, so spatially, pixels in the same area are using the same list of lights, which means the same warp/wavefront (I think those are the NV/AMD words, respectively) has the potential to be using the same list.

      http://www.cse.chalmers.se/~olaolss/papers/tiled_shading_preprint.pdf

      Though when I looked up the undefined-ness of texture sampling, you were correct. However, there are some texture samplings that are OK, mainly ones that don't rely on the computation of mip-maps or filtering. Because I'm using texelFetch() it should be defined... I think. (http://www.opengl.org/wiki/Sampler_(GLSL)#Non-uniform_flow_control)

      That page points out that the sampling of shadow maps is non-uniform, and I'm relying on the filtering. Perhaps that's the cause of the issue, though I'm not really sure how to fix it. Thanks for your input.

      EDIT: Missed the new post by Kaptein. My shader loader checks for compile errors on every shader and link errors on every program. If anything comes up it prints the result and asserts(false), so I can see the error. It's compiling. I just didn't include the vertex shader as it didn't seem related and there was already an enormous bit of code to look through. The vertex shader also isn't doing anything interesting.
  14. I recently completed my single-pass order-independent transparency shader and decided to add some shadows to my lighting shader. Since my lighting is computed with tiled forward shading, I put my shadow maps into a texture array of cube maps. When I added the line of code to sample the cube maps, it started to hang on glDrawElements(). After some time the graphics driver kills the program for having too many errors. However, it doesn't hang right away; there's a couple seconds of it working correctly before it breaks.

      I gave it a try on my laptop (NV 630m) and it works completely, and seemingly smoother than my desktop (NV 770) without the shadows.

      If I comment out the line sampling the cube map array for shadows, it works:

      attenuation *= texture(ShadowMaps, vec4(WL, shadow), comparedepth);

      Curiously, if I leave the line sampling the shadows, don't call my BRDF portion, and instead output attenuation, the shader doesn't hang:

      color += vec4(attenuation, attenuation, attenuation, 0.1);

      What that looks like:

      I've pasted my shaders here, since they're kind of large: http://pastie.org/8624432#85,92

      And an OpenGL log, though CodeXL doesn't seem to want to capture the whole log for a single frame: https://gist.github.com/LordTocs/c2a59de6c3d9fa811d2b

      I'm hoping it's not the drivers, because I tried updating them to the latest. I hope someone spots something I'm doing incorrectly. I know it's a lot to sift through, but I'm running out of ideas.

      Screenshot from my laptop: http://i.imgur.com/9WspPLc.png

      EDIT: I've since added a debug callback to my OpenGL context. When the shader locks up, this comes over the debug output:

      Debug(api, m): PERFORMANCE - Program/shader state performance warning: Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.
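      For reference, a minimal sketch of how a shadow cube map array like the ShadowMaps sampler above can be created so that the hardware comparison in texture(..., compare) has something to work with (illustrative only; the size, format, and filtering here are assumptions, not the code from this post):

      #include <GL/glew.h>

      // Allocate a cube map array of depth maps for 'count' point lights.
      // Requires GL 4.0+ / ARB_texture_cube_map_array.
      GLuint CreateShadowCubeArray(int faceSize, int count)
      {
          GLuint tex;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_CUBE_MAP_ARRAY, tex);

          // Depth is in layer-faces: 6 faces per cube map.
          glTexImage3D(GL_TEXTURE_CUBE_MAP_ARRAY, 0, GL_DEPTH_COMPONENT24,
                       faceSize, faceSize, 6 * count, 0,
                       GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

          glTexParameteri(GL_TEXTURE_CUBE_MAP_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_CUBE_MAP_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

          // These two parameters are what make a samplerCubeArrayShadow comparison work.
          glTexParameteri(GL_TEXTURE_CUBE_MAP_ARRAY, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
          glTexParameteri(GL_TEXTURE_CUBE_MAP_ARRAY, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
          return tex;
      }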
  15. Over the past couple of days I wanted to try out order-independent transparency. AMD showed it off in their "Mecha Demo", and with a little help from here I was able to get it functioning in my own graphics framework. In Cyril's blog he uses some bindless buffer extensions from NVidia; in my implementation I use Image Load Store (Cyril also had an implementation for this). I also swapped out one of Cyril's bindless buffers for an Atomic Counter, allowing me to be independent of NVidia extensions (despite how cool bindless buffers are). While I was basically re-implementing what Cyril had in his blog, I learned a lot and it was a lot of fun.

      I don't know how efficient it is; my 770 GTX sometimes stutters when I fill up the screen with too many layers. But I am running a debug build with lots of checks and outputs, so it's hard to tell what the source of the lag is. I paired the transparency stuff with tiled forward shading so I can evaluate many lights in a scene with transparent geometry. I haven't done much benchmarking or optimization work. However, I was happy it functioned and thought it was worth a share. Glass just doesn't look quite right without some refraction, though. And the table's texture is horrendous that close to the camera.
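      A rough sketch of the resources this kind of per-pixel linked-list OIT needs when built on Image Load Store plus an atomic counter (illustrative only; names, node format, sizes, and binding points are assumptions, not the code from this post):

      #include <GL/glew.h>

      // Per-fragment node: packed color, depth, and "next" pointer (4 uints = one GL_RGBA32UI texel).
      struct OITBuffers { GLuint headImage, nodeBuffer, nodeTexture, atomicCounter; };

      OITBuffers CreateOITBuffers(int width, int height, int nodesPerPixel)
      {
          OITBuffers b = {};
          const GLsizeiptr maxNodes = GLsizeiptr(width) * height * nodesPerPixel;

          // Head-pointer image: one uint per pixel, cleared to an "end of list" marker each frame.
          glGenTextures(1, &b.headImage);
          glBindTexture(GL_TEXTURE_2D, b.headImage);
          glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
                       GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
          glBindImageTexture(0, b.headImage, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);

          // Node pool: a buffer texture the fragment shader appends into with imageStore().
          glGenBuffers(1, &b.nodeBuffer);
          glBindBuffer(GL_TEXTURE_BUFFER, b.nodeBuffer);
          glBufferData(GL_TEXTURE_BUFFER, maxNodes * 4 * sizeof(GLuint), nullptr, GL_DYNAMIC_DRAW);
          glGenTextures(1, &b.nodeTexture);
          glBindTexture(GL_TEXTURE_BUFFER, b.nodeTexture);
          glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32UI, b.nodeBuffer);
          glBindImageTexture(1, b.nodeTexture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32UI);

          // Atomic counter the fragment shader uses to allocate the next free node index.
          glGenBuffers(1, &b.atomicCounter);
          glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, b.atomicCounter);
          glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), nullptr, GL_DYNAMIC_DRAW);
          glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, b.atomicCounter);
          return b;
      }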