michalferko
  1. OpenGL 3.3+ Tutorials

    Hi, nice to see students from our university retaining an interest in OpenGL. Keep up the good work.

    However, the tutorials seem to be a bit dated, and I suggest you revise the older ones. Specifically, tutorial no. 3 introduces shaders but only informs the user that a shader could not be compiled (no errors are shown). Retrieving the shader info log when that happens should be mandatory, and people should be taught to do it from the beginning (see the sketch below).

    The same goes for the font tutorial. Creating a separate texture for each character is probably the worst possible way to do it.

    You created both of these a few years back, so don't think I'm saying your tutorials are bad. I am just trying to help improve the quality of your work.
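    For reference, a minimal sketch of what the compile check could look like (compileShader is a hypothetical helper name; it assumes a current OpenGL context and a loader such as GLEW already initialized):

        #include <GL/glew.h>
        #include <cstdio>
        #include <vector>

        // Compile a shader and dump the info log on failure.
        GLuint compileShader(GLenum type, const char* source)
        {
            GLuint shader = glCreateShader(type);
            glShaderSource(shader, 1, &source, nullptr);
            glCompileShader(shader);

            GLint status = GL_FALSE;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
            if (status != GL_TRUE)
            {
                GLint length = 0;
                glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
                std::vector<char> log(length > 1 ? length : 1);
                glGetShaderInfoLog(shader, (GLsizei)log.size(), nullptr, log.data());
                std::fprintf(stderr, "Shader compilation failed:\n%s\n", log.data());
                glDeleteShader(shader);
                return 0;
            }
            return shader;
        }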
  2. OpenGL Drawing to own buffer?

    First of all, it's 32-bit (4-byte) color, not 32-byte color, so your buffer size should be 1280 * 800 * 4.

    Second, you are writing to the wrong pixels. Each pixel occupies 4 bytes, but you assume the size of a pixel is just 1 byte:

        void drawPixel(const int x, const int y)
        {
            int offset = (y * SCREEN_WIDTH + x) * 4;
            back_buffer[offset + 0] = 0xFF; // set R value to 255
            back_buffer[offset + 1] = 0xFF; // set G value to 255
            back_buffer[offset + 2] = 0xFF; // set B value to 255
            back_buffer[offset + 3] = 0xFF; // set A value to 255
        }

    I still think the problem is somewhere else; you should post your whole code. Most probably the alpha values were never set, making the result fully transparent. The blending setup can also affect the result of such operations, for example by ignoring pixels with alpha set to zero.

    There are a lot more things that can go wrong and prevent you from seeing an image. I suggest you work through a detailed tutorial before starting experiments like this.

    Finally, why would you do such a thing? Just use textures and shaders to produce images. You are throwing GPU acceleration out the window with this approach.
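    If you really need to display a CPU-side buffer, uploading it into a texture each frame could look roughly like this (a sketch; tex and back_buffer are assumed names, and the texture is created once up front):

        // One-time setup: allocate an RGBA8 texture matching the buffer size.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1280, 800, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Every frame, after writing into back_buffer:
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1280, 800,
                        GL_RGBA, GL_UNSIGNED_BYTE, back_buffer);
        // ...then draw a full-screen quad sampling this texture.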
  3. There's AMD's CodeXL: http://developer.amd.com/tools-and-sdks/opencl-zone/opencl-tools-sdks/codexl/

    CodeXL is a successor to gDEBugger and offers pretty much the same functionality with small updates. CodeXL also works on NVIDIA GPUs, but you cannot debug shaders etc.; viewing buffer and texture contents works OK. It does not, however, support direct state access calls, which is a pain in the ass.

    The second option is NVIDIA Nsight. AFAIK it works only on NVIDIA GPUs (I don't have an AMD GPU to test with), but it works really great: http://www.nvidia.com/object/nsight.html

    Both support Visual Studio integration and both really help with debugging. I myself consider Nsight more mature and somewhat easier to work with.

    There was a nice video about Nsight, I think this is it: https://www.youtube.com/watch?v=HAm5ziXE6pA
  4. For int and uint layouts, you have to use the corresponding image types, prefixed "i" and "u": https://www.opengl.org/registry/specs/EXT/shader_image_load_store.txt

    So, if you use iimage1D, it will work. The mapping is:

    rgba32i  -> iimage*D
    rgba32ui -> uimage*D
    rgba32f  -> image*D
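    In GLSL, the matching declarations could look like this (a sketch in GLSL 4.2 style; the binding points and variable names are assumptions):

        layout(binding = 0, rgba32f)  uniform image2D  floatImg; // float formats -> image*
        layout(binding = 1, rgba32ui) uniform uimage2D uintImg;  // unsigned int  -> uimage*
        layout(binding = 2, rgba32i)  uniform iimage2D intImg;   // signed int    -> iimage*

        // Note the return types follow suit, e.g. reading from the uint image
        // returns a uvec4, not a vec4:
        // uvec4 texel = imageLoad(uintImg, someIvec2Coord);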
  5. Spells in entity component systems

    I would make the staff component hold only a reference to the current spell (which could be a child entity). You would then have 3 instances of a spell entity, one for each of your spells, and each would contain a component with all the spell info like cooldown, cost, etc. Then keep a list of all the spells your character has somewhere, and on pressing Q or E you just remove the child entity from the staff and add a new entity that is the next spell in your list (see the sketch below). The spell entity itself could have an animation specific to the spell, which you could then show. Very simple imho.

    I hope you got the idea :)
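    A rough C++ sketch of the idea (all names here are hypothetical, not from any particular ECS library):

        #include <cstddef>
        #include <string>
        #include <vector>

        using Entity = unsigned int;          // an entity is just an id here

        struct SpellComponent {               // attached to each spell entity
            std::string name;
            float cooldown = 0.0f;
            int   manaCost = 0;
        };

        struct StaffComponent {               // attached to the staff entity
            Entity currentSpell = 0;          // child entity for the equipped spell
        };

        struct Character {
            std::vector<Entity> knownSpells;  // all spells this character owns
            std::size_t selected = 0;
        };

        // On Q/E: swap the staff's child entity for the previous/next known spell.
        void cycleSpell(Character& c, StaffComponent& staff, int direction)
        {
            const int n = static_cast<int>(c.knownSpells.size());
            if (n == 0) return;
            const int next = (static_cast<int>(c.selected) + direction + n) % n;
            c.selected = static_cast<std::size_t>(next);
            staff.currentSpell = c.knownSpells[c.selected];
        }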
  6. These are described in the actual DSA extension: http://www.opengl.org/registry/specs/EXT/direct_state_access.txt You can get the arguments and their names from that page.

    If you are unsure, the usual rule is that whatever you would pass to a Bind function (glBindVertexArray, glBindTexture) becomes the first parameter(s) of the DSA call. The remaining parameters are then exactly the same as in the non-DSA function (see the sketch below).

    Also take a look at this: http://www.g-truc.net/post-0363.html

    Keep in mind that when working with vertex array objects, there is one thing you cannot do with DSA and must do the old way: http://stackoverflow.com/questions/3776726/how-to-bind-a-element-buffer-array-to-vertex-array-object-using-direct-state-a
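    For example (a sketch; tex is an already-created texture object):

        // Non-DSA: bind first, then modify whatever is currently bound.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // EXT DSA: the object and target move to the front of the parameter
        // list; the rest matches the non-DSA call exactly.
        glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);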
  7. What does "does not work well" mean? First of all, your commented line multiplies the position in the wrong order (it should be matrix first, then vector), since you are using column vectors (see the vertexpos line). Another problem with that line is that you are assigning a vec4 (the result of multiplying a 4x4 matrix with a 4x1 vector) to a vec3 variable. This should not compile, and you should check the shader info log (see the sketch below).

    If it's something else, we need more information before we can help you.
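    For illustration (a sketch; the matrix and variable names are assumptions, since the original code isn't shown):

        // Column vectors: the matrix goes on the left, and the vec4 result
        // needs a vec4 variable (or an explicit .xyz swizzle), not a vec3.
        vec4 worldPos  = modelMatrix * vec4(position, 1.0);
        vec3 worldPos3 = (modelMatrix * vec4(position, 1.0)).xyz;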
  8. peculiar request

    If you still insist on your idea (despite everyone recommending against it), you could use an atomic counter variable (with default value 0) and have the first fragment shader invocation that executes increment it. Every invocation would test the atomic variable at the beginning, and if it has already been set to 1, it would discard its fragment (see the sketch below). But it's an ugly hack.
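    A rough GLSL sketch of that hack (the binding point is an assumption; incrementing and testing in a single call avoids a race between the test and the increment):

        #version 420

        layout(binding = 0) uniform atomic_uint firstFragment; // reset to 0 each frame
        out vec4 fragColor;

        void main()
        {
            // atomicCounterIncrement returns the value *before* the increment,
            // so exactly one invocation sees 0; every other one discards.
            if (atomicCounterIncrement(firstFragment) != 0u)
                discard;
            fragColor = vec4(1.0); // ...shade the single surviving fragment...
        }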
  9. deferred shadow mapping problem

    There are lots of things that can go wrong. Make sure:

    1. The shadow map is bound correctly and you can read values from it using texture2D in the shader. The uniform value of tShadowMap might be set up wrong, the shadow map might be bound to the wrong unit, and it is quite possible the sampler state is the culprit (expecting mip-maps that were never generated for the shadow map).
    2. The matrices are correct. Test them on the CPU with a few points and make sure the results are correct. You might also set the output color to the computed frag coords to verify them visually (see the sketch below).
    3. gl_FragData[1] writes to the second bound MRT buffer. Don't you want to write to the first?

    These are probably the most common mistakes, but it could still be something different, like a wrong format or errors when rendering the shadow map. Debug it step by step, confirm which parts are correct, and you will quickly identify the problem.
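    For the visual check in point 2, something like this could work (a sketch; shadowCoord is an assumed name for the light-space position after the bias matrix):

        // Output the projected shadow-map coordinates as a color instead of
        // shading; anything outside [0,1] shows up as clamped colors.
        vec3 coords = shadowCoord.xyz / shadowCoord.w;
        gl_FragData[0] = vec4(coords, 1.0);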
  10. Do you extend GLSL?

    I am currently working on simplified GLSL effects, kinda like http://gleffect.sourceforge.net/ but much simpler and focused on GLSL 330 and higher. I am writing it myself to plug into my engine: I didn't want the external dependencies that GLeffect! includes, and I wanted something that fits the engine perfectly.
  11. This list almost exactly describes my beginning years; the order was not quite the same and there were some extras (like texture loading), but I certainly did all of those things.

    However, that was about 10 years ago, and 3D programming has since evolved to the point where shader programming is a must, the sooner the better.
  12. Managing Decoupling

    Very interesting. I am definitely going to rethink the culprits mentioned in Section 1.

    I have one question. Obviously, the rendering system should be separated from the game object (or entity) system, and both systems are represented by a scene graph (not sure what it's called on the entity side). My question is: should the hierarchy be rebuilt for the rendering system? We will probably be creating entities that have some kind of renderable component, but we also want to keep a hierarchy on the renderer side for (at least) spatial culling. Am I correct, or should it be done some other way?