dpadam450

  1. FINALLY, upgrading my engine to OpenGL 4. I was having some trouble, so I started with a stripped-down application and was wondering whether VAOs are required, because I have a sample working, but if I remove the VAO it no longer draws my triangle.
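    In a core profile (3.2+), a bound VAO is in fact required for glDrawArrays; with no VAO bound, the draw call is simply invalid, which matches the triangle disappearing. A minimal sketch of the setup, assuming the context/loader is already initialized and the shader reads position from location 0 (variable names here are illustrative):

      // Core profile: the VAO stores all vertex attribute state, so create
      // and bind one before setting up attributes or drawing.
      GLuint vao = 0, vbo = 0;
      glGenVertexArrays(1, &vao);
      glBindVertexArray(vao);

      float positions[] = { -0.5f, -0.5f, 0.0f,
                             0.5f, -0.5f, 0.0f,
                             0.0f,  0.5f, 0.0f };
      glGenBuffers(1, &vbo);
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

      glEnableVertexAttribArray(0);  // matches layout(location = 0) in the shader
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

      // Per frame: binding the VAO restores the attribute state in one call.
      glBindVertexArray(vao);
      glDrawArrays(GL_TRIANGLES, 0, 3);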
  2. Optimization SIMD 256-bit

    Sadly, the 256-bit SIMD version was the same speed as my normal matrix multiplication; the 128-bit SIMD version was 2 ms faster.
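    For reference, a sketch of what the 128-bit (SSE) path can look like for a 4x4 multiply; this is illustrative, not the poster's code, and assumes column-major float[16] matrices:

      #include <xmmintrin.h>  // SSE intrinsics

      // C = A * B for column-major 4x4 matrices. Column i of C is a linear
      // combination of A's columns weighted by the entries of B's column i.
      void mat4MulSSE(const float* A, const float* B, float* C)
      {
          __m128 a0 = _mm_loadu_ps(&A[0]);
          __m128 a1 = _mm_loadu_ps(&A[4]);
          __m128 a2 = _mm_loadu_ps(&A[8]);
          __m128 a3 = _mm_loadu_ps(&A[12]);
          for (int i = 0; i < 4; ++i) {
              __m128 r =        _mm_mul_ps(a0, _mm_set1_ps(B[4 * i + 0]));
              r = _mm_add_ps(r, _mm_mul_ps(a1, _mm_set1_ps(B[4 * i + 1])));
              r = _mm_add_ps(r, _mm_mul_ps(a2, _mm_set1_ps(B[4 * i + 2])));
              r = _mm_add_ps(r, _mm_mul_ps(a3, _mm_set1_ps(B[4 * i + 3])));
              _mm_storeu_ps(&C[4 * i], r);
          }
      }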
  3. Optimization SIMD 256-bit

    Right, that was on that website; that run-time detection was what I was looking for.
  4. Optimization SIMD 256-bit

    NVM, found a solution: https://software.intel.com/en-us/articles/introduction-to-intel-advanced-vector-extensions
  5. I've been starting to optimize my code in any way possible. I saw that some CPUs actually have 256-bit SIMD, but I was wondering if there is a way to detect this and fall back to 128-bit on an unsupported CPU, or how else to deal with this.
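    One common approach is a run-time CPUID check plus a function-pointer fallback; a sketch under those assumptions (cpuSupportsAVX is a name invented here). Note that AVX also needs the OS to save the YMM registers, which is what the XGETBV check covers:

      #if defined(_MSC_VER)
      #include <intrin.h>

      bool cpuSupportsAVX()
      {
          int info[4];
          __cpuid(info, 1);
          bool osxsave = (info[2] & (1 << 27)) != 0; // OS uses XSAVE/XRSTOR
          bool avx     = (info[2] & (1 << 28)) != 0; // CPU advertises AVX
          if (!osxsave || !avx)
              return false;
          // The OS must actually preserve YMM state across context switches.
          unsigned long long xcr0 = _xgetbv(0);
          return (xcr0 & 0x6) == 0x6;                // XMM and YMM enabled
      }
      #else
      // GCC/Clang expose the same checks through a builtin.
      bool cpuSupportsAVX() { return __builtin_cpu_supports("avx"); }
      #endif

      // Usage sketch: choose the code path once at startup.
      // matMul = cpuSupportsAVX() ? mat4MulAVX : mat4MulSSE;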
  6. IBL Diffuse wrong color

    I can support HDR in my engine, but at the asset level I don't support HDR textures. So yes, at some point I need to support loading my skies specifically as HDR (other than possibly an emissive channel, I can't think of another game asset that would need HDR). HDR and PBR are completely separate things: PBR is about how light interacts and bounces; HDR is about capturing a proper range of light/photons. So no, HDR is not a requirement of PBR.
  7. IBL Diffuse wrong color

    The skybox is my cubemap for IBL for my PBR shader.
  8. https://tse3.mm.bing.net/th?id=OIP.fCJwPm8kAFQIdWH9AZ7lWQEsDI&pid=15.1&P=0&w=251&h=168

    First, I'm using standard RGB8 textures for my skybox, not HDR. If I use a skybox with an image like the one above, downsampling it simply gives me a blue tint down my mip chain, so for PBR the rougher surfaces all look blue-tinted when they shouldn't. I was thinking maybe this was because of using LDR textures, where in HDR the brighter pixels would spread further and make the downsampled levels brighter. I would think the clouds wouldn't be bright enough to pull the downsamples toward white even as HDR, but maybe that is wrong. Even on a very cloudy day with minimal sun and blue sky, I would think HDR would still just downsample to blue, but it depends on how many bright pixels are clustered and how high their range is.
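    To illustrate the clamping intuition with toy numbers (not the poster's data): averaging one very bright sun texel with three dim sky texels behaves very differently depending on whether the sun was clamped to 1.0 by an RGB8 format first:

      #include <algorithm>
      #include <cstdio>

      int main()
      {
          // Hypothetical red-channel radiances: one sun texel, three sky texels.
          float texels[4] = { 40.0f, 0.1f, 0.1f, 0.1f };
          float hdrAvg = 0.0f, ldrAvg = 0.0f;
          for (float t : texels) {
              hdrAvg += t;                  // true radiance survives the average
              ldrAvg += std::min(t, 1.0f);  // RGB8 clamps before averaging
          }
          hdrAvg /= 4.0f;  // ~10.08: the sun dominates, the mip level stays bright
          ldrAvg /= 4.0f;  // ~0.33:  the sun's energy is lost, the mip goes dim
          std::printf("HDR mip texel %.2f, LDR mip texel %.2f\n", hdrAvg, ldrAvg);
      }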
  9. When you get more complex animations driven by input changes (think of something like FIFA), you may be blending several animations, and that is easier to deal with on the CPU, especially since you will need the final bone transforms for physics-related things. However, putting baked animations into a buffer/texture can be good for something like crowds, where they have a few idle or walk animations. You can then offset into the buffer and have all your instances playing a random section of an idle loop.
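    A sketch of the baking step, assuming a hypothetical sampleAnimation helper that evaluates a clip's bone matrix at a given frame; the layout chosen here is one row per frame with four RGBA32F texels per bone matrix:

      // Layout: row = frame; each bone occupies 4 RGBA32F texels (one per
      // matrix column), so the texture is (numBones * 4) wide, numFrames tall.
      std::vector<float> texels(numFrames * numBones * 16);
      for (int f = 0; f < numFrames; ++f)
          for (int b = 0; b < numBones; ++b) {
              glm::mat4 m = sampleAnimation(clip, b, f);  // hypothetical helper
              std::memcpy(&texels[(f * numBones + b) * 16],
                          &m[0][0], 16 * sizeof(float));
          }

      GLuint animTex = 0;
      glGenTextures(1, &animTex);
      glBindTexture(GL_TEXTURE_2D, animTex);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, numBones * 4, numFrames, 0,
                   GL_RGBA, GL_FLOAT, texels.data());

      // In the instanced vertex shader, each instance adds its own random
      // frame offset before fetching bone matrices, so the crowd desyncs.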
  10. One way to think of it is that your object's vertices start in model space, so the first matrix you apply always operates in model space. For instance, I have an airplane AI in my RTS game. Every frame I decide whether the plane needs to adjust roll/pitch/yaw based on its current heading, and I simply take the current matrix and apply a model-space transform before it to get the final matrix:

    (current plane world matrix) * (yaw matrix, model space) = new current world matrix
    (new current world matrix) * (roll matrix, model space) = new current world matrix

    So if you understand that you can always pre-apply a model-space matrix, that is, you multiply a local-space matrix first and then apply the world-space matrix, you can always do anything relative to your model, such as continually applying a new yaw before your current matrix to make the model spin on its own yaw axis.
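    A sketch of that pattern with GLM (assuming GLM; names are illustrative). Post-multiplying applies the rotation in the plane's own model space, so it spins about its own axes rather than the world axes:

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>

      // planeWorld is the plane's current model-to-world matrix.
      void steer(glm::mat4& planeWorld, float yawRadians, float rollRadians)
      {
          // Equivalent to planeWorld * yawMatrix: a model-space yaw.
          planeWorld = glm::rotate(planeWorld, yawRadians,
                                   glm::vec3(0.0f, 1.0f, 0.0f));
          // Then a model-space roll about the plane's own forward axis.
          planeWorld = glm::rotate(planeWorld, rollRadians,
                                   glm::vec3(0.0f, 0.0f, 1.0f));
      }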
  11. You're going to have to debug more than just post a shader. Output the linear world positions and read them back as floats. Do they look correct? float dist = LinearizeDepth(gbuffer0Val.r); Are you even using this variable to test against the other samples?
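    For reference, one common formulation of such a LinearizeDepth (an assumption about what the poster's function does, for a standard perspective projection):

      // Converts a [0,1] depth-buffer sample back to linear eye-space depth.
      float linearizeDepth(float d, float nearPlane, float farPlane)
      {
          float zNdc = d * 2.0f - 1.0f;  // back to NDC [-1,1]
          return (2.0f * nearPlane * farPlane) /
                 (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
      }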
  12. Deferred texturing

    A) Fix your tessellation problem. There are plenty of ways to reduce polycount via any modeling tool: decimate, remesh, etc. This is not just an issue with textures; it also means writing to the depth buffer and other buffers several times, not just extra texture fetching. B) Unless your entire scene fits into a giant texture array or one massive texture, how would you fetch arbitrary textures from a G-buffer index?
  13. Not sure what you are trying to do. Full ambient lighting assumes every face receives all of the ambient light, so if your ambient term is 100%, face normals won't matter.
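    A sketch of why, using GLM on the CPU for clarity (names are illustrative): the ambient term never touches the normal, only the diffuse term does:

      #include <glm/glm.hpp>

      glm::vec3 shade(glm::vec3 albedo, glm::vec3 N, glm::vec3 L,
                      glm::vec3 lightColor, float ambientStrength)
      {
          // Ambient: no N or L anywhere. At ambientStrength = 1.0 every face
          // receives the full light regardless of orientation.
          glm::vec3 ambient = ambientStrength * lightColor * albedo;
          // Diffuse is the part that depends on the face normal.
          float ndotl = glm::max(glm::dot(N, L), 0.0f);
          return ambient + ndotl * lightColor * albedo;
      }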
  14. OpenGL MSAA reasons?

    MSAA is a form of supersampling: you are rasterizing more samples than there are pixels output to the screen (though, unlike brute-force supersampling, each triangle is typically shaded only once per pixel). All of those samples in the MSAA buffer can still change every frame (such as thin grass blades at far distances), so you can still get aliasing when MSAA downsamples to resolve to your screen resolution. A lot of techniques, which can be combined with MSAA, deal with the screen-resolution buffer and try to anti-alias the final rendered image (FXAA does this by detecting edges/high contrast and blurring).
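    A minimal setup sketch for a multisampled FBO plus resolve, assuming a GL 3.3+ context and existing width/height variables (names are illustrative):

      GLuint msaaFbo = 0, msaaColor = 0, msaaDepth = 0;
      const int samples = 4;

      glGenFramebuffers(1, &msaaFbo);
      glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);

      glGenRenderbuffers(1, &msaaColor);
      glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
      glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8,
                                       width, height);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                GL_RENDERBUFFER, msaaColor);

      glGenRenderbuffers(1, &msaaDepth);
      glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
      glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                       GL_DEPTH_COMPONENT24, width, height);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                GL_RENDERBUFFER, msaaDepth);

      // ... render the scene into msaaFbo ...

      // Resolve: average the samples down into the default framebuffer.
      glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
      glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                        GL_COLOR_BUFFER_BIT, GL_NEAREST);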
  15. I'd talk to someone quickly who can give actual legal advice. Since transfer of IP rights was suggested above, I would believe that since you paid them to do the work, it would be your IP per the contract. Not sure if music is any different. You aren't paying to license already-written music for a movie soundtrack; you are paying them for their time to compose music.