dpadam450

  1. IBL Diffuse wrong color

    I can support HDR in my engine, but at the asset level I don't support HDR textures. So yes, at some point I need to support loading my skies specifically as HDR (I can't think of any other game asset, other than potentially an emissive channel, that would need HDR). HDR and PBR are completely separate things. PBR is how light interacts and bounces. HDR is about capturing a proper range of light/photons. So no, HDR is not a requirement of PBR.
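    A minimal sketch of what loading an HDR sky face could look like, assuming stb_image and a float GL format (the loader, function name, and file layout are my assumptions, not something the engine does today):

    ```cpp
    #include <glad/glad.h>   // or whatever GL loader is in use
    #include <stb_image.h>

    // Loads one HDR cubemap face as float data so values above 1.0 survive.
    void loadHdrFace(GLenum face, const char* path)  // path is hypothetical, e.g. "sky_px.hdr"
    {
        int w, h, channels;
        float* pixels = stbi_loadf(path, &w, &h, &channels, 3);
        if (pixels)
        {
            // Upload to a float internal format; RGB8 would clamp to [0,1].
            glTexImage2D(face, 0, GL_RGB16F, w, h, 0, GL_RGB, GL_FLOAT, pixels);
            stbi_image_free(pixels);
        }
    }
    ```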
  2. IBL Diffuse wrong color

    The skybox is the cubemap I use for IBL in my PBR shader.
  3. IBL Diffuse wrong color

    https://tse3.mm.bing.net/th?id=OIP.fCJwPm8kAFQIdWH9AZ7lWQEsDI&pid=15.1&P=0&w=251&h=168
    First, I'm using standard RGB8 textures for my skybox, not HDR. If I use a skybox with an image like the one above, downsampling it simply gives me a blue tint down the mip chain. Rougher PBR surfaces all just look blue-tinted when they shouldn't be. I was thinking maybe this is because I'm using LDR textures, whereas with HDR the brighter pixels would spread further and make the downsampled mips brighter. I would think the clouds wouldn't be bright enough to pull the downsampled result toward white even as HDR, but maybe that's wrong. Even on a very cloudy day with minimal sun and blue sky, I would think that even with HDR it would still downsample to blue, but it depends on how many bright pixels are clustered and how high their range is.
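    A tiny worked example of that clustering/range point (the luminance values are made up to illustrate):

    ```cpp
    // Four texels averaged into one mip texel: one bright sun/cloud texel,
    // three blue-sky texels.
    float hdr[4] = { 40.0f, 0.4f, 0.4f, 0.4f };  // true scene range
    float ldr[4] = {  1.0f, 0.4f, 0.4f, 0.4f };  // same texels clamped to [0,1]

    float hdrAvg = (hdr[0] + hdr[1] + hdr[2] + hdr[3]) / 4.0f;  // 10.3 -> still bright
    float ldrAvg = (ldr[0] + ldr[1] + ldr[2] + ldr[3]) / 4.0f;  // 0.55 -> the sky wins
    ```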
    When you get more complex animations driven by input changes (think of something like FIFA), you may be blending several animations, and that is easier to deal with on the CPU, especially since you will need the final bone transforms for physics-related things. However, putting baked animations into a buffer/texture can be good for something like crowds, where they have a few idle or walk animations. You can then offset into the buffer and have all your instances playing a random section of an idle loop (a rough sketch is below).
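    A sketch of that crowd-offset idea, assuming one frame of bone matrices per texture row (the names and layout are my assumptions):

    ```cpp
    #include <cstdlib>

    // Give each crowd instance a random start frame into the baked
    // animation texture so instances don't play in lockstep.
    struct CrowdInstance { int startFrame; };

    void initCrowd(CrowdInstance* instances, int count, int frameCount)
    {
        for (int i = 0; i < count; ++i)
            instances[i].startFrame = std::rand() % frameCount;
        // The vertex shader would then sample bone matrices from row
        // (startFrame + currentFrame) % frameCount for each instance.
    }
    ```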
    One way to think of it is that your object vertices start in model space, so the first matrix you apply is always in model space. For instance, I have an airplane AI in my RTS game. Every frame I decide whether the plane needs to adjust roll/pitch/yaw etc. based on its current heading. I simply take the current matrix and apply a model-space transform before it to get the final matrix:

        (current plane world matrix) * (yaw matrix, model space)  = new world matrix
        (new world matrix)           * (roll matrix, model space) = new world matrix

    So if you understand that you can always pre-apply a model-space matrix, that is, you multiply a local-space matrix first and then apply the world-space matrix, you can do anything relative to your model, such as continually applying a new yaw before your current matrix to make the model spin on its own yaw axis.
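    The same pattern as a minimal sketch, using GLM (GLM and the angle values are just my picks for illustration):

    ```cpp
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Current world transform of the plane (identity as a placeholder).
    glm::mat4 world = glm::mat4(1.0f);

    // Pre-apply a model-space yaw: the rotation happens around the
    // plane's own up axis, then the existing world transform is applied.
    glm::mat4 yaw = glm::rotate(glm::mat4(1.0f), glm::radians(2.0f), glm::vec3(0, 1, 0));
    world = world * yaw;

    // Same idea for roll, around the plane's own forward axis.
    glm::mat4 roll = glm::rotate(glm::mat4(1.0f), glm::radians(1.0f), glm::vec3(0, 0, 1));
    world = world * roll;
    ```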
    You're going to have to debug more than just posting a shader. Output the linearized depth / world positions and read them back as floats. Do they look correct?

        float dist = LinearizeDepth(gbuffer0Val.r);

    Are you even using this variable to test against the other samples?
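    For reference, a common way a LinearizeDepth helper is written (this is an assumption about what the OP's function does; nearPlane/farPlane are the camera planes, GL-style [0,1] depth):

    ```cpp
    // Converts a [0,1] depth-buffer value back to linear view-space
    // distance, assuming a standard perspective projection.
    float LinearizeDepth(float depth, float nearPlane, float farPlane)
    {
        float z = depth * 2.0f - 1.0f;  // back to NDC [-1, 1]
        return (2.0f * nearPlane * farPlane) /
               (farPlane + nearPlane - z * (farPlane - nearPlane));
    }
    ```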
  7. Deferred texturing

    A.) Fix your tessellation problem. There are plenty of ways to reduce polycount in any modeling tool: decimate, remesh, etc. This is not just an issue with textures; you're also writing to the depth buffer and other buffers several times, not just texture fetching.
    B.) Unless your entire scene fits into a giant texture array or one massive texture, how would you fetch arbitrary textures from a G-buffer index?
    Not sure what you are trying to do. Full ambient lighting assumes every face receives all the ambient light, so if your ambient term is 100%, face normals won't matter (a tiny sketch below shows why).
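    A minimal sketch of why the normal drops out (the names and coefficients are generic, not from the thread):

    ```cpp
    #include <algorithm>
    #include <glm/glm.hpp>

    // Ambient ignores the surface normal entirely; only the diffuse term
    // depends on N. With ka = 1 and kd = 0, every face gets the same
    // color regardless of orientation.
    glm::vec3 shadeFace(glm::vec3 N, glm::vec3 L, glm::vec3 lightColor,
                        float ka, float kd)
    {
        return ka * lightColor                                    // ambient
             + kd * std::max(glm::dot(N, L), 0.0f) * lightColor;  // diffuse
    }
    ```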
  9. OpenGL MSAA reasons?

    MSAA is a form of supersampling: you are rendering more samples than what is output to the screen. However, all of those samples in the MSAA buffer can change every frame (think thin grass blades at far distances), so you can still get aliasing when MSAA downsamples to resolve to your screen resolution. A lot of techniques, which can be combined with MSAA, deal with the screen-resolution buffer and try to anti-alias the final rendered image (FXAA does this by detecting edges/high contrast and blurring).
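    A minimal sketch of the render-into-MSAA-then-resolve flow in OpenGL (4x samples; the sizes and handles are placeholder values):

    ```cpp
    // 4x MSAA color target plus the resolve blit.
    int width = 1280, height = 720;

    GLuint msaaTex, msaaFbo;
    glGenTextures(1, &msaaTex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaaTex);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                            width, height, GL_TRUE);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, msaaTex, 0);

    // ... render the scene into msaaFbo ...

    // Resolve: downsample all samples to the single-sampled default framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    ```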
    I'd talk to someone who can give actual legal advice. Since transfer of IP rights was suggested above, I would believe that since you paid them to do the work, it would be your IP per the contract. Not sure if music is any different. You aren't paying to license already-written music for a movie soundtrack; you are paying them for their time to compose music.
  11. Recent Grad Job Hunting

    A repository isn't really a portfolio in my opinion. It doesn't even have your resume on there. Get a free website host and, if you want, create a projects page with some basic pictures or video.

    It just seems like unless you spent all your free time doing projects and internships, you have no chance. Most game people love what they do, and a lot of graduates from random CS degrees can't even do string reversal or explain the difference between arrays/vectors/linked lists. There was a time at EA, before I worked there, when I heard they wouldn't even look at Full Sail programming graduates, even though 3 or 4 people on my team had graduated from there. The quality of candidates in interviews was bad, so they altogether gave up.

    When I graduated I had a pretty sweet game with my own entire engine supporting 3D animations, a character I modeled and animated, shadows, gameplay, etc. I interviewed at maybe 15 different places. Sometimes you lose jobs to other people; sometimes you realize something dumb or wrong you said in your interview. At that time I applied to Insomniac Games. I took their 2-hour test and got a call back the next week where a guy said: you did extremely well on the test, but you don't have the experience we are looking for. And that was that. Basically, you never know why you get overlooked, but for games jobs there are definitely good people out there applying alongside you. My first job was a medical software job. The best thing you can do is make some kind of 3D game, no matter how simple it is.
    I'm looking at some code that is supposed to support triple buffering: DXGI_SWAP_CHAIN_DESC1.BufferCount = 3; However, it looks like it calls swapChain->GetBuffer(0,.....) and then continually does OMSetRenderTargets() on that same exact buffer/view over and over, not taking into account that there are 3 buffers. I also took a DirectX example that was cycling 2 buffers for double buffering and modified its code to continually set the same render target instead of cycling them. I would think there would be flicker, as one buffer would never be used. I'm assuming the driver knows if you set the same buffer that it will automatically correct itself? The game definitely runs at 60fps, so I can only assume this is an intended DirectX thing? I'm assuming this only ever needs to be set up correctly if you are using DXGI_PRESENT_DO_NOT_WAIT? Thanks.
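    My current understanding, worth verifying: in D3D11 only buffer 0 of a swap chain is ever rendered to, and DXGI rotates the buffers at Present, whereas D3D12 makes the cycling explicit. A fragment of the explicit D3D12-style pattern (swapChain, commandList, and rtvHandle are assumed to already exist):

    ```cpp
    // Explicit cycling: ask the swap chain which back buffer is current
    // this frame and render into that buffer's RTV.
    UINT frameIndex = swapChain->GetCurrentBackBufferIndex();  // IDXGISwapChain3
    commandList->OMSetRenderTargets(1, &rtvHandle[frameIndex], FALSE, nullptr);
    // ... record draws ...
    swapChain->Present(1, 0);
    ```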
    You want to make sure that the first time you bind a texture, at creation, you set up the glTexParameter calls that describe your sampling types for minification and magnification, such as anisotropic filtering or simple point sampling.
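    A sketch of that one-time setup (the filter and wrap choices are illustrative; the anisotropy line assumes EXT_texture_filter_anisotropic is available):

    ```cpp
    // Typical sampler state set once at texture creation.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 8.0f);
    ```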
    Nearest just gives you the exact texel color that is closest to your UV location. Linear will blend 4 texels, as said, but it weights each one by how close the sample point is to it. So, as suggested, if the sample is vertically dead-center on a row, then only the 2 texels in that row get any weight. Furthermore, there is trilinear filtering, which does all of this linear weighting on 2 mip levels and then blends those 2 outputs by the fractional mip level as well.
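    The weighting as a minimal sketch (the name and parameter layout are mine; fx/fy are the fractional offsets within the 2x2 texel cell):

    ```cpp
    // Bilinear weighting of 4 neighboring texels at a fractional position.
    float bilinear(float t00, float t10, float t01, float t11, float fx, float fy)
    {
        float top    = t00 * (1.0f - fx) + t10 * fx;  // blend along x, top row
        float bottom = t01 * (1.0f - fx) + t11 * fx;  // blend along x, bottom row
        return top * (1.0f - fy) + bottom * fy;       // blend the rows along y
    }
    // Trilinear then runs this on two adjacent mips and blends the two
    // results by the fractional mip level.
    ```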
  15. Project Update 3 - Management

    Note that publicly commenting on a final product is much different from commenting on hundreds of concepts or iterations of some gameplay feature. Judging art is fine; however, if I posted the 10 reference images that led to these final pieces and 90/100 people liked a different concept... who cares. The point is you can't please every person on your indie or mod team in every opinion they have, so don't ask everyone their opinion on everything. Either keep people segregated completely, or limit discussion to a smaller subsection of skill sets. The point is about managing a team.

    First, the loss in time. This blog took me maybe 30 minutes to complete. If I have an online team with an open forum for communication, that means multiple people posting stuff that you have to read and reply to... it could be 5-10 hours a week just typing.

    Secondly, if 5 people like 2D character concept A and 5 people like concept B because you allowed an open vote, then 5 people on the team "lost" in a sense. They are losing their creative input because, again, you have 10 people that you basically gave creative control by allowing an open discussion on literally every aspect of your project. And when you give people that much freedom to believe they hold weight in every single decision of the game, they take things personally, and that could lead to them leaving. If you just have an artist you give concepts to and he completes them, he will be happy 100% of the time. He has a personal connection with his work, not his own vision of the whole product.