_Silence_

Members
  • Content count: 152
  • Joined
  • Last visited

Community Reputation

968 Good

About _Silence_

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. Render Pass

    Originally, a pass was a full and complete rendering of the scene. Games usually rendered an image, then in a second pass might render more lights, then in another pass add some bump mapping, lightmapping/shadowing... Nowadays a pass can be a first rendering from the camera view into some buffer(s), another one from the light's view (to store the depth textures), another one for ambient occlusion, bloom effects... Except for the depth-map generation, most passes are done from the camera view. When you render deferred, the full G-buffer is generally filled once, in a single pass. Doing a lookup into already generated buffers is generally not considered a render pass by itself; such lookups are done inside a render pass in order to retrieve information at given pixels. A rough sketch of this structure follows below.
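    To make the structure concrete, here is a minimal C++/OpenGL sketch, assuming a GL 3+ context with a function loader already initialized; the FBO handles and the renderSceneDepthOnly/renderSceneFromCamera helpers are hypothetical placeholders, not code from this thread.

        #include <GL/glew.h>   // or any loader exposing GL 3+ entry points

        // Hypothetical helpers (not from this thread): they issue the actual draw calls.
        void renderSceneDepthOnly();
        void renderSceneFromCamera();

        // One frame organized as several passes (sketch only).
        void renderFrame(GLuint shadowFbo, GLuint shadowDepthTexture, GLuint gBufferFbo)
        {
            // Pass 1: render from the light's point of view into a depth-only FBO.
            glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
            glClear(GL_DEPTH_BUFFER_BIT);
            renderSceneDepthOnly(/* uses the light's view/projection */);

            // Pass 2: render from the camera into the G-buffer (deferred: filled once).
            glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            renderSceneFromCamera();

            // Pass 3: shading/post-process pass, still from the camera view. The
            // shadow map and G-buffer are only *looked up* here, which by itself
            // is not a render pass.
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, shadowDepthTexture);
            // ... draw a full-screen quad with a shader reading the G-buffer and shadow map ...
        }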
  2. Worth Going Back to School in Mid-30s?

    I went back to school at 27. It was not easy. But I was like you: I did not have any relevant degree, and companies love degrees in Europe... I also wanted to enter the game industry. I had contacts with a studio in Paris during my studies, and they liked my profile a lot (for personal reasons I moved on to something else). I then did my master's degree internship at the French national research institute for computer science. After that I looked for my first job with that degree. It wasn't that easy. I went through a lot of interviews, which led, six months after graduating, to a few job opportunities. I then worked for a digital planetarium company. The next job was at an optical simulation company. These two jobs were nice, but when the working day was over, I had no will to work on my personal projects anymore (since I was doing similar things all day). Now, for personal reasons, I'm far from graphics programming, but I have a decent job, and it allows me to work on my projects again, which makes me happy. In the end, my original plan to work in the game industry was forgotten. But I don't regret it. And maybe one day I'll have that opportunity.
  3. Dealing with frustration

    I can agree. I guess that, when you're alone, you should set achievable goals. You should also revise these goals quite frequently, once you discover that a goal you thought would be easy to reach actually requires other things to be done first. I was making my own format too, some years ago, and now it is completely deprecated due to advances in other areas. This is really annoying, for sure. And the most difficult part is to find/create/use 'assets' or any other resources that are easy to use, that will have a lifetime long enough before their deprecation, and that will allow us to develop our other features (lighting, shadowing, parallax mapping, LODs...) without too much pain. This is absolutely true for model formats; it is also true for LODs, normal mapping (including tangent-space and normal-map generation - so easy for studios, probably even small ones, but so hard for solo developers to get), soft bodies, image formats, and probably many other things I'm not thinking of at the moment.
  4. You have it in your own code: #ifdef __cplusplus
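    In case it helps, this usually refers to the classic guard that lets a C header be included from C++ code. A minimal sketch (the function name is hypothetical, and I'm assuming that guard is what your header uses):

        #ifdef __cplusplus
        extern "C" {             /* tell the C++ compiler not to mangle these names */
        #endif

        void my_c_function(int value);   /* hypothetical C API declaration */

        #ifdef __cplusplus
        }  /* extern "C" */
        #endif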
  5. Bounding Sphere for collision Detection

    You don't need to loop twice to get the radius. You already have the min and max vertices, so the radius is just Length(max - center). To check for collisions, there is potentially one if the distance between the two centers is less than or equal to the sum of the two radii. A small sketch follows below.
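    As a minimal C++ sketch of the above (the Vec3/Sphere types are hypothetical stand-ins for whatever math types you already have):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static float len(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

        struct Sphere { Vec3 center; float radius; };

        // Build the bounding sphere from the already-known min/max of the mesh.
        Sphere boundingSphere(Vec3 minV, Vec3 maxV)
        {
            Vec3 center = { (minV.x + maxV.x) * 0.5f,
                            (minV.y + maxV.y) * 0.5f,
                            (minV.z + maxV.z) * 0.5f };
            float radius = len(sub(maxV, center));   // no second loop over the vertices
            return { center, radius };
        }

        // Potential collision when the distance between centers is <= the sum of the radii.
        bool spheresCollide(const Sphere& a, const Sphere& b)
        {
            return len(sub(a.center, b.center)) <= a.radius + b.radius;
        }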
  6. This looks good. For me, however, the most intriguing thing is what looks like portals, since I don't know Riemannian geometry.
  7. Check GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS for the vertex shader and GL_MAX_TEXTURE_IMAGE_UNITS for the fragment shader.
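    For reference, a minimal C++ sketch querying both limits (assuming a current GL context and a loader such as GLEW or GLAD that exposes the GL 2.0+ enums):

        #include <GL/glew.h>   // or any header/loader exposing GL 2.0+ enums
        #include <cstdio>

        void printTextureUnitLimits()
        {
            GLint vertexUnits = 0, fragmentUnits = 0;
            glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertexUnits);   // vertex shader limit
            glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentUnits);        // fragment shader limit
            std::printf("vertex shader texture units:   %d\n", vertexUnits);
            std::printf("fragment shader texture units: %d\n", fragmentUnits);
        }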
  8. How to animate the UV correctly?

    You can also try to use a texture matrix. Changing the values in its translation part will make the image move; a small sketch follows below.
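    A minimal C++ sketch using the legacy fixed-function texture matrix (the scroll speeds and the elapsedSeconds parameter are illustrative values; in a shader-based pipeline the same idea becomes a mat3 uniform multiplied with the UV coordinates):

        #include <GL/gl.h>

        void animateUV(float elapsedSeconds)
        {
            glMatrixMode(GL_TEXTURE);
            glLoadIdentity();
            // The translation part of the texture matrix is what makes the image move.
            glTranslatef(0.10f * elapsedSeconds,   // scroll along U
                         0.05f * elapsedSeconds,   // scroll along V
                         0.0f);
            glMatrixMode(GL_MODELVIEW);

            // ... then draw the textured geometry as usual ...
        }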
  9. Career Paths

    Having a good degree will always help you throughout your life. As I have already suggested to others, take into consideration the case where, for some reason (personal life, getting fed up, getting older...), you would like to move into another kind of career. So also consider a more general degree (computer graphics, computer science).
  10. Hi everyone,

    First of all, I want to render an HDR sky. What I'm doing now is transforming this HDRI image into a cubemap, using an external tool, since the common way to display a sky is to do it with a cubemap. Is that the right way? Or is it more common to render a sky dome instead in that case?

    Second, assuming the HDRI is transformed into a cubemap texture (either as a cross layout or as 6 separate files), the tools I use let me create either DDS or KTX images in order to preserve all the HDR information. I wanted to use DDS since it seems to be more widely supported (is this correct?). Unfortunately, both DevIL and FreeImage fail to load these generated DDS images. I'm planning to try SOIL, but for some reason I'm starting to believe this will end in the same failure, and it would be very frustrating to have 'lost' all this time for nothing. For information, DevIL reports that the header of the file is wrong and FreeImage simply returns a null pointer without any more explanation...

    I understand that having my own format might help. But this will imply more work on my side: I'll have to transform the HDRI into a cubemap myself, I'll have to create a new image format myself (which most certainly will look more or less like DDS or KTX), plus I'll have to maintain it and ensure it works the same on all the expected supported platforms. Also, my own format won't be optimized for the graphics card. Transforming an HDRI into a cubemap might not be that hard, but if I go this way, then I'll have to do the same work to support other kinds of images too (i.e. radiance and irradiance maps). That will require a lot of code, more chances of errors, and probably a non-negligible cost in the accuracy of the result.

    So what do you suggest would be best for a single person with only spare time available? Thanks in advance.

    EDIT: I haven't tested KTX yet...
  11. What is a Game Engine

    I guess many engine developers are in the same situation as you. While there can be technical reasons, as you're looking for, it might also just be a matter of publicity. For example, if your website changes often, or if your website is not well referenced (through Google and the like), or if you didn't do any promotion on well-visited, relevant websites (like this one or other forums)... The other thing is that there are now many, many engines. I also suspect Wikipedia lists 'little' engines only when their authors or contributors add them, or once they start to get some notoriety. So, as Hodgman said, games should be published using your engine.
  12. You can read this. I personally don't like doing that, simply because doing arithmetic (including bitwise operations) on enums inevitably leads to values not defined in the enum. You start with a finite set of elements, and by allowing such operations you can end up with far more values (potentially any value of the underlying type; if you limit yourself to bitwise 'or' operations you are still stuck with a finite number of elements, but that number is large). So typical C constructs like a switch will not easily be able to handle all the values. Also, when debugging, the debugger will not be able to print the matching name of an enum value. And if you are in C++, this tends to subvert the nature of your enumeration type. This is just what I think about it. A small illustration follows below.
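    A minimal C++ sketch of the problem (the 'Flags' enum and its values are hypothetical):

        #include <cstdio>

        enum Flags {
            FlagNone  = 0,
            FlagRead  = 1 << 0,
            FlagWrite = 1 << 1,
            FlagExec  = 1 << 2
        };

        int main()
        {
            // The combination has the value 3, which is not a named enumerator;
            // in C++ the cast is even required, since operator| yields an int.
            Flags f = static_cast<Flags>(FlagRead | FlagWrite);

            switch (f) {                // a switch can only match the named values
                case FlagNone:  std::puts("none");  break;
                case FlagRead:  std::puts("read");  break;
                case FlagWrite: std::puts("write"); break;
                case FlagExec:  std::puts("exec");  break;
                default:        std::puts("unnamed combination"); break; // most combinations land here
            }
            return 0;
        }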
  13. One thing you can do is to look at their sources. They might do a post-process that turns these colors into transparent pixels (avoiding the alpha component can save space in the texture files). Or they might treat these colors as key colors, and then do the blending or whatever other treatment in the shaders at rendering time. A sketch of the shader-side approach follows below.
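    A minimal sketch of the shader-side approach in C++/GLSL (the uniform names, the magenta key color, and the tolerance are illustrative, not taken from the engine being discussed):

        // Fragment shader stored as a C++ raw string literal; pixels close to the
        // key color are discarded, i.e. treated as fully transparent.
        const char* colorKeyFragmentShader = R"(
            #version 330 core
            in vec2 vUV;
            out vec4 fragColor;

            uniform sampler2D uTexture;
            uniform vec3  uKeyColor;   // e.g. vec3(1.0, 0.0, 1.0) for magenta
            uniform float uTolerance;  // how close to the key color a pixel must be

            void main()
            {
                vec4 texel = texture(uTexture, vUV);
                if (distance(texel.rgb, uKeyColor) < uTolerance)
                    discard;           // key-colored pixels become transparent
                fragColor = texel;
            }
        )";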