

Member Since 29 Jul 2001
Offline Last Active Yesterday, 09:21 PM

#5265353 What is a suitable default value for buffers/vertex arrays/textures etc

Posted by on 07 December 2015 - 04:55 PM

Soooo long story short, don't predicate the validity or non-validity of a texture object on its GL handle. glGenTextures et al won't ever return 0, so the handle value can't tell you whether creation failed; check glGetError at the point of call instead. Handle that appropriately, but don't use the value itself as your check.



The reason I have gotten confused is that for glGetUniformLocation, a location of 0 is perfectly valid. What is a good default for that, -1?

Unlike the Gen functions, glGetUniformLocation returns a signed value and -1 is a perfectly sane default, as well as the error value for the function itself.


All of which takes us back to the problem that the entire API feels like it was carelessly mashed together from pieces that were lying around and everyone is worse off for it.
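To make the defaults concrete, here's a minimal sketch of the sentinel values discussed above: 0 for object names (glGen* never hands it back) and -1 for uniform locations (glGetUniformLocation's own error value). The Material struct and its field names are made up for illustration, and the GL types are aliased locally so the snippet stands alone:

```cpp
#include <cassert>

// Local stand-ins for the GL typedefs so this sketch is self-contained.
using GLuint = unsigned int;  // object names: 0 means "no object"
using GLint  = int;           // uniform locations: -1 means "none"/error

// Hypothetical material record using the sentinel defaults.
struct Material {
    GLuint diffuseTexture = 0;   // glGenTextures never returns 0
    GLint  tintLocation   = -1;  // matches glGetUniformLocation's error value

    bool HasTexture() const { return diffuseTexture != 0; }
    bool HasTint() const { return tintLocation != -1; }
};
```

Note that a uniform location of 0 is perfectly valid, which is exactly why -1 rather than 0 has to be the "not set" value there.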

#5265221 Would it be worth it to attend GDC, or would it be a waste of money?

Posted by on 06 December 2015 - 11:05 PM

There's an extensive array of job recruitment/meetup events at GDC, and a million people trying to get in on them. Seriously, we're talking lines out the door and people waiting for on-the-spot preliminary interviews. It is simultaneously the best and worst thing for getting a job. Personally I'm not terribly fond of the whole idea - being part of a massive crowd vying for attention doesn't seem like the way to go.

#5264510 Looking for Input on a Low-Pay Internship

Posted by on 01 December 2015 - 06:33 PM

I've seen better and worse out there. In general the larger companies will pay better, the smaller ones not so much. Location also matters - things are much better in Silicon Valley or Seattle or Austin than many other places. At the very least, I'd look at all your options if you don't have to respond right away. That includes non-game software development options.

#5264191 Quick tutorial: Variable width bitmap fonts

Posted by on 30 November 2015 - 12:06 AM

Don't worry, I just checked and this code is still driving the text rendering for our current-gen engine. I think a few minor tweaks were made over the years, and that's it.

#5264043 Questions regarding how did you manage school

Posted by on 28 November 2015 - 05:20 PM

Please keep in mind that there are gigantic cultural differences involved. School in the US is a vastly different experience from school in India, which makes advice and stories from the US somewhat difficult to apply. It's possible here in the US to build a life from a non-traditional path, leaving school early and returning on your own terms later, or never returning at all. From what I can figure, this is next to impossible in India. Additionally, in India your choice of major is limited depending on your school and exam scores, which also doesn't happen in the US.


There's a strong American culture of being self-made and self-taught which doesn't necessarily work in other countries as a viable life path.


With that in mind, everything that's been said above about learning online is basically correct for basic education. Especially thanks to programs like MIT OpenCourseWare and several foundations building free textbook collections, it's possible to gain massive amounts of education completely on your own.

#5263870 How to use mouse as view control of camera?

Posted by on 27 November 2015 - 02:26 PM

Hey there, I'm currently making a game using DirectX 11 with C++.

I've managed to create key inputs to move the camera: forward, backwards, left, right, up, down.

However, I'm struggling with the mouse control. I need to use the mouse to control which direction the camera is facing.

So it doesn't need clicks; it's just whichever way you move the mouse with the cursor, the camera will follow.


Had no luck so far.


Any pointers or help would be appreciated, thanks.

This is not the best way, but it is the easiest.


First, call GetCursorPos each frame and store that value. By comparing the result of the current and previous frame, you know how far the mouse moved. Divide this number by some fixed "sensitivity" constant, and apply that as a rotation to your camera matrix. Typically you will want "FPS style" camera control. That means that the mouse X coordinate rotates around the world Y axis, and the mouse Y coordinate rotates around the camera X axis. Then you need to compute the movement vectors in the camera's local coordinates. This is an excellent exercise in beginning linear algebra for games.


As a hint, the variables you need are:
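A rough sketch of the approach described above, assuming a simple yaw/pitch camera. The Win32/GetCursorPos plumbing is omitted, and the struct, field names, and sensitivity constant are all invented for illustration. Each frame you'd call GetCursorPos, subtract last frame's position, feed the delta to OnMouseMove, then build your view matrix from yaw/pitch (or from the forward vector below):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal mouse-look camera: mouse X rotates about the world
// Y axis (yaw), mouse Y rotates about the camera's local X axis (pitch),
// and the raw pixel delta is divided by a sensitivity constant.
struct MouseLookCamera {
    float yaw = 0.0f;               // radians, rotation about world Y
    float pitch = 0.0f;             // radians, rotation about camera X
    float sensitivity = 400.0f;     // pixels per radian (made-up constant)

    void OnMouseMove(int dx, int dy) {
        yaw   += static_cast<float>(dx) / sensitivity;
        pitch += static_cast<float>(dy) / sensitivity;
        // Clamp pitch so the camera can't flip over the poles.
        const float limit = 1.55f;  // just under pi/2
        if (pitch >  limit) pitch =  limit;
        if (pitch < -limit) pitch = -limit;
    }

    // Forward vector derived from yaw/pitch (left-handed-ish, Y up).
    void Forward(float out[3]) const {
        out[0] = std::sin(yaw) * std::cos(pitch);
        out[1] = -std::sin(pitch);
        out[2] = std::cos(yaw) * std::cos(pitch);
    }
};
```

Many games also re-center the cursor with SetCursorPos after reading it, so the mouse never hits the edge of the screen.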
#5263760 Why do they have. What number fits in the blank

Posted by on 26 November 2015 - 11:22 PM


I've had companies send me pre-interview coding exams that take multiple hours.


I always thought this was standard.  Certainly every job in gamedev for which I've ever landed an interview required a pre-interview coding test that took multiple hours, some timed and some not.


Common perhaps, but not standard. I simply kick these job openings to the bottom of the pile - and if the employer isn't compelling, I won't bother. When Bungie asked me to do the test, the answer was yes. (That was a looooong test, too.) If a random no-name small developer asks me to do it, meh, forget them.


My own personal belief is that it's the wrong way to filter applicants.

#5263366 Phong versus Screen-Space Ambient Occlusion (with source code)

Posted by on 23 November 2015 - 09:01 PM



We have to say it's not physically correct to multiply them because it's an ambient value, but it's the common trick to multiply after the lighting.
That will result in the ambient occlusion being visible even when the area is lit, but it's an artistic choice.

Yeah but nothing about SSAO is physically correct, it's an aesthetic hack. What we want is hemispherical occlusion applied to a GI light contribution, but here in the real world...



Do you have any URLs that demonstrate hemispherical occlusion?


You probably already understand it from implementing SSAO. Take a point on a plane - this point has a clear unobstructed view of "outside" from any direction on the hemisphere centered around the geometry normal. Now fold the plane at a 90 degree angle (so it defines a quarter space now) and take a point along the fold line. This point only has half of the available directions visible from outside and half occluded, so our hemispherical occlusion is 0.5. In the microfacet BRDF world, we describe this using the geometric occlusion term G. That's on a "micro" scale, whereas hemispherical occlusion is on a "macro" scale. Once you know how much visibility any point on the object has to infinity integrated across the hemisphere, you just multiply that by your GI term (or ambient, in the naive case) and there you go, real ambient occlusion.
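The folded-plane example can be checked numerically: sample uniform directions on the hemisphere around the fold point's normal and count how many escape into the open quadrant. A small Monte Carlo sketch (all names invented) that should land near the 0.5 figure above:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// A point on the fold line of two perpendicular planes sees "outside" only
// through the open quadrant x > 0, y > 0. Sampling uniform directions on
// the hemisphere around the normal n = (1,1,0)/sqrt(2) and counting the
// unoccluded ones estimates the hemispherical visibility, which should be
// roughly 0.5 for this geometry.
double FoldVisibility(int samples, unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> gauss(0.0, 1.0);
    const double nx = 1.0 / std::sqrt(2.0), ny = nx, nz = 0.0;
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        // Uniform direction on the sphere via a normalized Gaussian sample.
        double x = gauss(rng), y = gauss(rng), z = gauss(rng);
        double len = std::sqrt(x * x + y * y + z * z);
        if (len < 1e-9) continue;
        x /= len; y /= len; z /= len;
        // Flip into the hemisphere centered on the normal.
        if (x * nx + y * ny + z * nz < 0.0) { x = -x; y = -y; z = -z; }
        // Unoccluded only if the direction enters the open quadrant.
        if (x > 0.0 && y > 0.0) ++visible;
    }
    return static_cast<double>(visible) / samples;
}
```

Multiplying this visibility term into your GI/ambient contribution is exactly the "real ambient occlusion" described above.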


In practice, real-time gives us neither the occlusion term nor a true GI term, and so we use SSAO to come up with a cheap, visually similar look. I believe you'll see a transition to true ambient occlusion in the next couple of years, especially considering this seems to be in Unreal now:


#5263066 Phong versus Screen-Space Ambient Occlusion (with source code)

Posted by on 21 November 2015 - 07:21 PM

We have to say it's not physically correct to multiply them because it's an ambient value, but it's the common trick to multiply after the lighting.
That will result in the ambient occlusion being visible even when the area is lit, but it's an artistic choice.

Yeah but nothing about SSAO is physically correct, it's an aesthetic hack. What we want is hemispherical occlusion applied to a GI light contribution, but here in the real world...

#5263025 C++ and C#

Posted by on 21 November 2015 - 02:00 PM

C++ is THE tool of professional development, for better or worse. There are many reasons for that. But in most cases - certainly as an indie developer - C# is far more productive and much easier and faster to work with. So you have to decide on your goals. If professional employment is something you're aiming at, then it may well be worth learning C++ despite the long, arduous road. That's particularly true for more specialized architectural roles (engine, graphics, physics, etc programmers). On the other hand, if you really want to focus on creating a game and having a usable product, C# is the better choice. This gets you to the point of making a polished, tested, tuned game in a finished playable state much faster and is much more useful if your focus is game design.

#5260674 Water rendering

Posted by on 05 November 2015 - 05:45 PM

Most of the ocean water rendering you see today in movies or games is a derivative of the Tessendorf work, first seen in Titanic. Short version: it involves summing up various octaves of noise to come up with something that follows the same general structure of ocean waves in terms of geometry and normals. Once you've learned what this particular technique looks like, you will see it everywhere. For a more modern, game-centric approach, you can look to Assassin's Creed.


As a high level overview: the first challenge is generating water that is geometrically plausible. Tessendorf covers this for deep ocean environments but isn't appropriate for shallow water or shorelines. In simple cases you can actually skip the geometric generation entirely, but you'll never get a truly convincing look without it. There's a fair bit of work on doing full blown fluid simulation models for water in more contained conditions. Most of the realtime-useful ones are based on smoothed-particle hydrodynamics (SPH). A search for "SPH water" should produce plenty of results.
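As a toy illustration of the octave-summing idea (not actual Tessendorf, which synthesizes a statistical wave spectrum via FFT), here's a height function built from a few sine octaves with doubled frequency and halved amplitude per octave; all constants and names are invented:

```cpp
#include <cassert>
#include <cmath>

// Toy "sum of octaves" height field: each octave doubles the spatial
// frequency and halves the amplitude, with slightly different travel
// directions and speeds so the result doesn't look like one marching sine.
float WaveHeight(float x, float z, float time) {
    float height = 0.0f;
    float amplitude = 1.0f;
    float frequency = 0.1f;
    // Per-octave travel directions, roughly spread apart.
    const float dirs[4][2] = {{1, 0}, {0.7f, 0.7f}, {0, 1}, {-0.7f, 0.7f}};
    for (int octave = 0; octave < 4; ++octave) {
        float phase = (x * dirs[octave][0] + z * dirs[octave][1]) * frequency
                    + time * (1.0f + 0.3f * octave);
        height += amplitude * std::sin(phase);
        amplitude *= 0.5f;  // halve amplitude each octave
        frequency *= 2.0f;  // double frequency each octave
    }
    return height;  // bounded by the sum of amplitudes, 1.875
}
```

Evaluate it per-vertex for geometry, or take finite differences of it to derive normals.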


Next step is the core shading work. At its heart this simply involves computing reflection and refraction components, sampling textures for both of those components, and putting it together with a BRDF specular model. The reflection and refraction components are both fundamentally functions of V and N, which means that you can get away without any geometric generation just by moving normal maps across the surface and sampling them, then blending using the fresnel term. This will fall apart at oblique viewing angles but works reasonably well at sharper viewing angles or longer viewing distances. The lighting model can be as trivial as Blinn-Phong specular with a ludicrously high specular power, but you'll see good visual benefits to using a proper physically based BRDF. C/LEAN mapping will do a lot to clean up the aliasing or roughness inconsistencies that will occur from a surface with such high frequency detail. That sparkling specular effect at the tips of the waves is the key of what makes water rendering work visually. Subsurface scattering also adds a lot of visual punch.
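The fresnel blend at the heart of that shading step can be sketched with Schlick's approximation; F0 ≈ 0.02 is water's normal-incidence reflectance, and the function names are made up:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation to the fresnel term: at glancing angles
// (cosTheta -> 0) it approaches 1 and the surface becomes almost pure
// reflection; head-on (cosTheta -> 1) it drops to F0 and the surface is
// mostly refraction.
float SchlickFresnel(float cosTheta, float f0 = 0.02f) {
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;  // f0 + (1-f0)(1-cos)^5
}

// Per-channel blend of the sampled refraction and reflection colors,
// weighted by the fresnel term. cosTheta is dot(N, V) clamped to [0, 1].
float BlendChannel(float refraction, float reflection, float cosTheta) {
    float f = SchlickFresnel(cosTheta);
    return refraction * (1.0f - f) + reflection * f;
}
```

In a shader this is the same two lines applied to the reflection/refraction texture samples.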


The last step is "embellishments". Sea foam, splashes, edge effects, etc. These do a lot to add presence and a sense of reality to the water, rather than having it simply slice through surfaces. I don't have a lot of information on this part, unfortunately. The Assassin's Creed paper does cover some of it.


While that's not comprehensive, it does provide the core elements you'll need and plenty to start on. IMO the easiest thing to do is render a gigantic flat plane, tile octaves of noise from a normal map across it, render the refraction with a clip plane and render to texture, use a stock skybox for the reflection, and get the basic fresnel blend and lighting correct. It's much easier to start doing the fancy stuff once you have that skeleton to work with.

#5260588 Quickest way to glBufferSubData

Posted by on 05 November 2015 - 01:24 AM

You may want to look into whether OES_mapbuffer is available on your target platforms.



From what I understand, let's say I have a buffer that can hold 1000 sprites. Then I fill this buffer with 1000 sprites, so the buffer is now full. Then I want to add another sprite; this is where I orphan. I make a call to glBufferData() using NULL as my data param, and I get a fresh block of memory.
Now I can write to this block of mem and I still have all the previous data (the first 1000 sprites) sitting out on the GPU.
The only caveat is that any time I orphan I have to reallocate the buffer space. Right?

When you call it with a NULL data pointer (and the same size as before), you do not "get a fresh block of memory". It merely tells the driver that you no longer care about what was in the memory. It may be the same memory, it may be new memory, it may be a mix of things. Your sprites may or may not be preserved, depending on what the system's doing at that moment in time. All draw calls prior to that point will not be affected, but there are no promises after that. It is legal to ignore the call entirely. It's an optimization technique to try and avoid stalls when uploading data, not a rule about how things behave.


In general, one of two things will happen. Either the driver will need the contents of that buffer for a submitted draw call that has not yet been sent through the pipeline, in which case it will allocate a new block of memory. This case is going to perform slowly. Or the driver is done with the memory, and it will simply do nothing. Doing nothing is pretty fast. Long story short, doing this more than about once per frame on any given buffer is more or less equivalent to simply manually creating new buffers and tends to show poor performance. The bad news is that all of the sane mechanisms for handling buffers in OpenGL did not make it into ES 2.0 and exist only as extensions. MapBuffer is good, MapBufferRange is better. If you cannot use MapBufferRange, it's best to simply allocate lots of buffers ahead of time and avoid uploading data to them more than once per frame. 
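The "allocate lots of buffers ahead of time" suggestion amounts to rotating through a fixed ring of pre-created buffers, so no individual buffer is uploaded to more than once per frame and the driver is never asked to orphan. A sketch of just the rotation logic (GL calls stubbed out, all names invented):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed ring of buffer handles. Acquire() hands out the next handle and
// advances, so with N buffers each one is rewritten at most once every
// N acquisitions. Real code would fill handles_ from glGenBuffers and do
// glBindBuffer + glBufferSubData (or a map) on the acquired handle.
class BufferRing {
public:
    explicit BufferRing(int count) : handles_(count), next_(0) {
        for (int i = 0; i < count; ++i)
            handles_[i] = static_cast<unsigned>(i + 1);  // stand-in handles
    }

    // Returns the handle to fill this frame and advances the ring.
    unsigned Acquire() {
        unsigned handle = handles_[next_];
        next_ = (next_ + 1) % handles_.size();
        return handle;
    }

private:
    std::vector<unsigned> handles_;
    std::size_t next_;
};
```

Size the ring so the GPU is guaranteed to be finished with a buffer by the time it comes around again (two or three frames' worth is a common rule of thumb).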


Also consider that ES devices are typically running unified memory and that simply omitting VBOs outright and submitting draw calls from client memory may be faster than doing any of this. Maybe.


Lastly, I recommend reading this book chapter.

#5259507 How to did Spelunky not get sued

Posted by on 28 October 2015 - 10:28 PM

While I'm not familiar with the specific case you're describing, I think this might be generally informative: https://en.wikipedia.org/wiki/Video_game_clone

Note in particular:

In present-day law, it is upheld that game mechanics of a video game are part of its software, and are generally ineligible for copyright.

#5259380 ETC and PVRTC dead to an unified compression ?

Posted by on 28 October 2015 - 01:17 AM

I think you mean ASTC as the common format, not BC. Hopefully everyone will settle on ASTC in a few years, but we're not there yet and there are a lot of legacy devices.

#5259317 Which to learn first: Wwise or FMOD?

Posted by on 27 October 2015 - 03:13 PM

There's very little to "learn" with FMOD. You create a "system" object which can load sounds (either all-at-once or streamed) and play them (once or looped), which returns a channel object. Then you can change properties of the channel they're assigned to, such as volume or pitch. There, now you know FMOD.