

ill

Member Since 18 Jul 2009

#5094067 Classic FPS Games Perspective

Posted by ill on 14 September 2013 - 02:40 PM

Well, nowadays you can probably just use OpenGL.  In fact there are many OpenGL-powered source ports of games like Doom, Duke Nukem, etc.  They don't have to rely on any tricks for rendering geometry.  They just render a bunch of arbitrary polygons and don't even think about perspective; just set up the camera correctly and draw.

 

Since you have the z-buffer, you don't need to worry as much about rendering polygons in the correct order.  Overdraw isn't too much of a problem either; games overdraw all the time nowadays just fine.  You can reduce it a bit by roughly sorting and rendering polygons front to back, though that costs some sorting work.  Some engines do a depth prepass instead: render the geometry to the depth buffer only, with color writes off, so the pixel shaders do no complex work.  Then re-render the scene, and this time there's no pixel shader overdraw on solid geometry.
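In OpenGL the prepass is just a pair of state changes around the two passes (a sketch; drawScene() stands in for whatever issues your draw calls):

// Pass 1: depth only.  Color writes off, so fragment work is trivial.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
drawScene();

// Pass 2: shade with the depth buffer already resolved.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_FALSE);  // depth is already correct, no need to write it again
glDepthFunc(GL_LEQUAL); // accept fragments exactly at the stored depth
drawScene();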

 

You can still make sprite-based FPSes: just draw a flat billboarded rectangle with the sprite instead of a model, as sketched below.  In fact there's even a Killing Floor mod with Doom enemies: https://www.youtube.com/watch?v=X2O7nmuvwDo&noredirect=1
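Billboarding itself is just canceling out the camera rotation.  A minimal sketch with glm, assuming a view matrix with no scale in it:

glm::mat4 billboardMatrix(const glm::mat4& view, const glm::vec3& spritePos) {
    glm::mat3 camRot = glm::transpose(glm::mat3(view)); // transpose == inverse for a pure rotation
    glm::mat4 model(camRot);                            // quad now faces the camera
    model[3] = glm::vec4(spritePos, 1.0f);              // place it at the sprite's position
    return model;
}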

 

Doom also did a few lighting tricks, such as giving walls at different orientations slightly brighter or darker light levels to add contrast.  There was also the diminished distance lighting, making things darker farther away.  Nowadays that's just done with fog.




#5093976 Classic FPS Games Perspective

Posted by ill on 14 September 2013 - 04:54 AM

There's this excellent review: http://fabiensanglard.net/doomIphone/doomClassicRenderer.php

 

Being that it's from back in the day, it's full of hacks that you can probably avoid nowadays.




#5090297 Shader permutation: permutation support detection

Posted by ill on 29 August 2013 - 11:00 PM

In GLSL, unused parts of a shader get optimized out, so even if you mistakenly keep an unnecessary define, the dead code won't be part of the compiled program.  I find this makes debugging REALLY hard: if I comment out a portion of code that uses the diffuse texture, for example, the uniform disappears from the compiled shader, and I then have to comment out the C++ code that passes in the diffuse texture sampler or else my engine complains about an unknown uniform.

 

I found that using something like

min(vec4(0.0), texture2D(diffuseTexture, texCoords))

forces the sample to zero without the compiler optimizing the diffuseTexture uniform out of the shader, so I don't have to recompile the C++ side while debugging.

 

Also, I have a material resource where I specify things like which normal map and which diffuse texture I use.  So these bitmasks are generated automatically for me and there's no human error.  If I provide a normal map, my system automatically does:

 

bitmask |= NORMAL_MAP

bitmask |= TEX_COORDS

bitmask |= TANGENTS

 

If I just provide a diffuse texture I'd say

 

bitmask |= DIFFUSE_MAP

bitmask |= TEX_COORDS

 

since tangents aren't necessary unless you're doing normal mapping or other tangent-space calculations in the shader.  All textures need UVs passed in, so TEX_COORDS is set either way.
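In code, the whole derivation stays tiny.  A sketch, with the Material fields standing in for however your material resource stores its maps:

uint32_t makeBitmask(const Material& mat) {
    uint32_t bitmask = 0;
    if (mat.diffuseMap) bitmask |= DIFFUSE_MAP | TEX_COORDS;
    if (mat.normalMap)  bitmask |= NORMAL_MAP | TEX_COORDS | TANGENTS;
    return bitmask;
}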

 

Since I have a deferred shader I always pass in normals for the deferred shading stage.

 

If I have a fullbright, unlit material, it doesn't need normals and is rendered in the forward shading stage.  If I want to render a lit forward-shaded material, I can set the Fullbright flag to false in the material, meaning it will be affected by lights.  Then I do:

 

bitmask |= NORMALS

 

Then if I need skeletal animation for the material, the user also marks the material as one that will be used on skinned meshes, and then I do:

 

bitmask |= SKELETAL_ANIMATIONS

 

It's possible to generate 2 shaders per material instead: one with skeletal animations and one without.  But I figured I'd almost never try to texture a character with a brick wall texture, and I'd definitely never try to texture a wall with a character skin.  If I really wanted to, I could just create 2 separate materials, and the textures would be shared between them by my resource management system anyway.  I'd still have 2 separate shaders.

 

Then the renderer can also have some asserts and other debug-build-only checks that make sure valid things are passed in, so I don't mistakenly use a mesh without tangents in its VBO with a material that needs tangents.
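Something like this, for example (a sketch; the hasTangents()/hasTexCoords() queries are made-up mesh accessors):

// A material that needs tangents or UVs must be paired with a mesh that has them.
assert(!(material.bitmask & TANGENTS) || mesh.hasTangents());
assert(!(material.bitmask & TEX_COORDS) || mesh.hasTexCoords());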




#5090236 Shader permutation: permutation support detection

Posted by ill on 29 August 2013 - 04:27 PM

My shader system worked pretty much exactly that way as well, using GLSL.  Startup times are just fine, since I reuse already-compiled shader programs between materials.  If I have 10 materials that all use diffuse, normal, and specular maps, and 3 materials that are diffuse-only, I only compile 2 shaders on demand.  Performance is pretty good.
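The reuse can be as simple as a map from permutation bitmask to compiled program.  A sketch (compileProgram() stands in for the actual compile-and-link code; needs <unordered_map>, GLuint comes from your GL header):

std::unordered_map<uint32_t, GLuint> programCache;

GLuint getProgram(uint32_t bitmask) {
    auto it = programCache.find(bitmask);
    if (it != programCache.end()) {
        return it->second;  // already compiled for another material
    }
    GLuint program = compileProgram(bitmask);  // hypothetical compile helper
    programCache[bitmask] = program;
    return program;
}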

 

And if you try to use a uniform or attribute the current permutation doesn't support, glGetUniformLocation returns -1 for uniforms that were optimized out (likewise glGetAttribLocation for attributes), so it's easy to catch during development.

 

When I load up a shader I have my C++ code look at the bit mask.

 

Then I say:

 

if(shaderBitMask & NORMALS) {

   defines += "#define NORMALS\n";

}

 

Then that defines string is passed to the GLSL compiler along with the shader text.  My shader text has:

 

#ifdef NORMALS

   code that uses normals goes here

#endif
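The defines string doesn't have to be spliced into the source by hand: glShaderSource accepts an array of strings and concatenates them, so the defines can simply come first (a sketch; note that if the source starts with a #version directive, that line has to stay ahead of the defines instead):

const char* strings[2] = { defines.c_str(), shaderText.c_str() };
glShaderSource(shader, 2, strings, NULL);
glCompileShader(shader);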

 

At the moment I'm writing an even more flexible system, but this one has worked for my school project for the last year or so.  I was able to render some really huge environments with many shader permutations without any noticeable performance loss.




#5088992 Making one game engine like any game engine, Source Engine, Unreal Engine and...

Posted by ill on 25 August 2013 - 02:53 PM

You may not really want to do that BSP style of renderer nowadays anyway.  Most games now are mesh soups: you create a bunch of modular pieces and put them together like Legos to make a level, then use heightmaps for the terrain.

 

Even Unreal Engine games hardly use BSP now.  If you look at UDK at some of the UT maps that come with it, only a tiny portion of the map is done with BSP geometry.

 

Then if you look at some other maps that come with UDK they are completely made of meshes with 0 BSP geometry.  BSP was good for the hardware of the time, but it also made for static geometry that couldn't easily be modified at runtime.

 

This is also how modern engines like CryEngine, Frostbite, etc. work.  I think even the idTech 5 engine they used for Rage works this way, although I can't say for sure.  I remember John Carmack talking about how artists like to do the very thing I mentioned earlier: make modular set pieces and put them together like Lego blocks.

 

You can then manage the mesh soup however you like: bounding volume hierarchy, octree, uniform grid...

I personally use a uniform grid and ended up doing a master's thesis on it: http://digitalcommons.calpoly.edu/theses/975/
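For reference, the lookup that makes the uniform grid attractive is one line of arithmetic.  A sketch, with cellSize and the grid dimensions standing in for whatever your level uses:

// World position -> flat cell index (assumes the world starts at the origin).
int cellIndex(const glm::vec3& pos) {
    int x = (int) (pos.x / cellSize);
    int y = (int) (pos.y / cellSize);
    int z = (int) (pos.z / cellSize);
    return x + y * gridWidth + z * gridWidth * gridHeight;
}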




#5046060 Random number generation

Posted by ill on 23 March 2013 - 03:48 PM

//why not use random integer code instead of using the random float code to generate a random integer?!?!
int roll = (int) floor(glm::linearRand(1.0f, 7.0f) + 0.5f);

//why not say someValue = roll - 1
if (roll == 1)
	someValue = 0;
else if (roll == 2)
	someValue = 1;
else if (roll == 3)
	someValue = 2;
else if (roll == 4)
	someValue = 3;
else if (roll == 5)
	someValue = 4;
else if(roll == 6)
	someValue = 5;
else if(roll == 7)
	someValue = 6;
else
	someValue = 0;

The things I have to deal with sometimes...
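For reference, the whole thing collapses to a couple of lines with the standard library (a sketch; the original code used glm):

#include <random>

std::mt19937 rng(std::random_device{}());
int someValue = std::uniform_int_distribution<int>(0, 6)(rng);  // uniform 0..6, no branching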




#5046032 Negative Scaling Flips Face Winding (Affects backface culling)

Posted by ill on 23 March 2013 - 01:43 PM

Yeah, I had a symmetric building and the easiest way to make it was to just mirror the hallway on the other side.  I definitely think supporting negative scale is worth it for this kind of thing.  I'm probably just going to switch the face culling mode based on the sign of the matrix determinant.
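The check is cheap (a sketch with glm; modelMatrix is whatever transform is about to be rendered):

// A negative determinant means the transform mirrors geometry
// (an odd number of negative scale axes), which flips the winding.
if (glm::determinant(glm::mat3(modelMatrix)) < 0.0f) {
    glFrontFace(GL_CW);
} else {
    glFrontFace(GL_CCW);
}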




#5041218 OpenGL or DirectX

Posted by ill on 09 March 2013 - 12:23 PM

OpenGL is one of the most universal graphics APIs; it's supported by just about every platform I've developed for.  I haven't developed for consoles, but I'm sure the Xbox uses DirectX, the PS3 uses some libgraphics or something, and the Wii might use OpenGL.

 

Windows, Linux, Mac, iPhone, and Android all use OpenGL, and it takes minimal effort to port my graphics code between those platforms.  I may learn DirectX at some point just because I'm also pretty interested in it.

 

Basically, both OpenGL and DirectX are equally capable of extreme graphics.  OpenGL is sometimes ahead thanks to extensions, while DirectX is straight up: NOPE, I'm DirectX 11, this feature isn't available, wait for DirectX 12.




#5041209 Super Simple Camera Question

Posted by ill on 09 March 2013 - 12:04 PM

I've figured out that the modelView matrix is the inverse of the camera transform matrix.  

 

You don't even have to do a full general inverse; you can do a fast affine inverse.  It's not quite just a transpose: for a rigid transform (rotation plus translation, no scale), the inverse is the transpose of the 3x3 rotation part, with the translation replaced by minus that transposed rotation times the original translation.  All that matters is that if your math library has a fast affine or rigid inverse, use it instead of the general inverse.
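A sketch of that fast inverse with glm, assuming a rigid transform (rotation and translation only):

glm::mat4 rigidInverse(const glm::mat4& m) {
    glm::mat3 r = glm::transpose(glm::mat3(m)); // transpose == inverse for a pure rotation
    glm::vec3 t = glm::vec3(m[3]);              // translation column
    glm::mat4 inv(r);
    inv[3] = glm::vec4(-r * t, 1.0f);           // -R^T * t
    return inv;
}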

 

Also, the normal matrix is just the 3x3 portion of the modelView matrix (for rigid transforms, that is; with non-uniform scale it's the inverse transpose of that 3x3).  Don't make the mistake I made of sending a normal matrix that wasn't transformed by the model's transform: I was sending modelViewProjection * objectTransform to the shader but taking the 3x3 portion of the untransformed modelView matrix.  The correct thing is modelView * objectTransform, then take the 3x3 portion of that.
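With glm that's just the following (a sketch; glm::inverseTranspose lives in <glm/gtc/matrix_inverse.hpp> and also handles the non-uniform-scale case correctly):

glm::mat4 modelView = viewMatrix * objectTransform;
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelView));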




#4927695 Are Pack Files (PAK, ZIP, WAD, etc) Worth It?

Posted by ill on 02 April 2012 - 05:37 PM

I use PhysFS myself and I think it works great. It allows you to not use an archive, but instead mount an actual folder.

This means in development you can still be using PhysFS while working with the resources on disk directly, and then for release create an archive and switch to it by mounting the .pak file or whatever.
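A sketch of that setup (the asset paths and the DEVELOPMENT_BUILD flag here are made up):

#include <physfs.h>

PHYSFS_init(argv[0]);
#ifdef DEVELOPMENT_BUILD
PHYSFS_mount("assets/", NULL, 1);    // loose files straight off the disk
#else
PHYSFS_mount("assets.pak", NULL, 1); // the shipped archive
#endif
// Reading code is identical either way:
PHYSFS_File* file = PHYSFS_openRead("textures/wall.png");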

PhysFS has really nice FileIO functions too.

I find it's also pretty easy to write a little batch script or shell script that creates the archive from a folder in one click if you add something and want to see changed results.


#4912302 Multithreading for Games

Posted by ill on 12 February 2012 - 12:09 PM

Here's a really good article that answered a lot of questions for me. I also really suggest you use Intel Threading Building Blocks, since it implements a lot of the stuff you need, like a task scheduler and lock-free data structures.

Designing the framework of a parallel game engine

I'm still working on my engine, but my idea so far is to have separate threads for the game logic, rendering, and audio. Each thread spawns tasks every update, and the rendering happens completely asynchronously from the game. I've never really been strong on the whole Model-View-Controller idea, especially in games, but I ended up going with that kind of model naturally without even trying: the game thread has a controller that updates the entities, which are like the model, and the renderer is like the view, with scene nodes that get updated by the controller.
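With TBB, the per-update task spawning can be as small as this (a sketch; updateEntities and updateAudio are stand-ins for your own systems):

#include <tbb/task_group.h>

tbb::task_group tasks;
tasks.run([&] { updateEntities(dt); }); // game logic (the "controller")
tasks.run([&] { updateAudio(dt); });
tasks.wait();                           // join before publishing state to the renderer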

Also PhysX 3 uses this kind of task based system which fits naturally into the multithreaded task processing architecture I'm going for.


#4912176 Mixed indoor and outdoor environment

Posted by ill on 11 February 2012 - 10:23 PM

I'm trying to figure out how to manage a scene with some arbitrary mixed indoor and outdoor environment.

I guess the outdoor part would use some heightmap-related stuff and be rendered with proper level of detail.

The indoor areas are extremely detailed buildings with rooms interconnected.

Here's an example with some arbitrary rooms and hills:
[image: sketch of arbitrary rooms and hills]

I was reading about various things such as portal culling for indoor environments. I understand the concept but not how to implement it.

I've done some Doom 3 mapping so I know how to set up portals as a mapper, but I'm not sure how the engine knows what the different rooms between the portals are. It seems to determine what the actual rooms are somehow without the mapper having to say "this is room A," for example. You just place portals in tight corridors and doorways and the engine somehow figures out the rest.

I'm familiar with how back in the day Quake and all those games used BSP trees for this, but from what I hear BSP isn't really used nowadays.

I fooled around a bit with the Crysis 2 Sandbox editor and I feel like the engine I'm making could be similar to that. I haven't been able to find any good tutorials on mapping interior spaces in Crysis 2, though. After playing Crysis 2 I know it's possible to have detailed interior areas inside buildings as well as large outdoor areas; I'm just not sure how level designers do that kind of thing.

All I know is you create terrain and then import 3DS Max models for the more complex shapes. They also have their own modeling tools, similar to 3DS Max's, for modeling directly inside the level. This is fine for modeling a village of very simple buildings that you either can't go inside of, or that have very simple rooms. I'm not sure how they make the more complex levels that you see a lot of in Crysis 2.

I'd also like to know how they would make a map like Operation Metro in Battlefield 3. There's a lot of outdoors going on as well as networks of complex interior rooms inside the buildings and the subway tunnel.

I've also fooled around with the Skyrim Creation Kit a bit and watched some tutorials. They have complex interior areas as separate scenes that the player loads into and out of, which isn't really what I want. I want the player to see a building and walk inside it seamlessly, maybe taking a position at a window and firing down at some enemies. Pretty standard stuff in Battlefield 3.

In my previous work I've been using 3DS max to make arbitrary polygon environments organized by a uniform 3D grid. I have View Frustum Culling operating on the 3D grid which works, but I have no other forms of occlusion to prevent rooms you can't see into from being drawn. This was fine for a 2D platformer rendered in 3D.


#4908228 Creating DLLs from your game engine

Posted by ill on 31 January 2012 - 07:09 PM

I started doing this, but I kinda gave up after I found myself making entire separate DLLs for every tiny little thing.
Instead I'm just going to have one giant library for the engine itself.

I was looking in the Crysis folders and saw they had a lot of separate dlls called things like CryThread.dll, CryFont.dll, so they managed to split up the different parts pretty nicely.

I also found that while in theory a lot of systems can easily be separate, there are still many dependencies between everything. It's definitely good to try to decouple systems as much as you can, but it doesn't work out quite that well in practice.

For example, the developer console is its own system.
The graphics system depends on the developer console for all the console variables and commands to do with graphics.
The developer console depends on the graphics system to display itself.

Basically you start needing to make things extremely generic, which takes a lot of effort. While before I could hook two systems up in five minutes, I now need to take an hour or so to think about how to create complex generic interfaces between them and then implement it. This would be fine if I wanted to release, let's say, a developer console library for the community to use in their games. I have no interest in that, though; all I want to do is make a game rather than spend hours of my day making everything with extremely good design. So I do good design but not perfect design.

In the end I have faster results. An average person playing my game isn't going to think, "OH MY GOD, WHAT AMAZING DESIGN AND DECOUPLING THEY HAVE BETWEEN ALL THE COMPONENTS!!!!" They'll see an awesome game that I was able to make quickly.

On the other hand if you worry about absolutely perfect design at all times, you'll be stuck rewriting your engine constantly for years with no results.

It took me about 6 years of making games before I learned this, and I'm still trying to transition into that line of thinking. It's just hard because I care about good design a little too much sometimes.

So basically it's good to draw the line somewhere. The reality is, it won't be all that much different to patch one DLL or to just patch the entire 3-6 MB executable that is your game. Splitting an engine into different DLLs might seem a lot cleaner, but it takes a lot more effort and thought without very much benefit.

In the end I decided to just have my engine be a giant statically linked library. If I'm making a tool for the engine that uses the renderer, the tool uses the renderer and no other part of the engine. The person making the tool still has access to the ENTIRE engine library which might seem a bit unclean but really it's no different than having access to the entire C standard library when all you want to do is make a "Hello World" application or something.

The important thing is results. Did you deliver the tool maker the necessary library functionality so he can make use of your renderer? Or are you spending months constantly rewriting your engine to have better encapsulation between the components while the person needing your renderer is left waiting?

Now imagine instead that that's a game you could have potentially released and gotten praise for from random gamers who couldn't care less how you made it.

(LOL in the end this rant became more something aimed at one of my friends...)


#4908223 if cube is far don't render.

Posted by ill on 31 January 2012 - 06:52 PM

View frustum culling is basically what you need. If you're just trying to randomly draw lots of cubes for fun or something, don't worry about it. This is more helpful in the actual complex scenes seen in games like Battlefield, Unreal, Quake, etc.

Olof Hedman above was talking about organizing your geometry in a spatial data structure. View frustum culling queries that data structure in some efficient way to get only the parts of your world that are in view.

This way you don't send geometry that you can't see down the pipeline to the card. The card is good at culling invisible triangles, but that doesn't help if you're still using up graphics card bandwidth sending millions of triangles when you might only be seeing thousands.

This doesn't need to be insanely precise. As I said, the card is good at culling invisible triangles, especially if you enable backface culling. You should send an entire model to the card if you determine that even a tiny part of it is visible: it's cheaper to check on the CPU whether the whole model is in the view frustum and let the graphics card cull the model's invisible triangles. The same is true for chunks of the level. If a room is in the view frustum, just draw the whole room and the graphics card will take care of the rest. You'll still save a lot of work for the card by not drawing the ENTIRE level.

You don't wanna cull individual triangles on the CPU with view frustum culling, because the CPU isn't built for that and performance will drop dramatically. Just send entire models or chunks of your environment once you've determined they're roughly in the frustum.
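The coarse test itself is cheap. A sketch that tests a model's bounding sphere against the six frustum planes, stored as vec4s (xyz = inward-pointing normal, w = distance):

bool sphereInFrustum(const glm::vec4 planes[6], const glm::vec3& center, float radius) {
    for (int i = 0; i < 6; ++i) {
        float dist = glm::dot(glm::vec3(planes[i]), center) + planes[i].w;
        if (dist < -radius) {
            return false; // completely outside one plane: cull the whole model
        }
    }
    return true; // at least partially inside: send the entire model to the card
}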

You can extend view frustum culling with occlusion culling. This is a more advanced topic and may only be necessary for games like FPSes; I haven't gotten around to implementing it in my own engine yet, so I'm not exactly a pro at it. But basically, if objects in the view frustum completely occlude other objects in the frustum, the occluded objects don't need to be drawn. So on top of view frustum culling you also cull hidden objects inside the frustum itself, sending even fewer objects to the card for drawing.


#4907906 realistic minimum GL version support in a year?

Posted by ill on 31 January 2012 - 01:04 AM

I'm basically targeting OpenGL 3.3 and later. OpenGL 3.0 cards support 3.3 with a driver update.

So this basically means you need cards from 2007 or later such as the Geforce 8 series.

I'm doing deferred shading, so anything older than that probably won't run the graphics very well anyway due to the high memory bandwidth requirements.

Also, I think the audience for my game will be either gamers with fairly up-to-date PCs with ATI or Nvidia cards, or console gamers. My friend's laptop, with a low-end ATI card from the same era as the GeForce 8 cards, currently runs my engine at about 27 FPS, and his laptop is pretty damn old. Anyone with a laptop older than that is likely not the kind of gamer who would play my game, or can't afford a newer computer and won't be looking for new games to buy.

You just need to make sure you're not wasting your time supporting older hardware when the benefit isn't all that high. It feels great just using modern high-end features. I plan on possibly supporting GL 4.2 as well, and possibly DirectX eventually...



