
Yann L

Member Since 06 Feb 2002
Last Active Mar 30 2012 02:53 PM

#4801999 Free or paid 3D models?

Posted on 23 April 2011 - 11:27 AM

I'd say it's good to at least know how to open a program like Wings3D or Blender and make a quick prototype-shaped object. Also, rigging in Blender, while challenging, can be learned in a few days. It's not as difficult as it might first seem.

It's in fact much more difficult than it first seems. Sure you can get up and running in a few days and even be able to create some acceptable models for testing, beyond the usual teapots and geospheres. And it's certainly a good thing to know, if just to create good testing environments for your game without needing to resort to an artist. But there's a world of difference between the programmer art generated this way and something like this or this. It takes years to master the skills needed to create really good 3D models. And that applies to characters as well as most other complex 3D models or scenes. Keep in mind that 3D art is about much more than just creating the geometry (which is already hard enough). Textures and materials are just as important.

It's actually a major challenge to find really good 3D artists nowadays, even if significant pay is involved.

#4800085 OpenGL worth the try?

Posted on 18 April 2011 - 04:24 PM

My point here is that historically ES has always been a subset of full OpenGL. Use any full OpenGL features that are not supported in the ES version you're targeting and you're in rewrite-land. The extent to which you're in rewrite-land varies, of course, but in a worst-case scenario you may be rewriting your entire rendering backend to go from immediate mode to vertex arrays/VBOs. If that's changed in more recent versions then it's a good step forward, but for older versions it's certainly not full portability. Portability isn't something you magically conjure just by using OpenGL - you still have to work for it.

And the main point of interest there is the term historically. We're talking about writing a new engine here, in 2011. Or heck, let's even go back a few years. We are not talking about obsolete 15-year-old legacy code bases. If your backend relies on immediate mode, then ES portability is going to be the least of your worries. Vertex arrays have been available in OpenGL for 16 years (yes, 16 years!). To put that in perspective, that was around the time DirectX 2.0 was released. VBOs have been available since 2005, give it a year more for universal driver adoption. The FFP has been officially deprecated since OpenGL 3. Portability is a non-issue nowadays. If a developer uses legacy features that may hinder portability, then this is by choice, not by necessity.

D3D10/11 forces this choice, which may indeed be an advantage, as it helps to not propagate obsolete techniques.

Playstation, Nintendo, Android, Apple, Linux. Have you ever heard of these? All of these use OpenGL as opposed to DirectX.

Neither PlayStation nor Nintendo consoles use OpenGL. They use their own proprietary low-level APIs.

#4799009 OpenGL worth the try?

Posted on 15 April 2011 - 08:48 PM

Thanks for your fast reply PropheticEdge. I realize that OpenGL is not a learning library, as it's very powerful and complex for an unskilled developer, but I labeled it as "learning" because as far as I can see most people use it for experiments and small games, but when it comes to big games they switch to DirectX. I'm trying to figure out why that is, as from what I can read OpenGL is easier to learn than DirectX because it abstracts the hardware interaction, while with DirectX you need to handle a lot of things that OpenGL does for you.

Not really. OpenGL and D3D offer a similar level of abstraction. D3D's dominance in current game development has several reasons, many of them historical and logistical. As a quick and dirty summary: until a couple of years ago, OpenGL used to be very badly managed. It was not clear where OpenGL was heading, because in the committee-driven API it used to be, everybody was pulling in a different direction. DirectX, being under the sole control of Microsoft, had a clear evolutionary path. This, coupled with the fact that some hardware vendors were unable or unwilling to deliver stable OpenGL drivers (Intel, and ATI before it was acquired by AMD), pushed the industry towards D3D, which was (rightfully) perceived to be more 'stable' and reliable from a vendor-support point of view.

Nowadays, things have cleared up. OpenGL management has been streamlined and future development of the API is much more transparent. Except for Intel, drivers are now as good and reliable as D3D drivers, and current OpenGL has feature parity with D3D11. So all in all, one could just as well use OpenGL for a modern AAA game. It's not yet widely done because of existing D3D code bases, of course. But the emerging Mac game market might partially shift development back to OpenGL (especially if/when OSX Lion finally supports OpenGL 3.x).

Now, even before that, OpenGL was never a 'learning API'. It has always been, and still is, the most widely used API for industrial 3D, such as CAD/CAM, medical imaging, military applications, flight simulators, etc.

From my gamer experience I found that OpenGL is far slower than DirectX, but I blame the dedication that developers tend to give to DirectX because it's the bigger seller, while OpenGL is for some geeks who play on weird platforms like Linux (I'm that kind of geek; that's why I started with OpenGL).

OpenGL code being slow is a sign of incorrect usage by the developer. Correctly written OpenGL code is just as fast as equivalent D3D code; the bottleneck is the hardware. In fact, OpenGL is usually significantly faster than D3D9 (due to not doing the user-space/kernel transitions D3D9 does). D3D10/11 have eliminated this bottleneck, making OpenGL and D3D pretty much identical performance-wise.

Slow OpenGL code is probably using immediate mode, which is an 'easy' but completely obsolete (and deprecated) way of using the API.

XNA [...] Microsoft's tentacles.

DirectX and XNA are good frameworks, and so is OpenGL. You should choose your technology based on objective assessments of your requirements and your intended target audience. Don't let ideology trouble your objectivity.

#4798843 creating exe file for opengl game

Posted on 15 April 2011 - 11:42 AM

Come on guys, what has become of the newbie friendliness of GDNet? The new site is already enough of a pain; we don't really need to go Slashdot in our replies on top of that.

The OP is obviously a beginner. NSIS might be a powerful tool (although I'm not a big fan), but directing a beginner to an application whose doc starts with "To create a NSIS installer, you first have to write a NSIS script" is asking for trouble. Requiring him to learn how to write the equivalent of an installer makefile just to get such a simple task done is a bit overkill.

yoavi2, try Advanced Installer. It's a very powerful tool, similar to NSIS, but with an extremely simple and straightforward interface. You can build your first MSI with only a few mouse clicks in their wizard. If this is still too complex, a simple self-extracting ZIP will most probably do the job, as BitMaster suggested.

#4783342 How many people have bailed on Gamedev.net?

Posted on 08 March 2011 - 06:18 PM

I find the new site highly irritating. GDNet was and still is the only forum I actively participate in, and the old forum system was an important part of that. It was minimalistic, functional, data centric and efficient. That new IPB stuff is the exact opposite. Now, I fully understand the technical, logistical and financial needs to move on to a new system. And I also realize that we're still in a startup phase here, with many issues still in to-be-fixed status.

Still, the site looks like freaking Facebook. I hate Facebook. Icons everywhere. Menus fading in and out when you move the mouse over stuff. Space inefficient and cluttered layout. Completely unnecessary 'social media features'. Without Benryves' black CSS override I wouldn't even be visiting the forums anymore (thanks a lot for this hack btw, much appreciated !). And I experience a fair share of technical issues as well.

I'm still optimistic that things will get ironed out eventually. But as of now, the new system is very off-putting to me and has considerably reduced the frequency of my visits to the forums.

#4722121 Polygon Tessellation/Triangulation Implementations

Posted on 20 October 2010 - 09:50 AM

I tried this with a couple of other GLU implementations, and I think I know what is going on.

I'll repost Samster's image here for convenience:

Now consider the following three triangles:
9 0 4
9 4 5
9 5 8

The first, 9 0 4, may look like a triangle creating two t-junctions with vertices 5 and 8. But in fact, it does not. It is topologically connected to these points by the two zero-area triangles 9 4 5 and 9 5 8. If you are having trouble visualizing this, imagine vertices 5 and 8 being slightly offset to the right.

So geometrically speaking, the algorithm is not generating t-junctions. It is connecting the vertices using degenerate triangles, which is perfectly fine mathematically. Such a mesh will render correctly, and will be correctly processed by any geometric algorithm that handles zero-area triangles (which a numerically stable algorithm should). For example, creating a connected edge graph on this mesh will work fine. If you removed these two degenerate triangles, then you would in fact have a t-junction, and an edge graph would fail to cover the resulting topological gap.
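To make the zero-area argument concrete, here is a small Python sketch. The coordinates are hypothetical (the original image is not reproduced here), chosen so that vertices 9, 4, 5 and 8 are collinear as in Samster's figure:

```python
# Hypothetical 2D positions; 9, 4, 5 and 8 lie on one vertical line,
# so triangles (9,4,5) and (9,5,8) collapse to zero area.
verts = {9: (2.0, 3.0), 0: (0.0, 0.0), 4: (2.0, 0.0),
         5: (2.0, 1.0), 8: (2.0, 2.0)}

def signed_area(a, b, c):
    """Signed area x2 of triangle abc via the shoelace formula."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)

tris = [(9, 0, 4), (9, 4, 5), (9, 5, 8)]
areas = [signed_area(*(verts[i] for i in t)) / 2.0 for t in tris]
print(areas)  # -> [3.0, 0.0, 0.0]: the last two triangles are degenerate
```

A t-junction detector that only looks at geometry would flag vertex 5 and 8 on the edge 9-4; a topological check sees that the degenerate triangles keep the mesh connected.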

#4609018 floorcaek, bears, and EAT ROPE

Posted on 23 February 2010 - 02:35 PM

In no particular order:

New topic theory
MP3 beating compression
Specular Lightosis
Landfish and EGG
The ultimate thread
Blackmage and the Mexican wrestler incident
Yoda, Fongerchat and all that stuff. Poppet.
The Warsong collection
The pink gang
Resist banners
Go jump off a cliff
Feelevil and his numerous plots of wiping out humanity
That incident where someone (I think it was AdmiralBinary) SQL-injected the GDNet database and ran a script that offset all topic IDs by one
The resulting witch hunt and SHilbert's GDNet 'hack'
The fork story
The rise and fall of Bishop_pass
Nurgle being demodded for massive power abuse
The G art competition
Basically everything involving Nes or Pouya
Nope and "uh, no"

and probably many more I forgot.

Oh yeah, and then the short, yet very intense, time span between the moment someone realized that you could insert JavaScript into your posts and the staff blocking it...

#4542373 VAO slower than not using it

Posted on 15 October 2009 - 10:43 AM

Original post by samoth
No experience here, but a quick Google brought me to the OpenGL forums, where there's a thread with pretty much everyone agreeing that VAO is 100% the same speed-wise as not using it, making it a total waste of time to use.

I don't really agree with this. While not stellar, I have experienced a performance increase of about 10% using them in CPU-limited cases. But this may very well depend on the exact usage scenario. In my case, I was sourcing a large number of vertex attribute streams, so the old-style per-frame calls to multiple glVertexAttribPointer showed a significant impact. I had about 1.2M faces visible in the frame, distributed over about 500 VBOs.

And of course you won't notice any impact if you're not CPU limited. If, for example, you're fragment-shader or memory-bandwidth limited (which is often the case), then the VAO speedup will be about zero.

#4252501 How can i get the world coordinates in a GLSL pixel shader by using the depth...

Posted on 21 June 2008 - 08:41 AM

Original post by bzroom
You are correct in thinking you can reconstruct the world coordinates from screen position and depth (i actually clicked this thread with hopes of finding out how).

There are several methods to do it. The two main ones are either reverse raycasting or simple matrix math. They both give the same results (minus floating point rounding errors), and which performs better depends on your situation (especially which coordsys you need your results in).

I'll quickly outline the second method. The idea is to reverse the screenspace remapping, the perspective divide, the projection transform and the view transform. All that in the pixel shader, as fast as possible. While that seems like a lot to do, it's actually very simple, because almost all operations can be concatenated into a single matrix beforehand.

First, get the current screenspace coordinates of the pixel under consideration. Note that gl_FragCoord returns values in viewport range. We first need to remap them to [0..1], so that we can use them to index the depth texture. We later need to remap them to [-1..1] range for the matrix math.

vec2 screen = (vec2(gl_FragCoord.x, gl_FragCoord.y) - viewport.xy) * viewport.zw;

Here, viewport.xy contain the lower-left offset of your viewport (usually 0,0), and zw contains the reciprocal of your screen size. Look at the values you supplied to glViewport for reference.

Next, get the depth from the depth buffer at the position of our current fragment. I assume a correctly set up and bound depth texture. This will return a depth value in the 0..1 range. Again, we will later need to remap this to [-1..1]:

float depth = texture2D(DepthTexture, screen).x;

Now comes the magic. We build a homogeneous vector from our normalized screenspace coordinates and multiply it with the inverse transform matrix IM. See below for how to construct it:

vec4 world = IM * vec4(screen, depth, 1.0);

Finally, we need to undo the perspective divide by dividing our vector by its w component. See it as a 'dehomogenisation' ;)

world.xyz /= world.w;

world.xyz now contains the worldspace position of the current fragment.

Now, what remains is that IM matrix. Essentially, what we need is a matrix that first remaps our [0..1] coordinates to [-1..1], then undoes the projection, and finally undoes the view transform (camera matrix).

Assuming column major order, the matrix is constructed as follows:

IM = inverse(ProjectionMatrix * CameraMatrix) * RemapMatrix;

RemapMatrix is a simple scale + translate matrix, that will remap an input vector from [0..1] to [-1..1] range.
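As a sanity check on the remap step, here is what that scale + translate matrix looks like as plain numbers. This is a sketch in Python rather than GLSL, written row-major for readability; the scale is 2 and the translation is -1 on each axis:

```python
# RemapMatrix: scales by 2 and translates by -1, taking [0..1] to [-1..1]
# on x, y and z. Row-major nested lists; the translation sits in the
# last column under the column-vector convention used in the post.
REMAP = [[2.0, 0.0, 0.0, -1.0],
         [0.0, 2.0, 0.0, -1.0],
         [0.0, 0.0, 2.0, -1.0],
         [0.0, 0.0, 0.0,  1.0]]

def mul(m, v):
    """Multiply a 4x4 matrix m by a column vector v."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

print(mul(REMAP, [0.0, 0.0, 0.0, 1.0]))  # -> [-1.0, -1.0, -1.0, 1.0]
print(mul(REMAP, [0.5, 0.5, 0.5, 1.0]))  # -> [0.0, 0.0, 0.0, 1.0]
print(mul(REMAP, [1.0, 1.0, 1.0, 1.0]))  # -> [1.0, 1.0, 1.0, 1.0]
```

Since RemapMatrix is multiplied on the right in IM, it is applied to the vector first, exactly as the derivation requires.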

#4157230 Copy protection - how do I do?

Posted on 11 February 2008 - 01:28 AM

Original post by dktekno
However, if I include the SDK, it is basically the source code for whole game.

That would be a very badly designed SDK, then... The entire idea of an SDK is not to ship any source code, besides abstract interfaces in header files and precompiled libs.

Original post by dktekno
The problem is that people might then easily remove the copy-protection code within the source code, them compile it and then they have a copy-protection free version they can distribute to everyone.

Well, doh [grin]

Seriously, what's up with all these newbie runs on copy protection systems lately? Look mate, large publishers put millions and millions of dollars into the development of cutting-edge copy protection systems, and they still get cracked within days by some Russian or Chinese cracking group. Do you really think you can do any better, without any prior experience?

Let's settle that once and for all, shall we ?

Implementing your own copy protection scheme is a completely futile effort. The more energy you put into making it crack-proof, the more time you'll waste. It will always be cracked, often within only a fraction of the time that you invested into developing it.

#4151740 GLSL: Bump mapping. Is tangent needed?

Posted on 02 February 2008 - 10:24 PM

No, it's not inherently needed. It depends on how you do bumpmapping.

Objectspace bumpmapping doesn't need tangent or bitangent information at all. The normalmap encodes the normal in objectspace, so it can be used for lighting calculations directly (assuming you do lighting in objectspace, otherwise you'll need a coordsys adjustment, but still no tangents). The drawback is the large memory footprint of such maps. On modern hardware, and with powerful normalmap compression, this might not be such a problem anymore.
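As an illustration of why no tangent frame is involved, here is a minimal Python sketch. The texel value and light direction are made up: an object-space normal map texel is simply decoded from its [0..1] storage range and dotted with an object-space light vector, with no tangent or bitangent anywhere.

```python
import math

def decode_normal(texel):
    """Map an RGB texel from [0..1] storage range back to a [-1..1] normal."""
    n = [2.0 * c - 1.0 for c in texel]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# Hypothetical texel encoding an object-space normal, and an
# object-space light direction.
texel = (0.5, 0.5, 1.0)   # decodes to roughly (0, 0, 1)
light = (0.0, 0.0, 1.0)

n = decode_normal(texel)
lambert = max(0.0, sum(a * b for a, b in zip(n, light)))
print(lambert)  # -> 1.0 for this texel/light pair
```

The tangent-space variant would need an extra per-pixel rotation of the light (or the normal) into the tangent frame before that dot product, which is exactly where the tangent comes in.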

When using tangent space bumpmapping, you'll obviously need the tangent. Otherwise, you can't define the coordinate space the normalmap is encoded in, and you won't be able to correctly transform the normal. However, supplying the tangent per vertex is only one possibility. On modern hardware, you can directly evaluate the tangent in the fragment shader, by local finite differencing. This is probably going to be more expensive than simply supplying the precomputed tangent on current hardware, but this will quickly change in the future. It can already be very viable on skinned geometry, since you don't have to transform the tangents and/or bitangents anymore.

There are several different ways to compute the vertex tangents. Each approach has advantages and drawbacks, often dependent on the style of modelling you use in your geometry. Eric posted a pretty good method on his website.

#3900058 Tutorial on HDR or the components of HDR?

Posted on 26 February 2007 - 12:42 PM

Ok, let's take that apart:

Original post by all_names_taken
1. which ARB extensions do I need support for in order to set up HDR?

Not that much. Essentially, you need:

* GLSL support (ARB_shading_language_100, ARB_shader_objects, ARB_vertex_shader, ARB_fragment_shader)
* FBOs (EXT_framebuffer_object)
* FP texture support (ARB_texture_float or ATI_texture_float)
* A way to handle NPOT textures is highly recommended, but not essential.
* Some additional extensions will make it faster or nicer (eg. EXT_framebuffer_multisample)

Original post by all_names_taken
2. how do I create a floating point render target texture and set up so things are rendered to it? How do I turn off this binding again once I'm ready, in order to redirect rendering to the frame buffer again?

Look into standard FBO usage. Floating point textures are handled just like normal ones, only with a different internal format (eg. GL_RGB16F_ARB). So you would create an FBO, add a depth renderbuffer to it (to get z-buffering), and attach your floating point texture to it. Once done rendering, you detach the texture again and use it as a normal one.

Original post by all_names_taken
3. I'm using SDL, which AFAIK does surface creation for the frame buffer. Does this affect how I will handle floating point render targets and how I later render to the framebuffer from them?

I don't know much about SDL, but I don't think it would interfere.

Original post by all_names_taken
4. how do I make an fp render target texture? Also: should I use fp16, or fp32, or some other format? Which is the standard used in most modern games that support HDR?

You simply detach it from the FBO, and bind it as a normal texture. On current hardware, you should stick with FP16 formats.

Original post by all_names_taken
5. once having created my fp render target, how do I set up so the rendering goes to it? And how will my shader code - if at all - be affected by rendering to an fp render target texture instead of the frame buffer?

You use glBindFramebufferEXT to bind the framebuffer object, with your FP texture attached to it. Your shaders are (usually) not affected, unless you manually clamp/saturate to [0..1] range somewhere.

Original post by all_names_taken
6. when I do quad rendering from an fp texture to the frame buffer with a shader, will my shader code be affected in any way, or will a simple skeleton shader just clamp color to 0..1 automatically because the render target is a non-fp target?

This will not work well. When rendering the FP16 texture to the framebuffer, you need to do something called 'tonemapping'. Basically, you have to convert the HDR data to LDR in the framebuffer. Simply clamping is an option, but doing so would render the entire HDR setup useless - you'd be doing the exact same thing as if your entire pipeline were LDR in the first place! Instead, you have to range-compress the data. Many different tonemapping operators exist: static ones, dynamic or adaptive ones (simulating the human iris), simple ones and Über-complex ones. Choosing a good tonemapping operator depends highly on the visual result you're looking for, and is almost a religious debate for some people.

One of the easy ones is simple exposure. It typically looks something like this:

uniform float Exposure;
uniform sampler2D FP16Texture;

void main()
{
    vec4 c = texture2D(FP16Texture, gl_TexCoord[0].xy);
    gl_FragColor.rgb = vec3(1.0, 1.0, 1.0) - exp2(-Exposure * c.rgb);
}

The exposure value controls how light or dark your final image will be. Much like the f-stop value on a real camera.
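The same operator in Python, just to show the shape of the curve (the exposure values here are arbitrary): HDR inputs of any magnitude are compressed towards 1.0, and a higher exposure brightens the result.

```python
def tonemap(c, exposure=1.0):
    """Simple exposure tonemapping: 1 - 2^(-exposure * c)."""
    return 1.0 - 2.0 ** (-exposure * c)

# Black stays black, and large HDR values are squeezed towards 1.0.
for hdr in (0.0, 0.5, 1.0, 4.0, 16.0):
    print(hdr, tonemap(hdr))
```

Note there is no explicit clamp: the curve itself keeps the output inside the displayable range, which is the whole point of the operator.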


Also, when I bind the fp texture, do I use the normal code for binding textures to shaders, like this:

Yes. Floating point textures are nothing special, they're just another internal format.


Bloom != HDR. Close but not quite ;-)

Correct. Bloom can be done even without HDR, although it usually doesn't look very good. Note that HDR also looks much nicer with added bloom.

#3530679 Do you still support fixed function pipeline?

Posted on 24 March 2006 - 12:13 AM

Wise words from Prozak there.

With a well designed engine, you shouldn't even need to decide these sort of things. FFP, SM2, SM3 - they're all just abstract concepts of hardware capability.

So, do I support the FFP? Yes, of course I do. But do I have FFP codepaths, separated by ugly if/else blocks or similar? No, definitely not. My engine itself isn't even aware of the existence of either the FFP or SM levels. It only selects the appropriate shaders depending on capabilities, and these can fall back to anything, including an FFP shader if necessary.

A lot of people don't have the latest high-end cards, especially notebook users. Or they have cards that support a higher shader model, but do so very slowly (eg. the GF 5200). These people might prefer playing with less eye candy, but at a much higher framerate. Give them this option.

I've seen a lot of shader abuse over the years, especially with the advent of high-level shader languages. People add a small effect that uses SM3 features, and although it isn't even visible 99% of the time (or worse, it looks ugly), they kill compatibility with >80% of their potential target market. Often these effects could just as well have been designed with lower shader models (or even - gasp - the FFP!), but SM3 just seems 'easier'. That's not the way it should work.

Rules of thumb:

* Provide fallback paths, automatic ones and manual ones.

* Use a shader model only as high as your effect requires it. Don't use SM3 just for the heck of it, or because that per pixel 'if()' just looks so easy.

* Don't sacrifice advanced effect performance for the fallback shaders, and vice versa. Yes, it is perfectly possible to include fallback support without additional performance cost for next-gen shader paths.

#2610396 oblique frustum clipping for projectively textured water plane

Posted on 11 August 2004 - 08:18 AM

Original post by _the_phantom_
Also, iirc, when you projectively texture you should multiply the texture matrix by the projection matrix that was used to create the texture. However, you can't do that by just calling Eric's code, as it does a glLoadMatrixf() not a glMultMatrixf(); instead you need to store that matrix before you reset the projection matrix.

You shouldn't use Eric's modified matrix in the projection pass. While it is true that the same projection matrix used to generate the texture should be applied when projecting, this isn't the case with the oblique frustum trick. The reason is that this modification isn't supposed to change the fragment's coordinates in any way, other than culling some of them away. Unfortunately, the trick does in fact modify the fragment's depth component, but this is an unwanted (yet unavoidable) artifact of the method. You wouldn't want to reproduce that behaviour in the projection pass.

Since the oblique modification won't touch the x,y components, you typically won't see any differences when projecting with or without the oblique matrix - because standard projective texture mapping doesn't use the [r] component. But some advanced per-pixel effects (water-haze or some methods to do sparkles, for example) make use of the [r] component, and will be messed up if you apply the oblique matrix in the projection pass.

There is one exception to that rule: if your projective texture contains depth values in addition to the colour. Because then, the depth components will be in oblique space, instead of the standard perspective one. This is an uncommon situation though, and there are methods to avoid it entirely by interpolating an additional non-oblique depth channel when creating the texture.
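A quick way to convince yourself of the s/t argument above (a Python sketch with a made-up projection matrix): the oblique trick only replaces the third row of the matrix, so the projected depth (r) changes while the x/y lookup coordinates (s/q, t/q) stay exactly the same.

```python
def project(m, p):
    """Apply a 4x4 row-major matrix to a point, then do the homogeneous divide."""
    x, y, z, w = [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]
    return [x / w, y / w, z / w]

# A made-up perspective-style matrix, and the same matrix with a
# modified third row -- mimicking what the oblique near-plane trick does.
proj = [[1.0, 0.0,  0.0,  0.0],
        [0.0, 1.0,  0.0,  0.0],
        [0.0, 0.0, -1.2, -2.2],
        [0.0, 0.0, -1.0,  0.0]]
oblique = [row[:] for row in proj]
oblique[2] = [0.3, 0.4, -1.5, -2.0]   # arbitrary replacement third row

p = [1.0, 2.0, -5.0, 1.0]
a, b = project(proj, p), project(oblique, p)
print(a[:2], b[:2])   # x and y (and thus s/q, t/q) are identical
print(a[2], b[2])     # depth (r/q) differs
```

So a shader that only does the standard texture2DProj-style lookup never sees the difference, while anything reading the [r] component does.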

#2531255 Displacement mapping and terrain rendering

Posted on 28 June 2004 - 02:38 PM

Original post by Raloth
Using your parametric method, how do you handle level of detail? Wouldn't this require updating vertex buffers every time something changes?

Of course you need to re-upload data to the card in this case, since the tessellation is currently still done on the CPU. But caching can help a lot, and so can discretizing the LOD steps with vertex-shader-based geomorphing between them. Actually, in my engine I'm using the same LOD technique on terrain patches as the one I use for generic objects - a tessellated patch is considered a standard mesh object, albeit a temporary one.

Charles B, sBibi: I think it's clear we aren't talking about the visual surface appearance when comparing heightmaps with other approaches. Surface rendering and topographic representation are obviously independent of each other. So it's not so much about the visual appearance of the terrain surface, but about topographical features. The fact that heightmaps limit any possible slope to 90° (at infinite height of a grid point) is a tremendous constraint when designing terrains. Phenomena such as erosion especially will change geomorphology into heavily concave shapes, which just cannot be accurately represented by a heightmap.

I mean, think about it: limiting your terrain to a heightmap is like limiting your level geometry to a dataset from a 2.5D raycaster such as Doom-1! Sounds extreme, but unfortunately true. For an accurate geometric representation of complex terrain shapes, you just need more than one height per grid point, and you need slopes greater than 90°.

As sBibi mentioned, it's mostly an augmented realism thing, and can also be a nice addition to gameplay.


The challenges will be to combine many technologies into a coherent whole. Various techniques at different LODs, for water, trees, grass, ground, ...

I certainly agree with that.