

Vanderry

Member Since 23 Jul 2009
Offline Last Active Apr 03 2015 04:36 AM

Topics I've Started

Maximum color output

02 April 2015 - 01:49 AM

Hello!

 

I'm attempting to create a classic RTS fog-of-war effect by rendering a number of circle gradients into a texture. It would appear black where there is no vision, white where there is an observer, and an in-between value for areas that have been explored and then left. The individual color channels could be used to store different stage values.

 

My first thought was to sample the target texture in the shader, to make sure that max(newColor, oldColor) is stored, but I have since learned that reading from the texture you are currently rendering to is not supported in GLSL. OpenGL ES 2.0 also doesn't seem to support glBlendEquation with GL_MAX (it is only available through the EXT_blend_minmax extension). Might there be some other way that wouldn't require a second texture?
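If EXT_blend_minmax happens to be available, glBlendEquationEXT(GL_MAX_EXT) would give the desired behaviour directly. Failing that, one fallback I can think of is to keep the fog texture in CPU memory, fold new visibility values in with max() there, and re-upload the buffer with glTexSubImage2D. A minimal sketch of that idea (FogBuffer and Accumulate are made-up names for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical CPU-side fallback: keep the fog texture in main memory and
// emulate glBlendEquation(GL_MAX) per texel before uploading the buffer.
struct FogBuffer
{
    int width = 0;
    int height = 0;
    std::vector<std::uint8_t> texels; // single channel; one per stage value

    FogBuffer(int w, int h) : width(w), height(h), texels(w * h, 0) {}

    // Equivalent of max-blending a newly rendered visibility value.
    void Accumulate(int x, int y, std::uint8_t visibility)
    {
        std::uint8_t& stored = texels[y * width + x];
        stored = std::max(stored, visibility);
    }
};
```

The point is that explored areas never darken again: accumulating a lower value than what is stored leaves the texel untouched.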

 

- David


Shadow mapping with GLSL

09 March 2015 - 01:32 PM

Hey guys!

 

I'm having some serious trouble implementing shadow mapping in my OpenGL/OpenGL ES application. There are many sources and discussions out there, but they often cover different versions of the API or omit some details. The way I'm doing it is by first constructing an orthographic projection matrix based on a light source. The "frustum" is adjusted to contain a terrain mesh.

 

Vector3 offset = {1.0f, -1.0f, 1.0f}; // Light direction
offset.Normalize();
offset *= 30.0f;
Vector3 center = {15.0f, 0.0f, 15.0f};
Vector3 eye = center - offset;
Vector3 up = {0.0f, 1.0f, 0.0f};
Matrix4 viewMatrix = LookAt(eye, center, up);

Matrix4 projectionMatrix = Ortho(-30.0f, 30.0f, -30.0f, 30.0f, 0.0f, 60.0f); // Depth range is 60.0

Matrix4 shadowMatrix = viewMatrix * projectionMatrix; // Assumes Matrix4::operator* applies the left operand first; with column-vector (GLSL-style) math this composite is conventionally written projectionMatrix * viewMatrix

 

Then I draw all the models that populate the world into the shadow map and project it onto the terrain. It looks great for objects that are positioned above ground; the problem emerges when they are not, in which case they are projected just the same. They should still appear in the shadow map, so I am assuming that it is the depth comparison that fails.

The shadow map is rendered directly to a texture (no render buffer involved) with the settings GL_RGBA and GL_UNSIGNED_BYTE, attached through GL_COLOR_ATTACHMENT0. For the color value I have tried a linearized and normalized gl_FragCoord.z, but now I am more inclined to use the same manually projected vertex z that is used in the other render step. The shaders look roughly like this:

 

// Shadow vertex shader
attribute vec4 a_Position;
uniform mat4 u_ShadowMatrix;
uniform mat4 u_WorldMatrix;
varying vec4 v_ShadowCoordinate;
void main()
{
  v_ShadowCoordinate = u_ShadowMatrix * u_WorldMatrix * a_Position;
  gl_Position = v_ShadowCoordinate;
}
 
// Shadow fragment shader
varying vec4 v_ShadowCoordinate;
void main()
{
  gl_FragColor = vec4(v_ShadowCoordinate.z / 60.0); // Without packing
}
 
// Render vertex shader
uniform mat4 u_WorldMatrix;
uniform mat4 u_ViewProjectionMatrix;
uniform mat4 u_ShadowMatrix;
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinate;
varying vec2 v_TextureCoordinate;
varying vec4 v_ShadowCoordinate;
void main()
{
  // I have a suspicion this might mess up the z, but it's vital for the projection
  mat4 shadowBiasMatrix = mat4(
    0.5f, 0.0f, 0.0f, 0.0f,
    0.0f, 0.5f, 0.0f, 0.0f,
    0.0f, 0.0f, 0.5f, 0.0f,
    0.5f, 0.5f, 0.5f, 1.0f);
 
  v_TextureCoordinate = a_TextureCoordinate;
  v_ShadowCoordinate = shadowBiasMatrix * u_ShadowMatrix * u_WorldMatrix * a_Position;
  gl_Position = u_ViewProjectionMatrix * u_WorldMatrix * a_Position;
}
 
// Render fragment shader
uniform sampler2D u_Texture;
uniform sampler2D u_ShadowMap;
varying vec2 v_TextureCoordinate;
varying vec4 v_ShadowCoordinate;
void main()
{
  vec4 textureColor = texture2D(u_Texture, v_TextureCoordinate);
  vec4 shadowMapColor = texture2D(u_ShadowMap, v_ShadowCoordinate.xy);
  if ((shadowMapColor.x * 60.0) < v_ShadowCoordinate.z) // Same projection as in the shadow shader, right?
    shadowMapColor = vec4(vec3(0.5), 1.0);
  else
    shadowMapColor = vec4(1.0); // No shadow
  
  gl_FragColor = textureColor * shadowMapColor;
}
 
I have tried different ways of computing the shadow map, with more elaborate packing methods. Ultimately I am assuming that OpenGL clamps the values between 0.0 and 1.0, or rather 0 and 255 in the case of my texture format. There has to be a more obvious problem with my implementation. I can mention that changes to the render state such as dithering, depth testing and face culling don't make a noticeable difference. Can anyone think of a possible flaw? Any tips are greatly appreciated.
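On the packing point: a GL_RGBA/GL_UNSIGNED_BYTE target only gives 256 depth steps per channel, so the usual trick is to spread the depth across all four bytes. Here is a CPU-side sketch of that pack/unpack round trip, assuming a depth in [0, 1) (in the shaders the same arithmetic is normally done with fract() and a dot product; the function names here are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstdint>

// CPU mirror of the common GLSL trick of packing a depth value in [0, 1)
// across the four bytes of an RGBA8 texel, 8 bits per channel.
std::array<std::uint8_t, 4> PackDepth(float depth)
{
    std::array<std::uint8_t, 4> rgba{};
    float value = depth;
    for (int i = 0; i < 4; ++i)
    {
        value *= 256.0f;
        float integral = std::floor(value);
        rgba[i] = static_cast<std::uint8_t>(integral);
        value -= integral; // carry the fractional remainder to the next byte
    }
    return rgba;
}

float UnpackDepth(const std::array<std::uint8_t, 4>& rgba)
{
    return rgba[0] / 256.0f
         + rgba[1] / (256.0f * 256.0f)
         + rgba[2] / (256.0f * 256.0f * 256.0f)
         + rgba[3] / (256.0f * 256.0f * 256.0f * 256.0f);
}
```

With single-channel storage, two surfaces closer than 1/256 of the depth range compare as equal, which matches the kind of false-shadow symptoms described above.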
 
- David

Data oriented design

23 February 2015 - 01:46 PM

Hey you cool code crunchers!

 

I'm trying to wrap my head around data oriented design. There are many good explanations and examples out there, but I would love to verify that my understanding is correct before making changes to my old code.

 

So far I have been trying to keep my data simple by avoiding inheritance and using only PODs, std::vectors and nested structs. As a very simplified example:

 

struct State
{
  struct Unit
  {
    I32 maxHealth;
    I32 health;
    Float rotationAngle;
    Vector3 position;
  };
 
  struct Player
  {
    std::vector<Unit> units;
  };
 
  std::vector<Player> players;
};

 

My first intuition would be to redefine the data like this:

 

struct State
{
  struct Unit
  {
    I32 maxHealth;
    I32 health;
    I32 positionVector3Index;
    Float rotationAngle;
  };
 
  struct Player
  {
    std::vector<I32> unitIndices;
  };
 
  std::vector<Player> players;
  IndexedVector<Unit> units;
  IndexedVector<Vector3> vector3s;
};

 

IndexedVector is just an std::vector that keeps element indices stable by flagging vacated slots as vacant instead of erasing them (unless the vacancies form a sequence at the end, in which case actual deallocation occurs). I'm sure a more refined variation exists somewhere.
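For reference, a minimal sketch of the kind of IndexedVector I mean: a free list recycles vacated slots so that outstanding indices into live elements never move (the trailing-sequence deallocation is omitted here, and the names are my own):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stable-index vector sketch: Remove() flags the slot vacant and
// pushes it onto a free list; Insert() recycles a vacant slot when one
// exists, so indices handed out earlier stay valid.
template <class T>
class IndexedVector
{
public:
    std::size_t Insert(const T& value)
    {
        if (!freeSlots_.empty())
        {
            std::size_t index = freeSlots_.back();
            freeSlots_.pop_back();
            elements_[index] = value;
            occupied_[index] = true;
            return index;
        }
        elements_.push_back(value);
        occupied_.push_back(true);
        return elements_.size() - 1;
    }

    void Remove(std::size_t index)
    {
        occupied_[index] = false;
        freeSlots_.push_back(index);
    }

    T& operator[](std::size_t index)
    {
        assert(occupied_[index]); // accessing a vacant slot is a logic error
        return elements_[index];
    }

private:
    std::vector<T> elements_;
    std::vector<bool> occupied_;
    std::vector<std::size_t> freeSlots_;
};
```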

 

Does this look right, or should I also separate PODs of different types but equal memory size, such as 32-bit integers and floats? It seems to me that when an index would occupy as much or more memory than the referenced data type, separation wouldn't be a good idea. Thank you very much for any tips!

 

PS. I managed to close my browser while writing this, so the post had to be restored from memory. Sorry if I lost some points along the way.

 

- David


Variable id mapping

13 February 2015 - 09:11 AM

Hey guys!

 

I frequently encounter situations where I want to map variables in C++ to data in scripts or XML files. It feels most intuitive to associate them with string ids, and so far I have done the parsing either by using templates such as:

 

template <class T>
struct NamedVariable
{
  const char* name;
  T* variable;
};
 
and supplying overloaded functions with the same name but different parameter lists (letting the compiler figure out which one to use).
Or I would add a datatype enumerator to the structure and specify the type manually:
 
struct NamedVariable
{
  enum Type { Type_Integer, Type_Float, Type_StlString, etc... };
  const char* name;
  Type type;
  void* variable;
};

 

Is either of these methods preferable, or risky beyond my understanding (I'm most wary of the void* casting)? Or might there be another way entirely? I seem to remember one could paste snippets of C-like code on these forums before; hope you'll excuse the crude format. Thank you for your time!
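One alternative that avoids the void* cast entirely: let each registration capture a typed parser in a std::function, so the type is fixed by the compiler at bind time and no enum/cast pairing can go out of sync. A rough sketch (VariableBinder and its members are invented names, and stream extraction stands in for whatever parsing the script/XML layer really does):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <sstream>
#include <string>

// Each Bind() call captures a lambda that knows the concrete type of the
// bound variable; Assign() just looks up the parser by name and runs it.
class VariableBinder
{
public:
    template <class T>
    void Bind(const std::string& name, T* variable)
    {
        parsers_[name] = [variable](const std::string& text)
        {
            std::istringstream stream(text);
            stream >> *variable; // operator>> resolves the type statically
        };
    }

    // Returns false when no variable was registered under this name.
    bool Assign(const std::string& name, const std::string& text)
    {
        auto it = parsers_.find(name);
        if (it == parsers_.end())
            return false;
        it->second(text);
        return true;
    }

private:
    std::map<std::string, std::function<void(const std::string&)>> parsers_;
};
```

The trade-off versus the enum table is a heap-allocated closure per entry, but in exchange the dangerous cast and the manual type tag both disappear.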

 

- David


Matrix decomposition

30 January 2015 - 08:24 AM

Hey guys!

 

Quite simply: if I maintain one transformation matrix composed of translations, scalings and rotations, and another containing just the rotations, could I extract the resulting scale (and possibly translation) by multiplying the combined matrix by the inverse of the rotation matrix? I have a feeling the math might not be that simple, but I'm just not sure. Thanks in advance!
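For what it's worth, if the combined matrix is built as T * R * S (column-vector convention, no shear), the separate rotation matrix isn't even needed: the translation is simply the fourth column, and each scale factor is the length of the corresponding basis column, since that column is the rotated, scaled unit axis and rotation preserves length. Multiplying by the inverse rotation only cleans things up when the rotation sits on the outside of the product. A small self-contained demonstration (the Mat4 helpers here are minimal stand-ins, not the engine's Matrix4):

```cpp
#include <cassert>
#include <cmath>

// Tiny column-major 4x4 helper just for this demo; m[column][row].
// Vectors transform as M * v, so T * R * S scales first, rotates, then translates.
struct Mat4
{
    float m[4][4];
};

Mat4 Identity()
{
    Mat4 r = {};
    for (int i = 0; i < 4; ++i)
        r.m[i][i] = 1.0f;
    return r;
}

Mat4 Multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c][row] += a.m[k][row] * b.m[c][k];
    return r;
}

Mat4 Translation(float x, float y, float z)
{
    Mat4 r = Identity();
    r.m[3][0] = x; r.m[3][1] = y; r.m[3][2] = z;
    return r;
}

Mat4 RotationZ(float angle)
{
    Mat4 r = Identity();
    r.m[0][0] = std::cos(angle); r.m[0][1] = std::sin(angle);
    r.m[1][0] = -std::sin(angle); r.m[1][1] = std::cos(angle);
    return r;
}

Mat4 Scale(float x, float y, float z)
{
    Mat4 r = Identity();
    r.m[0][0] = x; r.m[1][1] = y; r.m[2][2] = z;
    return r;
}

// With no shear, the x scale of a T * R * S composite is the length of the
// first basis column (rotation leaves the column length untouched).
float ScaleX(const Mat4& m)
{
    return std::sqrt(m.m[0][0] * m.m[0][0]
                   + m.m[0][1] * m.m[0][1]
                   + m.m[0][2] * m.m[0][2]);
}
```

The same column-length trick gives the y and z scales, and the translation falls out of column 3 without touching the rotation at all.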

