
PaloDeQueso

Member Since 16 Oct 2001
Offline Last Active Apr 21 2014 11:02 PM

Topics I've Started

Pipelining Game Engine Question

20 June 2013 - 06:59 AM

I'm working on a design for a pipelined game engine, with some data parallelism within each pipeline stage as well.

 

Say I have four stages: input/networking, physics/animation, logic/AI, and rendering. If I understand the idea of pipelining correctly, each stage gets its own copy of the game state, including a copy of the physics simulator (in my case a Bullet physics world object), so that each stage can operate concurrently without affecting the others.

 

Now let's say that at stage 3 an object gets added to the simulation. Stages 1, 2, and 4 are now out of sync, and since their copies of the simulation state are ahead of or behind stage 3's, how can I even add this object to those copies without having them be out of sync from the start?

 

I'm sure this is something that's commonly solved; I just haven't been able to find any literature on the subject.
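The only approach I've come up with on my own is to funnel structural changes (spawns and removals) through a per-frame command queue that every stage applies to its own state copy at the same logical frame, though I have no idea if that's the standard solution. A minimal sketch of what I mean, with all names hypothetical:

#include <functional>
#include <mutex>
#include <vector>

struct GameState; // each pipeline stage owns one copy of this

// A structural change (e.g. "add this rigid body") recorded as a
// closure instead of being applied directly to one stage's copy.
using Command = std::function<void(GameState&)>;

class FrameCommandQueue {
public:
	void push(Command c) {
		std::lock_guard<std::mutex> lock(mutex_);
		pending_.push_back(std::move(c));
	}

	// Each stage calls this on its own state copy when it reaches the
	// frame this queue belongs to, so every copy sees the change at
	// the same point in simulated time.
	void applyTo(GameState& state) {
		std::lock_guard<std::mutex> lock(mutex_);
		for (auto& c : pending_) c(state);
	}

private:
	std::mutex mutex_;
	std::vector<Command> pending_;
};

The idea would be to keep one queue per in-flight frame, so a stage that is still a frame or two behind applies the commands only when it catches up to the frame where the object was spawned.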

 

Thanks in advance!


Atmospheric Shader

18 July 2012 - 07:55 AM

I started working on an atmospheric shader. I don't want it to look exactly like the Earth every time, so I looked for more configurable scattering algorithms and came across this: http://petrocket.blo...-treegrass.html. It looks more like what I was after, so I converted it to GLSL and adapted it to world-space calculations, since it runs in my deferred renderer, which does everything in world space.

The only issue is that I get odd results when my distance from the planet is less than the light's distance from it. The shader seems to think I'm already inside the atmosphere or something. It looks great far away, and great inside the atmosphere, but just terrible in the middle ground.
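(For reference, the near intersection with the outer shell in the fragment shader below is just the standard ray/sphere quadratic. With world-space camera position o, unit ray direction d, and outer radius R, substituting |o + t*d|^2 = R^2 gives

t^2 + B*t + C = 0,  with  B = 2*dot(o, d),  C = |o|^2 - R^2,  t_near = 0.5 * (-B - sqrt(max(0, B^2 - 4*C)))

which is where the B, C, det, and near_distance variables come from.)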
I'll post screenshots and shaders for analysis...

Good far-away screenshot: [image]

Good inside-atmosphere screenshot: [image]

Bad middle-ground screenshot: [image]

Here is my shader. Remember, all incoming positions, normals, etc. are in object space, and I output to my deferred renderer in world space. In the fragment shader you'll notice I'm using MRTs; I only write to the last one in the case of a translucent, forward-shaded object like this one. Any thoughts would surely be appreciated!

Vertex Shader
#version 130
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
uniform mat4 normal_matrix;
in vec4 vertex_position;
in vec4 vertex_normal;
in vec4 vertex_texcoord;
in vec4 vertex_binormal;
in vec4 vertex_bitangent;
in vec4 vertex_weights;
in ivec4 vertex_weight_indices;
smooth out vec3 object_position;
smooth out vec3 world_position;
smooth out vec3 normal;
void main() {
	object_position = vertex_position.xyz;
	world_position = (model_matrix * vertex_position).xyz;
	// the mesh is a sphere centered at the origin, so the normalized
	// object-space position doubles as the surface normal
	normal = normalize(object_position);
	gl_Position = projection_matrix * view_matrix * model_matrix * vertex_position;
}

Fragment Shader
#version 130
smooth in vec3 object_position;
smooth in vec3 world_position;
smooth in vec3 normal;
uniform vec3 camera_position;
uniform float material_specularity;
uniform vec4 material_color;
uniform int normal_mapping_enabled;
uniform int receives_lighting;
uniform sampler2D decal_map;
uniform vec4 light_position;
uniform float inner_radius;
uniform float outer_radius;
out vec4 fragment0;
out vec4 fragment1;
out vec4 fragment2;
out vec4 fragment3;
void main() {
	// constants
	float stretch_amount = 0.001;
	float exposure = 0.5;
	float g = -0.990;
	float g2 = g * g;
	float tweak_amount = 0.025;
	float outer_radius2 = outer_radius * outer_radius;
	float camera_height = length(camera_position);
	vec3 camera_to_position = world_position - camera_position;
	float far_distance = length(camera_to_position);
	vec3 light_direction = normalize(light_position.xyz - camera_position);
	vec3 ray_direction = camera_to_position / far_distance;
	float camera_height2 = camera_height * camera_height;
	// ray/sphere intersection with the outer atmosphere shell
	float B = 2.0 * dot(camera_position.xyz, ray_direction);
	float C = camera_height2 - outer_radius2;
	float det = max(0.0, B * B - 4.0 * C);
	float near_distance = 0.5 * (-B - sqrt(det));
	vec3 near_position = camera_position.xyz + (ray_direction * near_distance);
	vec3 near_normal = normalize(near_position);
	float lc = dot(light_direction, camera_position / camera_height);
	float ln = dot(light_direction, normal);
	float lnn = dot(light_direction, near_normal);
	// horizon distance relative to the inner shell (planet surface)
	float altitude = camera_height - inner_radius;
	float horizon_distance = sqrt((altitude * altitude) + (2.0 * inner_radius * altitude));
	float max_dot = horizon_distance / camera_height;
	// horizon distance relative to the outer shell (top of atmosphere)
	altitude = max(0.0, camera_height - outer_radius);
	horizon_distance = sqrt((altitude * altitude) + (2.0 * outer_radius * altitude));
	float min_dot = max(tweak_amount, horizon_distance / camera_height);
	// camera height normalized between the two shells
	float min_dot2 = ((camera_height - inner_radius) / (outer_radius - inner_radius)) - (1.0 - tweak_amount);
	min_dot = min(min_dot, min_dot2);
	float pos_dot = dot(camera_to_position / far_distance, -camera_position.xyz / camera_height) - min_dot;
	float height = pos_dot * (1.0 / (max_dot - min_dot));
	ln = max(0.0, ln + stretch_amount);
	lnn = max(0.0, lnn + stretch_amount);
	float brightness = clamp(ln + (lnn * lc), 0.0, 1.0);
	vec2 uv = vec2(brightness * clamp(lc + 1.0 + stretch_amount, 0.0, 1.0), height);
	height -= min(0.0, min_dot2 + (ln * min_dot2));
	float alpha = height * brightness;
	vec3 negative_ray_dir = -ray_direction;
	vec3 new_light_direction = normalize(light_position.xyz - world_position.xyz);
	vec4 diffuse = texture(decal_map, uv);
	vec4 diffuse2 = texture(decal_map, vec2(min(0.5, uv.x), 1.0));
	float cosine = dot(normalize(new_light_direction.xyz), normalize(negative_ray_dir));
	float cosine2 = cosine * cosine;
	vec4 diffuse_color = diffuse * alpha;
	// directional (Mie) scattering phase function
	float mie_phase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + cosine2) / (1.0 + g2 - 2.0 * g * cosine);
	vec4 mie_color = diffuse2 * mie_phase * alpha;
	vec4 color_out = vec4(1.0) - exp((diffuse_color * (1.0 + uv.y) + mie_color) * -exposure);
	if (color_out.a < 0.0001) {
		discard;
	}
	fragment0 = vec4(0.0, 0.0, 0.0, color_out.a);
	fragment1 = vec4(0.0, 0.0, 0.0, color_out.a);
	fragment2 = vec4(0.0, 0.0, 0.0, color_out.a);
	fragment3 = vec4(color_out);
}

Modern Tessellation Based Planet Renderer needs Collision

12 July 2012 - 11:04 AM

So I wrote a basic planet shader that takes a very low-poly (32-triangle) sphere and tessellates it up to two million triangles of detail for the planet surface. It's real-time dynamic LOD, done entirely by the tessellation stages on the GPU, which means I never store the result: as far as the other shader stages and the C++ side are concerned, it's still a 32-triangle sphere. The issue comes when trying to simulate physics. I've already worked out how to handle gravity near planets; that's fine. What I'm worried about now is how to simulate collision when something actually lands on a planet.

Obviously it's out of the question to generate a two-million-triangle shape and pump that into Bullet. The planet is generated from Perlin-noise-based height maps (using cube mapping), so I thought maybe I could use Bullet's height-field shape, but I realize that only works for a flat 2D grid. I can certainly generate some very basic geometry for Bullet, but because of the size of a planet I'm worried it won't be high enough resolution when a player is just walking on the ground.
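One idea I keep coming back to: since the surface comes from a deterministic noise function, I could evaluate the same function on the CPU and build a small, high-resolution collision patch around each nearby body instead of colliding with the whole planet. A rough Bullet sketch of what I mean (planetHeight is a hypothetical stand-in for my noise lookup, and the tangent-basis math is simplified):

#include <btBulletDynamicsCommon.h>

// Hypothetical: the same height function the GPU tessellation evaluates,
// duplicated on the CPU. Takes a direction on the unit sphere and
// returns the planet radius along it.
extern float planetHeight(const btVector3& unitDir);

// Build a small triangle-mesh patch of the surface around 'center'
// (a point near the player), fine enough for walking. Note: the
// returned shape does not own the btTriangleMesh, so both must be
// kept alive together.
btBvhTriangleMeshShape* buildLocalPatch(const btVector3& center,
                                        float patchSize, int resolution)
{
	btVector3 n = center.normalized();
	// build a tangent basis around the patch center
	btVector3 t = n.cross(btVector3(0, 1, 0));
	if (t.length2() < 1e-6f) t = n.cross(btVector3(1, 0, 0));
	t.normalize();
	btVector3 b = n.cross(t);

	btTriangleMesh* mesh = new btTriangleMesh();
	float step = patchSize / float(resolution);
	auto surface = [&](int i, int j) {
		btVector3 dir = (n + t * ((i - resolution / 2) * step)
		                   + b * ((j - resolution / 2) * step)).normalized();
		return dir * planetHeight(dir); // point on the surface along dir
	};
	for (int i = 0; i < resolution; ++i)
		for (int j = 0; j < resolution; ++j) {
			mesh->addTriangle(surface(i, j), surface(i + 1, j), surface(i, j + 1));
			mesh->addTriangle(surface(i + 1, j), surface(i + 1, j + 1), surface(i, j + 1));
		}
	return new btBvhTriangleMeshShape(mesh, true /*useQuantizedAabbCompression*/);
}

The patch would have to be rebuilt or re-centered as the player moves, but it would keep the collision resolution independent of planet size.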

So my question is: am I stuck generating LODs for the physics simulation anyway, or does anyone with more Bullet experience have a better idea of how to accomplish this?

I'm still working on the atmospheric scattering shader, so I haven't delved into this yet, but I'm starting to think toward the future a bit, and I can't deny I'm worried it could be a showstopper.

Assimp Skeletal Animation Follies

29 June 2012 - 12:50 PM

I've gone through all the motions of adding skeletal animation with assimp to my game engine, but it seems I'm not quite generating the transforms correctly. I have my AssimpImporter.h/cpp source files, which read all of the data and store it in the data structures in Animation.h/cpp.

Basically I store a tree of bones for the bind pose. Each bone knows its parent and children, as well as its bind-pose offset matrix, which I believe is pre-multiplied, but I'm not sure.

I also store an "Animation", which holds, for each bone ID, a vector of timestamp/transform-component pairs.

When animating, I pass a timestamp into the animation; it loops through those components for each bone, generates a list of transforms, and returns them.

Then I take those and, for each one, multiply up the tree of bones, starting with that bone's bind-pose matrix.
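To be concrete, my understanding of what the final skinning matrix should be, in glm terms, is the following (the struct is a simplified stand-in for my bone class; offsetMatrix is assimp's aiBone::mOffsetMatrix after conversion to glm):

#include <glm/glm.hpp>

// Simplified stand-in for my bone class.
struct Bone {
	Bone* parent = nullptr;
	glm::mat4 localTransform = glm::mat4(1.0f); // animated (or bind-pose) local transform
	glm::mat4 offsetMatrix = glm::mat4(1.0f);   // aiBone::mOffsetMatrix (mesh space -> bone space)
};

// Walk up the hierarchy: global = parent.global * local.
glm::mat4 globalTransform(const Bone& bone)
{
	glm::mat4 global = bone.localTransform;
	for (const Bone* p = bone.parent; p != nullptr; p = p->parent)
		global = p->localTransform * global;
	return global;
}

// Final matrix uploaded for skinning; globalInverse is the inverse of
// the scene root's transform.
glm::mat4 skinningMatrix(const Bone& bone, const glm::mat4& globalInverse)
{
	return globalInverse * globalTransform(bone) * bone.offsetMatrix;
}

One thing I keep second-guessing is the conversion itself: assimp's aiMatrix4x4 is row-major while glm is column-major, so every matrix has to be transposed on the way in, and missing that would produce exactly the kind of exploded mesh I'm seeing.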

The issue is that my model goes from a nicely rendered static model to a mess of triangles, with some unanimated parts blatantly showing just fine; my character's head, for example, just floats around in the mess of triangles, fully intact.

I've verified by other means that I'm sending in the weight and bone-index data properly as vertex attributes.

Basically I need someone to take a look at the way I'm calculating my bone transforms and double-check my math, ideally someone with some knowledge of how assimp and glm fit together.

Here are links to the relevant files in my GitHub repository...

My transform generation is in here (ignore the fact that it prints lots of junk):
https://github.com/p...Source/Math.cpp

Assimp Loading Code:
https://github.com/p...simpInterface.h
https://github.com/p...mpInterface.cpp

Animation Code:
https://github.com/p...ics/Animation.h
https://github.com/p...e/Animation.cpp

Thanks ahead of time for any insight, as I'm at a loss...
If you want any screenshots, don't hesitate to ask!

Shadow Mapping Works on Intel but not NVIDIA?

03 October 2011 - 09:22 PM

I recently had some free time at work, and the Intel graphics chip in my machine supports OpenGL 3, so I decided to see if my engine would run... it does. So I thought the only thing really missing was my shadow mapping, converted it over, and it passed with flying colors. It only took about three hours to add it to the renderer.

That seemed too easy, especially for an Intel card. So I brought the code home, committed it to my SVN, and compiled it on my home computer with an NVIDIA GeForce 550, which supports OpenGL 4.2. Suddenly the shadows do not work at all; in fact, the whole screen is black...

Has anyone else run into this? I've been faithful to NVIDIA, and the engine has to work on both NVIDIA and ATI, so this is very strange. Normally I'm fighting with Intel cards, not the other way around.

Anyway, I was just hoping for some quick advice, maybe a couple of things to check out... here are some screenshots.

Here is a screenshot from work of the shadows working on the Intel GMA card: [image]


Here is the shadow map as generated on my NVIDIA card (it is exactly the same as the one from the Intel card!): [image]


But alas, like I said, the screen is black...

I looked at it in gDEBugger and there seem to be no errors, in particular no OpenGL errors.
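One thing I still need to rule out is the depth texture's comparison state. As far as I understand, the GL spec leaves the result undefined when a sampler2DShadow reads a depth texture whose GL_TEXTURE_COMPARE_MODE isn't set, so one driver can happily return something usable while another returns constant black. A generic setup sketch (not my actual code):

#include <GL/glew.h>

// Depth texture setup for use with sampler2DShadow. The last two
// parameters are the ones strict drivers care about.
void setupShadowDepthTexture(GLuint tex, int width, int height)
{
	glBindTexture(GL_TEXTURE_2D, tex);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
	             GL_DEPTH_COMPONENT, GL_FLOAT, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
}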

Once again, any thoughts would be greatly appreciated!
