
# Tocs

Member Since 22 Apr 2005
Offline Last Active Dec 08 2013 11:29 AM

### Managing Materials / Shaders, Their Inputs, and Different Types of Geometry

09 September 2013 - 09:28 PM

I wanted to have this great "material" system where I could create a simple description of a material in a text file and slap it on just about any geometry I wanted. Different types of geometry are things like static meshes, animated meshes, particles, instanced geometry, ribbon-trails, etc.

These different types of geometry require different vertex shaders, maybe tessellation or geometry shaders as well, and have inputs to go with those shaders such as bone orientations for animated meshes.

So in my head it made sense to create a Geometry class whose purpose is to contribute the vertex-processing part of the shaders, handle the vertex-processing inputs, and control how the primitives are submitted to the GPU.

Then to go with it I created a Shading class that supplied the fragment-processing part of the shaders and the inputs for the fragment processing, and controlled which "bucket" the draw call should go in (Transparent, Forward Opaque, Deferred, Glowing, Distortion, etc.).

There were agreed upon inputs/outputs between shading stages. (Normal, Position, TextureCoordinate) etc.
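In code, the split I had in mind looked roughly like this (a minimal sketch with made-up names, not my actual classes; the full shader is just the two halves glued together across the agreed-upon varyings):

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of the Geometry/Shading split described above.
// All names here are assumptions, not real engine code.

// Which "bucket" a draw call is sorted into.
enum class Bucket { Deferred, ForwardOpaque, Transparent, Glowing, Distortion };

// Geometry contributes the vertex-processing half of the final shader
// (static mesh, skinned mesh, particles, instancing, ...).
struct Geometry {
    std::string vertexSource;
};

// Shading contributes the fragment-processing half and picks the bucket.
struct Shading {
    std::string fragmentSource;
    Bucket bucket;
};

// The agreed-upon varyings (Normal, Position, TextureCoordinate) form the
// interface; a complete shader is the concatenation of both halves.
std::string ComposeShader(const Geometry& g, const Shading& s) {
    return g.vertexSource + "\n" + s.fragmentSource;
}
```

Any Geometry could, in theory, be paired with any Shading this way — which is exactly the assumption that fell apart below.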

This turned out horribly, or at least the way I handled it did.

• You couldn't have shading that required its own special vertex shading.
• You had to write special shaders for instanced geometry anyway if it was to have any variation in shading, which defeated the purpose.
• Particles were a mess, because a particle's behavior influences both the vertex- and fragment-processing ends, which ultimately defeated the purpose of separating geometry and shading.
• Forward lighting became problematic, because it needs multiple shaders for the different light types, while other kinds of shading, like deferred, don't need to know about the lights at all.

So the system is pretty much broken and I need to replace it with a better way of doing things. How do you minimize your shader rewriting? How do you pair your "materials" with your "geometry"?

I tried to search around for related information, but I wasn't entirely sure what to search for, so if I've missed a great thread, post me a link.

### LaTeX math via MathJax?

18 July 2013 - 07:28 AM

A few weeks ago I tried to make a post in the math forum with a bit of LaTeX mixed in. A sticky post said to use the "eqn" tag. The tag seems to query a server to render the LaTeX and serve an image. However, it seemed quite easy to break the server so that it wouldn't output an image at all.

The other day I was stumbling across the internet and found http://www.mathjax.org/ which renders LaTeX into HTML elements. It seems to function quite well. Has anyone considered this for use in the forums / journals? I did a quick search of the forums and couldn't find any mention of it, so I thought I'd post it here.

### Rope Simulation with Point Based Dynamics

17 June 2013 - 01:44 PM

Looks like the eqn tag died, so I've tried to write readable "ASCII math" instead.

I tried asking this question on MathOverflow, but it doesn't seem to be gaining any traction there, and this forum seems much better suited. I saw some papers on laparoscopic surgery and simulating thread and thought, "That would make some wicked cool rope to play with." Something to shove together with my Razer Hydra and Oculus. The most current of the papers is this, which in turn references this other paper.

In Müller's paper he talks about constraint functions being C : R^3 -> R, which makes sense, because with the constraint solver you attempt to drive each constraint function either to equal 0 or to be greater than or equal to 0.
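To make the solver concrete, here's a minimal sketch (my own, not code from the paper) of one Gauss-Seidel projection step for the simplest case, a distance constraint C(p_1, p_2) = |p_1 - p_2| - d with equal masses; after the step C should be (nearly) zero:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Small vector helpers (illustrative, not from the paper).
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static double norm(const Vec3& v) {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

// One projection of the distance constraint C(p1, p2) = |p1 - p2| - d.
// With equal inverse masses, each point takes half the correction.
void ProjectDistance(Vec3& p1, Vec3& p2, double d) {
    Vec3 delta = sub(p1, p2);
    double len = norm(delta);
    if (len == 0.0) return;          // degenerate: no unique direction
    double s = (len - d) / len;      // scaled constraint violation
    for (int i = 0; i < 3; ++i) {
        p1[i] -= 0.5 * s * delta[i];
        p2[i] += 0.5 * s * delta[i];
    }
}
```

Contact constraints work the same way, except the projection is only applied while C < 0 (an inequality constraint).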

However if you look at fratarcangeli's paper he gives the contact constraint function as

C(p) = [p - (p_n0 + p_v)]

Where p must be some vertex of the rope, p_v is the penetration vector, and p_n0 is "the current position of point p". This is where things stop making sense for me, because it appears that Fratarcangeli's constraint equation is in R^3 and not R. Perhaps I'm misunderstanding the equation?

My second issue is with his definition of the penetration vector,

p_v = (|p_n0 - p_n1| - r) * (p_n0 - p_n1) / |p_n0 - p_n1|

and he gives only a very loose definition of what p, p_n0, and p_n1 are.

Perhaps someone can explain what his constraint function is supposed to be?

---------------------------

I attempted to figure out what the constraint function should be.

I assumed that if two segments were colliding, I would have to apply a contact constraint to all 4 points. Since I'm applying the constraint to both sides, I only need to move each mass point halfway out of the collision.

When a collision occurs between two line segments p_1 -> q_1 and p_2 -> q_2, and p = p_1, I get the two closest points c_1 and c_2 on those segments respectively. Most of the time c_1 != p, so I have to define my constraint function with that in mind.

I call the collision normal n = (c_1 - c_2) / |c_1 - c_2| and define an offset o = (p - c_1) dot n, which is the offset of p along the collision normal down to the contact point. This handles the case when c_1 != p. Note: o is calculated once at the beginning of a collision; it's expected to stay constant.

The goal is to have the constraint function equal to 0 when p has moved halfway to resolve the collision (the other segment will move the other half) and > 0 when the point has moved further than halfway.

So I define

C(p) = -(2r - ((p - c_2) dot n - o))/2

Here's a poorly drawn diagram to illustrate my thoughts.

When I punch that through the method described in Müller's paper, I get delta_p = -2C(p)n. However, when I plug this into my simulation, it's stable up until I tie a knot, which, given the nature of the papers, means I've done something wrong. Can anyone elaborate on where I'm going wrong?
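To sanity-check the algebra, here's a small sketch (helper names are my own) showing that, with c_1 and c_2 held fixed, applying delta_p = -2C(p)n does drive my constraint function to zero:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Small vector helpers (illustrative, not from either paper).
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
static double norm(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Collision normal n = (c1 - c2) / |c1 - c2|.
Vec3 Normal(const Vec3& c1, const Vec3& c2) {
    Vec3 d = sub(c1, c2);
    double len = norm(d);
    return {d[0] / len, d[1] / len, d[2] / len};
}

// My contact constraint: C(p) = -(2r - ((p - c2) dot n - o)) / 2,
// where r is the rope radius and o is captured when the collision starts.
double Contact(const Vec3& p, const Vec3& c1, const Vec3& c2,
               double r, double o) {
    Vec3 n = Normal(c1, c2);
    return -(2.0 * r - (dot(sub(p, c2), n) - o)) / 2.0;
}

// The projection step: p' = p + delta_p with delta_p = -2 C(p) n.
Vec3 Project(const Vec3& p, const Vec3& c1, const Vec3& c2,
             double r, double o) {
    Vec3 n = Normal(c1, c2);
    double C = Contact(p, c1, c2, r, o);
    return {p[0] - 2.0 * C * n[0], p[1] - 2.0 * C * n[1],
            p[2] - 2.0 * C * n[2]};
}
```

In reality c_1 and c_2 move as p moves, so this only confirms the per-iteration step, not the stability of the whole coupled system.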

### Simple fullscreen quad not showing up.

30 November 2012 - 12:26 PM

Case Closed:

I'm an idiot; a typo in ibo.Write() caused the program not to write the indices to the IBO, so it didn't render anything. It works now.

I've been working on a "graphics framework" (I use the term loosely): I've wrapped OpenGL concepts in C++ classes so I can use them more easily. At any rate, I decided to give them a simple test with a fullscreen quad. I initialize OpenGL 3.1, set the clear color to blue, and start the render loop. The quad is set to show up as white, yet I still see blue, so the quad is not showing up.
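For reference, the quad data I'd expect to feed it looks roughly like this (illustrative values, not my actual framework code): four clip-space corners and two counter-clockwise triangles, so no matrices are needed with gl_Position = vec4(InPosition.xy, 0, 1).

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Fullscreen quad: four corners in clip space [-1, 1].
const std::array<float, 12> QuadPositions = {
    -1.0f, -1.0f, 0.0f,   // 0: bottom-left
     1.0f, -1.0f, 0.0f,   // 1: bottom-right
     1.0f,  1.0f, 0.0f,   // 2: top-right
    -1.0f,  1.0f, 0.0f,   // 3: top-left
};

// Two CCW triangles covering the viewport; this is the data the IBO
// must actually receive for anything to render.
const std::array<std::uint16_t, 6> QuadIndices = {0, 1, 2, 0, 2, 3};
```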

First off, here's my code using my framework stuff.

This probably won't help as much because you can't see the OpenGL, so I went inside each function and copied down the OpenGL calls so I could see more clearly the order in which they were being made, and got this.

Note: all OpenGL calls have a debug error check, and all calls return GL_NO_ERROR.
And in case the shaders are the problem, here they are.
[source lang="cpp"]
//Vert
#version 130
precision highp float;

in vec3 InPosition;
in vec2 InTextureCoordinate;
out vec2 TextureCoordinate;

void main()
{
    TextureCoordinate = InTextureCoordinate;
    gl_Position = vec4(InPosition.xy, 0, 1);
}

//Frag
#version 130
precision highp float;

uniform sampler2D Texture;
in vec2 TextureCoordinate;
out vec4 Color;

void main()
{
    //Color = texture2D(Texture, TextureCoordinate);
    Color = vec4(1, 1, 1, 1);
}
[/source]
I'm hoping it's something simple I've overlooked and someone will be able to point it out fairly quickly. Any comments, insights, insults, or other thoughts are welcome.

Thanks

EDIT: For a more complete view, I generated a log of the application startup through the first frame with gDEbugger.

### Two Co-Op offers.

13 November 2012 - 12:03 PM

My university has "Co-operative Education" where they send us out to work in industry for a semester or two to gain experience and contacts.

I have received two offers for my next round of co-op.

One for a company which does 3D data processing for consumer products. The particular bit of work I'd be doing is working on their application which takes CAD data and then runs physics simulations on it. So they can digitally crush, smash, and throw product designs. I'd be doing a lot of OpenGL and 3D data work, so I think it would still be relevant to my end goal of a job in the game industry.

The other is for the only game studio my university has contacts with. While it is a game studio, they produce games that I'm not really that excited about (educational / promotional games), and some of their work looks a little less than easy on the eyes. It would still most likely be a fun place to work and an interesting experience.

So my question is: would it be "better" for me to work at the game studio, or should I go to the simulation job? I would enjoy both jobs a lot, but I'm not sure which would help me more toward getting a job at a game studio that produces games I'd be more interested in myself.

Any suggestions would be awesome.

Thanks,
~Tocs
