

karwosts

Member Since 15 Sep 2009
Offline Last Active Feb 19 2013 11:51 PM

#4811789 What is the most immersive game you have played?

Posted by karwosts on 16 May 2011 - 11:33 PM

I haven't played all the games out there, but I remember the original STALKER really drawing me in (or at least it would have if it hadn't been crashing all the time, but I guess that's beside the point). Also Morrowind. I'll attribute this to the ambiance of the worlds rather than any particular gameplay elements. I think I could get drawn into a game that had no objective but to walk around in a breezy grass meadow, with the sound of a nearby forest's trees rustling in the wind. Just the feel of being in a place is really important to me; it lets me revisit a place in nature that's largely lost in modern life.


Also, I tend to think that immersion depends on the state of the player as much as on the game itself. I remember back in my younger days I would play Asheron's Call for a long time; it totally gripped me for a couple years of my youth. I think that has something to do with my age at the time and the fact that it was the first really persistent open-world game I had played. If I picked it up again now I might not be as impressed, but it had a magical quality at the time.


#4811775 Why doesn't my Inverse DFT work correctly?

Posted by karwosts on 16 May 2011 - 10:54 PM

I'm hand-waving over a lot of theory here, but essentially it boils down to this: your input function has frequency content all the way up to infinity (sharp edges are not band-limited), so you'd need an infinite number of frequency samples to reconstruct it perfectly. Because the maximum frequency of your input is greater than the Nyquist frequency (look it up if you don't know it), you lose some information about the original signal.

I plotted your result and it looks like what I'd expect to see given the parameters you've used. Original signal is in red, the result is in blue.

[Figure: dft.jpg — original signal in red, reconstructed signal in blue]

There's nothing wrong with your code; you just need to brush up on the theory a little.
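
If it helps to see it in code, here's a minimal C++ sketch (my own, not from your thread; N and the waveform are made up) that samples a square wave, takes the DFT, and then evaluates the inverse between the original sample points. At the sample points the round trip is exact, but in between, the frequencies lost above Nyquist show up as smoothing and ringing around the edges:

#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const int N = 16;                  // number of samples
    const double PI = 3.14159265358979323846;
    std::vector<std::complex<double>> x(N), X(N);

    // Square wave: +1 on the first half, -1 on the second.
    for (int n = 0; n < N; ++n)
        x[n] = (n < N / 2) ? 1.0 : -1.0;

    // Forward DFT: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N)
    for (int k = 0; k < N; ++k)
        for (int n = 0; n < N; ++n)
            X[k] += x[n] * std::polar(1.0, -2.0 * PI * k * n / N);

    // Inverse DFT evaluated at 4x resolution. Frequencies above N/2 are
    // remapped to their negative aliases so the interpolation stays real.
    for (int m = 0; m < 4 * N; ++m) {
        double t = m / 4.0;
        std::complex<double> sum = 0.0;
        for (int k = 0; k < N; ++k) {
            int kc = (k <= N / 2) ? k : k - N;
            sum += X[k] * std::polar(1.0, 2.0 * PI * kc * t / N);
        }
        double orig = (t < N / 2) ? 1.0 : -1.0;
        std::printf("t=%5.2f  original %+.2f  reconstructed %+.2f\n",
                    t, orig, sum.real() / N);
    }
}

Plot the two columns against each other and you should see the same kind of mismatch as in the image above.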


#4811130 Porting Existing Game

Posted by karwosts on 15 May 2011 - 10:47 AM

No, it's not legal. You don't own it; whether it's free or not makes no difference. You can either ask SEGA for permission (don't count on it...) or make a different game.


#4810552 Doubts about strategy of rendering

Posted by karwosts on 13 May 2011 - 10:46 PM

No problem. I'll leave you to explore which method you want to use (instancing a single ground tile versus rendering an array of tiles); I think either way you should be able to achieve a good result.

Will I have any problems using GLSL or VBO/vertex arrays? I still have no idea how to do bump mapping with GLSL, and I'm afraid of running into issues with the rendering method.


Nope, you won't have any problem using GLSL with VBOs; they work together perfectly. Shaders are a whole other beast entirely, but when you're ready for them you should be able to integrate them into your VBO renderer without too much trouble.


#4810545 Doubts about strategy of rendering

Posted by karwosts on 13 May 2011 - 10:26 PM

Definitely forget about display lists and immediate-mode rendering; they're old, deprecated methods. For most objects a VBO will be best, though you can use a plain vertex array for small sets of vertices that change every frame (like a particle emitter).
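
For reference, here's a rough C++ sketch of the VBO path (assumes GLEW or a similar loader and an existing GL context; the triangle data is made up). You fill the buffer once up front, then just bind and draw every frame:

#include <GL/glew.h>

// Upload three vertices once; GL_STATIC_DRAW hints that the data won't
// change, so the driver can keep it in fast GPU memory.
GLuint makeTriangleVBO() {
    const GLfloat verts[] = {
         0.0f,  0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    return vbo;
}

// Per frame: bind the buffer and point the vertex array at offset 0 in it.
void drawTriangle(GLuint vbo) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}

For a plain (non-VBO) vertex array you'd skip the buffer calls and pass your CPU-side array pointer straight to glVertexPointer each frame.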

The rest of your question I don't quite understand, but maybe this is enough help to get you started. What is "several instances of the class" referring to?


#4808397 Multiple Mesh World Transformation help

Posted by karwosts on 09 May 2011 - 12:08 AM

You want to build the transform of the child bones by appending them to the transforms of the parent bones.

First, some abbreviations to make things simpler:

Mb: body matrix
Ma: arm matrix

Sb: body scale
Rb: body rotation
Tb: body translation
Sa: arm scale
Ra: arm rotation
Ta: arm translation

If your body matrix is defined like this:

Mb = Sb * Rb * Tb,

then your arm matrix would be defined like this:

Ma = Sa * Ra * Ta * Mb
or
Ma = Sa * Ra * Ta * Sb * Rb * Tb

Basically, everything that you put on the left-hand side adds to the chain of transforms. In this case the arm translation is only the offset from the center of the body to the arm position; it has nothing to do with the arm's global position. Because you're chaining it together with the body translation, you can say that the arm translation is defined in "body space" instead of "world space", where you would supply the actual coordinate. Likewise, the arm rotation is only the rotation of the arm in its own local space; it has nothing to do with the rotation of the body.

You could do a similar thing with a hand mesh, by defining its rotation and translation relative to the position of the arm.

Mh = Sh * Rh * Th * Sa * Ra * Ta * Sb * Rb * Tb.

This concept comes up a lot in animation. In the simplest terms: when you have an object whose transform derives from some parent, define the position of the sub-object relative to the coordinate space of the parent; don't try to compute it directly in world space. It's the same concept whether you're dealing with hands on arms on torsos or moons orbiting planets orbiting stars.
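
To make the chaining concrete, here's a small C++ sketch. I'm using GLM purely for illustration (your post doesn't say what math library you have), and note that GLM uses column vectors, so the multiplication order reads as the mirror image of the row-vector notation above (parent on the left):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Body: placed directly in world space (values made up).
    glm::mat4 bodyWorld =
        glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 5.0f)) *          // Tb
        glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0, 1, 0)) *  // Rb
        glm::scale(glm::mat4(1.0f), glm::vec3(1.0f));                            // Sb

    // Arm: translation/rotation are in "body space" -- e.g. the shoulder
    // offset from the body's origin, not a world coordinate.
    glm::mat4 armLocal =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.5f, 1.2f, 0.0f)) *           // Ta
        glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(1, 0, 0)) *  // Ra
        glm::scale(glm::mat4(1.0f), glm::vec3(1.0f));                            // Sa

    // Chain: the child's world transform is the parent's world transform
    // times the child's local transform.
    glm::mat4 armWorld = bodyWorld * armLocal;

    // A hand extends the chain the same way: handWorld = armWorld * handLocal.
    (void)armWorld;
    return 0;
}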


#4808211 Help with *you guessed it!* shaders..

Posted by karwosts on 08 May 2011 - 01:31 PM

First of all, I've never heard of HGSL before. I'm not sure if you're referring to HLSL (the DirectX shading language) or GLSL (the OpenGL shading language), but I don't think HGSL is a real thing, which may be complicating your search efforts. Anyway...

What confuses me is what they are actually supposed to do. I've read so many descriptions, but many of them never actually tell me what shaders are used for. Many say effects, but is that all? I could make effects using point sprites, and they are a hell of a lot easier to use. What kind of effects are they used for?

The second thing is how they work. I understand that a vertex shader takes a vertex and does... something to it. And I understand that a pixel shader takes a pixel and does... something else to it. But my problem is: do shaders go through every single vertex/pixel in-game/on the screen?


Shaders are a replacement for a large chunk of the graphics pipeline that used to be fixed-purpose. This makes rendering much more flexible and powerful than the old model, where you only had a few switches to toggle. If you want to get an idea of some things you can do with shaders, this is a pretty nice list from NVIDIA of some effects you can achieve:

http://developer.download.nvidia.com/shaderlibrary/webpages/shader_library.html

They are essentially small programs that you write that have somewhat fixed inputs and outputs. The inputs and outputs are different depending on which kind of shader you are referring to. A vertex shader generally takes an input vertex, and transforms it into screen space. After all the vertices of a primitive are transformed, it is then rasterized (converted into pixels) by the API, and then the fragment shader is executed on each pixel.

So if you draw a triangle that covers 200 pixels on the screen, your vertex shader will be run 3 times, and the fragment shader will be run 200 times.

The third thing is: how do I actually apply them? All the tutorials I've read never go into detail about applying shaders to anything other than pre-defined, hard-coded vertices. When I load a model from a .x file, at what stage do the vertex/pixel shaders take action, and how do I make them act on the model, or even the whole scene?

This varies a little bit depending on the API you're using, but generally you bind a pair of vertex/pixel shaders, and then every draw call you make while they are bound is processed by the shaders instead of the fixed pipeline. You can apply different shaders to different objects by changing the bound shader before rendering a particular object.
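
In OpenGL, for instance, that flow looks roughly like this (a sketch only: the program handles and draw functions are hypothetical, and it assumes a GL 2.0 context with a loader like GLEW):

#include <GL/glew.h>

// Hypothetical handles; in reality these come from compiling/linking shaders.
GLuint stoneShader = 0, waterShader = 0;
void drawCastle() { /* glDrawArrays / glDrawElements calls ... */ }
void drawLake()   { /* ... */ }

void renderScene() {
    glUseProgram(stoneShader);   // every draw call below runs through this program
    drawCastle();

    glUseProgram(waterShader);   // rebind before objects that need a different look
    drawLake();

    glUseProgram(0);             // 0 = back to the fixed-function pipeline
}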

The fourth thing: seeing as shaders can modify color values (including the alpha channel), is this how a fog shader would be made? I get that somehow the vertex/triangle/scene/object's color is darkened/faded, but how do I actually tell how far away it is? Do I pass the camera's coordinates into the shader along with the vertex, and use that to calculate a distance? And another thing... what shader type would actually be used to do this?!

Yes, fog could be something that you replicate in the shader. After you transform the primitive to screen space you know how far away it is from the eye (you get this as part of the transform process), so you receive the depth as an input to the fragment shader. You then use this depth value in a function to determine how strong your fog should be. A little fog fragment shader (GLSL-style) would look like this:

uniform vec4 fogColor;  // constant for the whole draw: gray, red, etc.

in float fragDepth;     // interpolated per-fragment depth, 0 (near) to 1 (far)
in vec4 vertexColor;    // interpolated color from the vertex shader

out vec4 fragColor;

void main() {
    // Blend from the vertex color toward the fog color as depth increases.
    fragColor = (1.0 - fragDepth) * vertexColor + fragDepth * fogColor;
}

The "in" variables vary per fragment (the rasterizer interpolates the per-vertex values across the primitive), while uniforms are constant for the whole draw call (like what color you want the fog to be, be it gray fog or red fog or yellow fog, etc.).

The fifth, and hopefully final, thing: I've seen many of these tutorials talk about how lighting is applied with shaders, but this is completely different from how I create lighting. Instead, I use the D3DLIGHT9 structure to create its values and the SetRenderState function to apply them. What's the difference?


Yes, once you move away from the fixed pipeline you won't use the fixed light structures anymore. You will define your own shader uniforms for things like "lightDirection" and "lightIntensity", and then use these values in your pixel shader calculations to compute the lighting on a fragment. Instead of SetRenderState, you'll use an API command to set program uniforms.
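
A sketch of what that API command looks like in OpenGL (the uniform names match the hypothetical ones above; assumes a linked GLSL program on a GL 2.0 context):

#include <GL/glew.h>

void setLightUniforms(GLuint program) {
    glUseProgram(program);

    // Look up where the shader declared these uniforms...
    GLint dirLoc = glGetUniformLocation(program, "lightDirection");
    GLint intLoc = glGetUniformLocation(program, "lightIntensity");

    // ...and set them. This replaces the old D3DLIGHT9/SetRenderState calls.
    glUniform3f(dirLoc, 0.0f, -1.0f, 0.0f);  // light pointing straight down
    glUniform1f(intLoc, 0.8f);
}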


#4807985 All objects being lit even though they are each being pushed to matrix stack?

Posted by karwosts on 08 May 2011 - 02:10 AM

glPushMatrix and glPopMatrix only affect the transform matrix stack; they don't do anything with any enables or lighting options that you set.

If you want to push/pop things like glMaterial state, you need to make use of glPushAttrib/glPopAttrib.

For lighting options, this would be done by:

glPushMatrix();
glPushAttrib(GL_LIGHTING_BIT); //<-------------
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, whiteSpecularMaterial);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mShininess);
glColor3f(1.0f, 0.0f, 0.0f);
glTranslatef(-0.6f, 0.7f, 0.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glutSolidTeapot(0.15);
glPopAttrib();  //<----------------
glPopMatrix();

Alternatively you could just disable the lighting options after you're done with them.


#4806031 I need to learn how to load a mesh in OpenGL...

Posted by karwosts on 03 May 2011 - 11:30 AM

There are no OpenGL API functions related to loading models. OpenGL simply accepts lists of vertices and triangles; transforming data stored in a particular file format into those lists is the responsibility of the user.

However, even though it is not part of the OpenGL API, there are many libraries out there that will assist with loading one format or another, such as Assimp. Or you can write the parser yourself; OBJ is a common format for beginners, as it is pretty simple. Basically you can use any format, but you must parse it into arrays of vertices for OpenGL to understand.
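
To give a flavor of what the parsing involves, here's a stripped-down C++ sketch for OBJ. It only handles plain "v" position lines and triangular "f" faces with bare indices; real files also carry normals, texture coordinates, and the "v/vt/vn" face syntax:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Mesh {
    std::vector<float> positions;   // x, y, z per vertex
    std::vector<unsigned> indices;  // three per triangle
};

bool loadObj(const std::string& path, Mesh& out) {
    std::ifstream file(path);
    if (!file) return false;

    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {                 // vertex position
            float x, y, z;
            ss >> x >> y >> z;
            out.positions.insert(out.positions.end(), {x, y, z});
        } else if (tag == "f") {          // triangular face
            unsigned a, b, c;
            ss >> a >> b >> c;
            // OBJ indices are 1-based; OpenGL wants 0-based.
            out.indices.insert(out.indices.end(), {a - 1, b - 1, c - 1});
        }
    }
    return true;
}

The resulting arrays can then go straight into glBufferData / glDrawElements.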


#4804692 How to load a GLSL shader?

Posted by karwosts on 30 April 2011 - 01:52 AM

You ask how to load a shader, I link you to a site that explicitly explains how to load GLSL shaders, and you say that it doesn't help you at all.

This type of question is exactly what these forums are for


No, these forums are not for "how do I do X I have no time to research anything myself".

Really sorry I bothered to answer your question, it won't happen again.


#4804671 How to load a GLSL shader?

Posted by karwosts on 30 April 2011 - 12:17 AM

Read: http://www.lighthouse3d.com/opengl/glsl/

You really ought to be able to find this stuff for yourself; that's what search engines are for.
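
For anyone finding this later: the core sequence that tutorial walks through boils down to something like this C++ sketch (error checking trimmed for brevity; assumes a GL 2.0 context and a loader like GLEW):

#include <GL/glew.h>

GLuint loadProgram(const char* vertSrc, const char* fragSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertSrc, 0);
    glCompileShader(vs);              // check GL_COMPILE_STATUS in real code

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragSrc, 0);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);              // check GL_LINK_STATUS in real code

    glDeleteShader(vs);               // the linked program keeps its own copy
    glDeleteShader(fs);
    return prog;                      // bind with glUseProgram(prog)
}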


#4804300 3rd Texture Ordinate

Posted by karwosts on 28 April 2011 - 11:31 PM

Hm, I don't have any idea what that could mean. I'd just recommend you try ignoring it and see how far that gets you. It's possible the artist was using it as a place to store some extra arbitrary attribute for a shader, but as a general rule I don't think a third texture coordinate has any specific meaning for sampling a 2D texture.


#4804243 3rd Texture Ordinate

Posted by karwosts on 28 April 2011 - 08:13 PM

It would only make sense in the context of a 3d texture.

Where did you get the obj from? Are you sure you understand the data correctly? You can post a snippet if you're not sure.


#4802881 Game Design

Posted by karwosts on 25 April 2011 - 06:40 PM

I guess you just need to ask yourself what your vector really contains. It's not really clear what your design intention is such that every GameObject contains a vector of more GameObjects.

If every GameObject has an age, then put the age value inside of the GameObject class.

If every item in your vector is guaranteed to be a Rabbit (or a derivative), then you can either cast the iterator or change it to a vector of Rabbits instead of a vector of GameObjects (there's a sketch of the cast below).

If GameObjects don't necessarily have an age, and your vector does not necessarily only contain rabbits, then why are you asking for the age of a gameobject?

These are the kinds of things you need to think about.
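
Purely for illustration (the class names here are hypothetical), the casting option might look like this in C++, using a safe downcast when the vector's element type can't change:

#include <memory>
#include <vector>

struct GameObject { virtual ~GameObject() = default; };
struct Rabbit : GameObject { int age = 0; };

int main() {
    std::vector<std::unique_ptr<GameObject>> objects;
    objects.push_back(std::make_unique<Rabbit>());

    for (auto& obj : objects) {
        // dynamic_cast returns null if this object isn't actually a Rabbit.
        if (auto* rabbit = dynamic_cast<Rabbit*>(obj.get()))
            rabbit->age += 1;
    }
    return 0;
}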


#4802874 How to draw "lines" in Op

Posted by karwosts on 25 April 2011 - 06:20 PM

I think osmanb's extension might be GL_EXT_blend_minmax. No idea if it's supported on iOS.

It's true that my suggestion won't work when the line doubles back over itself; I didn't think of that. OR doesn't work because it's not the same thing as taking the max: (2'b10 | 2'b01) != max(2'b10, 2'b01).

The only other idea I can think of is to render the line as a single solid line to a separate render target, apply a bloom-like blur filter to the whole snake, and then composite that image on top of the rest of your scene. It's not terribly elegant, though. Sometimes the things you want to do with blending don't fit nicely into what the hardware can achieve :-\



