
Yours3!f

Member Since 04 Feb 2011
Offline Last Active Dec 25 2014 06:15 AM

#5011208 Bottom Row of 4x4 Matrix

Posted by Yours3!f on 16 December 2012 - 03:03 AM

Well, first take a look at this:
http://www.songho.ca/opengl/gl_transform.html
(the other pages there are very useful too, so look around!)

I'm not really sure if A is used at all (I did this a long time ago), but B is supposed to be the perspective divide element.
Or at least B and the projection matrix modify the 4th component (w) of the position, so that in clip space, if you divide by it, you get the NDC coordinates.
like when you do the transformations:
-Model (world) space (apply T here):
model_space_pos = model_mat * vertex
-View (camera) space (apply R here):
view_space_pos = view_mat * model_space_pos
-Clip space (apply projection matrix here):
clip_space_pos = projection_mat * view_space_pos
-normalized device coordinates (do the perspective divide, apply B here):
ndc_pos = clip_space_pos.xyz / clip_space_pos.w
-viewport coordinates (scale & bias to get texture coordinates, scale by window coordinates)
viewport_pos = (ndc_pos.xy * 0.5 + 0.5) * vec2( screen_width, screen_height )

And to the other questions:
you would apply the 4x4 matrix to a 4-component vector (i.e. a vec4),
and you would invert A and B by inverting the whole 4x4 matrix.
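Here is a minimal C++/GLM sketch of the same chain (just an illustration, not from the original post; normally the first three steps run in the vertex shader and the divide is done by the hardware):

#include <glm/glm.hpp>

// sketch only: follows the steps listed above, with made-up matrix names
glm::vec3 project_to_viewport(const glm::vec3& vertex,
                              const glm::mat4& model_mat,
                              const glm::mat4& view_mat,
                              const glm::mat4& projection_mat,
                              float screen_width, float screen_height)
{
  glm::vec4 model_space_pos = model_mat * glm::vec4(vertex, 1.0f);
  glm::vec4 view_space_pos  = view_mat * model_space_pos;
  glm::vec4 clip_space_pos  = projection_mat * view_space_pos;

  // perspective divide: this is where the 4th component (w), produced by
  // the projection matrix (element B), comes into play
  glm::vec3 ndc_pos = glm::vec3(clip_space_pos) / clip_space_pos.w;

  // scale & bias to [0,1], then scale by the window size
  glm::vec2 viewport_pos = (glm::vec2(ndc_pos) * 0.5f + 0.5f)
                           * glm::vec2(screen_width, screen_height);
  return glm::vec3(viewport_pos, ndc_pos.z);
}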


#5007062 is gdebugger wrong or do I really have 300gb in renderbuffers?

Posted by Yours3!f on 04 December 2012 - 07:18 AM

Hi,

I get false values too. I use a 1024x1024 RGBA8 cubemap texture, which should take:
1024 (width) x 1024 (height) x 6 (faces) x 4 (bytes per pixel) ~ 25 MB
but instead it says 148 MB... so it is definitely wrong.
As far as I've seen, it reports correct values for 2D textures, but false ones for pretty much everything else.
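Just to spell out the arithmetic (a trivial C++ sketch; the mipmap line is an extra assumption, and even with a full mip chain the total stays far below the reported 148 MB):

#include <cstddef>
#include <cstdio>

int main()
{
  const std::size_t width = 1024, height = 1024, faces = 6, bpp = 4; // RGBA8
  const std::size_t base      = width * height * faces * bpp;        // 25,165,824 bytes ~ 25 MB
  const std::size_t with_mips = base * 4 / 3;                        // ~33.5 MB if mipmapped
  std::printf("base: %zu bytes, with mips: ~%zu bytes\n", base, with_mips);
}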


#5006236 Blur shader (2 passes)

Posted by Yours3!f on 02 December 2012 - 02:33 AM

I'm learning how to use GLSL shaders at the moment. I'm implementing a Gaussian blur shader which uses 2 passes, one for horizontal and one for vertical blur.
My question is, do I need to create a texture target for each pass? This way I would render the scene to the first texture, then render the first texture to the second texture using the horizontal blur shader, and finally render the second texture to the screen using the vertical blur shader. I'm not sure if this is the right way.
The second possibility is to render the scene twice to the same texture, once with each shader (and add a * 0.5 at the end of each shader). But with this solution, all the scene geometry has to be drawn twice (rather than just a textured 2D quad).

Is the double texture approach better? Is there another one?
Thank you!


This is how you do it:
-render the scene to FBO #1 (i.e. to texture #1, which is attached to it)
-blur texture #1 with the Gaussian blur horizontally (or vertically), outputting to FBO #2 (i.e. to texture #2, which is attached to it)
-blur texture #2 the other way, outputting to FBO #3 (i.e. to texture #1, which is attached to it)

You can see that I used 3 FBOs and only 2 textures. This is because an FBO is just a concept: it only determines where the GPU will output the pixels. The actual storage (which is megabytes in size) is the texture attached to it.
This "ping pong" pattern applies to many other effects, like depth of field, etc. Wherever you need the same type of output texture, you can cleverly reuse the existing ones, saving GPU memory (which you can then spend on additional assets).


#5005972 Non-Microsoft Market Share

Posted by Yours3!f on 01 December 2012 - 02:22 AM

Macs have at most ~5% market share, but there you can only use OGL 3.x;
Linux-based systems have at most ~1%, with the latest OGL available.

For cross-platform development you need to use cross-platform libraries, and drive the platform-specific IDEs and compilers with a cross-platform build system. For the libraries, your best choice IMO would be SFML + OGL + GLEW (plus other libs like FreeType, FreeImage, Assimp, etc. as needed).
For example, I chose CMake to generate both a makefile (so yeah, gcc/g++) on Linux and a Visual Studio solution on Windows. As far as I know it works for Mac too.
On Linux your best choice is CMake and KDevelop IMO, but 100 developers will give you 100 answers to this question. There are other IDEs as well, like Eclipse and Qt Creator.
In my experience OGL has been enough for everything, so I don't see why you'd use DX even on Windows (why would you develop the same thing twice?).


#5005325 sky rendering

Posted by Yours3!f on 29 November 2012 - 09:16 AM

hi,

I'm trying to do sky rendering based on this: http://codeflow.org/entries/2011/apr/13/advanced-webgl-part-2-sky-rendering/

I got through most of the setup: I can now render to a cubemap and display it, but I can't get the inverse view rotation matrix.
This is because I've only worked with view space and projection space so far. I tried to pass the inverse modelview matrix, but that didn't quite work out.
So my question is: is it possible to do this using view space? If so, how?

here's the shader I'm using:
#version 420 core

//uniform mat3 inv_view_rot;
uniform mat4 inv_proj;
uniform mat4 inv_modelview;
uniform vec3 lightdir, kr;
//vec3 kr = vec3(0.18867780436772762, 0.4978442963618773, 0.6616065586417131); // air
uniform float rayleigh_brightness, mie_brightness, spot_brightness, scatter_strength, rayleigh_strength, mie_strength;
uniform float rayleigh_collection_power, mie_collection_power, mie_distribution;
float surface_height = 0.99;
float range = 0.01;
float intensity = 1.8;
const int step_count = 16;

in cross_shader_data
{
  vec2 tex_coord;
} i;

out vec4 color;

//original
/*vec3 get_world_normal()
{
  vec2 frag_coord = gl_FragCoord.xy/viewport;
  frag_coord = (frag_coord-0.5)*2.0;
  vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
  vec3 eye_normal = normalize((inv_proj * device_normal).xyz);
  vec3 world_normal = normalize(inv_view_rot*eye_normal);
  return world_normal;
}*/

//what i tried to do
vec3 get_world_normal()
{
  vec4 device_coords = vec4(i.tex_coord * 2.0 - 1.0, 0, 0);
  return normalize((inv_modelview * inv_proj * device_coords).xyz);
}

float atmospheric_depth(vec3 position, vec3 dir)
{
  float a = dot(dir, dir);
  float b = 2.0*dot(dir, position);
  float c = dot(position, position)-1.0;
  float det = b*b-4.0*a*c;
  float detSqrt = sqrt(det);
  float q = (-b - detSqrt)/2.0;
  float t1 = c/q;
  return t1;
}

float phase(float alpha, float g)
{
  float a = 3.0*(1.0-g*g);
  float b = 2.0*(2.0+g*g);
  float c = 1.0+alpha*alpha;
  float d = pow(1.0+g*g-2.0*g*alpha, 1.5);
  return (a/b)*(c/d);
}

float horizon_extinction(vec3 position, vec3 dir, float radius)
{
  float u = dot(dir, -position);
  if(u<0.0)
  {
    return 1.0;
  }
  vec3 near = position + u*dir;
  if(length(near) < radius)
  {
    return 0.0;
  }
  else
  {
    vec3 v2 = normalize(near)*radius - position;
    float diff = acos(dot(normalize(v2), dir));
    return smoothstep(0.0, 1.0, pow(diff*2.0, 3.0));
  }
}

vec3 absorb(float dist, vec3 color, float factor)
{
  return color-color*pow(kr, vec3(factor/dist));
}

void main(void)
{
  vec3 eyedir = get_world_normal();

  float alpha = dot(eyedir, lightdir);
  float rayleigh_factor = phase(alpha, -0.01)*rayleigh_brightness;
  float mie_factor = phase(alpha, mie_distribution)*mie_brightness;
  float spot = smoothstep(0.0, 15.0, phase(alpha, 0.9995))*spot_brightness;
  vec3 eye_position = vec3(0.0, surface_height, 0.0);
  float eye_depth = atmospheric_depth(eye_position, eyedir);
  float step_length = eye_depth/float(step_count);
  float eye_extinction = horizon_extinction(eye_position, eyedir, surface_height-0.15);

  vec3 rayleigh_collected = vec3(0.0, 0.0, 0.0);
  vec3 mie_collected = vec3(0.0, 0.0, 0.0);

  for(int i=0; i<step_count; i++)
  {
    float sample_distance = step_length*float(i);
    vec3 position = eye_position + eyedir*sample_distance;
    float extinction = horizon_extinction(position, lightdir, surface_height-0.35);
    float sample_depth = atmospheric_depth(position, lightdir);
    vec3 influx = absorb(sample_depth, vec3(intensity), scatter_strength)*extinction;
    rayleigh_collected += absorb(sample_distance, kr*influx, rayleigh_strength);
    mie_collected += absorb(sample_distance, influx, mie_strength);
  }

  rayleigh_collected = (rayleigh_collected*eye_extinction*pow(eye_depth, rayleigh_collection_power))/float(step_count);
  mie_collected = (mie_collected*eye_extinction*pow(eye_depth, mie_collection_power))/float(step_count);

  vec3 result = vec3(spot*mie_collected + mie_factor*mie_collected + rayleigh_factor*rayleigh_collected);
  color = vec4(result + vec3(0.1)/*just to make sure that I am writing to the cubemap*/, 1.0);
}


EDIT: it seems to work now. I just had to use the original get_world_normal(), and pass the inverse view rotation matrix (it turned out to be the upper-left 3x3 of the modelview matrix), and I had to use the correct cameras (not the player's camera, but cameras turned to the left, right, up, down, etc. for each cube face).
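For reference, a rough C++/GLM sketch of what that fix amounts to (my illustration only, not code from the project; it assumes the view matrix contains no scaling, so its rotation part is orthonormal and its inverse is just the transpose):

#include <glm/glm.hpp>

// sketch: extract the rotation part of the (model)view matrix and invert it,
// to be uploaded as the inv_view_rot uniform used by the original shader
glm::mat3 inverse_view_rotation(const glm::mat4& view_mat)
{
  glm::mat3 view_rot(view_mat);    // upper-left 3x3
  return glm::transpose(view_rot); // inverse of an orthonormal rotation
}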

Best regards,
Yours3lf


#5003809 driver bug?

Posted by Yours3!f on 24 November 2012 - 02:20 PM


It shouldn't crash under any circumstances, right?

Well... not necessarily. For example, if you give it a garbage pointer or size and cause it to commit an access violation, it's free to crash.

I'm not much of a video programmer, but here's my guess as to why it crashed: glTexImage2D requires you to pass a pointer to the texture data, and in your OP you're passing 0. If you read the docs for that function, it says "If target is GL_PROXY_TEXTURE_2D, GL_PROXY_TEXTURE_1D_ARRAY, GL_PROXY_TEXTURE_CUBE_MAP, or GL_PROXY_TEXTURE_RECTANGLE, no data is read from data." In the next paragraph, it says "If target is GL_TEXTURE_2D, GL_TEXTURE_RECTANGLE or one of the GL_TEXTURE_CUBE_MAP targets, data is read from data..." (emphasis mine) This may explain why it crashes. I could be wrong, but that's my hunch.


Well, it is only read if a non-null pointer is passed. From the docs:
"data may be a null pointer.
In this case, texture memory is
allocated to accommodate a texture of width width and height height.
You can then download subtextures to initialize this
texture memory.
The image is undefined if the user tries to apply
an uninitialized portion of the texture image to a primitive."

And it wasn't that function that crashed, but the FBO completeness check, and that function doesn't receive any pointer.
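For reference, the pattern being discussed (allocating texture storage with a null data pointer, e.g. for an FBO attachment) looks roughly like this; the size and format are just example values:

// allocate storage for a 2D texture without uploading any pixel data;
// the last parameter (data) may legally be a null pointer
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);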


#5001214 picking in 2d

Posted by Yours3!f on 15 November 2012 - 07:29 AM

Hello,
I implemented a simple 2D drawing system that draws points and lines (for now).
I need to implement picking in 2D: the intersection of a mouse click with a point or a line.
Does it make sense to create a ray for the intersection in 2D?
How can I implement this algorithm?
I also have to consider the modelview matrix and the scale/rotation that I can apply to the document.
Is there an example?
Thanks.


hey there, I don't really know how to deal with lines (approximate them as thin quads maybe?), but I've recently done this using triangles/quads:
http://www.gamedev.net/topic/633528-2d-picking-theory/

you might get an idea.
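For the line part specifically, one standard approach (not from the thread above; a C++/GLM sketch with made-up names, assuming the click has already been converted from window coordinates into the document's world space) is to undo the modelview transform on the click and test its distance to each segment against a pick radius:

#include <glm/glm.hpp>

// distance from point p to the segment (a, b) in 2D
float point_segment_distance(const glm::vec2& p, const glm::vec2& a, const glm::vec2& b)
{
  glm::vec2 ab = b - a;
  float len2 = glm::dot(ab, ab);
  if (len2 == 0.0f) return glm::length(p - a); // degenerate segment: a == b
  float t = glm::clamp(glm::dot(p - a, ab) / len2, 0.0f, 1.0f);
  return glm::length(p - (a + t * ab));
}

// sketch: undo the document's modelview transform (scale/rotate/translate)
// on the click, then compare against a pick radius in document units
bool pick_segment(const glm::vec2& click, const glm::mat4& modelview,
                  const glm::vec2& a, const glm::vec2& b, float pick_radius)
{
  glm::vec4 local = glm::inverse(modelview) * glm::vec4(click, 0.0f, 1.0f);
  return point_segment_distance(glm::vec2(local), a, b) <= pick_radius;
}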


#4996807 uniforms

Posted by Yours3!f on 03 November 2012 - 03:16 AM

Why do you need this feature? If it is only for learning's sake, please wait until a stable driver release.
There are several possible sources of the problem.

What drivers are you using? GL_ARB_explicit_uniform_location is part of the GLSL 4.3 core spec, not 4.2. If you are using the NV 310.xx drivers, there could be a bug, since they are in beta. If you are using a previous release, you should check whether the extension is supported and probably enable it explicitly in GLSL (if it is supported).

I guess you don't have support for GL_ARB_explicit_uniform_location, and the reason it works for the first uniform is an accidental correspondence with the compiler's assigned locations. NV uses alphabetical order of names, so the locations are as follows:
0 - mvp
1 - overlay_color
2 - texture0

Before trying to use a new feature, check whether it is present and read the specification to understand how it works. ;)


:( I'm on AMD with only OGL 4.2 drivers... I guess I'll have to stick with glGetUniformLocation...
As for the why: I just wanted to get rid of keeping track of uniform locations. It's not that cumbersome, but it would be great if I didn't have to.
For the rest I guess you're right, I just accidentally got the uniform locations right. Well, thank you for clarifying the situation. I thought this had been available since GL 3.1.
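For anyone finding this later, the fallback being discussed (querying locations once after linking instead of hard-coding them with layout(location = N)) looks roughly like this; the program handle is a placeholder and the uniform names are the ones from the quote above:

// query uniform locations once after linking and cache them; layout(location = N)
// in GLSL would need GL_ARB_explicit_uniform_location / GL 4.3 instead
GLint loc_mvp           = glGetUniformLocation(program, "mvp");
GLint loc_overlay_color = glGetUniformLocation(program, "overlay_color");
GLint loc_texture0      = glGetUniformLocation(program, "texture0");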


#4990024 Using shaders vs fixed pipline

Posted by Yours3!f on 14 October 2012 - 06:51 AM

ok, thanks for all the replies.
I actually have been working on the project for a pretty long time now, but switching may still be worth it.
So I decided to learn more about shaders and write two or three, and I will probably stick with them.
"Master of my own lighting" sounds very good, but I'd rather focus on features for the gameplay.

Even if I don't use it in this project in the end, I hope it will at least be worth the experience.


Here's a Cg implementation of the whole OpenGL fixed-function pipeline. Cg is very similar to GLSL, so you should have no problem porting it once you've learned a bit of GLSL.
http://www.codesampler.com/source/ogl_cg_fixed_function.zip


#4989810 Using shaders vs fixed pipline

Posted by Yours3!f on 13 October 2012 - 10:33 AM

For each object you need to choose between rendering with shaders or with the fixed pipeline.
For example, you can use shaders for the terrain but render everything else using the fixed pipeline.
However, shaders give you much greater flexibility and new features (because of programmability).
Generally it is only advisable to use the fixed pipeline when the given hardware (and driver) can't handle shaders.

And yes, you need to reimplement the fixed-function pipeline in the shaders if you want those effects (i.e. lighting).
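A minimal sketch of that per-object choice (compatibility profile assumed; the program handle and draw calls are placeholders):

// a non-zero program enables the programmable pipeline for the following draws,
// glUseProgram(0) falls back to the fixed-function pipeline (compatibility profile)
glUseProgram(terrain_shader_program); // placeholder handle
draw_terrain();                       // placeholder
glUseProgram(0);
draw_everything_else();               // placeholder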


#4973441 Deferred shading and High Range Definition

Posted by Yours3!f on 26 August 2012 - 04:22 AM

Since you're doing deferred LIGHTING (and I assume you know the difference between deferred lighting and deferred shading), there is a nice solution from CryEngine 3; the CE3 slides explain how they did it.
Essentially it goes like this:
1. Render the opaque geometry to the G-buffer (24-bit depth + 8-bit stencil; RGBA8 with RGB for normals and A for the specular exponent).
2. Do the shading pass (i.e. read in the 2 textures, do Blinn-Phong shading based on them, and save the result to 2 textures: RGBA16F for diffuse, RGBA16F for specular).
3. Re-render the opaque geometry, read in the 2 RGBA16F textures, use early-z with the existing depth buffer, combine the shading result with the material properties (i.e. surface diffuse color and surface specular color), add the ambient term, and save the result into an RGBA16F texture.

Use this RGBA16F texture as the input for further post-processing.

You usually use this resulting texture for depth of field and bloom/HDR/tonemapping. These are usually dependent on each other, so you don't need to blend.
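A rough C++ sketch of the render targets those steps imply (the texture formats follow the list above; the helper, the sizes and the FBO wiring are my assumptions, not CE3 code):

// assumed helper: allocate an empty 2D texture of the given format
auto make_tex = [](GLenum internal_fmt, GLenum fmt, GLenum type, int w, int h) {
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, internal_fmt, w, h, 0, fmt, type, 0);
  return tex;
};

// step 1: G-buffer (24-bit depth + 8-bit stencil, RGBA8 normals + spec exponent)
GLuint depth_stencil = make_tex(GL_DEPTH24_STENCIL8, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, w, h);
GLuint normal_spec   = make_tex(GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, w, h);

// step 2: light accumulation (RGBA16F diffuse, RGBA16F specular)
GLuint diffuse_light  = make_tex(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT, w, h);
GLuint specular_light = make_tex(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT, w, h);

// step 3: combined HDR result, input to DoF / bloom / tonemapping
GLuint hdr_result = make_tex(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT, w, h);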


#4966948 OpenGL 4.3 - compute shaders and much more

Posted by Yours3!f on 07 August 2012 - 03:04 AM

CL/GL interop didn't fail to work; it worked quite well. I have to admit it is way more complicated than DX11 compute shaders, but it DID work. In fact, I could port DX11 compute shaders to OpenCL and make them work together with OpenGL, see:
http://www.gamedev.n...via-opencl-r233
I'm looking forward to trying out OGL compute shaders though, as they seem more reasonable to use for processing textures / lighting.
The debugging feature is quite an improvement, as such functionality was missing before.


#4963060 [HELP] Porting SMAA from Direct3D to OpenGL

Posted by Yours3!f on 25 July 2012 - 02:40 PM

Hi,

I hope this doesn't count as resurrecting an old topic :)
SMAA finally got fixed; it now works in OGL/GLSL. You can find the source code here: https://github.com/scrawl/smaa-opengl


#4962650 Cleanup at end of program

Posted by Yours3!f on 24 July 2012 - 10:30 AM

It's important to distinguish between OGL resources and other resources. OGL resources get freed automatically when your drawing context is deleted, i.e. when you call Window::close. You only need to delete them manually if you want to unload resources at runtime and load something else, like streaming textures. If you have time for it, it is still advisable to do the manual deleting, just for the sake of practice.
Other resources, such as CPU-side memory, are usually freed by your OS on exit, but some special OSes may not do this. You still want to delete that stuff yourself to make sure you're not leaking memory.
Edit: the non-annoying way would be to encapsulate your objects in classes and give each class an on_delete function. Then, when you react to sf::Event::Closed, call the on_delete function of each object, and finally call window.close(). You may want a global class of some kind that holds all these objects, so that you can access them anywhere in your code. By making the global class lightweight and the actual objects heavyweight, you can easily manage memory later (i.e. the global object lives until your app is closed; everything else dies as soon as it gets deleted / goes out of scope).
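A minimal sketch of that pattern with SFML (the resource class and its contents are made up for illustration; it assumes an OpenGL header such as GLEW is included for the GL calls):

#include <SFML/Window.hpp>
#include <vector>

// made-up example of the "on_delete before window.close()" idea described above
struct gl_resource
{
  GLuint texture = 0;
  void on_delete() { if (texture) { glDeleteTextures(1, &texture); texture = 0; } }
};

void run(sf::Window& window, std::vector<gl_resource>& resources)
{
  while (window.isOpen())
  {
    sf::Event event;
    while (window.pollEvent(event))
    {
      if (event.type == sf::Event::Closed)
      {
        for (auto& r : resources) r.on_delete(); // free GL objects while the context is still alive
        window.close();                          // the context (and anything left) goes with it
      }
    }
    // ... update & draw ...
  }
}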


#4958342 Seriously don't understand WHY my framebuffer isn't rendering as a te...

Posted by Yours3!f on 12 July 2012 - 04:31 AM

So as a further note, is there a debug tool available that can check the contents of textures / video memory? I'm not sure whether there is a problem with rendering to the texture, or whether something happens to the texture (somehow) between the render to the framebuffer and the render to the full-screen quad.

google: gDEBugger



