
SaTANO

Member Since 25 Dec 2009
Offline Last Active Yesterday, 12:46 PM
-----

Topics I've Started

iOS Multisample problem

30 October 2011 - 04:41 PM


I was planning to implement multisampling in my application, but I ran into some difficulties.
The problem is that if I try to multisample a framebuffer that has a texture instead of a renderbuffer as its target, multisampling simply does not work on the device.

iPod touch 4G: iOS 4.1


When I try to multisample a framebuffer with a renderbuffer as the target, everything works as expected.
When I use a framebuffer with a texture as the target, I get an exact copy without multisampling.

Both versions work in the simulator, but then a lot of things work in the simulator :)


In case my code is at fault, here it is.
FB generation:
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, SHADOW_TEXTURE_FILTER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, SHADOW_TEXTURE_FILTER);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, resolutionX, resolutionY, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);

    //////////////////////////////////////////////////
    //                     MSAA                     //
    //////////////////////////////////////////////////

    glGenFramebuffers(1, &msaaFrameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFrameBuffer);

    glGenRenderbuffers(1, &msaaRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderBuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGB8_OES, resolutionX, resolutionY);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaRenderBuffer);
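
A quick sanity check right after this setup can rule out an incomplete attachment; a minimal sketch (the checkFBO helper name is hypothetical, not part of my code above):

    #include <stdio.h>

    // Verify a framebuffer is complete; print the status code if not.
    static void checkFBO(GLuint fbo, const char *name)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE)
            printf("%s incomplete: 0x%04X\n", name, status);
    }

    // usage, right after the setup code above:
    checkFBO(framebuffer, "resolve FBO");
    checkFBO(msaaFrameBuffer, "MSAA FBO");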


and after rendering into msaaFrameBuffer:

	glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFrameBuffer);
	glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, framebuffer);
	glResolveMultisampleFramebufferAPPLE();
	GLenum attachments[] = {GL_COLOR_ATTACHMENT0};
	glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, attachments);

As I said, it works with a renderbuffer, but no multisampling is applied when a texture is used in the framebuffer. I've also tried adding a depth buffer (for some reason this could also be the problem), but nothing seems to help.
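
For reference, this is roughly how the depth attachment would look (a sketch; msaaDepthBuffer is a hypothetical name, and the sample count must match the color renderbuffer):

    // Hypothetical multisampled depth attachment for msaaFrameBuffer.
    // The sample count (4) must match the color renderbuffer above.
    GLuint msaaDepthBuffer;
    glGenRenderbuffers(1, &msaaDepthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthBuffer);
    glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, resolutionX, resolutionY);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthBuffer);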


Has anyone successfully implemented multisampling with a texture as the framebuffer target?

deferred shadow maps

05 October 2011 - 03:26 PM


Hi everybody

There are some topics on the gamedev forum, but none of them describe my situation.
After some time working with matrix math I thought I was finally really good at it. Of course, I was wrong.
I have played a lot with skeletal animation, cascaded shadow maps and other stuff that requires matrix math, but when I tried to do simple shadow mapping deferred, using two depth textures, I got stuck.

What I am trying to achieve is correct shadow mapping using two depth textures (light and camera).

I can represent the texture UV as X,Y and the value (depth) as Z, and that is basically how shadow mapping works. It is just about moving a vector from one space to another, so where is the problem? I quickly put together a formula and generated a texture matrix. And guess what: it doesn't work.

My comparison idea was simple.
The data stored in the camera depth texture is used for vertex world position reconstruction:
grab the depth value at UV and generate a position vector from UV*2.0-1.0 (mapping [0,1] into [-1,1]) and the depth. Using the inverse camera projection I should get the position in camera space, so by multiplying the camera model matrix by the inverse camera projection and then by the vertex, I should get the vertex in world space (modelMatrix * vertex). For the rest I should use the basic formula known from the camera transformation, but with the light in place of the camera.


ModelMatrix = CameraModelMatrix x CameraProjectionMatrix^-1
ViewMatrix = LightModelMatrix^-1
ProjectionMatrix = LightProjectionMatrix


ModelViewMatrix = LightModelMatrix^-1 x CameraModelMatrix x CameraProjectionMatrix^-1
ModelViewProjectionMatrix = LightProjectionMatrix x LightModelMatrix^-1 x CameraModelMatrix x CameraProjectionMatrix^-1

VertexCameraSpacePosition = (U*2-1, V*2-1, depthFromCameraSpace at UV position)
VertexLightSpacePosition = LightProjectionMatrix x LightModelMatrix^-1 x CameraModelMatrix x CameraProjectionMatrix^-1 x VertexCameraSpacePosition

So if I now compare VertexLightSpacePosition.z with depthFromLightSpace (the depth texture value stored from light space) at VertexLightSpacePosition.xy, I should know whether the fragment is in shadow.
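
To make the chain concrete, here is a small C sketch of building and applying that matrix (the mat4/vec4 types, the helper names, and the column-major float[16] layout are my assumptions, not anything standard):

    /* Minimal sketch, assuming column-major 4x4 matrices stored as
       float[16] (GL convention) and hypothetical inputs lightProj,
       invLightModel, camModel, invCamProj. */

    typedef struct { float m[16]; } mat4;
    typedef struct { float x, y, z, w; } vec4;

    /* r = a * b, column-major */
    static mat4 mat4_mul(mat4 a, mat4 b)
    {
        mat4 r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                float s = 0.0f;
                for (int k = 0; k < 4; ++k)
                    s += a.m[k * 4 + row] * b.m[c * 4 + k];
                r.m[c * 4 + row] = s;
            }
        return r;
    }

    static vec4 mat4_transform(mat4 a, vec4 v)
    {
        vec4 r;
        r.x = a.m[0]*v.x + a.m[4]*v.y + a.m[8]*v.z  + a.m[12]*v.w;
        r.y = a.m[1]*v.x + a.m[5]*v.y + a.m[9]*v.z  + a.m[13]*v.w;
        r.z = a.m[2]*v.x + a.m[6]*v.y + a.m[10]*v.z + a.m[14]*v.w;
        r.w = a.m[3]*v.x + a.m[7]*v.y + a.m[11]*v.z + a.m[15]*v.w;
        return r;
    }

    /* LightProj x LightModel^-1 x CamModel x CamProj^-1, built once per frame */
    mat4 shadowMatrix(mat4 lightProj, mat4 invLightModel,
                      mat4 camModel, mat4 invCamProj)
    {
        return mat4_mul(mat4_mul(lightProj, invLightModel),
                        mat4_mul(camModel, invCamProj));
    }

    /* Rebuild the vertex from UV + depth and move it to light clip space.
       Depth is remapped the same way as UV, assuming a [0,1] depth texture
       (GL NDC is [-1,1]); note the perspective divide by w at the end. */
    vec4 toLightSpace(mat4 shadowMat, float u, float v, float depth)
    {
        vec4 p = { u*2.0f - 1.0f, v*2.0f - 1.0f, depth*2.0f - 1.0f, 1.0f };
        vec4 q = mat4_transform(shadowMat, p);
        q.x /= q.w; q.y /= q.w; q.z /= q.w; q.w = 1.0f;
        return q;
    }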



It is not a big deal, because I can generate a correct light shadow texture in the forward pass, store it (another render target with MRT) and compare/smooth it deferred, without needing the camera depth texture.
Recently I found this method right here at gamedev: above method
But I am a curious guy, so I would like to know how to do the whole process deferred (if there is an easy way to deal with this).

I am not asking directly for an answer, but rather for some resources (I googled a lot but didn't find any good resource on deferred shadow mapping).
Of course, anyone who can describe their solution is welcome.

column vs. rows

15 July 2011 - 04:37 PM


I've been wondering which matrix format you are using.

Many applications use row-major matrices, but GLSL vertex shaders use column-major matrices (so do I).
When I was writing an export plugin for Blender, I discovered that Blender uses row-major matrices (it gave me results different from those in my application, but that was not a problem at all).
It's not a big problem anyway, since I can use both: post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices.

Just a simplified example:
I want to get the local position of an object in coordinate system 'A' (A is a 4x4 matrix, the position of the object is represented by a vector...). This is a simple and easy-to-imagine way to determine the multiplication order.
-------------------------------------------------
If I use a row vector to represent the position:
localPosition = (x, y, z, w)
to get the global position I can use:
localPosition * A = globalPosition

so
localPosition = globalPosition * A^-1

-------------------------------------------------
If I use a column vector to represent the position:
localPosition = (x; y; z; w)
to get the global position I can use:
A * localPosition = globalPosition


using the transpose:
localPosition^T * A^T = globalPosition^T
localPosition^T = globalPosition^T * (A^-1)^T

so
localPosition = A^-1 * globalPosition
(a direct analogy to pre/post-multiplying)
-------------------------------------------------
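
To make the pre/post analogy concrete, here is a tiny self-contained C check that a row vector post-multiplied by A equals a column vector pre-multiplied by A^T (the helper names and the example matrix are mine):

    #include <stdio.h>

    /* row vector times matrix: r[j] = sum_i v[i] * A[i][j] */
    static void rowMul(const float v[4], const float A[4][4], float r[4])
    {
        for (int j = 0; j < 4; ++j) {
            r[j] = 0.0f;
            for (int i = 0; i < 4; ++i)
                r[j] += v[i] * A[i][j];
        }
    }

    /* matrix times column vector: r[i] = sum_j A[i][j] * v[j] */
    static void colMul(const float A[4][4], const float v[4], float r[4])
    {
        for (int i = 0; i < 4; ++i) {
            r[i] = 0.0f;
            for (int j = 0; j < 4; ++j)
                r[i] += A[i][j] * v[j];
        }
    }

    int main(void)
    {
        /* translation by (1,2,3) written for row vectors: translation in the last row */
        float A[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {1,2,3,1} };
        float At[4][4]; /* transpose: the same transform for column vectors */
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                At[i][j] = A[j][i];

        float v[4] = {5, 6, 7, 1}, r1[4], r2[4];
        rowMul(v, A, r1);   /* v as row vector, post-multiplied by A   */
        colMul(At, v, r2);  /* v as column vector, pre-multiplied by A^T */
        printf("row: %g %g %g | col: %g %g %g\n",
               r1[0], r1[1], r1[2], r2[0], r2[1], r2[2]);
        return 0;           /* both print 6 8 10 */
    }

The convention only changes where the transpose lives, not the result.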

The same applies to matrix multiplication when converting from global to local space (A and B are 4x4 matrices). It is a bit harder to imagine, but it is the same as the first example.


row-major:
to get the global coordinate system:
localB * globalA = globalB

so

localB = globalB * globalA^-1
------------------------------------------------------------------------
column-major:
to get the global coordinate system:
globalA * localB = globalB

for example, using the inverse:
globalA * localB * globalB^-1 = globalB * globalB^-1
globalA = (localB * globalB^-1)^-1
globalA^-1 = localB * globalB^-1


so
localB = globalA^-1 * globalB
------------------------------------------------------------------------


What I am trying to find out is which vector format (matrix format) is correct, and whether there is any standard/convention...

Thanks for your opinions

EVSM negative moments

31 August 2010 - 08:07 AM

My current version of EVSM stores only positive moments and reduces bleeding quite well, but there are still some bleeding problems.

I read that there is a way to significantly reduce bleeding with a negative warp, but I am not sure how to do it.

Here is what I think should work:
- I warp the moment of the negative depth: everything below 0 gets into the 0-1 range, and I am using e^(c*d*-1.0), so the bigger the depth, the smaller the stored value => if the negative moment is bigger than the scene depth, I am in shadow (just the reverse of the positive moment)
- I store the moments in an RGBA32F texture, compute the positive and negative Chebyshev bounds, and use the minimum of them

Now I should be able to reduce bleeding, but nothing seems to help. If I use only the negative moments I get a similar-looking shadow, so I think my computation is correct.

The problem may be with the constant, because I am unable to use higher constants (c > 5.0) for the negative warp: I hardly see any part of the shadow. I am using c = 40 for the positive warp without problems, but I am not sure the negative moments are stored correctly.
Maybe I misunderstand something, so any help is welcome.
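
For reference, here are the warp and the final minimum as I understand them, as a C sketch (the function names, the minVar clamp, and the [0,1] depth range are my assumptions):

    #include <math.h>

    /* What gets written into the RGBA32F shadow map for depth d in [0,1];
       c_pos / c_neg are the positive and negative warp constants. */
    static void evsmWarp(float d, float c_pos, float c_neg, float out[4])
    {
        float p = expf(c_pos * d);    /* positive warp               */
        float n = -expf(-c_neg * d);  /* negative warp: monotonic, < 0 */
        out[0] = p; out[1] = p * p;   /* E[x], E[x^2] for each warp  */
        out[2] = n; out[3] = n * n;
    }

    /* One-tailed Chebyshev upper bound on the visibility. */
    static float chebyshev(float mean, float meanSq, float t, float minVar)
    {
        if (t <= mean) return 1.0f;   /* fully lit for this moment   */
        float var = meanSq - mean * mean;
        if (var < minVar) var = minVar;
        float d = t - mean;
        return var / (var + d * d);
    }

    /* Shadow factor: warp the fragment depth both ways and take the
       minimum of the two bounds to cut light bleeding. */
    float evsmShadow(const float moments[4], float fragDepth,
                     float c_pos, float c_neg, float minVar)
    {
        float tp = expf(c_pos * fragDepth);
        float tn = -expf(-c_neg * fragDepth);
        float sp = chebyshev(moments[0], moments[1], tp, minVar);
        float sn = chebyshev(moments[2], moments[3], tn, minVar);
        return sp < sn ? sp : sn;
    }

Taking the minimum means the negative bound can only ever tighten the result, which matches the behaviour described above: where the negative moments cover a smaller area, the final shadow equals the positive one.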

thx

///////////////////////////////////////////////////////////////////////////////////////////////
update
Values are stored correctly; I just used too big a max value. Here are pictures from the scene (PSSM using EVSM with 3x 1024x1024):

shadow from negative moments:
[image: negative]

shadow from positive moments:
[image: positive]

As you can see, the negative moments cover a smaller area, so the final shadow is the same as the shadow from the positive moments. Maybe the z-distance between objects is too big, so the bleeding cannot be reduced successfully.

[Edited by - SaTANO on September 1, 2010 3:55:07 AM]

Solved: help with APPLE float color (FBO for ESM)

22 August 2010 - 02:12 PM

I am currently trying to implement EVSM.

First of all I need to do ESM, but I have some problems with texture clamping. Of course I need to write values greater than 1.0f, but I can't disable clamping into [0,1]. I read some Apple mailing lists, and more people have this problem.
For example here:
list1
list2

There I found that I should, for some reason, disable blending, or try to use internal format 0x8815, but nothing seems to change.

Apple's OpenGL extension guide lists color_buffer_float among the supported extensions, so I think it should be supported.

I tried to call void ClampColorARB(enum target, enum clamp); but it looks like OpenGL does not recognize it.

Well, I forgot to post my init parameters for the NSOpenGLView object, so here they are:

    CGDirectDisplayID display = CGMainDisplayID();

    GLuint attributes[] =
    {
        NSOpenGLPFAScreenMask, (NSOpenGLPixelFormatAttribute)CGDisplayIDToOpenGLDisplayMask(display),
        NSOpenGLPFANoRecovery,
        NSOpenGLPFAAccelerated,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorFloat,
        NSOpenGLPFAColorSize, (NSOpenGLPixelFormatAttribute)128, // THIS ONE STAYS AT 128
        NSOpenGLPFADepthSize, (NSOpenGLPixelFormatAttribute)24,  // THIS ONE FALLS BACK TO 8
        NSOpenGLPFAMinimumPolicy,
        0
    };

    NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:(NSOpenGLPixelFormatAttribute *)attributes];

    [self setPixelFormat:format];




I think this is specific to Apple, so I am asking how I should set up the FBO or texture if I don't want these values to be clamped.
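
For context, this is roughly the FBO texture setup I mean, as a sketch on desktop GL (assuming ARB_texture_float / ARB_color_buffer_float are available; floatTexture, width and height are placeholder names):

    /* 32-bit float color attachment plus disabled fragment clamping. */
    GLuint floatTexture;
    glGenTextures(1, &floatTexture);
    glBindTexture(GL_TEXTURE_2D, floatTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, floatTexture, 0);

    /* values written by the shader should then survive unclamped */
    glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
    glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);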
///////////////////////////////////////////////////////////////////////////////////////////

UPDATE:
The clamping was caused by an Interface Builder bug (I switched to defaults and use only manual setup from source code). Now I am able to store values greater than 1.0f, but I get a lot of garbage (it looks like I am not biasing, but I am).

Any idea where the problem could be?


///////////////////////////////////////////////////////////////////////////////////////////

UPDATE:

Everything works fine now; the only problem was that my bias was applied after the exponential function...
If anyone has problems with floating point textures (16/32-bit) on the Apple platform, try setting up the NSOpenGLView attributes (the NSOpenGLPixelFormatAttribute list posted here) and set the default values in Interface Builder.

I hope this will help someone and save some time when trying to find where the problem is.



[Edited by - SaTANO on August 23, 2010 3:31:43 PM]
