OpenGL Need help getting started with direct volume rendering.


I'm trying to learn voxel rendering with GPU raycasting in XNA. I'm not making a Minecraft clone, but I want the terrain to have a similar cuboid appearance, just on a larger scale. I've seen software renderers on YouTube that can render 1024^3 volumes, so I should be able to do just as well or better with the GPU, right? My goal is a volume of 1024x1024x64; ideally it would be twice those dimensions. I'd like to have the voxels textured with diffuse lighting and possibly ambient occlusion.

Anything I search for leads me to acceleration structures and other advanced concepts. I can't find anything on how to just do the basics. The only thing I've been able to actually understand and implement is this, which uses a weird method that doesn't seem normal to me. Primarily, it only works when the camera is outside of the volume.

My experience with graphics programming isn't much beyond basic polygon Phong shading and shadow maps. Is jumping straight into direct volume rendering too big of a leap? It seems that much of the material I find assumes a good understanding of raytracing. I can understand the concepts of 3DDDA traversal and sparse voxel octrees, but I don't know how to actually put them into practice.

Based on the example in the link, I've got raycasting working and am traversing through the volume and sampling a color from a 3D texture. It uses a weird light accumulation scheme to make it look all transparent and medical-like. I'm trying to get started from that, but I'm not getting any further.

My first question is whether I should stick with XNA. I know it's not the best choice for this, but is it a terrible choice? It's what I'm most familiar with, and I would like to publish to XBLIG. I've never really made anything with OpenGL, but that would be the alternative, mostly to go multiplatform.

Second, how do you get the per-pixel raycasting going? The example draws a polygon cube to get to the pixel shader. I would think putting a quad in front of the camera would be better. I'm sure there is a standard solution to this, but I can't find it.

How do you access the volume data from the pixel shader? Based on the link, I'm using a Texture3D, which is supported in the shader. What will I do when I want to implement a spatial hashing structure?

How do you determine the color of the pixel once you reach a voxel in the raycasting algorithm? After sampling the color from the volume, I assume you'll want to calculate the normal based on which side you hit it from to do the lighting. How would I apply a texture or shadows to the voxel? I'm just looking to draw each voxel as a cube, like in the image below. I don't care about any smoothing or antialiasing.

[Image: atomontage_vb01_voxels.jpg]

I'm not expecting to match that engine in scale but I'd like to at least get a similar appearance.

All of these questions I might be able to figure out on my own with a lot of math and time, but I know they've already been solved. I just can't find a source that explains them clearly.

Hopefully based on this you can tell where I'm at and recommend some reading or more specific search terms. Thanks for any help.

PS: I'm not interested in mesh extraction techniques, just direct volume rendering.

> 1024x1024x64. Ideally it would be twice those dimensions.
> Anything I search for leads me to acceleration structures and other advanced concepts.

For such "thin" data, acceleration structures are overkill; brute-force GPU texture-mapped volume rendering would be a better choice, though it doesn't scale well to thicker data. There are several free GPU-based engines suitable for data like this. If you move on to fat volumes, the GPU approach stops being practical and a multi-core CPU is the way to go.

Yeah. I was thinking that with such a small volume I may not even need any acceleration structure. But that's something I'll figure out later. For now I'm having trouble finding good resources to just get started.


> Yeah. I was thinking that with such a small volume I may not even need any acceleration structure. But that's something I'll figure out later. For now I'm having trouble finding good resources to just get started.

Minecraft solves this in a smart way by generating mesh chunks. I assume that by XNA you mean the Xbox 360 GPU is your main target; it would be faster (if you add occlusion culling) to render the faces of your volume as geometry.

I have something like that currently implemented, but getting the draw distance I want is pretty much impossible. The Xbox can only handle so many draw calls, so I need to make my chunks extra large, which means they take forever to rebuild when they're modified. And the memory requirements of all the vertex buffers are hard to keep down.

I'd hate to spend all my time trying to optimize the mesh extraction approach when it may not even be possible to get it to where I want. I don't know if the direct rendering method will get me there either, but I am still interested in learning the technology.

Directly rendering voxels via raycasting using a single-level 3DDDA sounds like it will work. It sounds like you already understand 3DDDA, so this should be easy/obvious. Putting it into practice means creating a format for the voxels. A simple one I started toying with in SlimDX was a 2-level 3DDDA grid traversal. I can give some code snippets which might help you along, with a simple design. However, directly rendering voxels using a full-screen pixel shader is really more of a DX10/11 job than something you'd want to do on older hardware. I'm not sure an Xbox would be fast enough.

I'll describe a basic setup though. You have a chunk info texture that holds offsets into the chunk data textures. You store the state of which chunks are loaded and compression information in the chunk info texture. In the example below I assume a 16 MB texture to hold 256x256x64 chunk records. Each chunk info record is 4 bytes consisting of:
[byte] chunk type: 0 = Empty, 1 = Mixed, 2 = Water, 3 = Lava
[byte] texture index
[short] texture offset
And the chunk data textures are simply 32x32x32 records with 2 bytes per voxel.
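On the shader side, decoding one of those records might look something like the sketch below. It relies on a few assumptions of mine (the 2048x2048 chunk info texture is filled in x-then-y-then-z order, the record's four bytes land in RGBA in that order, and the texture is point-sampled), so adjust it to however you actually upload the data:

// Sketch: fetch and decode one 4-byte chunk info record from a 2048x2048 texture
// laid out as 256x256x64 chunks. Layout and channel order are assumptions.
sampler2D chunkInfoSampler;   // the 2048x2048 chunk info texture

void FetchChunkInfo(int3 chunk, out int chunkType, out int textureIndex, out int textureOffset)
{
    // Linear chunk index into the 256x256x64 grid, wrapped into 2048x2048 texels.
    float linearIndex = chunk.z * 256 * 256 + chunk.y * 256 + chunk.x;
    float row = floor(linearIndex / 2048.0);
    float col = linearIndex - row * 2048.0;
    float2 uv = (float2(col, row) + 0.5) / 2048.0;

    // tex2Dlod so this also works inside the traversal loop without gradient issues.
    float4 texel = tex2Dlod(chunkInfoSampler, float4(uv, 0, 0)) * 255.0;
    chunkType     = (int)texel.r;                       // 0 empty, 1 mixed, 2 water, 3 lava
    textureIndex  = (int)texel.g;                       // which chunk data texture to sample
    textureOffset = (int)(texel.b * 256.0 + texel.a);   // 16-bit offset split across two channels
}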

const int chunkFreeListTextureSize = 4096 * 4096 * 4;
const int chunkFreeListTextureCount = 3;
const int chunkVoxelSize = 32 * 32 * 32 * 2;
// Free-list bookkeeping: heads and counts of the unused and used chunk lists
int unusedChunkIndex;
int unusedChunkCount;
int usedChunkIndex;
int usedChunkCount;
const int chunkCount = chunkFreeListTextureSize * chunkFreeListTextureCount / chunkVoxelSize;
ChunkMetaData[] chunkFreeList = new ChunkMetaData[chunkCount];
// Map the unique chunk index to the free list location
int[] chunkToFreeList = Enumerable.Repeat(-1, 256 * 256 * 64).ToArray();
// Chunk info, stored on the video card in a 2048x2048 32-bit texture: 256x256x64 chunks with 4 bytes per chunk
byte[] chunkInfo = new byte[256 * 256 * 64 * 4];
// Chunk data, stored on the video card in 3 4096x4096 32-bit textures
byte[] chunkData = new byte[chunkFreeListTextureSize * chunkFreeListTextureCount];
// Used by LoadChunk to generate placeholder voxel data
Random random = new Random();

Initializing the free list of 32x32x32 chunks:

unusedChunkIndex = 0;
unusedChunkCount = chunkCount;
usedChunkIndex = -1;
usedChunkCount = 0;

for (int i = 0; i < chunkCount; ++i)
{
    chunkFreeList[i].previous = i == 0 ? -1 : i - 1;
    chunkFreeList[i].next = i == chunkCount - 1 ? -1 : i + 1;
}


private int GetChunk(int x, int y, int z)
{
    if (unusedChunkCount == 0)
    {
        // No chunks left to be used. Try again later?
        return -1;
    }
    unusedChunkCount--;
    usedChunkCount++;

    if (usedChunkIndex == -1)
    {
        // Used list is empty: move the head of the unused list over
        usedChunkIndex = unusedChunkIndex;
        unusedChunkIndex = chunkFreeList[unusedChunkIndex].next;
        if (unusedChunkIndex != -1) chunkFreeList[unusedChunkIndex].previous = -1;
        chunkFreeList[usedChunkIndex].next = -1;
    }
    else
    {
        // Prepend the head of the unused list to the used list
        int oldUsedChunkIndex = usedChunkIndex;
        usedChunkIndex = unusedChunkIndex;
        unusedChunkIndex = chunkFreeList[unusedChunkIndex].next;
        if (unusedChunkIndex != -1) chunkFreeList[unusedChunkIndex].previous = -1;
        chunkFreeList[usedChunkIndex].next = oldUsedChunkIndex;
        chunkFreeList[oldUsedChunkIndex].previous = usedChunkIndex;
    }
    chunkFreeList[usedChunkIndex].x = x;
    chunkFreeList[usedChunkIndex].y = y;
    chunkFreeList[usedChunkIndex].z = z;
    return usedChunkIndex;
}

private void PutChunk(int chunkIndex)
{
    unusedChunkCount++;
    usedChunkCount--;

    // Unlink the chunk from the used list
    if (chunkFreeList[chunkIndex].previous == -1)
    {
        usedChunkIndex = chunkFreeList[chunkIndex].next;
    }
    else
    {
        chunkFreeList[chunkFreeList[chunkIndex].previous].next = chunkFreeList[chunkIndex].next;
    }

    if (chunkFreeList[chunkIndex].next != -1)
    {
        chunkFreeList[chunkFreeList[chunkIndex].next].previous = chunkFreeList[chunkIndex].previous;
    }

    // Add the chunk back to the head of the unused chunk list
    chunkFreeList[chunkIndex].previous = -1;
    chunkFreeList[chunkIndex].next = unusedChunkIndex;
    if (unusedChunkIndex != -1) chunkFreeList[unusedChunkIndex].previous = chunkIndex;
    unusedChunkIndex = chunkIndex;
}

private int ChunkUniqueIndex(int x, int y, int z)
{
    return z * 256 * 256 + y * 256 + x;
}

/// <summary>
/// Iterate the AABB region around the position and load any chunk within range that isn't already loaded
/// </summary>
/// <param name="position">World-space position to load around</param>
private void LoadProximityChunks(Vector3 position)
{
    const int size = 6;
    float radius = (float)Math.Sqrt(2) * ((float)size + 0.5f);
    int positionX = (int)position.X / 32;
    int positionY = (int)position.Y / 32;
    int positionZ = (int)position.Z / 32;
    int minX = Math.Max(0, positionX - size);
    int maxX = Math.Min(255, positionX + size);
    int minY = Math.Max(0, positionY - size);
    int maxY = Math.Min(255, positionY + size);
    int minZ = Math.Max(0, positionZ - size);
    int maxZ = Math.Min(63, positionZ + size);
    for (int z = minZ; z <= maxZ; ++z)
    {
        for (int y = minY; y <= maxY; ++y)
        {
            for (int x = minX; x <= maxX; ++x)
            {
                if ((x - positionX) * (x - positionX) +
                    (y - positionY) * (y - positionY) +
                    (z - positionZ) * (z - positionZ) <= radius * radius)
                {
                    if (chunkToFreeList[ChunkUniqueIndex(x, y, z)] == -1)
                    {
                        LoadChunk(x, y, z);
                    }
                }
            }
        }
    }
}

private void UnloadProximityChunks(Vector3 position)
{
    const int size = 7;
    float radius = (float)Math.Sqrt(2) * ((float)size + 0.5f);
    int positionX = (int)position.X / 32;
    int positionY = (int)position.Y / 32;
    int positionZ = (int)position.Z / 32;
    for (int iterator = usedChunkIndex; iterator != -1; )
    {
        // Grab the next link first; unloading relinks this node into the unused list
        int next = chunkFreeList[iterator].next;
        int x = chunkFreeList[iterator].x;
        int y = chunkFreeList[iterator].y;
        int z = chunkFreeList[iterator].z;
        if ((x - positionX) * (x - positionX) +
            (y - positionY) * (y - positionY) +
            (z - positionZ) * (z - positionZ) > radius * radius)
        {
            UnloadChunk(x, y, z, iterator);
        }
        iterator = next;
    }
}

private void LoadChunk(int x, int y, int z)
{
    // Procedurally generate chunk for now
    int chunkFreeListIndex = GetChunk(x, y, z);
    chunkToFreeList[ChunkUniqueIndex(x, y, z)] = chunkFreeListIndex;
    int chunkDataOffset = chunkFreeListIndex * 32 * 32 * 32 * 2;
    for (int i = chunkDataOffset; i < chunkDataOffset + 32 * 32 * 32 * 2; i += 2)
    {
        // Random number in 0..3, so a 25% chance there will be a voxel at a given position
        ushort voxelType = random.Next(4) == 0 ? (ushort)0x8001 : (ushort)0x8000;
        chunkData[i] = (byte)(voxelType >> 8);
        chunkData[i + 1] = (byte)(voxelType & 0xFF);
    }

    // Update chunk info with the change
    int chunkInfoOffset = ChunkUniqueIndex(x, y, z) * 4;
    // Chunk type
    chunkInfo[chunkInfoOffset + 0] = 1;
    // Texture index
    chunkInfo[chunkInfoOffset + 1] = (byte)ChunkFreeListIndexToTextureIndex(chunkFreeListIndex);
    // Texture offset
    int textureOffset = ChunkFreeListIndexToTextureOffset(chunkFreeListIndex);
    chunkInfo[chunkInfoOffset + 2] = (byte)(textureOffset >> 8);
    chunkInfo[chunkInfoOffset + 3] = (byte)(textureOffset & 0xFF);
}

private void UnloadChunk(int x, int y, int z, int chunkFreeListIndex)
{
    PutChunk(chunkFreeListIndex);
    chunkToFreeList[ChunkUniqueIndex(x, y, z)] = -1;
}

private int ChunkFreeListIndexToTextureIndex(int chunkFreeListIndex)
{
    // Each chunk data texture holds (chunkFreeListTextureSize / chunkVoxelSize) chunks
    return chunkFreeListIndex / (chunkFreeListTextureSize / chunkVoxelSize);
}

private int ChunkFreeListIndexToTextureOffset(int chunkFreeListIndex)
{
    return chunkFreeListIndex % (chunkFreeListTextureSize / chunkVoxelSize);
}

private void UploadChunkInfo()
{
    var dataRectangle = chunkInfoSurface.LockRectangle(LockFlags.None);
    dataRectangle.Data.Write(chunkInfo, 0, 256 * 256 * 64 * 4);
    chunkInfoSurface.UnlockRectangle();
}

I can't verify the above code works, but it shows off the basic structure of managing chunks and updating the chunk info and chunk data textures. I got busy with classes and had to set it aside, so I'm not really sure how well it works. I do remember I got a depth map rendering. Texturing is really simple: you just do a modulus operation, and since you know which side of the voxel the ray entered, you can perform a look-up in a texture for the pixel value to output.
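To make that concrete, here is a rough sketch of the face look-up plus simple diffuse lighting from the entry side. It assumes unit-sized voxels, a world-space hit position, and knowledge of which axis the ray last stepped across (the lastCrossed variable in the shader below); the sampler, light direction, and the 0.3/0.7 lighting split are mine, not part of the project:

// Sketch: face UV via the "modulus" (frac for unit voxels) and a face normal
// derived from the last crossed axis. Names and constants are illustrative.
sampler2D faceTextureSampler;   // per-face voxel texture (or one tile of an atlas)
float3 sunDirection;            // normalized, pointing from the surface toward the light

float4 ShadeVoxelFace(float3 hitPosition, int lastCrossed, int3 step)
{
    // The voxels are unit cubes, so frac() gives a 0..1 UV across the hit face.
    float2 uv;
    float3 normal;
    if (lastCrossed == 0)      { uv = frac(hitPosition.yz); normal = float3(-step.x, 0, 0); }
    else if (lastCrossed == 1) { uv = frac(hitPosition.xz); normal = float3(0, -step.y, 0); }
    else                       { uv = frac(hitPosition.xy); normal = float3(0, 0, -step.z); }

    // The face normal points back against the ray's step direction on the crossed
    // axis, which is all you need for basic diffuse lighting.
    float diffuse = saturate(dot(normal, sunDirection));
    float4 texel = tex2D(faceTextureSampler, uv);
    return float4(texel.rgb * (0.3 + 0.7 * diffuse), 1.0);   // arbitrary ambient + diffuse mix
}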

If you're curious, the shader looks something like the listing below. In your traversal you step just like regular 3DDDA, but when you're at the chunk level and a chunk isn't empty you drop down to the voxel-level traversal, traverse there, and then step back up when you leave the chunk. This is a fun problem to figure out on paper. It's easy if you look at it as a state graph with two states and the conditions that move you between them.

float aspectRatio;
float3x3 cameraRotation;
float3 cameraCenter;
float focalDistance;
float2 tileSize;
float2 tileAtlasSize;
bool depthMap;

texture tileAtlas;
texture chunkInfo;
texture chunkData0;
texture chunkData1;
texture chunkData2;

sampler tileAtlasSampler = sampler_state
{
    Texture = <tileAtlas>;
};
sampler chunkInfoSampler = sampler_state
{
    Texture = <chunkInfo>;
};
sampler chunkDataSampler0 = sampler_state
{
    Texture = <chunkData0>;
};
sampler chunkDataSampler1 = sampler_state
{
    Texture = <chunkData1>;
};
sampler chunkDataSampler2 = sampler_state
{
    Texture = <chunkData2>;
};

// Two const boundary limits for chunks and voxels
int3 minChunkBoundary = int3(-1, -1, -1);
int3 maxChunkBoundary = int3(256, 256, 64);

int3 minVoxelBoundary = int3(-1, -1, -1);
int3 maxVoxelBoundary = int3(32, 32, 32);

float chunkSize = 32.0;
float voxelSize = 1.0;

// Chunk types
float ChunkEmpty = 0;
float ChunkMixed = 1;
float ChunkWater = 2;
float ChunkLava = 3;

struct VS_INPUT
{
    float4 Position : POSITION;
    float4 Color : COLOR0;
};

struct VS_OUTPUT
{
    float4 vPosition : POSITION;
    float4 Position : TEXCOORD0;
    float4 Color : COLOR0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

VS_OUTPUT VShader(VS_INPUT input)
{
    VS_OUTPUT output;
    output.vPosition = input.Position;
    output.Position = input.Position;
    output.Color = input.Color;
    return output;
}

PS_OUTPUT PShader(VS_OUTPUT input)
{
    PS_OUTPUT output;
    output.Color = float4(0, 0, 0, 1);

    float cameraScale = 0.2;
    float3 rayPosition = float3(0.0, input.Position.x, input.Position.y / aspectRatio) * float3(0.0, cameraScale, cameraScale);
    float3 rayDirection = mul(cameraRotation, normalize(rayPosition - float3(-focalDistance * cameraScale, 0.0, 0.0)));

    // TODO: Test these instead and translate back by the focal distance instead of transforming each pixel position
    // float3 rayPosition = cameraCenter;
    // float3 rayDirection = mul(cameraRotation, normalize(float3(focalDistance, input.Position.x, input.Position.y / aspectRatio)));

    if (all(rayDirection))
    {
        rayPosition = mul(cameraRotation, rayPosition);
        rayPosition += cameraCenter;

        int3 chunk = int3(rayPosition / chunkSize);
        int3 voxel = int3(rayPosition / voxelSize);

        int3 step = sign(rayDirection);

        float3 offsetChunkTemp = rayPosition - mul(floor(rayPosition / chunkSize), chunkSize);
        float3 offsetFromChunkAxis = step == float3(1, 1, 1) ? chunkSize - offsetChunkTemp : offsetChunkTemp;

        float3 offsetVoxelTemp = rayPosition - mul(floor(rayPosition / voxelSize), voxelSize);
        float3 offsetFromVoxelAxis = step == float3(1, 1, 1) ? voxelSize - offsetVoxelTemp : offsetVoxelTemp;

        float3 tMaxChunk = offsetFromChunkAxis / abs(rayDirection);
        float3 tDeltaChunk = chunkSize / abs(rayDirection);

        float3 tMaxVoxel = offsetFromVoxelAxis / abs(rayDirection);
        float3 tDeltaVoxel = voxelSize / abs(rayDirection);

        int3 chunkBoundary = step == int3(-1, -1, -1) ? minChunkBoundary : maxChunkBoundary;
        int3 voxelBoundary = step == int3(-1, -1, -1) ? minVoxelBoundary : maxVoxelBoundary;

        int lastCrossed = 0; // 0 = x, 1 = y, 2 = z

        bool chunkLevel = true;

        // Chunk to voxel traversal loop:
        while (true)
        {
            // Your 3DDDA traversal algorithm that transitions between chunk and voxel levels.
        }
    }
    else
    {
        output.Color = float4(1, 0, 1, 1);
    }
    return output;
}

technique voxelTechnique
{
    pass p0
    {
        VertexShader = compile vs_3_0 VShader();
        PixelShader = compile ps_3_0 PShader();
    }
}
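To give the placeholder in that while loop a bit more shape, here's a rough sketch of a single DDA advance written as a helper you could call with either the chunk-level or the voxel-level variables (tMaxChunk/tDeltaChunk or tMaxVoxel/tDeltaVoxel). This is my sketch, not verified code from the project; the two-level part on top of it is just: at the chunk level test the chunk type, drop to the voxel level when the chunk is Mixed, and step back up when the voxel coordinate leaves the 32x32x32 chunk.

// Advance one cell at a single level of the grid. Returns false when the ray
// leaves the grid on the axis it just stepped across.
bool AdvanceCell(inout float3 tMax, float3 tDelta, inout int3 cell,
                 int3 step, int3 boundary, inout int lastCrossed)
{
    if (tMax.x <= tMax.y && tMax.x <= tMax.z)
    {
        lastCrossed = 0;
        cell.x += step.x;
        tMax.x += tDelta.x;
        return cell.x != boundary.x;
    }
    else if (tMax.y <= tMax.z)
    {
        lastCrossed = 1;
        cell.y += step.y;
        tMax.y += tDelta.y;
        return cell.y != boundary.y;
    }
    else
    {
        lastCrossed = 2;
        cell.z += step.z;
        tMax.z += tDelta.z;
        return cell.z != boundary.z;
    }
}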

Here's a 2D example of a 2-level 3DDDA algorithm. I'm not sure it's optimal, but it does show the number of taps on the textures required. I've been told that DX9 hardware would probably die performing that many texture look-ups.

Shadows can be performed by ray tracing toward the sun just like regular ray tracing. I believe it would be prohibitively expensive, but I haven't really researched the current methods.

Rendering a full-screen quad, by the way, solves your question about getting the per-pixel raycasting going. You can literally just pass in a very simple quad covering the (-1, -1) to (1, 1) range that is expected, so no transformation is needed in your shader.

// (4 floats per Vector4) * (4 bytes per float) + (4 bytes for the integer color) = 20 bytes per vertex
vertices = new VertexBuffer(device, 4 * 20, Usage.WriteOnly, VertexFormat.None, Pool.Managed);
vertices.Lock(0, 0, LockFlags.None).WriteRange(new[] {
    new Vertex() { Position = new Vector4(-1, -1, 0, 1), Color = new Color4(1, 1, 0, 0).ToArgb() },
    new Vertex() { Position = new Vector4(-1,  1, 0, 1), Color = new Color4(1, 0, 1, 0).ToArgb() },
    new Vertex() { Position = new Vector4( 1,  1, 0, 1), Color = new Color4(1, 0, 0, 1).ToArgb() },
    new Vertex() { Position = new Vector4( 1, -1, 0, 1), Color = new Color4(1, 1, 0, 1).ToArgb() }
});
vertices.Unlock();

var vertexElements = new[] {
    new VertexElement(0, 0, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.Position, 0),
    new VertexElement(0, 16, DeclarationType.Color, DeclarationMethod.Default, DeclarationUsage.Color, 0),
    VertexElement.VertexDeclarationEnd
};

vertexDeclaraction = new VertexDeclaration(device, vertexElements);

Wow, thanks, this is really helpful! I've just about got something working. I don't understand how the rayDirection is calculated, though. I assume cameraScale and focalDistance are somehow related to the FOV. How can I determine the proper values based on the FOV, or is there a way to use a projection matrix instead?

[Image: focaldistance.png]
So you have this setup, and you want to scale it so that the camera plane is smaller, such that the rays fire from a plane that represents a smaller screen. This camera scaling is merely so that the camera plane doesn't clip into objects too easily if you're using a world where 1.0 = 1 meter. If you imagine it, the screen is 2x2 meters (ignoring the aspect ratio) and rotates around its center; that 2x2 meter quad would probably clip into things if it weren't scaled down.

FOV = 2 * atan(1 / focalDistance)
or, rearranged into a more useful form:
focalDistance = 1 / tan(FOV / 2)

So if the FOV is 90 degrees then focalDistance should be 1.

As for the ray direction, imagine that the screen is split into a grid of pixels, each with a normalized (length 1) ray emitted from its surface. To construct the ray you take the pixel position, which happens to be the input to the pixel shader, and subtract from it a position located right behind the plane (defined by the focal distance). Rotating the camera rotates all of the ray directions the same way.
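Putting those two paragraphs together, here is a small sketch of how you could go from an FOV to the ray, using the same axis convention as the shader above (x forward, the screen in the y/z plane). The function and parameter names are mine, and the FOV here is the horizontal one given that convention:

// Build a world-space ray direction for one pixel from its clip-space position.
// fov is the horizontal field of view in radians; names are illustrative.
float3 BuildRayDirection(float2 clipPosition, float aspectRatio, float fov, float3x3 cameraRotation)
{
    float focalDistance = 1.0 / tan(fov * 0.5);
    // Point on the image plane for this pixel (x forward, y across, z up).
    float3 pointOnPlane = float3(0.0, clipPosition.x, clipPosition.y / aspectRatio);
    // The eye sits behind the plane at the focal distance.
    float3 eye = float3(-focalDistance, 0.0, 0.0);
    return mul(cameraRotation, normalize(pointOnPlane - eye));
}

Note that cameraScale cancels out of the direction; it only matters for where the ray starts.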

Also here's a fun image:
[Image: conventions.png]
It describes the convention used in that code. Z is up and X is to the right with Y going into the screen. Most of the problems you'll have can be easily solved by just drawing the situations out on paper.

Alright! I finally made something that looks how I want it to:

[Image: uDsNK.png]

This is at 640x360 on my PC; I get about 20 fps on the Xbox. I'm not using the chunk structure you posted yet, it's just one 256^3 volume. With the layout of my data I don't think a 2-level traversal would have any benefit when the camera is at ground level; there are probably zero chunks in there with homogeneous data. My chunk size would have to be something like 4^3 to see any benefit, but then I'd be passing in a large number of textures. I've also realized that I really don't know my 3D math well enough; every small problem I run into takes forever to figure out. I think next semester I'll take that 3D math and physics course I've been avoiding. Thanks for the help. I might try adding textures later, but I think my practical use of this is going to be pretty limited for now.
