# BlueSpud

1. ## Screen Space Reflection Streaking

Hey, I'm trying to implement some very simple screen space reflections. Right now I have everything working, except there are some pretty noticeable artifacts. Here is an image of the reflection buffer with the artifact shown. I've outlined the actual geometry extents in red so that it's more obvious, because this is the reflection buffer only. The noise is intentional; I'm either going to blur it out later, but right now I'm concentrating on getting the reflections themselves to look right.

As you can see, there are "streaks" that ideally should not be there. My guess is that the ray is facing away from the camera and is actually moving behind the objects, but since the ray position is then less than the depth of the object, it registers as a hit and gets reflected. If an object isn't touching the plane it is reflecting on, the streaking becomes much more noticeable. Does anyone have any ideas on what I can do to fix this?

Thanks

EDIT: Here is my code in case the issue is in there:

```glsl
vec2 binarySearch(vec3 last, vec3 current) {
    vec3 ray_min = last;
    vec3 ray_max = current;
    vec3 ray_mid;
    for (int i = 0; i < 10; i++) {
        ray_mid = mix(ray_min, ray_max, 0.5);
        float depth = texture(tex_depth, ray_mid.xy).r;
        if (ray_mid.z > depth)
            ray_max = ray_mid;
        else
            ray_min = ray_mid;
    }
    return ray_mid.xy;
}

void main() {
    // Get the normal
    vec3 normal = texture(tex_normal, tex_coord0).rgb;

    // Calculate world space position
    float depth = texture(tex_depth, tex_coord0).x;
    vec4 position_p = vec4(tex_coord0 * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    position_p = inverse_proj_view * position_p;
    vec3 position = (position_p / position_p.w).xyz;

    // Get the reflection vector
    vec3 V = normalize(position - view_position);
    vec3 reflection_vector = reflect(V, normal);
    vec4 reflection_pos = mat_view_proj * vec4(position + reflection_vector, 1.0);
    reflection_pos = reflection_pos / reflection_pos.w;
    vec3 reflection_pos_s = reflection_pos.xyz * 0.5 + 0.5;

    // Calculate a jitter to add some noise
    float c = (tex_coord0.x * 14400.0 + tex_coord0.y * 8500.0) * 0.25;
    float jitter = mod(c, 1.0);

    // March the ray in screen space
    vec3 NDC = vec3(tex_coord0, depth);
    vec3 ray_direction = normalize(reflection_pos_s - NDC) * jitter;
    vec3 last_ray_step = NDC;
    vec3 reflection_color;
    for (int i = 0; i < 16; i++) {
        vec3 ray_step = last_ray_step + ray_direction * 0.04;
        float depth_ray = texture(tex_depth, ray_step.xy).r;
        if (depth_ray < ray_step.z) {
            reflection_color = texture(tex_albedo, binarySearch(last_ray_step, ray_step)).rgb;
            break;
        }
        last_ray_step = ray_step;
    }
    reflection = reflection_color * (1.0 - smoothstep(0.25, 0.5, dot(-V, reflection_vector)));
}
```
2. ## OpenGL Possible Causes of Performance Degradation Over Time

Yeah, I suspect that too. I wouldn't be surprised if the management of the CPU/GPU clock speed is dynamic enough that launching a new program or alt-tabbing out could lead to a temporary speed recovery.

Here are some graphs of the thermals running the game for ~7 minutes: http://imgur.com/a/zZ1iq It seemed like there could be some throttling, but during the long stretch where the temperature remained constant, the game was still running at 60 FPS (V-sync was enabled). This was a fresh reboot, so it could still be a temperature issue, but I'll have to do more extensive tests later on that if there aren't any other ideas.
3. ## OpenGL Possible Causes of Performance Degradation Over Time

Hey, so I was working on my OpenGL engine today and I noticed that if I leave it running for about a minute and a half, I start to experience some significant framerate drops. The first thing I did was make sure that there were no OpenGL allocations every frame. After both a code check and a check with a separate profiling application, I verified that I am not calling anything like glGenBuffers() per frame. Then I checked for possible issues with dynamic allocations per frame. The code executed every frame is relatively large, but it doesn't seem like there are any dynamic allocations in the rendering pipeline. I checked the graph of memory usage and it doesn't even seem like I'm allocating any memory except for a little bit here and there, which I assume is coming from PhysX or OpenAL, which are both in the program.

I thought that perhaps my CPU or GPU could be thermal throttling (I'm on a laptop). I ran the program until I experienced performance issues, closed it, and immediately ran it again. The performance issues stopped once the application was re-run. What also strikes me as odd is that when I tab out and tab back in, the performance issues stop for a short period of time.

I'm a bit puzzled about what is happening here. I highly doubt it's a driver issue, as other games work fine, although I haven't tried it on another computer. I've run some standard time-elapsed CPU profiling to try to figure out where the code was getting slow (I know these are generally unreliable because the GPU operates independently), and it just seems like anything that touches OpenGL runs slower after the game is left running. Any ideas as to what might be causing this? I've noticed this issue for a while, but I always ignored it, thinking it was just something else running on the computer, until I did more extensive testing now, so I don't know what specific code change introduced it.

Thanks

Could it be this line?

```glsl
-position.y / position.w / 2.0f + 0.5f
```

I'm not 100% sure why you are flipping this; could that be the issue?

Also, you can create a matrix to transform your coordinates from [-1, 1] to [0, 1] and multiply it by your light matrix to save a bit of computation when you do the shadow comparison.

Also, I believe you don't need to divide by w for this or for the z position. The z value in the shadow map is the projected z without the division, so doing it is pointless I believe (for an orthographic light projection, w is 1 anyway, so the divide is a no-op).

This is how I sample from my shadow map, for reference:

```glsl
vec4 position_shadow = light_matrix * vec4(position, 1.0);
vec2 tex_coord_shadow = position_shadow.xy;
float z = texture(tex_shadow, tex_coord_shadow).x;
```

`position` is the world space position of a fragment, and my light matrix includes the matrix to transform it to [0, 1].
5. ## Better PhysX Character Controller

I've tried using both rigid bodies and manual sweep tests. I had some really nasty problems with both, mostly stuttering, bad collision behavior, etc. I would think that a rigid body would be the best solution as well, so I tried implementing it. I created a dynamic rigid body with a convex shape and set the inertia tensors so it wouldn't rotate and fall over. I then set the linear velocity before I called step simulation. This worked pretty well, but it didn't really slide against the walls, and when the rigid body was pushed against the wall at too steep of an angle, it jittered back and forth and I'm not quite sure why. I'm sure there's something PhysX-specific I'm missing, but there isn't a whole lot on character controllers in PhysX from what I've found.
6. ## get (x/y) coordinates of a square given angle?

I haven't tested this, but I'm pretty sure it should work.

I'm assuming your square is defined by three things:
- a center point
- a radius
- a rotation

```cpp
vec2 point1 = vec2(cos(pi / 4 + angle), sin(pi / 4 + angle)) * sqrt(2) * radius + center;
vec2 point2 = vec2(cos(3 * pi / 4 + angle), sin(3 * pi / 4 + angle)) * sqrt(2) * radius + center;
vec2 point3 = vec2(cos(-pi / 4 + angle), sin(-pi / 4 + angle)) * sqrt(2) * radius + center;
vec2 point4 = vec2(cos(-3 * pi / 4 + angle), sin(-3 * pi / 4 + angle)) * sqrt(2) * radius + center;
```

This should give you all 4 points of the square, but once you pass pi/2 the relative position of the points will change. For instance, point1 will become the top left point instead of the top right. You could combat this by either sorting the points afterward or just using something like:

```cpp
void setAngle(float _angle) {
    angle = fmod(_angle, pi / 2);
}
```

which is much more efficient than sorting.

This works because if you look at a unit circle, you can make a square on the circle by taking the points at pi/4, 3pi/4, 5pi/4 and 7pi/4. This will give us a square with a radius of sqrt(2)/2, because cos(pi/4) is sqrt(2)/2. To make it a radius of 1, we multiply by the square root of 2 and then by the desired radius. Take a look at the unit circle if you're having any issues visualizing this.

I would verify this before using it (I didn't really do extensive testing), but it should work.
7. ## Better PhysX Character Controller

My mistake, it totally slipped my mind that they were using a convex hull when I wrote the post. I took a look at the character controller source and it seems very specialized to just use capsules and boxes, so I doubt that they modified it, because it would probably mean redoing the entire controller.

When I thought of using rigid bodies I envisioned using velocity instead of forces, and I actually tried it out. It resulted in a lot of rubber-banding when colliding with static objects. I'm not really very experienced with PhysX, so it was probably just my ineptness. The forces approach seems like it would require a lot of overcomplication, because the force would have to exactly balance friction once the controller was accelerated to full speed.

As for the kinematic approach, I was under the impression that kinematic objects don't have collisions? I could be wrong, but the Nvidia docs say "There is no interaction or collision between kinematic actors and static actors." The regular implementation uses sweep tests for the collisions from what I can tell, and kinematics just to simulate collision with dynamic bodies. What were you thinking with this one? It's probably the best solution because it gives the most control.
8. ## Better PhysX Character Controller

Hey, I've had a character controller implemented in my engine for a while now using PhysX, and it works decently well, but I'm looking for a better solution. The biggest issue I have is that the character controller doesn't support cylinders because their math is hard to work with, and I've never really liked the way that capsule character controllers gently slide down a 90º drop-off because their bottom is rounded. I also had some issues with collision between the character controller and dynamic objects, and the behavior was inconsistent, sometimes not being able to move the dynamic body and sometimes hitting it like a truck.

I've been playing a lot of Natural Selection 2 lately (great game, go buy it if you haven't), which also uses PhysX. They have some behind-the-scenes videos where they show the PhysX visualizer, and their character controller appears as a cylinder. Here's a link if anyone is interested: https://youtu.be/QjrGE9xZc30?t=48

I played around in the game for a little bit and noticed that some of the other characters in the game use shapes other than cylinders. For instance, one of the characters in the game is almost rhinoceros-like and uses a large, definitely not cylindrical or capsule shape.

I wanted to revisit my implementation of the character controller and try to make it smoother as well as use a cylinder or another shape. I considered trying to cheat by using a rigid body to represent the character controller, with constraints to keep it upright and setting the linear velocity for walking, but I thought that I would ask here before I wasted a bunch of time and effort, to see if there are any better solutions. Does anybody have any suggestions? I would appreciate any ideas.

Thanks
9. ## btKinematicCharacterController Not Behaving Correctly

Wouldn't I still have issues with sampling and still have to write some custom sampling code with spherical coordinates? To me that seems a bit more complicated than just writing a custom cube map sampler.

That would work, but then I wouldn't be able to have directional and spot light shadows in the atlas.

Hey, so right now I have a shadow map atlas where I render all of my shadows, similar to how DOOM does it (according to this article: http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/). This works really well for directional and spot lights, but I'm not sure how I want to proceed concerning point lights. I want to store point light shadows in the atlas, but I'm not sure what the best way to do that is. Is the best way going to be writing an algorithm in the shader that figures out which side to sample from based on the light vector, and then doing the calculation as if I were using a cube map, or is there a better solution to this?

Thanks
12. ## Tangents for Mesh not Calculated Correctly?

I'm trying to implement normal mapping, but I've been having some trouble calculating the tangents for my model. I'm loading in from an .obj exported from Blender, if that makes any difference. For each triangle (not currently using VBO indexing, although I am averaging the tangents properly) I run the following calculations to compute the tangent:

```cpp
glm::vec3 edge1 = _verts[vi.y] - _verts[vi.x];
glm::vec3 edge2 = _verts[vi.z] - _verts[vi.x];
glm::vec2 d_UV1 = _tex_coords[ti.y] - _tex_coords[ti.x];
glm::vec2 d_UV2 = _tex_coords[ti.z] - _tex_coords[ti.x];
float f = 1.0f / (d_UV1.x * d_UV2.y - d_UV1.y * d_UV2.x);
glm::vec3 tangent = glm::normalize((edge1 * d_UV2.y - edge2 * d_UV1.y) * f);
```

Here vi is a vector3 containing the indices of the 3 vertices, and ti is also a vector3 containing the indices of the texture coordinates. I based the code on this: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/. When I try visualizing the tangents in the shader, all I am seeing is black on a cube, whereas the normals show color, and when I make a TBN matrix and use a blank normal map I get some odd results that are not the same as the regular normals, even though in theory they should be. Any ideas?
13. ## Placing Things in a (Kinda) Voxel World

What kind of editor did you make? When I was writing the editor for my engine, I just used the standard three arrows along the XYZ axes and did the math for that. IMO that's the best solution, but I feel like that wouldn't be immersive enough to give to a player.
14. ## Placing Things in a (Kinda) Voxel World

I am using color picking temporarily in my engine for the editor portion, but I'm going to switch to using rays in the future. Color picking is kinda slow, but it does work well. Since I'm using voxels, though, it's kinda not needed, because I can just construct a simple tree and raycast against that, which would probably be faster in the long run. Because color picking needs objects, I think it would have a hard time figuring out where to place something in the air, since there won't be any objects there.

I think that's what I'm going with. I decided to just use the projected ray from the player's camera with a certain length and then change that length with the scroll wheel. It's probably the best and simplest option.
15. ## Placing Things in a (Kinda) Voxel World

Hey, I'm developing a sandbox game and I want everything to snap to a grid, but other than that I'm using full 3D models, not cubes or anything. All the models are different, but I've built them with a grid snapping system in mind to keep building nice and neat. My dilemma is placing the objects. Because I'm not really building a full voxel world like Minecraft or similar games, I haven't figured out a solution for allowing the player to accurately place things. I tried constructing a plane at the player's feet, projecting a ray from the camera, seeing where they intersect, and placing the object there. This was really bad and produced some really odd results, but it did kinda work. I was wondering if anyone else has any ideas as to how I could accomplish this. As a note, there is going to be mostly blank space in the world, so I need to allow the player to place things on nothing, which is why I can't just cast a ray and test for the nearest intersection with another object. It's also a first person game that is going to have verticality, so I need to handle that as well.

Thanks.