About CDProp

  1. Lookin good. Just one minor question: this will return the z in view space, no? 
  2. Custom Shaders

    The output of the vertex shader (gl_Position) is not actually the screen coordinate. It's a coordinate in clip space, which is a sort of abstract 4D space that is hard to visualize. It's the coordinate space where OpenGL does all of the polygon clipping. There are actually a couple more transformations that OpenGL does before you get a screen coordinate. The first is often called the w-divide or perspective divide, where the x/y/z clip space coordinates are divided by the w coordinate in each vertex. This transforms the verts from clip space to Normalized Device Coordinates (a 3D space where every vertex now fits within the -1 to 1 range for the x, y, and z axes). Then, OpenGL does a viewport transform to convert the Normalized Device Coordinates to screen coordinates based on what you specified using glViewport (or the three.js equivalent).

    All this happens after your vertex shader returns, but before your fragment shader is called. In your fragment shader, the gl_FragCoord input will contain these screen coordinates. At this point, they've already been decided and you cannot change them.

    So, to shift your entire object over by one pixel, you'll probably need to either shift your viewport over 1 pixel when you call glViewport for that object, or do some crazy math in the vertex shader to do the shift in clip space. Something like:

        gl_Position.x += 2.0/viewportWidth*gl_Position.w;

    Note: I haven't tried this, and even if it works it might be weird. And you'll have to pass in viewportWidth yourself as a uniform.
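    The transformations described above are easy to sanity-check outside a shader. Below is a minimal sketch in plain Python (function names are illustrative, not part of any API) of the perspective divide, the viewport transform, and why adding (2/viewportWidth)*w in clip space moves the result by exactly one pixel:

```python
# Sketch of what happens after the vertex shader returns: the
# perspective divide (clip space -> NDC), then the viewport transform
# (NDC -> screen pixels). Names are illustrative, not OpenGL API.

def clip_to_screen(clip, viewport_w, viewport_h):
    x, y, z, w = clip
    # Perspective divide: clip space -> Normalized Device Coordinates
    ndc_x, ndc_y = x / w, y / w
    # Viewport transform: NDC range [-1, 1] -> pixel coordinates
    sx = (ndc_x * 0.5 + 0.5) * viewport_w
    sy = (ndc_y * 0.5 + 0.5) * viewport_h
    return sx, sy

def shift_one_pixel_x(clip, viewport_w):
    # One pixel is 2/viewport_w in NDC; multiplying by w pre-compensates
    # for the later divide by w, so the shift survives it intact.
    x, y, z, w = clip
    return (x + (2.0 / viewport_w) * w, y, z, w)
```

    For a clip-space point (0, 0, 0, 1) on an 800x600 viewport, clip_to_screen gives pixel (400, 300); after shift_one_pixel_x it lands at x = 401, one pixel to the right.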
  3. Shadow matrix calculation

    I only checked a couple of values, but the ortho matrix in your original post looks correct. The top-left element is going to be 2/width, and so if your width is 950, then small numbers like 0.002105 are expected. Now that you're taking scale into account, these numbers might have changed a bit, but I doubt your ortho matrix is a problem.

    Concerning the up vector, it depends on the LightDir. You'll just want to make sure that your up vector and LightDir are not in the same or opposite directions, as this will result in a singular matrix. Given your choice of coordinate system (where x and z are horizontal axes and y is vertical, and z seems to be north/south), I think that <0,0,1> is a wise choice.

    Do you have a world matrix for these verts, or are they already in world space? How big is shadowRes? And how large are your objects in these units, typically?
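    The 2/width claim is easy to verify. Here is a minimal sketch of an OpenGL-convention orthographic matrix in plain Python (row-major nested lists for readability; the layout is illustrative, not the poster's actual code). With a 950-unit-wide volume, the top-left element comes out to 2/950 ≈ 0.002105, matching the small numbers in the post:

```python
# Orthographic projection matrix, OpenGL convention, row-major lists.
def ortho(left, right, bottom, top, near, far):
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, -2.0 / (far - near), -(far + near) / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

# A 950-unit-wide shadow volume centered on the origin:
m = ortho(-475.0, 475.0, -475.0, 475.0, 0.1, 1000.0)
top_left = m[0][0]  # 2/950, roughly 0.002105
```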
  4. collision

    Does it work well? Can you see how you might be able to utilize that function in your game? Try this: loop over all of your bullets. For each bullet, loop over all bricks. For each brick, use that function you wrote to determine if the bullet is intersecting the brick. If so, make the brick disappear.

    If you have N bullets on the screen and M bricks, then you'll end up calling your function N×M times. On modern computers, this is probably not a problem unless you literally have hundreds of bullets and bricks. However, if needed, you can speed up this algorithm by (for example) only testing your bullet against bricks that are in the same column as the bullet (assuming the bullet moves straight up).

    One common pitfall with this type of collision detection is that the bullet might be moving so fast that it passes right through the brick without intersecting it. For example, on frame 1 the bullet might be fully below the brick, and on frame 2 the bullet might be fully above the brick. The easiest way to fix that is probably to divide your frame up into 5 or so increments, and move the bullet one increment at a time while testing for an intersection at each point. This requires 5 intersection tests per bullet per brick per frame, but it's often worth it. You can also use a different number of increments, e.g. 2 or 3 or 7, depending on your needs.

    Another option is to use a sweeping intersection test, which will tell you exactly when the bullet hit the brick and doesn't require multiple increments. However, that is an advanced topic. That article doesn't have a sweeping test specifically for circles vs. boxes, and so you'll have to come up with one yourself or find one somewhere else, or maybe use sphere vs. plane against each side of the box.
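    The nested loop and the substepping trick can be sketched together. This is a plain Python illustration, not code from the thread; the trivial point-in-rect intersects() stands in for whatever circle-vs-rectangle function the reader has written, just so the sketch runs:

```python
# Brute-force N x M collision with substepping to prevent tunneling.
def intersects(bullet, brick):
    # Placeholder test: is the bullet's center inside the brick?
    bx, by = bullet["pos"]
    x, y, w, h = brick["rect"]
    return x <= bx <= x + w and y <= by <= y + h

def step_bullets(bullets, bricks, substeps=5):
    for bullet in bullets:
        # Move in small increments so a fast bullet can't jump clean
        # over a brick between two frames.
        dx = bullet["vel"][0] / substeps
        dy = bullet["vel"][1] / substeps
        for _ in range(substeps):
            bullet["pos"][0] += dx
            bullet["pos"][1] += dy
            for brick in bricks:
                if brick["alive"] and intersects(bullet, brick):
                    brick["alive"] = False  # make the brick disappear
```

    With substeps=1 a bullet moving 20 units per frame sails straight past a 2-unit-tall brick; with substeps=5 one of the intermediate positions lands inside it and the brick is removed.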
  5. Custom Shaders

    By the way, notice that your shaders use projectionMatrix and modelViewMatrix. These are examples of uniforms. In a full shader, you would see something like this at the top:

        uniform mat4 projectionMatrix;
        uniform mat4 modelViewMatrix;

    Your shaders don't have this, so I suppose that three.js is adding it for you. I'm guessing that the names of these uniforms are chosen by three.js, and so you couldn't use something like "projMatrix". The names have to match what three.js uses when it sets the uniform data. Similarly with the light uniforms: you'll need to find out what names they're using and what their data types are. Other variables like position and normal are not uniforms. They are vertex attributes, because they change once per vertex. Chances are, these names were decided by three.js as well.

    One thing that I noticed is that you're not transforming your normal. You'll want to make sure that you multiply the normal by the inverse-transpose of the modelView matrix. That will ensure that the normal is rotated without being scaled. Put the normal in a vec4, like you do with position, but use 0.0 instead of 1.0 as the w coordinate (so that the normal isn't translated). It looks like three.js calls this inverse-transpose modelView matrix normalMatrix.

    The vNormal variable is declared as a varying because its value will be interpolated across the triangle being rendered. Note: in later versions of GLSL, varying is not used. Instead, vertex shader outputs are declared using the out keyword, and they are matched with fragment shader inputs that are declared using the in keyword.
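    Why the inverse-transpose matters can be shown with a tiny 2D example in plain Python (this is a constructed demo, not anything from the thread). Under a non-uniform scale, transforming the normal with the model matrix itself leaves it no longer perpendicular to the surface; the inverse-transpose fixes that:

```python
# 2D demo: surface along direction (1,-1), normal (1,1), scaled by (2,1).
import math

def normalize(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

scale = (2.0, 1.0)                  # non-uniform scale, diag(2, 1)
tangent = (1.0, -1.0)               # direction along the surface
normal = normalize((1.0, 1.0))      # perpendicular to the surface

# Tangents transform with the matrix itself:
t2 = (tangent[0] * scale[0], tangent[1] * scale[1])

# Wrong: transforming the normal with the same matrix...
wrong = (normal[0] * scale[0], normal[1] * scale[1])
# Right: the inverse-transpose of diag(2,1) is diag(1/2, 1):
right = (normal[0] / scale[0], normal[1] / scale[1])

dot_wrong = t2[0] * wrong[0] + t2[1] * wrong[1]  # nonzero: broken normal
dot_right = t2[0] * right[0] + t2[1] * right[1]  # zero: still perpendicular
```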
  6. Custom Shaders

    I am not familiar with three.js, but the properties of lights are often specified using uniforms. Uniforms are shader inputs, basically. You set them once before drawing, and their value stays the same for every vertex or fragment in that draw call. Hence the name "uniform". Typically, for a directional light you'll need a vec3 uniform for the light direction, and another vec3 uniform for the light color. Since many shaders support multiple directional lights, these uniforms might be declared as arrays, like so:

        uniform vec3 directionalLightColor[MAX_DIRECTIONAL_LIGHTS];
        uniform vec3 directionalLightPosition[MAX_DIRECTIONAL_LIGHTS];

    Chances are, three.js uses something like this. They will use the glUniform functions to set the directional light uniforms before each frame, and then all you have to do is declare the right uniforms in your shader, and your shader can use those uniforms to access the light values. This is just an educated guess, though. You'll need to find out what uniforms three.js is trying to set, and what their names and data types are. Unfortunately, the three.js documentation is completely unclear on this, so you might have to do some digging.

    Concerning alpha blending, it's possible that you have blending disabled. In standard OpenGL, one would enable blending by calling glEnable with GL_BLEND passed in. You'll have to consult the three.js documentation to find out how to do this through their framework. You'll also want to look into glBlendFunc to make sure you're using the right blend function (use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blend factors, respectively). Again, you may need to find the three.js equivalent of this.

    Using alpha blending will not require multiple passes. However, if you get to a point where you're using multiple translucent objects in your scene, you'll need to sort them from back to front (so the distant ones are drawn first). The alpha blending function is not commutative, and so it has to be done in that order. This means it won't double your rendering time, but since you're sorting your draw calls by distance instead of by state, you'll incur some penalty there. At some point, you may want to look into using premultiplied alpha.

    It is not possible to know the values of the adjacent pixels. One big reason for this is that we can't be sure those pixels have even been calculated yet. The pixel shader is executed in parallel for large batches of pixels at a time, and there's no way of knowing which pixels have been drawn and which haven't. Generally speaking, you can never read from the same buffer that you're drawing to.

    As for reading the previous frame, there is no way to do that directly. You can do it indirectly, however, if you draw each frame to an off-screen texture. You would need two such textures (call them texture A and texture B). During the first frame, you render the scene to texture A, then copy that texture to the main back buffer. On the next frame, you bind texture A so that you can sample from it, and you draw the scene to texture B. This allows you to sample from texture A while you draw the scene. Then, you copy texture B to the back buffer. On the third frame, you once again draw to texture A. At this point, texture B contains the previous frame, so you can sample from it. And so on, back and forth. Kind of a pain.

    I'm not sure what grass effect you're after, but your fragment shader is only capable of drawing within the boundaries of the triangle that it is shading. So, it can't make anything seem to "pop out" of that triangle.
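    The non-commutativity of the blend function is easy to demonstrate numerically. This plain Python sketch applies the "over" operator corresponding to GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA; swapping the draw order of two half-transparent quads gives different pixels, which is exactly why translucent objects must be sorted back to front:

```python
# out = src * src_alpha + dst * (1 - src_alpha), i.e. the standard
# "over" blend with GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA factors.
def blend_over(src_rgb, src_alpha, dst_rgb):
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Red drawn first, then blue on top:
a = blend_over(blue, 0.5, blend_over(red, 0.5, background))
# Blue drawn first, then red on top:
b = blend_over(red, 0.5, blend_over(blue, 0.5, background))
# a and b differ, so draw order matters.
```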
  7. My plan for realizing my game.

    This isn't the answer you're looking for, but I think you are getting ahead of yourself. You are attempting to discuss resource constraints and how to deal with them, but you don't have good data on what your game will require. For instance, how did you come up with 3 as your required number of servers (targeting 4-core CPUs?)? Do you really know how much memory this is going to take? Let alone CPU utilization? Most people don't know these things without programming and profiling.

    I would also disagree that distributing your game will be a simple matter of handing js files to the end user. Does the end user know how to set up and configure Node? Are you going to have them install npm and then install and run all three web apps? Are you going to give them a script that will do this for them? If so, how does this differ appreciably (from the end user's perspective) from just giving them an installer? How is multiplayer going to work? Will one user host, and will that user's servers do all the work for all players? If so, have you worked out all of the complications involved with other users hitting resources on the host user's machine? Will this be done across the internet? Are there security issues involved? This does not sound like the path of least resistance to me, but to each his/her own.

    My main concern with your plan is that you don't seem to have the programming chops yet. You will soon. I would definitely recommend that you start with a dead-simple game project that includes a subset of this functionality. Then, work on another game that is slightly more complicated and incorporates a wider subset of this functionality. Gain the experience slowly but surely, and you'll be successful (and you'll have a list of intermediary accomplishments as well).
  8. collision

    Good work. Don't be discouraged by blunt critiques of your code or pseudocode. Absolutely everyone starts off writing bad code, and hardly anyone found this particular problem easy when they first encountered it. It's not my style, but some people don't see a point in sugarcoating it with newbies, particularly if you seem resistant to taking advice.

    If you're finding that previous attempts to help you are still too complicated, then try something simpler. Don't worry about Breakout, or moving balls, or making bricks disappear. Instead, just see if you can write a function that can determine whether a circle and a rectangle are overlapping. A few tips: The circle can be represented by a radius (float) and a center point (2D vector, or separate floats for x and y). The rectangle can be represented by a position (x, y) and size (width, height). Alternatively, you could have four separate floats for the positions of the left, right, top, and bottom edges of the box. Either way, you're dealing with what's called an axis-aligned box, because the box is not tilted or rotated. If you can determine the distance between the circle's center point and the box (see: Arvo's algorithm), then you have the problem solved. You just need to see if this distance is less than the radius of the circle.

    This won't completely solve your collision problems, but it'll be a good first step. I would advise doing this exercise, even if you already have your game collision working. It sounds like you might be doing some unnecessary stuff (like having separate steps for separate brick layers), and there's good reason to believe that you'll start running into trouble once the ball starts bouncing from other directions (e.g., if the player gets the ball in the top area). Much better to start simple, with an algorithm that you understand step by step.
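    The circle-vs-axis-aligned-rectangle test described above can be sketched in a few lines of plain Python (names are illustrative). It follows the clamp-to-box idea usually attributed to Arvo: clamp the circle's center to the rectangle to find the closest point, then compare that distance to the radius:

```python
# Circle vs. axis-aligned rectangle, along the lines of Arvo's algorithm.
def circle_intersects_rect(cx, cy, radius, left, top, width, height):
    # Closest point on the rectangle to the circle's center:
    closest_x = max(left, min(cx, left + width))
    closest_y = max(top, min(cy, top + height))
    dx = cx - closest_x
    dy = cy - closest_y
    # Compare squared distances to avoid a square root.
    return dx * dx + dy * dy <= radius * radius
```

    Note that clamping also handles the case where the center is inside the rectangle: the closest point is then the center itself, the distance is zero, and the test correctly reports an overlap.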
  9. Custom Shaders

    Concerning line drawing: are you wanting to do this specifically with ShaderToy, or in your own program? The reason I ask is that ShaderToy has some limitations that won't apply to your own program. To understand why, you need to understand a little about how the graphics pipeline works. A simplified version is:

    1. You issue a command to draw a series of vertices, using a specific shader program (including both vertex and fragment shaders), specifying some other important details like shader constants and what type of primitive (triangle, quad, etc.) you are drawing.
    2. The GPU invokes your vertex shader once per vertex. Inside your vertex shader, you determine where the vertex should appear on the screen (not exactly, but close enough for this), and you output this on-screen coordinate.
    3. The GPU then decides which screen pixels are within the triangles specified by the verts outputted in step 2, and invokes your fragment shader for each of these pixels.
    4. Your fragment shader determines what color the pixel in question should be.

    ShaderToy has very specific limitations. In particular, you have no control over steps 1 & 2. The framework simply draws a full-screen quad (two triangles, arranged in a rectangle that covers the whole screen). Anything you want to draw using ShaderToy has to be done procedurally in the fragment shader. If you look at all of the various shaders that people have come up with, they all deal with this limitation, even the ones that look like they have thousands of custom polygons. The 3D scenes will often use ray-tracing, or a technique called ray-marching with distance fields or height fields, or something like that.

    With your own program, you have a lot more control. You can specify whatever polygons you want to draw, and you have complete control over steps 1 & 2. The downside is that it's a lot harder to get something started -- unfortunately, there is so much boilerplate involved in setting up the graphics API (OpenGL, in this case) just to draw a single triangle that this option is a huge pain in the butt. You'll want to find a decent OpenGL or DirectX tutorial (this one looks alright, but I haven't checked it out in detail yet), and work through it until you can draw a triangle and you understand how it works. Don't expect to finish in one sitting!

    With that said, how you solve your problem depends on whether you're using ShaderToy or not. If you're using ShaderToy, then you'll want to look at your fragCoord input (this tells you which pixel is currently being drawn) and use math to determine how far away this coordinate is from your line. If the distance is less than half the thickness of your line, then it is within the line and should be shaded with the line color. Otherwise, it should be shaded with the background color. If you want to do something fancy, you can anti-alias the line by performing this operation 4 times (each time slightly offset from the others by 1/2 pixel width/height, which you can determine with the iResolution input) and blending the results.

    If you're using your own program, simply draw your triangle with glPolygonMode set to GL_LINE. You can set the thickness of the line with glLineWidth. Anti-aliasing is a little more advanced, but this article will tell you how to set up your OpenGL context so that it uses MSAA. So, it's much simpler within the context of your own program, but of course setting up your own program is a huge pain in the butt. It's something you'll want to tackle at some point anyway, though. ShaderToy is fun, but doing something non-trivial in ShaderToy is usually done very differently than how it would be done in a typical 3D game or application. If you go very far down the ShaderToy route, you'll be messing with ray-marching and distance fields and stuff, which, believe me, is very fun, but it might not be what you're after.
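    The ShaderToy-style approach boils down to a point-to-segment distance per pixel. Here's that core math in plain Python rather than GLSL (function names are illustrative): project the pixel onto the segment, clamp to the endpoints, and compare the distance to half the line thickness:

```python
# Distance from a point (a pixel) to a line segment from A to B.
import math

def dist_to_segment(px, py, ax, ay, bx, by):
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    denom = abx * abx + aby * aby
    # Parameter of the projection onto the segment, clamped to [0, 1]
    # so points beyond the endpoints measure to the nearest endpoint.
    t = 0.0 if denom == 0.0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def on_line(px, py, ax, ay, bx, by, thickness):
    # A pixel is "inside" the line if it lies within half the
    # thickness of the segment.
    return dist_to_segment(px, py, ax, ay, bx, by) <= thickness * 0.5
```

    In an actual ShaderToy fragment shader, fragCoord.xy would play the role of (px, py) and the result would select between the line color and the background color.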
  10. Whats the added benefit of using hexadecimal?

    No one has mentioned the main reason, which is so that you can make your numbers say things like DEADBEEF
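    Joking aside, hex literals like that really do work, and the practical benefit behind them is that each hex digit maps to exactly four bits, which makes byte-level masking readable. A tiny Python sketch:

```python
# 0xDEADBEEF is a perfectly valid integer literal.
value = 0xDEADBEEF          # 3735928559 in decimal

# Each hex digit is exactly 4 bits, so byte masks are easy to read:
low_byte = value & 0xFF     # the lowest byte, 0xEF (239 in decimal)
```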
  11. Shadow mapping problems

    It looks like you're multiplying the matrices in the opposite order for the lightViewPosition and worldPosition.
  12. Shadow mapping problems

    Did you post your shader code somewhere? It looks like you're rendering the shadow map correctly, so I suspect you've made a mistake when you sample the shadow map and do the depth comparison.
  13. Banding using GGX specular

    Oh! You're right. Even in the Fresnel screenshot, the bands are only 1/255 different. That's great to know, thanks! Thought I was going crazy for a sec.
  14. Greetings, I'm implementing specular using a GGX distribution and I'm noticing some unpleasant banding in the dark colors. Here is my HLSL code for the GGX distribution function:

        float GGX(float NdotH, float a)
        {
            float asq = a*a;
            float denom = NdotH * NdotH * (asq - 1.0) + 1.0;
            return asq / (PI * denom * denom);
        }

    I shaded a cube using only the GGX distribution function, with roughness = 0.25, and the banding is clearly visible in the screenshot. From what I can tell, the GGX function is totally continuous with respect to NdotH, with no stepping. Granted, it's pretty sharp, and so everything below NdotH = 0.8 is basically black. Not a lot of shades in 8-bit color down there, but so what? The only way I can imagine that it matters is if I'm doing gamma correction wrong. Here's my MSAA resolve shader to show that I'm doing the gamma correction:

        float3 pixelShader(PSInput input) : SV_TARGET
        {
            int3 texCoord = int3((int2) input.texCoord, 0);
            float3 color = 0.0;
            [unroll]
            for (int i = 0; i < 4; ++i)
            {
                float3 c = shaderTexture.Load(texCoord, i).rgb;
                color += c;
            }
            color *= 0.25;
            return pow(color, 1.0 / 2.2);
        }

    What could be the problem? Am I using the wrong gamma function for my monitor, perhaps? Is there a better way to do gamma?

    Edit: By the way, I'm seeing plenty of banding in my Fresnel, too, and it's not even that dark. HLSL for the Fresnel:

        float R0 = 0.05;
        float RF = R0 + (1.0 - R0)*pow(1.0 - NdotV, 5.0);

    Starting to think I'm just doing something really wrong with my gamma...
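    The follow-up in post 13 (the bands are only 1/255 apart) can be checked numerically. Below is the same GGX distribution in plain Python, plus a to_8bit helper (my name, not from the thread) that gamma-encodes and quantizes the way the resolve shader effectively does; the function itself is continuous, so any visible bands come from the 8-bit quantization:

```python
# GGX normal distribution function, transliterated from the HLSL above.
import math

def ggx(n_dot_h, a):
    asq = a * a
    denom = n_dot_h * n_dot_h * (asq - 1.0) + 1.0
    return asq / (math.pi * denom * denom)

def to_8bit(linear):
    # Gamma-encode (pow 1/2.2), then quantize to a byte, mimicking what
    # the resolve shader plus an 8-bit back buffer do to the value.
    encoded = min(1.0, max(0.0, linear)) ** (1.0 / 2.2)
    return round(encoded * 255.0)

# With roughness a = 0.25, the peak at NdotH = 1 is 16/pi, about 5.09;
# the function falls off smoothly with no steps of its own.
peak = ggx(1.0, 0.25)
```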
  15. I want to be helpful, but it's difficult for me to reason about how the architecture works from a high-level graph like that. For instance, what information do your scenes contain? What constitutes a scene? Is your front menu a scene? How will the scenes be loaded/unloaded? Will you support dynamic, asynchronous loading of level fragments as the player plays through the level? Is each one of those fragments a scene? If not, how do you keep track of which assets are actually needed and which aren't? Some sort of usage counting? These are the sorts of questions that generally give rise to an overall architecture. However, these questions are usually not answerable from a high-level graph of class names alone.

    I tend to have more success building up levels of abstraction from the bottom up. I know how to draw a scene using low-level API calls, but that is tedious and painful. What tools and abstractions would be helpful here? Identify the biggest pain points and fix them first. For instance, there's so much that goes into just loading and initializing a texture with all of the right parameters (size, dimension, mip levels, etc.). You may even need a small constellation/hierarchy of classes to make this less painful while retaining the flexibility you need. And how much flexibility will you need, by the way? It's important to keep your design goals in mind. Are there any extreme optimizations you have in mind in order to reduce texture state changes? Atlasing? Placing multiple same-sized textures into one array? Sparse virtual texturing? You'll want to make sure that your low-level design doesn't preclude these options when you get to a higher level.

    I don't know, maybe that's just me. I just find it hard to *start* with the highest-level abstractions. It seems like, once you start to get to the granularity of concrete, single-purpose objects, you may find that the boundaries you drew at the higher levels aren't going to work so well. It looks like you've designed some engines before, though, so maybe you just have some domain knowledge (even if that knowledge is just your usual way of doing things) that I don't have.