Syerjchep

Member
  • Content Count: 107
  • Joined
  • Last visited
  • Community Reputation: 213 Neutral
  • Rank: Member


  1. I mean, there's also an ambient component. It would be odd if, in a daytime scene, the back of a yellow-painted object sitting outside in the sun were totally black.

     Edit: It just occurred to me that maybe I should decrease the strength of the shadow-map shadows with respect to cosTheta.

     Edit 2: That appears to have worked like a charm. It probably doesn't work as well everywhere, but for now this seems like a good solution. Surprisingly, most shading tutorials tell you to adjust the *bias* based on cosTheta, but never the actual darkness of the shadow.
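In scalar form, the cosTheta idea from the edit above might look like the sketch below. This is my own illustration under assumptions (function and parameter names are hypothetical, and the linear fade is just one possible curve):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch: fade the shadow-map contribution as the surface turns away
// from the light. cosTheta = dot(N, L); shadowMapVisibility is 1 when
// the shadow map says the point is lit, 0 when fully shadowed.
// All names here are illustrative, not from the original post.
float shadeWithShadow(float ambient, float diffuseStrength,
                      float cosTheta, float shadowMapVisibility)
{
    float nDotL = std::max(cosTheta, 0.0f);
    // Weight the shadow by cosTheta: grazing surfaces are already dark
    // from the diffuse term, so the shadow map darkens them less.
    float shadowFactor = 1.0f - (1.0f - shadowMapVisibility) * nDotL;
    return ambient + diffuseStrength * nDotL * shadowFactor;
}
```

With this weighting, a fully back-facing point (cosTheta <= 0) ignores the shadow map entirely and just sits at the ambient level, which is what keeps the two darkening effects from stacking.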
  2. So I've got an example here, and you can quite clearly see the ambient, diffuse, and specular components of a basic Phong shading model. You can then see some shadows from nearby trees being cast onto the object, even if they're distorted by the object's curvature. But past the cutoff where the diffuse component is depleted, you can see an even darker band where the object casts a shadow on itself via the shadow map. Because of the sampling bias needed to avoid shadow acne, the shadow-map shadow doesn't line up with the diffuse falloff at all, so it looks weird. What's the typical solution to making Phong shading and shadow maps line up correctly? Ideally I'd just not let objects cast shadow-map shadows on themselves, but that's easier said than done, since I'm not even sure exactly how I'd go about defining "object."
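One common arrangement (a sketch of a general technique, not necessarily the thread's accepted answer): multiply the shadow-map visibility into the diffuse and specular terms only, so any point already past the terminator (nDotL <= 0) stays at the ambient floor and the self-shadow band can never be darker than the shaded side. Names and parameters here are made up for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch: shadow visibility gates only the light-dependent terms.
// ambient, diffuse, specular are precomputed term strengths;
// nDotL is dot(N, L); shadowVisibility is 0 (shadowed) to 1 (lit).
float phongWithShadow(float ambient, float diffuse, float specular,
                      float nDotL, float shadowVisibility)
{
    float lit = std::max(nDotL, 0.0f);
    // The back side (lit == 0) is unaffected by the shadow map, so the
    // terminator and the shadow-map edge can't stack into a darker band.
    return ambient + shadowVisibility * (diffuse * lit + specular);
}
```

Lining the shadow-map edge itself up with the terminator is usually handled separately with a slope-scaled or normal-offset bias, but gating the shadow like this removes the "band darker than the unlit side" symptom.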
  3. So I have a program that renders a few models and some tessellated terrain, and lets the user navigate through it with the WASD keys (R and F for vertical movement) and a mouselook camera. I also render two triangles covering the screen, which I'm using as the basis for some post-processing raymarching. In addition, I render a depth framebuffer texture from the player's camera perspective to get a sense of how far each pixel's ray must be cast. For reference, here's the part of my fragment shader that deals with the raymarching. First, some functions to help set things up:

```glsl
float LinearizeDepth(in vec2 uv)
{
    float zNear = 0.1;   // TODO: Replace by the zNear of your perspective projection
    float zFar  = 250.0; // TODO: Replace by the zFar of your perspective projection
    float depth = texture2D(texture0, uv).x;
    float z_n = 2.0 * depth - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    return z_e;
}

float mapSphere(vec3 eyePos)
{
    return length(eyePos) - 1.0;
}

float doMap(vec3 eyePos)
{
    float total = 999999.0;
    for(int x = 0; x < 5; x++)
    {
        for(int y = 0; y < 5; y++)
        {
            eyePos += vec3(x*10, y*10, 0);
            total = min(total, mapSphere(eyePos));
            eyePos -= vec3(x*10, y*10, 0);
        }
    }
    return total;
}
```

Below is the part of main that deals with the two triangles applied to the screen:

```glsl
if(skyDrawingSky == 1)
{
    // vert is the 2D coords, from -1 to 1, of the two triangles
    // rendered to the screen for the raymarching shader
    vec2 uv = (vert.xy + 1.0) * 0.5;
    // get the depth of this pixel's view from a depth buffer rendered to texture0
    float depth = LinearizeDepth(uv);
    // I thought adding a 3rd dimension and then normalizing would be
    // equivalent to a perspective matrix, at which point I just multiply
    // those 'perspective coordinates' by the inverse of the camera view
    // matrix I use for the polygonal graphics
    vec4 r = normalize(vec4(-vert.x, -vert.y, 1.0, 0.0));
    r = r * camAngleMain;
    vec3 rayDirection = r.xyz;
    vec3 rayOrigin = -camPositionMain;

    float distance = 0.0;
    float total = 0.0;
    for(int i = 0; i < 64; i++)
    {
        vec3 pos = rayOrigin + rayDirection * distance;
        float value = doMap(pos);
        distance += value;
        if(distance >= depth)
            break;
        else
            total += clamp(1.0 - value, 0.0, 1.0);
    }
    color = vec4(1.0, 1.0, 1.0, total);
    return;
}
```

Edit: Changing the raymarching camera code to:

```glsl
vec4 r = normalize(vec4(vert.x, vert.y, 1.0, 0.0));
r = r * camProjectionMain;
r = r * camAngleMain;
vec3 rayDirection = r.xyz;
vec3 rayOrigin = camPositionMain;
```

doesn't really change anything, but it's easier to read.

My main problem is that, for some reason, vertically panning the camera results in vertical distortion of objects as they approach the top and bottom of the screen. In the attached images, the white orbs are part of the raymarching shader; everything else is tessellated/polygonal graphics. In the first two images the orbs can clearly be seen above or below the sand-colored heightmap. In the last image, the orbs' position is off enough that they clip through the terrain. I can't figure out why. My camera seems to *almost* work, but not quite. My aspect ratio at the moment is a perfect square, and I multiply my raymarching screen coordinates (2D, from -1 to 1) by the inverse of my camera view matrix, so I don't really know why this would happen. Is there anything I'm doing wrong with my implementation here?
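One hunch about the distortion (my assumption, not something stated in the thread): normalize(vec4(vert.x, vert.y, 1.0, 0.0)) builds rays as if the frustum were exactly 90 degrees and square, so unless the real projection matches that, ray directions drift away from the rasterized geometry toward the screen edges. A small C++ sketch of building a direction that respects the projection's field of view; the function and its fovY/aspect parameters are illustrative, not from the original code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Build a camera-space ray for NDC coords ndcX/ndcY in -1..1 ("vert"
// in the shader above), matching a perspective projection with vertical
// field of view fovY (radians) and the given aspect ratio. The result
// would still be rotated by the camera's orientation matrix afterwards.
Vec3 cameraRayDirection(float ndcX, float ndcY, float fovY, float aspect)
{
    float halfH = std::tan(fovY * 0.5f); // image-plane half-height at z = 1
    float halfW = halfH * aspect;        // half-width scales with aspect
    Vec3 d = { ndcX * halfW, ndcY * halfH, 1.0f };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```

A related subtlety worth checking: LinearizeDepth returns distance along the view axis, while the marched `distance` accumulates along the ray itself; dividing the linearized depth by the camera-space z component of the ray direction before comparing makes the two consistent, and the mismatch between them also grows toward the screen edges.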
  4. So I have a heightmap for my terrain. The heightmap is a texture, and I have tessellation control and evaluation shaders in GLSL (using OpenGL 4.0 at the moment) that tessellate the terrain into smaller squares. I've tried to write code that looks at differences between neighboring pixels in the heightmap to determine the normal at a given point. The result is the attached image. It more or less works, but there are undesirable artifacts; I've drawn a black circle around one such artifact in the close-up screenshot on the right. Here's my tessellation evaluation shader, which computes vertex positions and normals:

```glsl
#version 400 core

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraMatrix;
uniform vec3 cameraPosition;
uniform vec3 cameraRight;
uniform vec3 cameraUp;
uniform vec3 cameraDirection;

layout(quads, equal_spacing, ccw) in;

in vec3 ES_Position[];
out vec2 UV;
out vec3 FS_Position;
out vec3 lightingNormal;

vec3 getNormal(float cellSize, float totalPixels, vec2 uvCoords)
{
    float heightL = texture(texture0, uvCoords - vec2(1.0/totalPixels, 0)).r * 10.0;
    float heightR = texture(texture0, uvCoords + vec2(1.0/totalPixels, 0)).r * 10.0;
    float heightD = texture(texture0, uvCoords - vec2(0, 1.0/totalPixels)).r * 10.0;
    float heightU = texture(texture0, uvCoords + vec2(0, 1.0/totalPixels)).r * 10.0;
    vec3 tangent   = normalize(vec3(cellSize, heightL - heightR, 0));
    vec3 bitangent = normalize(vec3(0, heightD - heightU, cellSize));
    return normalize(cross(tangent, bitangent)) * vec3(1, -1, 1);
}

void main()
{
    float totalMapSize = 400.0;
    float totalPixels  = 2048.0;
    float cellSize     = totalMapSize / totalPixels;

    vec3 highXPos = mix(ES_Position[3], ES_Position[2], gl_TessCoord.y);
    vec3 lowXPos  = mix(ES_Position[0], ES_Position[1], gl_TessCoord.y);
    FS_Position = mix(lowXPos, highXPos, gl_TessCoord.x);

    vec2 uvCoords = FS_Position.xz / totalMapSize;
    float actualHeight = texture(texture0, uvCoords).r * 10.0;
    FS_Position.y = actualHeight;

    gl_Position = cameraProjectionMatrix * cameraMatrix * vec4(FS_Position, 1.0);
    UV = gl_TessCoord.xy * vec2(0.33, 1);

    vec3 norm = getNormal(cellSize, totalPixels, uvCoords);
    //vec3 normL = getNormal(cellSize, totalPixels, uvCoords - vec2(1.0/totalPixels, 0));
    //vec3 normR = getNormal(cellSize, totalPixels, uvCoords + vec2(1.0/totalPixels, 0));
    //vec3 normD = getNormal(cellSize, totalPixels, uvCoords - vec2(0, 1.0/totalPixels));
    //vec3 normU = getNormal(cellSize, totalPixels, uvCoords + vec2(0, 1.0/totalPixels));
    //vec3 normX = mix(normL, normR, 0.5);
    //vec3 normY = mix(normD, normU, 0.5);
    //vec3 normAround = mix(normX, normY, 0.5);
    //lightingNormal = mix(norm, normAround, 0.5);
    lightingNormal = norm;
}
```

Trying to approximate which pixel a vertex falls on and finding the supposedly neighboring pixels with floating-point math seems hacky. I feel like there's a better way to do this that would eliminate those artifacts and look a lot smoother.
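One likely culprit (an assumption on my part): texture() filters at arbitrary UVs, so the four neighbor lookups land between texel centers and the differences pick up interpolation noise; sampling exact texels (texelFetch in GLSL) avoids that entirely. Below is a minimal C++ sketch of the same central-difference normal computed from exact grid heights; centralDifferenceNormal and its parameters are hypothetical names:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Central-difference normal from the four neighboring heights of a texel.
// The left/right (and down/up) samples are two texels apart, so the
// horizontal run is 2 * cellSize; using plain cellSize (as the shader
// above effectively does) only scales the apparent steepness.
Vec3 centralDifferenceNormal(float hL, float hR, float hD, float hU, float cellSize)
{
    Vec3 n = { hL - hR, 2.0f * cellSize, hD - hU };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

In the shader this corresponds to lookups like texelFetch(texture0, pixel + ivec2(-1, 0), 0).r, with the vertex's integer texel index computed once from uvCoords * totalPixels, so no floating-point approximation of "which pixel am I on" is needed.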
  5. I actually considered doing 4-bit sequences. For now they're just unsigned chars, though. Packing would save RAM, but would it save CPU? RAM isn't the issue at all. As for doing it on the GPU, that'd be great, though I imagine it's a bit above my head. I understand the basics of parallel computing, though.
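For reference, the 4-bit packing mentioned above is straightforward (a sketch; helper names are mine). Whether it saves CPU depends on whether the loops are memory-bound: it halves memory traffic and doubles cache density, but each access costs an extra shift and mask.

```cpp
#include <cassert>
#include <cstdint>

// Values 1..6 fit in a nibble, so two elements pack into one byte.
inline uint8_t packPair(uint8_t lo, uint8_t hi) { return (uint8_t)(lo | (hi << 4)); }
inline uint8_t unpackLo(uint8_t b) { return (uint8_t)(b & 0x0F); } // low nibble
inline uint8_t unpackHi(uint8_t b) { return (uint8_t)(b >> 4); }   // high nibble
```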
  6. I've got thousands of enzymes to compare against thousands of ligands each frame. I guess it just boils down to doing what y'all said: finding ways to preemptively detect a failed match and weed out unnecessary comparisons.
  7. So I've got some very large arrays of numbers (a few thousand elements long). Each element is a number from 1 to 6. I also have a lot of smaller arrays (maybe 20 or 30 elements long) that likewise consist of the numbers 1 to 6. For each of these smaller arrays I have to find the location in the larger array that is similar enough to pass my test, if one exists. The similarity for any one of the smaller arrays is found as follows:

     1. Take the difference between each pair of numbers: one from the smaller array and the corresponding one from the larger, for the whole length of the smaller array.
     2. Sum the differences across all pairs.
     3. Raise the resulting similarity to a high power, because I need the results to follow a curve (e.g. 40/40 has a similarity of 1, but 39/40 is only 0.81 similar, and 35/40 is 0.21 similar).
     4. Advance the offset in the larger array by one and start over at step 1, this time comparing the first element of the smaller array to the second element of the larger array, and so on.

Unfortunately this means that for each smaller array I have to do an exponentiation at every offset of the larger array (e.g. to compare a smaller array against a 1000-element array, I call pow over 1000 times). I also have to compute a number of pairwise differences equal to the length of the smaller array times the length of the larger array, for every smaller array (e.g. I have to check whether a 20-element smaller array matches elements 0-19 of the larger array, then elements 1-20, then 2-21, ... until the end of the larger array). Can anyone suggest some optimizations I can do here? Here's a snippet of code as an example:

```cpp
void activator::calculateFree()
{
    // attachSize is the size of the binding domain of the activator,
    // e.g. the size of the 'smaller array'
    if(attachSize < 1)
        return;

    calculateLigand();
    if(!boundLigand)
    {
        // parent->mainDNA->size is the size of the 'larger array'
        // a is the offset as we work our way through the larger array
        for(unsigned int a = 0; a <= parent->mainDNA->size - attachSize; a++)
        {
            float totalDifference = 0;
            float actualDifference = 0;
            bool inhibited = false;
            for(unsigned int b = 0; b < attachSize; b++)
            {
                totalDifference += 6;
                // parent->mainDNA->code is the larger array
                // attachBinding is the smaller array
                actualDifference += abs(parent->mainDNA->code[a+b] - attachBinding[b]);
                if(parent->mainDNA->inhibitedCode[a+b])
                {
                    inhibited = true;
                    break;
                }
            }
            if(inhibited)
                continue;

            // (totalDifference-actualDifference)/totalDifference is the
            // raw similarity between the two arrays
            geneAffinity = (totalDifference - actualDifference) / totalDifference;

            // try and avoid doing a pow if we don't have to
            if(geneAffinity > 0.61324)
            {
                geneAffinity = pow(geneAffinity, 8);
                // For example, with a smaller array of length 20, if at a
                // particular offset 19 of its elements equal the larger
                // array's elements and the last differs by only 1, that
                // grants a similarity of 119/120: the raw similarity is
                // 0.99166, and raised to the 8th power that's 0.935.

                // check whether the similarity is close enough to act on
                float bindingAffinity = 1 - geneAffinity;
                if(bindingAffinity < 0.98) // i.e. geneAffinity > 0.02
                {
                    unsigned int randMax = 1 + (bindingAffinity * 1000);
                    if(rand() % randMax == 0)
                    {
                        boundDNA = parent->mainDNA;
                        boundPosition = a;
                        for(unsigned int b = 0; b < parent->unboundActivators.size(); b++)
                        {
                            if(parent->unboundActivators[b] == this)
                            {
                                parent->unboundActivators.erase(parent->unboundActivators.begin() + b);
                                break;
                            }
                        }
                        boundDNA->activators.push_back(this);
                        break;
                    }
                }
            }
        }
    }
}
```
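One optimization that follows directly from the code above (a sketch; maxAllowedDifference is my name, not the original's): the similarity is monotonically decreasing in actualDifference, so the threshold test can be inverted once, up front, into a maximum allowed raw difference. The inner loop then needs no division and no pow at all, and can break out early as soon as the running difference exceeds the cap.

```cpp
#include <cassert>
#include <cmath>

// Invert "similarity > cutoff" into a cap on the raw difference.
// similarity = ((total - actual) / total)^8 with total = 6 * attachSize,
// so similarity > cutoff  <=>  actual < total * (1 - cutoff^(1/8)).
// The exponent 8 and the final 0.02 cutoff come from the original
// snippet (its 0.61324 is just 0.02^(1/8) precomputed).
int maxAllowedDifference(unsigned int attachSize, float similarityCutoff)
{
    float totalDifference = 6.0f * (float)attachSize;
    float rawCutoff = std::pow(similarityCutoff, 1.0f / 8.0f);
    return (int)(totalDifference * (1.0f - rawCutoff));
}
```

With attachSize = 20 and cutoff 0.02, the cap works out to 46, so the inner loop can bail as soon as actualDifference passes 46, and pow only runs for the few offsets that survive the cheap integer test.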