
Amadeus

Member

  • Content Count: 64
  • Joined
  • Last visited

Community Reputation: 205 Neutral

About Amadeus

  • Rank: Member

Personal Information

  • Role: Programmer
  • Interests: Programming
  1. Quote (Original post by Halifax2):
     Chapter 25 of GPU Gems 3 might be of interest to you: Rendering Vector Art on the GPU.

     Quote (Original post by raigan):
     Rather than using circles, rectangles, etc. as primitives, you could use a single general/generic primitive, for instance Beziers: Resolution Independent Curve Rendering using Programmable Graphics Hardware. You could then build circles, rectangles, etc. by combining many of these simpler primitives.

     This is an area I have been attempting to look into as well. Have there been any implementations of these methods? I was trying to work out the alpha-blending method mentioned for graphics cards that do not support the ddx/ddy gradient instructions. Although not strictly necessary on more recent cards, I would like to better understand the math involved, and it would be nice to have support for it in those cases. Any help in that area would be greatly appreciated. I haven't compared against using distance fields yet, though. A sketch of the gradient-based curve test is below.
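     For anyone else digging into this, here is the gradient-based coverage test from the Loop/Blinn approach those references describe, written as plain C++ for clarity (in practice it runs in a pixel shader). The (u, v) values are the interpolated curve-space coordinates from the paper; the dudx/dudy/dvdx/dvdy parameters stand in for the ddx/ddy screen-space gradients, which is exactly what the fallback for older cards would have to approximate by other means. Treat this as a sketch, not a drop-in implementation.

        #include <algorithm>
        #include <cmath>

        // Inside/outside plus antialiasing test for a quadratic Bezier rendered
        // with curve-space coordinates (u, v): the curve is the zero set of
        // f(u, v) = u^2 - v, and coverage comes from the approximate signed
        // distance f / |grad f| measured in screen space.
        float curveCoverage(float u, float v,
                            float dudx, float dudy, float dvdx, float dvdy)
        {
            float f  = u * u - v;               // implicit curve function
            float fx = 2.0f * u * dudx - dvdx;  // df/dx via the chain rule
            float fy = 2.0f * u * dudy - dvdy;  // df/dy via the chain rule
            float sd = f / std::sqrt(fx * fx + fy * fy);
            return std::min(std::max(0.5f - sd, 0.0f), 1.0f); // distance -> alpha
        }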
  2. Amadeus

    Exterior Ballistics

     I am currently attempting to handle the various effects present on bullets during flight. I have a fairly basic method in place using some simple ballistic-coefficient-based drag models I found, but I am looking for more concrete information on handling the different characteristics of bullets and the environmental factors that contribute to the way a bullet reacts when fired.

     I currently use a G1 drag function combined with gravity and a simple calculation for relative wind interference, and combine those forces with the initial/current muzzle velocity and trajectory direction over time to compute the change in state as the bullet travels (a sketch of this update loop is below). I would like to find more information on how various ballistics calculators/simulations handle things like wind, altitude, temperature, bullet weight, ballistic coefficient, sighting, and other factors when computing their effects on a bullet's trajectory, in order to improve my current simulation. How is this traditionally handled in more realistic simulations and related situations? Any information, resources, and/or advice you could provide would be greatly appreciated. Thanks in advance.
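     For reference, here is a minimal sketch of the kind of update loop described above, using simple Euler integration. dragG1() is a hypothetical stand-in for a G1 drag-table lookup returning a deceleration for a given speed and ballistic coefficient; the wind handling, units, and integration scheme are all assumptions, not a definitive implementation.

        #include <cmath>

        struct Vec3 { float x, y, z; };
        static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
        static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

        // Hypothetical G1 drag lookup: deceleration (m/s^2) for a given
        // speed (m/s) and ballistic coefficient.
        float dragG1(float speed, float ballisticCoeff);

        void stepBullet(Vec3& pos, Vec3& vel, Vec3 wind, float ballisticCoeff, float dt)
        {
            const Vec3 gravity = {0.0f, -9.81f, 0.0f};
            Vec3 relVel = vel - wind;               // drag acts on the air-relative velocity
            float speed = length(relVel);
            Vec3 dragDir = relVel * (1.0f / speed); // drag opposes air-relative motion
            Vec3 accel = gravity - dragDir * dragG1(speed, ballisticCoeff);
            vel = vel + accel * dt;                 // integrate velocity, then position
            pos = pos + vel * dt;
        }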
  3. Amadeus

    Multiple contacts with V-Clip

     I would also be interested in more information in this area. I am just getting into my own real investigations of collision detection methods and starting to look into both GJK and V-Clip. More information on the detection/determination of multiple contact points would certainly be helpful, as would any clearer information on these techniques and on others that may be worth experimenting with.
  4. Amadeus

    Animation Blending

     Hello. Are you still using the originally posted test code? If you only have a single track it won't matter (or at least you won't see the problem), but to be completely accurate your Blend method would need to know the totalWeight before using the current track's weight in the division; adding it up as you go changes the ratio each time a new track's weight is added (see the sketch after this post).

     Also, in your test code it appears you add two key values at times 0 and 1 (or 1 and 2, depending on the post), yet you advance the time using time++, which increments by 1 each step and skips the in-between float values of time (try testing with keys at times 0 and 2 if you want to keep it like this).

     It's also probably best to ensure the returned keys are within some valid time range regardless of the time passed in, and to disregard animations by some other means: use curr->value if time < curr->time, use next->value if time > next->time; it depends on your needs, I guess.

        bool FindKeys(Keyframe*& left, Keyframe*& right, real time) const
        {
            if (!keys.empty())
            {
                KeyList::const_iterator curr = keys.begin();
                KeyList::const_iterator next = curr;
                for (; next != keys.end(); ++next)
                {
                    if ((*next)->time > time)
                    {
                        break;
                    }
                    curr = next;
                }
                if (next == keys.end()) // ran off the end; clamp to the last key
                {
                    next = curr;
                }
                left  = (*curr);
                right = (*next);
                return true;
            }
            return false;
        }

     Be mindful of any inaccuracies and treat this more as a pseudo-code suggestion; depending on how you choose to handle things you may always want the true current and previous/next keys instead of them being the same key when the time is out of range. Other than that, from the quick glance I did, I don't see a direct solution to your issue. I may need a few more clarifications on how exactly you are handling things.
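     A sketch of the totalWeight point mentioned above, with hypothetical Track and Pose types standing in for whatever your animation classes are: sum every track's weight first, then divide, so the ratio between tracks is not skewed as tracks are accumulated.

        float totalWeight = 0.0f;
        for (const Track& track : tracks)
            totalWeight += track.weight; // known before any blending happens

        Pose result = Pose::Identity();
        for (const Track& track : tracks)
            result = result + track.Sample(time) * (track.weight / totalWeight);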
  5. Glad to hear things went well with the presentation. This looks very interesting. The addition of support for quadruped characters is also of interest. Looking forward to learning more about the techniques used.
  6. As long as you don't mind being restricted to only a 1-LOD difference between patches, I have been able to get away without using skirts in my geomipmapping-based implementation. I use a different triangulation of the regular patches than most seem to promote, preferring a layout similar to the one that results from using triangle fans, versus the more uniform structure (see: Another Post). This helps simplify things, as all my index buffers are static (a sketch of the selection is below). I am still looking for even more ways to improve on this. With more than a 1-level difference I agree it would probably get complicated keeping things matched up, but as of yet I haven't even looked into using skirts. Hope that helps.
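     A sketch of the static-index-buffer selection under the 1-LOD-difference restriction. Patch, IndexBuffer, MAX_LOD, and patchIndices are hypothetical names; the idea is simply 16 precomputed buffers per LOD, one per combination of coarser N/E/S/W neighbours, so nothing is rebuilt at runtime.

        struct IndexBuffer;
        struct Patch { int lod; Patch* north; Patch* east; Patch* south; Patch* west; };

        enum { MAX_LOD = 8 };
        extern IndexBuffer* patchIndices[MAX_LOD][16]; // all variants built once at startup

        IndexBuffer* selectPatchIndices(const Patch& p)
        {
            int mask = 0; // one bit per neighbour that is one level coarser
            if (p.north && p.north->lod < p.lod) mask |= 1;
            if (p.east  && p.east->lod  < p.lod) mask |= 2;
            if (p.south && p.south->lod < p.lod) mask |= 4;
            if (p.west  && p.west->lod  < p.lod) mask |= 8;
            return patchIndices[p.lod][mask];
        }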
  7. If you transform the points into world space, you now have to take into account that the sample points may no longer be relative to the origin. The shader code was taking advantage of the fact that points were relative to the origin, allowing them to simply be normalized to get correct direction vectors. Once the planet is translated this is no longer the case, so you need to subtract the planet center from the sample point and normalize that difference, rather than the sample point itself. This goes for things like the light direction as well: for objects like the sun, the direction should actually run from the light position to the planet center rather than being a static direction vector, as a static vector makes the light appear to come from the same direction no matter where you place the planet. Alternatively, you can make sure everything stays in object space so that all your positions and directions are relative to the planet being centered at the origin (see the sketch below). Hope that helps. [Edited by - Amadeus on July 31, 2008 8:39:15 AM]
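     In code, the fix looks something like the following (sketched with a hypothetical Vec3 type and normalize() helper; the same math applies inside the shader):

        Vec3 viewDirection(Vec3 samplePoint, Vec3 planetCenter)
        {
            return normalize(samplePoint - planetCenter); // not normalize(samplePoint)
        }

        Vec3 lightDirection(Vec3 sunPosition, Vec3 planetCenter)
        {
            return normalize(planetCenter - sunPosition); // from the light toward the planet
        }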
  8. Amadeus

    A bit of history

     Been following for a while, even before this IOTD... the good ol' days of flipcode. Always impressive since the beginning, and a lot of very insightful information shared as well. Highly interested in seeing how it all turns out.
  9. Because the bulk of the calculations are in the vertex shader, this version of the shader can be dependent on the tessellation of the sphere mesh used. Also, if you are still using 10 and 10.25 as your radii values, try other sizes within the same scale ratio (e.g. 6000 * 1.025 = 6150, giving you an atmosphere thickness of 150 units instead of 0.25 units). As long as you keep the ratio the same, I believe all of O'Neil's shaders should still work correctly (it has been a while); this will give you more triangle/pixel coverage for the calculations (see the sketch below). You could also move things into the pixel shader, but that makes for a pretty heavy shader to evaluate per pixel, though cards should be able to handle it fairly well.

     You are correct about the '-ray' in the calculations, at least as far as the atmosphere is concerned. This was a combination of the atmosphere and ground shaders; if you look at the original O'Neil GroundFromSpace shader you will see the same negation. It just so happens the atmosphere looks relatively the same with it in there, though it is probably not entirely correct for the atmosphere. I can't be sure without testing it in all situations, especially from within the atmosphere, as I believe a negation/direction flip is needed in that instance as well.
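     For example, keeping the same 1.025 inner/outer ratio at a larger scale; the derived values follow the comments on the shader uniforms (scale = 1 / (outerRadius - innerRadius)), and the 0.25 scale depth matches the commented-out constant in the shader below:

        float innerRadius = 6000.0f;
        float outerRadius = innerRadius * 1.025f; // = 6150: a 150-unit atmosphere
        float scale       = 1.0f / (outerRadius - innerRadius);
        float scaleDepth  = 0.25f;                // O'Neil's usual scale depth
        float scaleOverScaleDepth = scale / scaleDepth;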
  10. That was with your original source, the project modified to compile in Visual Studio 2005 instead of 2008, and the shader code I posted above. The only thing changed was the position of the camera. The planet is not being rendered in that shot, just the atmosphere. [Edited by - Amadeus on July 21, 2008 10:50:24 AM]
  11. Hmmm. This is what you should have seen with the shader as I had it. I also moved the camera position to (30, 0, 0), but that shouldn't have mattered since you could move around the planet.
  12. For the atmosphere, because the planet is in the way, the back faces are really all you will ever see, especially when inside the atmosphere at lower altitudes. So yes, the artifacts you are seeing would normally be occluded by the planet geometry. It has been a while since I messed with this, and I am just getting back to playing with it as well, but I remember having this and a number of similar issues when I originally tried. Rendering the planet, or accounting for where the intersection with the planet would be, should take care of it. And from what I have seen, at least from my initial view, your problem is "nonexistent"; I would have to delve deeper to be sure.

     The other problem with using the parameters as they are is that the inner and outer radii are 10 and 10.25 respectively. An atmosphere thickness of 0.25 is extremely small, so you get very little of the atmosphere actually visible. Try scaling your inner radius value, and thus your mesh as well, and see if that helps you evaluate whether you are getting correct results.

     For the atmosphere at least, it should not matter which way the normals face, as long as the sphere is centered at the origin; just flip your culling based on which side of the sphere you want visible. The normals are not used directly in the calculation, though implicitly the normalization of the points on the sphere creates vectors that are equivalent to the normal directions.

     Here is a slight modification of your shader taking planet intersections into account, to give you an idea of what it would look like. Normally rendering the planet would do this instead of handling it all in the same shader. I have not tried it from within the atmosphere, but you should be able to figure out if anything needs to be changed. You can also try uncommenting the two lines in the pixel shader and playing with the exposure to see if you get better-looking results. The extra intersection check I added is normally not needed, as the planet and atmosphere are rendered separately with front or back faces visible respectively, which keeps the first length calculation valid for the farthest intersection.

        float4x4 mWorldViewProj;
        float4 invWavelength;
        float4 vEyePos;
        float4 vLightDir;
        float cameraHeight;        // The camera's current height
        float cameraHeight2;       // cameraHeight^2
        float outerRadius;         // The outer (atmosphere) radius
        float outerRadius2;        // outerRadius^2
        float innerRadius;         // The inner (planetary) radius
        //float innerRadius2;      // innerRadius^2
        float krESun;              // Kr * ESun
        float kmESun;              // Km * ESun
        float kr4PI;               // Kr * 4 * PI
        float km4PI;               // Km * 4 * PI
        float scale;               // 1 / (outerRadius - innerRadius)
        float scaleDepth;          // The scale depth (i.e. the altitude at which
                                   // the atmosphere's average density is found)
        float scaleOverScaleDepth; // scale / scaleDepth
        float g;
        float g2;

        struct vpconn
        {
            float4 Position : POSITION;
            float3 t0 : TEXCOORD0;
            //float DepthBlur : TEXCOORD1;
            float3 c0 : COLOR;  // The Rayleigh color
            float3 c1 : COLOR1; // The Mie color
        };

        struct vsconn
        {
            float4 rt0 : COLOR0;
            //float4 rt1 : COLOR1;
        };

        // The scale equation calculated by Vernier's Graphical Analysis
        float expScale(float fCos)
        {
            float x = 1.0 - fCos;
            return scaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        // Calculates the Mie phase function
        float getMiePhase(float fCos, float fCos2, float g, float g2)
        {
            return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
        }

        // Calculates the Rayleigh phase function
        float getRayleighPhase(float fCos2)
        {
            return 0.75 * (1.0 + fCos2);
        }

        // Returns the near intersection point of a line and a sphere
        float getNearIntersection(float3 v3Pos, float3 v3Ray, float fDistance2, float fRadius2)
        {
            float B = 2.0 * dot(v3Pos, v3Ray);
            float C = fDistance2 - fRadius2;
            float fDet = max(0.0, B*B - 4.0 * C);
            return 0.5 * (-B - sqrt(fDet));
        }

        // Returns the far intersection point of a line and a sphere
        float getFarIntersection(float3 v3Pos, float3 v3Ray, float fDistance2, float fRadius2)
        {
            float B = 2.0 * dot(v3Pos, v3Ray);
            float C = fDistance2 - fRadius2;
            float fDet = max(0.0, B*B - 4.0 * C);
            return 0.5 * (-B + sqrt(fDet));
        }

        vpconn AtmosphereFromSpaceVS(float4 vPos : POSITION)
        {
            float3 pos = vPos.xyz;
            float3 ray = pos - vEyePos.xyz;
            pos = normalize(pos);
            float far = length(ray);
            ray /= far;

            // check if this point is obscured by the planet
            float B = 2.0 * dot(vEyePos.xyz, ray);
            float C = cameraHeight2 - (innerRadius*innerRadius);
            float fDet = (B*B - 4.0 * C);
            if (fDet >= 0)
            {
                // compute the intersection if so
                far = 0.5 * (-B - sqrt(fDet));
            }

            float near = getNearIntersection(vEyePos.xyz, ray, cameraHeight2, outerRadius2);
            float3 start = vEyePos.xyz + ray * near;
            far -= near;

            float startAngle = dot(ray, start) / outerRadius;
            float startDepth = exp(scaleOverScaleDepth * (innerRadius - cameraHeight));
            //float startDepth = exp((innerRadius - cameraHeight) / scaleDepth);
            //float startDepth = exp(-(1.0f / 0.25));
            float startOffset = startDepth * expScale(startAngle);

            float sampleLength = far / 1.0f; // ray length / number of samples (1)
            float scaledLength = sampleLength * scale;
            float3 sampleRay = ray * sampleLength;
            float3 samplePoint = start + sampleRay * 0.5f;

            float3 frontColor = float3(0, 0, 0);
            for (int i = 0; i < 1; i++) // single sample
            {
                float height = length(samplePoint);
                float depth = exp(scaleOverScaleDepth * (innerRadius - height));
                float lightAngle = dot(vLightDir.xyz, samplePoint) / height;
                float cameraAngle = dot(-ray, samplePoint) / height;
                float scatter = (startOffset + depth * (expScale(lightAngle) - expScale(cameraAngle)));
                float3 attenuate = exp(-scatter * (invWavelength.xyz * kr4PI + km4PI));
                frontColor += attenuate * (depth * scaledLength);
                samplePoint += sampleRay;
            }

            vpconn OUT;
            //OUT.DepthBlur = ComputeDepthBlur(mul(vPos, viewMatrix));
            OUT.t0 = vEyePos.xyz - vPos.xyz;
            OUT.Position = mul(vPos, mWorldViewProj);
            OUT.c0.xyz = frontColor * (invWavelength.xyz * krESun);
            OUT.c1.xyz = frontColor * kmESun;
            return OUT;
        }

        vsconn AtmosphereFromSpacePS1(vpconn IN)
        {
            float3 color = IN.c0;
            vsconn OUT;
            OUT.rt0.rgb = color.rgb;
            OUT.rt0.a = 1.0f;
            return OUT;
        }

        vsconn AtmosphereFromSpacePS(vpconn IN)
        {
            vsconn OUT;
            float fCos = saturate(dot(vLightDir.xyz, IN.t0) / length(IN.t0));
            float fCos2 = fCos*fCos;
            float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
            float fRayleighPhase = 0.75 * (1.0 + fCos2);
            OUT.rt0.rgb = (fRayleighPhase * IN.c0 + fMiePhase * IN.c1);
            //float exposure = 0.85;
            //OUT.rt0.rgb = 1.0 - exp(-exposure * (fRayleighPhase * IN.c0 + fMiePhase * IN.c1));
            OUT.rt0.a = OUT.rt0.b;
            //OUT.rt1 = ComputeDepthBlur(IN.DepthBlur);
            return OUT;
        }

        technique AtmosphereFromSpace
        {
            pass P0
            {
                AlphaBlendEnable = true;
                DestBlend = ONE;
                SrcBlend = ONE;
                CullMode = CW;
                VertexShader = compile vs_3_0 AtmosphereFromSpaceVS();
                PixelShader = compile ps_3_0 AtmosphereFromSpacePS();
            }
        }

     Like I said, it has been a while since I have fully messed with O'Neil's shaders, so there may still be some values you need to play with to get everything in scale and acting correctly from both sides of the atmosphere. I don't remember needing the saturate call I added in the pixel shader, but I definitely remember getting the artifact it causes if you remove it in my original testing. I can't remember whether or not some angles flip when moving from space into the atmosphere, but I think comparing the original variations of the shaders, and where they differ, should shed some light on that. Hope that helps. I'll have to see if I can dig up my original tests in this area, though I plan on looking into it again fully sometime soon anyway. [Edited by - Amadeus on July 18, 2008 8:36:59 AM]
  13. If I remember correctly, O'Neil renders the back faces of the atmosphere so that the inside of the sphere is showing (I missed this at first as well). Depending on your method, this saves you a sphere intersection check, since the vertices rendered sit at the far side of the intersection; though if you calculate all the intersections and distances correctly, I believe it shouldn't matter. You also need to discard the points that would otherwise have intersected the planet, either by calculating the intersection and adjusting your distance, or by rendering a sphere the size of your inner radius to cover them.

     As the cosine of your start angle approaches -1, the x = 1 - fCos term in expScale approaches 2, which drives the argument of the exp() call to roughly 45.8 (the inner polynomial reaches approximately 22.919, and that gets multiplied by x = 2), and exp(45.8) is extremely large, as you have seen (a worked evaluation is below). The distance covered across the entire atmosphere, where the planet would otherwise be covering it, is large compared to the distances you get from horizon to horizon. Towards the center of the sphere the distance covered approaches the diameter of the sphere; also, as the sample point starts to move through the "center", the angle between the sample and the view/light directions increases, causing the cosine of those angles to reverse. Since the scattering tends towards white at the horizon, and the planet/atmosphere diameter is much larger than the distance to the horizon, combined with the cosine reversal, you get white across the center of the sphere.

     Without handling intersections with the planet, the distances will only be within a suitable range in the "relatively" small area of the atmosphere where the planet does not appear, which is why your "edges" appear to look correct. Otherwise, rendering the planet with the same shader is actually what fills in the rest of what you see, as well as accounting for terrain color, etc. Hope I worded that right, and hope it makes sense. Feel free to correct me, heh. I'll try to take a look and see if I find anything else. [Edited by - Amadeus on July 17, 2008 7:21:40 AM]
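     To make the blow-up concrete, here is the polynomial from expScale evaluated at the worst case (fCos = -1, so x = 2), as a standalone check:

        #include <cmath>
        #include <cstdio>

        int main()
        {
            float x = 2.0f; // x = 1 - fCos with fCos = -1
            float arg = -0.00287f + x * (0.459f + x * (3.83f + x * (-6.80f + x * 5.25f)));
            std::printf("exp argument = %f, exp = %g\n", arg, std::exp(arg)); // ~45.8, ~8e19
            return 0;
        }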
  14. Amadeus

    AABB to sphere

     If you assume the AABB is actually enclosed by a cube the size of your largest half-axis, you can multiply the largest half-axis by the square root of 3 to get a sphere radius that will enclose the cube. This may not be accurate enough for you if your AABBs are far from being ideal cubes, but the sphere will always surround the AABB (a sketch is below). If you have a unit sphere centered at the origin, an AABB that encompasses it will have a corner at (1,1,1), whose distance from the origin is sqrt(1*1 + 1*1 + 1*1) = sqrt(3); since the half-axes are 1, this means a sphere bounding the cube has radius sqrt(3) times any of the half-axes. For a non-cube AABB, taking a suggestion from a collision detection article a while back (which divided the interacting variables by the half-axes to convert things from ellipsoid into unit-sphere space), multiplying your half-axes by sqrt(3) gets you the corresponding bounding ellipsoid instead. Dividing by sqrt(3) gets you the enclosed sphere/cube or ellipsoid/AABB. In 2D this would be sqrt(2) instead. I hope that makes sense. - revel8n [Edited by - Amadeus on June 16, 2008 1:35:20 PM]
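     A sketch of the rule, assuming an AABB stored as per-axis half-extents. (For a tight fit you can instead use the half-diagonal length sqrt(hx^2 + hy^2 + hz^2), which the sqrt(3) * max rule conservatively bounds.)

        #include <algorithm>
        #include <cmath>

        struct Aabb { float hx, hy, hz; }; // half-extents per axis

        // Radius of a sphere guaranteed to enclose the AABB: treat the box as
        // a cube with the largest half-extent, then scale by sqrt(3).
        float enclosingSphereRadius(const Aabb& box)
        {
            float h = std::max(box.hx, std::max(box.hy, box.hz));
            return h * std::sqrt(3.0f);
        }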
  15. Would it be possible to track IOTD topics like the other forum sections? I don't seem to be able to do this currently. Also, are posts supposed to mark themselves as read just by visiting a forum section? Topics get marked as read just by my entering a section, not by actually entering the topic itself. Could there possibly be a way to categorize tracked topics? Thanks in advance.