OpaqueEncounter

Member
  • Content Count

    27
  • Joined

  • Last visited

Community Reputation

156 Neutral

About OpaqueEncounter

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. Thanks for the replies. @L. Spiro, to address your point of "...this is not an issue. There is nothing here to be fixed."... correct, there is nothing wrong here. I just need a specific object to be rendered in this manner due to the nature of the game I am working on. With that said, I think the only two solutions I can consider are the first option @Zorinthrox suggested, or capturing the object into a render target.
  2. To explain the title, consider these two captures. The first is a sphere centered at (0, 0, 0), which is also the target of the Matrix.CreateLookAt call (target = (0, 0, 0)). In the second, the same sphere was moved to the side (Vector3.Right * some factor). The sphere now looks somewhat skewed and oval; this is a property of perspective projection. I want to render the sphere off to the side as in the second capture, but with the same head-on perspective as in the first. One thing I tried is changing the field of view to a smaller, near-orthographic value. That solves the problem but creates others, since every other object is now subject to what is effectively a near-orthographic view. Another option is to capture the sphere into a render target and draw it separately. The problem with this is that the sphere will no longer be in 3D space, and unnecessary GPU resources will be committed just for this, impacting performance. Any other ideas on how I can achieve this?
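     For concreteness, here is a minimal sketch of the render-target option mentioned above, assuming XNA 4.0-style APIs. DrawSphere, cameraPosition, projection, and offCenterScreenPosition are hypothetical placeholders for the real scene objects, not code from the project:

          // Sketch only: render the sphere head-on into its own target, then
          // composite that capture at the off-center screen position.
          RenderTarget2D sphereTarget = new RenderTarget2D(
              GraphicsDevice, 256, 256, false,
              SurfaceFormat.Color, DepthFormat.Depth24);

          GraphicsDevice.SetRenderTarget(sphereTarget);
          GraphicsDevice.Clear(Color.Transparent);

          // Look straight at the sphere so it keeps the centered perspective.
          Matrix sphereView = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
          DrawSphere(sphereView, projection); // hypothetical helper: draws the sphere at the origin

          GraphicsDevice.SetRenderTarget(null); // back to the backbuffer
          spriteBatch.Begin();
          spriteBatch.Draw(sphereTarget, offCenterScreenPosition, Color.White);
          spriteBatch.End();

     This keeps the head-on perspective, but, as noted, the sphere becomes a 2D capture rather than a true scene object.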
  3. OpaqueEncounter

    Separating components of a free form rotation

    I ultimately solved it as described in my previous post, but I did try your method at some point as well. The problem with it is that it is also susceptible to the effect I described in my original post: "if you're rotating along the X axis in the positive direction and then suddenly start rotating in the negative direction, at some point during the tweening effect there will be some rotation along the Y axis, as the rotation 'falls over' in the opposite direction."
  4. OpaqueEncounter

    Separating components of a free form rotation

    Poor final editing on my part, my bad; I re-edited that sentence to make it clearer. Thanks for this, I learned a few new things from your decomposition suggestions. As for the original issue, I had to rethink the whole thing when you pointed out that you were not sure it actually helps. And it doesn't: the change of axes in rotation always causes "jitters" when the components I provided above are decomposed. I ultimately solved this by interpolating the actual screen delta values and then recalculating a new Quaternion every frame, roughly as in the sketch below.
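    A minimal sketch of that fix, assuming the same XNA types as the snippet in my original post; smoothedDelta, rawDelta, and the smoothing factor are illustrative names rather than the exact code:

         // Sketch only: tween in screen space, then rebuild the rotation each frame.
         smoothedDelta = Vector2.Lerp(smoothedDelta, rawDelta, smoothing);

         Vector3 screenVector = new Vector3(smoothedDelta.X, -smoothedDelta.Y, 0.0f);
         Vector3 rotationVector = Vector3.Cross(Vector3.UnitZ, screenVector);
         float angle = rotationVector.Length();
         if (angle > 1e-6f) // skip normalizing a zero vector when there is no movement
         {
             rotation = Quaternion.Concatenate(rotation,
                 Quaternion.CreateFromAxisAngle(Vector3.Normalize(rotationVector), angle));
         }
         world = Matrix.CreateFromQuaternion(rotation);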
  5. OpaqueEncounter

    Separating components of a free form rotation

    I am halfway through with your solution. You just have a typo where you're using the dot product to split the axes; it should actually be vec3 splitRV0 = splitDirection0.Cross(rotationVector);. Other than that, I verified that the decomposition and recombination of these vectors in your solution is correct. The only problem is the actual manipulation of the splitRV0 and splitRV1 vectors: no matter what I do with them (tween, multiply, add a unit axis to them), the result always changes the rotation from free form to some kind of constrained rotation. The question I have now is how to manipulate splitRV0 and splitRV1.
  6. I have a "free form" rotation method that looks like this:

         Vector3 screenVector = new Vector3(delta.X, -delta.Y, 0.0f);
         Vector3 rotationVector = Vector3.Cross(Vector3.UnitZ, screenVector);
         float angle = rotationVector.Length();
         rotation = Quaternion.Concatenate(rotation,
             Quaternion.CreateFromAxisAngle(Vector3.Normalize(rotationVector), angle));
         world = Matrix.CreateFromQuaternion(rotation);

     The values of delta.X/delta.Y are displacement values in screen coordinates. This allows me to freely rotate an object in 3D using touch/mouse movements, no matter how it is oriented.

     I wish to smooth this out by adding a tweener to the rotation. This seems straightforward: split up the X and Y components of the rotation vector (the cross product above), separate the angle value into two values X and Y, tween the angles, and then create a composite Quaternion. However, this doesn't work in practice. When concatenating the two Quaternions, there is no movement along some arbitrary axis (depending on how much the object has been rotated). Furthermore, the object no longer rotates freely and instead starts to rotate in a manner similar to a third/first-person camera (once the elevation goes past 90 degrees, the orientation is "flipped").

     Instead of concatenating the Quaternions, I tried Matrix.CreateFromQuaternion for each one and then multiplied the matrices, but this yields the exact same effect as described above. Going to the extremes, I tried to tween the rotation vector itself instead of manipulating the angles. This doesn't work as expected either: whenever the direction changes, the normalization of the rotation vector causes it to rotate on the opposite axis to reverse direction. This means that if you're rotating along the X axis in the positive direction and then suddenly start rotating in the negative direction, at some point during the tweening effect there will be some rotation along the Y axis, as the rotation "falls over" in the opposite direction.

     Any recommendations on making this work correctly? Tweening the angle values does feel like the right way to go about it, but how do I construct the correct final rotation matrix if I separate them out?
  7. Thanks for clarifying. This was the gap in my understanding.
  8. I have a very simple vertex/pixel shader for rendering a bunch of instances with a very simple lighting model. When testing, I noticed that the instances were becoming dimmer as the world transform's scaling increased. I determined that this was because the value of float3 normal = mul(input.Normal, WorldInverseTranspose); was shrinking with the increased scaling of the world transform, although its direction appeared to be correct. To address this, I had to add normal = normalize(normal);. I do not, for the life of me, understand why. The WorldInverseTranspose contains all of the components of the world transform (SetValueTranspose(Matrix.Invert(world * modelTransforms[mesh.ParentBone.Index]))) and the calculation appears to be correct as is. Why does the value require normalization?

         float4 CalculatePositionInWorldViewProjection(float4 position, matrix world, matrix view, matrix projection)
         {
             float4 worldPosition = mul(position, world);
             float4 viewPosition = mul(worldPosition, view);
             return mul(viewPosition, projection);
         }

         VertexShaderOutput VS(VertexShaderInput input)
         {
             VertexShaderOutput output;

             matrix instanceWorldTransform = mul(World, transpose(input.InstanceTransform));
             output.Position = CalculatePositionInWorldViewProjection(input.Position, instanceWorldTransform, View, Projection);

             float3 normal = mul(input.Normal, WorldInverseTranspose);
             normal = normalize(normal);
             float lightIntensity = -dot(normal, DiffuseLightDirection);
             output.Color = float4(saturate(DiffuseColor * DiffuseIntensity).xyz * lightIntensity, 1.0f);

             output.TextureCoordinate = SpriteSheetBoundsToTextureCoordinate(input.TextureCoordinate, input.SpriteSheetBounds);
             return output;
         }

         float4 PS(VertexShaderOutput input) : SV_Target
         {
             return Texture.Sample(Sampler, input.TextureCoordinate) * input.Color;
         }
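     EDIT: this was cleared up for me (see the reply above). For anyone finding this later, a short derivation of the gap in my understanding, assuming the world transform's upper-left 3x3 is a uniform scale times a rotation, $W = sR$:

         $(W^{-1})^{T} = \left(\tfrac{1}{s} R^{-1}\right)^{T} = \tfrac{1}{s} R$, since $R^{-1} = R^{T}$,

     so the transformed normal is $n' = \tfrac{1}{s}\, n R$ with $\lVert n' \rVert = \lVert n \rVert / s$. The inverse-transpose keeps normals perpendicular to the surface but not unit length, and since the diffuse term is a dot product with that normal, brightness falls off as the scale grows. Hence the normalize.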
  9. So that's essentially an animated spritesheet on a quad? That seems plausible, but also potentially expensive to render; some of those animations are pretty detailed and long. In any case, I am going to give this a shot.
  10. The question is simple: how (what technique or combination of techniques) are the elongated beams of light (pointed out by red arrows) in the screenshot from DOTA 2 achieved? The fire light (green arrow) is obvious, that's just a bunch of particles, but what about those dynamic beams of light (they twist and turn)?
  11. OpaqueEncounter

    SpriteBatch billboards in a 3D slow on mobile device

      EDIT: Well, I actually did try running GenerateMipMaps every frame and the framerate did go up, so that's that. :)
  12. OpaqueEncounter

    SpriteBatch billboards in a 3D slow on mobile device

      I actually generate the texture as I described above (render models into a render target). I'll play around with a lower-quality pixel format, but I guess if there are no other suggestions then I'm stuck with it. The only thing I don't understand is why reducing the render target size helps if this is a fillrate issue. Or is fillrate a bit broader than I assume it to be? (Does sampling a larger image contribute as well?)
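      For reference, a minimal sketch of the two knobs discussed here (smaller target, cheaper pixel format), assuming XNA 4.0's RenderTarget2D constructor; the sizes and format below are illustrative guesses, not measured recommendations:

           // Sketch only: half-size target with 16-bit color, and mipmaps so
           // that small on-screen billboards sample smaller mip levels.
           RenderTarget2D billboardTarget = new RenderTarget2D(
               GraphicsDevice,
               128, 128,             // half the original dimensions
               true,                 // mipMap
               SurfaceFormat.Bgr565, // 16 bits per pixel instead of Color's 32
               DepthFormat.Depth24);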
  13. OpaqueEncounter

    SpriteBatch billboards in a 3D slow on mobile device

      Not vertex processing for sure, since the aforementioned method does all of that on the CPU; that much I managed to measure, and it is not a bottleneck. And yes, reducing what is drawn on screen increases the framerate. The shader used in that method is BasicEffect, in which I disabled absolutely everything (even vertex color), and I am already running at the lowest feasible resolution. To render fewer pixels, I also tried replacing BasicEffect with AlphaTestEffect. It seems that if this is a fillrate issue, the only thing really left is to skip drawing some of those billboards. Luckily, quite a few of them happen to be blocked most of the time, but I am not really sure where to start if this is the solution: frustum culling is not really the answer here, and occlusion queries are unavailable on GPUs like the Adreno 225 and below, which I plan on targeting. Any suggestions?
  14. I used this method http://blogs.msdn.com/b/shawnhar/archive/2011/01/12/spritebatch-billboards-in-a-3d-world.aspx to create a 3D billboard renderer using SpriteBatch. It works perfectly as described, and on a modest desktop (with Intel HD graphics) it can render tens of thousands of billboards or particles easily. On a mobile device (Windows Phone), however, the framerate drops sharply past a certain, not-so-large point. My test (on all devices) is this:

      - Render a primitive (sphere, cube, etc.) into a render target.
      - Pass the render target to the method above.
      - Increase the number of billboards until the framerate drops.

      On an x86 desktop or an ARM tablet (Surface), the framerate holds into the thousands. On the phone, it instantly drops from 60 to 30 (disabling VSync appears to have no effect on that device) as soon as you pass a certain point (~200?). The funny thing is that I can get the framerate back up to 60 by making the billboards half the size; the same goes for making the render target half the size. Using a stopwatch, I determined that the time spent on the CPU is nowhere near the 16.67 ms threshold, and VS2013's frame analysis is unavailable on Windows Phone, so that's useless. Can anyone explain what is going on here? Is this simply the limitation of a low-power GPU (the Adreno 225 in this case)? If so, what exactly is bogging it down? The fill rate? The blending? (I tried all blend states from Opaque to NonPremultiplied, with no effect on performance.)
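      For context, a minimal sketch of the SpriteBatch-in-3D technique from the linked article, as I understand it, assuming XNA 4.0 APIs; billboardWorld, view, projection, and renderTargetTexture stand in for the real objects, and the exact state choices are illustrative:

           // Sketch only: route SpriteBatch through a BasicEffect whose matrices
           // place the quads in the 3D scene instead of in screen space.
           basicEffect.World = billboardWorld;
           basicEffect.View = view;
           basicEffect.Projection = projection;
           basicEffect.TextureEnabled = true;

           spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null,
                             DepthStencilState.DepthRead, RasterizerState.CullNone,
                             basicEffect);
           spriteBatch.Draw(renderTargetTexture, Vector2.Zero, Color.White); // position is in the effect's world space
           spriteBatch.End();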
  15. OpaqueEncounter

    HLSL Shader Library

      I guess I should have clarified that I was talking specifically about visual techniques and post processing. The Nvidia SDK samples appear to be what I was looking for.