About remigius

  • Community Reputation: 1172 Excellent
  1. remigius

    Skydome vs skyquad?

    Gotcha, so typically the skyquad works with the cubemap lookup I mentioned; I was just wrong in thinking it was fancy.
  2. I was wondering if there's a standard set of images that's used in studies concerning perception or cognition. I've been tinkering with an alternative on-screen display method for images (it's not my idea originally, but here's some more info). The perceived quality of the technique seems to be quite subjective, so I'd like to create a batch of these images and find out what other people think. I could pull a bunch of pictures off Google, but if there's a standard set of images available, tailored to these kinds of tests without any biases, any results I get might actually be useful.
  3. remigius

    Skydome vs skyquad?

    I'm curious how this sky quad would work. If you want to be able to look around, you'll somehow need to get a correct view of the sky onto the quad. To my mind you'd either be doing some fancy perspective-based lookup into a cubemap (like this) or you'd prerender some dome/box onto the quad texture. The former would represent the biggest saving in geometry at the cost of a more complex shader. It might be worthwhile to reduce your vertex count by prerendering if you have a fixed viewpoint or when looking out through windows, but it doesn't seem worth the hassle for dynamic outdoor scenes where the quad texture would need to update a lot.
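To sketch the "fancy lookup" I mean, here's a toy Python model (the helpers and conventions, +z forward among them, are my own assumptions, not any real graphics API): given a pixel on a full-screen quad, reconstruct its world-space view direction and pick the cubemap face that direction would sample.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def view_ray(ndc_x, ndc_y, fov_y, aspect, cam_right, cam_up, cam_forward):
    """World-space view direction for a quad pixel at NDC coords (-1..1).

    Translation is ignored on purpose: the sky is 'infinitely' far away,
    so only the camera's rotation matters.
    """
    half_h = math.tan(fov_y / 2.0)
    half_w = half_h * aspect
    return normalize(tuple(
        ndc_x * half_w * r + ndc_y * half_h * u + f
        for r, u, f in zip(cam_right, cam_up, cam_forward)))

def cubemap_face(d):
    """Which cubemap face a lookup with direction d would hit."""
    ax = [abs(c) for c in d]
    axis = ax.index(max(ax))
    return ('+' if d[axis] > 0 else '-') + 'xyz'[axis]
```

For example, the center pixel of a camera looking down +z ends up sampling the +z face; a real pixel shader would do the same reconstruction per pixel and feed the direction to the cubemap sampler.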
  4. remigius

    Singularities on a sphere

    Thank you for your reply, good to know my hunch made sense. Maybe I'm reading too much into this metaphor, but it just seems rather strange that the author would pick this particular example to prove a point about these singularities. Would it make any difference if complex numbers are involved?
  5. remigius

    Singularities on a sphere

    This has absolutely nothing to do with game development, but I hope someone here feels like answering this question for me. I'm reading a book where some process is pictured as the propagation of a latitude parallel along a sphere. The parallel will start at the north pole with 0 radius, expand to some maximum at the equator and contract back to 0 radius at the south pole. The main point being made with this metaphor is that there are no singularities at the north and south poles. I find this a bit confusing, since the first thing that occurred to me when I saw this picture was the Hairy Ball Theorem. By definition it'd follow that there are no continuous tangents at the north and south poles in this particular picture. Doesn't this mean that the north and south poles are in fact singularities, since they're not differentiable?
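Thinking about it a bit more (my own stab at it, assuming a sphere of radius R and parameterizing the process by arc length s measured from the north pole), the parallel's radius is

```latex
r(s) = R\,\sin\!\left(\frac{s}{R}\right), \qquad
r'(s) = \cos\!\left(\frac{s}{R}\right), \qquad
r'(0) = 1, \quad r'(\pi R) = -1
```

so the radius varies perfectly smoothly through both poles. If anything is singular there, it looks like it's the latitude/longitude chart (which degenerates at the poles), not the sphere or the process itself; and the Hairy Ball Theorem obstructs a continuous nonvanishing tangent vector field on the whole sphere, which is a different statement from differentiability at two points. I may well be off here, though.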
  6. remigius

    [XNA] 2D Flashlight Vision

    Ah yeah, for WP7 custom shaders won't work. I'm a bit out of the loop, but I think the phone just doesn't allow custom shaders at all (for now). I can't seem to think of anything much simpler though. Whether you're casting shadows or light, the basic problem is the same. If you want the light to be occluded, the only techniques I can come up with are 2D versions of shadowmapping or shadow volumes. I did find this tutorial and this engine that might both offer good ways to create 2D shadows.
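A bare-bones sketch of the 2D shadow map idea in Python (all helpers here are my own toy code, not taken from either link): cast one ray per angle bin around the light, store the nearest occluder distance, and light a point only if it's closer than the stored distance in its bin.

```python
import math

def raycast(origin, angle, segments):
    """Distance from origin to the nearest segment along `angle`, or inf."""
    d = (math.cos(angle), math.sin(angle))
    best = math.inf
    for p, q in segments:
        s = (q[0] - p[0], q[1] - p[1])
        denom = d[0] * s[1] - d[1] * s[0]
        if abs(denom) < 1e-12:
            continue  # ray parallel to segment
        po = (p[0] - origin[0], p[1] - origin[1])
        t = (po[0] * s[1] - po[1] * s[0]) / denom  # distance along ray
        u = (po[0] * d[1] - po[1] * d[0]) / denom  # position along segment
        if t >= 0 and 0 <= u <= 1:
            best = min(best, t)
    return best

def build_shadow_map(light, segments, rays=360):
    """1D 'shadow map': nearest occluder distance per angle bin."""
    return [raycast(light, i * 2 * math.pi / rays, segments)
            for i in range(rays)]

def is_lit(point, light, shadow_map):
    dx, dy = point[0] - light[0], point[1] - light[1]
    dist = math.hypot(dx, dy)
    rays = len(shadow_map)
    i = int((math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi) * rays) % rays
    return dist <= shadow_map[i] + 1e-6
```

This is the 2D analogue of regular shadow mapping: the 1D array plays the role of the depth texture, and the per-point test is the depth compare. On hardware without custom shaders you'd do the equivalent with geometry (shadow volume fans), but the occlusion logic is the same.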
  7. remigius

    [XNA] 2D Flashlight Vision

    Does this tutorial help?
  8. remigius

    From Managed DirectX to SlimDX

    Ow, my bad, I had forgotten about the VC9 runtimes. Thank you for the correction and sorry about the misinformation then.
  9. remigius

    From Managed DirectX to SlimDX

    The advantages would be:

    - It's not deprecated, but still supported and (IIRC) open source, so you're not locked into a library MS killed off
    - It supports multiple D3D flavors up to and including D3D11, which may be beneficial for tool code as well (ComputeShader et al)
    - It's a slim wrapper, meaning it doesn't dictate how you structure your program, so in this regard it's similar to MDX

    Basically it's what MDX would have been if it hadn't been killed off and remade to fit the XBox platform (and had been designed with a clearer vision up front). I believe you can just stick its DLL in your program directory and work with it, so it doesn't have any special runtimes that need to be installed separately.
  10. Shame my suggestion wasn't useful; it seemed logical enough. I guess that proves I need to read up on homogeneous coordinates. In good old uniform shadow mapping you need to set up point sampling, since linear interpolation on the shadow map depth value doesn't make sense and can give incorrect results. [s]I'm afraid I don't have any good suggestions as to the cause of the jiggling.[/s] It looks like the jiggling happens only when you 'widen' the light cone, right? If so, it may be down to simple inaccuracy in point sampling the depth map. I think most unfiltered shadow mapping would suffer from this.
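To illustrate why filtering raw shadow map depths goes wrong, here's a toy Python example (the depth values are made up by me):

```python
# Two adjacent shadow map texels straddle an occluder's silhouette edge:
near_occluder = 0.3   # texel A: an object close to the light
far_wall = 0.9        # texel B: the background behind it

# Linear interpolation halfway between the texels invents a depth that
# belongs to no actual surface:
interpolated = 0.5 * (near_occluder + far_wall)   # ~0.6

receiver_depth = 0.9  # fragment on the far wall, right at the edge
bias = 0.01

# Point sampling texel B gives the right answer; the blended depth makes
# the wall look occluded by a phantom surface at depth ~0.6.
in_shadow_point = receiver_depth > far_wall + bias        # False: lit
in_shadow_interp = receiver_depth > interpolated + bias   # True: wrongly dark
```

That's why you point sample the depth and, if you want smooth edges, filter the comparison results instead (PCF) rather than the depths themselves.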
  11. Fancy terminology aside, it basically means that your graphics card is the limiting factor in your game's performance (so you're GPU bound), either being very busy with the computations in the shader (that'd be ALU bound) or busy pulling in (or waiting on) texture information. To paint a complete picture, I think the latter case could be either a bandwidth problem (pulling in too much texture data) or a latency problem (either sampling many different textures or random non-sequential access to the texture, both typically killing the cache). [quote]How do I identify if a shader is either ALU or texture-fetch bound? And how do I improve shader performance based on that?[/quote] I don't know of any elegant way of determining this, so I'd try setting only 1x1 textures on the shader. These should solve both potential texture-fetch problems, so if the performance is still bad you'd be ALU bound. This would mean you need to optimize (or simplify) the shader code so it runs more smoothly (or up your minimum requirements =). If the performance improves, you probably have a texture-fetch problem on your hands. I don't have any good generic tips on this, but you'd obviously want to make sure that you're using mipmaps and that your texture sizes aren't too extreme. If you happen to be sampling many different textures (and are probably blending them together), it might be worthwhile to check if you can combine them in a pre-processing step.
  12. I gather you mean Special as in 'A Royal Pain In The Behind'? To this day I still look back in horror at coaxing and coercing the fixed function pipeline. Sorry about this utterly useless post, but I just had to respond to Nik while I'm here. Nice to see you're still around =)
  13. remigius

    PIX question

    It's been a while since I used PIX, but from back then I seem to recall that the texture/rendering/geometry stuff wasn't exactly hassle-free and that its main advantage was keeping track of all D3D calls and profiling these. From what I've read since though, I got the impression that it has been much improved and that the texture/rendering/geometry stuff should work much better now. It may be a silly question, but are you using the most recent version? Anyway, I'm afraid I can't answer your questions from any recent experience, but to the last one I'd say it doesn't matter: PIX just keeps track of the D3D calls and doesn't care about your application. To actually get some application-specific info in there, you could use the D3DPERF utility functions (see here). This may also be helpful for your second question: when you create a texture, you can set a marker (using D3DPERF_SetMarker()) with the texture's name. PIX will show this marker text, so if you put it just before the texture creation call, you can see which texture is being created and remember the pointer to track it through the frame.
  14. I think this part may be wrong:

    float4 worldPos = mul(screenPosition, InvViewProjection);
    worldPos /= worldPos.w;

    I'm not sure about this homogeneous coordinate stuff, but I do recall you divide by w when transforming from world to projection. Since you're doing the reverse, from projection to world, you may have to do:

    screenPosition *= screenPosition.w;
    float4 worldPos = mul(screenPosition, InvViewProjection);

    I also seem to recall only the z component gets divided by w, but I'm really not sure about that. If that's correct though, you'd need to do:

    screenPosition.z *= screenPosition.w;
    float4 worldPos = mul(screenPosition, InvViewProjection);

    I hope this makes sense and isn't completely pointing you down the wrong path
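For what it's worth, a quick numeric sanity check (pure Python, my own hand-rolled helpers, not any real API; note I use column vectors, where XNA's mul(v, M) uses row vectors, but that doesn't affect the conclusion) suggests the multiply-then-divide-by-w version round-trips correctly:

```python
import math

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert(m):
    # Gauss-Jordan elimination with partial pivoting on a 4x4 matrix.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

def perspective(fovy, aspect, zn, zf):
    # D3D-style projection: -z forward, depth mapped to [0, 1].
    f = 1.0 / math.tan(fovy / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, zf / (zn - zf), zn * zf / (zn - zf)],
            [0.0, 0.0, -1.0, 0.0]]

P = perspective(math.radians(60.0), 16.0 / 9.0, 1.0, 100.0)  # view = identity
world = [1.0, 2.0, -5.0, 1.0]

clip = mat_vec(P, world)
ndc = [c / clip[3] for c in clip]   # perspective divide; ndc[3] == 1

# The disputed snippet: multiply by the inverse, THEN divide by w.
back = mat_vec(invert(P), ndc)
back = [c / back[3] for c in back]  # recovers the original world position
```

So at least in this toy setup, the original "mul by InvViewProjection, then divide by w" checks out as-is: multiplying the NDC position by the inverse gives world/w, and dividing by its own w component undoes that scale.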
  15. remigius

    Distance Constraint

    I've found Jakobsen's article to be a great starting point for setting up simple physics things. The article explains how to get a serviceable particle physics system going (which can be used to simulate more than you might think) and it shows how to create so-called stick constraints (basically fixed-distance constraints) between two points. I'm not sure if it'll work with the Box2D stuff, but it might prove an interesting read. The basic idea is that given enough iterations in which you set points A and B at a fixed distance (this is what Jakobsen calls relaxation), the points will end up at acceptable locations.
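A minimal Python sketch of the idea (names and numbers are my own, not from the article): particles store their current and previous position (Verlet integration), and a stick constraint repeatedly nudges two particles back to their rest distance.

```python
import math

class Particle:
    def __init__(self, x, y):
        self.pos = [x, y]
        self.old = [x, y]  # previous position; encodes velocity implicitly

def verlet_step(p, ax=0.0, ay=0.0, dt=1.0 / 60.0):
    x, y = p.pos
    vx, vy = x - p.old[0], y - p.old[1]   # implicit velocity
    p.old = [x, y]
    p.pos = [x + vx + ax * dt * dt, y + vy + ay * dt * dt]

def satisfy_stick(a, b, rest_length):
    """Move both endpoints half the error each, back to the rest distance."""
    dx = b.pos[0] - a.pos[0]
    dy = b.pos[1] - a.pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    f = 0.5 * (dist - rest_length) / dist
    a.pos[0] += f * dx; a.pos[1] += f * dy
    b.pos[0] -= f * dx; b.pos[1] -= f * dy

# Relaxation: iterating the constraint drives the pair to the rest distance.
a, b = Particle(0.0, 0.0), Particle(3.0, 0.0)  # stretched: distance 3, rest 1
for _ in range(10):
    satisfy_stick(a, b, 1.0)
```

A single constraint converges immediately; the iteration count starts to matter once several constraints share particles (a rope or cloth), since satisfying one constraint disturbs its neighbours and relaxation settles them all together.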