
Dunge

Member Since 01 Aug 2000
Offline Last Active Oct 31 2013 11:17 AM

Topics I've Started

Tessellation and triangulation of 2D polygons

08 August 2010 - 03:30 PM

This is a continuation of my previous thread, but with more specific questions.

I am currently storing multiple complex shapes as arrays of 2D vertices forming a counterclockwise outline (hull) of each shape. These shapes can be concave, and some will probably also contain holes.

I need to do two things.
First, if a shape is concave, "tessellate" it into multiple convex sub-shapes. Then, for each of these convex shapes, build another vertex array forming its outline, to pass to the physics engine (which only accepts convex hulls, not triangles).
Second, for each of these convex shapes, use a triangulation algorithm to generate a triangle list from the convex hull for rendering.

Now, I know this has been discussed a lot, but I'm just overwhelmed by the amount of information and don't know which approach to use, so I thought that by specifying my exact needs, someone might point me to an easy implementation. Or help me by, for example, telling me not to waste time implementing something only useful for 3D graphics. Even better if I can get a list of the newly created vertices inside the polygon, so the results are ready to be used with an index buffer.

I've already searched a bit and this is what I found out; please correct me if I've made any mistakes:
-There's the "ear clipping" triangulation algorithm, which seems easy enough to implement but doesn't give "optimal" triangles (see the sketch after this list).
-Then there's "Delaunay triangulation", which also seems to work fine and gives optimal results. The problem is, from what I understand, I need to already have points placed inside the polygon so it can create triangles with them; it doesn't actually create them by itself. I'm not sure about that, since I still have to find a correct implementation of the algorithm.
-Searching for tessellation and DirectX only gives articles about the new DX11 hardware tessellation feature, which is not really what I'm looking for.
-I've found this GLUtessellator sample page, which is simply amazing. It does exactly what I need and works with everything. You just pass the "contour" of the polygon, and not only does it tessellate it, it outputs the complete topology to use with vertex lists for efficient rendering. I have to ask: is there something similar already available for DirectX? Keep in mind that triangle fans are deprecated in the latest DirectX versions.
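
Here's the minimal ear-clipping sketch mentioned above, in C++. It assumes a simple counterclockwise polygon with no holes; Vec2, Cross, and EarClip are my own names, and it's a naive O(n^2) version, a starting point rather than a finished implementation. It returns indices into the input array, ready for an index buffer:

#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Z of the cross product of (b - a) and (c - a); > 0 means a CCW (convex) corner.
static float Cross(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if p lies inside (or on an edge of) the CCW triangle (a, b, c).
static bool PointInTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c)
{
    return Cross(a, b, p) >= 0.0f && Cross(b, c, p) >= 0.0f && Cross(c, a, p) >= 0.0f;
}

// Triangulates a simple CCW polygon into a triangle list of vertex indices.
std::vector<size_t> EarClip(const std::vector<Vec2>& poly)
{
    std::vector<size_t> tris;
    if (poly.size() < 3)
        return tris;

    std::vector<size_t> remaining(poly.size());
    for (size_t i = 0; i < poly.size(); ++i)
        remaining[i] = i;

    while (remaining.size() > 3)
    {
        bool clipped = false;
        for (size_t i = 0; i < remaining.size(); ++i)
        {
            const size_t prev = remaining[(i + remaining.size() - 1) % remaining.size()];
            const size_t curr = remaining[i];
            const size_t next = remaining[(i + 1) % remaining.size()];

            // An ear must be a convex corner...
            if (Cross(poly[prev], poly[curr], poly[next]) <= 0.0f)
                continue;

            // ...whose triangle contains no other remaining vertex.
            bool isEar = true;
            for (size_t k = 0; k < remaining.size() && isEar; ++k)
            {
                const size_t j = remaining[k];
                if (j != prev && j != curr && j != next &&
                    PointInTriangle(poly[j], poly[prev], poly[curr], poly[next]))
                    isEar = false;
            }
            if (!isEar)
                continue;

            tris.push_back(prev); tris.push_back(curr); tris.push_back(next);
            remaining.erase(remaining.begin() + i); // clip the ear
            clipped = true;
            break;
        }
        if (!clipped)
            break; // degenerate input; avoid an infinite loop
    }
    tris.push_back(remaining[0]);
    tris.push_back(remaining[1]);
    tris.push_back(remaining[2]);
    return tris;
}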

Best practices for a "professional" 2D game engine

26 June 2010 - 06:27 AM

Hello there!
Last year I started what I considered a real DirectX10 3D engine with many neat features. It was going well: I had variance shadow maps and PhysX integrated, and a nice little bit of gameplay. Still, I realized I didn't have the manpower needed for creating assets (modeling, animations, etc.) and that I could never finish a game that looks serious next to what's on the market nowadays. However, I have many ideas for a high-res 2D game and think it would be easier to create while staying attractive to players.

I then took the old SDL project I had created for some GUI work and transformed it into a real-time 2D game engine with Box2D integrated. While it was great, it wasn't good enough, especially since SDL (without OpenGL, just plain SDL) can only draw axis-aligned rectangular surfaces with alpha blending, while Box2D is based on vector polygon shapes. I searched a bit and noticed the only graphics APIs supporting vector shapes are actually OpenGL/Direct3D.

So... I went back to my DirectX10 engine (which is much more advanced anyway, and which I might upgrade to DX11), scrapped everything 3D in it, modified my camera system, and now use an orthogonal projection. It's starting to look like something really nice.

Still, before starting down a path and realizing it's the wrong one again, I have a few easy questions.

1. What's currently the best physics engine for 2D? I only know Box2D and Chipmunk, but I can't find a comparison anywhere of what's better in one or the other. Or maybe I should stick with PhysX or ODE and constrain them to 2D somehow?

2. I plan to make a "shape editor" to create my shapes in-game. I see it as simply a list of dynamically movable points composing a polygon, each point having UV coordinates to map the texture. It's pretty easy, but I'm stuck on one basic thing: which primitive topology should I use for it? A simple line strip only draws the outline of the polygon; it isn't filled. As for a standard triangle list/strip, I'd need some kind of algorithm to generate front-facing triangles from any possible list of points. Isn't there an easier way?
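
If the shapes are convex (which Box2D wants anyway), one simple answer is a fan-style triangle list: every triangle shares vertex 0. A minimal sketch in C++; the function name is mine, and it assumes vertices in a consistent winding order:

#include <vector>

// Builds a triangle-list index buffer that fans out from vertex 0.
// Works for any convex polygon; triangle fans themselves are deprecated
// in recent D3D versions, but a fan expressed as a list is not.
std::vector<unsigned int> FanIndices(unsigned int vertexCount)
{
    std::vector<unsigned int> indices;
    for (unsigned int i = 1; i + 1 < vertexCount; ++i)
    {
        indices.push_back(0);
        indices.push_back(i);
        indices.push_back(i + 1);
    }
    return indices;
}

For concave input you'd need a real triangulation such as ear clipping.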

I feel kind of dumb asking this; it's basic and I should know it, but I guess I stopped game programming for a bit too long :)

[Edited by - Dunge on June 26, 2010 1:44:57 PM]

Drawing circle with SDL_draw vs SDL_gfx

18 February 2010 - 03:38 AM

We are using SDL on an ARM embedded device, drawing to the Linux framebuffer, so speed is important, but the problem also occurs on Windows. We plan on drawing a few hundred small circles next to one another. We actually draw the circle once on a temporary surface, then blit it multiple times onto the main surface.

We first used SDL_draw, and it worked fine until we realized it assumes the surface it draws on has the same bit depth as the primary (backbuffer) surface. Our video driver only supports 16 bits, but we use some 32-bit surfaces to support alpha, and while that works, SDL_draw was trying to draw in 16-bit on these and was not rendering properly. I then modified its source so it uses the passed surface to check bit depth instead of calling SDL_GetVideoSurface. After that it worked for alpha surfaces, but caused heap corruption on regular 16-bit surfaces.

I then figured this lib was crappy and tried the other, more widely used one (SDL_gfx). While it draws correctly on all types of surfaces and doesn't crash, the filledCircleColor function actually draws a circle bigger than what you ask for, and it looks stretched and ugly. aacircleColor() seems a bit better, but costs too much performance, is not filled, and still draws bigger than what I ask for. My surface is 14x14, I draw at (7,7) with a radius of 7, and the bottom/right pixels are missing. SDL_draw, though, drew the circle perfectly.

Anything I can do?! Both libs have problems. Should I write my own circle-drawing code? I can't simply load a PNG, because the circle size can change.
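
Writing your own filled circle is less work than it sounds. A minimal sketch, assuming SDL 1.2: it fills one horizontal span per scanline with SDL_FillRect, which respects the target surface's own pixel format (the original bit-depth problem); FillCircle is my own name:

#include <SDL.h>
#include <cmath>

// Fills the circle of radius r centered at (cx, cy) on surface s.
// One SDL_FillRect per scanline; the span half-width at vertical
// offset dy comes from dx^2 + dy^2 <= r^2.
void FillCircle(SDL_Surface* s, int cx, int cy, int r, Uint32 color)
{
    for (int dy = -r; dy <= r; ++dy)
    {
        int dx = (int)std::sqrt((double)(r * r - dy * dy));
        SDL_Rect span;
        span.x = (Sint16)(cx - dx);
        span.y = (Sint16)(cy + dy);
        span.w = (Uint16)(2 * dx + 1);
        span.h = 1;
        SDL_FillRect(s, &span, color);
    }
}

Build the color with SDL_MapRGBA(s->format, ...) so it matches the surface's depth; SDL_FillRect also clips to the surface bounds for you.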

PIX reading inconsistency / bug(?)

23 May 2009 - 01:55 PM

This is related to my other thread's problem, but the title wasn't relevant anymore. Everything was tested with different video driver versions and also on the REF device; it always behaves the same way. DX10/C++/HLSL.

I capture a frame in PIX, go to the last draw call, select a particular pixel in the Render tab, right click, and select "debug this pixel". Its history simply contains Initial > Clear > Draw > Final color. The pixel shader output of the draw is equal to float4(0.494, 0.282, 0.004, 1.000). If I click "Debug Pixel" on this page, step all the way to the end, and check the value returned by the pixel shader, it shows float4(0.193, 0.113, 0.003, 1.000). This is the correct expected value, proving that my code actually seems OK, but unfortunately the color displayed is the first one mentioned earlier, which makes no sense.

Another strange thing: debugging this way with PIX shows me that a certain variable, determined by a series of "if"s (see below), is set to the expected value. If I force the variable in the shader code to this value, it of course mixes things up for other pixels, but for this one it fixes the problem and returns, and most importantly displays, the correct color (0.193, 0.113, 0.003, 1.000). Debugging one way or the other (exact same frame) in PIX shows the exact same values all the way through for every variable of the whole pixel shader, except for the final color displayed on screen.

The bug happens only for certain pixels where two float values are nearly equal in the "if" series mentioned earlier; that's why I had the idea to force this specific variable, and I was amazed that it changed something. The variable in question is "cubeMapNum" in this code (based on an NVIDIA SDK sample):
// Pick the cube map face whose axis dominates the light direction.
// Face order implied by the code: +X=0, -X=1, +Z=2, -Z=3, -Y=4, +Y=5.
uint cubeMapNum = 0; // declared here so the snippet is self-contained
float maxCoord = max( abs( DirToLight.x ), max( abs( DirToLight.y ), abs( DirToLight.z ) ) );
[flatten]
if( maxCoord == abs( DirToLight.x ) )
	cubeMapNum = DirToLight.x > 0 ? 0 : 1;
[flatten]
if( maxCoord == abs( DirToLight.y ) )
	cubeMapNum = DirToLight.y > 0 ? 5 : 4;
[flatten]
if( maxCoord == abs( DirToLight.z ) )
	cubeMapNum = DirToLight.z > 0 ? 2 : 3;
// Each light 'l' owns six consecutive entries in the shadow map array.
uint shadowMapIndex = cubeMapNum + (l * 6);
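
For reference, here's a CPU-side re-implementation of the same selection (a sketch; the function name is mine). It's handy for checking, outside the shader, which face a captured DirToLight should pick, and it shows why near-equal components are fragile: when two absolute values tie exactly, the later "if" silently wins.

#include <algorithm>
#include <cmath>

// Mirrors the HLSL face-selection logic above on the CPU.
int SelectCubeFace(float x, float y, float z)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float maxCoord = std::max(ax, std::max(ay, az));
    int face = 0;
    if (maxCoord == ax) face = x > 0.0f ? 0 : 1;
    if (maxCoord == ay) face = y > 0.0f ? 5 : 4;
    if (maxCoord == az) face = z > 0.0f ? 2 : 3;
    return face;
}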

So in fact, PIX's step-through of the shader returns the correct value, but D3D doesn't display it. If anyone has any idea of what's going on, please reply. I guess this has something to do with shader optimizations or numerical instability, but as I said, the result is the same with any driver or device, debug or release. In debug, the shader is compiled with the D3D10_SHADER_DEBUG and D3D10_SHADER_SKIP_OPTIMIZATION flags.

[Edited by - Dunge on May 24, 2009 12:14:27 AM]

Summed-Area Variance Shadow Maps

25 April 2009 - 06:19 AM

Hey there! I don't know if anyone else here has implemented SAVSM, but if so, I need information. I have a DX10 game and had working standard VSM for my shadows. I recently got my hands on the GPU Gems 3 DVD and thought it would be great to implement SAT VSM (int format). It took me a while, but I integrated the code from the demo into my game and finally managed to get it working "perfectly", both with and without MSAA for the shadow pass. I had a lot of trouble debugging because neither PIX nor NVPerfHUD seems capable of displaying INT textures. I even contacted AndyTX by e-mail, and he helped me get it working, but did not answer this question:

I still have one problem remaining. It's not directly related to the SAT technique, but it might be. I'm using point lights with omnidirectional cube map shadows, and at the intersection of the shadow map faces there is a line artifact. If you don't mind the ugly test scene, you can view a screenshot here: http://img6.imageshack.us/img6/5397/cubemap.jpg As you can see, the light is centered above the first platform, and the "square" of line artifacts represents the "down" shadow map of the light, while the diagonal lines represent the perspective view of all sides. The pixels in these lines appear lit when they're not supposed to be, and *sometimes* unlit when they are supposed to be.

Since the demo only uses a spotlight, and hence never touches the edges of the shadow map, does anyone know whether the SAT can give invalid results at these edges? In my previous VSM implementation, I encountered the exact same problem (or very nearly) and fixed it by simply adding a few extra degrees to the usual (D3DX_PI * 0.5f) light projection. Unfortunately, that doesn't seem to work this time.

I debugged invalid pixels using PIX, and the relevant information I got is:
-They sample the correct shadow map index.
-LightTexCoord gives coordinates in the [0,1] range, as expected, but very close to the extremities.
-Unfortunately, after the filter tile size calculation, some values are slightly <0 or >1. On pixels that appear valid, this never happens.

I then tried clamping the Tile variable to [0,1], but it doesn't help. Next step, I will look into PCSS...

EDIT: I guess this could help. I took a screenshot zoomed in on the problem: http://img12.imageshack.us/img12/154/zoome.jpg As you can see, the part with incorrect lighting in the shadow area is about 1 texel wide (shadow map size), while the black line in the lit area is always 1 pixel wide (render target size) no matter how close I zoom. There is also a small, nearly unnoticeable shadow imperfection at the intersection.

[Edited by - Dunge on April 25, 2009 2:08:27 PM]
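
For anyone trying the same edge-padding workaround, here's a minimal sketch of the widened per-face projection I mean, using D3DX math. The fovPadding parameter and the function name are my own; each cube face nominally needs exactly 90 degrees (D3DX_PI * 0.5f), and the padding adds a small overlap so that filtering near a face edge still samples valid data:

#include <d3dx10math.h>

// Projection matrix for one cube map face, widened slightly past 90 degrees.
D3DXMATRIX MakeCubeFaceProjection(float zNear, float zFar, float fovPadding)
{
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj,
                               D3DX_PI * 0.5f + fovPadding, // 90 degrees + overlap
                               1.0f,                        // square face, aspect 1
                               zNear, zFar);
    return proj;
}

Note that widening the FOV slightly changes the area each face covers, so presumably the LightTexCoord computation has to use the same widened projection to stay aligned.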
