

Digitalfragment

Member Since 29 Aug 2002

#4954719 Roads with kerbs

Posted by Digitalfragment on 01 July 2012 - 07:06 PM

Personally, I've tried both procedural approaches and artist-built tiling models. Both work well; the artist-built method ends up yielding better results, but fitting the tiles properly is a bit of a pain.

In the tiled-model approach, the models occupy a square and are tessellated enough that they can be bent to fit a spline in a vertex shader. They have skirting geometry to fill in any cracks that may arise at T-junctions, and they also punch down below the ground far enough to compensate for changes in topology. This does yield a bit of potential overdraw, but by drawing the ground before the roads, early-z optimizations make it a non-issue.
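As a rough illustration of the bending step (a sketch, not the actual shader - in practice this math lives in the vertex shader, and the quadratic Bezier here is just a stand-in for whatever spline format the road tool exports):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Quadratic Bezier stand-in for the road spline.
static Vec3 evalSpline(Vec3 p0, Vec3 p1, Vec3 p2, float t) {
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t);
}

// Bend a tile-local vertex onto the spline: local z in [0,1] is distance
// along the tile, local x is sideways offset from the road centreline,
// local y is height.
static Vec3 bendVertex(Vec3 local, Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 centre = evalSpline(p0, p1, p2, local.z);

    // Finite-difference tangent, rotated 90 degrees in the xz plane to get
    // a flat side vector (sign depends on your handedness convention).
    Vec3 ahead = evalSpline(p0, p1, p2, local.z + 0.01f);
    float dx = ahead.x - centre.x, dz = ahead.z - centre.z;
    float len = std::sqrt(dx * dx + dz * dz);
    Vec3 side = { dz / len, 0.0f, -dx / len };

    return { centre.x + side.x * local.x,
             centre.y + local.y,
             centre.z + side.z * local.x };
}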

With the procedural approach, from the spline data I build a triangle strip following the path, like a thick-line renderer, then use a simple grammar to describe how far to extrude, how to shape the kerbs, how wide to make the footpaths, and so on. The same grammar is also used to populate the sidewalk with street lights and the like.
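A sketch of that strip build, reusing the Vec3 and evalSpline helpers from the sketch above (names and the fixed spline format are illustrative only):

#include <cmath>
#include <vector>

// Two vertices per step down the spline - left edge, right edge - which a
// triangle-strip topology then stitches into the road surface. Kerbs and
// footpaths would be extra extrusions driven by the grammar.
std::vector<Vec3> buildRoadStrip(Vec3 p0, Vec3 p1, Vec3 p2,
                                 int segments, float halfWidth) {
    std::vector<Vec3> strip;
    strip.reserve((segments + 1) * 2);
    for (int i = 0; i <= segments; ++i) {
        float t = (float)i / (float)segments;
        Vec3 c = evalSpline(p0, p1, p2, t);

        // Flat side vector, as in the bending sketch above.
        Vec3 ahead = evalSpline(p0, p1, p2, t + 0.01f);
        float dx = ahead.x - c.x, dz = ahead.z - c.z;
        float len = std::sqrt(dx * dx + dz * dz);
        Vec3 s = { dz / len, 0.0f, -dx / len };

        strip.push_back({ c.x - s.x * halfWidth, c.y, c.z - s.z * halfWidth });
        strip.push_back({ c.x + s.x * halfWidth, c.y, c.z + s.z * halfWidth });
    }
    return strip;
}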


#4953785 Roads with kerbs

Posted by Digitalfragment on 28 June 2012 - 05:44 PM

A lot of games companies will use a mix of their own hand-written tools and external tools such as Maya/3DSMax.

As far as roads with kerbs go, it's nothing hard, just time-consuming: the road tool boolean-subtracts the road's x/z footprint from the triangles supplied by the terrain system, then reshapes the road's y values to follow what the terrain dictates along the edge of the subtracted region.

Don't build everything in Max as one massive mesh. Buildings should be made separately, along with appropriate LOD models, then introduced into the world via locators so that your game can switch between LODs as needed (and so that you can re-use assets where possible).

With terrain, you can poly-mesh the entire thing and auto-generate LODs if you want overhangs etc. Or you can look at voxel-modelling the terrain and generating your poly mesh from that (or even just rendering the voxel mesh directly if you choose).


#4951185 Separation of Render and SceneSystem.

Posted by Digitalfragment on 20 June 2012 - 07:34 PM

"Here's my question: how do I best link the SceneObject to the DX11GeometryObject? Remember, I want the RenderSystem completely independent of the SceneObject, so SceneObject doesn't know what a DX11GeometryObject is."


Create a data-bridge type that links a SceneObject to a DX11GeometryObject; SceneObject doesn't know about DX11GeometryObject and vice versa. std::pair<SceneObject*, DX11GeometryObject*> fits well for that. Then you have a process that goes through the pairs and copies the appropriate data (transforms etc.) from the SceneObject into your DX11 constant buffers. The pairing is a two-way association, but a one-way data pipeline.
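A minimal sketch of that bridge - only the std::pair idea is from the post; the member names are assumptions for illustration:

#include <utility>
#include <vector>

struct Matrix44 { float m[16]; };

struct SceneObject {            // knows nothing about DX11
    Matrix44 world;
};

struct DX11GeometryObject {     // knows nothing about the scene graph
    Matrix44 worldForConstants; // staged here, then pushed to the constant buffer
};

std::vector<std::pair<SceneObject*, DX11GeometryObject*>> g_links;

// The only code that knows about both sides; runs once per frame between
// the scene update and the render submission. Data flows one way only.
void syncRenderData() {
    for (auto& link : g_links)
        link.second->worldForConstants = link.first->world;
}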


#4945189 Image stretching issue

Posted by Digitalfragment on 31 May 2012 - 09:33 PM

I'd suggest taking the easiest approach and converting your mouse coordinates into the same scale your UI was built at before doing any hit-testing.

So, if you have built your UI at, say, (640, 480) but are rendering to a window that's (1024, 768), then convert your mouse x,y like:
(((float)x / 1024.0f) * 640.0f, ((float)y / 768.0f) * 480.0f)


#4945168 Problem with road and normalmap

Posted by Digitalfragment on 31 May 2012 - 08:14 PM

That's correct. Most normal maps represent normals in tangent space, not in world space or object space.

For the road, you have already calculated either the tangent or the bitangent - that's the extrusion vector from one side of the road to the other.
The other of the two is more or less the cross product of that vector and the up vector.
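In code, that construction looks something like the sketch below (the helpers and sign conventions are assumptions; you may need to flip a cross product for your handedness):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// 'across' is the extrusion vector from one side of the road to the other.
void buildRoadBasis(Vec3 across, Vec3& tangent, Vec3& bitangent, Vec3& normal) {
    const Vec3 up = { 0.0f, 1.0f, 0.0f };
    tangent   = normalize(across);
    bitangent = normalize(cross(up, tangent)); // runs along the road
    normal    = cross(tangent, bitangent);     // roughly 'up' on flat ground
}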


#4943930 Destructible planetary-size terrain: SVO, MC or DC extraction, or both?

Posted by Digitalfragment on 28 May 2012 - 02:25 AM

"Multi-octave noise function = OK -> but ray-marching it is expensive, and applying/storing destruction over an implicit function looks like a nightmare."

Use this to generate your not-yet-destroyed object once at load time, not for real time. Ideally, never calculate anything at real time if the output never changes given the same inputs.

"So, store the 3D noise scalar field inside an SVO to stream/page it from disk? OK, but which node structure is suitable for destroying it?"

Depends on how you want to destroy it.

"How do I shatter the SVO volume and apply physics to the shards, shattering again and again on collision until the shards reach the size of a single voxel?"

Shattering an SVO is just taking a chunk of the voxel nodes out and creating a new SVO from them. If you are dynamically building SVOs for the shards, they can become arbitrarily small, so put a threshold in at some point.
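A very rough sketch of what "take a chunk of nodes out" means structurally (the node layout here is an assumption, not anything from the post):

struct SVONode {
    SVONode* children[8] = {};  // null = empty space or a leaf
    bool     solid = false;
};

// Detach one child subtree and promote it to the root of a new SVO (a shard).
// Refuse to split below minSize - the threshold that stops shards becoming
// arbitrarily small.
SVONode* detachShard(SVONode* parent, int childIndex,
                     float childSize, float minSize) {
    if (childSize < minSize)
        return nullptr;
    SVONode* shard = parent->children[childIndex];
    parent->children[childIndex] = nullptr; // leaves a hole in the source SVO
    return shard; // give this its own transform and hand it to the physics system
}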

"Physics over such an amount of voxels is obviously too expensive. So should I polygonize blocks of voxels in the shattered area with an MC function and rasterize triangles out of those blocks, rather than trying to DVR a whole 'dynamic' SVO?"

"Atomontage answered that he doesn't polygonize his atoms at all... How is he doing physics on that huge an amount of voxels?!"

Heavy use of spatial partitioning will help to avoid running pointless collision detection between things that are far away.


#4940294 Drawing Ball Path using XNA

Posted by Digitalfragment on 14 May 2012 - 11:46 PM

List<VertexPositionColor> has a ToArray() method that will convert it to a VertexPositionColor[].

For the sake of speed though, I'd suggest keeping it as an array to begin with, initialised with a predetermined maximum size, e.g. new VertexPositionColor[256]. If you go to add more vertices and the array is full, you can start treating it like a ring/circular buffer and forget the oldest points.
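The post is about XNA/C#, but the ring-buffer idea is language-agnostic; here is a minimal sketch of it in C++ (names and the 256 cap are illustrative):

#include <cstddef>

struct TrailPoint { float x, y, z; };

struct TrailBuffer {
    static const std::size_t Capacity = 256; // predetermined maximum size
    TrailPoint  points[Capacity];
    std::size_t head  = 0; // next slot to write
    std::size_t count = 0; // how many slots hold valid points

    void add(TrailPoint p) {
        points[head] = p;
        head = (head + 1) % Capacity; // wrap around: once full, the oldest
        if (count < Capacity)         // point is silently overwritten
            ++count;
    }
};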




#4937004 2D lighting

Posted by Digitalfragment on 03 May 2012 - 01:52 AM

I think you're missing the fact that subtracting the alpha after it's already been drawn will not reveal what was below that pixel. Once you have drawn to a pixel, whatever was there beforehand is gone - a texture only stores one colour per pixel. To get more information per pixel, you have to store it yourself in other render targets.


#4936969 Depth-testing in the fragment shader

Posted by Digitalfragment on 02 May 2012 - 11:19 PM

@L.Spiro:
I believe Fabonymous is referring to clipping decals so they don't overhang geometry - for example, a splash-damage decal on the edge of a wall.

@Fabonymous:
If this is the case, I don't think depth clipping is viable either, as a moving object can end up having polygons close enough to co-planar with the decal sprite that they get drawn to as well. Depending on your platform, there are a few easier ways of solving it - for example, performing clipping and rejection in a geometry shader and writing out to a decal vertex buffer.


#4936934 Smooth rotation and movement of camera

Posted by Digitalfragment on 02 May 2012 - 08:19 PM

Using interpolation to smooth from the previous position to the intended position will slow the camera down. Perhaps what you are looking for is input smoothing, which is averaging the deltas and then applying the averaged delta to the camera.

Time delta is necessary to convert a velocity-in-seconds value into a velocity-in-frames value. When using mouse input, I divide the mouse delta by dt to get a pixels-per-second value. Buffer that over 15 frames, and average out the last 15 frames' worth of velocities. Then multiply that by dt again to turn it back into a per-frame value.
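A sketch of that smoothing (the 15-frame window is from the post; everything else here is illustrative):

const int kWindow = 15;
float g_history[kWindow] = {}; // recent pixels-per-second values
int   g_cursor = 0;

float smoothMouseDelta(float mouseDeltaPixels, float dt) {
    g_history[g_cursor] = mouseDeltaPixels / dt; // per-frame delta -> pixels/sec
    g_cursor = (g_cursor + 1) % kWindow;

    float average = 0.0f;
    for (int i = 0; i < kWindow; ++i)
        average += g_history[i];
    average /= (float)kWindow;

    return average * dt; // back to a per-frame delta for this frame
}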

The side effect is that the camera will have a bit of inertia after the mouse stops moving, but that tends to feel better than a harsh stop anyway.


#4935957 2D lighting

Posted by Digitalfragment on 29 April 2012 - 07:21 PM

As far as 2) is concerned, it comes down to the order of draw calls. You can freely change blend functions and equations between draw calls, just like anything else on the video card.
So if you draw your background, then your lighting filters, then your sprites, you should be fine (noting that your sprites' blending requirements are source-based, not destination-based, so whatever has already been put into alpha is irrelevant).


#4935947 2D lighting

Posted by Digitalfragment on 29 April 2012 - 06:48 PM

"Pretty sure the problem is ultimately that I don't understand what the different blend-func configurations do in practical terms, or even if there is one for what I'm trying to accomplish... I'm sure there's a way, I've seen it done."


In practical terms:
  • At the time of blending you have 2 colours, Destination and Source: Destination is what has already been drawn to that pixel, and Source is the colour you are currently drawing. (In some circumstances you can also have an additional constant colour - in OpenGL that's GL_CONSTANT_COLOR, set via glBlendColor.)
  • There are multiple blending equations (glBlendEquation). The default is Add, which is SrcResult + DstResult; there are also Subtract, Reverse Subtract, Min and Max.
  • SrcResult and DstResult come from applying the factors specified in glBlendFunc to Source and Destination.
  • GL_ZERO: 0 * value // i.e. use zero
  • GL_ONE: 1 * value // i.e. use value directly
  • GL_SRC_ALPHA: Source.a * value // i.e. multiply the colour by the alpha of Source (regardless of whether value is Destination or Source)
  • GL_DST_ALPHA: Destination.a * value // i.e. multiply the colour by the alpha of Destination (regardless of whether value is Destination or Source)
  • GL_ONE_MINUS_SRC_ALPHA: (1.0f - Source.a) * value // the inverse of GL_SRC_ALPHA: multiply by the remaining fraction of Source's alpha
  • GL_ONE_MINUS_DST_ALPHA: (1.0f - Destination.a) * value // same as above, except using the remainder of Destination's alpha

    Using the default blending equation of Add with (SrcAlpha, InvSrcAlpha), what you have is a linear interpolation from the destination to the source by the source's alpha, as the snippet below shows:

    ResultingPixel = (Source.a * Source.rgba) + ((1 - Source.a) * Destination.rgba)

    Note that through extensions or OpenGL 2.0+ you can also give the alpha channel a separate equation (glBlendEquationSeparate).
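Putting that together, standard alpha blending in OpenGL looks like this (glBlendEquation requires GL 1.4+, or the relevant extension on older contexts):

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                      // SrcResult + DstResult
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // lerp by source alpha
// ResultingPixel = (Source.a * Source.rgba) + ((1 - Source.a) * Destination.rgba)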



#4933071 Perspective Woes

Posted by Digitalfragment on 19 April 2012 - 10:44 PM

You can supply negative values to glOrtho, so if you want to go from -1 to 1 you can call glOrtho(-1, 1, -1, 1, 0, 1); // the last two being the near/far planes.

You should also take the aspect ratio into consideration, which will typically mean modifying left/right, something like:
float aspect = (float)windowWidth / (float)windowHeight;
float viewHeight = 2.0f; // going from -1 to 1!
float viewWidth = viewHeight * aspect;
float left = viewWidth * -0.5f;
float right = viewWidth * 0.5f;
glOrtho(left, right, -1.0f, 1.0f, 0.0f, 1.0f);


#4923517 Representing interval set with values for pixel values

Posted by Digitalfragment on 19 March 2012 - 09:22 PM

It's a shame this is marked as OpenGL, because it's quite easy to do in DirectX 11 (albeit you wouldn't be writing to a framebuffer, but to a UAV). Check out the "Linked List OIT" examples.

Actually, a quick Google search returned this, which is OpenGL:
http://blog.icare3d.org/2010/07/opengl-40-abuffer-v20-linked-lists-of.html


#4917531 Rendering items on a model

Posted by Digitalfragment on 28 February 2012 - 04:26 PM

#2, as it is more flexible, and in the long run it is quicker to add new features to, with less memory overhead.

If your characters are rigged, then you can just load multiple models and attach them all to the same skeleton instance. As long as the pieces are designed to fit together, everything just works (e.g. if you do body-shape scaling by adjusting the skeleton, it will affect the body model, the chestplate armour model, the gauntlet model, etc.).
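A minimal sketch of that setup (all the type and function names here are assumptions, not engine API):

#include <utility>
#include <vector>

struct Skeleton { /* shared pose data: one copy per character */ };

struct SkinnedMesh {
    Skeleton* skeleton = nullptr; // non-owning: many meshes, one skeleton
    // vertex data, skin weights, LODs, ...
};

struct Character {
    Skeleton skeleton;
    std::vector<SkinnedMesh> parts; // body, chestplate, gauntlets, ...

    void equip(SkinnedMesh mesh) {
        mesh.skeleton = &skeleton;        // animate the skeleton once,
        parts.push_back(std::move(mesh)); // every attached piece follows
    }
};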



