Digitalfragment

Member Since 29 Aug 2002

#4945168 Problem with road and normalmap

Posted by Digitalfragment on 31 May 2012 - 08:14 PM

That's correct. Most normal maps represent normals in tangent space, not in world space or object space.

For the road, you have already calculated either the tangent or the bitangent - that's the extrusion vector from one side of the road to the other.
The other of the two is, more or less, the cross product of that vector with the up vector.
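A minimal sketch of that, assuming GLM and a Y-up world; swap the cross-product operands if your handedness convention differs:

#include <glm/glm.hpp>

// build the remaining basis vectors for a road section;
// roadAcross is the extrusion vector from one road edge to the other
void roadBasis(const glm::vec3& roadAcross,
               glm::vec3& tangent, glm::vec3& bitangent, glm::vec3& normal)
{
    const glm::vec3 up(0.0f, 1.0f, 0.0f);
    tangent   = glm::normalize(roadAcross);
    bitangent = glm::normalize(glm::cross(tangent, up)); // runs along the road
    normal    = glm::normalize(glm::cross(bitangent, tangent));
}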


#4943930 Destructible planetary size terrain : SVO, MC or DC extraction, both ?

Posted by Digitalfragment on 28 May 2012 - 02:25 AM

Multi-octave noise function = OK -> but ray marching it is expensive, and applying and storing destruction over an implicit function looks like a nightmare.

Use this to generate your not-yet-destroyed object once at load time, not in real time. Ideally, never calculate anything at runtime if the output never changes given the same inputs.

So, storing the 3D noise scalar field inside an SVO to stream/page it from disk? OK, but which node structure is suitable for destroying it?

Depends on how you want to destroy it.

How to shatter the SVO volume and apply physics on shards, shattering again and again upon gravity collision until shards reach the size of a single voxel?

Shattering an SVO is just a matter of taking a chunk of the voxel nodes out and creating a new SVO out of them. If you are dynamically building SVOs for the shards, they can become arbitrarily small, so place a threshold on subdivision at some point.

Physics over such an amount of voxels is obviously too expensive. So should I polygonize blocks of voxels in the shattered area with an MC function and rasterize triangles out of those blocks, rather than try to DVR a whole "dynamic" SVO?

Atomontage answered me that he doesn't polygonize his atoms at all... How is he doing physics on that huge an amount of voxels?!

Heavy use of spatial partitioning will help to avoid running pointless collision detection between things that are far away.
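As a rough illustration of the idea, here's a minimal uniform-grid broad phase in C++, assuming GLM for vectors; Shard and the cell size are hypothetical, and shards spanning cell boundaries would also need their neighbouring cells checked:

#include <glm/glm.hpp>
#include <unordered_map>
#include <vector>
#include <utility>
#include <cmath>
#include <cstdint>

struct Shard { glm::vec3 position; float radius; };

// hash a world position into a grid cell key
uint64_t cellKey(const glm::vec3& p, float cellSize)
{
    int64_t x = (int64_t)std::floor(p.x / cellSize);
    int64_t y = (int64_t)std::floor(p.y / cellSize);
    int64_t z = (int64_t)std::floor(p.z / cellSize);
    return (uint64_t)(x * 73856093 ^ y * 19349663 ^ z * 83492791);
}

// only shards sharing a cell become candidate collision pairs
std::vector<std::pair<Shard*, Shard*>> broadPhase(std::vector<Shard>& shards, float cellSize)
{
    std::unordered_map<uint64_t, std::vector<Shard*>> grid;
    for (Shard& s : shards)
        grid[cellKey(s.position, cellSize)].push_back(&s);

    std::vector<std::pair<Shard*, Shard*>> pairs;
    for (auto& cell : grid)
        for (size_t i = 0; i < cell.second.size(); ++i)
            for (size_t j = i + 1; j < cell.second.size(); ++j)
                pairs.push_back({cell.second[i], cell.second[j]});
    return pairs;
}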


#4940294 Drawing Ball Path using XNA

Posted by Digitalfragment on 14 May 2012 - 11:46 PM

List<VertexPositionColor> has a ToArray() function that will convert it to type VertexPositionColor[].




For the sake of speed, though, I'd suggest keeping it as an array to begin with, initialised with a predetermined maximum size, e.g. new VertexPositionColor[256]. If you go to add more vertices and the array is full, then you can start treating it like a ring/circle buffer and forget the oldest points.
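A minimal sketch of that ring-buffer idea, written in C++ terms with a hypothetical Vertex standing in for VertexPositionColor:

#include <cstddef>

struct Vertex { float x, y, z; unsigned int color; };

const size_t MaxPoints = 256;
Vertex points[MaxPoints];
size_t head = 0;   // index of the next slot to write
size_t count = 0;  // how many valid points are stored

void addPoint(const Vertex& v)
{
    points[head] = v;             // overwrites the oldest point once full
    head = (head + 1) % MaxPoints;
    if (count < MaxPoints)
        ++count;
}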




#4937004 2D lighting

Posted by Digitalfragment on 03 May 2012 - 01:52 AM

I think you're missing the fact that subtracting the alpha after it's already been drawn will not reveal what was below that pixel. Once you have drawn to a pixel, whatever was there beforehand is gone - a texture only stores one colour per pixel. To get more information per pixel, you have to manually store it in other render targets.


#4936969 Depth-testing in the fragment shader

Posted by Digitalfragment on 02 May 2012 - 11:19 PM

@L.Spiro:
I believe Fabonymous is referring to clipping decals so they don't overhang geometry - for example, a splash-damage decal on the edge of a wall.

@Fabonymous:
If this is the case, I don't think depth clipping is viable either, as a moving object can end up having polygons close enough to co-planar with the decal sprite, and as a result get drawn to as well. Depending on your platform, there are a few easier ways of solving it, for example performing clipping and rejection in a geometry shader and writing out to a decal vertex buffer.


#4936934 Smooth rotation and movement of camera

Posted by Digitalfragment on 02 May 2012 - 08:19 PM

Using interpolation to smooth from the previous position to the intended position will slow the camera down. Perhaps what you are looking for is input smoothing, which is averaging the deltas and then applying the averaged delta to the camera.

A time delta is necessary to convert a velocity-in-seconds value into a velocity-in-frames value. When using mouse input, I divide the mouse delta by dt to get a pixels-per-second value, buffer that over 15 frames, and average out the last 15 frames' worth of velocities. Then I multiply that by dt again to turn it back into a value for the current frame.

The side effect of this is that the camera will have a bit of inertia after stopping movement of the mouse, but that tends to feel better than a harsh stop anyway.
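A minimal sketch of that buffering scheme in C++; the 15-frame window and the names are illustrative, and dt is assumed to be non-zero:

#include <cstddef>

const size_t SmoothFrames = 15;
float history[SmoothFrames] = {0};  // recent pixels-per-second samples
size_t cursor = 0;

float smoothMouseDelta(float mouseDelta, float dt)
{
    history[cursor] = mouseDelta / dt;     // convert to pixels-per-second
    cursor = (cursor + 1) % SmoothFrames;  // ring buffer of recent velocities

    float average = 0.0f;
    for (size_t i = 0; i < SmoothFrames; ++i)
        average += history[i];
    average /= SmoothFrames;

    return average * dt;  // back to a per-frame delta for this frame
}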


#4935957 2D lighting

Posted by Digitalfragment on 29 April 2012 - 07:21 PM

As far as 2) is concerned, it comes down to the order of draw calls. You can freely change blend functions and equations in between draw calls, just like anything else on the video card.
So if you draw your background, then your lighting filters, then your sprites, you should be fine. (Note that your sprites' blending requirements are source, not destination, so whatever has already been put into alpha is irrelevant.)


#4935947 2D lighting

Posted by Digitalfragment on 29 April 2012 - 06:48 PM

Pretty sure the problem is ultimately that I don't understand what the different blend func configurations do in practical terms, or even if there is one for what I'm trying to accomplish... I'm sure there's a way, I've seen it done.


In practical terms:
  • At the time of blending you have two colours, Destination and Source. Destination is what has already been drawn to that pixel, and Source is the colour you are currently drawing. (In some circumstances you can also have an additional constant colour; OpenGL exposes this via glBlendColor.)
  • There are multiple blending equations (glBlendEquation). The default is Add, which is SrcResult + DstResult; there are also Subtract, Reverse Subtract, Min and Max.
  • SrcResult and DstResult come from using the functions specified in glBlendFunc to operate on Source and Destination:
  • GL_ZERO: 0 * value // i.e. use zero
  • GL_ONE: 1 * value // i.e. use value directly
  • GL_SRC_ALPHA: Source.a * value // i.e. multiply the colour by the alpha of source (regardless of whether value is Destination or Source)
  • GL_DST_ALPHA: Destination.a * value // i.e. multiply the colour by the alpha of destination (regardless of whether value is Destination or Source)
  • GL_ONE_MINUS_SRC_ALPHA: (1.0f - Source.a) * value // the inverse of GL_SRC_ALPHA: multiply by the remaining fraction of source's alpha
  • GL_ONE_MINUS_DST_ALPHA: (1.0f - Destination.a) * value // same as above, except using the remainder of destination's alpha

    Using the default blending equation of Add with SrcAlpha, InvSrcAlpha, what you have is a linear interpolation from the destination to the source by source's alpha:

    ResultingPixel = (Source.a * Source.rgba) + ((1 - Source.a) * Destination.rgba)

    Note that through extensions, or in OpenGL 2+, you can also separate the alpha channel to use a separate equation.
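For example, inside an active GL context, that standard lerp-style blend is set up like so (a minimal sketch; the draw calls themselves are whatever you already issue):

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);                      // Result = SrcResult + DstResult
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // source and destination factors
// ... draw the sprites that should blend this way ...
glDisable(GL_BLEND);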



#4933071 Perspective Woes

Posted by Digitalfragment on 19 April 2012 - 10:44 PM

You can supply negative values to glOrtho, so if you want to go from -1 to 1, you can call glOrtho(-1, 1, -1, 1, 0, 1); // the last two being the near/far planes.

You should also take the aspect ratio into consideration, which will typically mean modifying left/right. Something like:
float aspect = (float)screenWidth / (float)screenHeight; // your viewport size
float viewHeight = 2.0f; // going from -1 to 1!
float viewWidth = viewHeight * aspect;
float left = viewWidth * -0.5f;
float right = viewWidth * 0.5f;
glOrtho(left, right, -1, 1, 0, 1);


#4923517 Representing interval set with values for pixel values

Posted by Digitalfragment on 19 March 2012 - 09:22 PM

It's a shame this is marked as OpenGL, because it's quite easy to do in DirectX 11 (albeit you wouldn't be writing to a framebuffer, but to a UAV). Check out the "Linked List OIT" examples.

Actually, a quick Google search returned this, which is OpenGL:
http://blog.icare3d.org/2010/07/opengl-40-abuffer-v20-linked-lists-of.html


#4917531 Rendering items on a model

Posted by Digitalfragment on 28 February 2012 - 04:26 PM

#2, as it is more flexible, and in the long run it is quicker to add new features to, with less memory overhead.

If your characters are rigged, then you can just load multiple models and attach them all to the same skeleton instance. As long as the pieces are designed to fit together, everything just works (e.g. if you do body-shape scaling by affecting the skeleton, then it will affect the body model, the chestplate armour model, the gauntlets model, etc.).
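As a rough sketch of that arrangement, with hypothetical Skeleton and Model types standing in for whatever your engine provides:

#include <vector>

struct Skeleton { /* joint transforms, body-shape scaling, etc. */ };

struct Model
{
    Skeleton* skeleton = nullptr;            // shared, not owned
    void attachTo(Skeleton* s) { skeleton = s; }
};

// one skeleton instance drives every piece of the character
void assembleCharacter(Skeleton& skeleton, std::vector<Model*>& pieces)
{
    for (Model* piece : pieces)
        piece->attachTo(&skeleton);  // body, chestplate, gauntlets, ...
}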


#4916017 Laser Trails / Tracer

Posted by Digitalfragment on 23 February 2012 - 05:16 PM

Cylindrical/axial billboarding works well for this purpose. It is essentially billboarding on an arbitrary axis, so that you show as much surface area of the polygon to the camera as possible while retaining depth and direction. I used it on a past commercial product for both tracers and curving trails.

Per point on the line, take the normalized direction from the camera to the point, and cross product it against the forward direction of the camera. That gives you your cross-section axis to extrude against.
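A minimal sketch of that per-point calculation, assuming GLM; note the cross product degenerates when the point lies straight along the camera's forward axis:

#include <glm/glm.hpp>

// axis to extrude a line point along, as described above
glm::vec3 extrusionAxis(const glm::vec3& point,
                        const glm::vec3& cameraPosition,
                        const glm::vec3& cameraForward)
{
    glm::vec3 toPoint = glm::normalize(point - cameraPosition);
    return glm::normalize(glm::cross(toPoint, cameraForward));
}

// each point then expands to two vertices:
//   point + extrusionAxis(point, camPos, camFwd) * halfWidth
//   point - extrusionAxis(point, camPos, camFwd) * halfWidth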


#4905695 Destructors for Enemies and Projectiles

Posted by Digitalfragment on 23 January 2012 - 11:36 PM

I'm trying to make a simple graphical RPG game using SDL and C++ but ran into some problems a few days ago due to my lack of OOP understanding. Since I learned OOP with Java, I've been having problems with destructors and pointers. One main area where destructors are required is with enemies and projectiles. I want the bullets to be deleted when they hit their target, and I want enemies to be deleted when they don't have any health.

I'm just looking for some pseudocode as to how I would avoid memory leaks in regards to this. For example, this is a rough sketch of my enemy class:

class enemy
{
	private:
		int x, y, width, height;
		SDL_Surface* sprite;
		SDL_Rect boundingBox;
		int health;
	public:
		enemy();                           //constructor 1
		enemy(int X, int Y, int W, int H); //constructor 2
		~enemy();                          //destructor
		void setX(int X);
		void setY(int Y);
		void draw();
		void setHealth(int H);
};

The definitions of all those functions and the values of the variables should be assumed. They are all defined in the enemy.cpp, naturally.

With that available code, would I only be able to delete the sprite since it's the only pointer? Should I make all my variables pointers and have them point to different values? What is the most efficient way to do this?

Thanks!
-Brady


POD types like ints and floats can safely be left as member values. You only need to delete what you new, and only need to free() what you malloc().
Along those lines, your sprite member only needs to be cleaned up if enemy owns the reference - you wouldn't want to delete the same pointer twice!

The destructor is something akin to Java's finalizer methods, with the exception that it is invoked when you delete the object, as there is no GC system.
The requirements for pointer destruction should be considered in the same way as using a freelist in a garbage-collected language - that memory is permanently reserved until you give it back.
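For example, a minimal sketch of the destructor, assuming enemy owns sprite and it was loaded through SDL (so SDL_FreeSurface, not delete, is the matching release):

enemy::~enemy()
{
	if (sprite)
	{
		SDL_FreeSurface(sprite); // give the surface memory back to SDL
		sprite = NULL;
	}
}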


#4905316 Anti-aliasing Techniques

Posted by Digitalfragment on 22 January 2012 - 09:29 PM

You can use the derivative functions, ddx and ddy, to determine a mip level to perform antialiasing of per-pixel data. Here's some example code that simulates antialiasing, from a line renderer. It's not directly portable to your case, as it assumes the inputs are laid out the way it expects, but the concept of derivative distance is the same.

float antialias(float2 texcoord, float2 edge)
{
	// screen-space rate of change of texcoord.x in both directions
	float derivativex = ddx(texcoord).x;
	float derivativey = ddy(texcoord).x;
	float derivativeLength = sqrt(derivativex*derivativex + derivativey*derivativey) * 2;
	// fade out over roughly a pixel's worth of change; the epsilon avoids divide-by-zero
	float antialiasedEdge = saturate((1.0 - abs(edge.x)) / (derivativeLength + 0.00001f));
	return antialiasedEdge;
}


#4904094 SLI GPGPU Question

Posted by Digitalfragment on 18 January 2012 - 03:43 PM

1 & 2) When using the two cards as distinctly different video cards, you aren't restricted in their use.
So yes, you can use different cards and different threads.
However, 5) is the case where you would really want to use SLI/Crossfire, and that does require very similar, if not exactly the same, video cards.

3) You want to enumerate the device adapters (see the sketch at the end of this post): http://msdn.microsoft.com/en-us/library/windows/desktop/ff476877%28v=vs.85%29.aspx

4) If by interpolate you mean share data between them, then yes, but you may need to route data via the CPU, which can be painfully slow. I've not looked into CUDA myself very much, though.

SLI/Crossfire is largely automatic as long as your game does things in the right way; the drivers also have a few different configurations to support different means of rendering. There are plenty of whitepapers from both nVidia and ATi/AMD on the matter.
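For 3), a minimal sketch of enumerating adapters with DXGI (error handling omitted; link against dxgi.lib):

#include <dxgi.h>

void listAdapters()
{
    IDXGIFactory* factory = 0;
    CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);

    IDXGIAdapter* adapter = 0;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // desc.Description holds the adapter name; pick whichever cards you want
        adapter->Release();
    }
    factory->Release();
}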



