Everything posted by nonoptimalrobot

  1. nonoptimalrobot

    questions about singletons

This is misleading -- You may "want" only one, but it's very, very seldom that one cannot imagine a scenario where, in fact, you actually need more than one, and even more seldom when having more than one would actually be incorrect. Usually when a programmer says "I'll use a singleton because I only want one of these," what he's really saying is "I'll use a singleton because I don't want to bother making my code robust enough to handle more than one of these." If you can justify choosing convenience over correctness you're free to do so, but you've made your bed once you've chosen a Singleton and you alone will have to lie in it.

...yeah, 'correctness' in software design; something that's largely considered undefinable. We are not scientists doing good science or bad science; we are craftsmen, and what one paradigm holds over another is simply the kinds of coding decisions that are discouraged vs. encouraged. The idea is to use designs that encourage use cases that pay productivity dividends down the road and discourage use cases that become time sinks. In this sense Singletons are ALL BAD, but making your life easier down the road shouldn't be a directive that is pathologically pursued such that the productivity of the moment slows to a crawl. I guess the point I'm trying to make is that a tipping point exists between writing infinitely sustainable code and getting the job done. Here are some examples:

Singletons make sense for something like a GPU; it is a piece of hardware, there is only one of them and it has an internal state. Of course you can write code such that the GPU appears virtualized and your application can instantiate any number of GPU objects and use them at will. Indeed, I would consider this good practice, as it will force the code that relies on the GPU to be written better and should extend nicely to multi-GPU platforms. The flip side is that implementing all this properly comes at a cost in present productivity, and that cost needs to be weighed against the probability of ever seeing the benefit.

Another example is collecting telemetry or profiling data. It's nice to have something in place that tells everyone on the team: "Hey, telemetry data and performance measurements should be coalesced through these systems for the purposes of generating comprehensive reports." A Singleton does this while class declarations called Profiler and Telemetry do not. Again, you can put the burden of managing and using instances of Profiler and Telemetry onto the various subsystems of your application, and once again this may lead to better code, but if the project never lives long enough to see that 'better code' pay productivity gains then what was the point?

I don't implement Singletons either personally or professionally (for the reasons outlined by Ravyne and SiCrane), unless explicitly directed to do so, but I have worked on projects that did use them, and overall I was glad they existed as they made me more productive on the whole. In those instances the dangers of using other people's singletons in already singleton-dependent systems never came to fruition, and the time I sank into writing beautiful, self-contained, stateless and singleton-free code never paid off. Academic excellence vs. pragmatism: it's a tradeoff worth considering. Mostly I'm playing devil's advocate here, as I find blanket statements about a design paradigm being all good or all bad misleading.
Anyway, this is likely to get flamey...it already is and people aren't even disagreeing yet. :)  I'm out.
  2. nonoptimalrobot

    questions about singletons

Yep, along with the Global Access property SiCrane mentioned. The third property of the Singleton paradigm is control over the relative order of construction of singleton objects, something that's not possible with object instances that are declared as globals. As with most design paradigms, Singletons are mostly used to show intent to would-be modifiers of the system rather than to prevent bonehead mistakes by making them syntactically illegal.

Mostly agree in the academic sense, but in practice I would say they have their uses, stemming primarily from convenience. Sometimes you truly only want one instance of something running around (assert tracker, GPU wrapper, etc.) in which case having a single point of access keeps things simple. Avoiding the threading pitfalls can be done by properly constructing the singleton to deal with multi-threaded access and by being vigilant about keeping singleton access out of code that doesn't absolutely need it.
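To make the "properly constructing" part concrete, here is a minimal sketch of a thread-safe singleton, assuming C++11 or later (which guarantees a function-local static is initialized exactly once, even under concurrent first access); the AssertTracker name and its methods are purely illustrative:

    // Minimal sketch: thread-safe lazy construction via a local static.
    #include <mutex>
    #include <string>
    #include <vector>

    class AssertTracker
    {
    public:
        static AssertTracker& Instance()
        {
            static AssertTracker s_instance; // constructed once, on first use
            return s_instance;
        }

        void Record(const std::string& message)
        {
            std::lock_guard<std::mutex> lock(m_mutex); // guard mutable state
            m_messages.push_back(message);
        }

        AssertTracker(const AssertTracker&) = delete;            // enforce "only one"
        AssertTracker& operator=(const AssertTracker&) = delete;

    private:
        AssertTracker() {} // private ctor: no outside instantiation

        std::mutex m_mutex;
        std::vector<std::string> m_messages;
    };

The relative construction order mentioned above falls out for free: if one singleton's constructor calls another's Instance(), the callee is simply constructed first.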
  3. nonoptimalrobot

    a map<T,T> as a default parameter...

You would need to do this:

    bool createWindowImp(DemString winName,   // window name
                         DemString winTitle,  // window title
                         DemUInt   winWidth,  // window width
                         DemUInt   winHeight, // window height
                         ExtraParameters params = ExtraParameters());

Which will push an instance of ExtraParameters (which will be empty) onto the stack and pass it on to the function. The instance will get popped off the stack after the function's scope terminates. A more efficient approach would look like this:

    bool createWindowImp(DemString winName,   // window name
                         DemString winTitle,  // window title
                         DemUInt   winWidth,  // window width
                         DemUInt   winHeight, // window height
                         ExtraParameters* params = 0);

Inside createWindowImp you will need to check whether params is non-null and proceed to dereference it and extract values if so.

[EDIT]

I see ExtraParameters is a std::map; in that case, IF you want to go with the first option you will definitely want to tweak it a bit:

    bool createWindowImp(DemString winName,   // window name
                         DemString winTitle,  // window title
                         DemUInt   winWidth,  // window width
                         DemUInt   winHeight, // window height
                         const ExtraParameters& params = ExtraParameters());

If you don't pass by reference then each time the function is called a temporary std::map will be created for the params variable and a deep copy will be performed between it and whatever you happen to be passing in; that equates to a lot of memory that gets newed only to be promptly deleted. It's good practice to toss the 'const' in there too, unless you want to pass info out of createWindowImp via params (usually frowned upon).
4. Not necessarily, but as DigitalFragment pointed out, there are many advantages to doing so; either way the concept is the same. If your hardware supports texture fetches in the vertex shader and floating point texture formats, it's generally a good idea (such features are common these days).
5. The skeleton is just a big list of matrices or quaternion-translation pairs that are usually stored in a constant buffer. "Sending the bone transforms" is just a matter of updating a GPU resource; this only needs to happen once per frame. To use that data you simply bind it to the GPU; this isn't free, but it's generally very cheap as it doesn't involve moving data from the CPU to the GPU. After binding the resource, the shaders invoked by subsequent draw calls will have access to the skeleton data.

Generally speaking you should pack your entire skeleton into a single resource and update it once per frame, regardless of how many draw calls it will take to render the skinned mesh.
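As a concrete illustration, here is a minimal sketch assuming D3D11; MAX_BONES, the struct layout and the constant buffer slot are all illustrative, and boneCB is assumed to have been created elsewhere with D3D11_BIND_CONSTANT_BUFFER and default usage:

    #include <d3d11.h>
    #include <DirectXMath.h>

    const UINT MAX_BONES = 128;

    struct BonePalette
    {
        DirectX::XMFLOAT4X4 bones[MAX_BONES]; // one matrix per joint
    };

    void UploadSkeleton(ID3D11DeviceContext* context,
                        ID3D11Buffer*        boneCB,
                        const BonePalette&   palette)
    {
        // One CPU->GPU update per frame, no matter how many draw calls follow.
        context->UpdateSubresource(boneCB, 0, nullptr, &palette, 0, 0);

        // Binding is the cheap part: no data moves, subsequent draws just
        // gain access to the skeleton. Slot 1 is an arbitrary choice here.
        context->VSSetConstantBuffers(1, 1, &boneCB);
    }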
  6. nonoptimalrobot

    D3DXMatrixLookAtLH internally

This: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205342(v=vs.85).aspx contains a description of the algorithm. Note that the matrix is identical to an object-to-world space transform that has been inverted.

The gist of the algorithm is to compute the local x, y and z axes of the camera in world space and then drop them into a matrix such that the camera is placed at the origin.

Transforms containing rotations and translations only are constructed as follows:

    | ux uy uz 0 |   // x-axis of basis (ux, uy, uz)
    | vx vy vz 0 |   // y-axis of basis (vx, vy, vz)
    | nx ny nz 0 |   // z-axis of basis (nx, ny, nz)
    | tx ty tz 1 |   // origin of basis (tx, ty, tz)

That's a simple world transform; you are building a view matrix, so you actually want the inverse, which is computed as follows:

    | ux         vx         nx         0 |   // u = (ux, uy, uz)
    | uy         vy         ny         0 |   // v = (vx, vy, vz)
    | uz         vz         nz         0 |   // n = (nx, ny, nz)
    | -(t dot u) -(t dot v) -(t dot n) 1 |   // t = (tx, ty, tz)
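Here is that construction as code: a minimal sketch in the same left-handed, row-vector convention as above (mirroring what D3DXMatrixLookAtLH computes), with hand-rolled vector helpers for self-containment:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                           a.z * b.x - a.x * b.z,
                                           a.x * b.y - a.y * b.x }; }
    Vec3  normalize(Vec3 a)     { float l = std::sqrt(dot(a, a));
                                  return { a.x / l, a.y / l, a.z / l }; }

    void LookAtLH(Vec3 eye, Vec3 at, Vec3 up, float m[4][4])
    {
        Vec3 n = normalize(sub(at, eye)); // camera z-axis in world space
        Vec3 u = normalize(cross(up, n)); // camera x-axis
        Vec3 v = cross(n, u);             // camera y-axis

        // Transposed basis plus the dotted translation row: the inverse
        // of the camera's world transform, exactly as derived above.
        m[0][0] = u.x;          m[0][1] = v.x;          m[0][2] = n.x;          m[0][3] = 0.0f;
        m[1][0] = u.y;          m[1][1] = v.y;          m[1][2] = n.y;          m[1][3] = 0.0f;
        m[2][0] = u.z;          m[2][1] = v.z;          m[2][2] = n.z;          m[2][3] = 0.0f;
        m[3][0] = -dot(eye, u); m[3][1] = -dot(eye, v); m[3][2] = -dot(eye, n); m[3][3] = 1.0f;
    }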
  7. nonoptimalrobot

    How does depth bias work in DX10 ?

This funny looking quantity:

    2**(exponent(max z in primitive) - r)

doesn't contain a typo. The double star should be interpreted as a power:

    pow(2, exponent(max z in primitive) - r)

and 'exponent' extracts the base-2 exponent of the value's floating point representation; that is, if max z = m * 2^E with the mantissa m on [1, 2), then exponent(max z) = E. Personally, I think the formula would have been clearer written as:

    2^(exponent(MaxDepthInPrim) - r)

MaxDepthInPrim is the maximum depth value (in projection space, so values on [0, 1]) of the current primitive being rendered. If you are drawing a triangle with three vertices that have depth values of 0.5, 0.75 and 0.25, then MaxDepthInPrim will be equal to 0.75.

The constant 'r' is the number of mantissa bits used by the floating point representation of your depth buffer. If you are using a float32 format then r = 23, if you are using a float16 format then r = 10 and if you are using a float64 format then r = 52.

The utility behind the formula: 2^(exponent(MaxDepthInPrim) - r) is the distance between adjacent representable floating point values in your depth buffer at that depth, so the DepthBias value you set is scaled in units of 'smallest resolvable depth steps' for the primitive being rendered rather than in raw depth values.

[EDIT] typed clip-space when I should have typed projection-space.
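Here is a small sketch of that reading of the formula; it assumes exponent() means the float's base-2 exponent, and compensates for frexp() returning a mantissa in [0.5, 1) rather than [1, 2):

    #include <cmath>
    #include <cstdio>

    // Size of one representable depth step at the given depth value,
    // i.e. 2^(exponent(maxZ) - r) with r = 23 for a float32 depth buffer.
    float DepthBiasUnit(float maxZInPrimitive)
    {
        const int r = 23;                // mantissa bits of float32
        int e;
        std::frexp(maxZInPrimitive, &e); // maxZ = m * 2^e, m in [0.5, 1)
        return std::ldexp(1.0f, (e - 1) - r); // e - 1: mantissa back in [1, 2)
    }

    int main()
    {
        // At maxZ = 0.75 this prints 2^-24, the float32 spacing near 0.75.
        std::printf("%g\n", DepthBiasUnit(0.75f));
    }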
8. This is how I've done it in the past:

Grass "clumps" are placed in the world individually or via a spray brush in the world editor. The brush and placement tool have various options to make this easy, including an 'align to terrain' behavior and various controls over how sizes, orientations and texture variations are handled. This process generates a massive list of positions, scales and orientations (8 floats per clump). There are millions of grass clumps, so storing all this raw won't do...

At export time the global list of grass clumps is partitioned into a regular 3d grid. Each partition of the grid has a list of the clumps it owns and quantizes the positions, scales and orientations into 2 32-bit values. The first value contains four 8-bit integers: the first 3 ints represent a normalized position w.r.t. the partition's extents and the 4th is just a uniform scale factor. The second 32-bit value is a quaternion with its components quantized to 8-bit integers (0 = -1 and 255 = 1).

At runtime the contents of nearby partitions are decompressed on the fly into data that's amenable to hardware instancing. This was a long time ago, so it was important the vertex shader didn't spend too much time unpacking data. These days, with bit operations available on the GPU, you might be able to use the actual compressed data directly and not need to manage compressing and uncompressing chunks in real-time; if you do, use a background thread.

It worked pretty well and was extended to support arbitrary meshes, so the initial concept of a grass clump evolved to include pebbles, flowers, sticks, debris, etc. Any mesh that was static and replicated many times throughout the world was a good candidate for this system, as long as its vertex count was low enough to justify the loss of the post-transform cache caused by the type of HW instancing being used.
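A sketch of the two 32-bit encodings described above; the function and parameter names are mine, and a real exporter would differ in the details:

    #include <algorithm>
    #include <cstdint>

    // Quantize a value expected in [0, 1] to an 8-bit integer.
    static uint8_t QuantizeUnorm8(float v01)
    {
        int q = static_cast<int>(v01 * 255.0f + 0.5f);
        return static_cast<uint8_t>(std::min(std::max(q, 0), 255));
    }

    // (x01, y01, z01): clump position normalized to the partition's extents;
    // scale01: uniform scale normalized to the allowed scale range.
    uint32_t PackPositionScale(float x01, float y01, float z01, float scale01)
    {
        return  static_cast<uint32_t>(QuantizeUnorm8(x01))
             | (static_cast<uint32_t>(QuantizeUnorm8(y01))     << 8)
             | (static_cast<uint32_t>(QuantizeUnorm8(z01))     << 16)
             | (static_cast<uint32_t>(QuantizeUnorm8(scale01)) << 24);
    }

    // Quaternion components are remapped from [-1, 1] onto [0, 1] first,
    // so 0 encodes -1 and 255 encodes +1 as described above.
    uint32_t PackQuaternion(float qx, float qy, float qz, float qw)
    {
        return  static_cast<uint32_t>(QuantizeUnorm8(qx * 0.5f + 0.5f))
             | (static_cast<uint32_t>(QuantizeUnorm8(qy * 0.5f + 0.5f)) << 8)
             | (static_cast<uint32_t>(QuantizeUnorm8(qz * 0.5f + 0.5f)) << 16)
             | (static_cast<uint32_t>(QuantizeUnorm8(qw * 0.5f + 0.5f)) << 24);
    }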
  9. nonoptimalrobot

    Distortion/Heat Haze FX

The 'cascades' are generated by slicing up the view frustum with cutting planes that are perpendicular to the view direction. Objects that use distortion are rendered in the furthest distortion cascade first and the nearest distortion cascade last. The frame buffer is resolved into a texture between rendering each cascade. Folding water into this is definitely ad hoc. The cascades get split into portions that are above the water plane and below the water plane. Render the cascades below normally, then resolve your frame buffer and move on to render the cascades that are above the water plane. If the camera is below the water, reverse the order. Ugly, huh? Obviously none of this solves the problem, it just mitigates artifacts.

Yeah, those posts (especially the second one) are misinformation, or at least poorly presented information. Alpha blending, whether using pre-multiplied color data or not, is still multiplying the contents of your frame buffer by 1-alpha, so the result is order dependent. Consider rendering to the same pixel 3 different times using an alpha pre-multiplied texture in back-to-front order. Each pass uses color and alpha values of (c0, a0), (c1, a1) and (c2, a2) respectively, and 'color' is the initial value of the frame buffer. Note I'm writing aX instead of 1-aX here because it requires fewer parentheses and is therefore easier to visually analyze; this doesn't invalidate the assertion.

    Pass 1: c0 + color * a0
    Pass 2: c1 + (c0 + color * a0) * a1
    Pass 3: c2 + (c1 + (c0 + color * a0) * a1) * a2 = result of in-order rendering

Now let's reverse the order:

    Pass 1: c2 + color * a2
    Pass 2: c1 + (c2 + color * a2) * a1
    Pass 3: c0 + (c1 + (c2 + color * a2) * a1) * a0 = result of out-of-order rendering

Unfortunately:

    c2 + (c1 + (c0 + color * a0) * a1) * a2 != c0 + (c1 + (c2 + color * a2) * a1) * a0

You still need to depth sort transparent objects regardless of how you choose to blend them... unless of course you are just doing additive blending, then it doesn't matter.
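Plugging toy numbers into the two expansions makes the order dependence concrete:

    #include <cstdio>

    // Blend model per pass, as above: result = c + dst * a.
    int main()
    {
        float color = 0.5f; // initial frame buffer value
        float c0 = 0.2f, a0 = 0.5f;
        float c1 = 0.4f, a1 = 0.25f;
        float c2 = 0.1f, a2 = 0.75f;

        float inOrder    = c2 + (c1 + (c0 + color * a0) * a1) * a2;
        float outOfOrder = c0 + (c1 + (c2 + color * a2) * a1) * a0;

        std::printf("in order:     %f\n", inOrder);    // 0.484375
        std::printf("out of order: %f\n", outOfOrder); // 0.459375
    }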
  10. nonoptimalrobot

    Mapping OpenGL Coordinates to Screen Pixels

      OpenGL never had this problem!?  Sigh.  I've been using the wrong API all these years...
11. Your code is a little strange but it seems to work; nothing jumps out as wrong.

Off topic note: it is a little strange to render as frequently as possible but lock updating at 60Hz. If nothing moved, why render again? The generated image will be the same. Don't get me wrong, games do this all the time, but usually in a slightly more complicated way. For example: if updating happens at 30Hz and rendering happens at 60Hz, then the renderer can linearly interpolate between (properly buffered) frame data generated by the updater to give the illusion of the entire game running at 60 fps.

Back to the problem at hand...

This is a valid observation (something is amiss) but using fps to measure relative changes in performance can be misleading. A change from 57-58 fps to 42-43 fps means each frame is taking approximately 6.1 milliseconds longer to finish. A change from 6k-7k fps to 125-130 fps means each frame is taking approximately 7.6 milliseconds longer to finish. Not as drastic as it first seems. Think of fps measurements as velocities; they are inversely proportional to time. If you have a race between two cars, it doesn't make a lot of sense to say car A finished the race at 60 mph and car B finished the race at 75 mph. What you really want to know is the time it took each car to finish the race.

Agreed, but I'm not sure why your performance is tanking. Hopefully someone who knows a bit more about OpenGL can point to something that's a blatant performance problem.
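The frame-time arithmetic above, spelled out (using the midpoints of the quoted fps ranges):

    #include <cstdio>

    // fps is 1/time, so compare milliseconds per frame, not frames per second.
    float FpsToMs(float fps) { return 1000.0f / fps; }

    int main()
    {
        std::printf("57.5 fps = %.2f ms, 42.5 fps = %.2f ms, delta = %.1f ms\n",
                    FpsToMs(57.5f), FpsToMs(42.5f),
                    FpsToMs(42.5f) - FpsToMs(57.5f));    // ~6.1 ms
        std::printf("6500 fps = %.2f ms, 127.5 fps = %.2f ms, delta = %.1f ms\n",
                    FpsToMs(6500.0f), FpsToMs(127.5f),
                    FpsToMs(127.5f) - FpsToMs(6500.0f)); // ~7.7 ms
    }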
  12. nonoptimalrobot

    Bullet Impact on Mesh Edge

I'm curious as well.

At one point I was dealing with a game that had highly detailed meshes used for rendering and exceptionally simplified meshes used for collision. This meant the normal solution of projecting and clipping your decal against the collision geometry resulted in a decal that was rarely consistent with the topology of the rendered object. Obviously you can solve this by giving your decal system access to the render geometry for the purposes of projecting and clipping, but this wasn't practical; the computational geometry code wanted triangles in a specific format that didn't mix well with rendering, so there was a huge memory overhead. Our renderer was deferred, which meant easy access to the depth buffer and a normal buffer; for this reason the decision was made to render decals as boxes and use various tricks in the pixel shader to create the illusion of decals being projected onto the scene. Essentially UV coordinates would be tweaked based on the result of a ray cast. It worked fairly well but had some view dependent artifacts (sliding) that were distracting if you zoomed in on and rotated around a decal that had been projected onto a particularly complicated depth buffer topology. There was also the issue of two dynamic objects overlapping: if one of the objects had a decal stuck to it in the region of the overlap, the decal would appear on the other object; this only happened when the collision response was faulty, so it wasn't that big a deal.

Anyway, waiting for Hodgman's insight...
  13. nonoptimalrobot

    Render queue design decision

Sounds like you can fix this by tweaking the way your renderer works. Why not hand off the entire Renderable component to the renderer, which can then ask for the RenderToken vector (via some interface in Renderable) for the purposes of sorting?
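Something like this minimal sketch, where Renderable, RenderToken and the accessor name are all hypothetical:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct RenderToken
    {
        uint64_t sortKey; // pass/material/depth bits packed for sorting
        // ... whatever payload a draw submission needs ...
    };

    class Renderable
    {
    public:
        // The renderer pulls tokens through this interface for sorting,
        // instead of the component pushing copies into the queue itself.
        const std::vector<RenderToken>& GetRenderTokens() const { return m_tokens; }

    private:
        std::vector<RenderToken> m_tokens;
    };

    void BuildQueue(const std::vector<Renderable*>& components,
                    std::vector<RenderToken>&       queue)
    {
        for (const Renderable* r : components)
        {
            const std::vector<RenderToken>& tokens = r->GetRenderTokens();
            queue.insert(queue.end(), tokens.begin(), tokens.end());
        }
        std::sort(queue.begin(), queue.end(),
                  [](const RenderToken& a, const RenderToken& b)
                  { return a.sortKey < b.sortKey; });
    }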
14. This. Class hierarchies have a way of spiraling out of control towards the end of a project, making bug hunting tedious just as it becomes the most important task. There is also the runtime flexibility it allows, which is handy given that live-updating your world via an external or embedded editor has huge implications for the productivity of the design team. Adding and removing components on an entity and propagating those changes to a running instance of the game is exceptionally awkward to do with a standard OOP design.

I think there is room for both approaches. The Entity-Component model works best at a high level, the extreme end of that being having only a single entity GameObject that contains components. The components themselves are built from a usual class hierarchy of Render(ables), Update(ables), Stream(ables), etc. In this scenario a game object representing a vehicle would have a component for the engine, the wheels, the seats, etc. In turn, Engine would inherit from IUpdatable but not much else; Wheel would inherit from IUpdatable and IRenderable, etc. This is the approach I favor, and it works well as long as the team as a whole is clear about where one design paradigm ends and the other begins and what type of functionality belongs in which system. It's easy for people not privy to the intent of the two systems to start blurring the line and eventually create a mess that must be cleaned up later. Code reviews help with this.
15. Holy crap, man! Couldn't you just create separate ctors for normal construction and deserialization construction? I mean, I get the OnPlacedInWorld() part, but it sounds like OnCreate() and OnStartup() could just be normal ctors that call a common member function after the serializable values are set. Why do you guys avoid using the ctor? Or, rather, if it needs to be plugged in this way, why not a pair of factory functions?

This is pretty standard. It's not that you can't achieve the same with multiple constructors, but breaking it up into multiple function calls or passes allows a few different things:

1) External systems can run code in-between the initialization passes. This is almost a necessity, especially for games.

2) Code reuse. Oftentimes the default constructor does stuff that all the other initialization functions want done. If everything was broken out into multiple, overloaded constructors then a lot of code would get copied across them.

3) Clear intent. OnPlacedInWorld(), OnCreate() and OnStartup() tell would-be modifiers of the class where to put various bits of initialization code, while the systems using the class know when to call what.

4) Performance. As mentioned, the minimally filled out default constructor allows faster serialization, as you can avoid doing things that will simply be undone by the serializer.
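A bare-bones sketch of the pattern being described; the phase names come from the thread, everything else is illustrative:

    class Entity
    {
    public:
        Entity() {} // minimal ctor: cheap, serializer-friendly (point 4)

        virtual void OnCreate()        {} // after serializable values are set
        virtual void OnStartup()       {} // all entities exist: safe to resolve
                                          // references to other objects
        virtual void OnPlacedInWorld() {} // world is live: register with
                                          // physics, rendering, etc.
    };

    // The owning system drives the phases and can run its own code
    // in-between them (point 1):
    void SpawnFromLevelData(Entity& e /*, serialized data ... */)
    {
        // ... deserialize fields into e ...
        e.OnCreate();
        // ... patch up cross-entity references ...
        e.OnStartup();
        // ... insert e into the world's spatial structures ...
        e.OnPlacedInWorld();
    }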
16. You can get it to work; you can even do so without breaking the implementation out into multiple files. However, from the MSDN:

It triggers a warning that most developers elevate to an error. My understanding is that the C++ language specification doesn't insist that 'this' be usable until the class's user-defined constructor enters scope, but it just so happens that the Visual Studio implementation defines it before churning through the initializer list, so you get off with a warning.
  17. nonoptimalrobot

    Distortion/Heat Haze FX

For a deferred renderer, things usually get arranged like this:

1) Render solid objects into the depth and g-buffers
2) Generate shadow maps
3) Render lights
4) Resolve the frame buffer to a texture that can be sampled, called S
5) Render distortion effects by sampling S
6) Render transparent objects (usually via forward lighting)

As you mentioned, this isn't really a generic solution; it just works okay most of the time. A lot of games only take it this far, and artifacts that result from having distortion effects in front of transparent objects that are in front of other distortion effects are either accepted as inevitable or dealt with by a graphics programmer conning a designer into tweaking the layout of the problematic scene. Add depth-of-field effects to the mix and things can get ugly fast.

A generic solution is to lump distortion and transparency into the same object class and resolve the current frame buffer into a texture just before drawing each object. This is extremely slow, as you are resolving the scene to a texture many times per frame.

More sophisticated solutions group distorting and transparent objects into slabs or cascades and then only resolve the scene to a texture between rendering cascades. When it comes to water... well, water is just given its own cascade. Admittedly, while this doesn't make much sense geometrically (water is usually a giant plane that crosses all the cascades), it works well most of the time.

Various adaptations of depth peeling, along with the compute shader analog that relies on MRTs and sorting data by depth, can be leveraged as well to solve this problem.
18. First of all: if you absolutely must do this then no, not really, but I can pretty much guarantee you there is a better way of structuring your code. You should consider avoiding the cyclic dependency altogether. Such a simple loop might seem innocuous at first, but over time, in a larger project, this can get you into serious trouble and is a nightmare to untangle.

Second of all: using std::unique_ptr is going to get you into trouble. Game contains an instance of Thingy, and Thingy contains a std::unique_ptr<Game> to the Game instance that owns it. When Game's destructor goes off, the destructor for Thingy will go off as well, triggering the destructor for the std::unique_ptr<Game> instance, which will in turn cause the destructor for Game to go off. Stack overflow. If you are new-ing an instance of std::unique_ptr<Game> inside of Thingy, as your phrasing suggests, the stack won't overflow, but there is little point in having a pointer to a smart pointer... all the advantages are lost; you might as well use a regular pointer.

[edit] Forgot to mention: if you do new a std::unique_ptr<Game> in Thingy, you need to delete it or you will leak memory, but doing so will cause the Game and Thingy destructors to call each other recursively until the stack overflows. You just can't win.
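For what it's worth, here is a minimal sketch of the "regular pointer" alternative: ownership flows one way (Game owns Thingy) and Thingy keeps a non-owning back-pointer, so destruction never recurses. std::make_unique assumes C++14:

    #include <memory>

    class Game; // forward declaration also breaks the header cycle

    class Thingy
    {
    public:
        explicit Thingy(Game* owner) : m_owner(owner) {}

    private:
        Game* m_owner; // non-owning: Thingy never deletes it
    };

    class Game
    {
    public:
        Game()
        {
            // Done in the ctor body rather than the initializer list to
            // sidestep the 'this'-in-initializer-list warning.
            m_thingy = std::make_unique<Thingy>(this);
        }

    private:
        std::unique_ptr<Thingy> m_thingy; // Game is the single owner
    };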
  19. nonoptimalrobot

    Mapping OpenGL Coordinates to Screen Pixels

...forgot two things.

1) Here is how a perspective transform is constructed for OpenGL: http://www.songho.ca/opengl/gl_projectionmatrix.html. You will want to consult this while diagnosing bugs in your math.

2) The way UV coordinates are used to address pixels in a texture is not always the same as the way NDC values are used to address pixels on the screen. I believe Direct3D 10 finally made these two addressing modes consistent, so the interval [-1, 1] addresses pixels on screen in the exact same way the interval [0, 1] addresses pixels in an identically sized texture. I'm not sure what iteration of OpenGL fixed this inconsistency, if it happened at all. Search for "mapping texels to pixels" to figure out how to rectify the different addressing modes for whatever version of OpenGL you are using.
  20. nonoptimalrobot

    Mapping OpenGL Coordinates to Screen Pixels

If you don't want to switch from a perspective projection to an orthographic projection for the drawing of 2d elements, you'll need to crunch the numbers in your projection matrix to generate a transform that converts screen coordinates to view space coordinates. Note that it doesn't make a lot of sense to use a perspective transform to render 2d screen elements with pixel perfect alignment. If you are dead set on a perspective transform...

    // Transforming a view space point (vx, vy, vz, 1) into clip space looks like this:
    | sx  0  0  0 ||vx|   | sx * vx      |
    |  0 sy  0  0 ||vy| = | sy * vy      |
    |  0  0 sz tz ||vz|   | sz * vz + tz |
    |  0  0 -1  0 || 1|   | -vz          |

    // The corresponding NDC values are computed as follows:
    |nx|   | (sx * vx) / -vz      |
    |ny| = | (sy * vy) / -vz      |
    |nz|   | (sz * vz + tz) / -vz |

    // You want to go from NDC (nx, ny, nz) to view space (vx, vy, vz).
    // Fortunately this is pretty simple...
    vz = -tz / (nz - sz)   <-- Evaluate this first!
    vx = (-vz * nx) / sx
    vy = (-vz * ny) / sy

    // Note that (sx, sy, sz, tz) were all pulled from your perspective transform.

Converting a pixel coordinate to NDC is a simple matter of scaling and biasing one interval onto another...

    Range of x values in NDC:    [-1, 1]  (1 is the right of the screen and -1 is the left)
    Range of y values in NDC:    [-1, 1]  (1 is the top of the screen and -1 is the bottom)
    Range of x values in pixels: [0, ScreenWidth-1]
    Range of y values in pixels: [0, ScreenHeight-1]

I'm sure you can figure that transform out yourself.
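In case it helps, here is the whole round trip as a sketch; it assumes pixel row 0 is the top of the screen, and (sx, sy, sz, tz) are pulled out of your projection matrix as above:

    struct Vec3 { float x, y, z; };

    // NDC -> view space, inverting the projection as derived above.
    Vec3 NdcToViewSpace(float nx, float ny, float nz,
                        float sx, float sy, float sz, float tz)
    {
        float vz = -tz / (nz - sz); // evaluate this first!
        float vx = (-vz * nx) / sx;
        float vy = (-vz * ny) / sy;
        return { vx, vy, vz };
    }

    // Pixel -> NDC: scale and bias one interval onto the other,
    // flipping y so that pixel row 0 lands at the top (+1).
    void PixelToNdc(float px, float py, float screenW, float screenH,
                    float& nx, float& ny)
    {
        nx =  2.0f * (px / (screenW - 1.0f)) - 1.0f;
        ny = -2.0f * (py / (screenH - 1.0f)) + 1.0f;
    }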
21. I would exploit the behavior of constructors and destructors to manage the list for you. Here is a toy example:

First the Collectable class; primary class types will inherit from this:

    template<class tType>
    class Collectable
    {
    public:
        Collectable(void)
        {
            m_nMyIndex = (int32)ms_lstObjects.size();
            ms_lstObjects.push_back(static_cast<tType*>(this));
        }

        ~Collectable(void)
        {
            // Swap-and-pop: move the last object into this object's slot.
            if(ms_lstObjects.size() > 1)
            {
                tType* pTemp = ms_lstObjects.back();
                pTemp->m_nMyIndex = m_nMyIndex;
                ms_lstObjects[m_nMyIndex] = pTemp;
            }
            ms_lstObjects.pop_back();
        }

        static int32 NumObjects(void) { return (int32)ms_lstObjects.size(); }
        static tType* GetObject(int32 a_nIndex) { return ms_lstObjects[a_nIndex]; }

    private:
        int32 m_nMyIndex;
        static std::vector<tType*> ms_lstObjects;
    };

    template<class tType>
    std::vector<tType*> Collectable<tType>::ms_lstObjects;

Using Collectable would look something like this:

    class ICollidable : public Collectable<ICollidable>
    {
    public:
        virtual void OnCollision(ICollidable* a_pObject) = 0;
    };

    class IUpdatable : public Collectable<IUpdatable>
    {
    public:
        virtual void Update(float32 a_fDeltaT) = 0;
    };

    bool Collide(ICollidable* a_pObjectA, ICollidable* a_pObjectB)
    {
        bool bCollision = true; // perform collision code here
        if(bCollision) a_pObjectA->OnCollision(a_pObjectB);
        if(bCollision) a_pObjectB->OnCollision(a_pObjectA);
        return bCollision;
    }

Finally, game side code will take the convenient form:

    // Update...
    for(int32 i = 0; i < IUpdatable::NumObjects(); i++)
        IUpdatable::GetObject(i)->Update(1.0f / 30.0f);

    // Collision detection and response...
    for(int32 i = 0; i < ICollidable::NumObjects(); i++)
        for(int32 j = i+1; j < ICollidable::NumObjects(); j++)
        {
            ICollidable* pObjectI = ICollidable::GetObject(i);
            ICollidable* pObjectJ = ICollidable::GetObject(j);
            Collide(pObjectI, pObjectJ);
        }

    // Render
    for(int32 i = 0; i < IRenderable::NumObjects(); i++)
        IRenderable::GetObject(i)->Render();

With this type of setup you can new and delete objects at will and not have to worry about managing lists of pointers for them. There are subtleties involved; the ugliest one is that you can't call delete on an object of a particular type while using the Collectable interface to iterate over that type without adjusting the current iteration index. This can be dealt with easily enough by deactivating objects and deferring the actual deletion until the end of the frame. Since each object knows where it is in each list, you can easily shuffle or defrag the list from time to time (while executing deletions, for example) so in-order iterations traverse linearly through memory, improving cache performance. Another advantage of this setup is that Collectable forces you to deal with a single vtable at a time (your initial stab at the problem has this advantage as well), also helping with cache performance.
22. Ah, yes, but the terms "colouring algorithm" and "simulated annealing" in the first post are not jiving well with the mental model I have of the problem you are trying to solve. In any case, occlusion queries are pretty simple:

1) Render some things to populate the depth buffer
2) Issue a begin occlusion query command
3) Render some more stuff
4) Issue an end occlusion query command
5) Get the result 'N' from the query, which tells you the number of samples that passed the depth buffer test

To start out, you will have to render your mesh between a begin and end query command in such a way that you know every pixel will pass, so you know its total pixel count; down the road, 'N' directly gives you the area left uncovered by your disks (and the total minus 'N' gives the covered area). You will want to use an orthographic projection so each pixel, regardless of its depth, has the same projected area.

In my experience, using compute shaders to solve a generic problem is never a performance win unless you aggressively restructure the algorithm to accommodate the various quirks of the GPU architecture. The amount of work it takes to optimize a compute shader solution is often similar to the amount of work it takes to recast your algorithm into a rasterization task. For reasons that are not entirely clear to me, GPUs nudge closer to their peak theoretical performance when you are blasting triangles instead of running compute shaders, making a rasterization method a clear winner in a lot of cases. This may not be true with workstation cards, and OpenCL does a good job of parallelizing your entire machine (even the integrated graphics), so there might be performance wins in that department.
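For reference, here is a sketch of steps 2 through 5 assuming D3D11 (OpenGL's GL_SAMPLES_PASSED queries are analogous); real code would overlap the wait with other work instead of spinning:

    #include <d3d11.h>

    UINT64 CountVisibleSamples(ID3D11Device* device, ID3D11DeviceContext* ctx)
    {
        D3D11_QUERY_DESC desc = {};
        desc.Query = D3D11_QUERY_OCCLUSION;

        ID3D11Query* query = nullptr;
        if (FAILED(device->CreateQuery(&desc, &query)))
            return 0;

        ctx->Begin(query); // step 2
        // ... issue the draw calls to be measured (step 3) ...
        ctx->End(query);   // step 4

        // Step 5: the result is the number of samples that PASSED the
        // depth/stencil test while the query was active.
        UINT64 samplesPassed = 0;
        while (ctx->GetData(query, &samplesPassed, sizeof(samplesPassed), 0) == S_FALSE)
            ; // busy-wait for the GPU; don't do this in production

        query->Release();
        return samplesPassed;
    }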
23. Assuming I'm following you (and I might not be), it sounds like you can turn this into a rendering problem using occlusion queries.
  24. nonoptimalrobot

    Custom 2D geometry from a list of points.

      This is interesting and much better than what I proposed as it avoids the costly segment vs segment intersection tests.  Do you know what it's called?
  25. nonoptimalrobot

    Custom 2D geometry from a list of points.

I'm not an expert at this but I have solved this problem before. I would recommend against using quads as a conceptual primitive for the algorithm; it will only make your life harder. The first thing you need to do is establish a convention: specify your points in order such that they traverse the boundary of the shape in counterclockwise order. This is needed to consistently distinguish the inside of the shape from the outside, as well as to provide a convenient mathematical property for the edges. If you have an edge vector (x, y), then the 2d cross product of (x, y) produces a vector that points outside of the shape. The 2d cross product is defined as:

    Cross2D(x, y) = (y, -x)

The gist of the tessellation algorithm is simple; it looks like this:

    Initialize the set S to the edges on the border of your shape
    Initialize the triangle set T to empty
    For each edge (a, b) on the border of your shape
    {
        For each vertex p on the border of your shape (that is not a or b)
        {
            triangle t = triangle(a, b, p)
            vector3 n = (b.x-a.x, b.y-a.y, 0) cross (p.x-a.x, p.y-a.y, 0)
            if n.z > 0 AND edges (b, p) and (p, a) do not intersect any of the edges in the set S
            {
                1) add t to the triangle list T
                2) add the edges (b, p) and (p, a) to the edge set S
            }
        }
    }
    The set T now contains the set of triangles that tessellates your shape

This is the most basic algorithm for triangulating a 2d polygon. The Wikipedia page cowsarenotevil pointed out will lead you to more intelligent algorithms.