
Crowley99

Member Since 02 Feb 2011
Offline Last Active Aug 30 2013 01:00 AM

Posts I've Made

In Topic: QuickSort algorithm

24 April 2013 - 01:30 AM

- Is it possible to have better performance? How?

In addition to Hodgman's implementation changes:
1) Choose a random pivot element rather than the first. This greatly reduces the probability of hitting O(n^2) behaviour on poorly distributed (semi-sorted to fully sorted) input data. You may also want to compute the median of the first, middle and last elements, or the median of a random subset (a trade-off between better medians and more computation to get them).
2) Drop to a simpler sort when the data gets small enough, e.g. insertion sort at around 8-16 elements, to reduce per-call overhead. A less conservative switch (which, IIRC, std::sort does) is to fall back to heap sort once either a certain memory footprint or recursion depth is reached. This bounds the stack depth, and heap sort's cache-unfriendly memory access patterns (which hurt it on large data sets) matter far less on the small subranges where the switch happens. Points 1 and 2 are sketched in code after this list.
3) If you are using primitive types, use a SIMD sorting network when the data set for a particular recursion is small enough.
4) Separate your sorting keys from your data for cache and swap efficiency.
5) Sort multi-threaded.
6) Sort distributed.
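
Here is a minimal sketch of points 1 and 2, assuming random-access iterators over a comparable element type. It uses the median-of-three variant of point 1 plus the insertion-sort cutoff of point 2; the names and the cutoff value of 16 are illustrative choices, not taken from any particular library.

#include <algorithm>
#include <cstddef>

namespace sketch {

constexpr std::ptrdiff_t kInsertionCutoff = 16;

template <typename It>
void insertionSort(It first, It last) {
    if (last - first < 2) return;
    for (It i = first + 1; i < last; ++i) {
        auto key = std::move(*i);
        It j = i;
        for (; j > first && key < *(j - 1); --j)
            *j = std::move(*(j - 1));
        *j = std::move(key);
    }
}

template <typename It>
void quickSort(It first, It last) {
    while (last - first > kInsertionCutoff) {
        // Median-of-three pivot from the first, middle and last elements.
        It mid = first + (last - first) / 2;
        if (*mid < *first)        std::iter_swap(mid, first);
        if (*(last - 1) < *first) std::iter_swap(last - 1, first);
        if (*(last - 1) < *mid)   std::iter_swap(last - 1, mid);
        auto pivot = *mid;

        // Three-way split: [first, lt) < pivot, [lt, ge) == pivot, [ge, last) > pivot.
        It lt = std::partition(first, last,
                               [&](const auto& x) { return x < pivot; });
        It ge = std::partition(lt, last,
                               [&](const auto& x) { return !(pivot < x); });

        // Recurse into the smaller side and loop on the larger one,
        // which keeps the stack depth at O(log n).
        if (lt - first < last - ge) {
            quickSort(first, lt);
            first = ge;
        } else {
            quickSort(ge, last);
            last = lt;
        }
    }
    insertionSort(first, last);  // small ranges: cheaper than further recursion
}

}  // namespace sketch

Usage is just sketch::quickSort(v.begin(), v.end()) on, say, a std::vector<int> v.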

In Topic: Octrees: Precomputed visibility and rendering approach

20 April 2013 - 11:36 AM

My idea to compute the PVS was to raycast from each cube corner to all the other cubes. The main problem with this is that it will be really expensive, given that each cube has 8 children. I was thinking of raycasting only to the "bigger" cubes (either by area or polygon count); that way I would still be able to skip rendering hidden geometry without taking ages to compute the visibility.

 

I think you will find that you need to ray-cast from a lot more than the corners to prevent popping/false invisibility.  If you subdivide far enough this won't be an issue, but then you end up with an extremely dense octree that takes ages to compute (most of the computation is redundant, since you are mostly sampling the same lines again and again). For best results you will likely need to densely sample the full line space between your source and destination nodes.  This can be done by ray-casting from all over the surface of your source node to all over the surface of your destination node, stopping as soon as visibility is established.
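
A rough sketch of that sampling loop, assuming axis-aligned box nodes and an occlusion query supplied by the caller. Vec3, Aabb, probablyVisible and the 4096-sample budget are all hypothetical placeholders, not anything from the papers linked below.

#include <random>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

// Random point on the surface of an AABB: pick a random point inside the box,
// then clamp one coordinate to a randomly chosen face.
inline Vec3 randomSurfacePoint(const Aabb& b, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    Vec3 p{b.min.x + u(rng) * (b.max.x - b.min.x),
           b.min.y + u(rng) * (b.max.y - b.min.y),
           b.min.z + u(rng) * (b.max.z - b.min.z)};
    switch (rng() % 6) {
        case 0: p.x = b.min.x; break;  case 1: p.x = b.max.x; break;
        case 2: p.y = b.min.y; break;  case 3: p.y = b.max.y; break;
        case 4: p.z = b.min.z; break;  default: p.z = b.max.z; break;
    }
    return p;
}

// rayOccluded(from, to) must return true if scene geometry blocks the segment.
// Stop as soon as one unblocked segment establishes visibility; if the sample
// budget runs out, the pair is declared (possibly wrongly) invisible -- which is
// exactly why sampling density matters.
template <typename RayOccludedFn>
bool probablyVisible(const Aabb& src, const Aabb& dst, RayOccludedFn rayOccluded,
                     std::mt19937& rng, int maxSamples = 4096) {
    for (int i = 0; i < maxSamples; ++i) {
        if (!rayOccluded(randomSurfacePoint(src, rng), randomSurfacePoint(dst, rng)))
            return true;
    }
    return false;
}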

 

This problem has been well researched.  You should check out:

http://dcgi.felk.cvut.cz/home/bittner/publications/agvs/agvs.pdf

and

http://www.cg.tuwien.ac.at/.../WONKA-2006-GVS-Preprint.pdf

for CPU-based sampling approaches that attempt to minimize the number of rays cast.

 

You can also use the GPU to do the visibility determination for you, or compute it exactly:

http://nirenstein.com/wp-content/uploads/2012/10/thesis.pdf

Chapter 4 talks about using the GPU for sampling visibility.  The hardware at the time was dated: I expect that today you could avoid the readback by using occlusion queries or compute-shader processing of the z-buffer.  Chapters 5 and 6 talk about computing the exact visibility set, but the math is a bit hairy, and it is probably overkill for most use cases.
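
To make the occlusion-query idea concrete, here is a very rough sketch of one way to sample cell-to-cell visibility on the GPU without reading back the framebuffer. This is not the item-buffer method from the thesis; setCameraForSample, renderOccludersDepthOnly and renderCellBoundingBox are hypothetical hooks you would supply from your own renderer.

#include <GL/glew.h>

// Hypothetical hooks assumed to exist elsewhere in your renderer.
void setCameraForSample(int sampleIndex);   // place a camera on the source cell
void renderOccludersDepthOnly();            // draw the scene into the depth buffer
void renderCellBoundingBox(int dstCell);    // draw the destination cell's AABB

// Returns true if the destination cell's bounds survive the depth test from the
// given sample viewpoint, i.e. at least one fragment passed -> treat as visible.
bool cellVisibleFromSample(int sampleIndex, int dstCell) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    setCameraForSample(sampleIndex);
    renderOccludersDepthOnly();

    GLuint query = 0;
    glGenQueries(1, &query);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // depth test only,
    glDepthMask(GL_FALSE);                                 // write nothing

    glBeginQuery(GL_SAMPLES_PASSED, query);
    renderCellBoundingBox(dstCell);
    glEndQuery(GL_SAMPLES_PASSED);

    GLuint samplesPassed = 0;
    // Note: an immediate result read stalls the pipeline; in practice you would
    // issue many queries (one per candidate cell) and collect the results later.
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);
    glDeleteQueries(1, &query);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    return samplesPassed > 0;
}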


In Topic: Octrees: Precomputed visibility and rendering approach

19 April 2013 - 01:40 AM

I suggest splitting your static data on node boundaries after you have your visibility graph, since this allows you to associate data with nodes of your tree partition. Then you just draw the static geometry associated with that node if the node is visible. If you have instanced geometry in a node, you can just conservatively make the whole object visible - just be sure to do a scoreboard/mailbox check so that you don't draw it once for each visible node it intersects. You may want to do the same for non-instanced geometry if your subdivision is very fine and results in a lot of splits (since this will result in more draw calls).
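
A minimal sketch of the scoreboard/mailbox idea, assuming each shared (instanced) object carries a per-frame stamp; all names here are illustrative.

#include <cstdint>
#include <vector>

struct Instance {
    std::uint32_t lastQueuedFrame = 0;   // mailbox: frame we last queued it on
    // ... mesh, transform, etc.
};

struct OctreeNode {
    std::vector<Instance*> instances;    // objects that merely overlap this node
    // ... static geometry split to this node, bounds, children, etc.
};

// Called once per *visible* node each frame. Static geometry owned by the node
// can be drawn unconditionally; shared objects are queued at most once per frame.
void gatherVisibleNode(OctreeNode& node, std::uint32_t frame,
                       std::vector<Instance*>& drawList) {
    for (Instance* obj : node.instances) {
        if (obj->lastQueuedFrame == frame)
            continue;                    // already queued via another visible node
        obj->lastQueuedFrame = frame;
        drawList.push_back(obj);
    }
    // drawStaticGeometryOf(node);       // node-owned, pre-split batches go here
}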

For dynamic objects, you can easily (via a hierarchical test) figure out which nodes they intersect, and if an object intersects a visible node, it is treated as visible.
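
And a correspondingly small sketch of that hierarchical test, assuming each node stores its bounds, children and a per-frame visibility flag produced by the PVS lookup (again, all names are illustrative).

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 min, max; };

inline bool overlaps(const Aabb& a, const Aabb& b) {
    return a.min.x <= b.max.x && b.min.x <= a.max.x &&
           a.min.y <= b.max.y && b.min.y <= a.max.y &&
           a.min.z <= b.max.z && b.min.z <= a.max.z;
}

struct VisNode {
    Aabb bounds;
    bool visibleThisFrame = false;   // filled in from the PVS for the current camera
    VisNode* children[8] = {};       // all null for leaf nodes
};

// Walk down from the root; prune subtrees the object doesn't touch, and report
// the object visible as soon as it overlaps any visible leaf.
bool dynamicObjectVisible(const VisNode& node, const Aabb& objBounds) {
    if (!overlaps(node.bounds, objBounds))
        return false;
    if (node.children[0] == nullptr)          // leaf
        return node.visibleThisFrame;
    for (const VisNode* child : node.children)
        if (child && dynamicObjectVisible(*child, objBounds))
            return true;
    return false;
}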

BTW, how do you intend to compute the PVS for the nodes? (Which approach/algorithm? It's a difficult problem.)

For best results, I suggest a general axially aligned BSP tree, since you ideally want to keep your leaf nodes cubic.

In Topic: How does tan work exactly?

29 March 2013 - 06:58 PM

First you get some sun, then you get a tan :)


Tsk, tsk. Don't you know that puns about trigonometry are a sin? Why? 'Cos I said so!

In Topic: Anyone here a self-taught graphics programmer?

10 February 2013 - 07:30 PM

I started graphics dev towards the end of 1983.  My Acorn Electron tapes had all stretched out from overuse, so all my games were broken.  This encouraged me to read the BASIC and system programming manuals (at that age, I didn't really comprehend that writing software was different to playing a game -- I thought that everything done on a computer was a game :-) ).   I wrote a few half-baked games (the first one was a lunar-lander type, partly pulled out of code samples written for the BBC Micro), and then a few ground-up little games.

 

I got my first x86 PC in 1986 (4.77 MHz FTW, a Hercules card and no HDD), and started with GW-BASIC, then later Pascal, C and assembler.  At some point in the late 80's to early 90's, I had an epiphany:  I don't really like making games at all, I just like to play them -- what I really enjoyed was making cool graphics via code (for games or otherwise).  Through a bunch of BBSs and friends sharing coding "secrets", I got involved in the demo scene in the early 90's, which fitted my interests perfectly.  I wrote some software 3D renderers (386 DX, no-FPU era), a bunch of old Mode X type special effects, etc.  I remember those days fondly: every year from 1986-2000, I would see a new effect or feature that I had never seen before, or even imagined possible (my imagination was, possibly, a bit limited :-) ).

 

Then I did a degree in Math and CS, which was pretty cool because it helped a lot of the math I had been doing fall into place.  Then I did a PhD in CS (graphics/computational geometry), worked in visualization and VR for a few years, and then spent the better part of a decade at NVIDIA.

 

Now I work for a hedge fund.  Go figure. :-)

