
Aressera

Member Since 25 Mar 2007

#5286404 Developing games with the Flow-based programming paradigm.

Posted by Aressera on 11 April 2016 - 07:17 PM

Most of my engine's core functionality (e.g. communicating the position of an object from physics to graphics, animation bindings, sound DSP processing) is abstracted as data flows between arbitrary pairs of objects. In the editor, objects declare their input and output connectors, and the user can drag to create connections between them. An "entity" is just a collection of objects with their connections (also encoded as small objects). Each data flow connection can be set to update either before or after a specific engine subsystem, and the engine batches the updates in the correct order based on the connectivity between objects. Each batch, if large enough, can then be broken down into disconnected "islands" that can be executed in parallel on a thread pool.


In theory, nodes in the graph can perform arbitrary math/logical operations, so the system could be used to implement more complex logic, or as a more general-purpose multimedia processing system.

 

The hard part of implementing something like this in a large system is that the number of possible type interactions is O(N^2). To avoid that, you must decouple the two endpoints of a connection, e.g. by writing the data to intermediate temporary storage before reading it at the other endpoint. A connection consists of two Connector subclasses with read() and write() methods, plus a Connection subclass that contains the temporary storage. Then the number of required connector types is just O(N). My system has over 50 node types, so this is a big win in code size.
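
To illustrate, here's a stripped-down sketch of the idea (not my actual code; Vector3 and the exact method signatures are just for illustration):

#include <vector>

struct Vector3 { float x, y, z; };

// An endpoint only knows how to move its own data type to/from the
// connection's intermediate storage, so N node types need only O(N)
// connector classes instead of O(N^2) pairwise conversions.
class Connector
{
public:
    virtual ~Connector() {}
    virtual void write( void* storage ) const = 0; // output endpoint
    virtual void read( const void* storage ) = 0;  // input endpoint
};

// Example connector for a node that exposes a position.
class PositionConnector : public Connector
{
public:
    Vector3* position;
    void write( void* storage ) const { *(Vector3*)storage = *position; }
    void read( const void* storage ) { *position = *(const Vector3*)storage; }
};

// A connection owns the temporary storage that decouples its endpoints.
class Connection
{
public:
    Connector* source;
    Connector* sink;
    std::vector<unsigned char> storage; // sized for the connection's data type

    void update()
    {
        source->write( storage.data() ); // producer writes its value...
        sink->read( storage.data() );    // ...then the consumer reads it
    }
};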




#5285656 Sine-based Tiled Procedural Bump

Posted by Aressera on 07 April 2016 - 04:52 PM

Your image contains negative values, and those appear black when visualized naively. Bias the values into the 0-1 range and the results should be similar.
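
For example, assuming the values are in [-1, 1]:

// Remap a signed bump value in [-1, 1] into [0, 1] for display.
float biasForDisplay( float value )
{
    return value*0.5f + 0.5f;
}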




#5284899 In terms of engine technology, what ground is left to break?

Posted by Aressera on 03 April 2016 - 01:15 PM

Sound has huge room for improvement, particularly in the simulation of realistic acoustics/sound propagation/spatial sound. Current games neglect many acoustic effects; most do not even handle occlusion and only apply static zoned reverberation. It's very immersion-breaking to hear a Fallout 4 super mutant two stories up talking through a wall as if it were talking in your ear.

 

My work/research focuses on using real-time ray tracing on the CPU; here is a recent paper of mine from i3D 2016 showing what is possible. Our current system can handle about 20 sound sources in a fully interactive dynamic scene. This tech has the potential to save a lot of artist time that would otherwise be spent tuning artificial reverb filters: if you have a 3D mesh for the scene plus acoustic material properties, you can hear in real time how the scene should actually sound, with realistic indirect sound propagation. It also gives a big improvement in overall sound quality and can enable gameplay that isn't possible with current tech (e.g. tracking an enemy by their sound).

 

The big problem in transitioning this technology to games at the moment is that you still need most of a 4-core hyperthreaded CPU to do it interactively. A GPU implementation is a possibility, but I doubt many game developers want to trade most of their graphics compute time for sound, and there are also issues with the amount of data that must be transferred back to the CPU and the latency inherent in that transfer.




#5284625 Wondering how to build a somewhat complex UI...

Posted by Aressera on 01 April 2016 - 10:42 AM

The most common way is to have a base class (Control, Widget, Window) and have every UI element derive from it.

The base class will have a virtual OnMouseEvent(..), and every UI element will implement it.

For the keyboard events I like to have a focus element. Let's say you click on a text box; in the OnMouseEvent() of the text box you do something like

GUISystem->SetFocus(this), and this will pass all the keyboard events to the element you set.

 

And the third important thing is the callbacks. Let's say you register a callback for "CLICK" on some button, and the button will call it in OnMouseEvent().

This is the base and you can extend from there.

 

This, except that I think it's better for the parent of every widget to maintain the focus for its level of the GUI hierarchy, rather than a global focus for the GUI. Then, each widget in the hierarchy just passes the key/text input events down to its child that locally has focus.
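
Something like this (hypothetical names, just to show the shape of it):

struct KeyEvent { int key; bool pressed; };

class Widget
{
public:
    virtual ~Widget() {}

    // Key/text input is routed down the chain of locally-focused children.
    // A leaf widget (e.g. a text box) overrides this to consume the event.
    virtual void onKeyEvent( const KeyEvent& event )
    {
        if ( focusedChild )
            focusedChild->onKeyEvent( event );
    }

    // Called by a child (e.g. from its mouse handler) to claim focus
    // at this level of the hierarchy.
    void setFocusedChild( Widget* child ) { focusedChild = child; }

private:
    Widget* focusedChild = nullptr;
};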

 

Another important concept is the delegate. In my case this is just a struct with a collection of std::function callbacks that respond to events for each type of widget. This is way better than the inheritance solutions required by many GUI toolkits.
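
Roughly like this (a hypothetical sketch, not my exact types):

#include <functional>

class Button; // forward declaration

// The delegate is just a bundle of optional callbacks; users assign
// lambdas or functions instead of subclassing the widget.
struct ButtonDelegate
{
    std::function<void( Button& )> onPress;
    std::function<void( Button& )> onRelease;
};

class Button
{
public:
    ButtonDelegate delegate;

    void handleMouseDown()
    {
        if ( delegate.onPress ) // only invoke callbacks that were set
            delegate.onPress( *this );
    }
};

Client code then just writes button.delegate.onPress = []( Button& b ) { /* respond to the click */ }; with no subclass needed.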




#5284103 How do I know which parameters to give Mix_OpenAudio() ?

Posted by Aressera on 29 March 2016 - 02:35 PM

These are the audio stream format parameters; they specify the format of the sound device's output stream. I don't know the specifics of the SDL API, though, or how it deals with parameters that don't match the current device settings.

  1. Sample rate - 44100 Hz is standard for CD-quality audio. Some audio cards use 48 kHz, and pro-level hardware goes up to 192 kHz.
  2. Sample type - in this case, signed 16-bit integers. This is the output format; most audio DSP is done with 32-bit floats these days, then converted to 16- or 24-bit integers on output.
  3. Number of channels (2 = stereo).
  4. Audio buffer size - how big a buffer the sound is processed in. This determines the latency of the device. A 1024-sample buffer at 44100 Hz adds a latency of at least 23 ms (1024 / 44100 ≈ 0.023 s). Too much latency is bad, but if the buffer is too small it can cause glitches in the audio when the CPU isn't fast enough to finish the processing in time. I would try a buffer of 512 samples.
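
Putting those together, a typical call might look like this (an untested sketch; check the SDL_mixer docs for your version's exact behavior):

#include <cstdio>
#include <SDL_mixer.h>

// 44.1 kHz, signed 16-bit samples in native byte order, stereo,
// 512-sample buffer (~11.6 ms at 44.1 kHz).
if ( Mix_OpenAudio( 44100, AUDIO_S16SYS, 2, 512 ) != 0 )
{
    printf( "Mix_OpenAudio failed: %s\n", Mix_GetError() );
}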



#5280932 Reducing Compile/Link Times

Posted by Aressera on 12 March 2016 - 02:35 PM

If you have link-time code generation enabled in Visual Studio, it can increase the link time by an order of magnitude in my experience. Disabling it (for all projects) made a big difference in my build time and didn't affect runtime performance much.




#5279853 Why do we iterate when applying sequential impulse?

Posted by Aressera on 06 March 2016 - 12:19 PM

Any time you have more than two coupled (touching) objects, or more than one contact point between them, pairwise impulses are not guaranteed to produce a correct response. So, the algorithm iterates until an approximate solution to the multi-body contact problem is found.
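
The solver loop looks roughly like this (a simplified sketch: 2D point masses, with rotation, restitution, and friction omitted):

#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };
static Vec2 operator-( Vec2 a, Vec2 b ) { return Vec2{ a.x - b.x, a.y - b.y }; }
static Vec2 operator+( Vec2 a, Vec2 b ) { return Vec2{ a.x + b.x, a.y + b.y }; }
static Vec2 operator*( Vec2 a, float s ) { return Vec2{ a.x*s, a.y*s }; }
static float dot( Vec2 a, Vec2 b ) { return a.x*b.x + a.y*b.y; }

struct Body { Vec2 velocity; float inverseMass; };

struct Contact
{
    Body* a; Body* b;
    Vec2 normal;              // contact normal, pointing from a to b
    float normalMass;         // effective mass: 1/(a->inverseMass + b->inverseMass)
    float accumulatedImpulse; // running total, for clamping
};

void solveContacts( std::vector<Contact>& contacts, int iterations )
{
    for ( int it = 0; it < iterations; it++ )
    {
        for ( size_t i = 0; i < contacts.size(); i++ )
        {
            Contact& c = contacts[i];

            // Relative velocity along the normal (negative = approaching).
            float vn = dot( c.b->velocity - c.a->velocity, c.normal );

            // Impulse magnitude that cancels the approach velocity.
            float lambda = -vn * c.normalMass;

            // Clamp the *accumulated* impulse to be non-negative so
            // contacts can only push, never pull.
            float oldImpulse = c.accumulatedImpulse;
            c.accumulatedImpulse = std::max( oldImpulse + lambda, 0.0f );
            lambda = c.accumulatedImpulse - oldImpulse;

            // Apply the clamped impulse to both bodies.
            c.a->velocity = c.a->velocity - c.normal*( lambda*c.a->inverseMass );
            c.b->velocity = c.b->velocity + c.normal*( lambda*c.b->inverseMass );
        }
    }
}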




#5277039 How fast is hardware-accelerated ray-tracing these days?

Posted by Aressera on 19 February 2016 - 04:36 PM

 

You'd have to perform the vertex animation, apply it to the mesh, then take the mesh triangles and build an octree/BVH hierarchy/binary tree/sort into grid cells/whatever?


Like a character? Precompute a static BVH for the character in T-pose. At runtime, keep the tree structure but update the bounding boxes.
The animated tree might not be as good as a completely rebuilt tree, but it's still pretty good.
If you have 100 characters, you only need to build a tree over those 100 root nodes.
I'm using it for a real-time GI solution I've been working on for many years now.
I don't know of any papers, but I'm sure I'm not the inventor of this simple idea. :)

 

This is called BVH refitting and is commonly used for animated scenes. Optionally, you can rebuild the entire tree every N frames or if there is a big change in the scene to maintain decent ray tracing performance under deformations.
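
The refit itself is just a bottom-up pass over the existing tree. A minimal recursive sketch (types and the leaf-bounds callback are illustrative):

#include <algorithm>

struct AABB { float min[3], max[3]; };

// Grow box a to contain box b.
static void merge( AABB& a, const AABB& b )
{
    for ( int i = 0; i < 3; i++ )
    {
        a.min[i] = std::min( a.min[i], b.min[i] );
        a.max[i] = std::max( a.max[i], b.max[i] );
    }
}

struct BVHNode
{
    AABB bounds;
    BVHNode* left = nullptr;  // both children null for a leaf
    BVHNode* right = nullptr;
    int primitive = -1;       // leaf's triangle index
};

// Refit: keep the topology, recompute the bounds after animation.
// computeLeafBounds returns the current (deformed) bounds of a triangle.
void refit( BVHNode* node, AABB (*computeLeafBounds)( int primitive ) )
{
    if ( !node->left ) // leaf
    {
        node->bounds = computeLeafBounds( node->primitive );
        return;
    }
    refit( node->left, computeLeafBounds );
    refit( node->right, computeLeafBounds );
    node->bounds = node->left->bounds;
    merge( node->bounds, node->right->bounds );
}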

 

Most of the compute in rendering global illumination is spent on indirect light. It's pretty easy to write a real-time ray caster that handles only direct light/shadows; when you add GI, the number of ray casts required goes up by a factor of at least 10-100, and they are incoherent rays that benefit much less from ray packet traversal.

Also, it is not accurate to state that the cost of ray tracing is O(log n) vs. O(n) for rasterization. Ray tracing is O(log n) per pixel, but you have >1,000,000 pixels, especially with antialiasing/supersampling. That comparison is only correct if your framebuffer is a single pixel, or if every triangle completely fills the viewport.




#5274798 Inverse square law for sound falloff

Posted by Aressera on 07 February 2016 - 05:49 PM

Neither. What you are proposing, with a min and max value, may be convenient for artists, but it is not physically correct. I would try both and see which sounds best in your case. Realistically there should be no minimum, but even that will not sound very good.

 

The issue is that realistic sound involves computing the equivalent of global illumination to handle the indirect sound (very important for realism), and it's not as easy to fake as with light because 100+ bounces must be computed, not just a few. Most games totally ignore the indirect sound or fake it using an artificial reverberator. The result is that the ratio of direct to indirect sound does not behave the same as in the real world, and this ratio is critical for distance perception and realistic localization.




#5274121 [Debate] Using namespace should be avoided ?

Posted by Aressera on 03 February 2016 - 03:36 PM

It's not always bad...

 

I use it in my codebase, even in headers, but ONLY within the scope of another namespace, never in the global namespace, and only when it is clear that the imported namespace is a strong dependency of the enclosing namespace. In my case, all of the code is part of the same suite of libraries which are meant to interact and depend on the same base code, so I allow it to avoid literally thousands of verbose explicit namespace qualifiers in header files. The tradeoff is worth it in this case, I think.

 

e.g.

EngineConfig.h:

#include "Graphics.h"
#include "Physics.h"
#include "Sound.h"
namespace engine {
using namespace graphics;
using namespace physics;
using namespace sound;
};

In most cases I prefer importing just the needed classes, but once the import list grows beyond 5-10 classes from the same namespace, I tend to import the entire namespace instead if it makes sense.
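
For example (class names hypothetical):

namespace engine
{
    using graphics::Mesh;
    using graphics::Texture;
    using physics::RigidBody;
}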




#5273249 Warning conversion from size_t to int in x64

Posted by Aressera on 29 January 2016 - 01:50 PM

 

I'm always confused why people find size_t being unsigned to be a problem. Why are you performing signed comparisons on array indices in the first place? There are very few valid use-cases thereof.


Someone calls List<T>.IndexOf(x), but x is not in the list. You're the IndexOf function. What do you return?

signed -> return -1.

unsigned -> return Length+1 (essentially vector::end or equivalent)? What if the length is UInt32.MaxValue? You don't want to return 0. You either need to limit lists to UInt32.MaxValue-1 in length so that you can use MaxValue (i.e. (uint)-1) as the invalid position, or have two return values - Index and IsValid.

iterator (or some other custom type) -> collection_type::end, or a (index, isvalid) tuple, but this means your API now burdens people with understanding the mechanics of iterators or unpacking return values from complex types.


In my opinion, the API simplicity of signed values outweighs the loss of array sizes over 2 billion elements long. If you want collections larger than that, you can still make a custom type using ulong indices and complex IndexOf return values when you need it.

 

 

This is a problem of poor interface design for the container, not a problem with unsigned types. I would prefer something like this using an output parameter for the index:

bool IndexOf( const T& x, size_t& index ) const;

Then the usage of the method is:

size_t index;
if ( container.IndexOf( x, index ) )
{
    // ... use index
}
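
A sketch of how such a method might be implemented (shown here as a free function over std::vector for brevity):

#include <cstddef>
#include <vector>

template <typename T>
bool IndexOf( const std::vector<T>& container, const T& x, size_t& index )
{
    for ( size_t i = 0; i < container.size(); i++ )
    {
        if ( container[i] == x )
        {
            index = i; // only written on success
            return true;
        }
    }
    return false; // index is left untouched
}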



#5272330 Inverse square law for sound falloff

Posted by Aressera on 23 January 2016 - 12:33 AM

You should use 1/r instead of 1/r^2 to be physically correct for sound. The sound intensity falls off as 1/r^2, but sound pressure (what you hear) falls off as 1/r.
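
In code, applying that to a source's gain might look like this (a sketch; referenceDistance is the distance at which the gain is 1, e.g. 1 meter, and the names are just illustrative):

#include <algorithm>

// 1/r pressure falloff, clamped so the gain never exceeds 1
// when the listener is inside the reference distance.
float distanceGain( float distance, float referenceDistance )
{
    return referenceDistance / std::max( distance, referenceDistance );
}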




#5268060 Inertia Tensor magnitude to find Angular Acceleration

Posted by Aressera on 26 December 2015 - 12:47 PM

The correct way is to multiply the torque vector by the inverse of the world-space inertia tensor for the object. You should precompute the object-local inverse inertia tensor, then transform it to world space on each frame via a similarity transform: R' * I^-1 * R where ' is the transpose and R is the rotation matrix.
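
As a concrete sketch with raw 3x3 arrays (conventions vary: here R is taken as the body-to-world rotation acting on column vectors, so the transform reads R * I^-1 * R'; with R mapping the other way you get the R' * I^-1 * R form above):

// Compute angular acceleration alpha = I_world^-1 * torque,
// where I_world^-1 = R * I_local^-1 * R^T.
void angularAcceleration( const float R[3][3], const float invInertiaLocal[3][3],
                          const float torque[3], float alpha[3] )
{
    // temp = I_local^-1 * R^T
    float temp[3][3];
    for ( int i = 0; i < 3; i++ )
        for ( int j = 0; j < 3; j++ )
            temp[i][j] = invInertiaLocal[i][0]*R[j][0] +
                         invInertiaLocal[i][1]*R[j][1] +
                         invInertiaLocal[i][2]*R[j][2];

    // invInertiaWorld = R * temp
    float invInertiaWorld[3][3];
    for ( int i = 0; i < 3; i++ )
        for ( int j = 0; j < 3; j++ )
            invInertiaWorld[i][j] = R[i][0]*temp[0][j] +
                                    R[i][1]*temp[1][j] +
                                    R[i][2]*temp[2][j];

    // alpha = invInertiaWorld * torque
    for ( int i = 0; i < 3; i++ )
        alpha[i] = invInertiaWorld[i][0]*torque[0] +
                   invInertiaWorld[i][1]*torque[1] +
                   invInertiaWorld[i][2]*torque[2];
}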




#5262499 How are audio channels arranged in a .wav file?

Posted by Aressera on 17 November 2015 - 06:30 PM

For PCM (almost all wav files), audio data is stored as an array of interleaved channels at whatever the sample size is (8, 16, 24 common; 32, 64, 32fp, 64fp possible). Integer samples are little-endian; 16-bit and larger samples are signed, while 8-bit samples are unsigned.

 

e.g.

sample1L sample1R sample2L sample2R sample3L sample3R...
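
For example, splitting an interleaved 16-bit stereo buffer into separate channel arrays:

#include <cstddef>
#include <cstdint>
#include <vector>

// frameCount = number of sample frames (one sample per channel per frame).
void deinterleaveStereo( const int16_t* interleaved, size_t frameCount,
                         std::vector<int16_t>& left, std::vector<int16_t>& right )
{
    left.resize( frameCount );
    right.resize( frameCount );
    for ( size_t i = 0; i < frameCount; i++ )
    {
        left[i] = interleaved[2*i];      // channel 0
        right[i] = interleaved[2*i + 1]; // channel 1
    }
}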




#5262150 Cascaded Shadow Map shimmering effect

Posted by Aressera on 15 November 2015 - 01:32 PM

Here's the code I use to get rid of that artifact. I don't see where you are rounding to the nearest texel in the code you posted; you need a floor operation or integer conversion to correctly round the light's viewport.

// bounds == 2D bounding box of camera frustum corners rotated to light-local space.
// Round the light's bounding box to the nearest texel unit to reduce flickering.
Vector2f unitsPerTexel = 2.0f*Vector2f( boundsSize.x, boundsSize.y ) / Vector2f( (Float)shadowWidth, (Float)shadowHeight );

bounds.min.x = math::floor( bounds.min.x / unitsPerTexel.x )*unitsPerTexel.x;
bounds.max.x = math::floor( bounds.max.x / unitsPerTexel.x )*unitsPerTexel.x;

bounds.min.y = math::floor( bounds.min.y / unitsPerTexel.y )*unitsPerTexel.y;
bounds.max.y = math::floor( bounds.max.y / unitsPerTexel.y )*unitsPerTexel.y;

// then, use rounded bounding box and the light orientation to construct the light's view/projection matrices.




