

Hodgman

Member Since 14 Feb 2007

#5049871 Real-Time Tessellation Techniques of Note?

Posted by Hodgman on 04 April 2013 - 03:33 AM

"you're ruining my model, what are you doing!"
Artists want people to see what they themselves are looking at in whatever modeling program they use, and they don't want to have to guess what some bloated smoothing thing is going to do to their carefully sculpted models.

This is a workflow issue rather than a problem with any particular technique. The artists need to have these tessellation options available in their modeling tools, and an exact implementation in the engine too. Then if they model something using Catmull-Clark surfaces, it will just work and look exactly as they expect.

In general, you simply can't take a low-poly model and tessellate it while retaining the artist's intent. They need to have this expressive control, either by directly modeling with some flavour(s) of subdivision surfaces, or by giving you a high-poly 'target' model.


#5049714 Expert question: How to get a better time delta?

Posted by Hodgman on 03 April 2013 - 03:34 PM

Can someone give some input about the differences in handling the game loop on PC and on consoles?

People always say that the PC is too generic and can't guarantee anything, so how does that work on a console?

There's no real difference in your game loop these days, but the level of abstraction between you and the hardware is thinner -- there's a fixed hardware spec (with some exceptions, like storage size) and the OS is simpler.
On older consoles there isn't really an OS at all, just the game and the hardware, so all the "device driver" code lives in the game. On a console you can do dangerous things that you really don't want to be available to general-purpose PC applications, like taking mutually exclusive control of a device, generating hardware interrupts that call your own kernel-mode functions, communicating directly with devices via MMIO without going through a driver, or implementing CPU/GPU/refresh-rate synchronization yourself from scratch...

With such a simple machine, you can be sure that no background process is going to steal any of your CPU time.
On Windows (with default settings), if a thread sleeps (which might happen inside Present), the thread might not wake for 15ms or more, and in the worst case Windows can leave a thread sleeping for over 5 seconds if you've got enough processes running ;/
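As an aside, a minimal sketch (mine, not from the post) of how a Windows game can shrink that default scheduler quantum using the winmm timer API -- GameIsRunning and UpdateAndRender are placeholder names:

#include <windows.h>
#pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod live here

bool GameIsRunning();  // placeholder declarations for your real loop
void UpdateAndRender();

void RunGameLoop()
{
    timeBeginPeriod(1);      // request ~1ms scheduler granularity (system-wide)
    while (GameIsRunning())
    {
        UpdateAndRender();   // Sleep()/Present() in here now wakes closer to on-time
    }
    timeEndPeriod(1);        // restore the default (often ~15.6ms) resolution
}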


#5049498 Easter Eggs

Posted by Hodgman on 03 April 2013 - 03:31 AM

Speaking of the above GoldenEye case -- its famous multiplayer mode was never meant to exist. Management refused to allocate dev time for an MP mode, so two rogue programmers secretly worked on it for a month in between other tasks. Without their risk-taking and insubordination, it wouldn't appear on 'best games of all time' lists anywhere near as often ;)
That said, it depends on your organization. There are a lot of companies that would've fired those guys and not shipped the MP mode...


#5049175 Easter Eggs

Posted by Hodgman on 02 April 2013 - 08:15 AM


Just leave as few clues as possible that there is an Easter egg in there.


One time, someone on our team was using (unlicensed) Star Wars imagery as placeholders, which were somehow left in the data archives after the code was removed (a la 'hot coffee'). It only took a week for the hardcore players to crack our archive format and find these files, and then go and start all sorts of speculation/rumors about a Star Wars easter egg that didn't actually exist. We didn't put out an official explanation, because we didn't want to draw Lucas' attention to our slip-up, lest they sue.


#5049065 Easter Eggs

Posted by Hodgman on 01 April 2013 - 08:53 PM

Perform some sort of super-simple encryption on just that asset (if it's done on all assets, the file crackers will quickly realize), like XORing all its bytes with some magic number.

Store it in a non-obvious location, like within a string table instead of a file archive.
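A minimal sketch of that XOR idea (the key value and function name are illustrative, not from the post):

#include <cstddef>
#include <cstdint>

// XOR every byte with a magic number; running it a second time undoes it.
void XorObfuscate(uint8_t* data, size_t size, uint8_t key)
{
    for (size_t i = 0; i < size; ++i)
        data[i] ^= key;
}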




#5049052 Free texture image memory after binding it

Posted by Hodgman on 01 April 2013 - 08:13 PM

Yes, glTexImage2D is copying your image data.
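For example (a sketch -- pixels, width and height are assumed to come from your image loader):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// safe to free immediately -- the driver now holds its own copy
free(pixels); // or whatever matches your loader's allocator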




#5048847 packing two floats to one and back in HLSL/CG

Posted by Hodgman on 01 April 2013 - 05:24 AM

I haven't tried this code (just typed it into the forum), but it might be worth a try ;)

This should pack two 0-1 range floats into the hi/low 4 bits of an 8 bit fraction.

//pack: quantize from 0-1 floats to 0-15 integers, which fit in 4 bits
a = round(a*15);
b = round(b*15);
//bit-shift a into the upper 4 bits of the fraction, and b into the lower 4 bits
float c = dot( float2(a,b), float2(16.0/255.0, 1.0/255.0) );
//write c out to an 8-bit render target

//unpack: use point sampling and/or tex-coords at exact texel centres
float c = tex2D(...);
//shift so that a is in the integer part and b in the fractional part
float temp = c * (255.0/16.0);
//reconstruct the original (but quantized) a and b
float a = floor(temp) / 15.0;
float b = frac(temp) * (16.0/15.0);



#5048833 Static cast and unitialize variable?

Posted by Hodgman on 01 April 2013 - 04:03 AM

I need to first make a copy of the message struct

This causes all of your examples to be invalid; none of them should work.
When you take a polymorphic object and copy it into an instance of the base type, it's called "slicing". All of the actual derived-type information is lost, and only the base part is copied:
http://en.wikipedia.org/wiki/Object_slicing
http://stackoverflow.com/questions/274626/what-is-the-slicing-problem-in-c

When you cast msg to a PickObject, that's an invalid cast: msg is a BaseMessage only; it is not a PickObject. So when you try to read the pObject member with static_cast<PickObject*>(&msg)->pObject, that member doesn't exist, and the memory access is erroneous. Often such an illegal operation will seem to work without error, and you'll just be reading some random bit of RAM; other times you'll be lucky enough to get a crash/exception telling you that you've done something wrong.

It looks like what you want is:

void TranslatePickMessage(const BaseMessage& message)
{
  //N.B. only valid if 'message' really does refer to a PickObject
  const PickObject& pickMessage = static_cast<const PickObject&>(message);
  m_pSelected = pickMessage.pObject;
}



#5048826 Expert question: How to get a better time delta?

Posted by Hodgman on 01 April 2013 - 03:28 AM

I added a bunch of logging to my timing code, and tried adding Frank's fixup.
Originally, I had an accumulator for a fixed time-step and no interpolation (which keeps things very simple):

static float accumulator = 0;
accumulator += deltaTime;
const float stepSize = 1/60.0f;
while( accumulator >= stepSize )
{
	accumulator -= stepSize;
	DoPhysics( stepSize );
}

And then I added this 'snap to vsync' code above it:

static float buffer = 0;
float actualDelta = deltaTime + buffer;
float frameCount = floorf(actualDelta * 60.0f + 0.5f);//I did this a bit differently - rounding to the nearest number of frames
frameCount = max(1.0f, frameCount);
deltaTime = frameCount / 60.0f;
buffer = actualDelta - deltaTime;

Without the fixup, the accumulator would gradually build up its remainder until, over about 1000 frames, a 'spare' 1.66ms of time collects in the accumulator and the physics loop runs twice in a single frame. Occasionally when this occurs, the next frame will only measure a delta slightly less than 1/60th, e.g. 0.016654s, which means that no physics update occurs that frame. Then occasionally the next frame will be slightly over 1/60th, which, when added to the accumulator, results in another two physics steps.
So typically, I'm getting 1 physics step a frame. Then once every few thousand frames, I take 2 steps in one frame, then 0, then 2, then back to normal. I hadn't noticed this small quirk, and only found it now due to this thread!
 
With the fixup, things are much more stable around this edge case. When the 'buffer' and/or accumulator build up enough extra time, then I still do two physics updates in a frame. However, the case where this is followed by zero the next frame is gone (and also the case where the 'zero' frame is followed by a 'two' frame is also gone).
 
So, from that, it seems to be a pretty good idea in general, and I think if I was using interpolation/extrapolation in my fixed-time-step code, then this fix would be even more important! As is, my step size matches my refresh rate, but in general I can't rely on this being true, so I need to add interpolation at some point. Without the fix, I'm guessing the original jittery timings would have had a large impact on the perceived quality of the interpolation. Thanks, Frank :D

 

[edit]I also did some testing on what my actual average frame time was, but it was a bit inconsistent run-to-run. One run it averaged to 0.01667, another 0.01668, and another 0.01666... Also, at different times, the average would either be slowly creeping upwards or downwards.




#5048783 Is my frustum culling slow ?

Posted by Hodgman on 31 March 2013 - 09:13 PM

but I'm still getting a compiler error:
c:\program files (x86)\microsoft visual studio 10.0\vc\include\vector(870): error C2719: '_Val': formal parameter with __declspec(align('16')) won't be aligned

This is a bug in Visual Studio's implementation of the standard containers -- the resize method takes a 'T' argument by value, and MSVC does not support pass-by-value for __declspec(align)'d types... std::vector simply doesn't compile under MSVC when it's used with one of these types :(
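A minimal sketch of the failing pattern, if you want to reproduce it (the Vec4 type is illustrative):

#include <vector>

struct __declspec(align(16)) Vec4 { float x, y, z, w; };

void ByValue(Vec4 v) {} // error C2719 on x86 MSVC: parameter won't be aligned

int main()
{
    std::vector<Vec4> vec;
    vec.resize(10); // same error, via vector's internal by-value '_Val' parameter
}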

Hence the alternative versions, like the Bullet version I posted earlier (which you can steal the source for, and use it by itself without Bullet) or other projects, like RDESTL.




#5048780 Expert question: How to get a better time delta?

Posted by Hodgman on 31 March 2013 - 08:53 PM

I am curious about why your measured deltas don't vary; have you tried logging them to a file?

Yeah, I was mistaken -- when synced to the display at 60Hz, the delta log looks like:
...
0.016761
0.016591
0.016652
0.016698
0.016710
0.016666
...
Which over time seems to average out to the correct value, but yes, there is jitter.
[edit]They actually don't average out to 1/60 - they're generally higher than 1/60[/edit]
 
I am running my physics on a fixed time-step of 60Hz (with some internal parts of the physics taking 10 sub-steps for each physics tick), so it seems that occasionally I should actually be doing no physics updates for a frame, followed by two the next frame. Using a minimum delta of 1/60th (and the correction buffer) might smooth this out. Thanks.


#5048582 Is my frustum culling slow ?

Posted by Hodgman on 31 March 2013 - 08:47 AM

std::vector simply has problems with aligned types, especially on MSVC.

 

Often the solution is to not use it, e.g. the Bullet physics middleware wrote a replacement called btAlignedObjectArray.




#5048565 Gamma from sRGB Texture to Human Brain

Posted by Hodgman on 31 March 2013 - 07:56 AM

  1. Author content in real world radiometric ratios (e.g. albedo).
  2. Encode content in 8-bit sRGB, because it happens to be a readily available 'compression' method for storage, which optimizes for human-perceptual difference in values.
  3. Decode content to floating-point linear RGB, so that we can do math correctly.
  4. Do a whole bunch of stuff in linear RGB, ending up with final linear radiometric RGB wavelength intensity values.
  5. Clamp these into a 0-1 range using a pleasing exposure function -- copy what cameras do so that the results feel familiar to the viewer.
  6. Encode these floating-point values using the colour-space of the display (let's say 8-bit "gamma 2.2" for a CRT -- x^(1/2.2) -- but it could be sRGB, "gamma 1.8", or something else too!).
  7. The display then does the opposite transform (e.g. x^2.2) upon transmitting the picture out into the world.
  8. The resulting radiometric output of the display is then equivalent to the radiometric values we had at step 5 (but with a different linear scale).
  9. These values are then perceived by the viewer, in the same way that they perceive the radiometric values presented by the real-world.

The actual curve/linearity of their perception is irrelevant to us (us = people trying to display a radiometric image in the same way as the real world). All that matters to us is that the values output by the display are faithful to the radiometric values we calculated internally -- we care about ensuring that the display device outputs values that match our simulation. If the display does that correctly, then the image will be perceived the same as a real image.

The actual way that perception occurs is of much more interest to people trying to design optimal colour-spaces like sRGB ;)
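For concreteness, a sketch (mine, not from the post) of the decode/encode in steps 3 and 6 above, using the piecewise sRGB transfer functions rather than a pure 2.2 power:

#include <cmath>

float SrgbToLinear(float s)  // step 3: decode 0-1 sRGB before doing lighting math
{
    return (s <= 0.04045f) ? s / 12.92f
                           : powf((s + 0.055f) / 1.055f, 2.4f);
}

float LinearToSrgb(float l)  // step 6: encode 0-1 linear for an sRGB display
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}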

 

 

luminance = radiance * is_visible(wavelength) ? 1 : 0;

That should be:

luminance = radiance * weight(wavelength);

where weight returns a value from 0 to 1, depending on how well that particular wavelength is perceived by your eyes.

The XYZ colour space defines weighting curves for the visible wavelengths based on average human values (the exact values differ from person to person).

The weighting function peaks at the red, green and blue wavelengths (which are the individual peaks of the weighting functions for each type of cone cell), which is why we use them as our 3 simulated wavelengths. For low-light scenes, we should actually use a 4th colour, at the wavelength where the rod cells' peak responsiveness lies. For a true simulation, we should render all the visible wavelengths and calculate the weighted sum at the end for display, but that's way too complicated ;)
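As a practical stand-in for that weighted sum when you only have the three simulated wavelengths, the usual approach (my example, assuming Rec.709/sRGB primaries, not from the post) is:

// Relative luminance of a linear-RGB colour via the Rec.709 luma weights,
// which approximate integrating radiance against the CIE photopic curve.
float Luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}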

 

One would first need to know where the texture comes from.
If it's from a camera, was some gamma applied to make it linear, or was it warped according to NTSC, because in the old days it was better to send analog signals that way?
If it's from an image editor, had the artist calibrated their computer and monitor correctly to get a linear response overall? Did the bitmap then get saved with the correct settings and the correct gamma meta-information added, or did the image-editing program (which can't know the driver or monitor settings) just apply some reverse guess?

In a professional environment:
* When taking photos with a camera, a colour chart is included in the photo so that the colours can be correctly calibrated later, in spite of anything the camera is doing.
* Artists' monitors are all calibrated to correctly display sRGB inputs (i.e. they perform the sRGB-to-linear curve, overall, across the OS/driver/monitor).
* Any textures saved by these artists are then assumed to be in the sRGB curved encoding.

The best thing would be to forget about all this cruft and just have the camera translate light into a linear space, only use that in all intermediate stages

If we did all intermediate storage/processing in linear space, we'd have to use 16 bits per channel everywhere.
sRGB / gamma 2.2 are nice because we can store images at 8 bits per channel without noticeable colour banding: they're closer to a perceptual colour space and allocate more bits to dark colours, where humans are good at differentiating.

 

... but yes, in an ideal world, all images would be floating point HDR, and then only converted at the last moment, as required by a particular display device.




#5048560 Is my frustum culling slow ?

Posted by Hodgman on 31 March 2013 - 07:32 AM

But I don't understand why it isn't already 16-byte aligned, as the data is 4 floats?

It's made up of primitives, each of which only requires 4-byte alignment to work correctly, so the struct will work as long as it's 4-byte aligned. It doesn't need to be 8- or 16-byte aligned in order to function correctly (and even the 4-byte figure assumes that floats actually need 4-byte alignment).
 
 


class and type sizes are unrelated to instance addresses.

Excuse my stupidity, I still don't understand.
You mean instances of a _Plane in a std::array or vector might not be contiguous in memory (with their default allocator)?


As far as I know:
  • std::allocator<T>::allocate (which is used by the std containers) or "new T" will return a block of memory that is correctly aligned for the type "T" (i.e. the address is a multiple of alignof(T))
  • malloc( sizeof(T) ) won't -- it only guarantees alignment suitable for the built-in types.
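A quick sketch of checking those guarantees (my example, using C++11 alignas for illustration; note that for over-aligned types, plain new is only required to honour the alignment from C++17 onwards):

#include <cstdint>
#include <cstdio>
#include <cstdlib>

struct alignas(16) Plane { float a, b, c, d; };

int main()
{
    Plane* p = new Plane();                // aligned for Plane (guaranteed in C++17+)
    void*  m = std::malloc(sizeof(Plane)); // only aligned to max_align_t
    std::printf("new ok: %d, malloc ok: %d\n",
                (int)((uintptr_t)p % alignof(Plane) == 0),
                (int)((uintptr_t)m % alignof(Plane) == 0));
    delete p;
    std::free(m);
}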



#5048455 Should I Use MSVC For My Release Build?

Posted by Hodgman on 30 March 2013 - 08:05 PM

it works just fine even without Qt, and the Vim mode puts it roughly 50 miles above anything else (including VS) imo.


You can use Vim in (the non-Express versions of) Visual Studio, if you're into that kind of thing ;-)



