cowsarenotevil

Member Since 08 Nov 2002
Offline; Last Active: Private
-----

Posts I've Made

In Topic: is this white light?

26 July 2014 - 07:43 PM

when I add all possible frequencies I get a wave with no amplitude, because everything cancels out


Photons can behave like waves, but they don't combine the way you are trying to combine them.

These little bundles don't combine mathematically. They don't merge into a single wave. They are independent little bundles of energy.

(...)

Isn't one of the most readily observed consequences of wave-particle duality that "waves" of light can, in fact, exhibit interference?
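
As a minimal illustration (assuming NumPy is available; nothing here comes from the thread beyond basic wave addition), two equal-amplitude waves reinforce when they're in phase and cancel when they're half a cycle apart:

import numpy as np

t = np.linspace(0.0, 1.0, 1000)   # time axis, arbitrary units
f = 5.0                           # both waves share this frequency

wave_a = np.sin(2 * np.pi * f * t)
wave_b_in_phase = np.sin(2 * np.pi * f * t)         # same phase
wave_b_opposed = np.sin(2 * np.pi * f * t + np.pi)  # half a cycle behind

constructive = wave_a + wave_b_in_phase  # amplitude doubles to ~2
destructive = wave_a + wave_b_opposed    # cancels to ~0 everywhere

print(np.abs(constructive).max())  # ~2.0
print(np.abs(destructive).max())   # ~1e-15 (floating-point residue)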


In Topic: is this white light?

26 July 2014 - 11:59 AM

I assume that plot is made up of the approximate wavelengths for what human eyes/brains consider red, green, and blue light? If so, assuming the frequency response is even, then yes, that would be white light, but it obviously wouldn't be the only light that looks that way. The "all possible frequencies" thing you're thinking of is probably http://en.wikipedia.org/wiki/Black-body_radiation. Note that there's a falloff curve on either side of the center frequency, and that the center and shape of the curve vary by temperature.
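
To make that falloff concrete, here's a minimal sketch of Planck's law (assuming NumPy; the 5778 K figure is the approximate solar surface temperature, not something from the thread). The peak wavelength shifts with temperature, and radiance falls off on both sides of it:

import numpy as np

H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s
K = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temperature_k):
    """Spectral radiance B(lambda, T), in W / (sr * m^3)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * K * temperature_k))
    return a / b

wavelengths = np.linspace(100e-9, 3000e-9, 2000)  # 100 nm to 3000 nm
radiance = planck(wavelengths, 5778.0)
peak_nm = wavelengths[np.argmax(radiance)] * 1e9
print(f"peak near {peak_nm:.0f} nm")  # ~500 nm at 5778 K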


In Topic: Oculus Rift and Super Resolution

17 July 2014 - 10:38 AM

I'm actually very surprised that, to my knowledge, no one has used this principle for up-scaling video resolution

It's used in a lot of video games. Every frame you shift the screen projection by a fraction of a pixel, which results in edges maybe shifting by a pixel, or not. When combined with MSAA and temporal reprojection, nVidia uses the TXAA buzzword to refer to it ;)
 
It's also an old-school technique people used to render out "magazine quality" screenshots from game engines. You'd take thousands of sub-pixel-shifted screenshots and then average them, which gives you a screenshot that you can say is "in engine" but looks pre-rendered :D

Yeah, that's pretty close to what I'm referring to, but I was imagining using that principle to actually increase the number of pixels (versus just increasing the number of "samples" averaged to form each pixel) by reconstructing the amount of shift after the fact based on only the video data.
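
A minimal sketch (assuming NumPy) of the jitter-and-average trick quoted above. The render() function is a hypothetical stand-in for an engine's render call; here it point-samples an analytic disc so the example is self-contained:

import numpy as np

WIDTH, HEIGHT, SAMPLES = 64, 64, 256

def render(dx, dy):
    """Point-sample a test scene (a disc) with the projection shifted by (dx, dy) pixels."""
    ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
    x = xs + 0.5 + dx  # pixel centers plus the sub-pixel jitter
    y = ys + 0.5 + dy
    inside = (x - WIDTH / 2) ** 2 + (y - HEIGHT / 2) ** 2 < (WIDTH / 3) ** 2
    return inside.astype(np.float64)

rng = np.random.default_rng(0)
accum = np.zeros((HEIGHT, WIDTH))
for _ in range(SAMPLES):
    dx, dy = rng.uniform(-0.5, 0.5, size=2)  # shift within one pixel
    accum += render(dx, dy)

image = accum / SAMPLES  # averaged result: smooth, anti-aliased edges

Note the pixel count never changes here; each pixel just averages many sub-pixel samples, which is exactly the "more samples per pixel" half of the distinction drawn above.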


In Topic: Oculus Rift and Super Resolution

16 July 2014 - 11:57 PM

It seems that you like the screen door effect. ;) The thing it reminds me of is when you walk past a fence. On average you can see through only 20% of many fences (the gaps between boards), but as long as you keep moving, your brain stitches those glimpses together and you feel like you can see through the fence.

What you are describing with that fence thing is simple motion blur and depth-of-field blur.

No it isn't; you're actually getting more information about what's behind the fence if the background is moving relative to the fence. Each pixel in the Oculus (or any screen) is only a single data point, but naturally, if you sample the same pixels repeatedly over time, the pixels from one frame could give you data that falls "between" pixels in another frame. It does seem that the brain is remarkably good at figuring out where in space those pixels belong as well.

I'm actually very surprised that, to my knowledge, no one has used this principle for up-scaling video resolution. Obviously it'd only work in situations where automatic motion tracking can be performed, but motion tracking works well enough to be used for 3D reconstruction, camera tracking, and stabilization, so my intuition says it would work for this as well.

I've used the original Oculus development kit, and I've noticed the same thing, but I think with improved latency and frame rates the effect would be much stronger.
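
A minimal sketch (assuming NumPy) of the shift-and-add flavor of multi-frame super-resolution being described here. The per-frame sub-pixel shifts are assumed known; in practice they'd be recovered by motion tracking, and the frames below are synthetic placeholders:

import numpy as np

SCALE = 4  # reconstruct on a grid 4x finer than the input frames

def shift_and_add(frames, shifts, lr_shape):
    """frames: iterable of HxW arrays; shifts: per-frame (dy, dx) in low-res pixels."""
    h, w = lr_shape
    hi_sum = np.zeros((h * SCALE, w * SCALE))
    hi_cnt = np.zeros_like(hi_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                # Each low-res sample lands "between" the others on the fine grid.
                hy = int(round((y + dy) * SCALE))
                hx = int(round((x + dx) * SCALE))
                if 0 <= hy < h * SCALE and 0 <= hx < w * SCALE:
                    hi_sum[hy, hx] += frame[y, x]
                    hi_cnt[hy, hx] += 1
    # Average where samples landed; untouched cells would need interpolation.
    return np.divide(hi_sum, hi_cnt, out=np.zeros_like(hi_sum), where=hi_cnt > 0)

rng = np.random.default_rng(1)
frames = [rng.random((8, 8)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]
hi_res = shift_and_add(frames, shifts, (8, 8))

Unlike the jitter-and-average sketch earlier in the thread, this one actually increases the pixel count, which is the distinction being drawn above.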


In Topic: Multiverse theory

14 July 2014 - 03:59 PM

This was bad even by trolling standards and you should feel bad.

