

godmodder

Member Since 05 Nov 2005
Offline Last Active Nov 18 2014 05:14 AM

Topics I've Started

Voxel cone tracing problem

29 May 2014 - 09:38 AM

Hello,

 

I have successfully implemented ambient occlusion with voxel cone tracing. It is a very simple implementation that renders the scene in six directions into separate 3D textures.

 

To improve performance, I wanted to switch to the single-pass voxelization method. The voxelization works and I do get better performance. However, when I cone trace my ambient occlusion I now get very ugly self-intersection artifacts everywhere. This is because every cone now starts looking for intersections in a single 3D texture that comprises the entire scene. How do I avoid these artifacts? Has anyone had this problem with VCT before?

 

PS: I have tried offsetting the trace start point along the surface normal in the shader, which somewhat improves the situation but doesn't eliminate the artifacts completely. Unless I set it to a huge distance, of course, but then my shadows start disappearing.
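To illustrate, here is the offset idea as a minimal C++ sketch (illustrative only, not my actual shader code; `voxelSize` is assumed to be the edge length of one voxel at the finest mip level):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Push the cone origin along the surface normal by a small multiple of the
// voxel size, so the cone does not immediately sample the voxel that the
// surface itself was voxelized into (the cause of the self-intersection).
Vec3 coneStart(Vec3 position, Vec3 normal, float voxelSize, float offsetVoxels = 2.0f)
{
    return add(position, scale(normal, offsetVoxels * voxelSize));
}
```

The point is that the offset scales with the voxel grid resolution rather than being a fixed world-space distance, so it can stay just large enough to clear the surface's own voxel.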

 

Thanks for any suggestions,

Jeroen


decltype on std::async

01 November 2013 - 11:56 AM

Hello,

 

I was experimenting with some C++11. I want to create std::vectors of std::futures that come from std::async.

When using decltype(), this works for lambdas and functors, but not for regular functions:

 

#include <future>
#include <vector>

unsigned int m()
{
    return 1;
}

class n {
public:
    unsigned int operator()() { return 1; }
};

int main() {
    auto l = []() -> unsigned int { return 1; };

    std::vector< decltype( std::async(l) ) > r1;
    std::vector< decltype( std::async(n()) ) > r2;
    std::vector< decltype( std::async(m) ) > r3;   // <-- this line fails to compile

    return 0;
}

 

But the line I indicated fails with the following compile errors:

 

Error    1    error C2780: 'std::future<result_of<enable_if<std::_Is_launch_type<_Fty>::value,_Fty>::type(_ArgTypes...)>::type> std::async(_Policy_type,_Fty &&,_ArgTypes &&...)' : expects 3 arguments - 1 provided    c:\users\jeroen\documents\visual studio 2013\projects\threading\threading\main.cpp    38
Error    2    error C2893: Failed to specialize function template 'std::future<result_of<enable_if<!std::_Is_launch_type<decay<_Ty>::type>::value,_Fty>::type(_ArgTypes...)>::type> std::async(_Fty &&,_ArgTypes &&...)'    c:\users\jeroen\documents\visual studio 2013\projects\threading\threading\main.cpp    38
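For what it's worth, spelling out the future type directly sidesteps the deduction entirely; as far as I can tell, std::future&lt;unsigned int&gt; is exactly what std::async(m) returns here, so the following sketch (illustrative names) should be equivalent:

```cpp
#include <future>
#include <vector>

unsigned int m() { return 1; }

// Workaround sketch: name the type std::async(m) returns instead of asking
// decltype to deduce it through the overloaded std::async in an unevaluated
// context.
unsigned int sumOfAsyncResults(int count)
{
    std::vector<std::future<unsigned int>> futures;  // stands in for decltype(std::async(m))
    for (int i = 0; i < count; ++i)
        futures.push_back(std::async(m));            // evaluated context: deduction is fine
    unsigned int sum = 0;
    for (auto& f : futures)
        sum += f.get();                              // each future yields m()'s result
    return sum;
}
```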
 

 

Can somebody help me make sense of these errors?

 

Regards,

Jeroen


Virtual address space of 32-bit app in 64-bit OS

27 March 2013 - 01:03 PM

Hello,

 

My application is 32-bit and runs fine on huge datasets in 64-bit Ubuntu. Memory usage goes up to about 6.7GB of RAM.

However, on 64-bit Windows I get std::bad_alloc exceptions, presumably because a 32-bit app is limited to 2GB of virtual address space by default. I have tried linking with the /LARGEADDRESSAWARE flag, which raises this limit. That does delay the crash, but around 4GB it still fails.

 

Is there a way to circumvent this limitation on Windows without porting the whole application to 64-bit? And why doesn't Ubuntu suffer from this limitation, given that the application is 32-bit there as well?
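For anyone wanting to reproduce the numbers, a bounded allocation probe like the sketch below (illustrative, with a safety cap so it stays harmless) shows how many MiB a process can actually obtain before std::bad_alloc:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Allocate 1 MiB chunks until std::bad_alloc (or a safety cap) and report how
// many MiB were obtained. For a 32-bit process this stops near 2048 on Windows
// by default, and close to 4096 with /LARGEADDRESSAWARE on 64-bit Windows or
// under a 64-bit Linux kernel.
std::size_t probeMiB(std::size_t capMiB)
{
    const std::size_t chunk = 1024 * 1024;
    std::vector<char*> blocks;
    std::size_t got = 0;
    try {
        while (got < capMiB) {
            blocks.push_back(new char[chunk]);
            ++got;
        }
    } catch (const std::bad_alloc&) {
        // ran out of virtual address space (or commit limit)
    }
    for (char* b : blocks)
        delete[] b;
    return got;
}
```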

 

Thanks,

Jeroen


Lower bounds on the complexity of problems

19 October 2012 - 09:13 AM

Hello, everyone!

I had a session today with my colleagues to practise the defence of an important scholarship application. One of the work packages in my project is determining a lower bound on the complexity of a specific new algorithm. My colleagues questioned whether I could establish bounds on the complexity of the problem before finding an algorithm that actually solves it.

Now I'm sure this must be possible because, taking sorting as an example, it doesn't take long to realise that one needs to inspect at least every element to sort a whole array (hence linear complexity as an absolute lower bound). Of course, for comparison-based sorting the tight lower bound is n log n. It seems like a conclusion one could arrive at before knowing even one sorting algorithm.

Is it possible to determine complexity bounds of a problem before discovering an actual algorithm? If so, has this happened before (as with sorting, for example), so that I could use that particular case to motivate my answer?
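Incidentally, the sorting case can be made precise without referring to any particular algorithm: any comparison sort corresponds to a binary decision tree that must distinguish all n! input orderings, so its depth h (the worst-case number of comparisons) is bounded from below by

```latex
h \;\ge\; \log_2(n!) \;=\; \sum_{k=1}^{n} \log_2 k \;\ge\; \frac{n}{2}\,\log_2\frac{n}{2} \;=\; \Omega(n \log n)
```

which is exactly the kind of problem-level bound that exists prior to, and independently of, any concrete sorting algorithm.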

Thanks very much,
Jeroen

Linear wavelets

19 September 2012 - 03:00 AM

Hi,

I'm researching inverse rendering: the extraction of lighting, materials and geometry from photos taken from different viewpoints.
The current algorithm uses a hierarchical refinement procedure, based on a Haar wavelet tree, to guide the optimization process. The process is illustrated below:

[image: illustration of the hierarchical refinement process and estimated lighting]

As you can see, the estimated lighting is not smooth enough. In game development the most obvious thing to do would be to apply some smoothing filter, but I cannot do that here: it would make my estimation much more inaccurate, and this is not meant for games but for more critical visualisation applications.

So my idea was to replace the Haar wavelets with linear wavelets. Unfortunately, there is far less literature on them.
As we all know, Haar wavelets look like this:

[image: the Haar wavelet]

However, I've been scratching my head over what a linear (second-order) wavelet would look like. Would it look something like this?

[image: candidate sketch of a linear wavelet]
Also, if you know of any good literature on linear wavelets like these, please let me know.
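For context on what I mean by "linear": as I understand it from the lifting-scheme literature, the linear scaling function is the piecewise-linear "hat" (the degree-1 B-spline), and the corresponding wavelets are short finite combinations of translated, dilated hats. A minimal sketch of the hat and its two-scale refinement relation (my own illustration, not taken from a specific paper):

```cpp
#include <algorithm>
#include <cmath>

// Linear (degree-1) B-spline scaling function: the "hat" centered at 0,
// supported on [-1, 1], with hat(0) = 1 and hat(+-1) = 0.
double hat(double x)
{
    return std::max(0.0, 1.0 - std::fabs(x));
}

// Two-scale refinement relation of the hat:
//   hat(x) = 0.5*hat(2x+1) + hat(2x) + 0.5*hat(2x-1),
// i.e. the coarse hat is an exact combination of three half-width hats.
// Linear wavelets are built from the same half-width hats.
double hatRefined(double x)
{
    return 0.5 * hat(2.0 * x + 1.0) + hat(2.0 * x) + 0.5 * hat(2.0 * x - 1.0);
}
```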

Many thanks,
Jeroen
