
Victory is mine!

_the_phantom_


So, having made the code corrections I rebooted into WinXP x64 to test out the program, because, y'know, blind coding isn't at all daft [grin]

Loaded up VS, hit run and watched as it crashed.
Oh dear.

A number of errors came to light; a great stream of GLSL errors on the console was the least of my worries, so I corrected the typos first. However, two of them were a case of me trying to use vertex attributes in a fragment shader... yes, of course, that was always going to work [rolleyes]

The program also crashed while trying to copy data from a mesh class to a VBO; it turned out this was a case of me being dumb: while constructing the data in the class I was incrementing the pointer to it, which of course meant the data was all there but my pointer was pointing at the end of it... doh! A couple of quick vars added later and this problem was solved.
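In sketch form the bug looked something like this (simplified, not the actual mesh class): if the only pointer you keep is the one you advance while filling the buffer, the VBO upload ends up reading from one past the end instead of from the start.

```cpp
// Sketch of the bug, not the real mesh class: the only pointer kept was the
// one being advanced while filling the buffer, so the later upload read from
// one past the end of the data instead of from the start.
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

class Mesh
{
public:
    explicit Mesh(std::size_t count) : data_(count), writePtr_(data_.data()) {}

    void addVertex(const Vertex& v) { *writePtr_++ = v; }

    // Buggy accessor: after construction writePtr_ points past the end.
    const Vertex* buggyData() const { return writePtr_; }

    // Fix: keep the base of the buffer separate from the write cursor.
    const Vertex* data() const        { return data_.data(); }
    std::size_t   sizeInBytes() const { return data_.size() * sizeof(Vertex); }

private:
    std::vector<Vertex> data_;
    Vertex*             writePtr_;
};

// glBufferData(GL_ARRAY_BUFFER, mesh.sizeInBytes(), mesh.data(), GL_STATIC_DRAW);
// rather than passing mesh.buggyData().
```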

The attributes one however took a bit longer as I had to convert the code from using attributes to using textures to store the data (yay! more FP32 data flying around, although I suspect I can get away with FP16 for this as well); my first method using PBOs didn't appear to go well so I swapped to a tmp buffer and a normal upload; this also appeared to fail until I realised I was coping too little data (a problem I keep having it seems...[sad])
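The temp buffer route boils down to something like this (a simplified sketch; the RGBA float packing, the GLEW header and the names are illustrative rather than the real code), with the size calculation being exactly where the 'too little data' mistake bites:

```cpp
// Sketch of the temp-buffer upload, assuming a GLEW-style loader and an RGBA
// float texture (GL_RGBA32F internal format); not the actual project code.
#include <GL/glew.h>
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

void uploadVertexDataToTexture(GLuint tex, int width, int height,
                               const float* vertexData, std::size_t floatCount)
{
    // The texture holds width * height texels at 4 floats each; copying any
    // less than that from the source is the 'too little data' bug.
    std::vector<float> temp(static_cast<std::size_t>(width) * height * 4, 0.0f);
    std::memcpy(temp.data(), vertexData,
                std::min(floatCount, temp.size()) * sizeof(float));

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,          // target, mip level
                    0, 0, width, height,       // region to update
                    GL_RGBA, GL_FLOAT, temp.data());
}
```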

With that all fixed the code finally came back to life and I had a TLM simulation going again, huzzah!

In theory it will now work on ANY object, as the vertices are displaced by the energy amount along their original normal, so for a plane it would be up and down and for a sphere it should move in and out... I currently lack a sphere mesh to test it with, however; I'll add that later.
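The displacement itself is dead simple on the CPU side; in sketch form (made-up struct and parameter names):

```cpp
// Minimal sketch of the displacement; struct and parameter names are
// illustrative rather than the actual mesh classes.
#include <cstddef>

struct Vec3 { float x, y, z; };

// Move each vertex along its *original* normal by the TLM energy value:
// up/down for a plane, in/out for a sphere, the same code either way.
void displaceAlongNormals(Vec3* positions, const Vec3* restPositions,
                          const Vec3* normals, const float* energy,
                          std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        positions[i].x = restPositions[i].x + normals[i].x * energy[i];
        positions[i].y = restPositions[i].y + normals[i].y * energy[i];
        positions[i].z = restPositions[i].z + normals[i].z * energy[i];
    }
}
```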

After this I moved to the CPU based ones... and dear god the CPU version is sloooooooooow. However, it's not the TLM sim which is the problem but the normal regeneration of the scene, as the same data run with GPU displacement but no normal adjustment (because I can't do that on the GPU, which kinda sucks) runs a lot faster.

The normal calculation uses 8 positional taps per vertex, however, so it's a little CPU intensive and probably a bit wonky cache-wise.
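In rough outline it's doing something like this (a simplified sketch rather than the exact code); the strided neighbour reads are where the cache trouble comes from:

```cpp
// Rough sketch of an 8-tap normal rebuild over a regular grid; the exact tap
// pattern here is illustrative. Note the strided neighbour reads.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalFrom8Taps(const Vec3* pos, int x, int y, int width, int height)
{
    static const int off[8][2] = { { 1, 0 }, { 1, 1 }, { 0, 1 }, { -1, 1 },
                                   { -1, 0 }, { -1, -1 }, { 0, -1 }, { 1, -1 } };
    const Vec3 centre = pos[y * width + x];
    Vec3 n = { 0.0f, 0.0f, 0.0f };

    // Sum the cross products of each pair of adjacent edge vectors around the
    // vertex, skipping taps that fall off the edge of the grid.
    for (int i = 0; i < 8; ++i)
    {
        int ax = x + off[i][0],           ay = y + off[i][1];
        int bx = x + off[(i + 1) % 8][0], by = y + off[(i + 1) % 8][1];
        if (ax < 0 || ay < 0 || ax >= width || ay >= height ||
            bx < 0 || by < 0 || bx >= width || by >= height)
            continue;

        Vec3 c = cross(sub(pos[ay * width + ax], centre),
                       sub(pos[by * width + bx], centre));
        n.x += c.x; n.y += c.y; n.z += c.z;
    }

    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```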

Once I'm done with this I'm going to copy the two CPU based ones and OpenMP the main loops to see if I can get some more performance out of them (I was considering a custom worker thread system but in the end decided I couldn't be bothered [grin]). Then, depending on the time, I might see if I can SSE/MMX them a bit in a 3rd version, but that's a MASSIVE maybe; in fact I might leave it until I'm done with my write up and see how much time I have left.
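The OpenMP part should be as simple as a parallel for over the outer loop; roughly this shape, with a stand-in for the actual TLM update:

```cpp
// Hypothetical shape of the OpenMP'd main loop; the update body is a stand-in
// for the real TLM maths, which isn't the point here.
void tlmStep(const float* current, const float* previous, float* next,
             int width, int height, float damping)
{
    #pragma omp parallel for
    for (int y = 1; y < height - 1; ++y)
    {
        for (int x = 1; x < width - 1; ++x)
        {
            int i = y * width + x;
            // Gather from the four neighbours and write into 'next';
            // placeholder arithmetic, not the actual simulation update.
            float sum = current[i - 1] + current[i + 1] +
                        current[i - width] + current[i + width];
            next[i] = damping * (0.5f * sum - previous[i]);
        }
    }
}
```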

The other thing I need to do is rig up a test system which can run each module with each mesh at each resolution a set number of times and extract timing information from each of them.
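Roughly this shape, I'm thinking (the interface and names below are just placeholders for the sketch, with QueryPerformanceCounter as the obvious timer choice on WinXP):

```cpp
// Rough shape of the timing harness; ISimModule and its parameters are
// placeholders for the sketch, not the project's real interfaces.
#include <windows.h>

struct ISimModule
{
    virtual void run(int meshId, int resolution) = 0;
    virtual ~ISimModule() {}
};

// Run one module against one mesh at one resolution a fixed number of times
// and return the average seconds per run.
double timeModule(ISimModule& module, int meshId, int resolution, int iterations)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    for (int i = 0; i < iterations; ++i)
        module.run(meshId, resolution);

    QueryPerformanceCounter(&end);

    double seconds = static_cast<double>(end.QuadPart - start.QuadPart) /
                     static_cast<double>(freq.QuadPart);
    return seconds / iterations;
}
```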

Oh, and I think my normals are backwards [grin] Need to check that maths at some point too [wink]

Right... back to the code!