rick_appleton

Member
  • Content count

    1491
  • Joined

  • Last visited

Community Reputation

864 Good

About rick_appleton

  • Rank
    Contributor

Personal Information

  • Interests
    Programming
  1. I'm wondering about something I haven't been able to find much info on. Assuming virtual texturing, how do you integrate materials that use regular, non-virtual textures? What ID do you give meshes with such a material, so that they render into the feedback buffer and you know the ID maps to an as-yet-unloaded plain old texture? I see two options:
     • Give the mesh any unique number initially, then when the texture actually needs to be paged into the physical texture, change the ID so that it points at the right location in the indirection texture.
     • Manage a BSP over the entire virtual texture space. When a plain texture comes in, find an empty region in the virtual texture and assign it an ID for that space. There is no need to actually load the texture until the system determines that the ID is needed; in this case the ID always points to a valid place in the indirection texture.
     Is either of these the 'usual' solution, or am I missing something? (A sketch of the second option follows below.)
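     A minimal sketch of the second option, assuming a simple free-region allocator over virtual texture space; all the names here (PlainTextureRegistry, Region, assignId) are hypothetical, and a trivial shelf allocator stands in for the BSP:

        #include <cstdint>
        #include <vector>

        // Hypothetical names throughout; sizes are in pages of the virtual texture.
        struct Region { uint32_t x, y, w, h; };

        class PlainTextureRegistry {
        public:
            static const uint32_t VT_PAGES = 1024;  // virtual texture side length, in pages

            // Reserve an empty region for a plain (non-virtual) texture and hand
            // out a stable ID for it. Nothing is loaded yet, but the ID already
            // points at a valid spot in the indirection texture.
            uint32_t assignId(uint32_t widthPages, uint32_t heightPages) {
                m_regions.push_back(findFreeRegion(widthPages, heightPages));
                return static_cast<uint32_t>(m_regions.size() - 1);  // index doubles as ID
            }

            // When the feedback buffer reports this ID, page the texture's data
            // into the physical texture at this region.
            const Region& regionFor(uint32_t id) const { return m_regions[id]; }

        private:
            // Stand-in for the BSP: a trivial shelf allocator. A real BSP/kd
            // split over the virtual texture space would waste less room.
            Region findFreeRegion(uint32_t w, uint32_t h) {
                if (m_x + w > VT_PAGES) { m_x = 0; m_y += m_shelf; m_shelf = 0; }
                Region r = { m_x, m_y, w, h };
                m_x += w;
                if (h > m_shelf) m_shelf = h;
                return r;
            }

            std::vector<Region> m_regions;
            uint32_t m_x = 0, m_y = 0, m_shelf = 0;
        };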
  2. rick_appleton

    From: Opengl texture quality

    You'll need to give your exact matrices and draw call info for us to really help. It probably has to do with the sampling position of the texels. The best link I could find in a few minutes is http://www.gamedev.net/topic/396461-solved-pixel-perfect-textures-opengl/
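     In case it helps while waiting for the details, a minimal sketch of the usual pixel-perfect setup in legacy OpenGL; it assumes a current GL context and a 1:1 texel-to-pixel mapping, and every parameter name is a placeholder:

        #include <GL/gl.h>

        // Draw a textured quad so each texel maps to exactly one window pixel.
        void drawPixelPerfectQuad(GLuint tex, int windowWidth, int windowHeight,
                                  float x, float y, float texW, float texH) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrtho(0.0, windowWidth, 0.0, windowHeight, -1.0, 1.0);  // 1 unit == 1 pixel
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();

            // Nearest filtering so texels are copied, not blended.
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

            // Quad at integer pixel coordinates, sized to the texture.
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(x,        y);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(x + texW, y);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(x + texW, y + texH);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(x,        y + texH);
            glEnd();
        }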
  3. rick_appleton

    iPhone Development Pitfalls

    Thanks for the tips!
  4. rick_appleton

    A particle storm...

    You know, I actually like the green quads. Reminds me of Darwinia.
  5. rick_appleton

    Moving, keep on moving...

    Good luck at Codies!
  6. rick_appleton

    Particles and Unemployment

    Sorry to hear that Rob, although I guess if you're okay with it I needn't be :D Any ideas where you want to go now?
  7. rick_appleton

    Generalized Animations

     Looking forward to it :D I'm actually not entirely sure I'll win. However, I dare say I've spent more time optimizing my version, so it'll be interesting to see the trade-off between time spent programming a CPU version and the power of a GPU version. Edit: Are you sure AssImp can import animations? I couldn't tell clearly from the site.
  8. rick_appleton

    Vertex Skinning Functional

     roel: you're right, I do believe it's not the most straightforward thing to implement on the CPU. However, I've got OpenCL and CUDA versions running, so I imagine a pure vertex shader version should be doable. Jason: yep, I'm using OpenGL. Currently the fastest skinning is simply doing it on the CPU, which surprises me; I would have expected the CUDA version to perform better. However, I'm running this on the cheapest MacBook Pro, with an integrated GPU, so I'm inclined to say that's the bottleneck. I still need to check this, though. And that is exactly what I'm proposing: fixing a scene, a viewpoint, and a render method (currently I'm doing simple full-bright texturing). Here's my current test scene:
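     For reference, the CPU path is roughly shaped like this, assuming standard linear blend skinning with a 4x4 matrix palette and four weights per vertex; all names are placeholders, not my actual code:

        #include <cstddef>

        struct Vec3 { float x, y, z; };
        struct Mat4 { float m[16]; };  // column-major

        // Transform a point by a 4x4 matrix (w assumed to be 1).
        Vec3 transformPoint(const Mat4& M, const Vec3& v) {
            return {
                M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12],
                M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13],
                M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14],
            };
        }

        // Linear blend skinning: v' = sum_i weight_i * (palette[bone_i] * v)
        void skinVertices(const Vec3* bindPos, const int* boneIndex,  // 4 per vertex
                          const float* weight,                        // 4 per vertex
                          const Mat4* palette, Vec3* out, std::size_t count) {
            for (std::size_t v = 0; v < count; ++v) {
                Vec3 acc = { 0.0f, 0.0f, 0.0f };
                for (int i = 0; i < 4; ++i) {
                    const float w = weight[v*4 + i];
                    if (w == 0.0f) continue;  // skip unused influences
                    Vec3 p = transformPoint(palette[boneIndex[v*4 + i]], bindPos[v]);
                    acc.x += w * p.x; acc.y += w * p.y; acc.z += w * p.z;
                }
                out[v] = acc;
            }
        }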
  9. rick_appleton

    Vertex Skinning Functional

     Doom3 has some nicely vertex-blended skinned models, and its format is really easy. I've been using those for my tests. I'd definitely be interested in hearing a like-for-like comparison of how my skinning code performs if you do decide to use Doom3 models.
  10. rick_appleton

    R9 Progress

     I've been following your progress on Epoch with quite some interest. It looks like you've been making great strides towards a useful language. As someone who has recently dived into CUDA, I'm wondering how you generate the code for CUDA, as getting optimal performance is not easy. I converted some Doom3 animation code to CUDA and optimized it on the CPU as well; while the CPU code was easy to improve, it has taken lots of tweaking to get good performance out of the CUDA code. Granted, I'm running on an integrated 9400M GPU, which is partially to blame for the bad performance. Still, you'll never easily get optimal GPU performance with this kind of automatic code generation. But then I guess that's not your goal? Rather, you just want 'better' performance on the GPU with minimal work? I'm looking forward to your next posts. Rick
  11. rick_appleton

    Almost done

    I picked this up via Twitter and started reading. Will attempt to finish when I have time. I hope all is well in Oxford?
  12. rick_appleton

    Untitled

     I thought the following was the standard way to do this?

        // Get number of bytes read so far
        u32 getNumBytes() const { return (m_nOffset+7)/8; }
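     The idiom rounds a bit count up to whole bytes. A quick sanity check, assuming m_nOffset counts bits consumed (bytesForBits is just an illustrative name):

        #include <cassert>
        #include <cstdint>
        using u32 = std::uint32_t;

        // (bits + 7) / 8 == number of whole bytes needed to hold 'bits' bits.
        u32 bytesForBits(u32 bits) { return (bits + 7) / 8; }

        int main() {
            assert(bytesForBits(0) == 0);
            assert(bytesForBits(1) == 1);  // a single bit still occupies one byte
            assert(bytesForBits(8) == 1);
            assert(bytesForBits(9) == 2);
            return 0;
        }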
  13. rick_appleton

    Unit Testing a renderer

     I've always been interested in Unit Testing and Test Driven Development. Unfortunately, I've never had the chance to apply them to any of my professional projects, so I decided to try them out at home. Since my interests lie with graphics programming and general software architecture, I've decided to apply Unit Testing, and to a lesser degree Test Driven Development, to my refactoring/rewriting/implementation of multiple renderers: initially only on the PC under Windows, but later also Linux and non-PC platforms. The different renderers would be OpenGL 1.5 era, OpenGL 3.0 minus deprecated functionality, Direct3D 9, and Direct3D 10/11. A software renderer might be a nice exercise, but I'm not sure I really want to do that. For non-PC platforms I want to write a DS renderer (homebrew).

     The first question I asked myself was "How does one Unit Test a renderer?" That seems fairly obvious: you generate pictures and compare them. But compare them with what? I decided I would initially generate a picture, manually verify that it is what I expect, archive it, and from then on compare against the stored picture. This works nicely for refactoring, but the initial step isn't quite 'testing'. However, I don't see a way around it really. Since the DS renderer in particular will be low-resolution, I've decided to keep these images smallish, at 128x128.

     To compare the images I initially wrote my own routines, which compared the images pixel by pixel. This worked well until I moved development onto a different PC. At that point the images generated by the renderer suddenly weren't matching the stored images. Looking closely, it turned out that in the WhiteTriangle image below the horizontal and vertical edges were each extended by one pixel, so the 45-degree edge was one pixel to the right and one pixel up. This totally broke my image comparison. At first I thought one of the images was incorrect because of driver issues, but after posting on the forums Brother Bob mentioned something about OpenGL allowing variations. A look in the latest OpenGL spec shows he's correct:

     Quote: OpenGL 3.0 Spec, Chapter 2.1
     The GL is designed to be run on a range of graphics platforms with varying graphics capabilities and performance. To accommodate this variety, we specify ideal behavior instead of actual behavior for certain GL operations. In cases where deviation from the ideal is allowed, we also specify the rules that an implementation must obey if it is to approximate the ideal behavior usefully. This allowed variation in GL behavior implies that two distinct GL implementations may not agree pixel for pixel when presented with the same input even when run on identical framebuffer configurations.

     What to do now? Clearly I needed to compare the images differently, allowing for some variation. Since I wasn't interested in writing image comparison routines myself, I took a look on the net and happily found a solution: PerceptualDiff. This GPL library does a perceptually based comparison of two images. After writing the interface between my code and PerceptualDiff, the 'errant' PC happily reports the images as identical. I'll need to keep an eye on it, as I don't know how different two images need to be to trigger a failed comparison, but so far it's working nicely. The process has already uncovered a latent bug in my file loading routines which had been there for years, and numerous bugs in my own image comparison code (which is scrapped now anyway).

     Overall I'm quite happy with how things are going. Now that the process is working, I'm going to attempt to write at least one test a day (on average). This should get the total test count up pretty quickly, and hopefully the test coverage will go up with it. Here are the first few images I'm testing against, so far only with a basic OpenGL 1.5 era renderer:

     • Default clear color (set inside each renderer to be dark gray)
     • Custom clear color
     • A simple triangle (both Projection and Model/View matrix are set to identity)
     • A translated triangle (Projection matrix is still identity)
     • A scaled triangle
     • A rotated triangle
     • An icosahedron (a 3D mesh, mostly in preparation for adding lighting in the next tests)

     Combinations of the above matrix operations are nice simple tests to add in my daily additions.
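     For comparison, this is roughly the shape of the naive pixel-by-pixel check I started with, extended with a per-channel tolerance; the names are made up for the sketch, and PerceptualDiff's actual metric is far more sophisticated than this:

        #include <cmath>
        #include <cstddef>
        #include <cstdint>

        // One 8-bit RGBA pixel.
        struct Pixel { std::uint8_t r, g, b, a; };

        // Returns true if every channel of every pixel differs by at most
        // 'tolerance'. With tolerance == 0 this is the exact comparison that
        // broke across GPUs; even a small tolerance still fails on the
        // shifted triangle edge, which is why a perceptual metric fits better.
        bool imagesMatch(const Pixel* a, const Pixel* b,
                         std::size_t pixelCount, int tolerance) {
            for (std::size_t i = 0; i < pixelCount; ++i) {
                if (std::abs(int(a[i].r) - int(b[i].r)) > tolerance) return false;
                if (std::abs(int(a[i].g) - int(b[i].g)) > tolerance) return false;
                if (std::abs(int(a[i].b) - int(b[i].b)) > tolerance) return false;
                if (std::abs(int(a[i].a) - int(b[i].a)) > tolerance) return false;
            }
            return true;
        }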
  14. rick_appleton

    More homepage

     How would this page look on a widescreen monitor? I use one part of the time, and I find it annoying that so many sites leave so much horizontal space unused (gamesindustry.biz, for example). iGoogle on the other hand handles this very nicely, but is probably overkill for this? Other than that I like the starkness of the mockup.
  15. rick_appleton

    Quick update

    Looking forward to the next part, and good luck with your health.