


About clapton

  1. Hello! Do you know if it's possible to use the iPhone's Bluetooth to connect with other devices? Does the SDK supply any Bluetooth framework? I've searched the Apple developer pages with no results. Thanks in advance
  2. clapton

    Depth-sorting or Z buffer ?

    Quote (Original post by C0D1F1ED): "You could use SoftWire, it's the open-source version of the dynamic code generator I use in SwiftShader. The package contains a few documents that could help you get started."

    Thanks! You are great, man! That was exactly what I was looking for.

    Quote: "This is still the best VSD algo I know, nothing I've ever read seems better."

    Your solution reminds me of a similar technique described by Abrash in one of his articles. It seems to be a good choice if you always use BSP trees to store your geometry. Anyway, the z-buffer is the more general approach, I guess. Too bad that the university absorbs all the creativity and energy that I could use to finish my renderer. :P Cheers
  3. clapton

    Depth-sorting or Z buffer ?

    Quote (Original post by C0D1F1ED): "The ultimate approach is dynamic code generation though..."

    In the meantime it made me think. I guess there are plenty of applications of reflective programming in computer graphics. Using self-modifying code would mean that I can get rid of various state checks in time-critical loops, reduce branching, and even avoid vtables in some situations. As an example, imagine that during initialization of a scene I have specified a material for each object. I could then recompile the code responsible for computing the final pixel color (assuming the user won't change material properties at run-time). I am not sure if it can reach that far. At least I could control texel fetching modes this way. ;) Anyway, do you know any good reading on dynamic code generation? I guess the only way in C/C++ is to go down to assembly, but that's not a problem. Thank you
  4. clapton

    Depth-sorting or Z buffer ?

    Quote (Original post by C0D1F1ED): "MMX uses the same registers as the FPU, and you can't use them together (with the exception of the AMD-specific 3DNow!), but you can use MMX and SSE together without any trouble."

    OK, great. I must have misread something. Actually, I've been thinking of putting the majority of the fixed-point work (the rasterization) into MMX and all the transformations into SSE. Maybe just for the sake of simplicity I won't mix different SIMD sets.

    Quote: "What does work though in my experience is 'deferred rendering'. First render the whole scene but only doing z operations. This way your z-buffer will be filled with the z coordinate of the nearest geometry. Then you render the scene again with texture sampling and everything else, but not writing to the z-buffer. This way you have zero overdraw (for the second pass). A disadvantage is that every triangle is processed twice though. But for high overdraw it's a win."

    Sounds promising! It won't be hard to implement once I've got my Z-buffer fully working. I'll add this step before the final rasterization and see how it works. Have you used edge-equation rasterization in SwiftShader? Just curious. ;) I've got terrible tons of work at the university, so I don't know when I'll finish the Z-buffer. ;) If you like, I can let you know about the results, C0D1F1ED. Cheers
  5. clapton

    Depth-sorting or Z buffer ?

    Quote (Original post by C0D1F1ED, replying to my "Anyway you assumed that there is no depth complexity in the scene"): "True, high overdraw can really kill performance for a software renderer. If you have control over the application side as well, make sure you render front-to-back as much as possible."

    Actually, I've got pretty much full control in this case. There will be no way to render raw triangles (nothing like immediate mode in OpenGL); the smallest entity will be a 'mesh'. Each mesh will have its own bounding box, so before sending vertices down the pipeline I will clip meshes against the frustum and sort them once 'mesh assembling' is finished.

    Quote: "Normally you can keep z interpolation separate from the rest, so you can use floating-point there and fixed-point for the rest. It really depends on the precision you want."

    The reason I chose a fixed-point z-buffer is that you cannot mix floating-point operations with MMX (at least that's what I heard). I've read somewhere that there is a clock penalty when switching from MMX to floating point. Sure, SSE/SSE2 play along with MMX, but what about the coprocessor routines?

    Quote: "For a z-buffer 16-bit integer is enough, but you need to be quite careful there is no unnecessary precision loss. With floating-point it's a whole lot more straightforward and no slower in practice."

    How about a 32-bit z-buffer - maybe it's a better choice on a 32-bit architecture? What do you think of the hierarchical z-buffer, C0D1F1ED? I am not sure if scan-converting bounding boxes would be a good choice for scenes of small complexity, but the pyramidal z-buffer seems interesting (probably better CPU cache management). Thank you
  6. clapton

    Depth-sorting or Z buffer ?

    Quote: "A modern CPU can do tens of billions of operations per second. Say for example we have a 2.4 GHz CPU and we'd like to render at 800x600 resolution with 25 frames per second. That means you have 200 clock cycles per pixel, per frame. That's plenty. You'll only need a fraction of that for z-buffering."

    Nice example! :) Anyway, you assumed that there is no depth complexity in the scene. I am thinking about an integer z-buffer, since all my per-fragment operations are fixed-point (is that good or bad?). Then I could try to optimize the scan conversion with MMX.

    Quote: "But that certainly doesn't mean memory is slow in general. In the case of a z-buffer it is accessed mostly linearly, so it's really not a problem."

    Well, my doubts appeared after watching a few software renderers that run terribly on my PC (I won't name any though ;). My aim is to write something simple but highly interactive. :)
  7. clapton

    Depth-sorting or Z buffer ?

    I've investigated the idea of the hierarchical z-buffer. :) While it's a totally awesome technique, I am afraid it won't be suitable for my purposes. The reason is that my scene complexity is not going to be big (4k-5k tris, small depth complexity). The algorithm proves its value when it comes to huge data sets (millions of triangles, perhaps), but for smaller problems standard scan conversion appears to be slightly faster (at least that's what the authors say). Implementing the hierarchical z-buffer in pure software could cause overhead due to scan-converting the bounding boxes. Anyway, I am thinking about using the Z-pyramid on its own, as it can be used separately from the spatial occlusion tests. I know what I'll do: first I will code classical scan-conversion Z testing, and if the result comes out bad, I will worry about some other solution. ;)
  8. clapton

    Depth-sorting or Z buffer ?

    Hierarchical Z-buffers seem very interesting. I will check that out!

    Quote: "You could also think about tiling the screen, binning triangles based on intersections with tiles, and rasterizing one tile at a time. This means potentially rasterizing some tris more than once, but this way you can work with small chunks of the Z and color buffers that will fit nicely in cache."

    It's an interesting idea, but it would require clipping triangles against tiles, which could result in overhead. I'll think about it, though. Btw, is there any way to check which portions of memory are stored in the CPU cache? Hm. Thanks
  9. Hello! I am writing a software renderer (pure pleasure). I am stuck at the point where I should decide which surface visibility determination method to go for.

     A couple of years ago there was only one choice (depth-sorting by averaged Z coordinate per triangle), but these days CPUs are probably strong enough to handle Z-buffering. The truth is that Z-sorting is awful in terms of renderer design. First of all, it cannot guarantee correct results (most noticeable when objects are close to the camera). Second, it takes away a whole bunch of graphics-pipeline optimizations (i.e. sorting triangles by material / minimizing renderer state changes). But Z-sorting would be FAST anyway (the work is per triangle, not per pixel).

     The Z-buffer is great, but it introduces a lot of per-pixel computation. I am afraid that it can be too much even for today's CPUs. :/ Sure, perspective texture mapping requires 1/Z interpolation as well, but that's only a minor problem. There would be heavy memory traffic between the CPU and the Z-buffer, which surely won't fit the CPU cache (reading, writing ...). OMG.

     I thought about making a kind of hybrid solution - use the z-buffer for close objects and z-sorting for distant ones. But it doesn't hold together (what happens if there is a huge object which uses the z-buffer while it still covers distant meshes that don't use it?). I should make one choice. What do you think I should choose? The Z-buffer seems to be a blessing, but won't it kill all my effort to optimize the code? :F Thanks for your help, I'd like to know your opinion
  10. Remember that you can always render your scene in multiple passes (one pass for each light, additional passes for some other effects, etc.) and then blend the results of each pass to get the final image.
  11. clapton

    C++ : creating 'devices'

    Quote (Original post by Anon Mike): "You eventually will need to do something manually. In this case typically what you would do is in your initialization code you tell each device type to register itself. Then later on when the user gets around to asking for a device you have a list you can run through. You can minimize the amount of manual stuff you need to do by creating a device registration class that does the dirty work."

    OK, I get it. Actually it is pretty easy to do.

    Quote: "I don't know what you want virtual static functions for."

    I simply wanted to keep the global device-creating function within the class of the device - just trying to minimize the amount of global stuff. Thanks
  12. clapton

    C++ : creating 'devices'

    Hi!

    Quote (Original post by Anon Mike): "This can be done directly by having each individual device type register with the manager."

    This one sounds interesting. You mean there is a way to make device classes register automatically? Or does someone need to do that manually?

    Quote: "For the enforcement issue I like to use static member functions to do my creating. It means you can't say 'new Whatever()' and arrays are annoying but for the kind of things I go through this sort of effort for I don't care either."

    I tried to make use of static methods, but then I discovered that you obviously can't have virtual static methods. ;) Moreover, I didn't want anyone who implements 'a device' to add any methods that are not listed in the interface (!). The solution (my post above) is that device implementations require CreateDevice and DestroyDevice, but the only way to invoke these methods from outside the class is through the template functions. Perhaps I made it over-complicated, but it works fine. If you see any drawbacks with the solution, please let me know.

    Quote: "For destruction I tend to use ref counting and have objects that destroy themselves rather than explicit delete functions."

    Yeah, I've been thinking about it, but since I can't understand it clearly I decided not to use reference counting. I guess that ref counting concerns dynamic allocation only?

    What I am trying to do is make a simple abstraction for the rendering system. Depending on the platform, the user will switch 'devices' (win32 + BGI, win32 + DDraw, lin + SDL ...) while the actual renderer stays untouched. Thanks for your help
  13. clapton

    C++ : creating 'devices'

    In the meantime, I came up with the following:

        class IDevice
        {
        public:
            virtual void CreateDevice(int width, int height, const char* pTitle) = 0;
            virtual void DestroyDevice() = 0;
        protected:
            IDevice() {}
            virtual ~IDevice() {}
        };

        // createDevice / destroyDevice
        template<class T>
        void createDevice(T*& pDevice, int width, int height, const char* pTitle)
        {
            pDevice = new T();
            if (pDevice != NULL)
            {
                pDevice->CreateDevice(width, height, pTitle);
            }
        }

        template<class T>
        void destroyDevice(T*& pDevice)
        {
            if (pDevice != NULL)
            {
                pDevice->DestroyDevice();
                delete pDevice;
                pDevice = NULL;
            }
        }

        // now the implementation
        class Device : public IDevice
        {
            // These two must be befriended by every device implementation class
            friend void createDevice<Device>(Device*& pDevice, int width, int height, const char* pTitle);
            friend void destroyDevice<Device>(Device*& pDevice);
        public:
            void CreateDevice(int width, int height, const char* pTitle) {}
            void DestroyDevice() {}
        protected:
            Device() {}
            Device(Device&) {}
            virtual ~Device() {}
            Device& operator=(Device&) { return *this; }
        };

    And here is how I use this stuff:

        Device* pDevice;
        createDevice<Device>(pDevice, 640, 480, "window name");
        destroyDevice<Device>(pDevice);

    What do you think? :F
  14. Hello! I've been thinking for a couple of days about this one, but I can't find an elegant solution. Assume that there is an interface class which specifies 'a device'. Most probably there are many particular device implementations. I am looking for a way to 'create a device' selected by the user while not requiring the user to mess around with library code (i.e. no switch statement selecting an implementation). Here is what I'd like to achieve:

        // We've got a pointer to a concrete device
        deviceWin32::Device* pDevice;

        // There is a global function (perhaps a template?) which creates the device
        // IMPORTANT: this should be the ONLY way to create a device
        createDevice<deviceWin32::Device>(pDevice);

        // There is a global function which destroys the device
        // IMPORTANT: this should be the ONLY way to destroy a device
        destroyDevice<deviceWin32::Device>(pDevice);

      The second major problem is: how do I limit the library user to createDevice/destroyDevice and forbid any other way of Device creation? One solution is to make createDevice/destroyDevice friend functions of deviceWin32::Device, but it doesn't seem like an ideal solution. Thanks in advance!
  15. clapton

    Ant + ignore Java abilities

    Nobody uses Ant here? :/
