pauljan

Members · Content count: 60 · Community Reputation: 137 Neutral · Rank: Member

About pauljan

  1. [Android/help]loading 3d models

    Sure there are, google some more. A good example would be [url="http://libgdx.badlogicgames.com/"]libgdx[/url]. It doesn't let you load .3DS out of the box, so you'll have to convert to one of the model formats it [i]does[/i] support. The point, however, is that your lack of experience in game development, and 3D development in particular, [b]is very likely to stop you from using such an engine effectively[/b]. I'd suggest you try Unity instead, as it's one of the [i]friendliest[/i] engines out there. Just use the free version to prototype the game on any of the supported platforms. This will teach you enough about the concepts involved in creating a 3D game to give you a much better start. You'll then be able to decide whether it's worth the money to buy the Android license. At the very least, you'll have a much better idea of how much work would be involved in getting to the same point with any of the free engines.
  2. So here I have 8 vertices (I actually have face information too), as drawn by the artist... all in world coordinates. From this I need to derive the Collada OBB collision volume, meaning I need a translation, rotation and half-extents. Rotation is to be expressed as up to three rotations around arbitrary axes. For an AABB this is easy enough, but deriving the rotation for a non-aligned BB is causing me a bit of a headache. Any pointers would be very much appreciated!
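For what it's worth, if the exporter guarantees a known corner ordering, the OBB parameters can be read straight off the edge vectors. A minimal sketch, assuming `corners[1]`, `corners[2]` and `corners[4]` are the edge neighbours of `corners[0]` (an assumption about the art pipeline, not a given):

```python
import math

def obb_from_box_corners(corners):
    # corners: list of 8 (x, y, z) tuples in world coordinates.
    # Assumes index = x + 2*y + 4*z corner ordering, so corners[1],
    # corners[2] and corners[4] are the edge neighbours of corners[0].
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def norm(v):
        return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

    # Translation: the box center is the average of all eight corners.
    center = tuple(sum(c[i] for c in corners) / 8.0 for i in range(3))
    # The three edges leaving corner 0 define the box frame.
    edges = [sub(corners[k], corners[0]) for k in (1, 2, 4)]
    half_extents = [norm(e) / 2.0 for e in edges]
    # Rows of the rotation matrix: the normalized edge directions.
    axes = [tuple(x / norm(e) for x in e) for e in edges]
    return center, axes, half_extents
```

Converting the rotation matrix to the up-to-three axis/angle rotations Collada expects is then a separate, standard matrix-decomposition step.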
  3. center (not weighted) of points

    The 'center of AABB' approach has the drawback that it produces counterintuitive results for fairly trivial shapes. Imagine the 3D equivalent of an axis-aligned right triangle (an AABB cut in half, so to speak): the AABB center will lie on the edge of the shape. If you are going to visualize this position, or use it as the pivot for rotation, it will look 'wrong'. Let us know what algorithm you end up using!
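The difference is easy to see in code. A minimal sketch contrasting the vertex centroid with the AABB center, using the 2D analogue of that half-cut shape (a right triangle):

```python
def vertex_centroid(points):
    # Unweighted average of the vertices.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def aabb_center(points):
    # Midpoint of the axis-aligned bounding box.
    dims = range(len(points[0]))
    return tuple((min(p[i] for p in points) + max(p[i] for p in points)) / 2.0
                 for i in dims)

# Right triangle: the AABB center (0.5, 0.5) lies exactly on the
# hypotenuse, while the vertex centroid (1/3, 1/3) is safely inside.
tri = [(0, 0), (1, 0), (0, 1)]
```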
  4. Nice replies guys, very insightful, thanks! You are quite right in pointing out that a true fixed-distance margin would yield a curve around each vertex. And indeed, that is not necessary for my goal.

    Agony: Thanks for the art, very clear! One drawback would indeed be the huge margin at small angles, but that's something all approximations seem to suffer from. The bad news is that small angles are not as infrequent as you would think, since we are talking about 3D polygons projected onto a 2D screen here, so they often end up long and thin, with small angles. Then again, the good news is that a too-big margin is not much of a problem, as I have some post-processing in place after the pick has been made (I am secretly picking vertices; I am abusing the polygon picking to make the vertex picking account for occlusion).

    Matches81: Nice idea, but it does suffer from the same problem as my original approach if you only scale the vertices by a unit-size vector times the margin. Near vertices with small edges, the margin gets too small. I have attached a little sketch to make this a bit clearer; notice how closely the yellow line approaches the original polygon near the top of the triangle?

    hplus0603: Isn't it nice to be communicating on the gamedev forums for a change? Thanks for the cos(beta/2) trigonometry! I am still in doubt whether I should opt for the margins-growing-toward-unlimited-at-small-angles approach, leading to rare but potentially bizarre false positives, or add extra vertices, implying a slightly larger cost for the point-in-polygon test. By using two vertices instead of one, I can create a much nicer bounding volume for the implicit 'fixed margin curve'.

    (The yellow margin is the fixed pixel margin I am trying to approximate; the blue squares are the vertices I propose to add.) One could take the effort to quantify the error for a given polygon (total area introduced outside the fixed margin), but like I said before: I have a heuristic in place that compensates afterwards for the errors resulting from a too-large picking size anyway, so it isn't that important. I do, however, like the fact that this approach handles the 'degenerate' case where the bend is 180 degrees, so no worries about numerical instability when things approach that limit. Well, I think I will sleep another night on this and decide in the morning. Thanks again guys for contributing, you have been a great help!
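The 'two vertices instead of one' idea amounts to a bevel-style offset: push each edge outward along its normal by the margin, keeping both endpoints. A minimal sketch for a convex, counter-clockwise polygon (note that a bevel join slightly undershoots the margin right at sharp corners, while a miter join would overshoot):

```python
import math

def offset_convex_polygon(poly, margin):
    # poly: CCW list of (x, y) vertices of a convex polygon.
    # Each edge is pushed out along its outward normal by `margin`,
    # yielding two vertices per original corner (a bevel join).
    out = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        ex, ey = x1 - x0, y1 - y0
        length = math.hypot(ex, ey)
        nx, ny = ey / length, -ex / length  # outward normal for CCW winding
        out.append((x0 + nx * margin, y0 + ny * margin))
        out.append((x1 + nx * margin, y1 + ny * margin))
    return out
```

The result has 2n vertices, exactly the doubled point-in-polygon cost weighed in the post, but the margin stays bounded no matter how sharp the corner gets.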
  5. 3D Cube Example

    The word you are looking for is "Rasterization" (or "triangle rasterization"). Keep on reading that tutorial Paulshady pointed you at: http://www.devmaster.net/articles/software-rendering/part3.php http://www.devmaster.net/articles/software-rendering/part4.php Then use google to find more.
  6. Hi folks, what I am ultimately trying to do is pick a (convex) polygon in 3D, given a fixed-size (in screen space) picking margin. Because of that last constraint, I am currently doing: 1. Project the polygon to screen coordinates (using the modelview and projection matrices). 2. Extend the now-2D polygon on all sides by [margin] pixels. 3. Do a simple 2D point-in-polygon test with the mouse coordinates. I've got (1) and (3) basically covered, but (2) is giving me a bit of a problem. How do I extend a 2D polygon by (at least) X pixels on all sides? Simply scaling all vertices out from the centre doesn't work (obviously, but I had to try it out first to get the picture :D). I frankly don't know how to handle this. Perhaps I should, at each vertex, move X pixels in the 'outward normal' direction of one edge, and add an _extra_ vertex at X pixels in the normal direction of the other edge? This would mean effectively doing the point-in-polygon test against double the number of vertices, but I don't mind that as long as I feel assured this approach makes at least a bit of sense. I apparently don't yet know the correct terminology to search for, as Google has thus far not turned up any useful information. Any help is most appreciated!
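Step (3) is the easy part for a convex polygon: the point is inside iff it lies on the interior side of every edge. A minimal sketch, assuming counter-clockwise winding in screen space:

```python
def point_in_convex_polygon(point, poly):
    # poly: CCW list of (x, y) vertices. The point is inside (or on the
    # boundary) iff it lies to the left of, or on, every directed edge.
    px, py = point
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        # 2D cross product of edge vector and vertex-to-point vector;
        # negative means the point is on the right (outside) of this edge.
        cross = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if cross < 0:
            return False
    return True
```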
  7. how to update normal instantly?

    The YAGNI part referred to simply recalculating all normals, as long as you are not actually running into speed problems at the moment. More relevant to your question here is the 'invalidate' part. Upon changes, you don't immediately recalculate the affected normals; you simply 'invalidate' them (set them to NULL, keep a flag, whatever you like). Only once you actually need them do you recalculate them, either through an explicit call (i.e. "calculateInvalidatedNormals()") or through lazy evaluation of the normals. Hope that helps!
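The invalidate/lazy-recalculate pattern can be sketched in a few lines; `recalc` here is a stand-in for whatever per-normal computation the modeler actually performs:

```python
class MeshNormals:
    # Minimal sketch of invalidate-then-lazily-recalculate.
    def __init__(self, vertex_count, recalc):
        self._normals = [None] * vertex_count
        self._recalc = recalc  # callable: vertex index -> normal

    def invalidate(self, index):
        # Mark the normal dirty; do no work yet.
        self._normals[index] = None

    def normal(self, index):
        # Lazy evaluation: recompute only on first access after a change.
        if self._normals[index] is None:
            self._normals[index] = self._recalc(index)
        return self._normals[index]
```

Repeated reads cost nothing extra; only normals that were actually invalidated and then actually needed ever get recomputed.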
  8. GL/GLSL: How to approach a problem?

    Depending on how complex that 'other' stuff is, and how much of it there is, perhaps you can use glFeedbackBuffer and the GL_FEEDBACK render mode to record the transformed primitives, replaying them later with your desired line width.
  9. how to update normal instantly?

    I take it your vertex normals are simple averages of the normals of the surrounding faces, right? One approach is to keep an information structure up to date with vertex->faces mappings (or vertex->edges and edges->faces), also known as 'reversed topology'. Once you have this structure, updating only the affected normals should be easy (I don't really understand the complicated case you tried to sketch out, sorry). Such a reversed-topology structure is quite a lot of work to maintain, but it is absolutely necessary for most non-trivial modeler operations anyway. Then again, depending on the current state of your modeler, you might not need such non-trivial operations yet, and may be better off sticking to the YAGNI approach of invalidating and recalculating all the normals once they are needed. Try it. You'll be surprised at the speed.
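The 'reversed topology' idea can be sketched as a plain vertex-to-faces dictionary; the face and vertex numbering here is illustrative only:

```python
def build_vertex_to_faces(faces):
    # faces: list of vertex-index tuples. Returns the 'reversed topology'
    # mapping: vertex index -> list of face indices that use it.
    mapping = {}
    for fi, face in enumerate(faces):
        for v in face:
            mapping.setdefault(v, []).append(fi)
    return mapping

def affected_faces(mapping, moved_vertices):
    # Only faces touching a moved vertex need their normals recomputed.
    out = set()
    for v in moved_vertices:
        out.update(mapping.get(v, ()))
    return out
```

After moving a vertex, `affected_faces` gives the minimal set of face normals to redo, and the vertex normals of every vertex on those faces follow from re-averaging.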
  10. OpenGL OpenGL Extension

    I take it you just want to map that texture on a quad in the background and show that with 1:1 pixel accuracy anyway? Personally, I'd write a routine to chop up an image into smaller textures mapped onto individual rectangles, using the closest power of 2 numbers you can get (with some threshold). In this case, that would imply chopping the 800x600 image into a 512 + 256 + 32 by 512 + 64 + 16 + 8 image. Or 512 + 64 + 32 if you want to draw the line at individual 32x32 textures. A bit tricky to set up correctly, but in general those extra polygons are cheaper than wasted texture memory. It depends very much on your specific application.
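The chopping scheme boils down to greedily decomposing each dimension into descending powers of two. A minimal sketch, with a `minimum` tile-size cut-off standing in for the 32x32 threshold mentioned in the post:

```python
def power_of_two_split(size, minimum=32):
    # Greedily decompose a texture dimension into descending powers of
    # two, no tile smaller than `minimum`; any leftover sliver is rounded
    # up to one last minimum-sized tile (a little waste, never a failure).
    parts = []
    p = 1
    while p * 2 <= size:  # largest power of two not exceeding size
        p *= 2
    while size >= minimum and p >= minimum:
        if p <= size:
            parts.append(p)
            size -= p
        p //= 2
    if size > 0:
        parts.append(minimum)
    return parts
```

With `minimum=1` this reproduces the exact splits from the post (800 -> 512+256+32, 600 -> 512+64+16+8), and with `minimum=32` the coarser 512+64+32 variant for the 600 side.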
  11. Multitexturing and blend

    I think I see what is causing your confusion here: multitexturing != scene blending. You want to blend the two water textures together, then blend the water into the scene using glBlendFunc. Make sure to enable GL_BLEND, and render transparent surfaces last (and sorted). My humble apologies if this was not at all what you were getting at.
  12. Per-material culling setting

    Ah, forgot about the normals, good point (taking notes). Your custom solution sounds good; our exporters could take care of this on the fly. Then again, I still think it would be better if the user just creates both sides of the geometry himself; our modeler interface should make this trivial. I guess it comes down to this: you don't need an extra concept (culling) to create two-sided geometry, so I'd rather keep things simple and not introduce it. AFAIK there is no significant overhead.
  13. 1. Make sure the HWND your rendering context is bound to does not get disabled when Delphi does its "DisableTaskWindows" trick to make the form modal. Try enabling the Debug DCUs; that usually helps when resolving message/event-based problems. 2. Does your redrawing event get called continuously, or only once? Other than that, rendering OpenGL graphics in a panel on a modal form should work just fine. You might want to show us some code, so we can help with the debugging.
  14. Of late, we have been getting some requests to implement "double sided" materials in our 3D modeling package. Effectively, this means turning culling on/off per material. I have always been under the impression that implementing culling on a per-material basis is a bad idea, and that as a rule of thumb you should just always use culling (and build double-sided geometry where needed). However, I was wondering what the 'general consensus of the community' is on this topic. Am I dead wrong here? Is everyone using non-culling materials in their games (for things like transparent/semi-transparent surfaces)? Does the reduced polycount outweigh the extra material state (splitting up the rendered batches)? Are there any rendering artifacts I am not aware of that can only be resolved using non-culled rendering? Any feedback is most appreciated.
  15. OpenGL History of openGL

    The short history section at http://en.wikipedia.org/wiki/Opengl isn't exactly much more complete, but it definitely is a lot more structured.