About sylpheed
  1. Hi guys, we released our new game a few hours ago. It's an arcade game inspired by the fantastic and fun Bomb Jack. You can download it from the App Store here. It's free! ;)
  2. How do you pick an edge?

    [quote name='luca-deltodesco' timestamp='1303740375' post='4802659'] It would suffice to simply find the closest edge to the cursor ray, choosing those less than your epsilon; if you like, you'd be intersecting your cursor ray with capsules that contain each edge, just as you do with spheres that contain your vertices. What could be even better (and far simpler) is to find the first intersected triangle and evaluate whether the intersection point is close to an edge/vertex, choosing those instead of the triangle face. If you needed wireframe support, with triangles that have no solid faces, you could simply continue along the ray whenever you intersect a faceless triangle whose intersection point is not sufficiently close to its edges/vertices. In both cases you can project the intersection point onto the given edge/vertex to get a picking point on the feature if needed. My second proposed method would also be much more easily adapted to make the 'thickness' of the edge/vertex screen-space constant, which would probably be far preferable: the cursor distance required to select an edge/vertex would then neither become too large as the feature approaches the near clip plane, nor too small to select reliably when it moves very far from the camera. [/quote] Thank you, luca, that's a really nice idea. I will search the picked triangles and then choose the nearest edge.
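Luca's second suggestion can be sketched as standalone code: cast the cursor ray against a triangle (Möller–Trumbore), then measure the hit point's distance to each edge to decide whether to pick the edge instead of the face. The tiny vector type and all names here are illustrative, not from the thread:

```cpp
#include <cmath>

// Tiny illustrative vector type; the editor's own math types would do.
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static double length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Möller–Trumbore ray/triangle intersection: returns true and writes the
// hit point when the ray (origin o, direction d) hits triangle (v0, v1, v2).
bool rayTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, Vec3& hit) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;  // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 tv = sub(o, v0);
    double u = dot(tv, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(tv, e1);
    double v = dot(d, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    double t = dot(e2, q) * inv;
    if (t < 0.0) return false;                 // triangle is behind the ray
    hit = add(o, mul(d, t));
    return true;
}

// Distance from the hit point to an edge [a, b]: if it falls below your
// epsilon, pick the edge (or a vertex) instead of the face.
double pointSegmentDistance(Vec3 p, Vec3 a, Vec3 b) {
    Vec3 ab = sub(b, a);
    double t = dot(sub(p, a), ab) / dot(ab, ab);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return length(sub(p, add(a, mul(ab, t))));
}
```

After `rayTriangle` succeeds, compare `pointSegmentDistance(hit, ...)` for the triangle's three edges against your epsilon and pick the nearest edge below it; otherwise keep the face.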
  3. How do you pick an edge?

    [quote name='wildbunny' timestamp='1303737022' post='4802644'] Have you tried doing distance to edge from click point? Then pick nearest? Cheers, Paul. [/quote] I thought of doing that at first, but there's the problem of "where is the clicked point in 3D space?". I could compute the picking ray's direction, but I'd need to know how far the clicked point is from the camera. I would need a plane or something to compute the point in 3D space and only then measure the distance to the edge. Sorry for my English; it's not very fluent.
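One way to sidestep "where is the clicked point in 3D?" is to never pick a single 3D point at all: build the ray through the cursor and compute the distance from that ray to each edge segment directly, then pick the edge with the smallest distance. A sketch of the ray/segment distance (standard closest-point math; the vector type and names are mine, not from the thread):

```cpp
#include <algorithm>
#include <cmath>

// Tiny illustrative vector type.
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Closest distance between a pick ray (origin ro, direction rd, parameter
// s >= 0) and an edge segment [a, b]; assumes a != b. No 3D click point is
// needed: just compare this distance against a threshold per edge.
double raySegmentDistance(Vec3 ro, Vec3 rd, Vec3 a, Vec3 b) {
    Vec3 d2 = sub(b, a);
    Vec3 r = sub(ro, a);
    double A = dot(rd, rd), E = dot(d2, d2);
    double B = dot(rd, d2), C = dot(rd, r), F = dot(d2, r);
    double denom = A * E - B * B;   // zero when ray and edge are parallel
    double s = (denom > 1e-12) ? (B * F - C * E) / denom : 0.0;
    if (s < 0.0) s = 0.0;           // clamp to the ray's origin
    double t = (B * s + F) / E;     // closest parameter on the edge's line
    if (t < 0.0)      { t = 0.0; s = std::max(0.0, -C / A); }
    else if (t > 1.0) { t = 1.0; s = std::max(0.0, (B - C) / A); }
    return length(sub(add(ro, mul(rd, s)), add(a, mul(d2, t))));
}
```

The ray origin is the camera position, and the direction can come from unprojecting the cursor (e.g. gluUnProject at two window depths); the click's actual depth never enters the computation.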
  4. How do you pick an edge?

    I'm coding a mesh editor and I need to pick edges. I have no problem picking vertices (sphere collisions) or polygons (triangle collisions), but I can't do line picking efficiently. I was thinking about picking the triangles of a box enclosing the edge, but that's a bit expensive. Does anybody know a way to do this? Thanks in advance.
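For reference, the vertex picking the poster already has working ("sphere collisions") reduces to checking whether the pick ray passes within a small radius of each vertex; a minimal sketch with hypothetical names:

```cpp
#include <cmath>

// Tiny illustrative vector type.
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length(Vec3 a) { return std::sqrt(dot(a, a)); }

// True when the pick ray passes within `radius` of `center` (and the
// closest approach is not behind the ray's origin).
bool raySphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
    Vec3 oc = sub(center, o);
    double t = dot(oc, d) / dot(d, d);  // ray parameter of closest approach
    if (t < 0.0) t = 0.0;               // vertex behind the camera
    return length(sub(add(o, mul(d, t)), center)) <= radius;
}
```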
  5. How does road creation work?

    Great answers, thank you. So, if the track is a static mesh, does that mean the graphics engine has loaded the whole track before the race starts? And in that case, does the track behave like a common character (in the "load/free model" sense), or should it be managed some other way?
  6. How does road creation work?

    Just curious. I'm a 3D developer, but I've never worked on a game. The other day I was playing Burnout and asked myself: how are roads generated? Generally speaking, are they static meshes or dynamic meshes? Thanks in advance.
  7. Does glAccum use an AA filter?

    Thanks for your answer. Old hardware is not a problem for our purposes; we have high system requirements, so framebuffer objects should be supported by clients. I'll try framebuffers, but what would you use to mix the results rendered into them? With the accumulation buffer I had: for (...) { ... glAccum(GL_ADD, ...); ... } glAccum(GL_RETURN, 1.0f); With framebuffers, should I use glTexEnvi with GL_COMBINE instead of glAccum(GL_ADD, ...)? Thanks again.
  8. Hi all, I have a problem with AA in my application: I need to use the glAccum function in a post-process filter. If I render my objects without this post-process filter (so, not using glAccum), the objects appear fine, without jaggies. But if I activate the post-process filter (so, now using glAccum), the objects appear with jagged edges. I was wondering whether glAccum applies AA or not. In my application, the result of glAccum(GL_RETURN, ...) is stored in a texture that the filter processes; after that, I render a fullscreen quad with the filtered texture. I have tried showing the glAccum result directly on screen, but it's the same: jagged objects. So my question is: is there a way to activate AA for glAccum calls? Thanks, and sorry for my English.
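For what it's worth, glAccum applies no filtering by itself: accumulation-buffer AA only appears when you average several slightly jittered renders, each added with glAccum(GL_ACCUM, 1/N) and read back with glAccum(GL_RETURN, 1.0). The arithmetic can be modeled on the CPU (the pixel values below are made up for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// CPU model of accumulation-buffer antialiasing: each of N jittered renders
// is added into the accumulator scaled by 1/N, as glAccum(GL_ACCUM, 1/N)
// would do; the returned vector plays the role of glAccum(GL_RETURN, 1.0).
std::vector<double> accumulateAverage(const std::vector<std::vector<double>>& frames) {
    std::vector<double> accum(frames.front().size(), 0.0);
    double scale = 1.0 / frames.size();
    for (const auto& frame : frames)            // one scaled add per render
        for (std::size_t i = 0; i < frame.size(); ++i)
            accum[i] += frame[i] * scale;
    return accum;
}
```

An edge pixel covered in two of four jittered renders comes back as 0.5, exactly the intermediate shade that smooths the jaggy; a single unjittered render, accumulated or not, stays fully aliased.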
  9. Hi all, I'm having a problem drawing two objects in the scene. They're very close to each other, and they don't render correctly when the camera moves away: some fragments end up on top of others when they shouldn't (yeah, the typical depth-precision problem). I have searched Google and found this: It seems to solve my problem, but that example uses an FBO. Is there any way to increase OpenGL depth-buffer precision without using an FBO? Thanks in advance, and sorry for my poor English.
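One FBO-free lever: a standard perspective depth buffer spends almost all of its precision just past the near plane, so pushing the near plane out helps far more than pulling the far plane in. A quick sanity check of the window-depth formula (the distances in the test are made-up examples):

```cpp
// Window-space depth produced by a standard perspective projection for an
// object at eye-space distance z, with near plane n and far plane f:
//     d(z) = (f / (f - n)) * (1 - n / z)
// The derivative f*n / ((f - n) * z^2) shows depth resolution at a fixed z
// scales roughly linearly with n: a larger near plane spreads more of the
// buffer's codes across distant geometry.
double windowDepth(double z, double n, double f) {
    return (f / (f - n)) * (1.0 - n / z);
}
```

Other FBO-free options in the same spirit are glPolygonOffset for nearly coplanar geometry, or requesting a deeper depth buffer from the windowing system if one is available.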
  10. Finally... I DID IT!! Tears in my eyes; it's beginner code, but it has been soooo long... and now it's a happy ending :) Thanks, Bluntman, for your help; you gave me the trick! The steps are: 1) Render all previous passes into FBOs (the last pass uses all the FBO textures to compute its result). 2) In the last pass, render a fullscreen quad (the equivalent of "Draw=Buffer"). For those who have the same problem: don't forget to bind the FBO textures to the CgFX texture samplers!! ;) Best regards.
  11. Quote:Original post by bluntman Well, I haven't used FX Composer, but from what I understand of what you say, Draw=Buffer is used to cause a full-screen quad to be drawn to the screen using the specified shader. So to do this in OpenGL with Cg you need to: 1) set the shader you want to apply to the fullscreen quad (the edge-detection shader, I guess); 2) attach the texture (your FBO containing the render of the normals) to the shader parameter; 3) draw a quad that fills the entire screen. That should be it! I have code, but it is quite modularised and split up, so it wouldn't be easy to understand immediately; still, it should be simple(ish) to do what you are trying to do.

    Again, thanks a lot for your help. For now, we have postponed the post-process for two weeks; we're in a hurry with other parts of the application. I will let you know when we take it up again. Thanks for your time. Best regards.
  12. I'm almost there... but I need help. I've downloaded another shader from the Nvidia library; its algorithm is very similar and it provides various techniques, one of which is very "simple": Pass 1 -> compute the normals; Pass 2 -> "do something" (in this case, detect the object's edges) using the normals from the previous pass. The technique looks like this:

    Quote:
    technique NormsOnly <
        string Script = "Pass=Norms;"
                        "Pass=ImageProc;";
    > {
        /*
         * PASS 1
         */
        pass Norms <
            string Script = "RenderColorTarget0=gNormTexture;"
                            "RenderDepthStencilTarget=gDepthBuffer;"
                            "ClearSetColor=gClearColor;"
                            "ClearSetDepth=gClearDepth;"
                            "Clear=Color;"
                            "Clear=Depth;"
                            "Draw=Geometry;";
        > {
            VertexProgram = compile vp40 simpleVS(gWorldITXf, gWorldXf, gViewIXf, gWvpXf, gWorldViewXf);
            DepthTestEnable = true;
            DepthMask = true;
            CullFaceEnable = false;
            BlendEnable = false;
            DepthFunc = LEqual;
            FragmentProgram = compile fp40 normPS();
        }
        /*
         * PASS 2
         */
        pass ImageProc <
            string Script = "RenderColorTarget0=;" // re-use
                            "RenderDepthStencilTarget=;"
                            "Draw=Buffer;";
        > {
            VertexProgram = compile vp40 edgeVS(QuadScreenSize, gNPixels);
            DepthTestEnable = false;
            DepthMask = false;
            BlendEnable = false;
            CullFaceEnable = false;
            DepthFunc = LEqual;
            FragmentProgram = compile fp40 normEdgePS(gNormSampler, gThreshhold);
        }
    }

    Now, thanks to bluntman, I have rendered the normals to a texture (using framebuffer objects), and the shader has access to this texture. This happens in pass one, and it works; I have tested it: it's a teapot on a triangulated plane :). The problems come in pass 2, because of the FX Composer scripts: I don't know how to perform the script command "Draw=Buffer;" in OpenGL. I've read this in the SAS notes for CgFX:

    Quote: Passes can also specify what to draw in each pass – either the geometry from the scene, or a screen-aligned quadrilateral that will exactly fit the render window. We use the pass Script "Draw" command to choose: either "Draw=Geometry;" for (you guessed it) geometric models, or "Draw=Buffer;" for a full-screen quad. If neither is specified, a "Draw=Geometry;" will be implied at the end of your pass Script.

    That sounds pretty cool in FX Composer, but I have no idea how to implement it using OpenGL and the CgFX API. How could I do this? This is a nightmare; ten hours non-stop, and I only see code around me :( Please, could anyone help me? If anybody has an example of post-processing with OpenGL + CgFX, please share it; I would appreciate it so much. Best regards.
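In raw OpenGL, "Draw=Buffer;" boils down to: set the pass's state (cgSetPassState), bind the previous pass's FBO texture to the sampler, and draw one quad whose corners land exactly on the viewport. With identity modelview and projection matrices, those corners are simply the extremes of normalized device coordinates; a sketch (the GL/Cg calls are left as comments because they depend on your setup):

```cpp
#include <array>

// One vertex of the fullscreen quad: x, y in normalized device coordinates,
// u, v texture coordinates into the FBO color texture.
struct QuadVertex { float x, y, u, v; };

// With glLoadIdentity() on both GL_MODELVIEW and GL_PROJECTION, these four
// corners cover the render window exactly, which is what "Draw=Buffer;"
// draws in FX Composer.
std::array<QuadVertex, 4> fullscreenQuad() {
    return {{
        {-1.f, -1.f, 0.f, 0.f},
        { 1.f, -1.f, 1.f, 0.f},
        { 1.f,  1.f, 1.f, 1.f},
        {-1.f,  1.f, 0.f, 1.f},
    }};
}

// Per frame, roughly:
//   cgSetPassState(imageProcPass);        // the pass with Draw=Buffer
//   glBindTexture(GL_TEXTURE_2D, fboTex); // result of the previous pass
//   glBegin(GL_QUADS); ... glEnd();       // emit the four vertices above
//   cgResetPassState(imageProcPass);
```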
  13. You can't imagine how much I appreciate your help :D Thank you very, very much. After two days in hell, I see a glimmer of hope in my code. Best regards.
  14. First of all, sorry for my poor English, and sorry again if this thread is not in the correct section; I am new to the forum :) I have an application that uses C++, OpenGL, and Cg. I have no problem rendering my objects with Cg shaders. In fact, I'm using FX Composer to create the shaders, so I have to use techniques and passes. The problem is that I do not know how to apply a post-process shader to the whole scene. I have downloaded a shader from the Nvidia library, specifically this: It works in FX Composer, but I can't use it in my app because I'm not sure how to implement it in my code. For standard shaders, I do this:

    Quote:
    CGpass pass = cgGetFirstPass(technique);
    while (pass) {
        cgSetPassState(pass);
        glCallList(...); // Render the mesh
        cgResetPassState(pass);
        pass = cgGetNextPass(pass);
    }

    Please, could you help me? I've searched Google for two days, but I can't find anything :(