Everything posted by bleyblue2

  1. bleyblue2


    Well, I think I'll stick with it then. Thanks for your reply.
  2. bleyblue2


    In my small projects, I have until recently been using a CDXApp class. Only one instance of this class exists at any time, declared in WinMain. WinMain calls DXApp.Render() each frame (while retrieving Windows messages and eventually passing them on to this class). It also calls DXApp.Initialize() at startup and a cleanup function before the program ends. CDXApp also contains other top-level objects like CInput and CGUI instances, as well as some general-purpose variables such as fTime, a pointer to the D3D device, etc. Now, I wonder if this is really good coding practice and whether I shouldn't refactor/modify it. So far, three main ideas come to mind: - keep CDXApp and make it a singleton (making Washu angry, since I don't really need a "global point of access" to it). This isn't really different from the current situation. - use "global" variables inside a namespace (i.e. move all the class's contents into this namespace) - move the class data members into the WinMain function (thus becoming local variables), and the member functions into main.cpp (making them global) - but at the cost of clarity, since window-management code would be mixed with, for example, the profiling (timing) code. So, what do you guys think, and what have you implemented in your code? Keep in mind that these are only small samples, not meant to become an entire game engine - still, I'd like to organize my code better. Any help is appreciated.
  3. Hi everyone. I'm currently trying to render a starfield (i.e. stars as seen from space), but I'm falling short of ideas and documentation on how to proceed. I already know how to generate an arbitrarily sized (rectangular) image of stars with Photoshop, but I don't know how to render a starfield in 3D. The two methods I can think of at the moment are the following: 1- Spherical mapping: the starfield is rendered on a sphere surrounding the viewer (as used in sky domes). The problem is generating the texture (and the corresponding UV mapping) that I'd apply to my 3D sphere model (keep in mind that since I'm generating starfields with Photoshop, a method involving this software would be preferred), i.e. how do you realistically map a 2D texture representing space onto a sphere? Any advice is welcome. 2- Skybox: perhaps the simplest method; the main issue with this approach is the generation of the six faces, as I've got (almost) no idea how to handle the seams between the faces - at least 14 seams to "mirror" between adjacent faces, which seems a cumbersome process. As stated before, I'm mainly using Photoshop, so if there's a trick I don't know for quickly making a skybox from a rectangular texture, please tell me. Thanks in advance.
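For the sphere-mapping route, one way to sidestep the texture-seam problem entirely is to skip the texture and place the stars procedurally, as points on a unit sphere around the viewer, rendered as point sprites. A minimal C++ sketch of the placement step (the function name and the use of rand() are mine, not from any of the replies):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate 'count' directions uniformly distributed on the unit sphere.
// Picking z uniformly in [-1,1] plus a uniform azimuth gives an unbiased
// distribution (no clustering at the poles, unlike naive lat/long sampling).
std::vector<Vec3> GenerateStarDirections(int count, unsigned seed)
{
    std::srand(seed);
    std::vector<Vec3> stars;
    stars.reserve(count);
    for (int i = 0; i < count; ++i) {
        float z   = 2.0f * std::rand() / RAND_MAX - 1.0f;   // cos(theta)
        float phi = 6.2831853f * std::rand() / RAND_MAX;    // azimuth
        float r   = std::sqrt(1.0f - z * z);
        stars.push_back({ r * std::cos(phi), r * std::sin(phi), z });
    }
    return stars;
}
```

Each direction can then be scaled out to the far plane and drawn with depth writes disabled, so the starfield always sits behind the scene.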
  4. bleyblue2

    Space and starfield rendering

    Thanks sordid, your method is the logical way to project a cubemap onto a cube or a sphere (which I know how to do), but aren't the seams between some faces too obvious? If that method works well and the seams aren't too noticeable, I'll go with it. Oh, and btw I'm using the programmable pipeline/DirectX, and generating my starfield with the method described in http://gallery.artofgregmartin.com/tuts_arts/making_a_star_field.html (see the sticky in Game Design; basically noise and brightness/contrast like you, with some tweaking afterwards). Thanks.
  5. bleyblue2

    Space and starfield rendering

    Red Falcon: this link only explains the UV mapping to apply in order to use pre-made textures that fit that mapping well. The problem is, I make the textures myself - so how could I make a texture (in Photoshop) that fits this mapping properly? (0 to 1 from one pole to the other, 0 to 1 around the equator, mapping latitude and longitude.) As for the other solutions (icosahedral and similar mappings), they all present seams, causing "complicated special cases for the tile boundaries". Sloner: your idea is good, but I fear the seams might be too noticeable! I'll stick with it if I don't get a better idea by then, though. Thanks to both of you for your answers.
  6. I am in the process of implementing the geomipmapping algorithm, and I am now trying to solve the problem of the "cracks" (T-junctions or geometry gaps) that appear when two nearby patches have different LODs; the paper advises using "two triangle fans for this edge". A few questions for everyone who has already implemented this: 1) What kind of primitives do you use when rendering terrain? I use triangle lists, because triangle strips would mean issuing a lot of DrawPrimitive calls for a single patch (my patches are made of 32x32 quads), and triangle fans are not an option. 2) The paper advises not to render the edge of the higher-level patch (replacing it with the triangle fan). But in order not to render an edge, I have to split my DrawPrimitive calls for a patch, thus losing a lot of performance! I don't get what I should do in this situation (of course, rendering the edge's geometry twice is not an option). Thanks for your replies. EDIT: for those of you who don't know what geometry gaps are, have a look at (1) the cracks clearly visible on the terrain because of (2) the LOD variations [Edited by - bleyblue2 on October 28, 2005 4:58:22 AM]
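Regarding question 2: leaving out the edge row doesn't have to split the draw call, because the patch's index buffer can simply be built without the edge quads, so the body still goes out in a single DrawIndexedPrimitive. A minimal sketch of such an index generator for a triangle-list patch (the names are mine, not from de Boer's paper):

```cpp
#include <cstdint>
#include <vector>

// Build a triangle-list index buffer for an (n x n)-quad patch whose
// vertices lie on an (n+1) x (n+1) grid. If 'skipNorthRow' is true, the
// top row of quads is left out so a stitching fan can be drawn there
// instead -- the rest of the patch remains one draw call.
std::vector<uint16_t> BuildPatchIndices(int n, bool skipNorthRow)
{
    std::vector<uint16_t> ib;
    int rows = skipNorthRow ? n - 1 : n;
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < n; ++x) {
            uint16_t i0 = static_cast<uint16_t>( y      * (n + 1) + x);
            uint16_t i1 = static_cast<uint16_t>(i0 + 1);
            uint16_t i2 = static_cast<uint16_t>((y + 1) * (n + 1) + x);
            uint16_t i3 = static_cast<uint16_t>(i2 + 1);
            ib.insert(ib.end(), { i0, i2, i1,  i1, i2, i3 });  // two triangles per quad
        }
    return ib;
}
```

For a 32x32-quad patch this yields 32*32*6 = 6144 indices for the full patch, and one quad row fewer when an edge is skipped.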
  7. Wudan, you are using the same method as JohnBolton's, i.e. changing the terrain's geometry by adding triangles that don't follow the heightfield. It might be a good method, but I'd rather do it like de Boer and fill the gaps while letting the terrain follow the heightfield. Thanks anyway!
  8. Promit: I've understood the method you are using well, thank you again! (Even though I will be doing it like de Boer, because I have the time neither to measure the performance nor to implement every method.) But maybe you could explain to me how you set up your triangle strips? As for the IBs, by locking them I meant modifying them, and I already have enough to do with my fillrate; I want the terrain as lightweight as possible, etc. Could you explain what you mean by "caching" and how you code it, though? I agree on minimizing the LOD transitions, but only the user really decides... JohnBolton: this skirt seems rather interesting, as it would be very simple to draw and code! I think the LOD changes would be too noticeable, though - but maybe you could show me some screenshots/demos? Thanks.
  9. Thanks both for your replies - it's always a pleasure to get such quick answers. Promit: I'm very interested in how you set up your triangle strips; maybe you could provide some code or an explanation? As for the gaps, "the paper" (I meant de Boer's paper, sorry for not mentioning that, but I thought everyone implementing geomipmapping would have read it) advises not to add geometry, but rather to change the indices so as to omit some - still, I have to give you credit for your method, especially if you found it yourself, but I recommend you read "Fast Terrain Rendering Using Geometrical MipMapping" by Willem H. de Boer, where you might find some hints on improving your current implementation. Thanks a lot anyway! Wanmaster: as stated before, I'm interested in the setup of the triangle strips; I expect you have to duplicate some vertices like Promit, but I recognize the cost might be worth it. Thank you for re-explaining the paper's method, I understand it a lot better now. Still, feel free to post supplementary pictures. Nevertheless, since you both seem to have successfully implemented a terrain rendering algorithm, I have to ask a few other questions. Either way, you seem to be updating your indices every frame for each patch that has changed LOD (or whose neighbor has). Isn't the lock costly? I don't think it's possible to do otherwise, though. Wanmaster: when locking your index buffer, do you update all the indices or just the ones that have changed? I'm speaking about the difference at the Lock() level: do you lock the entire LOD-relative IB, or do multiple locks, one per index? Do you keep the same IB from one patch to another (with the same LODs, of course) and modify it, or do you create an IB for each "special" patch (i.e. patches whose cracks need to be fixed)?
Since there are also as many IBs as LODs, I suspect you might use the latter (creating all the IB combinations at run time), as you say you switch between lists of indices - but isn't that memory-expensive? (Terrain is not the only thing I have to render, so I'd like to keep it as lightweight as possible.) Moreover, the former (updating only parts of a single LOD-relative IB, or using a single dynamic IB for each LOD, modified depending on whether the lower-level patch is to the north, south, east...) seems pretty inefficient. To put it simply: which method are you using to create/modify your index buffers? As this is my last step toward a complete implementation of geomipmapping, I would appreciate any comments/advice from anyone. Thanks in advance.
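One common answer to the lock-cost question is to precompute every (LOD, coarser-neighbour mask) index buffer once at startup and simply bind the right one per patch, so nothing is ever locked at run time. A sketch of such a lookup table, assuming at most 16 stitching variants per LOD; the structure and function names are mine, and a plain index vector stands in for what would be an IDirect3DIndexBuffer9* in a real renderer:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// One shared, precomputed index buffer per (LOD, neighbour-LOD mask).
// The 4-bit mask records which neighbours (N/E/S/W) are at a coarser LOD,
// so each LOD needs at most 16 variants; every patch with the same
// combination reuses the same buffer, so no per-frame locking is needed.
struct StitchKey {
    int lod;
    unsigned neighbourMask;          // bit 0 = N, 1 = E, 2 = S, 3 = W
    bool operator<(const StitchKey& o) const {
        return lod != o.lod ? lod < o.lod : neighbourMask < o.neighbourMask;
    }
};

using IndexData = std::vector<uint16_t>;

std::map<StitchKey, IndexData> BuildAllStitchBuffers(int numLods)
{
    std::map<StitchKey, IndexData> table;
    for (int lod = 0; lod < numLods; ++lod)
        for (unsigned mask = 0; mask < 16; ++mask)
            table[{lod, mask}] = IndexData();   // fill with real stitched indices here
    return table;
}
```

The memory cost stays modest because the buffers are shared by every patch on the terrain, not duplicated per patch.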
  10. bleyblue2

    A lot of GPU programs

    Quote:Original post by Anonymous Poster Quote:Original post by matches81 Hi there! I'm sorry to ask my question in another one's thread, but it just fits so nicely and essentially it covers the same material: a) Are there any simple tutorials for the use of this FragmentLinker, perhaps one just merging a simple light fragment and tex fragment? b) One thing that has been bothering me for some time now is the following problem: let's say I have about 10 different shaders I want to use in one frame. I would really love to set the world, view and projection matrices only once for all these shaders. Is there a way for different shaders to share the same variables? In DirectX, you can tell the compiler to bind a constant to a specific register, and set the constants once. For example: uniform float4x4 mWorldViewProj : register(c0) will force the matrix to be bound to register c0. You can then set it with something like: device->SetVertexShaderConstantF(0, (float*)&matrix, 4); One gotcha here is that if you're transforming the usual way - mul(P, mWorldViewProj) - the matrices have to be passed in column-major order, so if you're setting shader constants directly, you need to transpose them first. Or, you can forget about the registers and just declare your variables "shared", i.e. shared uniform float4x4 mWorldViewProj; (just keep the same name in all your HLSL files). Then, you have to create one ID3DXEffectPool and pass it to D3DXCreateEffect*** every time you want to load an effect; the effect will then share its parameters with all the other effects using the same pool. Hope this is clear enough.
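The transpose gotcha above is easy to get wrong when setting constants by hand. D3DX provides D3DXMatrixTranspose for this, but the operation itself is just a row/column swap, sketched here self-contained:

```cpp
#include <array>

using Matrix4 = std::array<float, 16>;   // row-major storage, as D3DXMATRIX uses

// HLSL's mul(v, M), with constants uploaded via SetVertexShaderConstantF,
// expects the matrix in column-major register layout, so a row-major
// matrix must be transposed before upload.
Matrix4 Transpose(const Matrix4& m)
{
    Matrix4 t;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            t[c * 4 + r] = m[r * 4 + c];
    return t;
}
```

Using an ID3DXEffect's SetMatrix instead hides this detail, since the effect framework transposes for you.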
  11. bleyblue2

    The English-to-12-Year-Old-AOLer Translator

    OMG SO OLD3~~~~!!!!111
  12. bleyblue2

    Moving into residence

    Quote:Original post by lethalhamster I still live with my mommy.
  13. bleyblue2

    Return of the Elitist Programmer Snob!

    Quote:Original post by Ilici Quote:Original post by CodeBlue Quote:Original post by Toxic Hippo Quote:Original post by nes8bit you people make me sickyou people make me sick you people make me sick aww, be nice: you people make me sick
  14. bleyblue2

    Vertex Shader Diffuse Color

    Quote:Original post by Kija If I want to specify my screen space coordinates in x = 0 to 1024 (or whatever my resolution is) and y = 0 to 768. I need to send these values in to the shader and then convert them to -1 to 1 coordinates, right? Yes, you'd need to scale and translate your screen space coordinates to clip (-1..1) space. Anyway, good luck with lighting.
  15. bleyblue2

    Vertex Shader Diffuse Color

    Kija, your shader settings look right. You might want to check whether there's any D3D debug output. Don't forget to disable Z-buffering (at least the Z test). Keep in mind that clip space is [-1..1] too: using the FFP with D3DFVF_RHW needs screen-space coordinates, i.e. [0..640]x[0..480], but the VS output needs to be in clip space. Armadon, since Kija's vertices are already transformed, you don't need to multiply them by the model-view-projection matrix. You don't even need to set this matrix in your shader. Kija, if your program still doesn't work, you might consider sending me your code so that I can have a look at it, if possible.
  16. bleyblue2

    Vertex Shader Diffuse Color

    I think your problem is because of the D3DFVF_RHW flag. This flag basically means that your vertices are already transformed by your application. But because they are already transformed, they won't pass through the vertex shader! If you remove this flag, you can simulate the FFP by using Armadon's simple VS, i.e.: VS_OUTPUT myvs( VS_INPUT IN ) { VS_OUTPUT OUT; OUT.position = IN.position; return OUT; } So, in conclusion: - don't use the RHW flag with vertex shaders, even if your vertices are already pre-transformed (use only D3DFVF_XYZ), else the transformation pipeline, programmable or FF, will be bypassed; - the vertex shader for pre-transformed vertices should output the same position as provided in the VS input. I think you can now output your color from the vertex shader. EDIT: oh, and btw Arex, HLSL itself doesn't require a vertex declaration, but any Draw*** call requires you to set one - via SetVertexDeclaration or SetFVF (which nowadays calls SetVertexDeclaration in turn).
  17. If 8 bits per component isn't enough (and that might well be your problem), try A2R10G10B10 for your normal map texture. It should be supported by most new cards anyway, and the 2 extra bits per component are worth it here (but you can't really use the alpha component).
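For reference, writing into a 2-10-10-10 format means quantizing each signed normal component to 10 bits. A sketch of the remap, assuming the D3DFMT_A2R10G10B10 bit layout (alpha in the top 2 bits, then R, G, B in descending bit order); the function name is mine:

```cpp
#include <cstdint>

// Pack a unit normal into a 2-10-10-10 layout: each component is remapped
// from [-1,1] to a 10-bit integer in [0,1023]. x goes into the R bits
// (20-29), y into G (10-19), z into B (0-9); the 2 alpha bits stay 0.
uint32_t PackNormal101010(float x, float y, float z)
{
    auto q = [](float v) -> uint32_t {
        float t = (v * 0.5f + 0.5f) * 1023.0f;       // [-1,1] -> [0,1023]
        return static_cast<uint32_t>(t + 0.5f);      // round to nearest
    };
    return (q(x) << 20) | (q(y) << 10) | q(z);
}
```

In practice the driver does this packing when you fill the texture through a [0,1] float interface; the sketch only shows where the extra 2 bits of precision per component come from.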
  18. bleyblue2

    Vertex Shader Diffuse Color

    Quote:Original post by Armadon 3) In your pixel shader you just replace OUT.color = IN.color; with //Remember it's in the form //(A, R, G, B) OUT.color = float4(1.0, 1.0, 0.0, 0.0); Actually, it's RGBA in HLSL, so Kija's right. Can't help for the main problem though.
  19. To center your text horizontally, you'd supply to the DrawText method (ID3DXFont interface) a rectangle (parameter pRect) as wide as the screen, and then use the flag DT_CENTER (parameter Format) to specify that it should be horizontally centered. An example (assuming a 640x480 resolution; RECT members are in left, top, right, bottom order): RECT rt = { 0, h, 640, h + 50 }; pFont->DrawText(..., ..., ..., &rt, DT_CENTER, ...); To center the text vertically: RECT rt = { w, 0, w + 100, 480 }; pFont->DrawText(..., ..., ..., &rt, DT_SINGLELINE | DT_VCENTER, ...);
  20. Deferred shading, as you probably know, is done in several passes. The first pass always outputs position, normal, albedo, etc. for each visible pixel on screen to several off-screen render targets. The other passes compute lighting for each pixel: they take the previous render targets as input and, for each lit pixel, output that pixel's color to the backbuffer. Let's assume you want to do classic N.L lighting for a pixel. Whether this pixel is bump-mapped or not, you only need the pixel's normal and the light direction (in the proper spaces) to do your lighting. You don't need any tangent-space information when you already have the pixel's normal (in view or world space, so that you can have a unique light direction vector). Therefore, you want to output to your normal off-screen render target (the normal RT) the normal corresponding to each pixel. For non-normal-mapped pixels, the normal is interpolated from vertex normals and stored in the NRT (in the first pass!). For normal-mapped pixels, the normal is computed in the first pass, so you don't need any tangent-space vectors for the deferred shading itself. You need them only in the first pass, when you render your objects to the off-screen buffers: you compute the normal there, transforming it from tangent space (your normal texture) to view/world space; the view/world-space normal then goes to the NRT, but remember that lighting is not applied yet. I hope I've explained it enough, good luck anyway.
  21. bleyblue2

    Bump mapping via D3DTOP_DOTPRODUCT3

    The problem is, you would have to transform every single normal (i.e. each texel of your normal map) to world space. But you don't need to, because you can transform the light vector to tangent space instead - but then you can only draw planar surfaces accurately. Moreover, you can't do transformations using texture stage states alone.
  22. bleyblue2

    Bump mapping via D3DTOP_DOTPRODUCT3

    tokaplan, you're right: I don't think you can avoid using vertex shaders to do what you want on a non-planar surface. You would need to transform your light vector into tangent space for each triangle and set it with the appropriate render state for each triangle. For planes (walls, floors), it works correctly because the tangent space is the same across the whole surface. That's a limitation of bump mapping, and that's why normal mapping exists: to, well, map normals onto an arbitrary surface (but then you need vertex & pixel shaders). Also, for your "lighting too dark" problem: the usual way is to modulate the diffuse color with the dot product, and you usually add some ambient value. That can be done easily for a constant color, but you might be limited by the FF pipeline for anything more complicated.
  23. It seems like you are outputting your interpolated vertex normals into your "normals" render target, which might not be your goal, since it seems like you want to do normal mapping (or bump mapping, nvm). You have to output the surface normals to your normal RT in the first pass, i.e. transform them from tangent space (in the case of bump-mapped surfaces) or model space (per-vertex interpolated normals) to view space, and write the per-pixel, view-space normal to your normal RT in that first pass. You will use this per-pixel surface normal to do your (lighting) calculations in the deferred pass. You only need the tangent/binormal vectors to transform the tangent-space normal out of tangent space, not directly in your calculations. Also, your position can be in view space or world space as long as you do your calculations in a consistent space, i.e. a dot product between two vectors in the same space, the distance between two points in the same space (point lighting?), and so on.
  24. AFAIK, lighting in the FF pipeline is applied per-vertex. So when you use your own vertex shader, no per-vertex FF operations like lighting are applied (I would need to verify that, though).
  25. Quote:Original post by xsirxx Now I need to find a way to get in the tangents into a target(no targets left). You usually don't want to output the tangents to a render target with deferred shading - you only need to output the surface normal at that point, which, in the case of normal mapping, can (and has to) be calculated when you output to the off-screen buffers in the first pass. If you're not using the tangents for normal mapping though, just forget what I said.