About bleyblue2

  1. CDXApp

    Well, I think I'll stick with it then. Thanks for your reply.
  2. CDXApp

    In my small projects I have, until recently, been using a CDXApp class. Only one instance of it ever exists, declared in WinMain. Each frame, WinMain calls DXApp.Render() (while retrieving Windows messages and eventually passing them on to the class); it also calls DXApp.Initialize() at startup and a cleanup function before the program ends. CDXApp also owns other top-level objects such as CInput and CGUI instances, plus some general-purpose variables: fTime, a pointer to the D3D device, and so on. Now I wonder whether this is really good coding practice, and whether I shouldn't refactor/modify it. So far, three main ideas come to mind:
    - keep CDXApp and make it a singleton (making Washu angry, since I don't really need a "global point of access" to it). This isn't really different from the current situation.
    - use "global" variables inside a namespace (i.e. move all the class's contents into that namespace).
    - move the class's data members into WinMain (making them local variables) and its member functions into main.cpp (making them global) - but at the cost of clarity, since the window-management code would get mixed with, for example, the profiling (timing) code.
    So, what do you guys think, and what have you implemented in your own code? Keep in mind that these are only small samples, not the start of an entire game engine - still, I'd like to organize my code better. Any help is appreciated.
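    Not the poster's actual code, just a minimal sketch of the namespace option from the list above - all names here (DXApp::Initialize, Render, Frames, fTime) are hypothetical stand-ins:

```cpp
// Sketch of "globals inside a namespace": app state lives in an unnamed
// namespace inside one .cpp file, so it has internal linkage and only the
// functions are visible to WinMain. No singleton machinery needed.
namespace DXApp
{
    namespace  // unnamed: state is hidden from other translation units
    {
        float fTime      = 0.0f;
        int   frameCount = 0;
        // IDirect3DDevice9* pDevice = nullptr;  // real members would go here
    }

    bool Initialize()
    {
        fTime = 0.0f;
        frameCount = 0;
        return true;  // would create the device, window, etc.
    }

    void Render(float dt)
    {
        fTime += dt;
        ++frameCount;
        // ... clear, draw scene, present ...
    }

    void Cleanup() { /* release device and other resources */ }

    int   Frames() { return frameCount; }
    float Time()   { return fTime; }
}
```

WinMain then calls DXApp::Initialize(), DXApp::Render(dt) per frame, and DXApp::Cleanup() on exit, just like with the class version, but without a class instance to pass around.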
  3. Space and starfield rendering

    Thanks sordid, your method is the logical way to project a cubemap onto a cube or a sphere (which I know how to do), but aren't the seams between some faces too obvious? If that method works well and the seams aren't too noticeable, I'll go with it. Oh, and by the way, I'm using the programmable pipeline/DirectX, and generating my starfield with the method described in the sticky in Game Design (basically noise and brightness/contrast like you, with some tweaking afterwards). Thanks.
  4. Space and starfield rendering

    Red Falcon: that link only explains the UV mapping to apply in order to use pre-made textures that fit this mapping. The problem is that I make the textures myself, so how could I (in Photoshop) make a texture that fits this mapping properly? (0 to 1 from one pole to the other, 0 to 1 around the equator, mapping latitude and longitude.) As for the other solutions (icosahedral and similar mappings), they all present seams, causing "complicated special cases for the tile boundaries". Sloner: your idea is good, but I fear the seams might be too noticeable! I'll stick with it if I don't get a better idea in the meantime. Thanks to both of you for your answers.
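    For the pole-to-pole / around-the-equator mapping described above, here is a small sketch (my own illustration, not from the thread) of how a unit direction on the sphere turns into the (u, v) texel of a rectangular texture - this is the equirectangular layout a Photoshop-painted rectangle would have to match:

```cpp
#include <cmath>

// Equirectangular (latitude/longitude) mapping: u wraps 0..1 around the
// equator (longitude), v runs 0..1 from the +Y pole to the -Y pole (latitude).
struct UV { float u, v; };

const float PI = 3.14159265358979f;

// (x, y, z) must be a unit vector pointing from the sphere's center.
UV SphereToEquirectangular(float x, float y, float z)
{
    UV uv;
    uv.u = 0.5f + std::atan2(z, x) / (2.0f * PI);  // longitude -> [0,1)
    uv.v = std::acos(y) / PI;                      // latitude  -> [0,1], 0 at +Y pole
    return uv;
}
```

The well-known cost of this layout is that texels get stretched near the poles, so stars painted uniformly in the rectangle will look compressed at the top and bottom of the rendered sphere.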
  5. Hi everyone. I'm currently trying to render a starfield (i.e. stars as seen from space), but I'm falling short of ideas and documentation on how to proceed. I already know how to generate an arbitrarily sized (rectangular) image of stars with Photoshop, but I don't know how to render a starfield in 3D. The two methods I can think of at the moment are the following:
    1- Spherical mapping: the starfield is rendered on a sphere surrounding the viewer (as in sky domes). The problem is generating the texture (and the corresponding UV mapping) to apply to my 3D sphere model - keep in mind that since I generate my starfields with Photoshop, a method involving that software would be preferred. In other words, how do I realistically map a 2D texture representing space onto a sphere? Any advice is welcome.
    2- Skybox: perhaps the simplest method. The main issue with this approach is generating the six faces, as I have (almost) no idea how to handle the seams between the faces - a cube has twelve edges along which adjacent faces have to "mirror" each other, which seems a cumbersome process. As stated before, I mainly use Photoshop, so if there's a trick I don't know for quickly making a skybox from a rectangular texture, please tell me. Thanks in advance.
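    To make the seam problem concrete, here is a sketch (hypothetical, not from the thread) of the face selection a cube-map lookup performs: two faces join without a visible seam only if their texels agree along the boundary this function maps them to. The sign conventions below follow one common D3D-style layout and may need flipping for another API or tool:

```cpp
#include <cmath>

// Which skybox face does a view direction hit, and where on that face?
struct FaceUV { int face; float u, v; };  // face: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z

FaceUV CubeLookup(float x, float y, float z)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    FaceUV r;
    float sc, tc, ma;  // (s, t) derived from the dominant axis, D3D-style
    if (ax >= ay && ax >= az) { r.face = x > 0 ? 0 : 1; ma = ax; sc = x > 0 ? -z : z; tc = -y; }
    else if (ay >= az)        { r.face = y > 0 ? 2 : 3; ma = ay; sc = x; tc = y > 0 ? z : -z; }
    else                      { r.face = z > 0 ? 4 : 5; ma = az; sc = z > 0 ? x : -x; tc = -y; }
    r.u = 0.5f * (sc / ma + 1.0f);  // remap [-1,1] -> [0,1]
    r.v = 0.5f * (tc / ma + 1.0f);
    return r;
}
```

Tools that render the six views with a 90-degree FOV camera produce matching edges automatically, which is why generating faces from one flat rectangle by hand is the hard way around.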
  6. Wudan, you are using the same method as JohnBolton, i.e. changing the terrain's geometry by adding triangles that don't follow the heightfield. It might be a good method, but I'd rather do as de Boer does and fill the gaps while letting the terrain follow the heightfield. Thanks anyway!
  7. Promit: I understood the method you are using, thank you again! (Even though I will do as de Boer does, because I don't have the time to measure the performance of, or implement, every method.) But maybe you could explain how you set up your triangle strips? As for the IBs, by "locking" I meant modifying them - and I already have enough to do with my fillrate; I want the terrain as lightweight as possible. Could you explain what you mean by "caching" and how you code it, though? I agree on minimizing LOD transitions, but in the end only the user decides... JohnBolton: the skirt seems rather interesting, as it would be very simple to draw and code! I think the LOD changes would be too noticeable, though - maybe you could show me some screenshots/demos? Thanks.
  8. Thanks to both of you for your replies - it's always a pleasure to get such quick answers.
    Promit: I'm very interested in how you set up your triangle strips; maybe you could provide some code or an explanation? As for the gaps, "the paper" (I meant de Boer's paper, sorry for not mentioning that, but I assumed everyone implementing geomipmapping had read it) advises not to add geometry but rather to change the indices to omit some. Still, I have to give you credit for your method, especially if you found it yourself, but I recommend you read "Fast Terrain Rendering Using Geometrical MipMapping" by Willem H. de Boer, where you might find some hints on improving your current implementation. Thanks a lot anyway!
    Wanmaster: as stated before, I'm interested in how the triangle strips are set up; I expect you have to duplicate some vertices like Promit does, but I recognize the cost might be worth it. Thank you for re-explaining the paper's method; I understand it much better now. Still, feel free to post supplementary pictures.
    Nevertheless, since you both seem to have successfully implemented a terrain rendering algorithm, I have a few other questions. Either way, you seem to be updating your indices every frame for each patch that has changed LOD (or whose neighbor has). Isn't the lock costly? I don't think it's possible to do otherwise, though.
    Wanmaster: when locking your index buffer, do you update all the indices or just the ones that have changed? I'm speaking about the difference at the Lock() level: do you lock the entire LOD-relative IB, or do multiple locks, one per index? Do you keep the same IB from one patch to another (with the same LODs, of course) and modify it, or do you create an IB for each "special" patch (i.e. patches whose cracks need fixing)?
    Since there are also as many IBs as LODs, I suspect you use the latter (creating all the IB combinations at run time), as you say you switch between lists of indices - but isn't that memory-expensive? (Terrain is not the only thing I have to render, so I'd like to keep it as lightweight as possible.) Moreover, the former (updating only parts of a single LOD-relative IB, or using a single dynamic IB per LOD modified depending on whether the lower-LOD patch is to the north, south, east...) seems pretty inefficient. To put it simply: which method do you use to create/modify your index buffers? As this is my last step toward a complete implementation of geomipmapping, I would appreciate any comments/advice from anyone. Thanks in advance.
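    As a concrete starting point for the "precompute every combination" option discussed above - my own sketch, not Promit's or Wanmaster's actual code - the interior triangle-list indices of a patch at a given LOD can be built once at load time; the four border strips would then be replaced per neighbor combination with de Boer's edge fans:

```cpp
#include <cstdint>
#include <vector>

// Build triangle-list indices for one terrain patch. verticesPerSide is the
// patch width in vertices (e.g. 33 for 32x32 quads); lod skips every
// 2^lod-th vertex, which is exactly the geomipmapping LOD scheme.
std::vector<uint16_t> BuildPatchIndices(int verticesPerSide, int lod)
{
    const int step = 1 << lod;  // coarser LODs sample fewer heightfield points
    std::vector<uint16_t> indices;
    for (int z = 0; z + step < verticesPerSide; z += step)
        for (int x = 0; x + step < verticesPerSide; x += step)
        {
            uint16_t i0 = uint16_t(z * verticesPerSide + x);
            uint16_t i1 = uint16_t(z * verticesPerSide + x + step);
            uint16_t i2 = uint16_t((z + step) * verticesPerSide + x);
            uint16_t i3 = uint16_t((z + step) * verticesPerSide + x + step);
            // two triangles per quad, clockwise winding
            indices.push_back(i0); indices.push_back(i1); indices.push_back(i2);
            indices.push_back(i2); indices.push_back(i1); indices.push_back(i3);
        }
    return indices;
}
```

With 4 neighbor edges that can each be coarser or not, that is 16 index lists per LOD; they are small (indices only, no vertices), which is why precomputing them all is usually affordable and avoids any Lock() at render time.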
  9. I am in the process of implementing the geomipmapping algorithm, and I am now trying to solve the problem of the "cracks" (T-junctions or geometry gaps) that appear when two nearby patches have different LODs; the paper advises using "two triangle fans for this edge". A few questions for everyone who has already implemented this:
    1) What kind of primitives do you use when rendering terrain? I use triangle lists, because triangle strips mean making a lot of DrawPrimitive calls for a single patch (my patches are made of 32x32 quads), and triangle fans are not an option.
    2) The paper advises not to render the edge of the higher-LOD patch (replacing it with the triangle fan). But in order not to render an edge, I have to split my DrawPrimitive calls for the patch, losing a lot of performance! I don't know what I should do in this situation (of course, rendering the edge's geometry twice is not an option). Thanks for your replies.
    EDIT: for those of you who don't know what geometry gaps are, have a look at the screenshots: (1) the cracks clearly visible on the terrain because of (2) the LOD variations. [Edited by - bleyblue2 on October 28, 2005 4:58:22 AM]
  10. A lot of GPU programs

    Quote: Original post by Anonymous Poster
    Quote: Original post by matches81
    Hi there! I'm sorry to ask my question in someone else's thread, but it just fits so nicely and essentially covers the same material: a) Are there any simple tutorials for the use of this FragmentLinker, perhaps one just merging a simple light fragment and a tex fragment? b) One thing that has been bothering me for some time is the following problem: let's say I have about 10 different shaders I want to use in one frame. I would really love to set the world, view and projection matrices only once for all of them. Is there a way for different shaders to share the same variables?
    In DirectX you can tell the compiler to bind a constant to a specific register and set the constant once. For example, uniform float4x4 mWorldViewProj : register(c0) will force the matrix to be bound to register c0. You can then set it with something like device->SetVertexShaderConstantF(0, (float*)&matrix, 4); One gotcha here is that if you're transforming the usual way - mul(P, mWorldViewProj) - the matrices have to be passed in column-major order, so if you're setting shader constants directly, you need to transpose them first.
    Or you can forget about the registers and just declare your variables "shared", i.e. shared uniform float4x4 mWorldViewProj; (just keep the same name in all your HLSL files). Then you have to create one ID3DXEffectPool and pass it to D3DXCreateEffect*** every time you load an effect; the effect will then share its parameters with all the other effects using the same pool. Hope this is clear enough.
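    The column-major gotcha above in code form - a hypothetical helper, with the SetVertexShaderConstantF call left as a comment since it needs a live device:

```cpp
// SetVertexShaderConstantF uploads the matrix row by row into c0..c3, but
// mul(P, M) in HLSL reads each constant register as a matrix *column*, so
// the matrix is transposed on the CPU before upload.
void TransposeForShader(const float in[16], float out[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];  // row r, col c -> column-major slot
}
// float shaderConsts[16];
// TransposeForShader((float*)&worldViewProj, shaderConsts);
// device->SetVertexShaderConstantF(0, shaderConsts, 4);  // 4 registers
```

D3DXMatrixTranspose does the same job if you are already using D3DX, and the effect framework's SetMatrix handles it for you.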
  11. The English-to-12-Year-Old-AOLer Translator

    OMG SO OLD3~~~~!!!!111
  12. Moving into residence

    Quote:Original post by lethalhamster I still live with my mommy.
  13. Return of the Elitist Programmer Snob!

    Quote: Original post by Ilici
    Quote: Original post by CodeBlue
    Quote: Original post by Toxic Hippo
    Quote: Original post by nes8bit
    you people make me sick
    you people make me sick
    you people make me sick
    aww, be nice: you people make me sick
  14. Vertex Shader Diffuse Color

    Quote: Original post by Kija
    If I want to specify my screen-space coordinates as x = 0 to 1024 (or whatever my resolution is) and y = 0 to 768, I need to send those values into the shader and then convert them to -1 to 1 coordinates, right?
    Yes, you'd need to scale and translate your screen-space coordinates into clip (-1..1) space. Anyway, good luck with the lighting.
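    The scale-and-translate in the reply, sketched as a hypothetical helper (assuming y grows downward in screen space and upward in clip space):

```cpp
// Pixel coordinates (0..width, 0..height, y down) -> clip space (-1..1, y up).
struct Clip { float x, y; };

Clip ScreenToClip(float px, float py, float width, float height)
{
    Clip c;
    c.x = px / width * 2.0f - 1.0f;    // 0..w -> -1..1
    c.y = 1.0f - py / height * 2.0f;   // 0..h ->  1..-1 (flip the y axis)
    return c;
}
```

The same two multiply-adds can of course live in the vertex shader itself, with width and height passed in as a constant.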
  15. Vertex Shader Diffuse Color

    Kija, your shader settings look right. You might want to check whether there's any D3D debug output. Don't forget to disable Z-buffering (at least the Z test). Keep in mind that clip space is [-1..1] here too: using the FFP with D3DFVF_XYZRHW takes screen-space coordinates, i.e. [0..640]x[0..480], but the vertex shader output needs to be in clip space. Armadon: since Kija's vertices are already transformed, you don't need to multiply them by the world-view-projection matrix; you don't even need to set that matrix for the shader. Kija, if your program still doesn't work, you might consider sending me your code so that I can have a look at it, if that's possible.
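    For the FFP path mentioned above, a hypothetical pre-transformed vertex (DWORD written as a fixed-width type so the 20-byte layout holds on any compiler):

```cpp
#include <cstdint>

// Matches D3DFVF_XYZRHW | D3DFVF_DIFFUSE: the position is already in screen
// space and rhw = 1/w, so the pipeline skips transform and lighting entirely.
struct TLVertex
{
    float    x, y, z, rhw;  // e.g. x in [0..640], y in [0..480], z in [0..1]
    uint32_t color;         // D3DCOLOR diffuse, AARRGGBB
};

static_assert(sizeof(TLVertex) == 20, "stride must match the FVF declaration");
```

The vertex shader path takes clip-space positions instead, which is exactly the [0..640] vs [-1..1] distinction drawn in the post.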