Community Reputation

101 Neutral

About brekehan

  • Rank
    Advanced Member
  1. RenderQueues

    Quote: Original post by Jason Z
    "I think part of the problem is that you are mixing two different classes of operations. In my engine I define a single draw call as an 'Entity3D' class, which would be something along the lines of your Renderable (although it also has 3D information as well). Your lens flare rendering is a sequence of draw calls, each with varying purposes (i.e. predicated rendering, drawing, and blooming). Since it uses more than one call, it is fundamentally different than a single object being rendered - this is exactly what you pointed out below. For more complex rendering sequences that should not be split up by the engine, I provide a 'RenderView' class that manages all of the individual sequence handling. This allows for the sequence to be defined within an object, and then you can treat the whole RenderView as a sortable object too. This is what your lens flare sounds like to me - a complex sequence of objects that need to be rendered. Personally, I prefer to keep the views at a higher organizational level than the individual Entity3D instances (i.e. the views are using the Entity3Ds), but could do it either way."

    Yes, it sounds like you are understanding. I do think I need to make a class that contains separate entities to be rendered at some point. I know there will be models with multiple parts, for example. However, I am still unsure whether to treat the lens flare that way, the reason being that the entire thing except for the occlusion test is drawn at screen level. I think the occlusion test is too, for that matter, if I understand the XNA example I originally wrote it from.

    How does your RenderQueue and RenderView work for, say, a model surrounded by particle effects where the two are grouped? Say a sword with sparklies around it, a la modern MMOs. They are two entities that probably have one transform multiplied by another, as they are positionally related. They are probably also related in that they both need to be rendered or not, but they are drawn at different times during a frame in different ways, right? How does that work as far as sorting and removing?
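The grouping described in the quote might be sketched like this. All names here are hypothetical stand-ins, not Jason Z's actual classes: a RenderView owns several parts, the queue treats it as one sortable/removable object, and each pass only draws the parts that belong to it, so the opaque sword and its transparent sparklies still render at different points in the frame.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Renderable
{
    std::string name;
    int         renderType;   // e.g. 0 = opaque, 1 = transparent
    void Render() const { std::cout << "draw " << name << "\n"; }
};

class RenderView
{
public:
    void Add(const Renderable & r) { m_parts.push_back(r); }

    // The queue calls this once per pass; only parts whose renderType
    // matches the pass are drawn, so one logical object can span passes.
    void Render(int pass) const
    {
        for (std::size_t i = 0; i < m_parts.size(); ++i)
            if (m_parts[i].renderType == pass)
                m_parts[i].Render();
    }

    std::size_t PartCount() const { return m_parts.size(); }

private:
    std::vector<Renderable> m_parts;
};
```

Inserting or removing the RenderView from the queue then inserts or removes the sword and its sparklies together, which answers the "sorting and removing" half of the question.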
  2. RenderQueues

    I have been stumped on a design for more than a few months. Wondering if someone can give some info on how they tackle this.

    I implemented the start of a simple RenderQueue. The RenderQueue as is just has Insert, Remove, and Render methods. Its purpose is to sort Renderables and call their corresponding Render methods in the correct order. This seemed simple enough and a good assumption on how a RenderQueue works in a real engine. Tell me if I am wrong!

    Obviously, I need a Renderable base class in order to put things in the queue. So, I imagined a Renderable being a single entity that is going to be rendered and gave it some data, including an effect to use, as well as a pointer to buffers. Additionally, a Renderable is derived from a Transform, as I thought that surely anything that is rendered would have a transform! I gave Renderable an enum to tell the queue in which order to render things:

    enum RenderType
    {
       RENDERTYPE_OPAQUE = 0,
       RENDERTYPE_TRANSPARENT,
       RENDERTYPE_SCREEN,
       RENDERTYPE_UI,
       NUM_RENDER_TYPES
    };

    Now the problem arises when I create a LensFlare class. A LensFlare is something that is rendered, so I want to plop it into the RenderQueue. In order to do that, I should derive it from Renderable. The problem is that a LensFlare really isn't a single thing to be rendered! It consists of an occlusion test (a square), a glow, and 3 flares, all of which are pretty much 2D squares with some transparency. The occlusion test is rendered at a position far back in the scene, while the flares are rendered at screen level. Furthermore, the LensFlare has more than one effect that needs to be used per frame: one for the occlusion test and one for everything else. It also doesn't have a single transform, but rather just a light position.

    I am not sure how to handle this. The easy choice would be to alter the RenderQueue to have a separate method to insert LensFlares as a special case, but that feels really hacky. What happens down the road when I want to insert complex models made of parts, for that matter? Another thought was to separate the LensFlare into components and insert them separately, but I am not sure how that would turn out either, especially since they would have to communicate with each other to get the occlusion test result.

    Does anyone have some design advice here that would help sort out this mess? There doesn't seem to be much out there on the internal workings of a RenderQueue in a real engine. Open source inspection really hasn't helped me much in this scenario either.
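One common way the sorting in such a queue is implemented is a comparison over (RenderType, depth): the type buckets give the coarse draw order from the enum, and depth orders objects within a bucket. The sketch below is a minimal illustration of that idea using the enum from the post; the Entry struct and comparator names are made up.

```cpp
#include <algorithm>
#include <vector>

enum RenderType
{
    RENDERTYPE_OPAQUE = 0,
    RENDERTYPE_TRANSPARENT,
    RENDERTYPE_SCREEN,
    RENDERTYPE_UI,
    NUM_RENDER_TYPES
};

struct Entry
{
    RenderType type;
    float      depth;   // distance from the camera
};

// Primary key: RenderType, so opaque draws before transparent, etc.
// Secondary key: depth; transparent objects sort far-to-near so blending
// composites correctly (opaque would typically flip this to near-to-far).
bool QueueLess(const Entry & a, const Entry & b)
{
    if (a.type != b.type)
        return a.type < b.type;
    return a.depth > b.depth;
}
```

The queue's Render method would then just be `std::sort(entries.begin(), entries.end(), QueueLess)` followed by a linear walk calling each entry's Render.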
  3. DirectX 11

    Anyone have links to tutorials or books on DX11? My searches on Google and Amazon have failed to yield any results. Can we sticky DirectX 11 resources, since it seems to be so hard to find any? I've got half an engine done in DirectX 10, but would really like a better system for loading and organizing shaders than what I have, and reading the SDK seems to hint they might have a good alternative, but I haven't seen a single thing on the new dynamic linking. I'm also interested in how the supposed multithreaded rendering works.
  4. DEM to heightmap

    Okie dokie. I'll try and write a parser this weekend. Appreciate the help.
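For reference, a parser for this format can be very small. The USGS GridFloat DEM is a headerless binary raster of 32-bit floats in row-major order; the grid dimensions and byte order come from the companion .hdr text file. The sketch below assumes the common little-endian (BYTEORDER LSBFIRST) case and that ncols/nrows were already read from the .hdr; the function name is hypothetical.

```cpp
#include <cstdio>
#include <vector>

// Reads ncols * nrows 32-bit floats from a GridFloat .flt file.
// Returns an empty vector on open failure or a truncated file.
std::vector<float> ReadFlt(const char * path, unsigned ncols, unsigned nrows)
{
    std::vector<float> heights(static_cast<std::size_t>(ncols) * nrows);

    FILE * f = std::fopen(path, "rb");
    if (!f)
        return std::vector<float>();

    std::size_t read = std::fread(heights.data(), sizeof(float),
                                  heights.size(), f);
    std::fclose(f);

    if (read != heights.size())
        heights.clear();   // file shorter than the .hdr claims

    return heights;
}
```

From there, converting to a heightmap is just rescaling the float elevations into the 0-255 (or 0-65535) range of the target image format, watching out for the NODATA value the .hdr declares.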
  5. DEM to heightmap

    It's not. It is binary for sure; my hex editor says so. The site says the DEMs are "1/3 arc GridFloat". There are tons of labels, so I don't know what the hell it is. But the site is evidently where everyone gets satellite data. They give you back a zip full of files. I assume I am looking at the right one, with the .flt extension. The HTML page tells me about the scale. It won't open in any text editor I have. [Edited by - brekehan on March 15, 2010 11:40:03 PM]
  6. Searched and found lots of very dated posts, but nothing recent. I have data in DEM format. I want to make a heightmap from it. I cannot find any working conversion software. Anyone know what to use these days? I tried MicroDEM and got tons of errors on startup. I think it is too dated to run on Windows 7. I also searched for 3DEM, which seems to no longer be supported by its developer.
7. I am designing my render queue and came to the point where I need to handle insertion and removal of renderable objects which really have multiple parts. It would be nice to have m_renderQueue->Insert(lensflare); for example, but lensflare contains multiple renderable objects:

     The light source - opaque
     The glow - transparent
     3 flares - transparent
     Occlusion test object - invisible

     I am going to have the same problem with models that contain different parts. How do you handle insertion and removal of objects like this such that the application can still keep track of what's being rendered in a logical manner?
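One answer to the tracking problem is to make Insert return a single handle that maps to all of the object's queue entries, so the application removes the whole lens flare with one call even though the queue stores its parts separately. A minimal sketch of that idea follows; the class and method names are hypothetical, and real entries would carry a RenderType, effect, and buffers rather than a string.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

class RenderQueue
{
public:
    typedef int Handle;

    // Inserts all parts of one logical object; returns one handle for them.
    Handle Insert(const std::vector<std::string> & parts)
    {
        Handle h = m_next++;
        m_entries[h] = parts;
        return h;
    }

    // Removes every part that was inserted under this handle.
    void Remove(Handle h) { m_entries.erase(h); }

    std::size_t PartCount(Handle h) const
    {
        std::map<Handle, std::vector<std::string> >::const_iterator it =
            m_entries.find(h);
        return it == m_entries.end() ? 0 : it->second.size();
    }

private:
    Handle m_next;
    std::map<Handle, std::vector<std::string> > m_entries;

public:
    RenderQueue() : m_next(1) {}
};
```

The queue is still free to sort the individual parts into opaque/transparent buckets internally; the handle only governs lifetime, which keeps "what is being rendered" logical at the application level.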
  8. Plasma bullet

    I cannot seem to find the slightest bit of information on rendering weapon effects like the one above. I cannot believe no one has had to do this. They appear in almost every game! I've googled until my fingers hurt and found absolutely nothing at all on this topic. Found lots of nice plasma TVs, and a few cool Photoshop tutorials, but that's about it. I've tried particle systems and it just isn't going to work; rendering bunches of particles for every shot is just going to bog things down too much. There has to be a simpler method! If anyone at all has rendered any kind of beam, laser, plasma, or other type of sci-fi weapon effect, please share how you did it, whatever the method was. What good are all the 3D ships, planets, nebulas, stations, and asteroids I've modeled if you can't shoot at 'em???
9. How are we rendering a particle in DX10 these days? Is it a textured quad? How do you figure out how big the quad is? Do we use a dynamic vertex buffer? I've seen a lot of articles that use textures for attributes and the geometry shader, but that seems a bit overly complicated for me. I want to start a basic particle system and go from there. Anyone have a good, up-to-date tutorial for DX10?
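For the textured-quad approach mentioned above, the quad size question has a simple CPU-side answer: each particle stores a half-size, and its four corners are offset from the particle center along the camera's right and up vectors before being written into a dynamic vertex buffer each frame. A sketch of that corner computation, with a minimal stand-in vector type (the real engine's math types would be used instead):

```cpp
#include <vector>

struct Vec3
{
    float x, y, z;
    Vec3 operator+(const Vec3 & o) const { Vec3 r = { x + o.x, y + o.y, z + o.z }; return r; }
    Vec3 operator-(const Vec3 & o) const { Vec3 r = { x - o.x, y - o.y, z - o.z }; return r; }
    Vec3 operator*(float s)        const { Vec3 r = { x * s, y * s, z * s }; return r; }
};

// Camera-facing (billboard) quad: corners offset along the camera's right
// and up axes, scaled by the particle's half-size. The four corners form
// two triangles written into the dynamic vertex buffer.
std::vector<Vec3> BillboardCorners(const Vec3 & center, const Vec3 & camRight,
                                   const Vec3 & camUp, float halfSize)
{
    Vec3 r = camRight * halfSize;
    Vec3 u = camUp    * halfSize;

    std::vector<Vec3> corners;
    corners.push_back(center - r - u);   // bottom-left
    corners.push_back(center - r + u);   // top-left
    corners.push_back(center + r - u);   // bottom-right
    corners.push_back(center + r + u);   // top-right
    return corners;
}
```

The geometry-shader articles do the same expansion on the GPU from a point-list vertex buffer, which saves bandwidth but is not required to get a basic system running.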
  10. Plasma bullet

    I am trying to duplicate the plasma bullets from X3: Terran Conflict in my own engine. I can't figure out how they do it. Here is what it looks like - look at the green weapons fire at time 1:36 in the following video. Since they allow us to modify the game, I grabbed their model and the textures they used. What is off limits is the shader. Here is a screenshot of the model in 3ds Max. Here is the texture file, and here is what the texture looks like. Here is what it looks like when Max renders it (obviously incorrect). There are no tutorials anywhere that I can find on how to render sci-fi weapons fire like plasma bolts, laser beams, pulse cannons, and the like. I know some people use particles, but obviously these guys used a model, and I would like to also. I just don't know how to go about it. Any information is appreciated.
11. Working on the render queue again. Last time I asked about render order for transparent objects, I was told to sort them by depth. Now that it comes down to it, I have to ask myself: how do I get depth? I thought it was simply the z coordinate of the object's position, but that doesn't seem to make sense:

      A) A sky sphere that encompasses the camera is at 0,0,0, so it would have the least depth, but the algorithm should draw it last.
      B) If the camera is at 0,0,0, something at 200,0,0 is just as far away as something at 0,0,200.

      Should I handle the generic case, disregarding A, and do some kind of distance formula between the camera and object positions?
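For point B above, the usual sort key is exactly that camera-to-object distance, and the squared distance works just as well for sorting while skipping the sqrt. A small sketch (another common choice, not shown here, is view-space z: transform the object position by the view matrix and take its z component):

```cpp
// Squared distance between camera and object positions. Monotonic in the
// true distance, so sorting by it gives the same order without a sqrt.
float DepthSq(float camX, float camY, float camZ,
              float objX, float objY, float objZ)
{
    float dx = objX - camX;
    float dy = objY - camY;
    float dz = objZ - camZ;
    return dx * dx + dy * dy + dz * dz;
}
```

Note this treats (200,0,0) and (0,0,200) as equally deep regardless of axis, which is what the transparent sort wants. Special cases like the sky sphere are typically handled by its RenderType bucket (drawn in its own pass) rather than by its depth value.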
12. Anyone notice the online documentation for DirectX has disappeared? As of 4:16 am Central, there is nothing there but a blank page with "August 2009" on it. Is Microsoft really taking down the old docs before they get the new ones ready? Anyone got more info on it? It happens I have some things I need to look up!
13. Could you:

      1. Render the shapes to a separate render target
      2. Make that render target a mask
      3. Feed the mask to the shader
      4. Use the mask in the pixel shader to determine alpha
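Step 4 of the suggestion above boils down to a per-pixel multiply: wherever the mask target is 0 the final pixel is fully transparent, wherever it is 1 the source alpha passes through. This CPU-side sketch only simulates that last step on plain arrays to show the math; in the real technique it would be one texture sample and multiply in the pixel shader.

```cpp
#include <vector>

// Simulates the pixel shader's mask lookup: final alpha is the source
// alpha scaled by the mask coverage (0 = hidden, 1 = fully visible).
std::vector<float> ApplyMaskAlpha(const std::vector<float> & srcAlpha,
                                  const std::vector<float> & mask)
{
    std::vector<float> out(srcAlpha.size());
    for (std::size_t i = 0; i < srcAlpha.size(); ++i)
        out[i] = srcAlpha[i] * mask[i];
    return out;
}
```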
  14. ID3DXEffect::SetTechnique per frame

    I can't tell you how efficient it is; use a performance counter for that. But it didn't make the top 10 when I did an analysis on my render method. So, I'd say you could, but it wouldn't be the best design. I use an effect manager/resource pool/factory to hold all effects ever used and all their techniques. I would just sort your render queue by effect and technique, then render them in groups.
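The suggested sort amounts to a two-key comparator, so SetTechnique ends up being called once per group instead of once per object. A minimal sketch of that comparator, with hypothetical integer IDs standing in for the effect and technique pointers:

```cpp
#include <algorithm>
#include <vector>

struct QueueEntry
{
    int effectId;      // stand-in for the ID3DXEffect used
    int techniqueId;   // stand-in for the technique within that effect
};

// Primary key: effect; secondary key: technique. After sorting, all entries
// sharing an effect and technique are adjacent, so state changes happen
// only at group boundaries during the render walk.
bool ByEffectThenTechnique(const QueueEntry & a, const QueueEntry & b)
{
    if (a.effectId != b.effectId)
        return a.effectId < b.effectId;
    return a.techniqueId < b.techniqueId;
}
```

During rendering you then call SetTechnique only when the technique ID differs from the previous entry's.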
    Always have problems texturing spheres near the poles. I am not sure if I am currently experiencing that same problem again. I programmatically create my own sphere with texture coordinates, based on a sphere generated by 3ds Max. I apply a star field texture and tile it 3 times around the sphere. It looks fine in Max, but in my application I see two spots where the stars seem to dim and everything is black. I don't think that's good. However, if I apply a different texture for testing, say a smiley face, I do not see the problem. I am not sure if it is the texture or the sphere. Any ideas? See how it is dim in the center here?

```cpp
#include "Shapes.h"

// Common Lib Includes
#include "BaseException.h"

// Standard Includes
#include <cmath>
#include <sstream>

using namespace common_lib_cpisz;

//------------------------------------------------------------------------------
void GenerateSphere(std::vector<Position>   & positions,
                    std::vector<TexCoord2D> & texCoords,
                    std::vector<Normal>     & normals,
                    std::vector<Index>      & indices,
                    const float              radius,
                    const unsigned           segments,
                    const bool               flipNormals)
{
   // Example sphere, with 6 segments, as would be seen in a UV editing tool:
   //
   //  0   1   2   3   4   5
   //  |\  |\  |\  |\  |\  |\
   //  | \ | \ | \ | \ | \ | \
   //  6   7   8   9   10  11  12
   //  | / | / | / | / | / | /|
   //  |/  |/  |/  |/  |/  |/ |
   //  13  14  15  16  17  18  19
   //  | / | / | / | / | / | /
   //  |/  |/  |/  |/  |/  |/
   //  34  35  36  37  38  39
   //
   //    0
   //     \
   //      1
   //      |     Vertical divisions
   //      2
   //     /
   //    3
   //
   //    5 - 4
   //   /     \
   //  0,6     3  Horizontal divisions
   //   \     /
   //    1 _ 2
   //
   // Middle rings contain an extra vertex at the same position as the start
   // point to allow the texture to be wrapped. Top and bottom rings do not
   // contain the extra vertex.

   BaseException exception("Not Set",
                           "void GenerateSphere(ID3D10Device & device, std::vector<Buffer::SharedPtr> & buffers, float radius, unsigned segments)",
                           "Shapes.cpp");

   // Start with empty vectors of data
   positions.clear();
   texCoords.clear();
   normals.clear();

   // Check minimum divisions: 3
   if( segments < 3 )
   {
      std::stringstream msg;
      msg << "Sphere requires a minimum of 3 segments to be classified as a sphere at all."
          << " segments param: " << segments;
      exception.m_msg = msg.str();
      throw exception;
   }

   // Check radius > 0
   if( radius <= 0.0f )
   {
      std::stringstream msg;
      msg << "Sphere requires a radius greater than zero."
          << " radius param: " << radius;
      exception.m_msg = msg.str();
      throw exception;
   }

   // Generate geometry
   Position   position;
   Normal     normal;
   TexCoord2D texCoord;

   unsigned divisionsH = segments;
   unsigned divisionsV = segments / 2 + 1;
   float incrementU = 1.0f / static_cast<float>(divisionsH);
   float incrementV = 1.0f / static_cast<float>(divisionsV - 1);

   // Top ring
   for(unsigned i = 0; i < divisionsH; ++i)
   {
      normal.x = 0.0f;
      normal.y = 1.0f;
      normal.z = 0.0f;
      position = normal * radius;
      texCoord.x = incrementU * static_cast<float>(i);
      texCoord.y = 0.0f;

      if( flipNormals )
      {
         normal *= -1.0f;
      }

      positions.push_back(position);
      texCoords.push_back(texCoord);
      normals.push_back(normal);
   }

   // Middle rings
   float angleV = static_cast<float>(D3DX_PI / static_cast<double>(divisionsV - 1));
   float angleH = static_cast<float>(2.0 * D3DX_PI / static_cast<double>(divisionsH));

   for(unsigned j = 1; j < divisionsV - 1; ++j)
   {
      float y          = cos(static_cast<float>(j) * angleV);
      float ringRadius = sin(static_cast<float>(j) * angleV);

      for(unsigned i = 0; i < divisionsH; ++i)
      {
         normal.x = ringRadius * cos(static_cast<float>(D3DX_PI) + static_cast<float>(i) * angleH);
         normal.y = y;
         normal.z = ringRadius * sin(static_cast<float>(D3DX_PI) + static_cast<float>(i) * angleH);
         position = normal * radius;
         texCoord.x = static_cast<float>(i) * incrementU;
         texCoord.y = static_cast<float>(j) * incrementV;

         if( flipNormals )
         {
            normal *= -1.0f;
         }

         positions.push_back(position);
         texCoords.push_back(texCoord);
         normals.push_back(normal);
      }

      // Need an extra vertex at the same position as the first to wrap the
      // texture around the sphere
      normal.x = ringRadius * -1.0f;
      normal.y = y;
      normal.z = 0.0f;
      position = normal * radius;
      texCoord.x = 1.0f;
      texCoord.y = static_cast<float>(j) * incrementV;

      if( flipNormals )
      {
         normal *= -1.0f;
      }

      positions.push_back(position);
      texCoords.push_back(texCoord);
      normals.push_back(normal);
   }

   // Bottom ring
   for(unsigned i = 0; i < divisionsH; ++i)
   {
      normal.x = 0.0f;
      normal.y = -1.0f;
      normal.z = 0.0f;
      position = normal * radius;
      texCoord.x = incrementU * static_cast<float>(i);
      texCoord.y = 1.0f;

      if( flipNormals )
      {
         normal *= -1.0f;
      }

      positions.push_back(position);
      texCoords.push_back(texCoord);
      normals.push_back(normal);
   }

   // At this point we have:
   //
   // A number of rings on the XZ plane equal to the number of segments / 2 + 1
   // The first ring sits at the highest Y value, the last ring at the lowest
   // Top and bottom rings have a number of points equal to the number of segments
   // Inner rings have that many points plus an extra point positioned the same as their first
   // Each ring starts at the left and circles counter-clockwise

   // Generate indices

   // Positive Y cap
   for(unsigned i = 0; i < divisionsH; ++i)
   {
      if( !flipNormals )
      {
         indices.push_back(divisionsH + i);
         indices.push_back(i);
         indices.push_back(divisionsH + i + 1);
      }
      else
      {
         indices.push_back(divisionsH + i);
         indices.push_back(divisionsH + i + 1);
         indices.push_back(i);
      }
   }

   // Inner rings
   unsigned numPoints = 2 * divisionsH + (divisionsV - 2) * (divisionsH + 1);

   for(unsigned ringA = divisionsH; ringA < numPoints - 2 * divisionsH - 1; ringA += divisionsH + 1)
   {
      unsigned ringB = ringA + (divisionsH + 1);

      for(unsigned i = 0; i < divisionsH; ++i)
      {
         if( !flipNormals )
         {
            indices.push_back(ringA + i);
            indices.push_back(ringA + i + 1);
            indices.push_back(ringB + i);

            indices.push_back(ringA + i + 1);
            indices.push_back(ringB + i + 1);
            indices.push_back(ringB + i);
         }
         else
         {
            indices.push_back(ringA + i);
            indices.push_back(ringB + i);
            indices.push_back(ringA + i + 1);

            indices.push_back(ringA + i + 1);
            indices.push_back(ringB + i);
            indices.push_back(ringB + i + 1);
         }
      }
   }

   // Negative Y cap
   for(unsigned i = 0; i < segments; ++i)
   {
      if( !flipNormals )
      {
         indices.push_back(numPoints - segments * 2 - 1 + i);
         indices.push_back(numPoints - segments * 2 + i);
         indices.push_back(numPoints - segments + i);
      }
      else
      {
         indices.push_back(numPoints - segments * 2 - 1 + i);
         indices.push_back(numPoints - segments + i);
         indices.push_back(numPoints - segments * 2 + i);
      }
   }
}
```