
Fibonacci One

Community Reputation

284 Neutral

About Fibonacci One

  1. C++ Program Exam for a Job

    Quote:Original post by Crazyfool
    Quote:Original post by Fibonacci One
    That's probably not something to worry about. However, the fact that you used void Queue::push(Node d) rather than void Queue::push(const Node& d) would definitely catch my attention while reviewing the exam.
    Is the reason because 1) it's more efficient (only passing a reference) and 2) const ensures that you won't alter d? I am trying to get better at learning more proper C++ =/

    Both (1) and (2). You pass by reference because, if you don't, the copy constructor is called to create a new Node (unnecessary work). You make it const because, now that it's a reference, you don't want the function to be able to modify it.
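    A tiny sketch that makes the copy visible (the print statement and method names here are illustrative, not from the exam):

    #include <cstdio>

    struct Node {
        int data = 0;
        Node() = default;
        Node(const Node& other) : data(other.data) {
            std::puts("Node copy constructor called");  // fires only on pass-by-value
        }
    };

    struct Queue {
        // Pass-by-value: the copy constructor runs on every call.
        void push_by_value(Node d) { (void)d; }

        // Pass-by-const-reference: no copy, and the compiler rejects any
        // attempt to modify d inside the function.
        void push_by_ref(const Node& d) { (void)d; }
    };

    int main() {
        Queue q;
        Node n;
        q.push_by_value(n);  // prints "Node copy constructor called"
        q.push_by_ref(n);    // prints nothing; no copy is made
    }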
  2. C++ Program Exam for a Job

    That's probably not something to worry about. However, the fact that you used void Queue::push(Node d) rather than void Queue::push(const Node& d) would definitely catch my attention while reviewing the exam.
  3. Streaming level geometry

    Quote:Original post by solenoidz
    1. How do I go about it? I'm thinking of storing the vertex data in a huge file on disk, and then seeking to the offset position, calculating the size, and "fread"-ing the chunk. Can I store the materials and textures in that chunk of data near the vertex part of it?

    Yes, you should definitely store the world's geometry in a way that lets it be read off of disk and placed into its final format without needing much CPU time. However, you'll probably find storing the actual materials in those same chunks cumbersome, given that you only want one of each material or texture loaded at a time. You can (and should) store information about which materials to use within the chunk, just not the materials themselves.

    Quote:2. I guess I better handle the textures the classic way, by loading one instance of every individual texture and just reusing it for different meshes in different parts of the world.

    Yes, you should do this for every asset type: textures, materials, meshes... everything.

    Quote:3. I've seen in some games that they load the low-poly mesh first and, as you approach, the actual shiny model. How do I go about this kind of LOD in a streaming geometry engine?

    I doubt this is going to gain you much for small meshes; making two trips to the disk to read the mesh in is going to be pretty expensive. However, if your meshes are large enough for this to actually be beneficial, it seems like it would be pretty easy to do. You should precalculate which mesh to use at each distance (or whatever other metric your LoD system uses) for each object. Then, in game, when you calculate whatever LoD you need, you just use the most detailed LoD available at or below that level. You can also use this information to know when to start streaming a LoD: as soon as you decide to use a higher LoD, you should begin streaming the next one up. Likewise, when you move down an LoD, you can begin unloading the level TWO levels above where you're at. (There's a sketch of this policy right after this post.)

    Quote:4. How am I supposed to know which chunk needs to be loaded and which isn't needed any more because it can't be visible? Basically I need to render only two chunks: the one my camera is standing in, and the one my camera is facing in the distance. If I turn the camera, does the new chunk have to be loaded to be seen ahead in the distance?

    Going to the disk is a very slow operation. You won't be able to load and unload these chunks as quickly as the player can turn their camera (I say this assuming you're going to be using a first- or third-person camera). You'll need to have the entire world around you loaded at once. Note that if you use the LoD scheme I mentioned above, it shouldn't be too difficult to keep only lower-LoD meshes loaded for objects in the distance.

    Quote:5. How fast is loading a chunk of data from a hard drive on a modern machine? My vertices will probably be the same for the whole world, and I need to be able to read several megabytes of vertex data almost every frame without slowdown.

    I don't have exact data (that will vary between machines anyway), but I can say that it will be SLOW.
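    A minimal sketch of the LoD streaming policy from answer 3 above, assuming hypothetical requestLoad/requestUnload hooks into your streaming system (the printf stubs just stand in for real asynchronous disk requests):

    #include <cstdio>
    #include <vector>

    // Hypothetical per-object record: meshLoaded[i] is true once LoD i
    // (0 = coarsest) has finished streaming in from disk.
    struct StreamedObject {
        std::vector<bool> meshLoaded;
        int currentLod = 0;  // LoD actually used for rendering
    };

    // Stand-ins for the real streaming system's asynchronous requests.
    void requestLoad(StreamedObject&, int lod)   { std::printf("stream in LoD %d\n", lod); }
    void requestUnload(StreamedObject&, int lod) { std::printf("stream out LoD %d\n", lod); }

    // desiredLod is the precalculated level for the object's current distance.
    void updateLod(StreamedObject& obj, int desiredLod) {
        // Render with the most detailed LoD at or below the desired level
        // that has actually finished streaming in.
        obj.currentLod = 0;
        for (int i = desiredLod; i >= 0; --i)
            if (obj.meshLoaded[i]) { obj.currentLod = i; break; }

        // Moving up: begin streaming the next level above the desired one.
        int next = desiredLod + 1;
        if (next < (int)obj.meshLoaded.size() && !obj.meshLoaded[next])
            requestLoad(obj, next);

        // Moving down: unload anything two or more levels above the desired one.
        for (int i = desiredLod + 2; i < (int)obj.meshLoaded.size(); ++i)
            if (obj.meshLoaded[i])
                requestUnload(obj, i);
    }

    int main() {
        StreamedObject rock;
        rock.meshLoaded = {true, true, false, false};  // LoD 0 and 1 resident
        updateLod(rock, 1);  // renders LoD 1, requests LoD 2 to start streaming
    }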
  4. Quote:Original post by slayemin
    I noticed that in Oblivion :) The other thing was, when I was in a face-to-face conversation with a character, my CPU/GPU would go crazy and the game would lag. It's on my laptop though, so maybe that has something to do with it. High-res models = high resource consumption.

    Odds are that it was probably the loading of the dialog that was slowing your machine down.

    Quote:Original post by slowpid
    What helps it is that, as it uses a specific base mesh, it does a good job of projecting the phototexture onto the UVs. So, while it might look 'okay' at first glance, turn off the textures and you will find that it poorly resembles a human, and in the case of subtle, soft feminine attributes it butchers them. That's the reason you will never get an attractive female from it.

    Exactly. And after that first glance where things look 'okay', I usually found that things would descend to 'not very good' and then 'disturbing' on the second and third glances.
  5. While I haven't actually tested this myself, it seems like the technique described at http://www.mvps.org/directx/articles/linear_z/linearz.htm might be a good solution for you.
  6. Few Questions on Photon Tracing

    Quote:Original post by jbarcz1
    Final gathering, with a high enough sample count, is much more effective at producing smooth indirect illumination while still using sensible numbers of photons.

    Could you please elaborate a bit more on what final gathering is?
  7. Hey everyone, I'm working on what I assumed would be some fairly simple code to perform a Gaussian blur. To keep the source code I'm giving you as small as possible, let's say I'm working on a 3x3 Gaussian blur. I'm calculating three sets of texture coordinates in the vertex shader; then, when I pass them to the pixel shader, the only texture coordinates that work correctly are the first ones defined in my structure. For example, if I were to draw the texture coords to the red and green channels in the code below, LowTexCoords would work but CenterTexCoords would be black. The only set of texture coordinates that doesn't give me a black screen is the one sent to TexCoord0. Any ideas what I'm doing wrong?

    texture2D Image;

    // Set by the application: 1.0 / texture size along the blur axis,
    // and the three Gaussian kernel weights.
    float TexelSize;
    float Blur3[3];

    sampler2D nearestSampler = sampler_state
    {
        Texture   = <Image>;
        AddressU  = Clamp;
        AddressV  = Clamp;
        MipFilter = Point;
        MinFilter = Point;
        MagFilter = Point;
    };

    /********************************************************
      3x3 Gaussian Filter
    ********************************************************/
    struct VSOutputBlur3
    {
        float4 Position        : Position;
        float2 LowTexCoords    : TexCoord0;
        float2 CenterTexCoords : TexCoord1;
        float2 HighTexCoords   : TexCoord2;
    };

    VSOutputBlur3 VSGaussianBlur3X(float4 Position : Position,
                                   float2 TexCoords : TexCoord0)
    {
        VSOutputBlur3 Output;
        Output.Position        = Position;
        Output.LowTexCoords    = float2(TexCoords.x - TexelSize, TexCoords.y);
        Output.CenterTexCoords = TexCoords;
        Output.HighTexCoords   = float2(TexCoords.x + TexelSize, TexCoords.y);
        return Output;
    }

    VSOutputBlur3 VSGaussianBlur3Y(float4 Position : Position,
                                   float2 TexCoords : TexCoord0)
    {
        VSOutputBlur3 Output;
        Output.Position        = Position;
        Output.LowTexCoords    = float2(TexCoords.x, TexCoords.y - TexelSize);
        Output.CenterTexCoords = TexCoords;
        Output.HighTexCoords   = float2(TexCoords.x, TexCoords.y + TexelSize);
        return Output;
    }

    float4 PSGaussianBlur3(VSOutputBlur3 Input) : Color
    {
        float4 TapLow    = tex2D(nearestSampler, Input.LowTexCoords)    * Blur3[0];
        float4 TapCenter = tex2D(nearestSampler, Input.CenterTexCoords) * Blur3[1];
        float4 TapHigh   = tex2D(nearestSampler, Input.HighTexCoords)   * Blur3[2];
        return (TapLow + TapCenter + TapHigh);
    }

    technique GaussianBlur3x3
    {
        pass p0
        {
            VertexShader = compile vs_2_0 VSGaussianBlur3X();
            PixelShader  = compile ps_2_0 PSGaussianBlur3();
        }
        pass p1
        {
            VertexShader = compile vs_2_0 VSGaussianBlur3Y();
            PixelShader  = compile ps_2_0 PSGaussianBlur3();
        }
    }

    [Edited by - Fibonacci One on May 14, 2007 1:27:50 PM]
  8. Shadow mapping issue

    Quote:Original post by Schrompf
    That's because the scene at the mirror image has a negative depth from the light's point of view, so all depth comparisons will succeed, rendering light everywhere. And that's what you want after all. Only the scene at the real image in front has a positive depth from the light's point of view. There the depth comparison will render shadow and light as intended.

    So, the problem is the way I'm calculating the depth from the light's point of view. How am I supposed to do that, then? It seems to me that I should just multiply the world position by the light's view matrix and then take the length of that. However, that still gives me the same problem. Or maybe that's right and my comparison is incorrect. Here's what I've got:

    // I moved these to the vertex shader.
    // Calculate the texture coords.
    float4 tempPos = mul(position, lightViewProj);
    tempPos = tempPos / tempPos.w;
    dpo.depthTexCoords = tempPos.xy * float2(0.5, -0.5) + 0.5;
    dpo.lightVec = mul(position, lightViewInv).xyz;

    // And in the pixel shader:
    float distToLight = length(dpi.lightVec);

    // Read the moments from the shadow map.
    float4 moments = tex2D(shadowSampler, dpi.depthTexCoords);

    // Calculate if this point is in light or shadow.
    float isLit = distToLight / 50.0f <= moments.x;

    // Calculate the final color.
    float3 resultColor = ((ndotl * isLit) + ndotl2) * color;

    Quote:Original post by AndyTX
    One note: if you're doing variance shadow mapping you have to make sure that you're rendering receivers into the shadow map as well (i.e. the terrain). This is necessary for the interpolation and filtering to work. You also must render front faces (not back faces or midpoints).

    Shoot, that forces me to have the receivers cast shadows too, doesn't it? I really didn't want my terrain casting shadows, since it makes it obvious how jagged it can be in some places. Oh well... Thanks to both of you. Ratings++ if I haven't already.
  9. Shadow mapping issue

    Ok, I think I kind of understand what you're saying here. With this screenshot I'm not actually doing any comparisons to the shadow map; I'm just rendering it directly. This is the bulk of what I'm doing there:

    float4 projectivePos = mul(dpi.worldPosition, lightViewProj);
    projectivePos = projectivePos / projectivePos.w;
    float2 depthTexCoords = projectivePos.xy * float2(0.5, -0.5) + 0.5;

    // Read the moments from the shadow map.
    float4 moments = tex2D(shadowSampler, depthTexCoords);
    return moments.xxxx;

    Now, to use an orthographic projection matrix, I just need to change something like D3DXMatrixPerspectiveFovLH(&lightProj, D3DX_PI/4, 1.0f, 1.0f, 50.0f) into something like D3DXMatrixOrthoLH(&lightProj, ShadowMapSize, ShadowMapSize, 1.0f, 50.0f), and then get rid of the divide by w in the above code? I'm assuming there's more to it than that, since I tried it and it didn't work. Thanks for your help.
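    For reference, a minimal sketch (not the poster's actual code) of an orthographic light setup with D3DX. One detail worth flagging: D3DXMatrixOrthoLH takes the view volume's width and height in world units, so passing ShadowMapSize (a texel resolution) only works if it happens to match the world-space area you want covered. The extents below are hypothetical values sized to the scene:

    #include <d3dx9.h>

    // Build view/projection matrices for a directional light using an
    // orthographic projection. With an ortho matrix, every transformed
    // vertex comes out with w == 1, so the divide by w in the shader
    // becomes a no-op rather than something you must remove.
    void BuildDirectionalLightMatrices(const D3DXVECTOR3& lightPos,
                                       const D3DXVECTOR3& target,
                                       D3DXMATRIX* outView,
                                       D3DXMATRIX* outProj)
    {
        const D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
        D3DXMatrixLookAtLH(outView, &lightPos, &target, &up);

        // Hypothetical extents: world units the shadow map should cover,
        // not the shadow map's pixel dimensions.
        const float sceneWidth  = 100.0f;
        const float sceneHeight = 100.0f;
        D3DXMatrixOrthoLH(outProj, sceneWidth, sceneHeight, 1.0f, 50.0f);
    }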
  10. Shadow mapping issue

    I'm attempting to create shadows for my scene by rendering all of the shadow-casting objects to a shadow map, then using that information in the shadow-receiving objects to know whether or not the object is in shadow. Sounds pretty standard so far, right?

    I think my problem arises when I'm calculating the texture coordinates used to look up into the shadow map. I'll illustrate the problem with a screenshot; this is the shadow map being shown on the terrain here. (To see the image you'll probably need to right click and select view image, or left click to follow the link.)

    The light for the scene is down behind the hill, so when I move it into projective space it actually ends up with the same coordinates as the area on the ground behind the player. So, obviously, when the sun is passing below the terrain like that, I'll end up with shadows where they shouldn't be. Here's what I think is the relevant source:

    float4 projectivePos = mul(dpi.worldPosition, lightViewProj);
    projectivePos = projectivePos / projectivePos.w;
    float2 depthTexCoords = projectivePos.xy * float2(0.5, -0.5) + 0.5;

    Does anyone have any idea what I'm doing wrong or what else I need to do to fix this? Thanks in advance.
  11. Great Games Experiment beta invites

    Quote:Original post by Thevenin
    How (is this becoming popular)?!

    This could be a great way to network with other developers. I also sent you an invite.
  12. Lame Programming/CS Jokes

    Quote:Original post by alnite
    What is the integral of one over cabin? A natural log cabin.

    What is the integral of one over cabin? Houseboat. (Natural) log cabin + sea.
  13. T-Junctions: Big Deal?

    Quote:Original post by mg_mchenry
    A) Are these t-junctions really going to be a big deal?

    Yes. There will be great, big, glaring gaps in your terrain if you don't somehow fix them.

    Quote:B) If they are, doesn't that make the texturing situation weird, since both adjacent quads are already pushed to the edge of their textures?

    This depends on how you fix the t-junctions. Though, even with skirts (which is probably the sloppiest fix and likely the most prone to texture stretching), I have a hard time seeing any stretching, since the terrain will have moved to a higher detail level before the camera gets close enough.

    Quote:C) Couldn't I just make a slight adjustment to the vertexes[sic] to hide the t-junction?

    This is probably the cleanest way to fix it. Whenever you detect a higher-detail quad next to a lower-detail quad, adjust the vertices on the edge of the higher-detail quad to match the edge of the lower one, as sketched below.
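    A minimal sketch of that vertex adjustment, assuming a hypothetical heightmap-style patch where the higher-detail edge has one extra vertex between each pair shared with the lower-detail neighbor. Snapping every odd vertex onto the midpoint of its even neighbors makes the two edges geometrically identical, which closes the gap:

    #include <vector>

    // Hypothetical vertex type for a terrain patch.
    struct Vertex { float x, y, z; };

    // edge holds the shared-edge vertices of the HIGHER-detail quad, in
    // order. It has 2*N - 1 vertices where the lower-detail neighbor has N;
    // the odd-indexed ones are the T-junction vertices.
    void FixTJunctions(std::vector<Vertex>& edge)
    {
        for (size_t i = 1; i + 1 < edge.size(); i += 2) {
            // Move the T-junction vertex onto the lower-detail edge, i.e.
            // the midpoint of its two even-indexed neighbors.
            edge[i].x = 0.5f * (edge[i - 1].x + edge[i + 1].x);
            edge[i].y = 0.5f * (edge[i - 1].y + edge[i + 1].y);
            edge[i].z = 0.5f * (edge[i - 1].z + edge[i + 1].z);
        }
    }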
  14. Those pesky and funny glitches...

    Probably the most entertaining bug that I have ever seen was created as part of a group project I worked on a few semesters ago. The premise (simplified for brevity) of the game was that people would run around throwing bombs at each other and would eventually be eliminated for the round. When a player was eliminated, he was supposed to be given access to a free-roaming camera. However, some of our early server code was sending a message that would actually bind the eliminated player's camera to the bombs that the live players were throwing. So every time someone threw a bomb, all of the dead players would have a second or two to watch the path of the bomb with a third-person camera. The server code was eventually revamped and this bug was eliminated. We actually considered adding code to recreate that bug as a feature, but didn't due to time constraints. I am currently working on an updated version of this game in my spare time and will probably add this as one of the camera options for dead people.
  15. This is a screenshot from a game I've been working on since the beginning of summer. I'm near the end of a little break I'm taking from it; after that I'll add a GUI, networking, and gameplay mechanics. I expect to be finished with it sometime before January 2007. The image isn't loading, so I made it a link as well.