Supernat02

Members
  • Content count: 918
Community Reputation

604 Good

About Supernat02

  • Rank: Advanced Member
  1. I have some Windows code for a timer that is pretty straightforward. It calls QueryPerformanceFrequency to get the high-performance clock frequency and then calls QueryPerformanceCounter at the beginning of each frame. The code performs a Sleep(1) until a specified amount of time (33 ms) has passed. That's all it does: it just sleeps until X ms has passed to force a specific frame rate; I disabled all code running after this. What I'm seeing is that over time, after my PC has been running a while, the QPC return value doesn't change between calls. So I sleep 1 ms, call QPC, and the same integer is returned as by the call before sleeping. This repeats, and then all of a sudden the returned value is a 10 ms or higher jump from the previous value (or any number of ms; 10 is just an example). I've put a hysteresis buffer in to be sure that's what is happening. I get the same problem with timeGetTime or GetTickCount. It gets worse over time. If I reboot my PC, I get exactly 33.33 ms (30 Hz). I run my PC for a few hours (just my PC, not the software in question), then it's running at 40.5 ms, then a day later at 60.4 ms, and so on, until just now it was running at 125 ms. Then I rebooted and it went back to 33 ms. As best I can tell, there may be something wrong with my system, the CPU itself, or the motherboard. I have a 2.33 GHz quad-core Intel and a Gigabyte EP45-UD3P motherboard. Has anyone ever heard of this before? Thanks! Chris
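    A minimal sketch of the frame-limiter loop described above (reconstructed from the description, not the original code; the 30 Hz target and the function name are assumptions):

        #include <windows.h>

        void RunFrameLimited()
        {
            LARGE_INTEGER freq, start, now;
            QueryPerformanceFrequency(&freq);        // counts per second
            const double targetMs = 1000.0 / 30.0;   // ~33.33 ms per frame

            for (;;)
            {
                QueryPerformanceCounter(&start);

                // ... per-frame work would go here (disabled in the test above) ...

                double elapsedMs = 0.0;
                do
                {
                    Sleep(1);                        // yield roughly 1 ms at a time
                    QueryPerformanceCounter(&now);
                    elapsedMs = (now.QuadPart - start.QuadPart) * 1000.0
                                / (double)freq.QuadPart;
                } while (elapsedMs < targetMs);
            }
        }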
  2. I'm looking for the most efficient way to render multiple outputs (up to 4) from one PC. My options are: 1) run multi-head, which requires native fullscreen, or 2) create swap chains and use a maximized window for each display. First, is native fullscreen mode still an improvement over running borderless windows that take up the entire screen? Second, will I still be able to swap the buffers for both monitors at the same time in non-multi-head mode? I believe this is a feature independent of multi-head (a feature of swap chains in general), but it's not clear. Thanks, Chris
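    A hedged sketch of option 2 (one device, one additional swap chain per extra window), assuming D3D9; "device" is an existing IDirect3DDevice9*, and the window handle and variable names are placeholders:

        #include <d3d9.h>

        // Create an extra swap chain targeting a second borderless, maximized window.
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.hDeviceWindow    = hwndSecond;            // placeholder HWND for window 2
        pp.BackBufferFormat = D3DFMT_UNKNOWN;        // use the current display format

        IDirect3DSwapChain9* chain2 = NULL;
        device->CreateAdditionalSwapChain(&pp, &chain2);

        // Per frame: render into each swap chain's back buffer, then present each one.
        IDirect3DSurface9* back2 = NULL;
        chain2->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &back2);
        device->SetRenderTarget(0, back2);
        // ... draw the second output ...
        back2->Release();

        device->Present(NULL, NULL, NULL, NULL);     // implicit swap chain (window 1)
        chain2->Present(NULL, NULL, NULL, NULL, 0);  // additional swap chain (window 2)

    Note that the two Present calls are issued back to back rather than flipped atomically across monitors, which is exactly the uncertainty raised in the question.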
  3. Rendering Point Sprites

    Try setting the D3DRS_POINTSIZE render state to a larger value, or add a point size to the vertex structure, just to rule that out. You're in untransformed coordinates, so maybe the camera is just too far away from the point itself. Try moving your camera closer or reducing the FOV. I don't see that code below, so that's just a guess.
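    A small sketch of the render-state suggestion (D3D9 assumed; "device" is an existing IDirect3DDevice9*, the point-size render states take a float reinterpreted as a DWORD, and the values here are arbitrary test values):

        float pointSize = 8.0f;
        float minSize   = 1.0f;

        device->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
        device->SetRenderState(D3DRS_POINTSIZE,     *reinterpret_cast<DWORD*>(&pointSize));
        device->SetRenderState(D3DRS_POINTSIZE_MIN, *reinterpret_cast<DWORD*>(&minSize));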
  4. There may not be an easy answer, or you may not be able to do it at all. Are you writing the shader in assembly or a higher-level language? If you write it in assembly, you can specify exactly what to do at each step and probably cut it down.
  5. Primitives and light source problem?

    Is there anything in the chapter about normals? You can't use the RHW FVF type (I don't think). You have to use a vertex format of (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE) and add normals (3 floats) to your vertex structure. The pipeline uses the vertex normals to determine the amount of light hitting each vertex, and the result is then shaded between vertices. Make sure the normals are normalized, or turn on the render state (I forget the name) that always normalizes normals within the pipeline itself. Chris
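    A hedged sketch of the vertex layout described above (D3D9 fixed-function lighting); "device" is an existing IDirect3DDevice9*, the struct name is made up, and the "always normalize" render state referred to is, I believe, D3DRS_NORMALIZENORMALS:

        struct LitVertex
        {
            float x, y, z;      // position      (D3DFVF_XYZ)
            float nx, ny, nz;   // vertex normal (D3DFVF_NORMAL)
            DWORD color;        // diffuse color (D3DFVF_DIFFUSE)
        };
        const DWORD LIT_FVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE;

        device->SetFVF(LIT_FVF);
        device->SetRenderState(D3DRS_LIGHTING, TRUE);
        device->SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);  // renormalize in the pipeline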
  6. HLSL trunc and modulus operations

    Yep, I tried the debug and retail releases and stepped through the shader in the debugger (with software vertex processing and the REF device). It shows 7 % 5 and gives 1 as the result in the debugger. The algorithm provides an x,y coordinate to display a tile, so I first noticed it when tiles were missing (actually they were just rendering over a previous tile). I have an nVidia 9800 GTX+ video card. My understanding is that you are at the mercy of the integer emulation in shaders because native integers aren't supported, but the HLSL documentation at Microsoft has this snippet:

        int i1 = 1;
        int i2 = 2;
        i3 = i1 % i2; // i3 = remainder of 1/2, which is 1
        i3 = i2 % i1; // i3 = remainder of 2/1, which is 0
        i3 = 5 % 2;   // i3 = remainder of 5/2, which is 1
        i3 = 9 % 2;   // i3 = remainder of 9/2, which is 1

    The modulus operator truncates a fractional remainder when using integers. I figured that at least that would work when running with the REF device... I'm okay with this being wrong, but I have to be able to get an index value, and I wanted to use the trunc() function for that. I don't understand why trunc just flat out fails. What I finally ended up with that works:

        int var1; float var2;
        float w = var1 % var2;
        int h = var1 / var2;

    It feels so kludged though... Thanks for your help. Chris
  7. Hey everyone, I'm trying to do the following in a vertex shader (vs_3_0):

        int w = var1 % var2;
        int h = var1 / var2;

    var1 and var2 are both integers, constants I set prior to calling the shader. For whatever reason (assume integer emulation), the value of w is valid for var1 = 0, 1, 2, 3, 4, 5, and 6 with var2 set to 5. The value of w comes back as 1 when var1 is set to 7 (instead of 2). 8, 9, 10, etc. work from then on until var1 hits 14, where it breaks again. So I figured, to heck with the modulus operator, I'll just do everything in floating point and truncate. WRONG again! Doing it in floating point is fine and works, but it doesn't achieve my end goal, and trunc just doesn't work:

        float w = trunc(var1 % var2);
        float h = trunc(var1 / var2);

    where var1 and var2 are now floats. For whatever reason, nothing happens. The shader won't even load. It's just dead but doesn't give me a valid reason. Anyone got any ideas? Thanks, Chris
  8. z fighting issue?

    You could also use depth biasing to bias each entity to a different depth priority, but disabling depth testing achieves pretty much the same thing and is faster. The downside is that it requires rendering in the correct order, but that fits perfectly with what you're doing.
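    A quick sketch of both approaches (D3D9 assumed; "device" is an existing IDirect3DDevice9* and the bias value is an arbitrary illustrative number that would need tuning):

        // Option A: disable depth testing and rely on draw order.
        device->SetRenderState(D3DRS_ZENABLE, D3DZB_FALSE);

        // Option B: keep depth testing but bias each "layer" slightly.
        float bias = -0.00005f;
        device->SetRenderState(D3DRS_DEPTHBIAS, *reinterpret_cast<DWORD*>(&bias));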
  9. Flight Controls

    I suspect it would be game-dependent, but many "real-world" models are not simulated in games to prevent them from being boring. It should be fairly easy to limit your ability to move forward/backward when not colliding with any objects (or ground).
  10. octree in large-scale landscape

    You could also just use a quad-tree with a bit field that specifies height of objects, a semi-octree if you will. You would just generate a bounding volume up to the highest "height" in that quadtree node and do normal bounding tests. Also, you shouldn't need to regenerate an octree or quadtree based on your viewpoint.
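    A rough sketch of the "quad-tree with heights" idea, where each 2D node also tracks the vertical extent of its contents so a full 3D bounding box can be tested against the frustum; the names are illustrative only, not from any particular engine:

        struct QuadNode
        {
            float minX, minZ, maxX, maxZ;   // node footprint on the ground plane
            float minY, maxY;               // lowest/highest point of anything stored in it
            QuadNode* children[4];          // null for leaf nodes
        };

        // When inserting an object, grow the node's vertical extent to cover it;
        // the resulting 3D box is what gets tested during culling.
        void GrowHeights(QuadNode& node, float objMinY, float objMaxY)
        {
            if (objMinY < node.minY) node.minY = objMinY;
            if (objMaxY > node.maxY) node.maxY = objMaxY;
        }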
  11. image shape analysis

    There's an open source library called IPL98 somewhere on the internet. It has Blob detection algorithms and is very easy to use. Provided an image, it will return a list of PixelList objects that contain the locations of pixels that are joined together in "blobs". It determines these blobs based on a threshold color, which you could set based on the transparency. It's also very optimized.
  12. Pseudo transpose

    Though I'm not familiar with the equation, this probably means to normalize the matrix. http://mathworld.wolfram.com/MatrixNorm.html
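    If "normalize" here means dividing by the Frobenius norm (an assumption; the linked page lists several matrix norms), that would be:

        \|A\|_F = \sqrt{\sum_i \sum_j |a_{ij}|^2}, \qquad A_{\text{normalized}} = A / \|A\|_F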
  13. Ray Picking

    Sorry, I made a mistake, corrected the previous post.
  14. Knowing the Lines Position

    This is similar to determining whether a point lies in front of or behind a plane. Consider each line as an infinite plane with some normal, N1 and N2 respectively (you calculate these first). Then for each line, create a vector from an endpoint of that line to any point on the other line (an endpoint is easiest). Dot the normal of the other line with the new vector. If the dot product is positive, the point is behind the plane; if negative, it's in front of it. Of course, which sign means which depends on the direction of your normal. The only additional case to consider is when a point of one line is coplanar (collinear in this case) with the other line. This happens when the normal vector of the other line is exactly orthogonal (90 degrees) to the newly created vector. Chris
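    A small sketch of the side test described above, in 2D; the type and function names are mine:

        struct Vec2 { float x, y; };

        float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

        // Which side of the line through a0 and a1 does point p lie on?
        // Returns > 0 on the side the normal points toward, < 0 on the other side,
        // and ~0 if the point is collinear with the line.
        float SideOfLine(Vec2 a0, Vec2 a1, Vec2 p)
        {
            Vec2 dir    = { a1.x - a0.x, a1.y - a0.y };
            Vec2 normal = { -dir.y, dir.x };            // perpendicular to the line
            Vec2 toP    = { p.x - a0.x, p.y - a0.y };   // line endpoint -> the point
            return Dot(normal, toP);
        }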
  15. Ray Picking

    You are correct. The frustum can be a rectangle, and as such the aspect ratio is something other than 1.0 when the frustum is not square. That's why you need to divide the x value by the aspect ratio, so that the final x value is scaled into the range of the view frustum. The y value is your constant value, so -1 to +1 scales to the height of the frustum. If your frustum were square, the same would hold true for the x value; since the frustum is larger in the x direction, you have to scale it. Think it out with a scenario:

        width = 640, height = 480 (aspect ratio = 1.333)
        frustum height = 1.0 world units    // for ease, just assume the height is 1.0
        frustum width  = 1.333 world units

    For the pixel x = 640, y = 480:

        x_new = 1.0 / 1.333 = 0.75
        y_new = 1.0

    Now apply these to the frustum:

        frust_x_pos = x_new * frustum_width  = 1.0
        frust_y_pos = y_new * frustum_height = 1.0

    Think of the final frustum values at this point as just a percentage; that's the goal. The next step is to multiply this by the tangent of the FOV/2 to get the actual coordinates. Chris
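    A minimal sketch of the idea above: map the pixel to a fraction of the view frustum, then scale by the frustum extents at z = 1, which is where the tan(FOV/2) and the aspect ratio come in. A vertical FOV is assumed, and the names are mine, not from the original code:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 PickRayDirection(float px, float py,          // pixel, origin at top-left
                              float width, float height,   // viewport size in pixels
                              float fovY)                   // vertical FOV in radians
        {
            // Pixel -> [-1, +1], with +1 at the top of the screen.
            float nx = (2.0f * px / width)  - 1.0f;
            float ny = 1.0f - (2.0f * py / height);

            // Frustum half-extents at z = 1.
            float halfHeight = std::tan(fovY * 0.5f);
            float halfWidth  = halfHeight * (width / height);   // scaled by the aspect ratio

            // Ray direction in view space; normalize before intersecting if needed.
            Vec3 dir = { nx * halfWidth, ny * halfHeight, 1.0f };
            return dir;
        }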