About don

  • Rank
    Advanced Member
  1. Black seems to make sense to me if there's no vertex color being propagated to the input of the pixel shader. Where do you think it should be getting the input color from? Why do you think it should default to white?
  2. Have you tried using the D3DM reference driver? It will be slow as mud but might tell you if it's a driver problem. Do fullscreen DDraw apps work?
  3. don

    Transparent Textured Quads

    If your texture has transparent texels defined by a color key, then you have a 1-bit alpha. Why don't you just enable alpha testing? Pixels that fail the alpha test don't affect the depth buffer and you can draw the quads in any order that you like.
  4. "Efficient Generation of Motion Transitions using Spacetime Constraints"
  5. Why don't you just run perfmon?
  6. D3DM perf is highly dependent on the driver supplied with your device. It sounds like your OEM put the D3DM Reference Driver on the device instead of developing a driver of their own. That driver is designed to generate "golden frames" for driver verification purposes and does the majority of its rendering calculations using floating-point math, so it's very accurate, but also very slow. There has been some work in the public domain on managed wrappers around OGLES, and there's more than one such wrapper out there; one of those might be a place to start. Good luck!
  7. If the vertices were transformed by ProcessVertices, there will be an internal buffer that contains the state of the clip test for each vertex. The transformed vertices in the destination VB will be in either clip space or screen space, depending on whether they lie outside or inside the viewing frustum. This is done so that vertices that lie outside the frustum don't have to be back-transformed from screen space to clip space in order to calculate the clip intersection with the frustum planes prior to rasterization. So the resulting XYZRHW buffer's vertices can be frustum clipped, but only if they were processed by PV and have an internal clip state buffer. There's no way to access this clip state buffer directly, so if you are performing transforms yourself, you should be doing your own clipping (or rely on guard bands, as Richard noted).
  8. No DDraw or DShow? AFAIK, they're still available on all of these platforms. I'm probably going to get beat up for asking this, but where's OpenGL? ;) The 'bespoke' term is hilarious. I'll make a note to scribble that on a whiteboard in our next architecture review meeting instead of simply using the term 'Apps'. I'll just tell my confused colleagues that I found the term online in the new MSDN "Dev Lore" web pages, to which they'll likely reply, "Yeah, sure. Have a seat, Gandalf."
  9. What I came up with was similar to ET3D's solution: use DOTPRODUCT3 to set the alpha channel to zero, based on the color values. You'd need to shift each texel by an offset based on the color key prior to performing the dot product, so that the dot product of the keyed texel with itself is zero. I used ADDSIGNED for this, but there are probably other ways to do it.

    I have code that does all of this, and it does work; the problem is a lack of precision, which causes colors that are close to the color key to also become transparent. As an example, when I used 0x00f800 (97% green) as the color key, any color from 0x00f100 to 0x00ff00 ended up with an alpha value of zero. This is because the dot product over that range produces values that are less than 1/255, and those are assigned a value of zero once the colors have been scaled back to the range [0, 255]. Another problem is that there are probably combinations of color components that will result in a dot product (and a resulting alpha value) of zero, even though they are nowhere near the color key's color.

    BTW, if you use DOTPRODUCT3 as a color op, it will copy the result to all color channels as well as the alpha channel; the alpha op in the stage will be ignored. There's also an offset involved so that negative values can be expressed. I had to look at the source of the reference rasterizer to find out how this texture op actually works.

    If you're not going to do this with shaders, then you may want to spend your time making that texture load routine as fast as possible and just search for the color key and set the alpha accordingly. There's a D3DX routine that does this, but you might be able to do something faster using SSE. If you have to do this every frame, double-buffering the texture might provide a speed-up, since you aren't trying to write to the texture while it's in use by the GPU.
  10. So your question basically boils down to, "How do I configure the D3D9 texture stages of the fixed-function pipeline to render a specified color as transparent?" If that's the case, I think I can help you, but before I spend time testing my theory, I need to know if this is truly what you're trying to do.
  11. don

    Lighting question

    It should be black. You should sum the contribution of each of your lights. A light's contribution is determined by modulating (multiplying) the color of the light with the color of the surface. Look at the lighting formulas in the D3D docs for the old fixed function pipeline.
  12. pd3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_SELECTARG1);

    [edit] Looking in d3d9types.h, D3DTOP_SELECTARG1 and D3DTA_TEXTURE are both defined to a value of 2, so you'll still have the problem after making the correction above. Texture capabilities are usually queried not by caps bits, but by setting up the texture stage and calling ValidateDevice. Have you explicitly disabled the next texture stage? Perhaps it's a driver bug; you could try getting an updated driver from NVIDIA's website or your card manufacturer.
  13. It's one thing to reinvent the wheel and develop an alternative to Boost/STL at the beginning of a project, and another to continue using the team's existing libraries instead of switching to Boost/STL. There had better be a good reason for the former, while the latter is more common and arguably justified. I agree with most of the posts: you shouldn't make waves and end up butting heads over this. If you want to support your assertion that the entire organization should standardize on Boost/STL, then track the bugs during the course of the project and note which ones would never have occurred had the developers used Boost/STL. At the end of the project, present this data to the project lead and let them decide whether it's worth changing. In my 20+ years as a software engineer I've seen similar battles over silly things like the placement of braces or whether TABs belong in comment blocks. In the end, all it does is get people pissed at each other, and no matter who wins, it affects future team interaction in a negative way. Don't sweat the small stuff.
  14. The blit is slow because you're doing a color conversion during the blit. The DDB is the same format as the primary display, so that blit is fast: it's basically a hardware-accelerated memcpy. The DDB is created by the display driver, and its contents are opaque outside of the driver. GDI calls made by applications translate to DDI calls into the display driver, and the display hardware then draws to the DDB.