David Clark

Members
  • Content count: 27
  • Joined
  • Last visited

Community Reputation

122 Neutral

About David Clark

  • Rank: Member
  1. Redundant calls to glBegin(GL_TRIANGLES) seem to be making my code 3 times faster :O! Here's a pastie: http://www.pastie.org/612337 Basically, List 5 is:

        begin triangle list
        vertex vertex vertex
        ......
        vertex vertex vertex
        end triangle list

    and List 6 is:

        begin triangles
        vertex vertex vertex
        end triangles
        begin triangles
        vertex vertex vertex
        end triangles
        ...
        begin triangles
        vertex vertex vertex
        end triangles

    Both lists have exactly the same number of triangles in them. When I run my code, list 5 takes 0.09 seconds to draw, and list 6 takes only 0.03! Can anyone see if I'm doing something stupid in my code? Is this normal? It's convenient that the latter is faster, but I have no idea why and it's driving me mad! :( Thanks in advance :D
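    For reference, here is a minimal sketch of the two submission patterns being compared (the actual code is in the pastie above; the vertex layout, function names and loop bounds here are just placeholders):

        #include <GL/gl.h>

        // "List 5": one glBegin/glEnd pair around the whole triangle list.
        void drawAsOneBatch(const float* verts, int triangleCount)
        {
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < triangleCount * 3; ++i)
                glVertex3fv(&verts[i * 3]);
            glEnd();
        }

        // "List 6": a separate glBegin/glEnd pair for every triangle.
        void drawAsManyBatches(const float* verts, int triangleCount)
        {
            for (int t = 0; t < triangleCount; ++t)
            {
                glBegin(GL_TRIANGLES);
                for (int v = 0; v < 3; ++v)
                    glVertex3fv(&verts[(t * 3 + v) * 3]);
                glEnd();
            }
        }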
  2. ID3DXFont really rubbish quality

    Well I had a go at modifying it:

        HDC dc = gfont->GetDC();
        SetBkColor(dc, RGB(255,255,255));
        SetTextColor(dc, RGB(0,0,0));

    But when you call the DrawText function, the letters are inverted... instead of drawing the letters, it draws the background. Here's a screenie: The first window is how it rendered. The second two are modified in Paintbrush... but this is how the text should look on a white background. So I think DirectX creates the font using a black background and white text, then creates the texture using the black-to-white values as the alpha channel (black being transparent, white being opaque).
  3. ID3DXFont really rubbish quality

    I think Codeka was right on the ball with this one! Have a look at this: I did this all in MS Paint. I inverted the white text on a black background so it became black text on a white background... and lo and behold, it looks like s***! And it looks exactly like the blocky text!
  4. ID3DXFont really rubbish quality

    Hmm, I found this thread while googling around: http://forums.devx.com/archive/index.php/t-153589.html

    'Actually I use A8R8G8B8 as a render target and it works fine. All you need is to fill the background with ARGB = 0,255,255,255 then you render the text with fully white color (255,255,255,255) in your A8R8G8B8 render target. Then in your ID3DXSprite Draw function you set the last param to the color you want the text to be displayed with (in your case black).'

    My guess is that what's happened is that when he drew the text onto the render target, it used these render states:

        SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
        SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    Now the red, green and blue channels would have remained 255,255,255, but the alpha channel would have been multiplied by itself:

        alpha = srcalpha * srcalpha + destalpha * (1 - srcalpha)
        alpha = srcalpha * srcalpha + 0 * (1 - srcalpha)
        alpha = srcalpha * srcalpha

    Anyway, here's an image with three lines of text. The first is the normal 'blocky' writing. The second is the normal 'blocky' writing after its alpha has been multiplied by itself. The third is MS Paint text. Although this improves the quality of the antialiasing, it still looks thicker than the MS Paint one :/
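    To make the effect of that alpha-squaring concrete, here is a quick worked example for a single antialiased edge pixel (the 0.5 coverage value is just an illustration, not taken from the actual font texture):

        #include <cstdio>

        int main()
        {
            // An antialiased font edge pixel that is 50% covered: srcalpha = 0.5.
            float srcalpha  = 0.5f;
            float destalpha = 0.0f;   // render target alpha was cleared to 0

            // D3DBLEND_SRCALPHA / D3DBLEND_INVSRCALPHA applied to the alpha channel:
            float blended = srcalpha * srcalpha + destalpha * (1.0f - srcalpha);

            std::printf("edge alpha: %.2f instead of 0.50\n", blended);   // prints 0.25
            return 0;
        }

    Squaring pulls every partially covered edge pixel toward transparent, which is presumably why the second line in the image looks thinner than the first.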
  5. Hello. So basically I have:

        ID3DXFont *gfont;

    which I give a font:

        D3DXCreateFont( pd3dDevice, 400, 0, FW_BOLD, 0, FALSE, DEFAULT_CHARSET,
                        OUT_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
                        DEFAULT_PITCH | FF_DONTCARE, TEXT("Arial Black"), &gfont );

    and I render it:

        gfont->DrawText(NULL, text, -1, &rct, 0, fontColor );

    And it looks like the ? to the left: http://img16.imageshack.us/img16/9397/sighi.png To produce the ? on the right, I used Photoshop to multiply the alpha channel of the text by itself, making the rendered text look much better (basically invert so the text is white like an alpha channel, then apply image, and then invert back to how it was). So I'm wondering... am I doing something wrong? Is there a render state I'm meant to be setting to make the question mark look like the one to the right? I've uploaded the source code here: http://www.mediafire.com/?5uteeydcmva Help would be greatly appreciated :)
  6. Best way of drawing 2D

    Here's some code to help you set up a matrix:

        // let Width and Height be the screen size
        float Width = 640;
        float Height = 480;

        float l = 0;
        float t = 0;
        float r = Width;
        float b = Height;

        l += 0.5f;
        t += 0.5f;
        r += 0.5f;
        b += 0.5f;

        D3DXMATRIX Ortho2D;
        D3DXMATRIX Identity;

        D3DXMatrixOrthoOffCenterRH(&Ortho2D, l, r, b, t, 0, 1);
        D3DXMatrixIdentity(&Identity);

        d3dDevice->SetTransform(D3DTS_PROJECTION, &Ortho2D);
        d3dDevice->SetTransform(D3DTS_WORLD, &Identity);
        d3dDevice->SetTransform(D3DTS_VIEW, &Identity);

    If you wish to use zooms and rotations, then you need to look at ways of manipulating the matrix. For example, if you want to centre the screen on the player, you might instead set up a matrix like this:

        D3DXMatrixOrthoOffCenterRH(&Ortho2D, -Width/2, Width/2, Height/2, -Height/2, 0, 1);

    and then follow with a scale for zooming (D3DXMatrixScaling, then D3DXMatrixMultiply; multiplying lets you combine matrices so that one matrix has the exact same effect as several), then a rotation (D3DXMatrixRotationYawPitchRoll, D3DXMatrixMultiply), and then a translation (D3DXMatrixTranslation, D3DXMatrixMultiply). A sketch of that chain is below. If everything looks slightly blurry, try doing a final translation of 0.5. It depends at what coordinates you draw your quads, but usually the pixels don't line up with the Direct3D texels, and if you decide to use a linear sampler you could find everything looks slightly blurry. Good luck :)
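    A minimal sketch of that scale/rotation/translation chain, built as a view matrix (assuming d3dx9.h and the d3dDevice from above; the zoom, angle and player position values are placeholders, and other multiplication orders are possible):

        // Hypothetical camera parameters.
        float zoom    = 2.0f;           // 2x magnification
        float angle   = D3DX_PI / 8;    // camera roll in radians
        float playerX = 100.0f, playerY = 50.0f;

        D3DXMATRIX view, scaling, rotation, translation;

        // Move the player to the origin first, then rotate, then zoom
        // (row-vector convention: the leftmost matrix is applied first).
        D3DXMatrixTranslation(&translation, -playerX, -playerY, 0.0f);
        D3DXMatrixRotationYawPitchRoll(&rotation, 0.0f, 0.0f, angle);
        D3DXMatrixScaling(&scaling, zoom, zoom, 1.0f);

        D3DXMatrixMultiply(&view, &translation, &rotation);   // view = T * R
        D3DXMatrixMultiply(&view, &view, &scaling);           // view = (T * R) * S

        d3dDevice->SetTransform(D3DTS_VIEW, &view);

    Combined with the centred ortho projection above, this gives a view that is centred on the player, rotated, and zoomed.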
  7. Hi. On some graphics cards, people are limited in how many render targets they can have. In my game there are some render targets which are only rendered to once, and for the rest of the game they are used like normal textures. I was wondering: is it possible to convert a render-target texture into a plain texture? At the moment I save it to a file in memory and load that file back into DirectX... but it's a slow process, as it goes GPU to CPU to GPU.
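    One way to keep the copy on the GPU, at least within a single device, is IDirect3DDevice9::StretchRect. A minimal sketch, assuming srcTexture and dstTexture are both D3DUSAGE_RENDERTARGET textures in D3DPOOL_DEFAULT with the same size and format (the variable names are placeholders):

        IDirect3DSurface9* srcSurface = NULL;
        IDirect3DSurface9* dstSurface = NULL;

        if (SUCCEEDED(srcTexture->GetSurfaceLevel(0, &srcSurface)) &&
            SUCCEEDED(dstTexture->GetSurfaceLevel(0, &dstSurface)))
        {
            // GPU-side copy; D3DTEXF_NONE because the surfaces are the same size.
            d3dDevice->StretchRect(srcSurface, NULL, dstSurface, NULL, D3DTEXF_NONE);
        }

        if (srcSurface) srcSurface->Release();
        if (dstSurface) dstSurface->Release();

    The destination here is still a render-target texture, so it may not help with the render-target count itself, but it does avoid the save-to-file round trip.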
  8. 2D Map Editor, Loading Tilesets.

    I would highly recommend you use an image library like CxImage: http://www.codeproject.com/KB/graphics/cximage.aspx This makes handling images a lot easier, as you can loop through pixels and obtain the width and height of the image. Your method of having separate images for each tile is, in my opinion, the right one. You save memory because tiles are usually 32x32 or 64x64, which is a native size for DirectX. Also, if you render with quads, you don't have to worry about the edges of the tiles 'bleeding' when you transform the screen (such as rotating or zooming). It also means people can use more than one tileset for their levels... and if the entire tileset isn't used, you don't need to load the unused tiles into DirectX. Anyway, one tip if you decide to use CxImage: it takes the y coordinate from the bottom of the image. That's caused me a few headaches in the past. When you're done with your CxImage, here's a way you can load it into DirectX:

        CxImage& cx = ...;   // the tileset image you loaded
        BYTE *buffer = 0;
        long size = 0;
        cx.Encode(buffer, size, CXIMAGE_FORMAT_PNG);
        // call a function such as D3DXCreateTextureFromFileInMemory here
        cx.FreeMemory(buffer);

    I use the PNG format as it will maintain the alpha channel in the tileset (if it has one). Good luck :)
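    Filled out a little, that encode-then-upload step might look like this (a sketch only: pd3dDevice, the function name and the error handling are placeholders, and it assumes d3dx9.h plus the CxImage headers):

        IDirect3DTexture9* LoadCxImageAsTexture(IDirect3DDevice9* pd3dDevice, CxImage& cx)
        {
            BYTE* buffer = 0;
            long  size   = 0;

            // Encode the image to an in-memory PNG so the alpha channel survives.
            if (!cx.Encode(buffer, size, CXIMAGE_FORMAT_PNG))
                return NULL;

            IDirect3DTexture9* texture = NULL;
            HRESULT hr = D3DXCreateTextureFromFileInMemory(pd3dDevice, buffer, size, &texture);

            cx.FreeMemory(buffer);
            return SUCCEEDED(hr) ? texture : NULL;
        }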
  9. Transparent textures + zbuffer

    Hey, we have the same name :D I just did some googling and found this:

        pd3dDevice->SetRenderState( D3DRS_ALPHATESTENABLE, TRUE );
        pd3dDevice->SetRenderState( D3DRS_ALPHAREF, 0x01 );
        pd3dDevice->SetRenderState( D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL );

    Apparently that prevents fully transparent sections of the texture from being written not only to the screen but also to the z-buffer. But I guess the problem with that is that if your texture is filtered, you'll have areas that are semi-transparent, and they won't be rendered right. Thanks for the advice, Dave :) It looks like rendering the gate last is the best solution (or at least building up a list of meshes that have semi-transparent regions, and sorting them so they render furthest to closest after rendering everything else). A rough sketch of that sort is below.
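    A minimal sketch of that back-to-front sort for the semi-transparent meshes (the TransparentMesh struct, its fields and the DrawMesh call are placeholders):

        #include <algorithm>
        #include <vector>
        #include <d3dx9.h>

        // Hypothetical record for a mesh that contains semi-transparent regions.
        struct TransparentMesh
        {
            D3DXVECTOR3 position;
            float       distanceSq;   // filled in each frame before sorting
            // ... mesh data, textures, etc.
        };

        void DrawMesh(TransparentMesh* mesh);   // assumed to exist elsewhere

        // Sort predicate: furthest from the camera first.
        static bool FurtherFirst(const TransparentMesh* a, const TransparentMesh* b)
        {
            return a->distanceSq > b->distanceSq;
        }

        void DrawTransparentMeshes(std::vector<TransparentMesh*>& meshes,
                                   const D3DXVECTOR3& cameraPos)
        {
            for (size_t i = 0; i < meshes.size(); ++i)
            {
                D3DXVECTOR3 d = meshes[i]->position - cameraPos;
                meshes[i]->distanceSq = D3DXVec3LengthSq(&d);
            }

            // Draw furthest first so nearer transparent surfaces blend over
            // the ones behind them.
            std::sort(meshes.begin(), meshes.end(), FurtherFirst);

            for (size_t i = 0; i < meshes.size(); ++i)
                DrawMesh(meshes[i]);
        }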
  10. Hello, I have a small question about the z-buffer and how to use it with textures that have transparent areas. As I see it, the z-buffer is a surface that records the depth of rendered pixels, and when enabled it prevents pixels from rendering where other pixels closer to the camera have already been rendered. So if you render wallB after rendering wallA, and wallB is behind wallA, then the area of wallB that wallA is in front of won't be rendered, and everything's cool. However... how do you handle textures with transparent areas? For example, say that instead of walls you have a gate texture. If gateA is behind gateB then the render will look fine, but if gateA is in front of gateB, then you won't see gateB through gateA. Or what if the wall is made out of a semi-transparent material? I guess you could determine by distance from the camera which wall is closer... but what if your scenery is a single mesh (or you have a mesh such as a hollow cube)? Is there something special you can do with DirectX? Thanks in advance
  11. Okay, so there's that new PlayStation 3 game I've heard about where you can put cards in front of the camera, and it detects which card it is (obviously from the black and white shapes at the top of the card), and then you see your card turn into a cool 3D moving character that follows your card around. I can guess at how the motion detection is done, and how the card is read... but there is one thing that has completely baffled me: how did they render 3D on top of a webcam picture? I mean, the technology has been around for ages, I guess, as it was used in EyeToy and stuff... but is there some special way of streaming graphics into DirectX so they can be rendered in 3D? If someone knows of any tutorials, or even what the method might be called... or hell, even whether it's possible on a PC with DirectX... please tell me :D I promise I won't code a Pokemon card game :P Thanks for reading.
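    One common approach (not necessarily what that game does) is to upload each webcam frame into a dynamic texture and draw it as a full-screen quad before rendering the 3D scene on top. A rough sketch, assuming the texture was created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT as D3DFMT_A8R8G8B8, and that the frame arrives as 32-bit BGRA pixels (the function and parameter names are placeholders):

        #include <d3d9.h>
        #include <cstring>

        // Hypothetical: called once per captured frame.
        void UploadWebcamFrame(IDirect3DTexture9* videoTexture,
                               const BYTE* frameData, UINT frameWidth, UINT frameHeight)
        {
            D3DLOCKED_RECT locked;
            if (SUCCEEDED(videoTexture->LockRect(0, &locked, NULL, D3DLOCK_DISCARD)))
            {
                const BYTE* src = frameData;
                BYTE* dst = (BYTE*)locked.pBits;
                for (UINT y = 0; y < frameHeight; ++y)
                {
                    memcpy(dst, src, frameWidth * 4);   // one row of BGRA pixels
                    src += frameWidth * 4;
                    dst += locked.Pitch;                // texture rows may be padded
                }
                videoTexture->UnlockRect(0);
            }
        }

    After the upload you draw a full-screen quad with videoTexture, clear only the z-buffer, and then render the 3D objects over it.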
  12. Hello. This may seem like a ridiculous question, but... say I have a CD3D9Texture from one IDirect3DDevice9, and I wish to use this texture with another IDirect3DDevice9. What is the best way to copy the texture information over (unless it is possible to use a texture from another device directly)? Also, the CD3D9Texture is a render target. I know one way is to lock the texture, loop over all the pixels, and put those into a new texture on the new device... however, that means going from GPU to CPU to GPU, and I'm wondering if there's a way to go GPU -> GPU. Thanks :)
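    For what it's worth, one GPU-to-GPU route does exist if both devices are created through Direct3D 9Ex (Vista or later): the pSharedHandle parameter of CreateTexture lets a second device open a texture created on the first. A minimal sketch, assuming 9Ex devices, a D3DPOOL_DEFAULT render-target texture, and placeholder deviceA/deviceB/width/height variables; on plain Direct3D 9 the copy has to go through system memory:

        HANDLE sharedHandle = NULL;
        IDirect3DTexture9* textureOnDeviceA = NULL;
        IDirect3DTexture9* textureOnDeviceB = NULL;

        // Device A creates the render-target texture and receives a share handle.
        deviceA->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                               D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                               &textureOnDeviceA, &sharedHandle);

        // Device B opens the same GPU resource by passing the handle back in.
        deviceB->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                               D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                               &textureOnDeviceB, &sharedHandle);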
  13. Clamping UV texture co-ordinates

    Oh, such a simple solution! I feel like such a noob. Thanks heaps, both of you!
  14. I'm working on a 2D game and I've noticed that when the sprites increase in size, the edges of the texture seem to wrap. The texture coordinates are simply 0,0,1,1. I know I could fix this by giving all my sprites a transparent border, but I'm sure there must be a proper solution, as this kind of thing would surely affect 3D games! I remember back in the days when I used BlitzBasic, I had a texture which was a gradient. The ground was black, the texture of the walls was a gradient from black to white, and the sky was white. I had a wrap problem back then as the top of the walls had some white appear on them, but there was an option to fix it called 'CLAMPUV'. So I'm wondering, how do I go about this in DirectX? Is there a render state or something? Thanks in advance :)
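    For reference, the Direct3D 9 equivalent of that BlitzBasic flag is the texture-address sampler state; the replies aren't quoted here, but the usual fix looks like this (texture stage 0 assumed):

        // Clamp texture addressing so coordinates outside 0..1 repeat the edge
        // texel instead of wrapping around to the opposite side of the texture.
        d3dDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
        d3dDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);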
  15. G'day. I'm programming in VC++ 6, using DirectX 9. I have a render target which is about 4x4 pixels, and it contains important pixel information (it all relates to a collision system). I need to extract this information so I can detect a collision. This is currently what I use:

        IDirect3DSurface9* RTTSurface;   // global offscreen surface
        IDirect3DSurface9* surface;

        // Get information about our texture
        D3DSURFACE_DESC desc;
        m_RenderTargetTexture->GetLevelDesc(0, &desc);

        if(!RTTSurface)
        {
            // Create an offscreen plain surface in system memory
            if(FAILED( m_display.d3dDevice->CreateOffscreenPlainSurface(desc.Width, desc.Height,
                       desc.Format, D3DPOOL_SYSTEMMEM, &RTTSurface, NULL)))
                MessageBox(0, "Error", "Error", MB_OK | MB_ICONHAND);
        }

        // Get the surface of the render target
        if(FAILED( m_RenderTargetTexture->GetSurfaceLevel(0, &surface)))
            MessageBox(0, "Error", "Error", MB_OK | MB_ICONHAND);

        // Copy the render target data into the system-memory surface
        if(FAILED( m_display.d3dDevice->GetRenderTargetData(surface, RTTSurface)))
            MessageBox(0, "Error", "Error", MB_OK | MB_ICONHAND);

        // Lock the offscreen plain surface
        D3DLOCKED_RECT rect;
        if(FAILED( RTTSurface->LockRect(&rect, NULL, 0)))
            MessageBox(0, "Error", "Error", MB_OK | MB_ICONHAND);

        COLORREF* pData = (COLORREF*)rect.pBits;

    At this point pData is a pointer to the pixels. Is there any way this code can be optimised? Is there a faster way of reading the data back from the graphics card? Thanks for reading :)