Michael Anthony Wion

Members
  • Content count: 39

Community Reputation

106 Neutral

About Michael Anthony Wion

  • Rank: Member
  1. Font rendering using GDI & D3D10

    Thanks! It appears to have copied over properly, except the image is rendered upside-down, so I assume I'll need to reverse the row order in the GDI image before copying, unless there's a more efficient way to handle this. EDIT: In the BITMAPINFOHEADER, when you set biHeight to a negative value, the DIB becomes top-down, which flips the image along the y-axis. All is good, thanks again!
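    (A minimal sketch of that header tweak, with bmpWidth/bmpHeight standing in for the bitmap dimensions:)
[code]
// Negative biHeight requests a top-down DIB: row 0 is the top scanline,
// which is the row order a D3D texture upload expects.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = bmpWidth;
bmi.bmiHeader.biHeight      = -bmpHeight;   // negative = top-down, no manual flip
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;
[/code]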
  2. Font rendering using GDI & D3D10

    I'm using D3D10, so even if those suggestions were applicable, I'd imagine adopting them would mean a rather painful rewrite. [quote name='MJP' timestamp='1336370325' post='4937981'] For that, just create a Texture2D and pass the image data through the pInitialData parameter of CreateTexture2D. [/quote] Thanks! I was able to get the fonts generated, cached, and even added into a resource view. When I save the bitmap data out to a file, it comes out precisely as expected. But when I try to render the resource view, nothing is drawn. Is there a simple way to output the texture data to a file from a resource? I'm just trying to make sure the data actually copies over properly before continuing on. If it saves as expected, then the problem is probably just my matrix/shader setup.
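    (A minimal sketch of that file-dump check, assuming m_pTexture is the ID3D10Resource created from the glyph data; D3DX10SaveTextureToFile handles the readback:)
[code]
#include <d3dx10.h>

// Write the GPU-side texture out to a BMP so its contents can be inspected.
// "font_test.bmp" is a hypothetical output path.
if (FAILED(D3DX10SaveTextureToFile(m_pTexture, D3DX10_IFF_BMP, TEXT("font_test.bmp"))))
    return false;
[/code]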
  3. Font rendering using GDI & D3D10

    So just as a general test of ideas, I tried this:
[CODE]
bool TextClass::Initialize(ID3D10Device* device)
{
    int bmpWidth = 64;
    int bmpHeight = 16;
    LPCWSTR strText = TEXT("Test");
    RECT rcText = { 0, 0, bmpWidth, bmpHeight };
    COLORREF hbrColor = RGB(255, 0, 0);
    HDC hDC = CreateCompatibleDC(NULL);
    DWORD* pSrcData = 0;
    BITMAPINFO bmi = { sizeof(BITMAPINFOHEADER), bmpWidth, bmpHeight, 1, 32, BI_RGB, 0, 0, 0, 0, 0 };
    HBITMAP hTempBmp = CreateDIBSection(hDC, &bmi, DIB_RGB_COLORS, (void**)&pSrcData, NULL, 0);
    HFONT NewFont = CreateFont(0, 0, 0, 0, FW_DONTCARE, FALSE, FALSE, FALSE, DEFAULT_CHARSET,
                               OUT_OUTLINE_PRECIS, CLIP_DEFAULT_PRECIS, CLEARTYPE_QUALITY,
                               VARIABLE_PITCH, TEXT("Impact"));
    HBRUSH NewBrush = CreateSolidBrush(hbrColor);
    SelectObject(hDC, NewFont);
    SelectObject(hDC, NewBrush);
    DrawText(hDC, strText, 4, &rcText, DT_LEFT | DT_WORDBREAK);
    GdiFlush();
    DeleteObject(NewBrush);
    DeleteObject(NewFont);
    ReleaseDC(NULL, hDC);

    D3DX10_IMAGE_LOAD_INFO info;
    info.Width = (rcText.right - rcText.left);
    info.Height = (rcText.bottom - rcText.top);
    info.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    info.Usage = D3D10_USAGE_DEFAULT;
    info.MipLevels = 1;
    info.Depth = 1;
    info.CpuAccessFlags = 0;
    info.BindFlags = D3D10_BIND_SHADER_RESOURCE;
    info.MiscFlags = 0;
    info.Filter = 0;
    info.FirstMipLevel = 1;
    info.MipFilter = 0;
    info.pSrcInfo = 0;

    if(FAILED( D3DX10CreateTextureFromMemory(device, pSrcData, (4 * info.Width) * info.Height,
                                             &info, NULL, &m_pTexture, NULL) ))
    {
        return false;
    }

    D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
    SecureZeroMemory(&srvDesc, sizeof(D3D10_SHADER_RESOURCE_VIEW_DESC));
    m_pTexture2D = (ID3D10Texture2D*)m_pTexture;
    srvDesc.Format = info.Format;
    srvDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MostDetailedMip = info.MipLevels;

    if(FAILED( device->CreateShaderResourceView(m_pTexture, &srvDesc, &m_pSRView) ))
    {
        return false;
    }

    return true;
}
[/CODE]
    ... But it always fails on D3DX10CreateTextureFromMemory. I don't know if this is a compatibility issue or if I'm just going about this improperly, but AFAIK it looks like it should work.
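    (In hindsight, two things stand out. First, the DIB section is never selected into the DC, so DrawText renders to the DC's default 1x1 bitmap; a SelectObject(hDC, hTempBmp) before drawing is needed. Second, D3DX10CreateTextureFromMemory expects the bytes of an image [i]file[/i] (.bmp, .dds, etc.) in memory, not raw pixels, which is why it fails here. A minimal sketch of the raw-pixel route that MJP suggests in the reply quoted in the post above, reusing the variable names from this snippet:)
[code]
// Create the texture directly from the DIB section's raw 32bpp pixels.
D3D10_TEXTURE2D_DESC desc = {};
desc.Width            = bmpWidth;
desc.Height           = bmpHeight;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM; // GDI writes BGRA, so red/blue
desc.SampleDesc.Count = 1;                          // may need swizzling in the shader
desc.Usage            = D3D10_USAGE_DEFAULT;
desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE;

D3D10_SUBRESOURCE_DATA initData = {};
initData.pSysMem     = pSrcData;       // pixels from CreateDIBSection
initData.SysMemPitch = bmpWidth * 4;   // bytes per row at 32bpp

ID3D10Texture2D* pTexture2D = NULL;
if (FAILED(device->CreateTexture2D(&desc, &initData, &pTexture2D)))
    return false;

D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format                    = desc.Format;
srvDesc.ViewDimension             = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;  // 0 is the most detailed mip, not MipLevels
srvDesc.Texture2D.MipLevels       = 1;
if (FAILED(device->CreateShaderResourceView(pTexture2D, &srvDesc, &m_pSRView)))
    return false;
[/code]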
  4. Font rendering using GDI & D3D10

    I've heard from multiple sources that the best way to crank out performance while rendering 2D text to the screen is to roll your own font class. The font class holds a cache of the text being drawn to the screen, and only creates new resources when the specified text does not already exist within the cache. This makes perfect sense to me. As for the implementation, I figured I would create the font & text using GDI (or possibly GDI+), then copy it onto a texture in D3D, freeing GDI resources as I go along. I've also read that this is basically how ID3DXFont does the job. My question, however, is how would you get the image from a GDI device context onto an ID3D10ShaderResourceView texture? And would it be better to use GDI+ instead of GDI? Thank you for your thoughts.
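    (A minimal sketch of the caching idea described above; RenderTextWithGDI is a hypothetical helper standing in for the GDI-to-texture copy being asked about:)
[code]
#include <d3d10.h>
#include <map>
#include <string>

// Hypothetical helper: rasterizes a string with GDI and uploads it to a texture.
ID3D10ShaderResourceView* RenderTextWithGDI(const std::wstring& text);

// One cached texture per unique string, created lazily on first use.
std::map<std::wstring, ID3D10ShaderResourceView*> g_textCache;

ID3D10ShaderResourceView* GetTextTexture(const std::wstring& text)
{
    std::map<std::wstring, ID3D10ShaderResourceView*>::iterator it = g_textCache.find(text);
    if (it != g_textCache.end())
        return it->second;  // cache hit: reuse the existing texture

    ID3D10ShaderResourceView* srv = RenderTextWithGDI(text);  // cache miss: build it
    g_textCache[text] = srv;
    return srv;
}
[/code]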
  5. Problem with shaders

    Check your messages.
  6. Efficient 2D sprite designs

    I just keep a private variable in the class to store the number of instances. What about a slight combination of options A and B, where each instance holds a "state" variable indicating its current status? Then during each update call, I would iterate through the buffer searching for dead elements and replace them with the topmost elements. As for the draw ordering, I think I could just set each Z value to [i]slotNumber / numberOfSlots[/i] and call it a day. Could this end up being overkill for large buffers? BTW, I'm not creating the buffer every frame. I create it once during the initialization phase, update it only when necessary, and memcpy it every frame.
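    (A minimal sketch of that dead-slot compaction pass, assuming a CPU-side instance array with an alive flag; only the live range would then be memcpy'd into the instance buffer:)
[code]
#include <vector>

// CPU-side instance record; the alive flag never needs to reach the GPU.
struct SpriteInstance { float x, y, z; float u, v; bool alive; };

void CompactInstances(std::vector<SpriteInstance>& instances)
{
    size_t count = instances.size();
    for (size_t i = 0; i < count; )
    {
        if (!instances[i].alive)
            instances[i] = instances[--count];  // replace dead slot with the topmost element
        else
            ++i;
    }
    instances.resize(count);
}
[/code]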
  7. Problem with shaders

    This may seem like a stupid question, but are your transforms set up properly to view your geometry? Try making sure your world matrix is an identity matrix, your view matrix is a slight "look-at" offset from it, and your projection is a proper perspective transform. It's typical for people (myself included) who are learning something new to overlook the simplest of things. It usually ends in a smile + facepalm combo.
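    (A minimal sanity-check setup using the D3DX10 math helpers; width and height stand in for the backbuffer dimensions:)
[code]
#include <d3dx10.h>

// World = identity, view = camera pulled back 5 units, projection = 45-degree FOV.
D3DXMATRIX world, view, proj;
D3DXMatrixIdentity(&world);

D3DXVECTOR3 eye(0.0f, 0.0f, -5.0f);
D3DXVECTOR3 at(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&view, &eye, &at, &up);

D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f,
                           width / (float)height,  // assumed backbuffer aspect ratio
                           0.1f, 1000.0f);
[/code]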
  8. Strange alphablending results

    I'm not rendering anything except a batch of sprites on a clear backbuffer. I did, however, manage to get the alpha channel to kick in by using a PNG image and setting [i]AlphaToCoverageEnable[/i] to [i]TRUE[/i]... except there are some "ghostly" artifacts which occur at certain camera angles, as shown here: [media]http://youtu.be/jXx27YG7gY4[/media] Luckily, from what I can tell so far, this doesn't occur with my DDS-format sprites. What I don't understand is why every online tutorial/resource I've scrounged up has [i]AlphaToCoverageEnable[/i] disabled, and yet it seemingly does the job for me?
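    (For reference, a minimal sketch of turning alpha-to-coverage on from the API side instead of in the effect file:)
[code]
// Blend state with alpha-to-coverage enabled and conventional blending off.
D3D10_BLEND_DESC blendDesc = {};
blendDesc.AlphaToCoverageEnable    = TRUE;
blendDesc.BlendEnable[0]           = FALSE;
blendDesc.SrcBlend                 = D3D10_BLEND_ONE;   // ignored while BlendEnable
blendDesc.DestBlend                = D3D10_BLEND_ZERO;  // is FALSE, but kept valid
blendDesc.BlendOp                  = D3D10_BLEND_OP_ADD;
blendDesc.SrcBlendAlpha            = D3D10_BLEND_ONE;
blendDesc.DestBlendAlpha           = D3D10_BLEND_ZERO;
blendDesc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;
blendDesc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

ID3D10BlendState* pBlendState = NULL;
if (SUCCEEDED(device->CreateBlendState(&blendDesc, &pBlendState)))
{
    float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    device->OMSetBlendState(pBlendState, blendFactor, 0xFFFFFFFF);
}
[/code]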
  9. Strange alphablending results

    So I've read up about blend states, and enabled alpha blending by adding the following lines to my shader:
[code]
BlendState AlphaBlendState
{
    AlphaToCoverageEnable = FALSE;
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};

//...

technique10 TextureTechnique
{
    pass pass0
    {
        SetBlendState(AlphaBlendState, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
        SetVertexShader(CompileShader(vs_4_0, TextureVertexShader()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, TexturePixelShader()));
    }
}
[/code]
    I can tell that some effect is occurring when I set the alpha of a given pixel color to 0.5f; however, it's not actually [i]blending[/i] with the background. It merely seems to dim the hues of the texture. I opened up my bitmap in DirectX Texture Tool, added an alpha map, and there was no change in the results. If I set a color's alpha to 0, it just turns black, but you can still see the outlines of the texture overlapping with other textures instead of being invisible. Does anyone have any suggestions on how to solve this?
  10. Is this a Good site to learn DirectX

    I really enjoyed those tutorials. As far as translating them goes, you would first need to research all of the small differences between unmanaged D3D and SlimDX. This could prove to be a long and tedious task, even though the designs are incredibly similar, as InvalidPointer pointed out. If you don't understand how it works in C++ and don't have the time to learn it, the only other option would be to buy conversion software. A quick Google search gave me this link: [url="http://tangiblesoftwaresolutions.com/"]http://tangiblesoftwaresolutions.com/[/url] But keep in mind that there is no universal solution, and you would (at least) still need to know the differences between managed and unmanaged code in order to get the translated code to work as intended. Best of luck!
  11. Efficient 2D sprite designs

    Okay, so I've figured out the concepts of instancing and have it set up nicely to allow for changes to position and texture coordinates per instance. This is nice because the geometry of a quad never changes, but the location of the quad and the animation frame of the texture do. After a quick stress test, this design seems to beat the rest hands down (I can render over 10,000 sprites at 100-300 FPS, windowed!). Now, can anybody tell me the best way to manage the instance buffer when adding/removing elements at runtime? Example: sprite #3 (out of 6 total) dies and no longer needs rendering. Should we:

    a.) Move elements #4, #5, and #6 down to become #3, #4, and #5, and shrink the buffer to 5 elements? (Sounds expensive if the buffer is huge.)
    b.) Keep a flag telling the shader whether or not to render each instance, and let newly created sprites overwrite any slot whose flag is set?
    c.) ???

    I'd really appreciate people's thoughts on this!
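    (For reference, a minimal sketch of the per-instance layout this setup implies; the semantic names, formats, and liveCount are illustrative, not the actual code:)
[code]
// Slot 0 feeds the shared unit quad; slot 1 advances once per instance.
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D10_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA,   0 },
    // per-instance: world position and animation-frame UV offset
    { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0,  D3D10_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXCOORD", 2, DXGI_FORMAT_R32G32_FLOAT,    1, 12, D3D10_INPUT_PER_INSTANCE_DATA, 1 },
};

// One draw call renders liveCount quads (4 vertices each as a triangle strip).
device->DrawInstanced(4, liveCount, 0, 0);
[/code]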
  12. Efficient 2D sprite designs

    I haven't tampered with instancing yet, but another member once suggested the same. I understand that it allows for a small number of changes to each copy of the original? If that's the case, does it allow for a change in position as well as texture coordinates? I ask because each sprite needs its own location and animation frame within a given texture.
  13. Efficient 2D sprite designs

    Okay, I know this has been rehashed several times in the past (with unknown accuracy), but I can't help but ask for a more updated answer...

    [u]Options[/u]:
    1.) [i]ID3DXSprite[/i]. Supposedly fast, and definitely easy to use (but scary, since I have no idea how exactly it handles everything behind the scenes, and it holds a few limitations).
    2.) [i]Textured quads using separate triangle-strip vertex buffers[/i] (this is what I'm currently using; it works well, at least in small tests).
    3.) [i]Textured quads using a single triangle-list vertex buffer[/i] (uses more vertices per quad, but can manage several separate quads without "connecting" them).
    4.) ???

    Which is the most optimal in terms of performance? My goal is to keep the frame rate at or above 60 FPS while rendering up to 1,000 (possibly more) sprites per frame, so I can't settle for anything less than the most optimal option. Also, if you have any benchmark results for each of these designs, I'd very much like to know them.
  14. Efficient 2D sprite designs

    EDIT: Accidental double post. Please delete this one, thanks.
  15. Texturing a cube built using indices?

    Strangely enough, after getting rid of the index buffer and feeding a straightforward triangle list into my vertex buffer, my texture no longer shows up properly. I'm only rendering a single triangle, and I've tried [i]every[/i] combination of texture coords to make sure that I'm not crazy. Here's the texture: [url="http://alkaspace.com/is.php?i=136756&img=dirt.jpg"]http://alkaspace.com...56&img=dirt.jpg[/url] And here's how it's coming out: [url="http://alkaspace.com/is.php?i=136759&img=TextureIssu.jpg"]http://alkaspace.com...TextureIssu.jpg[/url] When I change the texture coordinates for each vertex, the only difference it makes is the "direction" of the swipes from the texture. This is insane; I've spent an entire week just writing a textured cube class. Legacy DirectX versions were much less bothersome. And before I changed it to include indices, it was working almost 100%. EDIT: Turns out that the Color parameter was the culprit. Once I took it out of the layout, the texture rendered just fine. But why? I thought it was possible to switch the technique to use texture coordinates and a sampler instead of a color?
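    (A plausible explanation, offered as a guess: the input layout has to match both the vertex struct's byte offsets and the vertex shader's input signature. If a COLOR element is declared with the wrong offset, every element after it, including TEXCOORD, reads garbage. A sketch of a matched struct/layout pair:)
[code]
// The struct, the layout offsets, and the shader input signature must all agree.
struct Vertex
{
    float pos[3];    // POSITION : bytes 0-11
    float color[4];  // COLOR    : bytes 12-27
    float uv[2];     // TEXCOORD : bytes 28-35
};

D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 28, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};
[/code]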