About Vertex333

Community Reputation: 122 Neutral
  1. The problem is that your program starts with the default graphics chip, which is the integrated Intel one. Have you heard about NVIDIA Optimus? Simply start your application on the dedicated NVIDIA card, or go into the NVIDIA control panel and add your exe to the high-performance profile so your program always starts on the NVIDIA card. There may also be a way to explicitly force the NVIDIA card from code (an NVIDIA extension/SDK or so?).
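One way to request the discrete GPU from code rather than via the control panel is to export the well-known driver hint globals. A minimal sketch; the `dllexport` attribute is MSVC-specific, so it is guarded here:

```cpp
// Exported globals that the NVIDIA Optimus / AMD PowerXpress drivers look
// for at process start to select the discrete GPU. The dllexport attribute
// only exists on MSVC; on other compilers the plain globals are harmless.
#if defined(_MSC_VER)
#define GPU_EXPORT __declspec(dllexport)
#else
#define GPU_EXPORT
#endif

extern "C" {
    GPU_EXPORT unsigned long NvOptimusEnablement = 0x00000001;   // NVIDIA hint
    GPU_EXPORT int AmdPowerXpressRequestHighPerformance = 1;     // AMD hint
}
```

These must live in the executable itself (not a DLL) for the drivers to see them.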
  2. Is it really necessary to have the texture in one large bitmap? Why not chunk it into parts of a size you can handle well (from a performance view)? When drawing/using the bitmap you simply use the chunks. I don't know a way to do this with WIC directly, maybe via a special copy (telling WIC about the big image, which isn't loaded at that time, then copying a part of it where WIC has to decode/open it anyway)... but I don't think you can get WIC to a point where it doesn't load the whole image into memory; think about how e.g. JPEG works... or are we still talking about raw bitmaps? If you are only using raw bitmaps there is an option: you can bypass the file loading of WIC and fill your WIC image manually (later used for D2D, where you may copy it to a D2D hardware render target for better performance, since everything going through WIC with D2D uses the D2D software rasterizer). Manually loading the WIC image can be done via the stream COM interface (at least); you simply have to load the bitmap yourself. That shouldn't be a problem: you read the file yourself (look up the bitmap file format), then copy the pixels (only the part that you want, which is your primary requirement) into your WIC image (maybe you should lock the image instead of using the stream, which may be better) and finally use it with D2D. When using the stream you have to add the correct bitmap header manually (also easily possible, but some additional work); with a lock it should be straightforward, I think. Sorry if my thoughts are a little confusing, just some ideas I got while writing. Tell us what you have chosen and we may be able to help you with the implementation. Vertex
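The "copy only the part you want" step can be sketched without any WIC calls: extract a sub-rectangle from a raw 32bpp pixel buffer, and hand only that chunk to WIC/D2D. This assumes top-down rows with no padding; real .bmp files are usually bottom-up with rows padded to 4 bytes, which the file loader would have to account for.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a w*h sub-rectangle at (x, y) out of a raw 32bpp (4 bytes/pixel)
// buffer that is srcWidth pixels wide. Only this chunk then needs to be
// pushed into a WIC bitmap (e.g. via Lock or the stream interface).
std::vector<uint8_t> CopyRect(const uint8_t* src, int srcWidth,
                              int x, int y, int w, int h)
{
    std::vector<uint8_t> out(static_cast<size_t>(w) * h * 4);
    for (int row = 0; row < h; ++row) {
        // Start of the requested span inside the source row.
        const uint8_t* srcRow =
            src + (static_cast<size_t>(y + row) * srcWidth + x) * 4;
        std::memcpy(&out[static_cast<size_t>(row) * w * 4],
                    srcRow, static_cast<size_t>(w) * 4);
    }
    return out;
}
```

With chunks like this, the huge file is only ever read region by region instead of decoded as a whole.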
  3. Thx again Erik! That sounds pretty nice already, but we are not allowed to bring up a message box in the application (except when we close the app): nothing is allowed to interrupt user interaction. The graphics are important, but the user should not have to interact for the graphics stuff. The application could simply run without any person being there for days (then the graphics don't need to be drawn, but we could not ask the user to do anything or block any part of the app). The idea with the time measurement is good, thx! Nevertheless, it is hard to choose the timeouts while requiring no user interaction or user dependency. Vertex
  4. Thx Erik! So you would never stop trying to recreate the device? The problem is that the app is a kind of user interface to a "control", so it is important to see graphics. A very long time without seeing anything isn't very user friendly; in other words, the user doesn't know what is happening. On the one hand we should not close our application, on the other hand the user should not sit in front of our app without seeing anything, and transitions/... that need device recreation should still work! Recreation is a very important part of our app (as described here). You can't totally prevent device losses/resets, so we should do our best. Hence I am very interested in how other applications/games handle this.
  5. Hi all! I have a 2D visualization app (D3D11 with Direct2D interop) that may get the device reset/lost or whatever, so I have to recreate my device and resources. The app is critical and should run as long as possible, or at least quit with an error message. The question is whether I should try to recreate the device for as long as possible (without any display to the user in the meantime), or stop after 1000 tries, or after a minute? What do other programs do (I don't mean Microsoft MSDN samples, but real shipped/professional applications/games)? I already fall back to WARP in software if no hardware is available or if recreation failed. As far as I know, a device reset can occur for the following reasons (some of them may be incorrect, and there are at least some more that I forgot): the driver hangs and gets recovered (never had this/couldn't reproduce it), driver update/installation (couldn't reproduce it), the adapter/device/driver gets removed (never had this), internal errors (had that, but don't ask me how to reproduce it), insufficient memory (especially common with WARP, so not too hard to reproduce), ... thx, Vertex
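The "stop after N tries or after a minute" policy can be sketched as a bounded retry loop with growing backoff. `tryRecreateDevice` here is a hypothetical callback standing in for the real D3D11 device/resource recreation; on failure the caller would fall back to WARP or quit with the error message:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Bounded device-recreation loop: retry with an exponentially growing delay
// (capped), then give up so the caller can fall back to WARP or exit.
bool RecreateWithRetry(const std::function<bool()>& tryRecreateDevice,
                       int maxAttempts = 10)
{
    auto delay = std::chrono::milliseconds(50);
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (tryRecreateDevice())
            return true;                        // device is usable again
        std::this_thread::sleep_for(delay);
        if (delay < std::chrono::seconds(2))    // cap the backoff
            delay *= 2;
    }
    return false;                               // caller: WARP fallback / quit
}
```

The backoff keeps the loop from hammering a driver that is mid-recovery, while the attempt cap bounds the total "blank screen" time the user can ever see.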
  6. Hi all! For D3D11: are DXGI calls also thread safe when I do not specify D3D11_CREATE_DEVICE_SINGLETHREADED for the D3D11 device? Or are DXGI calls like Present and ResizeBuffers (which are the two functions I have to worry about) simply thread safe or not thread safe regardless? Thx, Vertex!
  7. I solved it by synchronizing with a critical section and hence calling ResizeBuffers on the main thread just before WM_SIZE gets executed.
  8. Hi! I have a DXGI 1.1/D3D11 MFC app that has a message thread and a render thread. My problem is that DXGI always scales (bitmap-stretches) the content of the swap chain buffer when I change the size of the window (which happens on the message/main thread). Nevertheless, I have a 2D application where the content should not get scaled, hence I call ResizeBuffers for the swap chain buffer to get the content back to its original size (which happens on the render thread). Due to this behavior the content seems to flicker (it gets scaled by DXGI, and some hundred milliseconds later it gets scaled back by my ResizeBuffers call). Is there a way to prevent the automatic scaling of the swap chain buffer? We are only talking about windowed mode (if that is unclear). Ask if something is unclear or if there is too little information. Thx, Vertex
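The critical-section solution from post 7 can be sketched as a small hand-off object: the message thread records the new client size under a lock, and the render thread consumes it (and would call `swapChain->ResizeBuffers(...)` in the real code) before its next Present, so the resize runs on exactly one thread. A minimal sketch using `std::mutex` in place of a Win32 critical section:

```cpp
#include <mutex>
#include <optional>
#include <utility>

// Hands the latest WM_SIZE dimensions from the message thread to the
// render thread; only the render thread ever touches the swap chain.
class ResizeChannel {
public:
    // Message thread: called from the WM_SIZE handler.
    void OnWmSize(int w, int h) {
        std::lock_guard<std::mutex> lock(m_);
        pending_ = {w, h};
    }

    // Render thread: called once per frame before Present; if a size is
    // pending, the caller runs ResizeBuffers with it.
    std::optional<std::pair<int, int>> TakePending() {
        std::lock_guard<std::mutex> lock(m_);
        auto p = pending_;
        pending_.reset();
        return p;
    }

private:
    std::mutex m_;
    std::optional<std::pair<int, int>> pending_;
};
```

Because intermediate sizes are overwritten, a drag-resize collapses into one ResizeBuffers per rendered frame instead of one per WM_SIZE message.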
  9. Vertex333

    Direct2D - flipping bitmaps

    You could do that in different ways. You could create a bitmap brush: the brush contains your already-flipped bitmap, and you can directly use it to draw anywhere you can draw with a brush (the transformation should be a property on the CreateBitmapBrush call). Use that if a brush works for your further drawing. Or you could create a bitmap render target (http://msdn.microsof...6(v=VS.85).aspx): simply draw your bitmap flipped into this render target, afterwards "get" the bitmap (which is now flipped) and use it (don't forget that the bitmap depends on the bitmap render target). Use that if you really need a bitmap. A layer would be an additional option, but I think you would waste performance if you draw your image more than once; it may be one of the faster solutions to code, but it may have bad performance for every drawing operation (depends). You could create a WIC bitmap and render into it, which I can't recommend (it automatically defaults to software). In addition, you could also create a normal Direct3D 10.1 texture/DXGI surface, but I think you will go for D2D only, so that would be overkill for you. I think the first solution is the way to go; if you really need the bitmap interface itself, use the second solution. If you know what you are doing you could try the layer stuff; forget the rest. Please ask if you need more help with the implementation of one of these solutions. Me or someone else will help you for sure. Vertex
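For either of the first two options, the actual flip is just a 3x2 transform: mirror horizontally (scale x by -1) and translate right by the bitmap width so the image stays in place. A sketch of the matrix math, using the same row layout as D2D1_MATRIX_3X2_F:

```cpp
// 3x2 affine matrix, laid out like D2D1_MATRIX_3X2_F (_11.._32).
struct Mat3x2 { float m11, m12, m21, m22, dx, dy; };

// Horizontal flip of a bitmap of the given width:
// x' = width - x, y' = y.
Mat3x2 HorizontalFlip(float width) {
    return { -1.0f, 0.0f,     // x scales by -1
              0.0f, 1.0f,     // y is unchanged
             width, 0.0f };   // translate back into [0, width]
}

// Apply the transform to a point (row-vector convention, as D2D uses).
void Apply(const Mat3x2& m, float x, float y, float* ox, float* oy) {
    *ox = x * m.m11 + y * m.m21 + m.dx;
    *oy = x * m.m12 + y * m.m22 + m.dy;
}
```

The same matrix would be set as the bitmap brush's transform (option 1) or as the render-target transform before drawing into the bitmap render target (option 2).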
  10. It seems to me that DXGI_MODE_SCALING_CENTERED only works for full screen. Still searching for the correct way to resize a windowed DirectX app.
  11. I use an MDI child window that has a frame (with min/max/close buttons) and another child window in it. The second/inner child window is the one where I create the swap chain. My problem is that although I set DXGI_MODE_SCALING_CENTERED for the swap chain, it always resizes (on a pixel basis) as if DXGI_MODE_SCALING_STRETCHED were set. I immediately call ResizeBuffers to correctly resize it, but for the short period in between (the size change and my ResizeBuffers call) you can see a jumping effect (it first gets stretched on a pixel basis and afterwards correctly drawn at the new size). The drawn content always has the same size; the stretching effect might be useful if you wanted to stretch anyway, but DXGI should leave the content where it is, as the centered option says. Any suggestion what I have to do to get DXGI_MODE_SCALING_CENTERED working? The rest of the relevant parameters for CreateSwapChain:

      BufferDesc.RefreshRate.Numerator   = 0
      BufferDesc.RefreshRate.Denominator = 0
      BufferDesc.Format                  = DXGI_FORMAT_B8G8R8A8_UNORM
      BufferDesc.ScanlineOrdering        = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED
      BufferDesc.Scaling                 = DXGI_MODE_SCALING_CENTERED
      SampleDesc.Count                   = 1
      SampleDesc.Quality                 = 0
      BufferUsage                        = DXGI_USAGE_RENDER_TARGET_OUTPUT | DXGI_USAGE_BACK_BUFFER
      BufferCount                        = 1
      OutputWindow                       = m_hwnd
      Windowed                           = TRUE
      SwapEffect                         = DXGI_SWAP_EFFECT_DISCARD
      Flags                              = DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE

  The Present call looks like this:

      Present(0, DXGI_SWAP_EFFECT_DISCARD);

  ResizeBuffers looks like this:

      ResizeBuffers(1, width, height, DXGI_FORMAT_B8G8R8A8_UNORM, DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE);

  Some facts: DXGI 1.1, D3D11; tested on hardware, WARP and the reference device (it's DXGI anyway). thx, Vertex
  12. I just found some info that is new to me on the "Surface Sharing Between Windows Graphics APIs" MSDN page: http://msdn.microsof...y/ee913554.aspx (starting from "High-Level Overview of Helper"). There is an interface named ISurfaceQueue which can be created via CreateSurfaceQueue. It may be simpler/better than synchronized shared surfaces. Can somebody tell me where I can find it? Either I just can't find it (hopefully), or MSDN documented a future interface (Win8?). thx, Vertex
  13. Vertex333

    [Direct2D] Color keying?

    What bitmaps are you drawing? WIC-loaded (bmp, jpg, png) or manually drawn (lines, rectangles, ...)? You should look at the FillOpacityMask function as an alternative for drawing bitmaps. Maybe you can get a similar result with that.
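Another route, since Direct2D has no built-in color key: pre-process the pixels on the CPU before creating the D2D bitmap, turning the key color transparent. A sketch for BGRA data; with premultiplied alpha (D2D's usual mode) the color channels are zeroed along with alpha:

```cpp
#include <cstddef>
#include <cstdint>

// Make every pixel that matches the key color fully transparent, so D2D
// draws nothing there. Works on raw BGRA (4 bytes per pixel) data, e.g.
// the buffer about to be handed to CreateBitmap.
void ColorKeyToAlpha(uint8_t* bgra, size_t pixelCount,
                     uint8_t keyB, uint8_t keyG, uint8_t keyR)
{
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = bgra + i * 4;
        if (p[0] == keyB && p[1] == keyG && p[2] == keyR)
            p[0] = p[1] = p[2] = p[3] = 0;   // premultiplied: zero color too
    }
}
```

This is a one-time cost at load time, after which normal DrawBitmap calls respect the keyed-out areas.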
  14. So, should this additional pass really cost more CPU + GPU than using only the CPU (I mean, with instancing and the like it is a lot of work to do everything a second time on the CPU, from a programmer's view)? I myself think that it offloads work to the GPU, and you don't need to keep two code bases in sync if you change the vertex calculation/creation. In my case I have simple quads (only a few vertices) and lots of instances (matrices via buffer); calculating that on the CPU should cost more than doing a second pass on the GPU (not tested or implemented, just my thoughts). I am really asking myself why it is not common to do this on the GPU in an additional pass. Vertex
  15. Well, would it be possible and useful to use the vertex data and calculate the mouse-point intersection on the GPU (DirectCompute or a pixel shader, ...)? The intention is to save CPU power and avoid calculating vertex stuff a second time on the CPU for mouse selection when everything usually gets done in the vertex shader anyway. I think the large delay may be a problem (but maybe it can be reduced if the calculation gets done after every frame). Also, not only a single point gets calculated, but a field around the last mouse position (let's say 100x100); the next mouse position simply uses that field until the next frame updates the 100x100 matrix or we move out of the 100x100 area (maybe performance is good enough that we could calculate the whole screen resolution and simply upload only a small area to the CPU/RAM). What are your thoughts? If it is possible and useful, how would you implement it? Thx, Vertex
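The CPU side of the 100x100 idea reduces to computing the readback box around the last mouse position, clamped to the screen, which the real code would pass to something like CopySubresourceRegion. A minimal sketch of just that clamping logic:

```cpp
#include <algorithm>

// Region of the picking texture to copy back to the CPU.
struct Box { int left, top, right, bottom; };

// Center a size x size window (e.g. 100x100, as discussed above) on the
// mouse position, shifted as needed so it stays fully on screen whenever
// the screen is at least size x size.
Box PickingRegion(int mouseX, int mouseY, int size,
                  int screenW, int screenH)
{
    int half = size / 2;
    int left = std::clamp(mouseX - half, 0, std::max(0, screenW - size));
    int top  = std::clamp(mouseY - half, 0, std::max(0, screenH - size));
    return { left, top,
             std::min(left + size, screenW),
             std::min(top + size, screenH) };
}
```

As long as the mouse stays inside the last box, the cached data answers hit tests with no new GPU round-trip; only when it leaves the box does the next frame's readback need to move.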
