
DX11 Question about WIC Pixel format conversions


I was wondering if someone could explain something to me.

I'm working on using the Windows WIC APIs to load textures for DirectX 11. Sometimes a WIC pixel format has no directly matching DXGI format, and in those cases the original WIC pixel format is first converted to another WIC pixel format that does map directly to a DXGI format. Doing the conversion itself is easy, but I don't understand the reasoning behind two of the conversions in Microsoft's guide.
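
For context, the conversion step I'm describing looks roughly like this in my loader (a minimal sketch, not the full WICTextureLoader logic; it assumes an IWICImagingFactory and a decoded IWICBitmapFrameDecode already exist, error handling is trimmed, and the 64bppRGBA target is just the case I'm asking about):

```cpp
#include <wincodec.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// If the decoded frame is not already in the WIC format we want, run it
// through IWICFormatConverter so CopyPixels produces DXGI-compatible data.
HRESULT ConvertFrameTo64bppRGBA(IWICImagingFactory* factory,
                                IWICBitmapFrameDecode* frame,
                                ComPtr<IWICBitmapSource>& source)
{
    WICPixelFormatGUID fmt = {};
    HRESULT hr = frame->GetPixelFormat(&fmt);
    if (FAILED(hr)) return hr;

    // Already DXGI-compatible (64bppRGBA <-> DXGI_FORMAT_R16G16B16A16_UNORM)?
    if (fmt == GUID_WICPixelFormat64bppRGBA)
    {
        source = frame;
        return S_OK;
    }

    // Otherwise convert, e.g. 40bpp / 80bpp CMYK+Alpha -> 64bpp RGBA.
    ComPtr<IWICFormatConverter> converter;
    hr = factory->CreateFormatConverter(&converter);
    if (FAILED(hr)) return hr;

    hr = converter->Initialize(frame, GUID_WICPixelFormat64bppRGBA,
                               WICBitmapDitherTypeNone, nullptr, 0.0,
                               WICBitmapPaletteTypeCustom);
    if (FAILED(hr)) return hr;

    source = converter; // CopyPixels on this yields 16-bit-per-channel RGBA
    return S_OK;
}
```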

Specifically, I was wondering why Microsoft's guide on this topic says that GUID_WICPixelFormat40bppCMYKAlpha should be converted to GUID_WICPixelFormat64bppRGBA, and that GUID_WICPixelFormat80bppCMYKAlpha should also be converted to GUID_WICPixelFormat64bppRGBA.
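
For clarity, these are the two entries I mean, written out as source -> target pairs (my paraphrase of the guide; the DXGI comments are my own assumption that GUID_WICPixelFormat64bppRGBA pairs with DXGI_FORMAT_R16G16B16A16_UNORM):

```cpp
#include <wincodec.h>

struct WICConvertEntry { WICPixelFormatGUID source; WICPixelFormatGUID target; };

const WICConvertEntry cmykAlphaEntries[] =
{
    { GUID_WICPixelFormat40bppCMYKAlpha, GUID_WICPixelFormat64bppRGBA }, // -> DXGI_FORMAT_R16G16B16A16_UNORM
    { GUID_WICPixelFormat80bppCMYKAlpha, GUID_WICPixelFormat64bppRGBA }, // -> DXGI_FORMAT_R16G16B16A16_UNORM
};
```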

In one case I would think that:
GUID_WICPixelFormat40bppCMYKAlpha would convert to GUID_WICPixelFormat32bppRGBA and GUID_WICPixelFormat80bppCMYKAlpha would convert to GUID_WICPixelFormat64bppRGBA, because the black channel (K) values would get folded / "swallowed" into the CMY channels.

In the second case I would think that:
GUID_WICPixelFormat40bppCMYKAlpha would convert to GUID_WICPixelFormat64bppRGBA and GUID_WICPixelFormat80bppCMYKAlpha would convert to GUID_WICPixelFormat128bppRGBA, because the black channel (K) bits would get redistributed among the remaining 4 channels (CMYA), and those "new bits" added to those channels would fit in the GUID_WICPixelFormat64bppRGBA and GUID_WICPixelFormat128bppRGBA formats. But seeing as there is no GUID_WICPixelFormat128bppRGBA format, this case is kind of null and void anyway.
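
Writing out the arithmetic behind those two cases (this is my own reasoning, not anything from Microsoft's guide):

```cpp
// Both CMYK+Alpha formats carry 5 channels (C, M, Y, K, A).
constexpr int bits40bppCMYKAlpha = 40; // 5 channels x 8 bits
constexpr int bits80bppCMYKAlpha = 80; // 5 channels x 16 bits

// Case 1: fold K into CMY and keep the per-channel depth for the 4 RGBA channels.
static_assert(4 * (bits40bppCMYKAlpha / 5) == 32, "would fit 32bppRGBA (R8G8B8A8)");
static_assert(4 * (bits80bppCMYKAlpha / 5) == 64, "would fit 64bppRGBA (R16G16B16A16)");

// Case 2: spread all 40 / 80 bits across the 4 remaining channels.
static_assert(bits40bppCMYKAlpha / 4 == 10, "10-bit channels -> next WIC channel size is 16 -> 64bppRGBA");
static_assert(bits80bppCMYKAlpha / 4 == 20, "20-bit channels -> would need a 128bpp RGBA format");
```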

I basically do not understand why Microsoft says both GUID_WICPixelFormat40bppCMYKAlpha and GUID_WICPixelFormat80bppCMYKAlpha should end up converted to GUID_WICPixelFormat64bppRGBA.

 

Edited by noodleBowl
