About mattropo1is

  1. eglChooseConfig() allows you to specify a set of config attributes for color depths (EGL_RED_SIZE, etc.) for your frame buffer. However, there doesn't seem to be a way to specify the component ordering, such as ARGB, RGBA, XRGB, or RGBX, as is possible in DirectX. You seem to be able to do this for textures, but not during initial frame-buffer context setup for OpenGL ES 3+. Is that correct? Is there no way to specify XRGB vs. ARGB explicitly? I know I can practically do this by creating an RGBA format and then always assuming A is 255 in shaders, but my goal would be to get an XRGB format.
  2. Mapping a DXT1 compressed texture

    Thanks guys. Yes, I can confirm that if you just lock DXT-compressed textures, the data you get back is compressed. As for D3DXLoadSurfaceFromSurface(), you might also find (as I did) that D3DX11LoadTextureFromTexture() actually works better and is easier to use. Yes, it is deprecated for Windows 8, but it was fine for our purposes. However, there is a new Microsoft DirectXTex library for doing these operations that is compatible with Windows 8. Thanks for the help!
  3. Hi guys. I have some textures that were created as DXT1-compressed. I'd like to map them and get the data back out. To that end, I create a staging buffer, copy the original resource into it, then map the staging buffer. This works just fine for non-DXT-compressed textures, and the data all looks correct with the test textures I used. Questions: 1. When I map a DXT1-compressed texture, am I seeing compressed bits? 2. If they are not the compressed bits, what format are they in (RGB8)? 3. If they ARE compressed bits: 3a. Can I give the staging buffer they are mapped to a different format, such as RGB8, to get the uncompressed bits (i.e. use it as a decompressor)? 3b. If I cannot do 3a, are there software decompressors in a library that I might use to get the raw bits back out? Thanks in advance.
  4. Well, after much investigation and experimentation, it comes down to much of what has been said.
    - The internal DX texture functions do NOT seem to understand or honor sRGB/gamma properly in png/jpg/dds files. They pretty much load everything as non-sRGB into UNORM/standard RGB internal formats.
    - The nVidia Photoshop DDS plugin doesn't seem to mark things as sRGB properly. Sadly, this doesn't really matter, as it's probably not honored anyway.
    - You will have to have some sort of metadata file, or a policy, that tells you which files need to be created using an sRGB texture surface/buffer.
    - Make sure your content-generation tools are outputting sRGB. Adobe has a color-space setting for this. Some tools do NOT have sRGB output. Make sure you can get your favorite content into the sRGB format you want before you make the jump.
    - If you wish to use the DX texture-loading routines, you need to use the sRGB load flags on D3DXGetImageInfoFromFile() (i.e. D3DX11_IMAGE_LOAD_INFO Info.Filter = D3DX11_FILTER_NONE | D3DX11_FILTER_SRGB_IN | D3DX11_FILTER_SRGB_OUT). This performs the correct interpretation of the data.
    - Both your back buffer AND the textures need to be a DXGI_FORMAT_xxxx_SRGB format type.
    - There are very few DXGI_FORMAT_xxxx_SRGB formats. Only 6, to be exact, and 4 of them are BCx compressed types. If you're not very careful, odds are very good you'll pay for a painful load-time conversion, because most of the old formats are ARGB but the SRGB formats are RGBA. If you're looking for single-channel formats, you'll be sorely disappointed.
    - It's useful at first to change your pixel shader to do a pow(2.2) on each pixel (using the old textures) to see what you SHOULD be getting. Then convert the textures, make the format and loading changes, and compare the results. They should look nearly the same.

    It certainly does appear that at some point you'll likely want to come up with your own texture format, or get by with some extra metadata file, to specify whether the image is sRGB (diffuse channels, etc.) or not (normal maps, data maps, etc.). Thanks for the help.
  5. I have Photoshop CS5, and used the nVidia DDS plugin to save the file. I had Photoshop set to sRGB space, and when I went to save using the nVidia DDS plugin, it didn't show any obvious checkboxes to create an sRGB DDS texture. I saved it, and D3DX11CreateShaderResourceViewFromFile still created the non-sRGB UNORM texture. Have you gotten D3DX11CreateShaderResourceViewFromFile to properly return an sRGB texture? What was your tool/process for creating the sRGB texture? M
  6. Thanks. Yes, this does get the desired result for this file, but the problem is that this code *forces* the jpg into sRGB space (it does look correct). What I want is for D3DX11CreateShaderResourceViewFromFile to create the *correct* resource view based on what the texture file *says* it is. If the texture is not sRGB (i.e. a normal map, albedo data, etc.), then it should load as RGBA8_UNORM. If it IS saved as sRGB, then I'd want the RGBA8_UNORM_SRGB version... Any ideas on that one?
  7. I've got a block of code that's loading what I'm pretty sure is an sRGB jpg (the file properties say it is encoded in the sRGB color format, and I selected the sRGB profile when saving it in Photoshop) that I wish to use as a texture.

    hResult = D3DX11CreateShaderResourceViewFromFile( pd3dDevice, pFilename, NULL, NULL, &pTextureResourceView, NULL );
    pTextureResourceView->GetDesc( &desc );
    desc.Format;

    When I run the above code on my texture, it always loads as DXGI_FORMAT_R8G8B8A8_UNORM, not DXGI_FORMAT_R8G8B8A8_UNORM_SRGB as I would expect. Further, when I use the texture, it's washed out (too bright), as if it's been loaded as linear RGB. If I do a pow(sampleColor, 2.2) in the pixel shader, then the color looks right. Am I missing something about loading the texture correctly to get it recognized as sRGB?
  8. In almost every other case, I would say you're right. But the problem is that this is an infrastructure library that many people will be using. I can imagine people using the CRT leak-detection tools, seeing those two 'leaks', and assuming I'd been too lazy to clean up a trivial memory leak. It's a professionalism issue, IMHO. My other motivation is that if you don't REALLY know what is causing a leak, you could be masking much bigger problems later. Because I root-caused it, changed the code, and fixed it legitimately, I can NOW say with 100% confidence that it wasn't a real bug, and have left it as-is. Before, when I didn't know 100%, I was acting on faith. And in my experience, faith tends to blow up on you at 1am the night of code freeze. Further, the MS documentation EXPLICITLY calls out the STL containers as having had this problem fixed. This is one case where it's clearly not fixed; I'll be submitting a bug tomorrow. That, and it's just good nerdy fun to sort these things out. Now I can run all the configurations of my program without a peep out of the leak reporter and say 'it doesn't leak' without the caveat 'well, besides this leak that's not really a leak'. Thanks guys for all the help!
  9. SOLUTION!!! Wow. So, I started searching my code for static variables to see if I wasn't leaving one or two around, paying very close attention to any STL objects I was using (primarily std::vector and std::string). In searching for the keyword static, I found this:

    class foo { private: static bar m_myBar; };

    Looking at bar, I see among its many members:

    class bar { private: std::string m_name; std::string m_value; };

    BLAMO! I changed foo's m_myBar to:

    static bar* m_myBar;

    then created a global initializer to new/delete the m_myBar object at startup/shutdown. Leak goes away! I double-verified this was the culprit by initializing m_name and m_value to actual strings, and sure enough, those strings are dumped by _CrtDumpMemoryLeaks(). The reason they were 8/16 bytes is that an 'empty' std::string contains a pointer and an int length: 8 = sizeof(int) + sizeof(void*) on x86, and 16 (with alignment padding) on x64. Long story short: *static* STL container objects are NOT fixed as the MS doc indicates, and DO show up as leaks from _CrtDumpMemoryLeaks(). Time to file a bug report. Holy cow. And the crowd goes wild.
  10. According to the Microsoft docs, it's:

    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );

    However, I don't get a report at exit as the docs say I should... Hmmmm. MORE data. I was checking the Microsoft docs (http://msdn.microsof...y/x98tx3cf.aspx) and see that you ALSO need to #define _CRTDBG_MAP_ALLOC_NEW:

    #define _CRTDBG_MAP_ALLOC
    #define _CRTDBG_MAP_ALLOC_NEW
    #include <stdlib.h>
    #include <crtdbg.h>
    #ifdef _DEBUG
    #ifndef DBG_NEW
    #define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ )
    #define new DBG_NEW
    #endif
    #endif // _DEBUG

    Adding THIS gives me my answer:

    Detected memory leaks!
    Dumping objects ->
    c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\crtdbg.h(1116) : {268} normal block at 0x0000000001E32A20, 16 bytes long. Data: < ? > F0 A1 B5 3F 01 00 00 00 00 00 00 00 00 00 00 00
    c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\crtdbg.h(1116) : {267} normal block at 0x0000000001E2FF80, 16 bytes long. Data: < ? > C0 A1 B5 3F 01 00 00 00 00 00 00 00 00 00 00 00

    which is this code in crtdbg.h:

    _Ret_bytecap_(_Size) inline void * __CRTDECL operator new(size_t _Size) { return ::operator new(_Size, _NORMAL_BLOCK, __FILE__, __LINE__); }

    However, many people report that crtdbg.h gets blamed for an allocation that isn't its own when the configuration is wrong, so I'm a little skeptical. Instead, I set a memory checkpoint at the beginning and end of my program. It reports that I'm not leaking any memory:

    Deleting: 0 bytes in 0 Free Blocks. 0 bytes in 0 Normal Blocks. 12448 bytes in 7 CRT Blocks. 0 bytes in 0 Ignore Blocks. 0 bytes in 0 Client Blocks.

    One also sees this in the notes: "In some cases, _CrtDumpMemoryLeaks can give false indications of memory leaks. This might occur if you use a library that marks internal allocations as _NORMAL_BLOCKs instead of _CRT_BLOCKs or _CLIENT_BLOCKs. Older versions of the Standard Template Library, earlier than Visual Studio .NET, caused _CrtDumpMemoryLeaks to report such false positives, but this has been fixed in recent releases." I believe I DID see this before when I was using static std::string objects. They were reported as leaking (so maybe Microsoft HASN'T completely fixed it?). However, I can't find any more static std::strings anywhere. Perhaps it's another STL container object that I'm using(?)
  11. Fascinatingly enough, I get the leak with this code:

    int WINAPI wWinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow )
    {
        _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF); // can comment this line in/out with the same results
        _CrtDumpMemoryLeaks();
        return 0;
    }

    This tells me it's happening before the *start* of the program. The size of the 'leak' depends on whether it's an x86 or x64 build, but in both cases Visual Leak Detector detects no leaks. I added one intentionally to make sure it was working, and it reported that leak correctly.

    x86 build:
    Detected memory leaks!
    Dumping objects ->
    {270} normal block at 0x0093E270, 8 bytes long. Data: < E > C8 9B 45 01 00 00 00 00
    {269} normal block at 0x0093E228, 8 bytes long. Data: < E > A8 9B 45 01 00 00 00 00

    x64 build:
    Detected memory leaks!
    Dumping objects ->
    {268} normal block at 0x0000000001E02A20, 16 bytes long. Data: < ? > F0 A1 EB 3F 01 00 00 00 00 00 00 00 00 00 00 00
    {267} normal block at 0x0000000001DFFF80, 16 bytes long. Data: < ? > C0 A1 EB 3F 01 00 00 00 00 00 00 00 00 00 00 00
    Object dump complete.

    Furthermore, when I add back in the int* p = new int 'leak', its address is usually within about +0x80 bytes of allocation 268, which may imply it's getting allocated before my program starts...
  12. I'm pretty sure #2 isn't happening. I wrote all the code originally and never used any new/malloc overloading. For the most part, I'm using standard C calls and not linking to other libraries. I might do more checking on #1 (there are a few more places I can add the code), and maybe search for a malloc(), but I'm pretty sure I only use new.
  13. So, tried Visual Leak Detector. Nice product by the way. It actually reports NO leaks (unless I add back in the one that I intentionally added). Other ideas?
  14. Hi guys. I am cleaning up code and have gotten my leaks down to only two 16-byte leaks (x64 build). Visual Studio's CRT memory-leak detector gives me the allocation numbers, and I've had great luck cleaning up all the other leaks using _CrtSetBreakAlloc() with this method. However, with these two leaks, it never hits the breakpoint and just starts up. So, right in my main, I do this:

    int WINAPI wWinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow )
    {
        int* pi = new int();
    }

    which obviously leaks, but it reports that leak as allocation #261. However, the reported leaks are allocations #259 and #260, which is before main starts. Looking at the dumped data: I previously had 5 additional leaks of static global std::string objects that showed up as allocated before main started. Since I had the memory dump and could read the strings, I was able to find them. But with these two, I can't get a line number, and they appear to be holding pointer data (possibly?). I'm using #define DBG_NEW new ( _NORMAL_BLOCK , __FILE__ , __LINE__ ) to retrieve file/line numbers in the output dump, but am not getting them here. My guess is that static allocations don't go through my instrumented new()? Is there some cool trick of the CRT memory functionality that could report where these are happening? Am I just not instrumenting the right file(s) to report the line number (I have a lot of files in this solution), or do static allocations like this not get line numbers? Thanks in advance.
  15. Backfacing lines

    Duh - surprised I didn't think of this. Great answer. I'll give it a try when I get a chance, as the points of the line list have normals that are VERY easy to calculate.