  1. The maximum size of a 3D texture on D3D11 hardware is 2048 (on each axis, so the total maximum resolution is 2048×2048×2048). So yes, it is supported on virtually every piece of hardware out there. Performance-wise there shouldn't be a difference compared to a 2D texture.
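A quick back-of-the-envelope check on why the API cap is rarely the practical limit (the 2048 cap matches D3D11's documented `D3D11_REQ_TEXTURE3D_U_V_OR_W_DIMENSION`; the helper function below is just illustrative arithmetic):

```cpp
#include <cassert>
#include <cstdint>

// Bytes needed for a cubic 3D texture of 'dim' texels per axis.
constexpr std::uint64_t Texture3DBytes(std::uint64_t dim,
                                       std::uint64_t bytesPerTexel) {
    return dim * dim * dim * bytesPerTexel;
}

// A full 2048^3 RGBA8 volume would need 32 GiB of VRAM, so memory
// runs out long before the dimension cap does.
static_assert(Texture3DBytes(2048, 4) == 32ull * 1024 * 1024 * 1024,
              "2048^3 RGBA8 is 32 GiB");
```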
  2. I think it depends on the type of job you wish to apply for. Wanna be a gameplay programmer? Do some really cool gameplay mechanics (like Assassin's Creed-style parkour, Spider-Man-style flying, a third-person shooter cover system, ...). Wanna be an AI programmer? Do an AI-intensive project (an RTS?). Wanna do tool programming? Make a simple game but with a 'full' (in-game) editor. Wanna be a graphics programmer? Drop the gameplay and make a tech demo.
  3. You can also provide an initial layout (the initialLayout member of VkImageCreateInfo), but it has to be VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED.
  4. InterlockedExchangeSubtract is actually a utility function built on InterlockedExchangeAdd. It was only added a couple of years ago, so until then you simply had to pass a negative value.
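The same trick works with any atomic add. Here is a portable sketch using `std::atomic` instead of the Win32 intrinsics (the function name is made up for illustration):

```cpp
#include <atomic>

// Sketch: a "subtract" built on an atomic add with a negated operand,
// mirroring how InterlockedExchangeSubtract can be implemented via
// InterlockedExchangeAdd.
long ExchangeSubtract(std::atomic<long>& target, long value) {
    // fetch_add returns the previous value, just like
    // InterlockedExchangeAdd does.
    return target.fetch_add(-value);
}
```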
  5. Texture/Buffer persistence in VRAM

    Frame buffers will most likely remain in VRAM, but you would have to use specific tools/APIs to be sure. It all depends on the amount of memory you use: you can have hundreds of frame buffers on the GPU as long as the memory they occupy isn't needed for other resources/applications. (It is a good idea to allocate large resources, like framebuffers, as soon as possible to prevent eviction.) Yes, you can 'freely' copy resources (buffers, textures, ...) on the GPU. Note the quotes: the details are all API dependent. For example, some APIs have the constraint that the textures need to have the same dimensions and a compatible format, while other APIs are pretty much memcpy style (very powerful, but possibly headache inducing :) )
  6. Depending on the API you use, you should try a single memory buffer combined with a specific allocator. I recently had great success using a buffer of about 512 MB and a buddy allocator. That 512 MB is the highest setting (medium was 256 MB, low 128 MB), and at a higher level I made sure that you never need more memory (i.e. bias the culling/LODs etc.). The cool thing is that you always know the exact amount of memory needed and only have a single allocation at loading time. It does depend on your graphics API, as not all of them allow you to use a 'random' memory address as an IB/VB/...
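To make the buddy-allocator idea concrete, here is a minimal sketch that manages byte offsets into one big pre-allocated buffer. It is a simplified illustration (class and parameter names are made up), not the allocator from the post:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal buddy allocator over a single 2^maxOrder-byte region.
// Allocations are rounded up to power-of-two block sizes; freed
// blocks are coalesced with their "buddy" when both halves are free.
class BuddyAllocator {
public:
    static constexpr std::size_t kInvalid = static_cast<std::size_t>(-1);

    BuddyAllocator(std::uint32_t minOrder, std::uint32_t maxOrder)
        : minOrder_(minOrder), freeLists_(maxOrder + 1) {
        freeLists_[maxOrder].push_back(0);  // one block spans everything
    }

    // Returns a byte offset into the backing buffer, or kInvalid.
    std::size_t Allocate(std::size_t bytes) {
        const std::uint32_t order = OrderFor(bytes);
        std::uint32_t o = order;
        while (o < freeLists_.size() && freeLists_[o].empty()) ++o;
        if (o >= freeLists_.size()) return kInvalid;
        std::size_t offset = freeLists_[o].back();
        freeLists_[o].pop_back();
        // Split down to the requested size, keeping the lower half and
        // pushing the upper half (the buddy) onto the free lists.
        while (o > order) {
            --o;
            freeLists_[o].push_back(offset + (std::size_t(1) << o));
        }
        return offset;
    }

    void Free(std::size_t offset, std::size_t bytes) {
        std::uint32_t order = OrderFor(bytes);
        // Coalesce upward while the buddy block is also free.
        while (order + 1 < freeLists_.size()) {
            const std::size_t buddy = offset ^ (std::size_t(1) << order);
            auto& list = freeLists_[order];
            auto it = std::find(list.begin(), list.end(), buddy);
            if (it == list.end()) break;
            list.erase(it);
            offset = std::min(offset, buddy);
            ++order;
        }
        freeLists_[order].push_back(offset);
    }

private:
    std::uint32_t OrderFor(std::size_t bytes) const {
        std::uint32_t order = minOrder_;
        while ((std::size_t(1) << order) < bytes) ++order;
        return order;
    }

    std::uint32_t minOrder_;
    std::vector<std::vector<std::size_t>> freeLists_;  // one list per order
};
```

The allocator hands out offsets rather than pointers, which is exactly what you need when suballocating a GPU buffer: the offset goes straight into the bind call.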
  7. Rows in glTexSubImage2D?

    My bet is that one (or more) of the pixel storage modes is invalid (as in, not valid for the data you supply).
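The pixel storage modes (e.g. `GL_UNPACK_ROW_LENGTH`) tell GL how your client-side rows are laid out. Conceptually, the upload is a row-by-row copy like the sketch below (plain C++, not GL code; the function name is illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Conceptual model of GL_UNPACK_ROW_LENGTH: when uploading a w*h
// sub-rectangle, each source row starts rowLengthPixels * bytesPerPixel
// bytes after the previous one, which may be wider than the rectangle.
// If rowLengthPixels (or the unpack alignment) doesn't match your data,
// the rows are read from the wrong offsets.
void CopySubImage(const std::uint8_t* src, std::size_t rowLengthPixels,
                  std::size_t bytesPerPixel, std::size_t w, std::size_t h,
                  std::uint8_t* dst) {
    const std::size_t srcPitch = rowLengthPixels * bytesPerPixel;
    const std::size_t dstPitch = w * bytesPerPixel;
    for (std::size_t y = 0; y < h; ++y)
        std::memcpy(dst + y * dstPitch, src + y * srcPitch, dstPitch);
}
```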
  8. How do I handle transparency

    It is not a bad idea at all; it is how most implementations work. You simply keep an array (or any data structure you think is better) for each render type: one for opaque, one for transparent, one for a custom fancy rendering effect (skin?). It is both simple to implement and efficient to traverse.
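A minimal sketch of those per-type buckets (the `Renderable` fields and sort orders are illustrative assumptions: opaque front-to-back for early-z, transparent back-to-front for correct blending):

```cpp
#include <algorithm>
#include <vector>

struct Renderable {
    float distanceToCamera;
    bool transparent;
    int id;
};

struct Buckets {
    std::vector<Renderable> opaque;
    std::vector<Renderable> transparent;
};

Buckets BuildBuckets(const std::vector<Renderable>& scene) {
    Buckets b;
    for (const Renderable& r : scene)
        (r.transparent ? b.transparent : b.opaque).push_back(r);
    // Opaque: front-to-back to maximize early-z rejection.
    std::sort(b.opaque.begin(), b.opaque.end(),
              [](const Renderable& a, const Renderable& c) {
                  return a.distanceToCamera < c.distanceToCamera;
              });
    // Transparent: back-to-front so alpha blending composites correctly.
    std::sort(b.transparent.begin(), b.transparent.end(),
              [](const Renderable& a, const Renderable& c) {
                  return a.distanceToCamera > c.distanceToCamera;
              });
    return b;
}
```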
  9. Yes, when switching a resource between input and output of the pipeline, you have to manually unset the other state. Doing so inserts some barriers to guarantee correct results; for example, it will make sure that the data is written before it is read again (in the case of an RT->SRV transition). Note that this guarantee is part of the D3D11 driver; in newer APIs you have to do this explicitly.
  10. Typically this is solved with an abstract device interface. Create an IDevice class that has virtual functions to create/destroy ITexture/IBuffer/IShader/... objects. Each rendering API can implement them as it sees fit, while the high-level code works on abstract handles. The renderer will also work on those abstract handles and only do the translation to API-specific resources when actually submitting the commands to the specific API. Note that the IDevice implementation can still use some sort of buffer manager (for example to provide suballocations into larger buffers), as long as it returns an abstract interface.
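A bare-bones sketch of that interface (the class names beyond IDevice/ITexture, and the "null" backend used here for demonstration, are made up):

```cpp
#include <memory>

// Abstract handle the high-level code works with.
class ITexture {
public:
    virtual ~ITexture() = default;
    virtual int Width() const = 0;
    virtual int Height() const = 0;
};

// Abstract device; one concrete implementation per rendering API
// (e.g. a D3D11 device, a Vulkan device, ...).
class IDevice {
public:
    virtual ~IDevice() = default;
    virtual std::unique_ptr<ITexture> CreateTexture(int w, int h) = 0;
};

// A "null" backend: handy for unit tests and headless tools, and a
// template for what a real API backend would implement.
class NullTexture : public ITexture {
public:
    NullTexture(int w, int h) : w_(w), h_(h) {}
    int Width() const override { return w_; }
    int Height() const override { return h_; }
private:
    int w_, h_;
};

class NullDevice : public IDevice {
public:
    std::unique_ptr<ITexture> CreateTexture(int w, int h) override {
        return std::make_unique<NullTexture>(w, h);
    }
};
```

The renderer only ever sees `IDevice*` and `ITexture*`; the translation to `ID3D11Texture2D`, `VkImage`, etc. lives entirely inside the backend.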
  11. GDC Europe is no more

    Part of a mail that was sent to previous attendees (although 2016 was the first one in three years I didn't attend).
  12. You store the quadtree data in the mips of a texture. As your mips halve in size on each level, this maps nicely to the quadtree design (as in, each node has 4 child nodes). For specific details you should just look at the code, as the entire source seems to be available.
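The mip-to-quadtree index mapping can be sketched in a few lines (the `Node` struct is illustrative): a node at texel (x, y) in mip level L has its four children at (2x+dx, 2y+dy) in the next finer level, because each mip doubles the resolution of the one above it.

```cpp
// Quadtree node addressed as a texel in a given mip level.
struct Node { int x, y, level; };

// dx, dy in {0, 1} select one of the 4 children in the finer mip.
Node Child(const Node& n, int dx, int dy) {
    return Node{n.x * 2 + dx, n.y * 2 + dy, n.level - 1};
}

// The parent lives in the next coarser mip at half the coordinates.
Node Parent(const Node& n) {
    return Node{n.x / 2, n.y / 2, n.level + 1};
}
```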
  13. I have not found time to read it myself, but I heard 'Computer Graphics: Principles and Practice (3rd Edition)' is a good introduction to graphics programming.
  14. IMHO Maya LT is actually pretty decently priced at $30/month (plus you get a Stingray subscription, so you can build experience in that as well).
  15. Mipmap management

    For example, in D3D11 you have to make sure that your ID3D11SamplerState is created correctly: setting both MinLOD and MaxLOD to zero will always sample level 0 (as in, do not use mipmaps). The filter on your sampler also influences the mip level calculations (bilinear vs. trilinear, for example). Inside your shader you can also override the default 'automatic' mip map calculation by using different sample functions (D3D11 syntax):
    - SampleLevel -> sample a specific level (and that level is a float, so you can have nice interpolation)
    - SampleGrad -> sample with specific derivatives (which are used to control the 'default' mip calculation)
    (For other functions, see MSDN.)
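To see what those derivatives feed into, here is a simplified model of automatic mip selection (standard C++, not shader code; this is the basic isotropic case and ignores anisotropic filtering and other refinements):

```cpp
#include <algorithm>
#include <cmath>

// Simplified model of automatic LOD selection: the level of detail is
// log2 of the larger per-pixel footprint measured in texels. SampleGrad
// lets the shader supply these derivatives explicitly; the hardware's
// sampler then clamps the result to [MinLOD, MaxLOD].
float ComputeLod(float ddxTexels, float ddyTexels) {
    float footprint = std::max(std::fabs(ddxTexels), std::fabs(ddyTexels));
    return std::log2(footprint);
}
```

With a footprint of one texel per pixel you get level 0 (full resolution); a footprint of four texels per pixel selects level 2, where one texel covers roughly that area.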