annoying false-positive intellisense error, macro replacement?

3 comments, last by Hodgman 7 years, 2 months ago

Hi Guys,

This thread may not be strictly about DirectX and XNA, but it is about GFX buffer management, so I guess most of you have probably encountered this before, which is why I'm asking here. If the site admins think this belongs in a different sub forum, please feel free to move it.

So my problem is maintaining a static 'list' of GPU RTs. I want a central place to write down the configuration of each RT (including render-irrelevant information like debug name, resize policy, debug viewability, etc.), so it is easy for me to add/remove/edit RTs.

I also want to use a simple container to hold them, but still be able to index them by debug name.

When I first started my project, there were only a few RTs, so I used naive separate arrays to manage them: a string array to store RT debug names, an enum to map indices to readable names, an array of descriptors for each RT, etc. But as the number of RTs grew, the naive way stopped working, since it got harder and harder to keep all these arrays consistent. So I switched to using a macro with a def header to handle all of this, like the following:

surfbuf_defs.h


#ifndef DEF_SURFBUF
#error "DEF_SURFBUF() undefined"
#endif

DEF_SURFBUF(KINECT_COLOR,       COLOR_SIZE, DXGI_FORMAT_R11G11B10_FLOAT,    DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(KINECT_DEPTH,       DEPTH_SIZE, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(KINECT_INFRA,       DEPTH_SIZE, DXGI_FORMAT_R11G11B10_FLOAT,    DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(KINECT_DEPTH_VIS,   DEPTH_SIZE, DXGI_FORMAT_R11G11B10_FLOAT,    DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(CONFIDENCE,         DEPTH_SIZE, DXGI_FORMAT_R8G8B8A8_UNORM,     DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(KINECT_NORMAL,      DEPTH_SIZE, DXGI_FORMAT_R10G10B10A2_UNORM,  DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(TSDF_NORMAL,        DEPTH_SIZE, DXGI_FORMAT_R10G10B10A2_UNORM,  DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(FILTERED_DEPTH,     DEPTH_SIZE, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(TSDF_DEPTH,         DEPTH_SIZE, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(VISUAL_NORMAL,      VARI_SIZE1, DXGI_FORMAT_R10G10B10A2_UNORM,  DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(VISUAL_DEPTH,       VARI_SIZE1, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_D32_FLOAT)

DEF_SURFBUF(DEBUG_A_DEPTH,      DEPTH_SIZE, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(DEBUG_A_NORMAL,     DEPTH_SIZE, DXGI_FORMAT_R10G10B10A2_UNORM,  DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(DEBUG_B_DEPTH,      DEPTH_SIZE, DXGI_FORMAT_R16_UNORM,          DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(DEBUG_B_NORMAL,     DEPTH_SIZE, DXGI_FORMAT_R10G10B10A2_UNORM,  DXGI_FORMAT_UNKNOWN)
DEF_SURFBUF(DEBUG_CONFIDENCE,   DEPTH_SIZE, DXGI_FORMAT_R8G8B8A8_UNORM,     DXGI_FORMAT_UNKNOWN)

main cpp file:


//=================================================================================================
// I still have separate arrays of my RT info, but consistency is ensured by the surfbuf_defs.h file
//=================================================================================================
struct SurfBuffer {
    ViewSize sizeCode;
    DXGI_FORMAT colorFormat;
    DXGI_FORMAT depthFormat;
    ColorBuffer* colBuf;
    DepthBuffer* depBuf;
};

enum SurfBufId : uint8_t {
#define DEF_SURFBUF(_name, _size, _colformat, _depthformat) _name,
#include "surfbuf_defs.h"
#undef DEF_SURFBUF
    SURFBUF_COUNT
};

const wchar_t* _bufNames[] = {
#define DEF_SURFBUF(_name, _size, _colformat, _depthformat) L"SURFBUF_" #_name,
#include "surfbuf_defs.h"
#undef DEF_SURFBUF
};
CASSERT(ARRAY_COUNT(_bufNames) == SURFBUF_COUNT);

SurfBuffer _surfBufs[] = {
#define DEF_SURFBUF(_name, _size, _colformat, _depthformat) \
{_size, _colformat, _depthformat, nullptr, nullptr},
#include "surfbuf_defs.h"
#undef DEF_SURFBUF
};
CASSERT(ARRAY_COUNT(_surfBufs) == SURFBUF_COUNT);

....
....
....
//=========================================================================================
// then in the application code, I can use the enum to access all these RTs very efficiently
//=========================================================================================
 // Request depthmap for ICP
    _tsdfVolume.ExtractSurface(gfxCtx, GetColBuf(TSDF_DEPTH),
        _vis ? GetColBuf(VISUAL_DEPTH) : nullptr, GetDepBuf(VISUAL_DEPTH));
    cptCtx.ClearUAV(*GetColBuf(CONFIDENCE), ClearVal);
    if (_vis) {
        // Generate normalmap for visualized depthmap
        _normalGen.OnProcessing(cptCtx, L"Norm_Vis",
            GetColBuf(VISUAL_DEPTH), GetColBuf(VISUAL_NORMAL));
        _tsdfVolume.RenderDebugGrid(
            gfxCtx, GetColBuf(VISUAL_NORMAL), GetDepBuf(VISUAL_DEPTH));
    }
    // Generate normalmap for TSDF depthmap
    _normalGen.OnProcessing(cptCtx, L"Norm_TSDF",
        &*GetColBuf(TSDF_DEPTH), GetColBuf(TSDF_NORMAL));

    // Pull new data from Kinect
    bool newData = _sensorTexGen.OnRender(cmdCtx, GetColBuf(KINECT_DEPTH),
        GetColBuf(KINECT_COLOR), GetColBuf(KINECT_INFRA),
        GetColBuf(KINECT_DEPTH_VIS));
    cmdCtx.Flush();
    _tsdfVolume.UpdateGPUMatrixBuf(cptCtx, _sensorTexGen.GetVCamMatrixBuf());
    // Bilateral filtering
    _bilateralFilter.OnRender(gfxCtx, L"Filter_Raw", GetColBuf(KINECT_DEPTH),
        GetColBuf(FILTERED_DEPTH), GetColBuf(CONFIDENCE));

    // Generate normalmap for Kinect depthmap
    _normalGen.OnProcessing(cptCtx, L"Norm_Raw",
        _bilateralFilter.IsEnabled() ? GetColBuf(FILTERED_DEPTH)
                                     : GetColBuf(KINECT_DEPTH),
        GetColBuf(KINECT_NORMAL), GetColBuf(CONFIDENCE));

It works as I wish and satisfies all my requirements, but it really confuses VS IntelliSense, and I end up with annoying IntelliSense errors (these errors are harmless, but really annoying).

Given that the use of macros is discouraged after C++11, plus all these IntelliSense errors, I'd like to know how you deal with buffer management in your engines.

I was thinking of having a structure that contains all the configured RT info, and then using a container like unordered_map to relate the debug name to the actual RT, to avoid the macros, but string lookup in a container like a map is way slower than the direct enum indexing I'm using now.
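To be concrete, this is roughly what I had in mind (just a rough sketch reusing the SurfBuffer struct from the code above; RegisterSurfBuf and GetSurfBuf are made-up helper names):


#include <string>
#include <unordered_map>

// Map-based alternative: one registry keyed by debug name instead of macros.
std::unordered_map<std::wstring, SurfBuffer> _surfBufMap;

void RegisterSurfBuf(const wchar_t* name, ViewSize size,
                     DXGI_FORMAT colFmt, DXGI_FORMAT depFmt)
{
    _surfBufMap[name] = SurfBuffer{size, colFmt, depFmt, nullptr, nullptr};
}

// Every lookup hashes and compares a wide string, which is the overhead
// I'm worried about compared to plain enum indexing.
SurfBuffer& GetSurfBuf(const std::wstring& name)
{
    return _surfBufMap.at(name);
}
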

Please let me know what your solutions are: if you think the macro approach is bad, why is it bad, and what are better solutions for my case? Or if you think the macro approach is fine, how do you deal with the IntelliSense errors?

Thanks in advance


It works as I wish and satisfies all my requirements, but it really confuses VS IntelliSense, and I end up with annoying IntelliSense errors (these errors are harmless, but really annoying).

I get quite a lot of these from magic macros in my engine, so I'm also interested in ways to make IntelliSense happy here...

Regarding your macro magic, there's another way of writing those kinds of macros that you could try:


#define LIST_WIDGETS( macro, param ) \
    macro( Foo1, Bar1, 1, param ) \
    macro( Foo2, Bar2, 2, param ) \
    macro( Foo3, Bar3, 3, param )

enum Widgets
{
#   define EXTRACT_WIDGET_NAME( a, b, c, d )   a = c,
    LIST_WIDGETS( EXTRACT_WIDGET_NAME, )
#   undef EXTRACT_WIDGET_NAME
};
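
And the same list can feed a matching name table, so the strings can't drift out of sync with the enumerators (rough sketch using the placeholder Foo1/Foo2/Foo3 entries above):


// Build a name table from the same list as the enum.
// (If the enum uses explicit values, lookup needs to account for that.)
const char* g_widgetNames[] =
{
#   define EXTRACT_WIDGET_STRING( a, b, c, d )   #a,
    LIST_WIDGETS( EXTRACT_WIDGET_STRING, )
#   undef EXTRACT_WIDGET_STRING
};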

I'd like to know how you deal with buffer management in your engines.

I define them as regular variables,
e.g. something like:
Texture KINECT_COLOR = gpu.CreateTexture("KINECT_COLOR", COLOR_SIZE, DXGI_FORMAT_R11G11B10_FLOAT);

which exists at the appropriate scope, like any other member variable (globals are bad).
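
e.g. a rough sketch of what I mean, with Texture, GpuDevice and the size/format constants standing in for whatever your engine actually uses:


// Render targets owned by the system that needs them, created once at init,
// rather than living in a global table:
struct KinectRenderer
{
    Texture kinectColor;
    Texture kinectDepth;
    Texture filteredDepth;

    void CreateResources(GpuDevice& gpu)
    {
        kinectColor   = gpu.CreateTexture("KINECT_COLOR",   COLOR_SIZE, DXGI_FORMAT_R11G11B10_FLOAT);
        kinectDepth   = gpu.CreateTexture("KINECT_DEPTH",   DEPTH_SIZE, DXGI_FORMAT_R16_UNORM);
        filteredDepth = gpu.CreateTexture("FILTERED_DEPTH", DEPTH_SIZE, DXGI_FORMAT_R16_UNORM);
    }
};
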

So I guess, right now there isn't any better approach than using macros to achieve what I really want ¯\_(ツ)_/¯

(globals are bad)

I totally agree that in general globals are bad, but I was wondering: for example, in a graphics engine, static buffers are well-defined and persistent every frame (like the depth buffer, velocity buffer, SSAO, etc.). I feel having them become globals makes the render loop much clearer, easier to manage, and easier to tune (for example: moving passes around for better async compute performance). So it would be great if you could share your thoughts on why globals are bad in my case. Thanks~

Until you want, for whatever reason, two viewports, each with their own set of buffers and results. Or because you now have names that may refer to nothing in some cases: what if I reference the SSAO buffer while SSAO is disabled? Plus, any change requires a recompilation, when some of these settings could have been pure data config files. And later your macro will have 42 different parameters because half of them are only relevant to a tiny portion of your render targets. And the easy direct access by enum is not really a win: finding a surface in a list of surfaces once a frame or so will not affect performance whatsoever.

We have the same kind of RT descriptions; I hate it, and I would call it legacy that I have no time to replace :(

I totally agree that in general globals are bad, but I was wondering: for example, in a graphics engine, static buffers are well-defined and persistent every frame (like the depth buffer, velocity buffer, SSAO, etc.). I feel having them become globals makes the render loop much clearer, easier to manage, and easier to tune

You could make the same argument for almost anything... My game always has one player-controlled character, so I feel having it be global is good. My game always has the same HUD items on screen every frame, so I feel having it global is good.

Even if you want to hard-code them, you don't need to make them global...

Also, many render targets only exist for less than one frame: shadow maps, SSAO, bloom results, etc. are often created and used within a smaller portion of a frame. Because of this you can often save memory by pooling your render targets and re-using them.

e.g. if you calculate Depth of Field and Bloom one after another, then the bloom pass can probably use the same render-targets that the Depth of Field pass used.
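
A rough sketch of that kind of pooling (RenderTargetDesc, Texture and CreateTexture here are placeholders, not any particular API):


#include <vector>
#include <dxgiformat.h> // for DXGI_FORMAT

struct Texture; // whatever your engine's render-target wrapper is

// Engine-specific allocation, placeholder for illustration:
Texture* CreateTexture(int width, int height, DXGI_FORMAT format);

struct RenderTargetDesc { int width; int height; DXGI_FORMAT format; };

// Hand out render targets by description and take them back when a pass
// is done, so later passes in the same frame can reuse the memory.
class RenderTargetPool
{
public:
    Texture* Acquire(const RenderTargetDesc& desc)
    {
        for (size_t i = 0; i < _free.size(); ++i)
        {
            if (_free[i].desc.width  == desc.width &&
                _free[i].desc.height == desc.height &&
                _free[i].desc.format == desc.format)
            {
                Texture* tex = _free[i].tex;
                _free[i] = _free.back();   // swap-remove the entry
                _free.pop_back();
                return tex;                // reuse an existing target
            }
        }
        return CreateTexture(desc.width, desc.height, desc.format);
    }

    void Release(Texture* tex, const RenderTargetDesc& desc)
    {
        _free.push_back({desc, tex});      // available for later passes this frame
    }

private:
    struct Entry { RenderTargetDesc desc; Texture* tex; };
    std::vector<Entry> _free;
};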

