
Member Since 04 Apr 2007

#4990718 Destructor vs Cleanup()

Posted by InvalidPointer on 16 October 2012 - 08:04 AM

Yes, in C# the object gets notified that it is about to be freed (by the GC). You then perform all the logic to free the resources the object possesses. Like SiCrane said, study C++/CLI.

In C++, destructors are not necessary, nor any special logic; in C#, management of memory, yes.

You can still design your logic to not need destructors, but your resources must be freed and managed well.

I really, really hope you don't use C++ exceptions.

Why shouldn't one use exceptions in C++?

They can carry some very subtle, complicated costs and require some extra thinking when designing algorithms and classes. For games I don't really think they're worth said cost; in most cases using error codes can work equally well and can 're-enable' more dangerous (but speedier) class architectures. The latter is why I bring things up-- the C++ spec says the compiler will walk up the call stack, invoking destructors on everything until it finds an appropriate catch block. If you don't release any resources in the destructor, congratulations! You've just created a pretty massive, totally unfixable memory leak.

EDIT: That also means that using raw allocations on the stack, anywhere, is unsafe. Consider the implications. Overloading operator new can help you in limited cases, come to think of it. If you don't, though, you're in trouble.

#4990618 Destructor vs Cleanup()

Posted by InvalidPointer on 15 October 2012 - 10:41 PM

Yes, in C# the object gets notified that it is about to be freed (by the GC). You then perform all the logic to free the resources the object possesses. Like SiCrane said, study C++/CLI.

In C++, destructors are not necessary, nor any special logic; in C#, management of memory, yes.

You can still design your logic to not need destructors, but your resources must be freed and managed well.

I really, really hope you don't use C++ exceptions.

#4987320 vector subscript out of bounds

Posted by InvalidPointer on 05 October 2012 - 09:46 PM

For starters you never appear to set the (extremely poorly named-- what does it do/control?) member variable 'i' to anything, so it's going to be 0xCDCDCDCD on MSVC/heap memory debugging enabled and random garbage in release mode. That's pretty big in decimal, and is very likely to be out of range when you do
void MD5Class::RenderBuffers(ID3D11DeviceContext* deviceContext)
{
    // Set vertex buffer stride and offset.
    stride = sizeof(VertexType);
    offset = 0;

    // Set the vertex buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetVertexBuffers(0, 1, &MD5Model.subsets[i].vertBuff, &stride, &offset);

    // Set the index buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetIndexBuffer(MD5Model.subsets[i].indexBuff, DXGI_FORMAT_R32_UINT, 0);

    // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
    deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    deviceContext->DrawIndexed(MD5Model.subsets[i].indices.size(), 0, 0);
}


EDIT: I also apologize if this comes off as snarky, but I'm not sure why you jumped to index buffer creation as being the problem when the debugger explicitly tells you that you're feeding a vector an index larger than what it has room for. Slow down a bit and take the time to learn the debugger, as it's designed to make this process really straightforward. You need to walk before you can run.

#4985907 Graphics programming book recommendations

Posted by InvalidPointer on 01 October 2012 - 05:08 PM

Chiming in to recommend Real-Time Rendering. If you're into photorealistic rendering, then Physically-Based Rendering from Theory to Implementation is also a really good book-- although it's very much centered on offline rendering.

If you're just getting started, Game Engine Architecture is a good read and the chapter on animation alone is probably worth the price of the book; it was written by one of the programmers at Naughty Dog/Uncharted and has a lot of simple yet effective ideas and some practical advice.

#4980946 Delegates in AngelScript

Posted by InvalidPointer on 17 September 2012 - 11:04 AM

Currently thinking about how delegates should work, as I find myself becoming really, really hamstrung without their inclusion in AngelScript. While I think I have some of the implementation worked out, I'm curious what people would want in an implementation and what the syntax should be. From a technical perspective, I think it's best approached at a single-subscriber level, with a delegate storing a function to call and an additional 'this' pointer; multicast delegates/events could be either a library add-on extending the built-in array type or something left to the application interface to provide.

What I'm not sure of, however, is how this should interact with garbage collection (as I understand it, this bites people in the ass with startling frequency in C#, do we need to have a language-level 'weak reference' construct too?) and cross-module function imports. Community, fire away.

#4969077 How do you multithread in Directx 11?

Posted by InvalidPointer on 13 August 2012 - 08:24 AM

It takes 7 milliseconds to render a large building with 10 large directional lights and about 11 milliseconds for 60 large directional lights. I'm not very happy about the performance right now and I'm gonna optimize some more and implement instancing, but from what I understood modern engines like Frostbite 2 have both instancing and multi-threaded rendering. The thing is I have no idea how to implement multithreading. What changes do I have to make to my device and context creation? Currently I'm just using D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, &featureLevel, 1, D3D11_SDK_VERSION, &swapChainDesc, &swapChain, &DEVICE, NULL, &CONTEXT);

You aren't listening to what Hodgman is saying here. Do you know how much of that time is spent queuing up draw calls on the CPU? What he's getting at is that you may actually be GPU limited-- that is, your CPU is mostly farting around waiting for the GPU to do the work assigned to it. You'd ultimately end up making the CPU fart around even more for no actual performance gain and in fact stand to make it worse if you handle threading poorly-- many professionals still can't get this right, although that's probably more the result of mediocre teaching than any inherent difficulty.

For what it's worth, though, this and this (and even more specifically, these two methods) should help get you started.

#4967341 How do I maintain a good quality real time rendering without textures?

Posted by InvalidPointer on 08 August 2012 - 04:56 AM

I have discarded the textures because our artist refuses to share his.

Sounds like it's time to get a new artist? Seriously, that's their job.

#4966361 Deferred shading ugly Phong

Posted by InvalidPointer on 05 August 2012 - 08:13 AM

Yeah, reason #3289472 why OpenGL is a design trainwreck. Remember kiddos, adding a clamp instruction is fine, but adding a special-case, higher-performance one that can be implemented in terms of the former somehow breaks hardware compatibility(???)

Good job, Khronos. You make us all so very, very proud.

#4966237 Deferred shading ugly Phong

Posted by InvalidPointer on 04 August 2012 - 08:40 PM

Phong was intentional, since it's more accurate (I am aware it's slower).

Actually, Blinn-Phong produces more accurate results (try both at glancing angles and you'll see how bad Phong looks!); for even better results, try energy-conserving Blinn-Phong.

QFE. The tl;dr version is that Blinn-Phong is actually an approximation to evaluating a Gaussian centered on the halfway vector.

In plainer English, you're using some statistics hacks to guess what fraction of the surface in the area being shaded is angled in such a way as to bounce light towards you/give you laser eye surgery if the light starts out coming from the light source in question.

EDIT: And for extra credit, use Toksvig filtering to account for actual texture detail in the normal map!

EDIT 2: Also
float NdL = max(0.0f, dot(Normal, LightVector));
makes me really, really angry. You wouldn't like me when I'm angry. Do
float NdL = saturate(dot(Normal, LightVector));
instead to avoid my wrath.

For clarification, you're wasting precious GPU time with those max() operations that you could be getting for free with a saturate modifier. You might think that the compiler can optimize this. You'd be wrong, though-- remember that the dot product itself does not have a defined range and that the compiler generally lacks sufficient context to know that you're dotting normalized vectors.

#4963095 Register functions with default value parameters

Posted by InvalidPointer on 25 July 2012 - 04:47 PM

Default parameters are entirely a compiler thing-- they're just instructions to the compiler to add some behind-the-scenes code if you don't manually specify arguments when the function is called. Due to how Angelscript works, you can't take advantage of this since it doesn't call functions the 'normal' way. You'll need to actually add the default arguments to the AS declaration instead of just assuming the compiler will handle it.

EDIT: For clarity, I refer specifically to the "void begin(const string &in, int renderop, const string &in)" bit.
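For instance, a registration along these lines (the default values shown, 0 and the empty string, are made up for illustration; AngelScript parses the defaults out of the declaration string itself):

```cpp
engine->RegisterGlobalFunction(
    "void begin(const string &in, int renderop = 0, const string &in = \"\")",
    asFUNCTION(begin), asCALL_CDECL);
```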

#4951906 Bloom before tonemapping

Posted by InvalidPointer on 22 June 2012 - 09:02 PM

They tell you right in the image title, you use the 'screen' blend mode instead of a direct addition. Much, much simpler ;)

#4940940 Materials Blending a'la UDK

Posted by InvalidPointer on 17 May 2012 - 08:54 AM

Are you completely sure of that? An artist told me that he could blend separate materials in UDK and asked me if I could do the same in Unity. Of course, "material" is quite a high-level concept which differs in UDK and Unity, so I was wondering how this could actually be done technically in UDK.

No, for realz. There may be other approaches, but the specific materials used in the video you linked to were all designed explicitly to support vertex color-based blending. Unreal lacks the concept of multipass rendering for materials outright.

EDIT: In fairness, I think lighting is done multipass, but this is not something you work with as an artist. The material compiler generates a lightmap-lit shader for 'ambient' and point light additive shaders.

#4940939 Branching & picking lighting technique in a Deferred Renderer

Posted by InvalidPointer on 17 May 2012 - 08:50 AM

Well, luckily it should be somewhat coherent indeed. A typical example would be a room that uses default shading on all contents, except the walls using Oren Nayar, and a sofa a velvet lighting method. In other words, pixels using technique X are clustered together.

Yet I'm still a bit afraid of having such a big branch, or doesn't it matter much whether a shader has to check for 2 or 20 cases?

It's still going to add about 19+ spurious ALU ops that may or may not be scheduled concurrently with useful work, depending on the target GPU architecture and a handful of other things. In the non-coherent branch case, you're very likely going to be shading all 20+ BRDF models and then doing a predicated move to pick the 'right' result-- *any* sort of boundary is going to be disproportionally expensive to render. I guess what I'm trying to say here is that your question gets asked a lot and the answer hasn't really changed much :(

If you want flexible BRDFs, you have a few options. You can just use standard, expressive BRDFs like Oren-Nayar/Minnaert or Kelemen Szirmay-Kalos for everything and store some additional material parameters in your G-buffers; this is in general a workable base for most scenes. More esoteric surfaces could be handled via forward shading (and you may be doing this anyway for things like hair, being that they're partially-transparent and all) and compositing into the final render.

You could also aim for the more general BRDF solutions like Lafortune or Ashikminh-Shirley and encode their parameters too. This should be sufficient to represent pretty much any material you can think of.

Lastly, you can also give tiled forward rendering a go. If you're starting off from a deferred renderer this may not be that hard to switch over to, though you'll need to do some work on the CPU side (namely light binning and culling) if you're just using a D3D9 feature set. It should still be viable, however.

#4935257 Why use Uber Shaders? Why not one giant shader with some ifs.

Posted by InvalidPointer on 26 April 2012 - 05:59 PM

It's right smack in GPU gems, actually. The article doesn't go too much into implementation details, but I wager there's some sort of runtime inlining or additional precompilation going on.

EDIT: If they don't mention it in the new language manuals, I may stand corrected here. Wonder if it's been removed/deprecated somehow.

EDIT 2: Based on the API descriptions provided, it's probably the first approach, inlining/AST substitution.

#4935071 Cascaded shadow maps question

Posted by InvalidPointer on 26 April 2012 - 07:57 AM

I mentioned this in one of the other cascaded shadow mapping threads, but do give adaptive shadow maps through rectilinear texture warping a gander. It only uses one map/'cascade', so much of cascaded shadow mapping's edge-case handling simply isn't needed. You can definitely get superior quality, and possibly speed on top of it :)