

mhagain

Member Since 06 Jun 2010

#5168445 Textures tear at distance

Posted by mhagain on Yesterday, 01:33 PM

Should I modify the depth buffer size, too?

 

Yes, do this too.

 

A 16-bit depth buffer is woefully inadequate.  All hardware for the past 15-odd years is absolutely guaranteed to support either 24-bit (with 8 unused), 24-bit (interleaved with 8 for stencil) or 32-bit depth buffers.  There's nothing to gain from using 16-bit unless you want to run on an ancient original 3DFX or some weird mobile platform.
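To put numbers on that, here's a quick sketch (illustrative C++, not tied to any particular API) of how many distinct depth values each format gives you:

```cpp
#include <cstdint>

// Number of distinct depth values a fixed-point depth buffer can represent.
uint32_t depthSteps(int bits) { return 1u << bits; }

// Smallest representable step in the normalized 0..1 depth range.
double depthGranularity(int bits) { return 1.0 / double((1u << bits) - 1); }
```

A 24-bit buffer has 256 times the resolution of a 16-bit one (16,777,216 levels versus 65,536), which is why depth fighting at distance largely disappears when you step up.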




#5168164 glReadPixels and a Multiple Color Attachment FrameBuffer

Posted by mhagain on 21 July 2014 - 10:00 AM

Use glCopyTexImage2D instead.  Alternatively, specify the texture object once-only and then use glCopyTexSubImage2D.




#5168148 DirectX11/SharpDX: How to disable mip-mapping (and scalig) for (multipass) pi...

Posted by mhagain on 21 July 2014 - 08:44 AM

If you just want to load pixels directly, with no sampling, no interpolation, no mipmapping, etc., then instead of this:

 

float4 color = picture.Sample(pictureSampler, input.tex);

 

Use this:

 

float4 color = picture.Load(input.tex);

 

Note that for this to work, input.tex should be an int3 instead of a float2: x and y are texel coordinates in the range 0 to texture size - 1 (not 0...1), and the third component selects the mip level.
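As an illustration of the coordinate change, here's a C++ mirror of the conversion the HLSL expects (the helper name and struct are made up for the example):

```cpp
#include <cmath>

struct Int3 { int x, y, z; };

// Convert a normalized UV in [0,1) to the integer texel address that
// Texture2D.Load expects: x/y in texels, z = the mip level to read.
Int3 uvToLoadCoord(float u, float v, int texWidth, int texHeight, int mip = 0) {
    return { (int)std::floor(u * texWidth),
             (int)std::floor(v * texHeight),
             mip };
}
```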




#5167466 DirectX 9 floating point texture

Posted by mhagain on 17 July 2014 - 02:01 PM

An alternative is to use SrcBlend = DestColor and DestBlend = SrcColor with a baseline of 0.5 (use a baseline of 128 for a regular 0-255 range). That way you don't need an FP texture and can get up to double brightness, saving a good deal of bandwidth and video RAM while you're at it.
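The arithmetic behind this, as a sketch: with SrcBlend = DestColor and DestBlend = SrcColor the blend unit computes src*dest + dest*src = 2*src*dest, so a value of 0.5 is the identity and anything above it brightens, up to 2x:

```cpp
// SrcBlend = DestColor, DestBlend = SrcColor gives:
//   out = src*dest + dest*src = 2 * src * dest
// so a lightmap value of 0.5 leaves the framebuffer unchanged and
// 1.0 doubles it -- "overbright" lighting without an FP texture.
float modulate2x(float src, float dest) {
    return 2.0f * src * dest;
}
```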




#5167415 Resizing buffers -> Out of memory

Posted by mhagain on 17 July 2014 - 09:18 AM

I don't respond directly to WM_SIZE.  What I do instead is just set a flag that indicates "hey, on the next frame we're drawing we'd really like to change the display mode/resize buffers/etc".  Then when the next frame runs I actually do the work and then clear the flag.  That way I can selectively suspend drawing without getting unwanted WM_SIZE actions, and I'm absolutely certain that only the most recent WM_SIZE is going to have any effect.
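A minimal sketch of that flag pattern (hypothetical C++; the struct and member names are invented for illustration):

```cpp
// Coalesce any number of WM_SIZE notifications into a single deferred
// resize, performed at the top of the next frame.
struct Renderer {
    bool resizePending = false;
    int  pendingW = 0, pendingH = 0;
    int  backbufferW = 0, backbufferH = 0;
    int  resizeCount = 0;   // how many real buffer resizes we performed

    // Called from the WM_SIZE handler: just record the request.
    void onWmSize(int w, int h) {
        resizePending = true;
        pendingW = w;
        pendingH = h;
    }

    // Called once at the start of each frame: do the real work, once,
    // using only the most recent requested size.
    void beginFrame() {
        if (resizePending) {
            backbufferW = pendingW;   // resize swap chain buffers here
            backbufferH = pendingH;
            ++resizeCount;
            resizePending = false;
        }
    }
};
```

However many WM_SIZE messages arrive between frames, only one resize actually happens, with the latest dimensions.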

 

Regarding the resource leak, a possible cause is that D3D11 holds references for all Set commands, so before you resize it's not a bad idea to issue a ID3D11DeviceContext::ClearState call.  (This is also a great way of confirming that your state change handling is sufficiently robust.)




#5167137 Win32 BOOL and bool ?

Posted by mhagain on 16 July 2014 - 06:12 AM

BOOL vs. VARIANT_BOOL vs. BOOLEAN vs. bool

 

Again, there's a reason for everything.  I'm not necessarily saying that all of them are good reasons, just that they are reasons.

 

I guess this is a characteristic of an API that's been developed over 30-odd years. We see the same in legacy OpenGL, where there are often 5 different ways of doing the same thing, and not consistently specified.




#5166970 Win32 BOOL and bool ?

Posted by mhagain on 15 July 2014 - 07:13 AM

In that case, they have made a function with a completely different interface that actually breaks backward compatibility. They could as well have renamed the return type at the same time or made a second function.

 

Ahh, here we go: http://blogs.msdn.com/b/oldnewthing/archive/2013/03/22/10404367.aspx

 

We can infer from that (and the "somebody said" linked post) that the addition of -1 as a return code happened during the changeover from Windows 3.0 to 3.1, when parameter validation was added to these functions.  As for the alternatives you suggest in your second sentence, I obviously don't know the reason why they didn't, but I'm going to assume that there was such a reason.




#5166960 Win32 BOOL and bool ?

Posted by mhagain on 15 July 2014 - 06:20 AM

 

Note that sometimes the use of BOOL in the Windows API is a lie and is not actually a boolean. For the GetMessage() function, 0 indicates WM_QUIT, non-zero indicates a non-WM_QUIT message except for -1 which means an error. 

WAT?

 

Are they crazy? (Rhetorical question).

 

I actually had to double-check that info on MSDN, I thought you were trolling. But you're not. And that frightens me to death.

 

 

There's probably a legacy reason for that - maybe something to do with the original GetMessage in an earlier version of Windows returning a strict BOOL, but when they moved it to 32-bit (or something else, I don't know) they needed to return another value but couldn't change the function signature for compatibility reasons.  Windows is full of stuff like that - check out Raymond Chen's blog for more info.
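The loop shape MSDN recommends for this tri-state return, sketched with a mock standing in for the real GetMessage (the mock and its return sequence are invented purely for illustration):

```cpp
#include <deque>

// Mock of GetMessage's tri-state return: >0 = message, 0 = WM_QUIT, -1 = error.
// (Stand-in for the real Win32 call so the loop shape can be shown in isolation.)
std::deque<int> g_mockReturns = { 1, 1, -1, 1, 0 };

int mockGetMessage() {
    int r = g_mockReturns.front();
    g_mockReturns.pop_front();
    return r;
}

// Compare against 0 and -1 explicitly rather than treating the
// returned BOOL as a true boolean.
void runMessageLoop(int &processed, int &errors) {
    int ret;
    while ((ret = mockGetMessage()) != 0) {
        if (ret == -1) { ++errors; continue; }  // handle/log the error
        ++processed;            // TranslateMessage/DispatchMessage go here
    }
    // falling out of the loop means WM_QUIT was received
}
```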




#5166708 obj file format

Posted by mhagain on 14 July 2014 - 05:28 AM

I'd second Promit's advice: .obj is fine as an interchange format but for actual production use you're doing slow (and error-prone) text parsing which is just going to give you complex code and annoyed users.

 

The ideal model format is one where you can just memory-map a file and pass the resulting pointer directly to a glBufferData call.  You really shouldn't be doing anything more complex.
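As a sketch of what such a format might look like (a hypothetical layout, written with ordinary file I/O here; a real loader would memory-map the file and hand the vertex pointer straight to glBufferData):

```cpp
#include <cstdint>
#include <vector>
#include <fstream>

// Hypothetical binary model format: a tiny header followed by raw vertex
// data, laid out exactly as the GPU expects it.
struct Vertex { float pos[3]; float uv[2]; };

struct ModelHeader {
    char     magic[4];      // "MDL1"
    uint32_t vertexCount;
};

void writeModel(const char *path, const std::vector<Vertex> &verts) {
    ModelHeader h = { {'M','D','L','1'}, (uint32_t)verts.size() };
    std::ofstream f(path, std::ios::binary);
    f.write((const char *)&h, sizeof h);
    f.write((const char *)verts.data(), verts.size() * sizeof(Vertex));
}

std::vector<Vertex> readModel(const char *path) {
    std::ifstream f(path, std::ios::binary);
    ModelHeader h;
    f.read((char *)&h, sizeof h);
    std::vector<Vertex> verts(h.vertexCount);
    // One contiguous read, no parsing; with a real mmap this is just
    // pointer arithmetic before the glBufferData call.
    f.read((char *)verts.data(), h.vertexCount * sizeof(Vertex));
    return verts;
}
```

No text parsing, no per-line error handling: the loader is a header check and one bulk read.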




#5166159 Drawing in one draw call vertex buffer with different programs

Posted by mhagain on 11 July 2014 - 03:17 AM

Currently what I do is creating two different VAOs with vertex attribute pointers pointing to related parts of the vertex array.

 

Just to address this part: you don't need this kind of setup at all.  You can do it with a single VAO and a single set of attrib pointers, but still with two draw calls, so it takes away some of the overhead.

 

So, say you're using glDrawArrays: look at the documentation for that and you'll see that it has a first parameter.  Let's assume that you have 200 vertices, and you want the first 100 drawn with ShaderProgramA, the second 100 with ShaderProgramB.  Your code is:

 

glBindVertexArray (LetsUseASingleVAOHere);
glUseProgram (ShaderProgramA);
glDrawArrays (GL_TRIANGLES, 0, 100);
glUseProgram (ShaderProgramB);
glDrawArrays (GL_TRIANGLES, 100, 100);

 

Similarly with glDrawElements you can see that it has a pointer parameter, (GLvoid *indices), as well as a count parameter, so again rather than using two sets of attrib pointers, you just use a single set and then adjust the parameters of your draw call to specify which range of the buffers to draw.
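For glDrawElements the range selection comes down to simple offset arithmetic: with an element buffer bound, the indices parameter is a byte offset into that buffer. A small illustrative helper (the function name is made up; the usage example assumes GL_UNSIGNED_SHORT-sized indices):

```cpp
#include <cstdint>

// With a GL_ELEMENT_ARRAY_BUFFER bound, the "indices" parameter to
// glDrawElements is a byte offset into the buffer, so drawing a
// sub-range means scaling the first index by the index size.
const void *indexOffset(uint32_t firstIndex, uint32_t indexSizeBytes) {
    return (const void *)(uintptr_t)(firstIndex * indexSizeBytes);
}
```

So drawing the second batch of 100 triangles with 16-bit indices would pass indexOffset(300, 2) as the pointer parameter.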




#5165910 Weird bug in AMD drivers.

Posted by mhagain on 09 July 2014 - 05:41 PM

If I was a betting man - and I'm not, but if I was - I would bet that the cause of this is DXGI "thinking" that your program doesn't have focus and throttling back as a result.




#5165479 Bad Performance On Intel HD

Posted by mhagain on 08 July 2014 - 02:07 AM

While it's true that if something runs well on an HD 2000 it will run well on anything, it's also true that the HD 2000 is almost right at the bottom of the lowest-of-low-end, even for Intel integrated graphics.  Perhaps looking for better performance from it is just not realistic?




#5165472 glDrawElements without indices?

Posted by mhagain on 08 July 2014 - 01:40 AM

As I said, let the driver page in and page out memory. No game needs 2 GPU memory managers.

 

This, basically.

 

I'll add: just because you can doesn't mean that you should.  The idea is that this should never even happen.  Just as you shouldn't write your own GPU memory manager, neither should the driver ever have to swap out resources.  If you're over-committing video RAM you don't start thinking about swapping out resources; you start thinking about reducing the size of your resources.  Say you target a 1gb card and find that you need 1.5gb: you reduce the resolution of your textures, you reduce the polycount of your models, you think about using compressed textures or compressed vertex formats, and you get everything to fit in that 1gb.

 

The reason is that creating and destroying resources is expensive, and disk access is now the primary bottleneck for a lot of computing tasks.  You don't build something with such heavy reliance on your slowest form of storage; it all adds up to runtime hitches that affect the gameplay experience.




#5165365 glDrawElements without indices?

Posted by mhagain on 07 July 2014 - 03:42 PM

Take texture memory. Does that mean that if you try to put too many textures onto the GPU, some textures will automatically be flushed? If yes, then this means I might have to upload textures more than one time!?

 

The driver handles this automatically for you.  If your video RAM (there's no such thing as dedicated memory for textures any more) becomes overcommitted, the driver will swap textures out to system memory, and automatically swap them back in to video RAM as required, so there's no manual intervention required.

 

The best way of dealing with this is actually not to query GPU memory and make decisions based on that.  Instead you'll set a minimum required specification and use that as a guideline when sizing your assets.  Nowadays 1gb might be a realistic "run on all hardware" minimum, although if you want to target really old hardware you might set your minimum to 512mb.  Anything lower than that, you just don't bother supporting.
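A back-of-the-envelope sketch of that sizing exercise (illustrative helpers; the 4/3 mip-chain factor is the usual geometric-series approximation, 1 + 1/4 + 1/16 + ...):

```cpp
#include <cstdint>

// Rough VRAM cost of a 2D texture: width * height * bytesPerPixel, plus
// roughly one third extra for a full mip chain (1 + 1/4 + 1/16 + ... = 4/3).
uint64_t textureBytes(uint32_t w, uint32_t h, uint32_t bpp, bool mips) {
    uint64_t base = (uint64_t)w * h * bpp;
    return mips ? base + base / 3 : base;
}

// Does a running total of asset sizes fit a minimum-spec budget (e.g. 1gb)?
bool fitsBudget(uint64_t totalBytes, uint64_t budgetBytes) {
    return totalBytes <= budgetBytes;
}
```

Summing estimates like these against the chosen minimum spec tells you up front whether you need to drop texture resolutions or switch to compressed formats, rather than discovering it from driver swapping at runtime.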




#5165309 Vertexbuffer - huge lag.

Posted by mhagain on 07 July 2014 - 11:52 AM

It's normal enough to see all (or most) of your time being spent in Present: have a read of this: http://tomsdxfaq.blogspot.com/

 

As for causes of your performance drop, the first thing to do is check the vertex buffer creation and locking flags.  For a dynamic buffer used in this manner, you should be creating with D3DUSAGE_WRITEONLY | D3DUSAGE_DYNAMIC, and locking with D3DLOCK_DISCARD.  You most definitely should not be calling CreateVertexBuffer each frame; create it once and reuse it (with a discard lock) each time you need to update.  Also be careful that you don't attempt to read from the buffer while you have it locked.

 

Assuming that these are all correct, you'll need to talk a little about how you're creating the new vertex data (CPU-side) to load into the buffer.  My own guess is that you're doing CPU-side skinning or frame interpolation; if the former, there's a high probability that the slowdown is not from your usage of a dynamic buffer but simply because CPU-side skinning is slow.  If the latter, you can quite easily switch frame interpolation to run on the GPU and thereby keep your vertex data entirely static.  Either way, your performance issue probably comes from extra CPU-side work associated with using dynamic data, and thinking a little about how to make this data (or as much of it as possible) static can reap huge rewards.





