


#5293391 what good are cores?

Posted by on 25 May 2016 - 10:08 AM


Memory bandwidth is the bottleneck these days.


Bring on the triple channel! I was very upset when I learned that DDR3 implementations weren't supporting triple channel! I think it was only one or two intel boards that would. Of course you could always build a system using server hardware.


I was far more disappointed when I read several articles about how "we don't need triple channel memory". Well ya no shit we can't make good use of triple channel if it isn't available to develop on numb-nuts!


Triple channel is nonsense. It never showed a real benefit to memory bandwidth outside synthetic benchmarks and very specialized uses. In any case, on the CPU side my personal feeling is that for games, memory bandwidth isn't nearly as big a problem as latency. It's chaotic accesses and cache misses that kill us. The GPU, on the other hand, can never have too much bandwidth, and we're seeing some great new tech on that front with HBM(2) and GDDR5X.

Isn't 33ms still more responsive than 66ms?  :wink:

You also need to be aware that D3D/GL like to buffer an entire frame's worth of rendering commands, and only actually send them to the GPU at the end of the frame, which means the GPU is always 1 or more frames behind the CPU's timeline.
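A back-of-envelope sketch of that effect (plain Python; the numbers are illustrative, not measured):

```python
# Back-of-envelope model of display latency when the API buffers
# whole frames of commands: a frame the CPU builds now is presented
# only after the frames already queued ahead of it.
def pipeline_latency_ms(frame_time_ms, frames_buffered):
    # One frame for the CPU to build it, plus the queued frames in front.
    return frame_time_ms * (1 + frames_buffered)

# At 30 fps (33 ms/frame), a single buffered frame already doubles
# the input-to-photon latency from 33 ms to 66 ms:
print(pipeline_latency_ms(33, 1))  # 66
```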

Of course, VR was where that really screwed us, much more so than input latency. That's why we wound up with this: https://developer.oculus.com/documentation/mobilesdk/latest/concepts/mobile-timewarp-overview/

#5292723 how much PC do you need to build a given game?

Posted by on 20 May 2016 - 09:25 PM

Recommended specs are what you get at the end of the dev cycle, after the optimization work. During dev, a game requires much more power because it hasn't been optimized yet, and you may have any number of quick-and-dirty hacks to get things done. There are also productivity concerns: our game doesn't use a hex-core i7 effectively at all, but the build sure as hell does.

#5290910 I aspire to be an app developer

Posted by on 09 May 2016 - 07:01 PM

Moved to For Beginners.

#5288614 GPL wtf?

Posted by on 25 April 2016 - 10:44 AM

It would be helpful if you supplied the original article, rather than your interpretation of it.

#5286316 Best laptop for game development?

Posted by on 11 April 2016 - 10:21 AM

Both the Dell Inspiron 15 7000 series and Dell XPS 15 are excellent laptops. The Lenovo Y700 seems to be a great choice as well. Whichever you choose, I would opt for a dedicated-GPU model if you can. I would not touch MSI again.


Of the laptops you listed just now... the T540 has a dedicated GPU so I would probably put that at the top of the list.

#5284557 When would you want to use Forward+ or Differed Rendering?

Posted by on 31 March 2016 - 07:47 PM

Crudely speaking, the cost of forward rendering is N objects * M lights, which means heavily lit, geometrically complex environments get very expensive. Deferred was developed because its cost is N objects + M lights. Lighting in deferred is very cheap; you can have thousands of lights if they're small. I've used deferred pipelines in the past to run dense particle systems with lighting from every particle, and stuff like that. The downsides are massive bandwidth requirements, alpha blending problems, anti-aliasing problems, and material limitations.


Forward+ and its variations were developed to keep the cheap lighting of deferred without all of deferred's other problems. While bandwidth use is still pretty high, it tends to cooperate much better with varied materials, alpha blending, and AA, and it leverages compute tasks for better overall GPU utilization. In general, I would encourage Forward+/tiled forward as the default rendering pipeline of choice on modern desktop/laptop hardware, unless you have a specific reason not to.

#5283934 What happens if gpu reads and cpu write simultaneously the same data ?

Posted by on 28 March 2016 - 03:50 PM

I found the documentation on it: https://github.com/GPUOpen-LibrariesAndSDKs/LiquidVR/raw/master/doc/LiquidVR.pdf

It's actually being called Late Data Latch and the whitepaper has a brief explanation:


The Late Data Latch helps applications deal with this problem by continuously storing frequently updated data, such as, real-time head position and orientation matrices, in a fixed-sized constant buffer, organized as a ring buffer. Each new snapshot of data is stored in its own consecutive data slot. The data ring buffer has to be large enough to ensure the buffer will not be overrun and latched data instance will not be overwritten during the time it could be referenced by the GPU. For example, if data is updated every 2ms, a game rendering at 100fps should have more than 50 data slots in the data ring buffer. It is advised to have at least twice the minimum number of slots to avoid data corruption. Just before the data is to be consumed by the GPU, the index to the most up-to-date snapshot of data is latched. The shader could then index into the constant buffer containing the data to find the most recent matrices for rendering.
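The scheme the whitepaper describes amounts to a ring buffer in which only the index of the newest slot is latched; here's a minimal sketch in Python (the slot count and the integer "poses" are illustrative stand-ins):

```python
# Minimal sketch of the Late Data Latch ring buffer: the CPU keeps
# writing fresh snapshots into consecutive slots, and the index of
# the newest slot is latched just before the GPU consumes the data.
class LateLatchRing:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.writes = 0

    def push(self, snapshot):
        # Producer side: each new snapshot goes in the next slot,
        # wrapping around; stale slots are eventually overwritten.
        self.slots[self.writes % len(self.slots)] = snapshot
        self.writes += 1

    def latch(self):
        # Consumer side: latch the index of the most recent snapshot.
        return (self.writes - 1) % len(self.slots)

# 100 slots: twice the paper's >50-slot minimum for its 2ms/100fps example.
ring = LateLatchRing(num_slots=100)
for pose in range(7):            # stand-in for head-pose matrices
    ring.push(pose)
print(ring.slots[ring.latch()])  # 6, the most recent snapshot
```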

#5283741 OpenGL Check if VBO Upload is Complete

Posted by on 27 March 2016 - 12:15 PM

apitest shows how to issue a fence and wait for it. The first time it checks if the fence has been signaled. The second time it tries again but flushing the queue since the driver may not have processed the copy yet (thus the GPU hasn't even started the copy, or whatever you're waiting for. If we don't flush, we'll be waiting forever. aka deadlock)
Of course if you want to just check if the copy has finished, and if not finished then do something else: you just need to do the 'wait' like the first time (i.e. without flushing), but using waiting period of zero (so that you don't wait, and get a boolean-like response like OP wants). We do this in Ogre to check for async transfer's status.

So can you use a fence to test for the actual status of a BufferSubData call uploading to the server? And does that work consistently across platforms without issuing a draw call against that buffer? After all, the driver must make a full copy of the buffer to somewhere in-core at the point of the call, but what the Rube Goldberg machine does after that is anyone's guess.



Calling glDraw* family of functions won't stall because it's also asynchronous. I can't think of a scenario where the API will stall because an upload isn't complete yet.

It'll stall if it gets caught in a situation where it can't continue dispatching frames without finishing a pending operation. I don't remember seeing this happen with buffer uploads, but I've seen it many, many times with shader recompiles. Whether that's because shader recompiles are a long goddamn process, or because they have to happen on the main thread, or because it just runs up against the limit of buffered frames, or some combination thereof, I'm not sure. It seems conceivable that a large upload followed by lots of dispatched frames could trigger the same effect.
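The two-step fence pattern quoted above, sketched with the GL sync API (PyOpenGL names; an untested sketch that assumes a current GL 3.2+ context and a fence issued right after the upload):

```python
# Untested sketch: poll-then-wait on a fence placed behind an upload.
# Assumes a current GL 3.2+ context; `fence` was created right after
# the glBufferSubData call with:
#   fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0)
from OpenGL.GL import (glClientWaitSync, GL_SYNC_FLUSH_COMMANDS_BIT,
                       GL_ALREADY_SIGNALED, GL_CONDITION_SATISFIED,
                       GL_TIMEOUT_EXPIRED)

def upload_done(fence):
    # Zero timeout, no flush bit: a pure "is it finished yet?" poll,
    # the boolean-style answer the OP is after.
    status = glClientWaitSync(fence, 0, 0)
    return status in (GL_ALREADY_SIGNALED, GL_CONDITION_SATISFIED)

def wait_for_upload(fence, timeout_ns=1_000_000_000):
    # Blocking wait: set the flush bit, otherwise the driver may never
    # submit the pending copy and we deadlock waiting on the fence.
    status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, timeout_ns)
    return status != GL_TIMEOUT_EXPIRED
```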

#5283665 OpenGL Check if VBO Upload is Complete

Posted by on 26 March 2016 - 09:57 PM

Meaning that if you update the VBO then call draw, the result will appear to be sequential/serialized..they WILL happen in the order submitted.

We're not talking about logical ordering or things appearing serialized. There are a ton of operations in GL that are deferred internally, which in turn causes runtime hitches when the driver decides to actually commit them. Buffer uploads, texture uploads, and especially shader (re)compiles are the prime suspects here. This is not an imaginary problem for anyone who has actually shipped a game on a GL pipeline.


Issuing a dummy draw call - and potentially a fence or glFlush/glFinish/SwapBuffers - is the only way to combat this problem of runtime hitches. 

#5283626 OpenGL Check if VBO Upload is Complete

Posted by on 26 March 2016 - 04:38 PM


Because the OpenGL spec doesn't recognize the transfer as asynchronous, per se, there's no way to check. The only option is to force it to wait by issuing a dummy draw call. There are a variety of other things that can also be deferred and lead to hitching in-game, and so it's standard practice to issue dummy draw calls for these things. Getting rid of all this is one of the big improvements in the new APIs (DX12/Vulkan).


Ah, I see! In that case, I may just set a time-out and only try to render after half a second or something; this won't be noticeable. Problem with DX12 is that one can't support Linux,  which for me is a crime if one can avoid it. Vulkan looks promising. Thanks for letting me know though.


It's not necessarily on a timer. Sometimes the driver simply can't be bothered to finish the upload until you actually issue a draw call.

#5283621 OpenGL Check if VBO Upload is Complete

Posted by on 26 March 2016 - 04:09 PM

Because the OpenGL spec doesn't recognize the transfer as asynchronous, per se, there's no way to check. The only option is to force it to wait by issuing a dummy draw call. There are a variety of other things that can also be deferred and lead to hitching in-game, and so it's standard practice to issue dummy draw calls for these things. Getting rid of all this is one of the big improvements in the new APIs (DX12/Vulkan).

#5282493 GIT vs Mercurial

Posted by on 21 March 2016 - 07:46 PM

I hate both, but I hate Git less, and there's a large community based around it, with GitHub as ground zero. For ease of use and all-around sanity, Subversion is still vastly superior; then again, Subversion also goes ballistic and implodes on most branch merges. Git will hassle you constantly about inane bullshit that could've easily been fixed, and once in a while it will detonate more seriously. But the issues are manageable, and you gain a very capable and flexible version control system. Lightweight history editing (particularly of commit messages) and sane merges are fantastic to have. Git's also finally starting to take large file management seriously, but another option is simply to run those assets out of Subversion.


Mercurial doesn't support partial/cherry-picked commits, on purpose; you have to play games with shelving extensions and other nonsense. I assume this is great for web developers or something. As far as I'm concerned, it's so far out of touch with normal software development that I refuse to deal with it.

#5282397 Math I need to know to make shaders

Posted by on 21 March 2016 - 12:28 PM

Computational geometry is the missing piece in what people have listed so far. Also consider robotics courses, as they are excellent primers in handling coordinate transformations.

#5282278 Efficient resource manager for OpenGL in C++?

Posted by on 20 March 2016 - 11:44 PM

I'm getting the feeling not everybody is on the same page about what a "resource manager" is.

#5282275 Too good to be true?

Posted by on 20 March 2016 - 11:29 PM


1. Buy RPG Maker MV for $80.
2. Craft an RPG storyline over the course of several months - say $10k worth of labour.
3. Make an RPG using premade assets.
4. Throw in a couple of songs not in the kit for flavor.
5. Release it on the App Store; AFAIK you can legally release games made with the kit, even using the kit's assets. Just read the terms.
6. Probably make zero dollars, since about eight million games are published per second and you did no marketing