
Radikalizm

Member Since 05 May 2011
Last Active: Today, 02:50 AM

Posts I've Made

In Topic: ECS Architecture Efficiency

16 August 2016 - 05:12 PM

I'm looking at the ability to render as many entities as possible to the screen before it drops the frame rate below 60 FPS.

 

Honestly, you're going to run into a ton of actual rendering-related bottlenecks before a decently architected game object/entity/"whatever buzzword you want to use" system gets in your way. Don't start theoretically optimizing things. Solve the problems you have at hand to get to the desired frame time for your specific game.

 

 

 

Spread that 16ms out to every system???  Are you trying to run every system at 60FPS?  That's wasteful and pointless....

 

You've obviously never worked on any type of shooter before. Try telling that to people designing any type of competitive game which supports split-second inputs. 16ms in a game update tick can definitely make a difference.

 

 

 

Not to mention there's the option to run your rendering on a separate thread asynchronously, which most games don't even do anymore.

 

 

What are you talking about? There are plenty of games doing async rendering. The era of single-threaded update and render is over!

 

 

 

Still looking for more improvements that will make this even more efficient. But my question is actually really simple: how fast are your engines running, and what specific design patterns are you using to improve how many simultaneous entities can exist and be drawn without slowing your engine down? I'm trying to find a goal to shoot for and new ideas to improve my own engine.

 

This is still a very useless comparison to make. You're acting like every engine is comparable in some shape or form when it comes to performance. They're not. Focus on what your requirements and goals are. Do some profiling, find your bottlenecks, fix them. Rinse and repeat. 


In Topic: Need one more book on graphics programming ;)

15 August 2016 - 06:31 PM

+1 for all three of those. Real-Time Rendering in particular is a must-have!

 

The GPU Pro series is neat as well, but each volume is quite specialized and focused on a specific set of techniques and their implementations.


In Topic: Inspired By The Demoscene: Beginner Starting His Journey In Graphics Programm...

12 August 2016 - 12:26 PM

C++ being an extension of C, at the core there's really not much difference between the two... C++ just adds new stuff to the language, abstracts some things and fixes a few little things here and there, to the point that you could almost compile C code on a C++ compiler without problems... so by learning C you're already learning the basics of C++...

 

I really don't want to start a language war here, but this is a very common misconception. C++ is not a superset of C. It might have been at one point a long time ago, but it's absolutely not true anymore these days.

 

There's a world of difference between C++ and C; learning C does not teach you anything about how to properly write and structure code in C++. If you go into C++ with a C mindset you'll probably just end up writing something that looks like C with classes.

 

I'm going to leave it at this. It's really not my intention to start a language flame war.


In Topic: Inspired By The Demoscene: Beginner Starting His Journey In Graphics Programm...

10 August 2016 - 06:07 PM

Welcome to the wonderful world of graphics programming! I'll try to answer some of those questions for you.

 

1. Learn C++:

Any language which has access to a well-documented graphics library will do, so you don't necessarily have to go for C++. Honestly, if this is your first experience with programming I'd actually recommend going for something like C#. There are plenty of awesome libraries and wrappers available for graphics-related programming, and it'll allow you to get up and running more quickly. It's going to be a while before you can get any real benefit out of working in C++ anyway, so a more "approachable" language might be a smaller hurdle to get over. I know a lot of people have different opinions on this, though, so I'll let others weigh in as well.

 

2. Brush up on Linear Algebra and more specifically, have a solid understanding of the use and applications of Vectors and Matrices?

Absolutely! Understanding linear algebra and trigonometry is going to help you out a lot if you want to make any progress. Once you advance a bit further it can become useful to have a good grasp of some elementary concepts in calculus, like derivatives and integrals. By the time you actually need those, it will be clear which mathematical concepts you should pick up next. It's best not to worry about the very complex stuff for now; basic linear algebra alone can already get you far.

 

3. Choose between DirectX or OpenGL. Personally I would probably pick OpenGL as it's cross-platform, so what comes next is understanding the entire OpenGL pipeline, its libraries, etc.? GLSL, shaders?

The fact that the OpenGL standard is supported on multiple platforms does not mean that it will work out of the box across multiple platforms or even across multiple graphics hardware vendors. OpenGL is notorious for being very inconsistent in these cases. If you're starting out I wouldn't focus on getting things to work on multiple platforms just yet. Start off with one and see how far you get. Worry about the mess that is cross-platform graphics development later.

 

If you want to focus on actually implementing graphical techniques (i.e. you actually want to write code and shaders to get things to show up on screen ASAP) I'd actually recommend going with something a little higher level than plain DirectX or OpenGL. I'm a bit out of the loop on publicly available graphics engines, but there are plenty of them out there. I know libraries like Ogre used to be very popular, but I haven't had a look at any of those libraries in years. Even working in Unreal or Unity to experiment with shaders and such can be a great introduction without too much stuff getting in your way.

I'd recommend looking at OpenGL and DirectX once you understand the needs and requirements of graphical applications a bit better, or if you're just really passionate about having a look at more architectural stuff.

 

The most important thing is to experiment a lot and not to worry about getting everything right immediately. Don't be intimidated by the complex-looking math. Find sample implementations of things, play around with them, try to understand them, and try to connect them mentally to their mathematical descriptions.

 

Good luck!


In Topic: Cross-Platform Graphics Interface Design

05 August 2016 - 01:04 AM

Alternatively, you can log the PSOs that get used in a play-through (or the combination of Dx11-style coarse states that were used with each shader), and use this logged information to construct a PSO cache on the user's machine the first time they start the game. That kind of system is always prone to accidentally missing a particular combination in your logged play-through though, so you'd have to make it gracefully deal with cache misses and the associated framerate hitch.

 

Sadly enough this is what we had to resort to in the end :(. I would've loved to have done a proper implementation, but you know how these kinds of things go when trying to meet a deadline. This particular title was not written with D3D12 in mind, and we didn't have the time or resources to re-architect it to be D3D12-friendly.

 

Most of my work is on PC titles and the occasional current-gen console title, so I generally don't have to deal with the cases you mentioned above. Having some of these tougher restrictions forced on you up-front actually does work out nicely in this situation!

 

<thread_derail>

On platforms with no input-assembler, this allows you to compile permutations of the VS with the vertex-buffer decoding hard-coded in the VS.
 

Recently I've been seeing more and more implementations which bypass the input assembler (and input layouts) entirely, instead opting to use a structured buffer to provide vertex data to the vertex shader. Adopting this approach globally would definitely simplify PSO generation. You could take geometry data out of the equation by defining some conventions on topology and index buffers. I do remember reading about some architectures already doing this under the hood to emulate the input assembler stage, but I'm afraid I don't remember specifics. I wonder whether there'd be any major downsides to taking this approach globally.
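A minimal HLSL sketch of that "vertex pulling" idea, assuming a simple position-plus-normal layout (the buffer name and register binding are made up for illustration):

```hlsl
struct Vertex { float3 position; float3 normal; };

StructuredBuffer<Vertex> g_Vertices : register(t0);

// No input layout at all: the shader fetches its own vertex via SV_VertexID,
// so the PSO no longer depends on the vertex format the buffer holds.
float4 main(uint vertexId : SV_VertexID) : SV_Position
{
    Vertex v = g_Vertices[vertexId];
    return float4(v.position, 1.0);
}
```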

</thread_derail>

