Hodgman

Posted 01 March 2013 - 08:26 PM

This is basically just data-oriented design, right -- having "the database" optimized for memory accesses, and also having the logic broken down into a series of parts that can be automatically scheduled across n cores? This has been all the rage at the bleeding edge of games for the past 5 years... This isn't a new way of thinking... to get decent performance on the PS3 (which has a NUMA co-CPU), you pretty much have to adopt this way of thinking about code.
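To make that concrete for readers who haven't seen it in practice, here's a minimal C++ sketch (not from any particular engine; all names are illustrative) of the two ideas above: the "database" laid out as flat per-field arrays so an update pass streams linearly through memory, and that pass split into independent chunks that can be scheduled across however many cores are available.

// Minimal data-oriented sketch: entity data stored as struct-of-arrays so a
// whole update pass streams linearly through memory, and the pass is split
// into contiguous chunks handed to N worker threads.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Positions {              // "the database": one array per field,
    std::vector<float> x, y, z; // hot data packed contiguously
};
struct Velocities {
    std::vector<float> x, y, z;
};

// One self-contained "part" of the logic: integrate a [begin, end) range.
void integrate(Positions& p, const Velocities& v,
               std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        p.x[i] += v.x[i] * dt;   // sequential access, cache/prefetch friendly
        p.y[i] += v.y[i] * dt;
        p.z[i] += v.z[i] * dt;
    }
}

// Schedule the pass across however many hardware threads are available.
void integrate_parallel(Positions& p, const Velocities& v, float dt) {
    const std::size_t count   = p.x.size();
    const unsigned    hw      = std::thread::hardware_concurrency();
    const std::size_t workers = hw ? hw : 1;
    const std::size_t chunk   = (count + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (std::size_t w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(begin + chunk, count);
        if (begin >= end) break;
        pool.emplace_back(integrate, std::ref(p), std::cref(v), begin, end, dt);
    }
    for (auto& t : pool) t.join();
}

Each worker only ever touches a contiguous slice of the arrays, which is what makes the pass both cache friendly and trivially parallel.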

There have been plenty of presentations by game developers trying to get these ideas to filter down by boasting about their 25x speed boosts gained from this approach, etc...

 

So, your rant about game-devs might be accurate in some quarters, but is very misplaced in others. Your rants against these generic game developers just make you seem insulting, defensive and unapproachable, sorry :P

 

It also seems misplaced when your article is about running a sim of just 10k entities at 5fps...

 

Quote:
What we've seen is this: game developers' culture was formed at a time when every last inch of performance had to be squeezed out of old-gen CPUs in order to render impressive graphics. Now, game devs have quite mastered the GPU, and they think their games are fast because of the impressive graphics they've been able to achieve on the GPU. But the GPU is very much like the old CPUs: it's extremely simple. CPUs, with their pipelines, instruction-level parallelism, multiple pending cache misses and instruction re-ordering are extremely complex. There is no more "close to metal" when it comes to the CPU. Even the metal is more akin to software than to actual metal. But while other disciplines have come to rely on JIT and middleware, game developers are suspicious of both.

You need to look into how modern GPUs really function these days. All of those complications are present, and they're even more involved because of the massive amount of data in flight at any one time and the massive latencies that have to be hidden...

 

P.S. All our GPU code is JIT compiled; I would JIT my gameplay code on consoles if I were allowed (it is JITed on PC); and almost every game is built using licensed middleware (Unity, Unreal, etc.).
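For anyone unfamiliar with what "GPU code is JIT compiled" means in practice, here's a rough sketch of the PC runtime path using OpenGL (purely illustrative; the compile_shader helper is made up, and it assumes a GL context is already current and the GL 2.0+ entry points have been loaded through a loader such as GLEW or glad). You hand the driver shader source text at run time, and it compiles it down to the GPU's native instructions on the spot:

// Sketch of the "GPU code is JIT compiled" point: on PC the driver compiles
// shader source to native GPU machine code at run time.
#include <GL/glew.h>  // assumption: GLEW (or another loader) provides GL 2.0+ entry points
#include <cstdio>

GLuint compile_shader(GLenum stage, const char* source) {
    GLuint shader = glCreateShader(stage);
    glShaderSource(shader, 1, &source, nullptr); // hand the driver plain text...
    glCompileShader(shader);                     // ...and it JITs it here

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, nullptr, log);
        std::fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

Direct3D is similar in spirit: HLSL is compiled ahead of time to bytecode, but the driver still translates that bytecode into hardware-specific code at run time.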

 

P.P.S. I've worked on military contracts with Boeing and the Australian Air Force, and I was not impressed with their attitudes towards efficiency. Too much bureaucracy and secrecy created huge barriers between different parts of the implementation, necessitating huge amounts of hardware. We ended up being given a whole PC inside the simulator just to simulate a few of the cockpit displays and send those images to the master PC, which meant reverse-engineering and re-implementing the same landscape-rendering engine that already ran on a different PC we weren't allowed to access. Who needs to be efficient when you can just throw more hardware at the problem...

