
swiftcoder

Member Since 03 Jul 2003

#5293386 how much PC do you need to build a given game?

Posted by swiftcoder on 25 May 2016 - 10:00 AM

But it seems there are more fundamental questions to be resolved first, such as: does a given title even require an "average PC"? Not all games are Skyrim.

Why does it really matter, so long as your development PC is sufficient for the specific game *you* are building?

Last I checked, the actual "average PC" has a dual-core CPU, 2 GB of RAM, and an Intel integrated GPU. The Steam survey represents the set of people who regularly play games on Steam, which pretty much guarantees a skewed picture of the world (most gamers, by the numbers, play browser and mobile games exclusively).


#5292831 how much PC do you need to build a given game?

Posted by swiftcoder on 22 May 2016 - 01:32 AM

one really needs the survey results from 2 to 4 years from now, not from right now

I'm not really sure the average moves that much in 2-4 years. For GPUs, the high end moves quite a bit in that time, but I think you'll find the low end barely moves at all, and the midrange moves pretty slowly too.

CPUs barely move at all these days: my current CPU is a quad-core i5 from 4 years ago, and it's still running today's games on ultra settings. Storage has only changed in price and capacity since SSDs became mainstream. RAM gets faster and bigger over time, but realistically enthusiasts have had 8-16 GB of whatever RAM their motherboard supports for at least the last 6 years.


#5292728 how much PC do you need to build a given game?

Posted by swiftcoder on 21 May 2016 - 12:04 AM

Recommended specs are what you end up with at the end of the dev cycle, after the optimization work. During development, a game requires much more power because it hasn't been optimized yet, and you may have any number of quick-and-dirty hacks to get things done. There are also productivity concerns: our game doesn't use a hex-core i7 effectively at all, but the build sure as hell does.

QFE. When it comes to compiling giant C++ codebases, you'll want as many cores as money can buy, and an SSD or two into the bargain.

GPU tends to be less of an issue, so long as it's of the generation you plan to support, and somewhere in the mid-to-high end of things.


#5292477 How can I install a module in Python

Posted by swiftcoder on 19 May 2016 - 07:44 AM

If you grabbed the source code, just open it up and run python setup.py install from that directory.

If you grabbed an egg instead, it needs to go into the site-packages directory under your Python installation.


#5291313 Ways of Buffering Data to be Rendered?

Posted by swiftcoder on 12 May 2016 - 12:26 PM

The other option is some sort of intermediate buffer, where update places a copy of its data when it changes, and render grabs the current data when it renders. But then update must lock and unlock the buffer, as must render. This does mean update won't stall while waiting for render to copy its data. Of course, you could still get stalls if update is ready to write to the buffer while render is still reading it.

You can always double- or triple-buffer your data to get out of all these issues, at the expense of additional memory.
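
A minimal sketch of the triple-buffered approach, assuming a single update (writer) thread and a single render (reader) thread; all names here are mine, not from any particular engine:

```cpp
#include <array>
#include <atomic>

// Triple buffer: the writer always owns a private "back" slot to fill, the
// reader always owns a private "front" slot to read, and the two exchange
// work through a shared "middle" slot with a single atomic operation, so
// neither side ever blocks on the other.
template <typename T>
class TripleBuffer {
public:
    // Writer side: fill back(), then publish() once per update tick.
    T&   back() { return slots_[back_]; }
    void publish() {
        // Swap back <-> middle, marking the middle slot as fresh (dirty bit).
        unsigned old = middle_.exchange(back_ | kDirty, std::memory_order_acq_rel);
        back_ = old & kIndexMask;
    }

    // Reader side: grab the freshest published slot if there is one,
    // otherwise keep re-reading the slot we already hold.
    T& front() {
        if (middle_.load(std::memory_order_acquire) & kDirty) {
            unsigned old = middle_.exchange(front_, std::memory_order_acq_rel);
            front_ = old & kIndexMask;
        }
        return slots_[front_];
    }

private:
    static constexpr unsigned kDirty = 4, kIndexMask = 3;

    std::array<T, 3> slots_{};
    unsigned back_  = 0;                // owned by the update thread
    unsigned front_ = 1;                // owned by the render thread
    std::atomic<unsigned> middle_{2};   // shared, only ever swapped atomically
};
```

Update writes into back() and calls publish(); render calls front() whenever it wants the latest snapshot. Neither side ever waits on the other, and the only cost is the third copy of the data.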


#5291300 Are there any services for reducing network delay/latency?

Posted by swiftcoder on 12 May 2016 - 10:34 AM

Also, AWS is a virtualization-based hosting provider, which does not guarantee any particular scheduling latency. Real-time applications may find that they suddenly see unexplained jitter on the server, caused by "noisy neighbors" or by the virtualization platform itself.

To some extent you can buy your way out of this issue by forking over for dedicated hosts. That doesn't get the virtualization overhead out of the way, but it does guarantee dedicated access to the hardware resources.




#5291205 Are there any services for reducing network delay/latency?

Posted by swiftcoder on 11 May 2016 - 03:08 PM

But, if you have a Netflix-like budget, yes, there are things you can do.

Well, one thing, really.

The only effective way to reduce latency between server and client is to move the server closer to the client. Services like OnLive/Gaikai, with their strict latency requirements, mostly solved this by placing datacenters near each major population center they served. Blizzard splits all their online games into geographical regions for the same reason, placing a datacenter in each of US East, US West, Europe, and Asia (possibly others).

Of course, most small companies don't have the resources to run their own datacenter, let alone many datacenters, in which case your best option is likely to lease servers from a company that operates such datacenters for you. For example, here at AWS, we offer a pretty good spread of datacenters, with more coming in the near future.




#5291183 Data-oriented scene graph?

Posted by swiftcoder on 11 May 2016 - 12:56 PM

One of the issues I ran into while building a scenegraph like that was that the pure index-based approach works great for static graphs, such as a skeleton mesh, but becomes a headache for dynamic graphs :(

Yeah, so that's an issue with data-orienting a scene graph. However, I think it's worth saying again, and again, and again: a scene graph is almost never what you actually want in this day and age.

You generally need a spatial structure to accelerate culling and other visibility queries, and a loose quad-tree or a spatial hash is much better for this than a hierarchical scene graph. You also generally want to attach one object to another (a turret to a hardpoint, a shield to a character's arm, etc.), and for that a flat list of attachments can be just as effective as a hierarchy.
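
To make the first point concrete, here is a minimal sketch of a 2D spatial hash for broad-phase culling queries; the names, the single-cell-per-object simplification, and the 2D grid are mine, purely for illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Minimal 2D spatial hash for broad-phase visibility/culling queries.
// Objects are bucketed by the grid cell containing their position; a query
// walks only the cells overlapping the query rectangle.
struct SpatialHash {
    explicit SpatialHash(float cellSize) : cell(cellSize) {}

    void insert(uint32_t objectId, float x, float y) {
        grid[key(x, y)].push_back(objectId);
    }

    // Collect every object whose cell overlaps the axis-aligned query box.
    std::vector<uint32_t> query(float minX, float minY, float maxX, float maxY) const {
        std::vector<uint32_t> result;
        for (int cy = int(std::floor(minY / cell)); cy <= int(std::floor(maxY / cell)); ++cy)
            for (int cx = int(std::floor(minX / cell)); cx <= int(std::floor(maxX / cell)); ++cx) {
                auto it = grid.find(pack(cx, cy));
                if (it != grid.end())
                    result.insert(result.end(), it->second.begin(), it->second.end());
            }
        return result;
    }

private:
    int64_t key(float x, float y) const {
        return pack(int(std::floor(x / cell)), int(std::floor(y / cell)));
    }
    static int64_t pack(int cx, int cy) {
        return (int64_t(uint32_t(cx)) << 32) | uint32_t(cy);
    }

    float cell;
    std::unordered_map<int64_t, std::vector<uint32_t>> grid;
};
```

A loose quad-tree buys you the same kind of query without having to pick a cell size up front; either way, the structure is flat and maintained independently of any parent/child relationships.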


#5291032 Data-oriented scene graph?

Posted by swiftcoder on 10 May 2016 - 03:41 PM

Graphs are naturally hierarchical, so OOD matches perfectly.

This is one of those things that people who are used to OO tend to *think*. It tends to be less true in practice. When all you have is a hammer, everything looks like a nail, and so forth...

Most scene graphs are actually trees. What is a tree, at its most essential? It's a set of parent pointers. Nothing object-oriented about a flat array of parent pointers, now, is there?

edit: even if you actually have a graph, a graph is a set of nodes and a set of edges. The typical OO encoding of Node objects with outgoing edge pointers has about as much to do with "graphness" as any other representation.
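
As a sketch of what that flat representation looks like in practice (the names and the scalar stand-in for matrices are mine):

```cpp
#include <cassert>
#include <vector>

// A transform hierarchy stored as flat, parallel arrays. The "tree" is just
// the parent[] array; there isn't a Node object or a pointer in sight.
struct Hierarchy {
    std::vector<int>   parent;  // index of the parent node, -1 for roots
    std::vector<float> local;   // local transform per node (a matrix in practice)
    std::vector<float> world;   // world transform per node, filled by update()
};

// Propagate transforms root-to-leaf in one linear pass over the arrays.
// Requires nodes to be ordered so every parent appears before its children.
void update(Hierarchy& h) {
    h.world.resize(h.local.size());
    for (std::size_t i = 0; i < h.parent.size(); ++i) {
        int p = h.parent[i];
        assert(p < int(i));  // parents-before-children ordering
        // Stand-in for a matrix multiply: world = parentWorld * local.
        h.world[i] = (p < 0) ? h.local[i] : h.world[p] * h.local[i];
    }
}
```

Reparenting a node is just writing a new index into parent[] (plus re-sorting if you rely on the ordering), which is exactly where the dynamic-graph headaches mentioned above come from.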


#5289234 Procedural character animations. What is the state of the art?

Posted by swiftcoder on 29 April 2016 - 07:57 AM

Eskil's confuse is also pretty neat along those lines.




#5288312 BGFX and micro-shaders?

Posted by swiftcoder on 23 April 2016 - 10:50 AM

Yes, most of these systems work by concatenating small fragments of GLSL code. Some fragments can be chained together; others are mutually exclusive. Not only is lighting a part of this system, but so is the vertex transform pipeline, because you need to be able to account for the differences between basic and skinned meshes.

For example, a single material might be composed of the following chunks:

- Vertex Skinning
- Diffuse Map
- Normal Map
- Lighting Equation with Specular

And another might use:

- Basic Vertex Transform
- Vertex Color
- Environment Map
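
At its simplest, the generator is just string concatenation: pick the chunks a material needs and glue them into one GLSL source string. A rough sketch, with hypothetical chunk names and placeholder GLSL bodies:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Each named chunk is a self-contained snippet of GLSL; a material is just
// the list of chunk names it needs. (Chunk contents here are placeholders.)
const std::unordered_map<std::string, std::string> kChunks = {
    {"skinning",     "vec4 transformVertex(vec4 p) { /* apply bone matrices */ return p; }\n"},
    {"basic_xform",  "vec4 transformVertex(vec4 p) { return u_modelViewProj * p; }\n"},
    {"diffuse_map",  "vec3 surfaceColor() { return texture(u_diffuse, v_uv).rgb; }\n"},
    {"vertex_color", "vec3 surfaceColor() { return v_color.rgb; }\n"},
};

// Concatenate the requested chunks into a single shader source string.
// Mutually exclusive chunks (e.g. skinning vs. basic_xform) are expected to
// never appear in the same list.
std::string buildShader(const std::vector<std::string>& chunkNames) {
    std::string source = "#version 330 core\n";
    for (const auto& name : chunkNames)
        source += kChunks.at(name);  // throws if a chunk name is unknown
    source += "void main() { /* glue the pieces together here */ }\n";
    return source;
}
```

Real systems layer more on top (validating that mutually exclusive chunks aren't combined, generating the varyings and uniforms each chunk needs, caching compiled programs), but the core idea stays about this small.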




#5288135 Planet Rendering: Spherical Level-of-Detail in less than 100 lines of C++

Posted by swiftcoder on 22 April 2016 - 08:37 AM

That's basically what happens. I set the 3 corner coords of a patch in the vertex shader, and the rest is interpolated. Maybe I should multiply the matrices first to avoid precision loss.

I hadn't looked closely enough to realise you are doing this on the GPU.

I did that at first, but abandoned it several years ago in favour of calculating all vertices on the CPU, due to a number of precision issues in GPU land. YMMV.


#5287976 Planet Rendering: Spherical Level-of-Detail in less than 100 lines of C++

Posted by swiftcoder on 21 April 2016 - 09:03 AM

Basically it works nicely, but once you go up close, float precision is not enough. What is the best solution to that, other than splitting the frustum into near/far?

Generally you can get away with having each vertex defined relative to the center of the patch which contains it, and rendering patches relative to the camera... Not necessarily ideal for a recursive implementation.
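
The usual trick is to do the large subtraction in double precision on the CPU, so the GPU only ever sees small, camera-relative (or patch-relative) offsets that fit comfortably in float. A sketch of the camera-relative variant, with names of my own choosing:

```cpp
#include <array>
#include <vector>

// Absolute positions kept in double precision on the CPU; only small,
// camera-relative offsets (which fit comfortably in float) are uploaded.
struct DVec3 { double x, y, z; };

std::vector<std::array<float, 3>> makeCameraRelative(
        const std::vector<DVec3>& worldPositions, const DVec3& cameraPosition) {
    std::vector<std::array<float, 3>> out;
    out.reserve(worldPositions.size());
    for (const DVec3& p : worldPositions) {
        // The subtraction happens in double precision, so the result is small
        // and accurate; only then do we truncate to float for the GPU.
        out.push_back({float(p.x - cameraPosition.x),
                       float(p.y - cameraPosition.y),
                       float(p.z - cameraPosition.z)});
    }
    return out;
}
```

The view matrix is then built with the camera at the origin, so no large translations ever reach the vertex shader.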


#5287474 Is inheritance evil?

Posted by swiftcoder on 18 April 2016 - 09:52 AM

Implementation inheritance, as provided by mainstream object-oriented languages, is fundamentally evil. I'm a little surprised that statement is even controversial.
 
This isn't a new observation. The change in paradigm came in the mid '90s, when people started noticing the limitations and flaws of implementation inheritance and transitioning to interface inheritance. Robert Martin's original article on the topic is a worthwhile read.

If you look around enterprise software, there aren't a whole hell of a lot of meaningful projects still clinging to implementation inheritance as the default. Unfortunately, game development software is full of such warts, in part, I think, because many of the open-source game engines were conceived in the early 2000s and haven't really seen a rewrite since.
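
For anyone who hasn't run into the distinction, here is a toy contrast (with hypothetical names; this isn't taken from Martin's article): the first version couples the subclass to the base class's implementation details, while the second inherits only a contract and composes the behaviour in.

```cpp
#include <memory>

// Implementation inheritance: Boss inherits Enemy's data and behaviour, and
// quietly depends on how Enemy::takeDamage is implemented.
struct Enemy {
    int health = 100;
    virtual void takeDamage(int amount) { health -= amount; }
    virtual ~Enemy() = default;
};
struct Boss : Enemy {
    void takeDamage(int amount) override {
        Enemy::takeDamage(amount / 2);  // fragile: relies on the base implementation
    }
};

// Interface inheritance: only the contract is inherited; concrete behaviour
// is supplied by composition, so it can vary without touching a base class.
struct IDamageable {
    virtual void takeDamage(int amount) = 0;
    virtual ~IDamageable() = default;
};
struct IArmor {
    virtual int absorb(int amount) = 0;
    virtual ~IArmor() = default;
};
struct ArmoredEnemy : IDamageable {
    explicit ArmoredEnemy(std::unique_ptr<IArmor> armor) : armor_(std::move(armor)) {}
    void takeDamage(int amount) override { health_ -= armor_->absorb(amount); }
private:
    int health_ = 100;
    std::unique_ptr<IArmor> armor_;  // behaviour composed in, not inherited
};
```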


#5287238 BGFX and micro-shaders?

Posted by swiftcoder on 16 April 2016 - 06:19 PM

Do you recommend any books or articles covering such a shader-generating system?

Not to hand. Such systems range from Perl scripts that munge together fragments of GLSL source code all the way up to fancy graph editors like Unreal's.



