
#5314789 Bulk key purchase price

Posted by on 11 October 2016 - 10:12 PM

If the game's not making money any more whatsoever, and someone offers you $10k for 100k keys... then sure, that's $10k that you wouldn't have made otherwise!

If the game is still selling copies though, and someone wants to buy 1000 keys for $100... then, no.

#5314788 Problems reading texture data back

Posted by on 11 October 2016 - 10:08 PM

BYTE* bytes = (BYTE*)locked.pBits; // pBits points at the locked pixel data itself
BYTE test = bytes[0];              // read the first byte of the first pixel

#5314784 Present & Future AI in Games - Voice/Speech

Posted by on 11 October 2016 - 09:52 PM

Text to speech (TTS) isn't and hasn't been an AI problem until recently.

FTFY :D (assuming machine learning == AI)

In most games now, voice actors are used to keep the story and experience with NPC's somewhat linear. Are any companies looking at expanding their approach to AI?

The tech is not there yet. It's such a huge amount of work that it's not going to be a game company that creates this kind of speech synthesizer.
e.g. The link above shows a really advanced, ongoing research project in this area, and it's being done by a company backed by google-monies.

The good thing about the above system is that you could train it using several voice actors so that it's able to speak using their voices. This would let you mix generated/TTS speech and actual recorded speech as required.


Now on the other hand... if you decided up front that you want to make a game where the NPC's use synthesized speech, then you could design a game where all the NPC's are robots with bad abilities to process emotion and speak naturally :D Then current TTS systems would be well suited to your game!

#5314782 Do i need to download the DirectX SDK on Windows 10?

Posted by on 11 October 2016 - 09:20 PM

You can either use the Windows SDK, or you can use the old DirectX SDK from June 2010 (I think that's the last/latest version of it).


These have slightly different versions of the D3D headers/libs (the Windows SDK ones are newer, of course). Some apps may require a few updates to their own code when moving from the old DirectX SDK to the newer Windows SDK. Those "Then they add" instructions are for working with projects that specifically require the old 2010 version of the DirectX SDK.
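
For example (a minimal sketch, and only an assumption about what a given project used), code built on the old SDK's D3DX math helpers typically moves over to DirectXMath when switching to the Windows SDK:

// Old DirectX SDK: d3dx9math.h / D3DXMATRIX / D3DXMatrixPerspectiveFovLH, etc.
// Windows SDK: D3DX is gone; DirectXMath provides the equivalents.
#include <DirectXMath.h>
using namespace DirectX;

XMMATRIX BuildViewProj(const XMMATRIX& view, float fovY, float aspect)
{
    // Replaces D3DXMatrixPerspectiveFovLH + D3DXMatrixMultiply from the old SDK.
    XMMATRIX proj = XMMatrixPerspectiveFovLH(fovY, aspect, 0.1f, 1000.0f);
    return XMMatrixMultiply(view, proj);
}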

#5314773 Problems reading texture data back

Posted by on 11 October 2016 - 07:51 PM

Your bitmap may be D3DFMT_R8G8B8 on disk, but D3DXCreateTextureFromFile probably converts it to D3DFMT_X8R8G8B8. You should query d3dTexture to find out its actual format.


More importantly though, locked.pBits is the address of the pixel data... and you're printing out the address of locked.pBits, which is the address of the address of the pixel data.
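
A minimal sketch of the corrected read-back, assuming D3D9 and a 32-bit format such as D3DFMT_X8R8G8B8 (the DumpFirstPixel helper is just for illustration):

#include <d3d9.h>
#include <cstdio>

void DumpFirstPixel(IDirect3DTexture9* texture)
{
    D3DSURFACE_DESC desc = {};
    texture->GetLevelDesc(0, &desc);               // desc.Format is the actual in-memory format
    std::printf("format = %d\n", (int)desc.Format);

    D3DLOCKED_RECT locked = {};
    if (SUCCEEDED(texture->LockRect(0, &locked, nullptr, D3DLOCK_READONLY)))
    {
        const BYTE* bytes = (const BYTE*)locked.pBits; // the pixel data itself, not &locked.pBits
        std::printf("first pixel bytes: %d %d %d %d\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        texture->UnlockRect(0);
    }
}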

#5314761 Fast Approximation to memcpy()

Posted by on 11 October 2016 - 06:09 PM

This one actually works... within a given assumption :wink:

#include <assert.h>
#include <stddef.h>

void memcpy(void *dest, void *src, size_t size) { assert(dest == src); }

#5314497 Texture mapping coordinates for OpenGL and DirectX

Posted by on 09 October 2016 - 11:54 PM

AFAIK both APIs do use the same UV coordinate systems and it's a common misconception that they're inverted from each other...


GL and D3D do however define a different interpretation for image data that you pass them to create a texture. IIRC GL assumes the first row of pixels that you pass it is the bottom of the image, and D3D assumes the first row of pixels is the top of the image. So if your image loader operates the same way on both APIs, and you simply pass this buffer of pixels to both APIs unaffected, it will flip the image for one of the APIs.
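
If you'd rather handle it in the image loader, a minimal sketch of flipping the rows before upload (assuming tightly packed RGBA8 pixels stored top-row-first):

#include <cstdint>
#include <cstring>
#include <vector>

void FlipRowsInPlace(std::vector<uint8_t>& pixels, int width, int height)
{
    const size_t rowBytes = (size_t)width * 4;     // RGBA8, no row padding assumed
    std::vector<uint8_t> tmp(rowBytes);
    for (int y = 0; y < height / 2; ++y)
    {
        uint8_t* top    = pixels.data() + (size_t)y * rowBytes;
        uint8_t* bottom = pixels.data() + (size_t)(height - 1 - y) * rowBytes;
        std::memcpy(tmp.data(), top, rowBytes);    // swap row y with its mirrored row
        std::memcpy(top, bottom, rowBytes);
        std::memcpy(bottom, tmp.data(), rowBytes);
    }
}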


[edit] agh, apparently the UV systems are Y-inverted from each other :(

#5314394 What controller to use while developing PC game

Posted by on 09 October 2016 - 05:23 AM

Usually, I develop using a wired xbox 360 controller, but I'm just itching to program a game that uses my saitek x52 pro flight system.
There are some really esoteric controllers out there that use their own api. Hope to (insert deity of choice here) you never encounter one in the wild :)

That LCD is probably programmed via a custom saitek API :)
A lot of controllers work like this -- almost everything works under Direct Input, and a few specific features (custom LEDs, LCDs, servos, etc) require a custom API.
e.g. Fanatec race car pedals have a custom API to make the pedals vibrate (for anti-lock braking sims, etc), but everything else is DInput :)

#5314362 Problem with Basic Diffuse Lighting

Posted by on 08 October 2016 - 04:23 PM

The first one has simple ambient lighting and the second one has no ambient lighting.

#5314242 develop mode

Posted by on 07 October 2016 - 06:36 AM

the shaders must be compiled real time

Why? If possible, you should pre-compile them and load binary shaders at runtime, which is very fast.
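
For example, a minimal sketch of loading a blob that was pre-compiled offline with fxc (e.g. fxc /T ps_5_0 /Fo pixel.cso pixel.hlsl), assuming D3D11; the helper name and error handling are just illustrative:

#include <d3d11.h>
#include <fstream>
#include <iterator>
#include <vector>

ID3D11PixelShader* LoadPrecompiledPS(ID3D11Device* device, const char* path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                            std::istreambuf_iterator<char>());
    ID3D11PixelShader* shader = nullptr;
    device->CreatePixelShader(blob.data(), blob.size(), nullptr, &shader);
    return shader;                                 // real code should check the HRESULT
}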
As for the above suggestions of code reloading... you can do this for "engine" code too, if your engine is in a DLL.

#5314116 Scene graph rendering

Posted by on 06 October 2016 - 06:27 AM

I'm writing a game (and its corresponding engine) at the moment where every game object is represented in a scene graph. At the top there's the root node, which may have an infinite number of child nodes (each having recursively an infinite number of child nodes).

Take care with this. Hierarchies are useful for specific problems, e.g. when you want to attach entities to each other (the leg bone's connected to the hip bone), or where you want to occlusion cull a whole room full of objects at a time... but the world is not a big graph. Note that the two examples, above, might be different graphs! A fish might be a child of a pond for occlusion culling purposes, but a child of a hook (which is a child of a rope, which is a child of a rod, of a hand, of a person) for animation purposes. Different systems can/should have their own graphs.

As of now, the camera (represented by a view and projection matrix) is owned by the scene manager, and is passed down on to the components (rootNode->childNodes->entities->components->update(), causing the information to be sent not only to the relevant components) using their respective update methods.

IMHO, virtual void Update() is an anti-pattern. Don't be afraid to write components that do specific things ("update" is the opposite of specific), that don't inherit from a generic interface, and that have function signatures that are honest about the actual inputs and outputs of the procedure they represent.

Virtual functions can turn your program flow (and therefore your data dependencies) into spaghetti, especially if they're used for literally every single kind of operation that the game performs. Clean code should make the data dependencies between sections of code clear and explicit.

Should each renderable entity have its own rendering component responsible for calling the core rendering system (pretty much an OpenGL abstraction layer) or should a separate system iterate the scene graph and sort the entities by type (i.e. cameras, static meshes, animated meshes, lights, etc) and then be responsible for passing the data to the core rendering system?

Yes to both. Entities in ECS do not have a type -- their type emerges from the components that they have. So you'd iterate entities to find bounding-volume components to cull, then you'd iterate the surviving entities to find mesh components and other kinds of renderable components to draw, then you'd sort them and convert them into OpenGL/etc commands.
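
A minimal sketch of that flow with hypothetical component types; the point is the cull -> gather -> sort -> submit shape, not the details:

#include <algorithm>
#include <vector>

struct BoundingVolume { /* centre, radius, ... */ };
struct Mesh { int sortKey = 0; /* buffers, material, ... */ };
struct Entity { const BoundingVolume* bounds = nullptr; const Mesh* mesh = nullptr; };
struct Camera { /* view/projection, frustum planes, ... */ };

static bool IsVisible(const BoundingVolume&, const Camera&) { return true; } // frustum test stub

void RenderVisibleMeshes(const std::vector<Entity>& entities, const Camera& camera)
{
    std::vector<const Mesh*> visible;
    for (const Entity& e : entities)                       // cull via bounding-volume components
        if (e.mesh && (!e.bounds || IsVisible(*e.bounds, camera)))
            visible.push_back(e.mesh);                     // gather surviving mesh components

    std::sort(visible.begin(), visible.end(),              // sort by state/material/depth key
              [](const Mesh* a, const Mesh* b) { return a->sortKey < b->sortKey; });
    for (const Mesh* m : visible) { (void)m; /* translate into OpenGL/D3D commands here */ }
}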

The main loop is responsible for updating each and every entity (and its components) recursively. This update include (amongst other) updating position, recalculating model matrices and whatnot.

The S in ECS stands for systems. Your main loop is supposed to update a collection of systems, which in turn update a particular type of component. As above, don't be afraid to make these steps do some kind of explicit processing/logic instead of a generic "update" function.
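
In other words, something shaped like this (all of these system and type names are hypothetical):

struct World { /* containers of components, the camera, etc. */ };

// Each "system" is just an explicit function over the component types it needs.
void StepPhysics(World&, float) { /* integrates velocity/position components */ }
void UpdateTransforms(World&)   { /* recomputes model matrices from transforms */ }
void DrawMeshes(const World&)   { /* consumes mesh + transform components */ }

void MainLoopIteration(World& world, float dt)
{
    StepPhysics(world, dt);
    UpdateTransforms(world);
    DrawMeshes(world);
}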

You see, the problems I'm having with ECS is that I'm not sure how a component of an entity is supposed to keep track of the necessary view and projection matrices that are needed for rendering.

Most components shouldn't care about view/projection matrices. The rendering system that consumes a collection of mesh-components cares.

I also can't grasp how to decouple updating (i.e. movement, collisions, etc) from drawing if they're part of the same entity.

Entities have no logic - they're just bags of components... so as long as all the rendering data is off in its own component, and the collision component doesn't ever need to know about the rendering data, then it's decoupled.
P.S. IMHO "ECS" is a fad and an anti-pattern and you should never design by patterns at all, whatsoever, ever  :wink:

#5314067 PBR Sanity Check (Black Metal)

Posted by on 05 October 2016 - 09:14 PM

Am I only noticing this because I am using a single point light?

 Yes. This is equivalent to the typical physics lesson situation of "assuming the person is a frictionless sphere in a vacuum"...


Imagine you're in a perfectly black room, where the walls are like a black hole that absorbs 100% of the light that touches them. You're looking at a perfectly clean mirror, which is looking at a small light bulb. The mirror is completely black except for the rays which bounce between the bulb, the mirror, and your eye.


In the real world, there's no such thing as a perfectly black room though. All the walls and objects in a room will reflect light, which is in turn reflected off the mirror and into your eye. This means that in a real simulation, every point on every wall is itself a tiny little point light source!

The cheap way to do this in a game is using some form of Image-Based-Lighting ("ambient cubes" as you mention). This ensures that some amount of light is hitting every object from every angle, just like in a real world situation.

#5313938 Converting resources GPU/CPU

Posted by on 04 October 2016 - 06:12 PM

If you have a 2GB video card, you should definitely make sure that you don't use more than 2GB of resources within any one frame, as this will cause the OS to move resources between GPU/CPU in the middle of your frame, which can add dozens of milliseconds of stalling :)


If you only ever use 1GB of resources per frame, but use 3GB of resources in total, then it's not too bad. Hopefully the OS will move resources between GPU/CPU only occasionally as required.


If the OS is doing a bad job though, then yes, you can micro-manage it yourself by destroying resources and recreating them later.

BTW, a lot of games do destroy GPU resources when they're not required, and do later re-load them from disk again. That's basically how all open-world games work :wink:

#5313934 Managing inputlayouts (and reducing changes)

Posted by on 04 October 2016 - 05:04 PM

I think there's cost in this approach, because now I would always set the needed InputLayout when switching to another shader. Which potentially is the same inputlayout that was already set.   My questions: - how would you handle this?

As above, just cache the last one that you set so you can avoid setting it twice.

Can someone confirm, but I thought that at a driver level, any assignments were checked to ensure that only changes were set.  Anything where you set the same state again, or VB, etc would not impact performance.

I'm pretty sure that the D3D runtime itself doesn't do that filtering (flow is: Your App -> D3D runtime -> NVidia/AMD/Intel Driver -> GPU) -- so even if the driver does discard redundant commands, it's still kinda wasteful to call D3D functions for no reason. In my experience it's (very slightly) beneficial to do redundancy checking yourself, as above.
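
A minimal sketch of that redundancy check for input layouts, assuming D3D11:

#include <d3d11.h>

class InputLayoutCache
{
public:
    void Set(ID3D11DeviceContext* context, ID3D11InputLayout* layout)
    {
        if (layout != m_current)              // only touch the API when the state actually changes
        {
            context->IASetInputLayout(layout);
            m_current = layout;
        }
    }
private:
    ID3D11InputLayout* m_current = nullptr;
};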

In my current D3D11 experience I've always linked an inputlayout (ID3D11InputLayout) to a shader. This is very convenient for several reasons: - when creating/ compiling the shader I have the blob around and can easily create the inputlayout - I have a 'GetInputLayout' member function in the shader class, which returns a pointer to the inputlayout for that shader

The inconvenience here is that this creates a hard link between your vertex shader and a particular in-memory data layout of the vertex attributes.
In general it's possible to have one model with one set of vertex attributes in memory, and another model with a different set, but then render them both with the same VS that only requires a position attribute (e.g. a shadow-mapping shader), by using two different input layouts.
Also note that two different VS's can also share an input layout object if they both declare the same attributes (and are used with the same vertex buffer layout)!

The completely general solution needs a dictionary of input-layouts, which is looked up using the model's buffer layout and the VS's input attributes as the key. This is what the IL does -- it maps the VS attributes to a particular layout in memory. So if you support multiple different mesh formats in memory, you'll need one IL per mesh-layout/VS-attribute-set pair.
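
A minimal sketch of such a dictionary, where VertexFormatId and ShaderInputsId are hypothetical handles identifying the mesh's in-memory layout and the VS's attribute signature:

#include <d3d11.h>
#include <map>
#include <utility>

using VertexFormatId = int;                   // hypothetical: which in-memory vertex layout
using ShaderInputsId = int;                   // hypothetical: which VS attribute signature

class InputLayoutTable
{
public:
    ID3D11InputLayout* Find(VertexFormatId format, ShaderInputsId inputs) const
    {
        auto it = m_layouts.find({format, inputs});
        return it != m_layouts.end() ? it->second : nullptr;  // nullptr means "create and Insert"
    }
    void Insert(VertexFormatId format, ShaderInputsId inputs, ID3D11InputLayout* layout)
    {
        m_layouts[{format, inputs}] = layout;
    }
private:
    std::map<std::pair<VertexFormatId, ShaderInputsId>, ID3D11InputLayout*> m_layouts;
};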

#5313717 PBR for mid-to-high 2010 PC

Posted by on 03 October 2016 - 08:28 AM

Sure. Lots of PS3/Xbox360 games were moving towards cheap PBR models (Cook Torrance with Schlick Fresnel and Normalized Blinn-Phong, plus pre-filtered cube-maps) towards the end of that console generation, and those consoles are roughly equivalent to a 2006 high-end gaming PC :)
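
For reference, a minimal sketch of the cheap specular terms mentioned above (Schlick Fresnel plus normalized Blinn-Phong), written as plain scalar functions of the usual clamped dot products:

#include <algorithm>
#include <cmath>

float SchlickFresnel(float f0, float vDotH)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - vDotH, 5.0f);
}

// The (n + 8) / (8 * pi) factor is the usual normalization that keeps total
// reflected energy roughly constant as the specular exponent n changes.
float NormalizedBlinnPhong(float nDotH, float n)
{
    const float pi = 3.14159265f;
    return ((n + 8.0f) / (8.0f * pi)) * std::pow(std::max(nDotH, 0.0f), n);
}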