

Member Since 28 Sep 2007
Offline Last Active Sep 23 2016 03:53 PM

#5215295 Arbitrary thruster burn solving

Posted by on 08 March 2015 - 02:13 PM

In case anyone is interested, I solved this problem.  I use linear programming to minimize the sum of the absolute values of the differences between the attained force/torque this frame (per dimension) and the required force/torque this frame, where the required force is the force needed to bring the ship to a set target velocity / rotational velocity.  The variables are the throttle percentages of each thruster; the coefficients are the force and torque each thruster produces in each dimension.  The thruster variables are constrained between 0 and 1.


This page has a really great explanation of how to minimize an absolute value with linear programming, even though the absolute value function itself is not linear: http://lpsolve.sourceforge.net/5.5/absolute.htm


Compiling LPSolve library with VC++14 worked fine.  


If anyone has questions about the details of this feel free to ask.

#5157687 Building a game based news site

Posted by on 02 June 2014 - 06:02 PM

I'm sorry, but nothing you said makes any sense.

#5157055 Stumped on collision detection for animated voxels.

Posted by on 30 May 2014 - 04:45 PM

I coded up a prototype of the animation system today.  It lets the user scroll through frames in the UI and set the position of each block at each frame: the user selects a block in the UI, then clicks its new position, which must be at most one square away from its previous position.  Animations must always result in a valid block configuration -- every block must touch at least one other block (including diagonally) -- but validity is checked against the current frame, not the original state.  So a character can be completely transformed by an animation if the user puts the time into it.
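The validity rule above can be sketched like this (my own toy version in 2D, not the prototype's code): a configuration is valid when every block touches at least one other block, counting diagonal contact.

```cpp
#include <cassert>
#include <set>
#include <utility>

using Block = std::pair<int, int>;  // (x, y) grid coordinates

// Valid when every block has at least one of its 8 neighbors occupied.
bool isValidConfiguration(const std::set<Block>& blocks) {
    if (blocks.size() <= 1) return true;  // a lone block has nothing to touch
    for (const Block& b : blocks) {
        bool touching = false;
        for (int dx = -1; dx <= 1 && !touching; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                if (dx == 0 && dy == 0) continue;
                if (blocks.count({b.first + dx, b.second + dy})) {
                    touching = true;
                    break;
                }
            }
        if (!touching) return false;
    }
    return true;
}
```

Running this check against the current frame's positions, rather than the rest pose, gives the behavior described above.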


This video demonstrates the prototype.  What does everyone think?  Will a player be willing to go through the trouble to do this, or do I need to come up with a faster way?

#5156902 Stumped on collision detection for animated voxels.

Posted by on 30 May 2014 - 12:40 AM

I recently started a rogue-like platformer where all characters are voxel-based and the player has an inventory of additional blocks they can add to their character (added blocks must be adjacent to existing blocks).  All properties of the character (guns, ammo, etc.) are determined by the block configuration, and if a block is hit, it's destroyed.  The player earns more blocks for their inventory by killing enemies.


Video of some prototype gameplay.  It shows some combat as well as the system for adding blocks to your character.  For reference: if the blue heart block on a character is destroyed, the whole character dies; if a block is left dangling with no path to the blue heart block, that block falls off.
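The "dangling blocks fall off" rule described above amounts to a connectivity check, which could be sketched as a flood fill from the heart block (illustrative code with hypothetical names, not the game's actual implementation):

```cpp
#include <cassert>
#include <queue>
#include <set>
#include <utility>

using Cell = std::pair<int, int>;  // (x, y) grid coordinates

// Flood-fill outward from the heart; any block the fill never reaches has
// no path to the heart and should fall off.
std::set<Cell> findDanglingBlocks(const std::set<Cell>& blocks, Cell heart) {
    std::set<Cell> reached;
    std::queue<Cell> frontier;
    if (blocks.count(heart)) { reached.insert(heart); frontier.push(heart); }
    while (!frontier.empty()) {
        Cell c = frontier.front(); frontier.pop();
        // 4-connected here; use 8 offsets instead if diagonal contact counts
        const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        for (int i = 0; i < 4; ++i) {
            Cell n{c.first + dx[i], c.second + dy[i]};
            if (blocks.count(n) && !reached.count(n)) {
                reached.insert(n);
                frontier.push(n);
            }
        }
    }
    std::set<Cell> dangling;
    for (const Cell& b : blocks)
        if (!reached.count(b)) dangling.insert(b);
    return dangling;
}
```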


The problem I have run into is animating the player's character -- by animation I mean moving a grid-aligned voxel to an adjacent grid square, with no rotation or scaling.  Originally I planned to have no animations, but I feel they are necessary to give the game character.  However, since the player can add arbitrary blocks to their character, animation is difficult.  I have built a simple prototype for animating the position of individual voxels on a character, and it works well for now.  I've also drafted a simple UI where the player can specify the position of blocks on their model at a given frame.



The user adds blocks by clicking on their actual model, but animating blocks is done through the image of the model in the UI.  I know it looks bad right now; it's just for testing, and making a from-scratch UI look good is a pain.  The user selects a block by clicking on it (the white block in the picture is the currently selected block), scrolls the frame counter to the desired frame, then clicks on an adjacent empty square (block moves must be to an adjacent square, and the block must still touch at least one other block after moving).  This makes the selected block translate to that square at that frame.  Scrolling the frame counter updates the image to reflect the state of the model at that frame, combining all animations.


The issue comes from the implications of the animations for platforming.  Collision detection is done directly against the character model, so when a particular voxel animates, it changes how the character collides with the environment.  A video showing how the platforming works:



When walking on uneven ground, if the block touching the terrain is determined to be a "foot" block (it has no blocks of the character beneath it), and the terrain block it meets has nothing above it, the game boosts the character up the step.


I am stumped on a design solution for handling animations without making collision with the environment unintuitive.  Imagine if your foot animates and whether or not you land on a ledge depends on the current frame of the animation.  Also, if a character is standing directly against a wall and an animated voxel moves into the wall, what should happen?  Will players simply accept this and build their models in a way that doesn't make platforming difficult, or do I need to completely rethink or remove the animation system?

#5156144 Finding the nearest point to a shape.

Posted by on 26 May 2014 - 05:06 PM

You can do this easily by calculating the perpendicular distance to each edge of the polygon, as well as checking each vertex.


The perpendicular distance can be calculated as described here: http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html
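A sketch of the building block for this (my own helper, assuming 2D): project the point onto the edge's line, clamp the projection onto the segment, and measure to the clamped point.  The clamping also covers the vertex checks mentioned above, because a projection that falls outside the edge snaps to an endpoint.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Distance from point (px, py) to segment (ax, ay)-(bx, by).
double distPointSegment(double px, double py,
                        double ax, double ay, double bx, double by) {
    double abx = bx - ax, aby = by - ay;
    double len2 = abx * abx + aby * aby;
    // Parameter of the projection along the segment (0 at A, 1 at B).
    double t = len2 > 0.0 ? ((px - ax) * abx + (py - ay) * aby) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));          // clamp onto the segment
    double cx = ax + t * abx, cy = ay + t * aby;  // closest point on segment
    return std::sqrt((px - cx) * (px - cx) + (py - cy) * (py - cy));
}
```

Taking the minimum of this over every edge of the polygon gives the distance to the shape.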

#5072172 Proper vertex order for rendering fullscreen texture to screen?

Posted by on 23 June 2013 - 01:46 AM

It was the input layout.  I can't believe I did that.

const D3D11_INPUT_ELEMENT_DESC vertexDesc[] = 
I accidentally put 64 where it should've been 16.
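One way to avoid this class of bug, assuming the mistake was in a hand-counted byte offset (a guess from the 64-vs-16 numbers), is to derive the offsets from the C++ vertex struct with offsetof instead of typing them. Hypothetical vertex layout:

```cpp
#include <cstddef>

// A float4 position occupies 16 bytes, so the next element starts at
// byte offset 16 -- offsetof computes this instead of a hand-counted 64.
struct Vertex {
    float position[4];  // e.g. DXGI_FORMAT_R32G32B32A32_FLOAT -> 16 bytes
    float texcoord[2];  // e.g. DXGI_FORMAT_R32G32_FLOAT
};

constexpr size_t kTexcoordOffset = offsetof(Vertex, texcoord);
static_assert(kTexcoordOffset == 16, "texcoord should start at byte 16");
```

The static_assert catches a mismatched layout at compile time rather than as silently garbled rendering.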

#5064700 Determine thruster power for arbitrarily-placed thrusters to move in desired...

Posted by on 25 May 2013 - 12:03 AM

Thanks for the response.


In the video, the thrusters creating torque on the y axis are opposed, so there is no translation.  There are no thrusters on top of the ship, but there is gravity, so they shouldn't be necessary (as far as I know, quadcopters don't need propellers pushing downward to stay level).  It is assumed that ships will always have opposing thrusters -- the idea is that players can customize the locations of thrusters on their ship, so they'll learn soon enough that they can't turn properly without meeting that condition.  It is unstable in the video because I'm using a PID controller, but at the time of recording I had only implemented the P term.  The controller works separately on each variable (velocity, orientation, location, rotational velocity).  I am having some trouble combining these variables, but have had success correctly controlling one at a time.
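For reference, the controller mentioned above can be sketched like this (my own toy PID, not the poster's code), run independently per controlled variable:

```cpp
#include <cassert>

// Classic PID: output = kp*error + ki*integral(error) + kd*d(error)/dt.
struct PID {
    double kp, ki, kd;
    double integral = 0.0, prevError = 0.0;

    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

With only kp nonzero (the P-only state at the time of recording), the output is proportional to the error and tends to overshoot and oscillate; the kd term damps that, which matches the instability seen in the video.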


What I meant by (2) in my first post was whether there's a way to algorithmically determine which pairs of thrusters to fire to execute a turn.  Right now, turning left is hardcoded to activate the front-right and back-left thrusters.  Instead of hardcoding that, I'd like the AI to figure it out.  Is there a better option than having the AI try all possible combinations of thrusters each time a new target is set, until it finds a combination that can create thrust in the correct direction?  (By "try" I mean calculate the projection of each thruster's force internally for each combination, and not actually activate any thrusters until it finds the best one.)
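The brute-force approach described above could be sketched like this (illustrative code, not from the thread): for every subset of thrusters, sum the forces that subset would produce and keep the subset whose net force points closest to the desired direction.  Nothing fires; the forces are only projected internally.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Exponential in thruster count (2^n subsets), so only viable for small
// ships -- which is one reason an LP formulation scales better.
std::vector<int> bestThrusterSet(const std::vector<Vec2>& forces, Vec2 desired) {
    int n = (int)forces.size();
    double bestScore = -1e300;
    std::vector<int> best;
    for (int mask = 1; mask < (1 << n); ++mask) {
        Vec2 net{0.0, 0.0};
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i)) { net.x += forces[i].x; net.y += forces[i].y; }
        double len = std::sqrt(net.x * net.x + net.y * net.y);
        if (len == 0.0) continue;  // subsets that cancel out produce no thrust
        // Cosine of the angle between net thrust and the desired direction.
        double score = (net.x * desired.x + net.y * desired.y) / len;
        if (score > bestScore) {
            bestScore = score;
            best.clear();
            for (int i = 0; i < n; ++i)
                if (mask & (1 << i)) best.push_back(i);
        }
    }
    return best;
}
```

This only scores direction, not torque; a fuller version would score force and torque per dimension together, which is essentially what the later linear-programming solution does.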

#5025833 Effect in DX11?

Posted by on 26 January 2013 - 12:54 PM

This isn't an answer to your question, but most of the functions you are calling (everything in D3DX) are now deprecated.  I'm assuming you're using VS2012, so you should use the tutorials that come with the new SDK.


You can load a pixel shader with:


http://msdn.microsoft.com/en-us/library/windows/desktop/ff476513(v=vs.85).aspx (CreatePixelShader)


And subsequently call http://msdn.microsoft.com/en-us/library/windows/desktop/ff476472(v=vs.85).aspx (PSSetShader)


You will call CreatePixelShader on the binary data of the pixel shader pre-compiled by VS2012 -- it should be a .cso file.  To read the file's data, you can do this:


#include <fstream>
#include <vector>

std::ifstream myFile("../x64/Shaders/UIVertexShader.cso", std::ios::in | std::ios::binary | std::ios::ate); //replace with the name of your shader
size_t fileSize = myFile.tellg();
myFile.seekg(0, std::ios::beg);
std::vector<char> shaderData(fileSize); //vector instead of new[] so nothing leaks
myFile.read(shaderData.data(), fileSize);

#5023058 Getting model from 3DS Max to my game engine

Posted by on 18 January 2013 - 07:30 PM

So is there no way to have 3DS Max export exactly the vertices the triangles should use?  Meaning, if I export a cube with each face in a different smoothing group, it exports three vertices per triangle (totaling 2 triangles per face * 6 faces * 3 vertices = 36 vertices), while edges in the same smoothing group do not have their vertices duplicated?  I've tried every option I could find, and it seems I can either export a list of non-duplicated vertices and rely entirely on smoothing groups to know whether my engine should duplicate them, or export a list with every vertex duplicated for every triangle, in which case the engine doesn't have to duplicate anything to render the model correctly but wastes a lot of memory.
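A common way to get the best of both in the importer itself (a sketch with my own naming, not an exporter feature) is to key each triangle corner by its (position index, normal index) pair, emit one vertex per distinct pair, and rebuild the index buffer.  Corners shared across smoothed faces collapse into one vertex; hard-edged corners with different normals stay separate.

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

using Corner = std::pair<int, int>;  // (position index, normal index)

// Returns the rebuilt index buffer; uniqueVerts receives one entry per
// distinct corner, in first-seen order.
std::vector<int> buildIndexBuffer(const std::vector<Corner>& corners,
                                  std::vector<Corner>& uniqueVerts) {
    std::map<Corner, int> seen;
    std::vector<int> indices;
    for (const Corner& c : corners) {
        auto it = seen.find(c);
        if (it == seen.end()) {
            it = seen.insert({c, (int)uniqueVerts.size()}).first;
            uniqueVerts.push_back(c);
        }
        indices.push_back(it->second);
    }
    return indices;
}
```

With this, the exporter can dump fully duplicated corners and the importer still ends up with a minimal vertex buffer.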


Also, apparently you can export FBX as ASCII.  However, the C++ SDK for FBX doesn't work with VS2012 (my main IDE).  There's a link to a beta version, but they have to add you manually when you email them (they haven't gotten back to me yet).  VS2012 lets you edit FBX, Collada, and OBJ models inside the IDE.  It's pretty cool.

#5022651 Getting model from 3DS Max to my game engine

Posted by on 17 January 2013 - 03:29 PM

Yes, I am aware that I should not directly import these intermediate formats into my engine, but it is a necessary step at the moment so I can learn how they work.  I need to be able to parse these intermediate formats if I want to convert them to whatever format my engine eventually uses.


I currently parse the ASCII model files directly into my engine.  It is slow, but it is a necessary step because I know very little about how files like this are stored in the real world, and how graphics theory translates into practice.  I don't yet know the best way to design a format for use directly with the engine.


It is better for me to first load models from human-readable formats so I can understand everything about them.  Once I know exactly what is practical to store in a final format and what is better calculated in real time, I will settle on a format.


.FBX is useless to me for this because I can't open the file and look at it.


Also, Collada is not working very well either.  I can't figure out how to export a cube the way it should be exported.


With the ASCII format, 3DS Max would not generate duplicate vertices for non-smoothed faces, so my engine generates additional vertices for faces that are not meant to be smooth, based on the smoothing groups.


Collada does not seem to support smoothing groups.  Instead it duplicates every vertex no matter what -- even if I set the entire object to a single smoothing group.  If I turn off the duplication, the triangle list in the exported file still references vertex indices that weren't output by the exporter.


For example:  this is the output of the collada exporter for a model with a cube with all faces set to the same smoothing group:



<triangles count="12"><input semantic="VERTEX" offset="0" source="#Box001-VERTEX"/><input semantic="NORMAL" offset="1" source="#Box001-Normal0"/><input semantic="TEXCOORD" offset="2" set="0" source="#Box001-UV0"/><p> 1 0 1 0 1 0 3 2 3 0 3 0 2 4 2 3 5 3 7 6 7 6 7 6 4 8 4 7 9 7 4 10 4 5 11 5 5 12 11 4 13 10 0 14 8 5 15 11 0 16 8 1 17 9 7 18 15 5 19 14 1 20 12 7 21 15 1 22 12 3 23 13 6 24 19 7 25 18 3 26 16 6 27 19 3 28 16 2 29 17 4 30 23 6 31 22 2 32 20 4 33 23 2 34 20 0 35 21</p></triangles>
The exporter still thinks there are 6 vertices per side of the cube (36 total) even though the output of the vertex list is just 8 vertices total.  This is completely broken.
How do I get around this?
EDIT:  I think I interpreted those lines wrong -- the offset attribute means that, for each triangle corner, three of those integers together form a single vertex description, so every third integer in the list is a vertex index, and the indices that go higher than the number of vertices must refer to the normals.
But I still don't understand why there are more normals than vertices.  If they are all smoothed together, why isn't it exporting a single normal per vertex?  Is it possible to correctly reconstruct the model from Collada?  How can I know whether each triangle needs its own three distinct vertices whose normals aren't averaged with those of the surrounding faces?

#5022407 Getting model from 3DS Max to my game engine

Posted by on 16 January 2013 - 08:07 PM

I'm doing everything from scratch for this, so FBX is not an option (Wikipedia says the format is not documented and you have to use Autodesk's importer code).


I'll try the Collada exporter that you linked, thanks.

#5005892 Protecting game data against theft or modification

Posted by on 30 November 2012 - 06:06 PM

The only way to be sure is to only let people play your game while you stand behind them watching.

#5003189 Wrapper for Direct X on WinRT

Posted by on 22 November 2012 - 04:05 AM

You use C++/CX to interface with the operating system.  All the code you would write in a C++ Direct3D program for Win32 (XP, Vista, 7) will be identical in a C++ Direct3D Windows RT application.  Almost all of the Direct3D code is the same.  The only differences are the code used to create the window and bind it to the device context, and the code used to get user input (event handlers instead of DirectInput).

None of the code in C++/CX is managed.  The WinRT components use reference counting, but you do not have to use the components except to create a window.  There is no overhead from the XAML layer.

#4959955 DX11 Real-Time Raytracing Tech Demo

Posted by on 17 July 2012 - 04:58 AM

This might look pretty, but it's a relatively easy scene for a ray tracer to render.  It looks like it's mostly geometric primitives, which can be intersection-tested analytically as whole shapes instead of triangle by triangle.  The effects are nice, but they're not the bottleneck for a ray tracer.