neonic

Members
  • Content count

    173

Community Reputation

367 Neutral

About neonic

  • Rank
    Member
  1. My Story of Getting A Job at Naughty Dog

    Lucky dog...   :)
  2. Might also make sense to switch over to a spherical coordinate system.  Pick a random azimuth from 0 - 360 degrees, a random inclination from 0 - 180 degrees, and a random distance from 0 - radius, then convert that to Cartesian coordinates.   http://en.wikipedia.org/wiki/Spherical_coordinate_system It's just what I would do; you don't necessarily have to.
  3. Possible problem with rendering VBO's?

    Without knowing the exact answer to your problem, might I simply suggest picking up & learning how to use gDEBugger.  It has been an immense help for me while debugging some pretty complicated GL issues.  With this tool, you can verify that your data in the VBO has been uploaded to the card correctly to make sure your resources actually exist.  It can also break on GL errors, which is mighty helpful.
  4. I wouldn't implement a drawLine in a pixel shader.   The way this works is that you create a piece of 3D geometry representing the "line", positioned in 3D.  You could convert the 2D window coordinates to 3D world coordinates and then render that.  For a shader to do anything, you need geometry that generates pixels for your object in screen space.   The fragment shader you have bound during your draw call only determines what color each of those pixels ends up being... the 3D geometry is still required.

    Again, I point you to the link I posted; it may make this a little clearer.  Before any rendering is done, you generate your vertex buffers, which uploads all of the vertex locations to the video card.  Once that data is in there, you can draw all of that geometry with a single function call.  You set your shader active, then draw the geometry.  That's what executes the shader.

    Shaders don't represent a "program" in the way you would normally think of a desktop program.  They're simply a tool to help you deliver high-fidelity graphics.
  5. If this is for maintaining a build solution cross-platform, have you looked into using CMake? While this may not answer your question about what's standard, I thought your comment about there not being a good cross-platform build solution warranted a reply.   You can write your own CMake modules for finding specific packages, and some packages even ship their own CMake Find modules.
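For illustration, a minimal top-level CMakeLists.txt might look like this (the project and target names are made up; `find_package(OpenGL)` uses the FindOpenGL module that ships with CMake):

```cmake
# Hypothetical top-level CMakeLists.txt for a small game project.
cmake_minimum_required(VERSION 3.10)
project(MyGame CXX)

# CMake ships Find modules for many common packages.
find_package(OpenGL REQUIRED)

add_executable(mygame src/main.cpp)
target_link_libraries(mygame PRIVATE OpenGL::GL)
```

From there, `cmake` generates native build files (Makefiles, Visual Studio solutions, Xcode projects) per platform.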
  6. Yes, I think you may have a misunderstanding of the programmable pipeline.  It's incredibly powerful.   Fragment shaders are per pixel.   Read this: http://msdn.microsoft.com/en-us/library/bb205123%28VS.85%29.aspx
  7. What kind of variables are you having to feed from the main Game class to the Unit?  This sounds to me like it's indicative of a larger design problem.  Can you give a little bit more information on how your current system is set up, what is being passed between them, and what the relationship is between the main level & the unit class?
  8. Animation state machines

    This may not be the perfect solution, but from what I'm reading of your system, it makes sense to me to refactor it a little bit.  I wouldn't have concrete classes representing states during the "transition" between two animation states.  If I were designing a system like this, I would likely create a generic class to describe properties of the animation.  I would also include several transition types: a cross-fade blend, a directional blend, etc.  These would operate on two AnimationStates, from & to, and could accept information such as duration, weights, curves, etc.  Then just run the operation for the duration & set the final resulting AnimationState at the end.

    Edit: upon re-reading your question, it may have been more about making the animation sequence look polished.  I would have an idle, idle_to_walk, and walk_to_idle for starters.  When you start walking your weight shifts forwards slightly, and when you stop, you shift your weight backwards a little bit.  The effect is exaggerated when running.  Hope this helps, even a little bit.
  9. Could you take the dot product of the forward vector (relative to the ship) and the desired heading, and if it's less than 0, flip the sign?
  10. engine/editor interaction

    I did something similar, only with WinForms for the editor application.  I wrote a C# WinForms application that spawned my engine.exe (C++) with a command line argument.  This made the engine initialize without creating a window, and instead open a network socket. It waited for the C# app to communicate with the engine, and when it did, the C# app passed in the HWND of a panel in my WinForms app.  From there, whenever anything was done in the editor, it communicated via the socket any changes that needed to be reflected in the engine application.   It worked pretty well for my needs.
  11. Which game engine to make Zelda-Style 2D game?

    The new Unity3D version will also work quite well now with 2D, as it was a major feature push for the 4.X branch.
  12. The way this is usually done is that you use some standard export format from the 3D tool to read it into your game.  In your engine, you should have some asset conditioning utilities, and this is traditionally where a conversion to a more efficient format takes place.  You bake all of the data down to a tightly packed binary file whose structure you know.  This removes all of the unnecessary information from the file, ensures that you can read the file in the order you initialize things, etc.   You can look into the FBX SDK from Autodesk (I've had some luck with that), or, as Nanook suggests, Assimp is a good library that's really not difficult to use.
  13. This may help: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
  14. It worked just fine for me.  The only thing that was odd was keeping the two in sync: when the C# application moves or is resized, you need to inform the C++ engine that the viewport has changed.     Otherwise, it's like sliding a piece of paper with a hole cut in it over a monitor with 3D content on it (if that makes sense).
  15. Bacterius' suggestion is exactly what I did.  I had an engine application that would run, and a C# based Editor application for composing the scenes.  When you started the editor, it started a process (it wasn't a child process; it was just running on its own, doing its own thing) with the -editor flag.  This flag, in my engine, simply opened a socket & waited for a connection.  The Editor application would connect, and send the HWND of a control in the WinForm (I believe I used a Panel? can't remember, though). This allowed my engine to directly create the rendering context & such using the HWND of the C# control.   The rest of the editor application worked by sending messages across the network to the C++ engine.   http://msdn.microsoft.com/en-us/library/system.windows.forms.control.handle.aspx