Ungunbu

Member
  • Content Count

    33
  • Joined

  • Last visited

Community Reputation

300 Neutral

About Ungunbu

  • Rank
    Member
  1. Ungunbu

    OpenGL 2D RPG examples/source code

    You should address the two concerns separately. First, learn how to draw sprites and text (I assume you'll want text in your game). If you want to use OpenGL, go for it; there are plenty of tutorials online that will help you with that. Creating a game of any kind is a different matter, though. My suggestion is to keep your rendering engine abstracted away from the game logic: you can design your RPG without even knowing which library you'll end up using to render sprites on screen. Imagine having a basic IRenderer interface with methods like DrawTexture() or DrawText(); how you implement that interface won't change the design of the rest of the project in any way. A rough sketch of that separation is below.
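Purely as an illustration of that separation (the interface and class names here are placeholders, not taken from any particular library), a minimal C# sketch could look like this:

```csharp
// Hypothetical rendering abstraction: the game logic only ever talks to this interface.
public interface IRenderer
{
    void DrawTexture(string textureId, float x, float y);
    void DrawText(string text, float x, float y);
}

// A trivial backend, included only to show the implementation is swappable;
// an OpenGL-based renderer would implement the same interface.
public class ConsoleRenderer : IRenderer
{
    public void DrawTexture(string textureId, float x, float y)
    {
        System.Console.WriteLine($"[sprite] {textureId} at ({x}, {y})");
    }

    public void DrawText(string text, float x, float y)
    {
        System.Console.WriteLine($"[text] \"{text}\" at ({x}, {y})");
    }
}
```

The game logic would only ever hold an IRenderer reference, so swapping the backend later doesn't touch the rest of the code.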
  2. That's your opinion; mine is that it does, a bit.
  3. You can put your sprites into an atlas with a fixed frame size. You would need to align the sprites so the animation looks correct. Once your atlas is ready you can create a tool that crops the sprites (to shrink the atlas) and also generates a data file. The information you need to store for each sprite is the source rectangle and the anchor offset relative to one corner of that rectangle. When loading the atlas you parse the data file to reconstruct the original positioning of each sprite. That should do it; a possible data layout is sketched below.
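As a purely illustrative sketch (type and field names are made up for this example, not from any specific tool), the per-sprite record generated by the cropping tool could look like this:

```csharp
// Hypothetical per-sprite entry stored in the atlas data file.
public struct AtlasEntry
{
    // Cropped region inside the atlas texture, in pixels.
    public int SourceX;
    public int SourceY;
    public int SourceWidth;
    public int SourceHeight;

    // Offset of the cropped rect relative to one corner of the original
    // fixed-size frame; adding it back at draw time restores the alignment.
    public int AnchorOffsetX;
    public int AnchorOffsetY;
}
```

At draw time you render the cropped source rect at frame origin + anchor offset, which reproduces the positioning each sprite had before cropping.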
  4. If you are already familiar with a specific language or engine it would make sense to go with that. Otherwise you can take a look at Unity3D.
  5. I suggest you get on with your game and only worry about rendering performance when it actually becomes a problem.
  6. Have you considered Unity3D? You can code in C# and build for the web. It also supports many more platforms, including PC, mobile, and consoles. The C# language feels a bit like old C, but it's incredibly powerful.
  7. What you say about Unity isn't true. The whole rendering process is optimized for 2D when you use the relevant component (SpriteRenderer). An engine supporting 2D games doesn't just "lock" the third dimension; it also performs a lot of optimizations when rendering 2D objects, and it certainly doesn't submit one draw call per sprite/quad. You could do everything with a pure 3D engine, but you would need to do all the batching work yourself (see the sketch below). TL;DR: a modern 2D engine can be considered a 3D engine that performs heavy optimizations under the hood to improve the rendering performance of 2D objects. Whether or not you want that feature in your engine is entirely up to you. Since you mention Unity, also consider that it has a few more 2D features, like sprite packing (automatic creation of sprite atlases), 2D animation support, and even an upcoming integrated tile editor (which looks quite powerful).
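To make the batching point concrete, here is a rough, untested C# sketch (Unity-flavoured; the class and method names are mine) of what "doing the batching work yourself" means: many sprite quads are collapsed into a single mesh so they can be submitted in one draw call.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical manual sprite batcher: accumulates quads and builds one mesh,
// which is roughly what a 2D engine's sprite batching does behind the scenes.
public class QuadBatcher
{
    private readonly List<Vector3> vertices = new List<Vector3>();
    private readonly List<Vector2> uvs = new List<Vector2>();
    private readonly List<int> indices = new List<int>();

    public void AddQuad(Vector2 position, Vector2 size, Rect uvRect)
    {
        int start = vertices.Count;

        // Four corners of the sprite quad (z = 0 for 2D).
        vertices.Add(new Vector3(position.x, position.y));
        vertices.Add(new Vector3(position.x + size.x, position.y));
        vertices.Add(new Vector3(position.x + size.x, position.y + size.y));
        vertices.Add(new Vector3(position.x, position.y + size.y));

        uvs.Add(new Vector2(uvRect.xMin, uvRect.yMin));
        uvs.Add(new Vector2(uvRect.xMax, uvRect.yMin));
        uvs.Add(new Vector2(uvRect.xMax, uvRect.yMax));
        uvs.Add(new Vector2(uvRect.xMin, uvRect.yMax));

        // Two triangles per quad.
        indices.Add(start); indices.Add(start + 2); indices.Add(start + 1);
        indices.Add(start); indices.Add(start + 3); indices.Add(start + 2);
    }

    public Mesh Build()
    {
        var mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(indices, 0); // one submesh: all quads in a single draw call
        return mesh;
    }
}
```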
  8. Ungunbu

    unity3d forum

    You are already posting on the right forums.
  9. Ungunbu

    Deleted Thread

    Just wanted to add: AFAIK Unity can do that automagically, but I'm still learning the tool. Anyway, this point makes #3 valid once again.
  10. Ungunbu

    Deleted Thread

    As expected, it was a normals issue. To fix the model:

    1) Import the .3ds file into Blender
    2) Hit 'A' to select all objects
    3) Join them by hitting 'Ctrl+J'
    4) Go to edit mode by hitting 'Tab'
    5) Make sure all faces are selected by hitting 'A'
    6) Mesh > Normals > Recalculate Outside, or 'Ctrl+N'
    7) Export it in .fbx format

    EDIT
    8) Normals computed this way aren't very smooth. I'm sure it can be fixed from inside Blender but I don't know how. My (very easy) solution: when you import the .fbx in Unity, look in the inspector for the 'Normals' property, change it from 'Import' to 'Calculate', and all rounded surfaces will look as smooth as in your Blender render. Don't forget to click 'Apply'.

    Here's the fixed .fbx file: download link

    Now you can import it in Unity and it will display correctly. If you are using Unity 4, remember to activate shadows on your directional light. Once again, I strongly recommend you start using Unity 5.

    VERY IMPORTANT WARNING

    I must tell you that this model is practically useless for real-time applications since it has an absurdly high polygon count.

    The above render shows some imperfections in the normals; I'm sorry, but I couldn't get a better result than this one...
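If you want to apply step 8 automatically instead of clicking through the inspector, an editor script along these lines should do it. This is a minimal sketch assuming a Unity version that exposes ModelImporter.importNormals; the class name is arbitrary.

```csharp
using UnityEditor;

// Hypothetical editor script (put it in an 'Editor' folder): switches every
// imported model's 'Normals' setting from 'Import' to 'Calculate', which is
// the same change described in step 8, just applied automatically.
public class CalculateNormalsOnImport : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        var importer = (ModelImporter)assetImporter;
        importer.importNormals = ModelImporterNormals.Calculate;
    }
}
```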
  11. Ungunbu

    Deleted Thread

    From the look of your screenshots:

    1) You are missing shadows. In Unity you need to add a Directional Light (you probably already have one in your scene) and then enable shadows on it from the inspector (select the light in your scene hierarchy and look for that setting in the panel on the right). In Unity 5, new scenes start with a Directional Light that already has shadows enabled. The same setting can also be applied from code, as sketched below.

    2) The shader looks different. It seems you are missing specular highlights (but I'm not sure about this). Look up "materials" in the Unity manual.

    3) Some parts of your geometry are messed up. As already mentioned in other posts, it appears to be a culling issue. I think 3D modelling tools can fix this automatically, but I'm not very experienced in 3D authoring.

    Generally speaking, I strongly recommend you use Unity 5 (if you aren't already doing that) since it unlocked ALL graphics engine features for free.

    Good luck with your project!

    - ung

    EDIT: If you are willing to share your model, I may take a look at it.
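For point 1, a minimal sketch of the equivalent script-side change (the component name is mine; this only mirrors the 'Shadow Type' setting in the inspector):

```csharp
using UnityEngine;

// Hypothetical helper: attach to the Directional Light to enable shadows from code,
// equivalent to changing 'Shadow Type' in the inspector.
public class EnableShadows : MonoBehaviour
{
    void Start()
    {
        var directionalLight = GetComponent<Light>();
        if (directionalLight != null)
        {
            directionalLight.shadows = LightShadows.Soft; // or LightShadows.Hard
        }
    }
}
```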
  12. Maybe I could render a simplified version of the level mesh, outputting only its depth.
  13. Hi everyone,

    Say I have a pre-rendered background image and I want to display 3D characters on top of it. I also have a depth map; to create it, the same background is rendered again, but this time the resulting texture stores depth values instead of colors. We then have two textures ready to use: color and depth. What I ideally need is a way to copy the visible portion of the depth map into the depth buffer prior to rendering the 3D objects. I say "visible" because the backgrounds will most likely be very big; I'll probably need to create sub-images from them, like a tile map with very big tiles. The same applies to the depth map, of course.

    I'm a bit new to Unity, but I think I could render a quad using a custom shader that samples the depth map (the visible part of it) and writes the sampled values to the depth buffer. Sadly, I have no idea how to implement this (in Unity), so I'd be very grateful if you could:

    - Comment on my approach
    - Suggest a way to implement it or point me to relevant info

    I'm sorry if my post is a bit confusing; please don't hesitate to ask for more details.

    Thank you very much,
    ung

    EDIT: there's a typo in the title but I can't edit it; if you know how to do it please tell me. Thank you.
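For what it's worth, here is one untested way the C# side of that idea could be wired up in Unity. It assumes a custom shader (not shown, referenced here only through a material) that samples the depth texture and writes the values to SV_Depth; whether a plain Blit at this camera event is the right injection point is also an assumption.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical setup for the approach described above: before opaque geometry is
// rendered, draw the pre-rendered depth tile through a material whose shader is
// expected to output depth rather than color.
[RequireComponent(typeof(Camera))]
public class PreRenderedDepthInjector : MonoBehaviour
{
    public Texture2D backgroundDepthTile;  // the visible tile of the depth map
    public Material depthWriteMaterial;    // material using the assumed depth-writing shader

    void OnEnable()
    {
        var cmd = new CommandBuffer { name = "Inject pre-rendered depth" };
        cmd.Blit(backgroundDepthTile, BuiltinRenderTextureType.CurrentActive, depthWriteMaterial);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cmd);
    }
}
```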
  14. Ungunbu

    Moving around in a 3D World

    A possible solution is to make the entity that asks for a path specify the kinds of objects that can block it (e.g., the crates). The NavMesh will take that into account; a rough Unity-style sketch is below.
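As an illustration only (the "Crate" area name is an assumption, and this targets Unity versions where the NavMesh API lives in UnityEngine.AI), the idea of an entity declaring what can block it roughly maps to the agent's area mask:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Hypothetical agent that treats geometry marked with a "Crate" NavMesh area
// as blocking: excluding the area from its mask means paths won't cross it.
[RequireComponent(typeof(NavMeshAgent))]
public class CrateAvoidingAgent : MonoBehaviour
{
    void Start()
    {
        var agent = GetComponent<NavMeshAgent>();
        int crateArea = NavMesh.GetAreaFromName("Crate"); // returns -1 if the area doesn't exist
        if (crateArea >= 0)
        {
            agent.areaMask &= ~(1 << crateArea); // exclude the "Crate" area from pathfinding
        }
    }
}
```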
  15. Ungunbu

    Moving around in a 3D World

    @Buckeye Thank you for the lengthy reply, it's very inspiring. Also, I think that a NavMesh is the natural evolution of waypoint systems.

    Clearly this approach makes things easier in case the AI is thrown around by, say, an explosion: collisions are handled automatically as part of the physics simulation.

    This is exactly what I was talking about. As you say, the NavMesh system must take into account paths that are blocked by other actors or props; one way Unity handles that is sketched below.
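As a minimal sketch of handling those dynamic blockers in Unity (the component name and the box size are placeholders): a movable prop can carve a hole in the NavMesh while it sits still, so agents path around it.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Hypothetical movable prop (e.g. a crate) that blocks NavMesh paths:
// a carving NavMeshObstacle removes the blocked region from the NavMesh.
public class MovableBlocker : MonoBehaviour
{
    void Awake()
    {
        var obstacle = gameObject.AddComponent<NavMeshObstacle>();
        obstacle.carving = true;             // cut the blocked area out of the NavMesh
        obstacle.shape = NavMeshObstacleShape.Box;
        obstacle.size = Vector3.one;         // rough bounds; adjust to the prop's mesh
    }
}
```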