About SephireX


  1. Hi, I have a few questions regarding best practice for engine design. 1. Should smart pointers be used at a low level in the engine, such as in the renderer, or do they cause a significant decrease in performance? 2. Should the physics engine be wrapped in an abstraction layer to allow for other physics engines? This would allow a change of physics engine later, although the wrapper would likely have to change to accommodate the new one. 3. The Banshee3D engine is an example of an engine that defines a common interface for physics, sound, the renderer and the rendering API, and creates the implementations as plugins. This seems like a nice, flexible approach compared to having the implementations as part of the main codebase. Are there any downsides to doing this? Of course, this engine is open source and intended to be general purpose; I think the author's idea was to let the user switch to different third-party libraries more easily.
  2. Apologies, I didn't phrase my question well. I'm very familiar with DirectX 11; I first learned to use it in 2012. I intend to use Vulkan but just want to know, from a customer standpoint, is DirectX 11 worth supporting anymore? I'd like the renderer to fully utilize Vulkan and other low-level APIs, and I'd rather not have to deal with a higher-level API if possible.
  3. If building a 3D graphics engine in 2019 with the hope of creating a game for PC, would it be worth supporting the DirectX 11 graphics API? Vulkan would seem like the best API to support first, and all Nvidia GPUs from Kepler onward support Vulkan, but does it perform well enough on Kepler/Pascal GPUs relative to DirectX 11?
  4. Are there too many Unity Developers?

    @ExErvus Using a game engine does not just involve "drag and drop". If you want to create complicated gameplay systems, you can't do that without writing gameplay code; a huge amount of programming work goes into writing the actual game on top of the game engine. Companies hire gameplay and AI programmers and most of the time do not require that these programmers have knowledge of engine architecture, graphics, sound or physics. Not even job postings for game engine developers require that the applicant has created their own engine. Actually, it's better to create smaller projects that showcase knowledge of graphics or physics than to write a full game engine, which involves a lot of mundane, uninteresting work.

    I think a lot of programmers (me included) started writing their own engines a few years ago, back when engines such as UE4 and Unity weren't available or as appealing as they are now. They developed knowledge of how to create game engines and refuse out of pride (ego) to use a third-party solution for their personal projects. What ends up happening is that they try to create a game with their own engine, which is far inferior to the third-party solutions, and the game either never gets released or, when it does, it pales in comparison to what it could have been if created with a third-party solution.

    So many indie developers are looking for people with Unity experience to work on the client part of their game, while companies looking for engine programmers usually require that applicants have already worked on x number of shipped titles. With the number of companies using Unity and UE4 now, it's more beneficial to have experience working with those engines for getting into the industry. Many engine programmers become so after joining a company as a general programmer and gaining experience working on a game engine over time.
  5. @ExErvus I can't see how the depth buffer would store this information. What if the floor is off camera or occluded by another object?
  6. Thanks for the replies. I understand why wrappers are used. Maybe "encapsulation" is the wrong word; I'm talking about providing a common physics interface that would allow different physics engines to be used with the same game engine. I know that if this interface is created by a programmer who knows only one physics engine, it may not fit other physics engines that well, but the interface could be updated if it was later decided to switch physics engine. At least the engine would never reference a specific physics engine directly, only the interface, making the code easier to update. Then again, it might be over-engineering.

    While on the subject of game engine design, I have another question regarding the level editor. Most level editors have snap functionality allowing the user to let an object snap to the ground. For the editor to know where the ground is, it has to do a collision query, perhaps a ray cast downwards to find the nearest point of intersection. Would a game engine use the physics engine for this ray cast, or would it traverse its own spatial hierarchy and do the intersection tests itself? For example, say the user wants to place a chair on the ground and can hit a key that will instantly place the chair on the floor. The floor will have a static actor in the physics engine because it's not going to move in the game. But what if the user wants to move the floor after placing the chair? Would the editor make all static objects dynamic actors in the physics engine for the sake of editing, or would the physics engine be used by the editor at all? Sorry for the long-winded question.
  7. Hi there. I am interested in game engine design, and I have a few questions regarding the use of encapsulation in game engines. I know game engines have a graphics API wrapper for platform independence and an engine API that hides the sound engine, renderer and physics engine. But at the engine level, are the sound engine and physics engine encapsulated like the graphics API? For example, if an engine uses PhysX, will it call PhysX directly or call a physics interface that could be implemented for different physics engines? Obviously this would have the benefit of being able to change the physics engine more easily, unless physics engines are so different that a wrapper would be too messy to create. Is it bad design not to wrap the physics and sound engines?
  8. @Hodgman For AAA studios, a royalty of 5% would not be feasible; that's why Epic offers a custom license, which I think is around the 1 million mark. They may also offer a lifetime license, but it probably costs a good bit more. For indies, it does make sense, especially if they are strapped for cash. Unreal allows small studios to make big games, and if their games are successful, they may be able to afford a custom license in the future. Not having to pay a cent for tech until you have made money is an incentive for indies to use Unreal.
  9. Thanks again for the replies. Always good to hear other people's opinions.

    @Hodgman Sorry, I misinterpreted what you were saying the first time. I would agree that UE4's code is over-engineered, but I suppose the programmer suffers so the content creators have an easier time.

    @Shaarigan I agree that most existing AAA devs won't move to third-party engines in the foreseeable future. However, there are so many indie start-ups using third-party engines. Most of these indies will fail, but some will succeed and grow into teams capable of making AAA games, and they will likely continue to use the same technology. I therefore can't see engines like Unity and UE4 going anywhere. The barrier to entry is very low and so many teams are using them. For indie developers that plan to make big games, there really isn't much choice but to use third-party engines; making AAA-quality tools that are stable would take too long. In my case, I have a full-time job as a programmer, though unfortunately not in the games industry, and can only dedicate 10 hours a week to working on games part time. There are many virtual teams made up of people in similar circumstances, and third-party engines really come in useful in these situations. Knowing how to write gameplay code in Unity or UE4 presents a lot of opportunities for joining other teams using these engines. It's also easy to carry gameplay code over from one game to another.
  10. @Ravyne Though if you look at Unreal Engine 4, you will see a huge variety of different types of games being made with it. UE4 is very different to UE3: it supports large open worlds out of the box, and small studios are making very big games with it. For example, Ark: Survival Evolved was created by a virtual team of indie developers. The engine is very flexible, and although it may not be optimal for a specific genre, the source can be modified to make it so. An indie team starting out will not be making boundary-pushing games; if the team is successful and grows, it can hire more programmers and heavily modify the engine for more ambitious projects. The Oculus team replaced UE4's renderer with their own for VR optimization and used it in the games they are making; they've also made this branch of the engine publicly available.
  11. @Josh Petrie I suppose in the long run it balances out; it just comes down to money. Everyone has a source license to Unreal Engine 4 now, with just a 5% royalty unless you get a custom license.

    @Glass_Knife I suppose artists can create textures and modelers can create and rig 3D models. Designers can work on design documents, or maybe even use a third-party engine to prototype.

    I personally prefer to know how the tech works but not reinvent the wheel. UE4 suits me and I'll probably use it. When starting out, it's also easier to move to other teams and carry over gameplay code because so many other teams are using the same engine.

    @Norman Barrows Nicely put.
  12. @frob I'm talking about a team's decision to use a third-party or an internal engine. The current trend is that AAA studios make their own and indie devs use third party. Tools are always changing, but a team has full control over how an internal engine changes or evolves and does not have the same control over a third-party engine. My point is that I believe, in time, AAA studios will sacrifice that control in favor of the better tools and workflow that third-party engines will provide.

    @Hodgman As far as Unreal Engine 4 being derided, I doubt that. A number of Microsoft studios are using it. Square Enix is using it for the FF7 Remake and Kingdom Hearts 3. It was used by Namco for Tekken 7 and by Capcom for Street Fighter V. Sony Bend are using it for a PS4 exclusive, and Respawn Entertainment are using it for the Titanfall sequel. The list goes on. The code in the engine is nicely structured and very readable. The codebase might be bloated because it supports many different platforms, but the engine will only be compiled for the target platform, so it's not a big issue. Hot code compilation in the editor speeds up iteration times, and the engine has a nice C++ API. Blueprints is also a nice scripting language, though I would prefer Lua.
  13. Thanks for the replies.

    When I said that AAA studios would use third-party engines, I meant engines like UE4, CryEngine and Lumberyard that give access to source code, and I meant that these studios would modify the engines as needed. Therefore, there would still have to be engineers on the team who can write low-level engine code. There are so many common parts of game engines (core utilities, maths library, memory allocation, multi-threading and resource management, to name a few) that may not differ depending on the game. If every AAA game studio was building these from scratch, surely that would take more time. EA lets many of its studios use the Frostbite engine. Eidos Montreal took IO Interactive's Glacier engine as a base to create the Dawn Engine. Arkane Studios' Void engine is a modified id Tech 6. Reinventing the wheel seems counter-intuitive.

    "I never want to hear that all AAA game studios have halted work on their own tech and are now exclusively using UE4/Unity/CryEngine/<Insert Generic Engine here>. I feel that would slow down the development of game tech in general and reduce the number of people who know how to code lower level tech and tools."

    Third-party engine developers would hire more engineers in response to greater demand, so I'd imagine the number of people working on low-level tech and tools would remain the same. More and more developers are using third-party engines, and as the companies providing them make more money, they will expand, and the tools and workflow of these engines will improve beyond what internal studios can keep up with.
  14. I am a programmer who is interested in the technology used to make games. I am developing a hobbyist game engine for learning purposes, plan to use UE4 for a commercial game in the future, and am interested in the current state of the industry. Most new game studios are using Unity and UE4, and some bigger studios, such as Capcom Vancouver, are moving from internal tech to UE4. Indie developers who choose to create their own engines always mention control over the source as one of the major reasons for not using a third-party engine; a famous example is Jonathan Blow, and I wonder if this is a case of not-invented-here syndrome. I can understand programmers wanting source access, and that being a valid reason not to use Unity, but seeing as UE4 gives source access, I don't see how not controlling the source would be a huge inconvenience. I understand that Epic could make an update that conflicts with an engine modification the developer has made, but I can't see how this would happen very often or how it could not be easily addressed by the developer. It seems to me that developers using this excuse are grasping at straws, but maybe I'm wrong.

    As technology and third-party tools improve, do you think the bigger AAA game studios that have internal engines will eventually switch to third-party engines, or will the industry continue as is for the foreseeable future?
  15. @KryptOn I agree that almost every graphics technique used in modern AAA game engines has been discovered by researchers and is present in published academic papers. However, when implementing something like physically based shading, the equations and algorithms can be taken from papers, but there are a lot of choices to be made about how to actually implement them; this is where the engineering comes in. It's not just about implementing one feature efficiently but also about integrating numerous features efficiently. When you look at what a rendering engine does, there are a lot of things that can go wrong. It is responsible for handling rigid and skeletal meshes, materials, skinning for skeletal animations, deferred and forward lighting, shadow mapping, ambient occlusion, frustum culling, occlusion culling, screen-space particles, multi-threading, instancing, transparency, translucency and many different post-processing effects such as blur. These elements can be implemented one by one, but implementing them optimally takes a huge amount of time. Time does seem to be the issue here, and this is only the graphics engine. For that reason, I will probably use UE4 and modify it if necessary.
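On the smart-pointer question in post 1: a common compromise is to give a low-level system exclusive ownership of its resources via std::unique_ptr (which, unlike std::shared_ptr, adds no reference-counting overhead) and hand out non-owning raw pointers to callers. A minimal sketch, with all type and function names hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical renderer resource, for illustration only.
struct Mesh { int vertexCount = 0; };

class Renderer {
public:
    // The renderer owns its meshes exclusively. unique_ptr has no
    // reference-count traffic, so ownership here is essentially free.
    Mesh* createMesh(int vertexCount) {
        meshes_.push_back(std::make_unique<Mesh>());
        meshes_.back()->vertexCount = vertexCount;
        return meshes_.back().get();  // non-owning observer pointer
    }
    std::size_t meshCount() const { return meshes_.size(); }

private:
    std::vector<std::unique_ptr<Mesh>> meshes_;
};
```

The raw pointers handed out are only valid while the renderer lives, which is usually acceptable for engine-internal resources; shared_ptr would mainly matter where lifetimes genuinely overlap unpredictably.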
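The common physics interface discussed in posts 1, 6 and 7 can be sketched as an abstract class the engine codes against, with each backend (PhysX, Bullet, etc.) as an implementation behind a factory. All names here are hypothetical and a real interface would be far larger:

```cpp
#include <cassert>
#include <memory>

// Hypothetical engine-level physics interface; each supported physics
// engine would supply one implementation of it.
class IPhysicsWorld {
public:
    virtual ~IPhysicsWorld() = default;
    virtual void setGravity(float g) = 0;
    virtual void step(float dt) = 0;
    virtual float elapsed() const = 0;
};

// Stub backend standing in for a real PhysX or Bullet wrapper.
class NullPhysicsWorld : public IPhysicsWorld {
public:
    void setGravity(float g) override { gravity_ = g; }
    void step(float dt) override { elapsed_ += dt; }
    float elapsed() const override { return elapsed_; }

private:
    float gravity_ = -9.81f;
    float elapsed_ = 0.0f;
};

// The rest of the engine only ever sees IPhysicsWorld, so switching
// backends means changing this one factory, not every call site.
std::unique_ptr<IPhysicsWorld> createPhysicsWorld() {
    return std::make_unique<NullPhysicsWorld>();
}
```

The trade-off raised in the posts shows up here: the interface tends to mirror whichever backend was written first, so adopting a second backend usually forces interface revisions.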
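The snap-to-ground editor feature described in post 6 boils down to a downward ray cast followed by repositioning the object at the hit point. A minimal sketch that assumes a flat horizontal floor instead of a real physics or spatial-hierarchy query; all names are hypothetical:

```cpp
#include <cassert>

// Minimal 3D vector for the sketch.
struct Vec3 { float x, y, z; };

// Hypothetical result of a downward ray cast.
struct RayHit { bool hit; float y; };

// Stand-in for a scene query: intersects a straight-down ray with a
// horizontal floor plane at height groundY. A real editor would ask
// the physics engine or its own spatial hierarchy instead.
RayHit raycastDown(const Vec3& origin, float groundY) {
    if (origin.y < groundY) return {false, 0.0f};  // started below the floor
    return {true, groundY};
}

// Snap an object to the floor: cast down from its position and rest
// its base (baseOffset above its origin) on the first surface hit.
Vec3 snapToGround(const Vec3& position, float baseOffset, float groundY) {
    RayHit hit = raycastDown(position, groundY);
    if (!hit.hit) return position;  // nothing below; leave it in place
    return {position.x, hit.y + baseOffset, position.z};
}
```

Whether the query goes through the physics engine or the editor's own structures, the calling code looks the same, which is one argument for hiding it behind an interface.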
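Of the rendering responsibilities listed in post 15, frustum culling is one of the simpler ones to illustrate: an object's bounding sphere is rejected if it lies fully behind any one of the six frustum planes. A minimal sketch with hypothetical types, using planes stored as n·p + d = 0 with normals pointing into the frustum:

```cpp
#include <array>
#include <cassert>

// Plane in the form n.p + d = 0, unit normal pointing into the frustum.
struct Plane { float nx, ny, nz, d; };
struct Sphere { float x, y, z, radius; };

// Signed distance from the sphere centre to the plane
// (positive on the inside of the frustum).
static float signedDistance(const Plane& pl, const Sphere& s) {
    return pl.nx * s.x + pl.ny * s.y + pl.nz * s.z + pl.d;
}

// Cull only when the sphere is entirely behind some plane; anything
// intersecting a plane is conservatively kept and drawn.
bool sphereInFrustum(const std::array<Plane, 6>& frustum, const Sphere& s) {
    for (const Plane& pl : frustum)
        if (signedDistance(pl, s) < -s.radius) return false;
    return true;
}
```

As the post argues, the per-feature code is simple; the engineering cost is in extracting the planes each frame, organizing objects so thousands of tests stay cheap, and combining this with occlusion culling and the rest of the pipeline.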