Vilem Otte

  1. This topic is actually not that closely related to the game engine itself (assuming your engine has some sort of scenegraph and resource management, which most engines have); you are bumping into the area of actual tools. Tools as software don't have that many resources available at all, and are extremely time-consuming to build up (and extremely rewarding, at least for me). I post from time to time on my blog here about my hobby game engine, and most of the time I spend on the project goes into tools. In the Skye Cuillin(TM) 😁 editor I have managed resources for textures, models, etc. When their file is changed they are hot-loaded and replace the original resource; some details are provided here (shameless self-promotion): This works for resources that are directly referenced (textures are the best example), and it will also work for meshes (from the engine's point of view a mesh is a single geometry). Generally this action can't be undone (because the resources are changed outside of the actual software)! The actual resource file (like an .obj) is a model (not mesh) resource, which, when instantiated into the scene, will create an empty entity and then push in the whole hierarchy from the file. If you changed geometry data for one of the meshes, the meshes in the editor would be hot-loaded and updated (this is because mesh data is referenced within a MeshComponent on the node; nodes don't actually contain the mesh). If you changed the hierarchy itself, then that is a problem! There are a few ways to solve this:
  1. Replace EACH node hierarchy that was instantiated from this resource (this causes issues when you have e.g. moved one of the children under a different parent, or added a different node as a child of a node instantiated from the resource).
  2. Replace only those node hierarchies that exactly match the model before the reload (which means you need to mark those that should be replaced, and then do so). This is what Unity actually seems to be doing.
  3. Ignore the problem and do nothing with the old hierarchies. This is also a perfectly valid solution (probably better than the 1st one).
  I currently apply the 3rd solution; it is the most straightforward. I'm still considering supporting the 2nd one (if I held a resource reference in the node that was used as the topmost node when instantiating the resource into the scene, along with a flag saying whether it is dirty (don't reload) or not (reload), it would work perfectly). Of course, all of this applies only in the editor (where a saved scene is just the saved scenegraph, and resource files are kept around). For a compiled scene (for runtime), the scenegraph is still saved, along with all resources compressed. I'm currently working on that part now.
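  The 2nd strategy can be sketched in a few lines. This is an illustrative sketch under stated assumptions: the Node type, its field names and the CollectReloadTargets helper are hypothetical, not engine API. Each node that was the topmost node of an instantiated model resource remembers the resource name plus a dirty flag; on hot-load, only clean (unedited) hierarchies become replacement candidates.

  ```cpp
  #include <cassert>
  #include <memory>
  #include <string>
  #include <vector>

  // Hypothetical scenegraph node: an instance root stores which model
  // resource it was instantiated from, plus a "dirty" flag set whenever
  // the user edits the hierarchy under it.
  struct Node {
      std::string name;
      std::string sourceResource;  // empty unless this node is an instance root
      bool dirty = false;          // hierarchy edited since instantiation
      std::vector<std::unique_ptr<Node>> children;
  };

  // Gather instance roots of `resource` that still match the original model
  // (i.e. are not dirty) and therefore may safely be replaced on hot-load.
  inline std::vector<Node*> CollectReloadTargets(Node* root, const std::string& resource)
  {
      std::vector<Node*> out;
      if (root->sourceResource == resource && !root->dirty)
          out.push_back(root);
      for (auto& child : root->children)
          for (Node* n : CollectReloadTargets(child.get(), resource))
              out.push_back(n);
      return out;
  }
  ```

  On reload you would then rebuild only the collected hierarchies from the fresh resource, leaving edited instances alone.
  
  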
  2. The only downside of linking the DLL into the executable arises when you use a DLL which creates an OpenGL/Vulkan context. At that point you will have both loaded in memory: DX and OpenGL/Vulkan. This won't cause any problem though (apart from using a little bit more memory). The linker will strip dead references; I can definitely confirm this for GCC at least (just build a project with some additional linker dependencies and run ldd on the result - under Windows you can use Dependency Walker, for example - and they will not be mentioned). For a library to actually be kept, you need at least an include and at least one call into it.
  3. Sadly, I don't have any experience with hot-loading DirectX- or DirectSound-dependent modules. In my work we use some hot-loaded modules which are DLLs (the whole project is in C, which makes some things a lot simpler, and we also have certain rules about what these modules may do). When using hot loading, you generally need to make sure that there are no direct references to code in memory (addresses will change with a recompile); everything that would use a function pointer needs some kind of "dispatch table". That summarizes the rules we had for those modules - generally, if we ever wanted to use a 3rd-party library inside one, we were asked to dlopen/dlclose it, and use dlsym to obtain a pointer to any function it calls. I can imagine this might cause quite a mess with virtual functions. As those modules were mainly just dumb loaders or "simple" procedures called on data, it is a simple case compared to yours. I have never tried or heard of hot-loading a DLL that implements the renderer itself (to be honest, I never thought of it!). While I can imagine that this may actually work with Vulkan (where you are able to obtain all the Vulkan function pointers dynamically), I don't think it will be that easy with Direct3D. Although it definitely sounds like an idea worth trying (you probably just stole the weekend from me!).
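  The "dispatch table" rule can be sketched as follows. All names here are hypothetical, illustrative only: callers never hold a direct pointer into module code across a reload; they call through a table that is re-filled after each (re)load. In a real setup BindModule would fill the table via dlsym()/GetProcAddress on the freshly loaded module handle; here two local functions stand in for two versions of a recompiled module.

  ```cpp
  #include <cassert>

  // Table of entry points into a hot-loadable module. Code outside the
  // module only ever calls through this table.
  struct ModuleApi {
      int (*process)(int value);
  };

  // Stand-ins for the function exported by version 1 and version 2 of
  // a module (before and after a recompile).
  static int ProcessV1(int value) { return value + 1; }
  static int ProcessV2(int value) { return value * 2; }

  // Re-bind the table; `reloaded` simulates picking up a recompiled module.
  // With a real DLL/so this would be dlsym(handle, "process") after dlopen.
  inline void BindModule(ModuleApi* api, bool reloaded)
  {
      api->process = reloaded ? ProcessV2 : ProcessV1;
  }
  ```

  After a reload only the table is refreshed; every caller picks up the new code automatically on the next call.
  
  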
  4. There are 2 basic options when considering transparency:

  Alpha Testing

  For cases where transparency is a binary decision (either opaque or fully transparent), alpha testing is an option; you can keep z-testing on and draw in any order. In legacy OpenGL this was done as (example):

  ```glsl
  glAlphaFunc(GL_GREATER, cutoutValue);
  glEnable(GL_ALPHA_TEST);
  ```

  In GLSL/HLSL it looks like:

  ```glsl
  if (alphaValue < cutoutValue)
  {
      discard;
  }
  ```

  A discarded fragment will never affect atomic counters, SSBO/RWBuffer, image load/store operations, or any bound color buffers. Unless it is specified that depth/stencil tests happen before the shader, it also won't affect depth/stencil buffer values; in GLSL this can be forced with:

  ```glsl
  layout(early_fragment_tests) in;
  ```

  And this is why so many people point out that performing discard is slow. Why? Generally with opaque geometry there is always the early-z optimization. This prevents unnecessary fragment shader executions whenever we already know that the fragment is not going to be drawn, which is why it is most efficient to render opaque geometry front-to-back. By doing discard in the fragment shader (or any writes to the depth buffer), we effectively require the pixel shader to run for each pixel (therefore no early-z optimization is possible). Also, as the GPU often executes a group of pixels at once (e.g. a 4x4 block), the execution time per block will differ based on whether all fragments are discarded (especially early in the shader, which is the best scenario) or only some/none are (at which point you will need to wait for all the pixels in the block to finish the pixel shader).

  Conclusion: this doesn't necessarily mean that using alpha test is bad; it just means that it has some performance hit on the majority of renderers (Immediate Mode Rendering, Tile Based Rendering, etc.), and the hit is especially visible on mobile hardware.

  The ideal order of rendering is: opaque geometry (front-to-back), alpha-tested geometry (order doesn't matter), and then alpha-blended geometry (back-to-front). The overall performance will also depend on whether you can perform occlusion culling efficiently (using any method available - BSP, Hi-Z, etc.), thereby reducing the number of objects drawn that end up invisible in the final image.

  Alpha Blending

  If alpha testing isn't enough for you (and you need non-binary transparency), then the most straightforward solution is alpha blending. Blending combines the previous value in the frame buffer with the new value calculated by the fragment shader. This leads to the major problem that all your transparent objects need to be drawn after all opaque objects, in back-to-front order (sorting may not be enough, and you may need to split geometry to achieve correct ordering). With alpha blending early-z works (although note that you're drawing in back-to-front order, so it will only help for transparent geometry hidden behind opaque geometry), therefore on some architectures it can outperform alpha testing (PowerVR is well known for this).

  In the end it is hard to tell what is better for you, because it depends on a few major factors: What does your scene look like? How complex are your shaders? What is the target hardware? So the correct answer is the one @pcmaster gave you. There are also other options that give you correct order-independent transparency, but they generally tend to be a lot heavier to implement (for example alpha-to-coverage or depth peeling).
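  The draw-order recipe above can be sketched in C++. This is an illustrative sketch; the Vec3 and Drawable types are assumptions, not any engine's API. Opaque items come first (sorted front-to-back for early-z), transparent items last (sorted back-to-front for correct blending).

  ```cpp
  #include <algorithm>
  #include <cassert>
  #include <vector>

  struct Vec3 { float x, y, z; };

  // Squared distance is enough for ordering and avoids a sqrt per compare.
  static float DistSq(const Vec3& a, const Vec3& b)
  {
      const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      return dx * dx + dy * dy + dz * dz;
  }

  struct Drawable { int id; Vec3 position; bool opaque; };

  inline void BuildDrawOrder(std::vector<Drawable>& items, const Vec3& camera)
  {
      // Opaque items first, transparent items afterwards.
      auto mid = std::partition(items.begin(), items.end(),
                                [](const Drawable& d) { return d.opaque; });
      // Opaque: front-to-back to maximize early-z rejection.
      std::sort(items.begin(), mid, [&](const Drawable& a, const Drawable& b) {
          return DistSq(a.position, camera) < DistSq(b.position, camera);
      });
      // Transparent: back-to-front so blending composites correctly.
      std::sort(mid, items.end(), [&](const Drawable& a, const Drawable& b) {
          return DistSq(a.position, camera) > DistSq(b.position, camera);
      });
  }
  ```

  Note that per-object center sorting is exactly the approximation that can fail for intersecting or large transparent objects, which is why the post mentions splitting geometry.
  
  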
  5. It is possible there is an error before that which you just don't see. Try enabling the debug layer in D3D12 - this is done as follows, before your device creation:

  ```cpp
  ID3D12Debug* dbg;
  D3D12GetDebugInterface(__uuidof(ID3D12Debug), (void**)&dbg);
  dbg->EnableDebugLayer();
  dbg->Release();
  ```

  This could give you more information before you receive the Removing Device message.
  6. Vilem Otte

    Good C++ ide with debugger on Linux?

    Warning - subjective answer! Visual Studio is addictive and pretty much unbeatable once you get used to it. Sadly it's not available for Linux, so I'd recommend going with Visual Studio Code (although a solid text editor like Sublime Text may be enough) - although they're a bit simpler compared to the next few. A few notable IDEs I've met throughout my career (all are usable): QtCreator, KDevelop, Code::Blocks, NetBeans, Geany. While they are not bad, I didn't like them that much; Visual Studio beats all of them in my opinion. I also went down the 'masochist way' of running vim/emacs, writing my own makefiles and debugging just with terminal gdb, which I enjoyed a LOT (although I don't expect others to be masochists voluntarily... it is a way though, and quite a comfortable one). I'd recommend trying a few of them - if you are comfortable using any of them, go ahead. I also recommend trying VSCode and Sublime Text; they might fit you. Tl;dr - use any tools that you're comfortable with (go ahead and try them, to find the one that fits you most).
  7. Vilem Otte

    Learning to Accept your Limitations as a Lone Developer

    This all boils down to the quality of the resulting product - if your product is well designed and the execution is done well, then it doesn't really matter whether you've used Unity or an in-house engine at all. I have noticed 2 groups of people that are the loudest around the 'Unity' topic though (both full of zealots with whom no discussion is possible): the first hate Unity because bad products have been made with it, and therefore start a flame war against anyone who wants to use or uses Unity; the second push Unity everywhere, literally stating 'why would you develop another game engine when Unity is the final solution for everything'. Neither of these 2 groups is right, and it is better to just walk away from any person in either of them. Unity is a tool; it may be useful for some things, but it isn't the best solution for everything. The same goes for other engines... even for whole languages (example: using Python to implement a high-performance networking server is a really bad idea). Everything boils down to "use the right tools for the right job". Using assets from the Asset Store is a thing with which I generally have a huge problem though (ignoring the mentioned scamming - because it produces poor-quality and boring products): for most of my scenes the models simply won't fit the scene at all. While their quality is often high, they just don't look right when you put multiple of them together (there are some exceptions though).
  8. Vilem Otte

    G-Buffer 2 Channel Normal

    I'm currently using the Lambert azimuthal equal-area projection (it is also noted in the linked article), which is a spheremap transform. I'm storing normals in 2x 16-bit floating point channels, which works quite well (using just 8-bit precision isn't enough, especially on reflective surfaces). At 16 bits I haven't noticed any artifacts (I've tried multiple algorithms at 8 bits, but all showed artifacts on smooth reflective surfaces). Screenshots attached (notice the visible 'blocky' pattern in the reflection in the first image):
    Fig. 01 - Normals stored in 2x 8-bit channels with spherical transform encoding
    Fig. 02 - Normals stored in 2x 16-bit channels with spherical transform encoding
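  For reference, a spheremap-style encode/decode of this kind can be sketched as follows. This is an illustrative CPU-side reimplementation of the well-known Lambert azimuthal equal-area mapping (as popularized by the "Compact Normal Storage" write-ups), not the author's exact shader; the struct and function names are assumptions.

  ```cpp
  #include <cassert>
  #include <cmath>

  struct Vec2 { float x, y; };
  struct Vec3 { float x, y, z; };  // expected to be a unit normal

  // Encode a unit normal into two [0, 1] channels.
  // Note: the mapping degenerates at n = (0, 0, -1), where f becomes 0.
  inline Vec2 EncodeNormal(const Vec3& n)
  {
      const float f = std::sqrt(8.0f * n.z + 8.0f);
      return { n.x / f + 0.5f, n.y / f + 0.5f };
  }

  // Reconstruct the unit normal from the two stored channels.
  inline Vec3 DecodeNormal(const Vec2& e)
  {
      const float fx = e.x * 4.0f - 2.0f;
      const float fy = e.y * 4.0f - 2.0f;
      const float f = fx * fx + fy * fy;
      const float g = std::sqrt(1.0f - f / 4.0f);
      return { fx * g, fy * g, 1.0f - f / 2.0f };
  }
  ```

  The round trip is lossless up to floating point precision; the quantization to 8 or 16 bits per channel on top of this is what produces (or avoids) the blocky reflections shown in the figures.
  
  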
  9. You may want to check it out - it has some of the licenses condensed down to how you can use them.
  10. Vilem Otte

    Use orbiting planets in 4x game?

    Which I know very well due to my work. The point is, what is your timescale for a 4X game (1 tick is 1 second? 1 hour? 1 day? 1 week? ...)? This makes a huge difference. Also, thanks to sci-fi technologies, nothing states that you can't have a lot more delta-v available. I intentionally threw this in to point out that you can still realistically model trajectories and paths not just of bodies, but also of 'space vehicles' - it may not be easy, and you might be required to play with the time scale, or have a ridiculously high delta-v (which is the case in a typical space game). For your information, in Stellaris (which has been named here multiple times), 1 second represents 1 day on Normal speed, which means that in the Earth system (which is in the game) the planet should travel about 1 degree each second - in 1 minute it has to travel about 60 degrees around the star. That is definitely not slow! This gives you around 6 minutes for a whole year. Travelling from Earth to Mars with a delta-v even as low as 12 km/s (which is easily doable now - but currently we always target the least-delta-v transfer), the transfer can take just 60 days, which would be just 1 minute in Stellaris on normal speed - and the planet would still keep moving around! Of course, by increasing delta-v you can go faster. I'd recommend you look at
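  The timescale arithmetic above can be checked with a few lines. The function name and the "1 real second = 1 in-game day" rate are assumptions for illustration; the numbers match the post's claims (about 1 degree per second, about 6 minutes per year for Earth).

  ```cpp
  #include <cassert>
  #include <cmath>

  // Angular speed of a planet on screen, given its orbital period and the
  // game's time compression (in-game days per real second).
  inline double DegreesPerRealSecond(double orbitalPeriodDays, double daysPerRealSecond)
  {
      return 360.0 / orbitalPeriodDays * daysPerRealSecond;
  }
  ```

  For Earth (365.25-day period) at 1 day per real second this gives roughly 0.99 degrees per real second, i.e. a full orbit in about 365 real seconds, a little over 6 minutes.
  
  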
  11. Vilem Otte

    Effect: Volumetric light scattering

    I noticed that in the video I accidentally also captured the Screen2Gif UI - as it doesn't impact the resulting video, I will not be re-capturing it. Also note that this was actually my first time using Screen2Gif to capture HD video in 1080p - I'm quite happy it worked well together with YouTube.
  12. This time around I've decided to try something different (and thanks @isu diss for the suggestion): volumetric light scattering. To make this a bit more challenging for myself, I've decided to implement it completely in Unity; the code snippets will mostly be in pseudo code. Alright, let's start!

  Introduction

  When we talk about volumetric light scattering, we often talk about simulating lights and shadows in fog or haze - generally speaking, simulating light scattering in participating media. Before we go into the basic math, we have to define some important properties of the participating media we want to simulate (for the sake of computation efficiency):

  - Homogeneity
  - No light emission properties
  - Light scattering

  As we want to render this volume, we have to calculate how the radiance reaching the camera changes when viewing light through the volume. To compute this, a ray marching technique will be utilized, accumulating radiance along the view ray from the camera. What we have to solve is the radiative transport equation:

  $$(\omega \cdot \nabla) L(x, \omega) = -\tau L(x, \omega) + \tau a \int_{\Omega} P(\omega', \omega)\, L(x, \omega')\, d\omega'$$

  Where:

  - the left side represents the change in radiance along the ray
  - $\tau$ represents density (probability of collision in unit distance)
  - $a$ is the albedo, equal to the probability of scattering (i.e. not absorbing) after a collision
  - $P(\omega', \omega)$ represents the phase function, the probability density function of the scattering direction

  Note: the similarity of this equation to "The Rendering Equation" is not random! This is actually a Fredholm integral equation of the 2nd kind - such equations result in nested integrals. They are mostly solved using Monte Carlo methods, and one of the ways to solve this one is Monte Carlo Path Tracing. However, as I aim to use this effect in real time on average hardware, such an approach is not possible and optimization is required.
  As per Real Time Volumetric Lighting in Participating Media by Balazs Toth and Tamas Umenhoffer, we will further ignore multiple scattering and use only a single in-scattering term:

  $$\frac{dL(s, \omega)}{ds} = -\tau L(s, \omega) + \tau a\, L_{in}(s)\, P(\omega_l, \omega)$$

  where, for a point light, the in-scattered radiance at a sample point is

  $$L_{in}(s) = e^{-\tau d(s)}\, v(s)\, \frac{\Phi}{4 \pi d(s)^2}$$

  Which can be integrated along the view ray into:

  $$L(c, \omega) = L_0\, e^{-\tau s_0} + \sum_{n=0}^{N} \tau a\, L_{in}(s_n)\, P(\omega_l, \omega)\, e^{-\tau \lVert s_n - c \rVert}\, \Delta s$$

  This equation actually describes the whole technique - to be precise, the last term (the sum) defines the actual radiance received from the sample points. The computation is quite straightforward:

  - For each ray (per pixel) we determine the position where we enter the participating media
  - From this point on, we make small steps through the participating media; these are our sample points
  - For each sample point, we compute the in-scattering term, the absorption factor, and add the contribution into the radiance that is going to be returned

  The last thing is the in-scattering function, which has to be calculated for each light type separately. In short, it will always contain the density $\tau$ and albedo $a$, the phase function $P$, the absorption factor, the visibility function $v$ (returns 0 for a point in shadow/invisible, 1 otherwise), and the radiance energy received from the light over the whole hemisphere - for a point light this is $\frac{\Phi}{4 \pi d^2}$, where:

  - $\Phi$ represents the power of the light
  - $d$ represents the distance between the sample point and the light

  This wraps up the math-heavy part of the article; let's continue with a description of ray marching, as it is required for understanding the technique.

  Ray Marching

  Ray marching is a technique where, given a ray origin and direction, we pass through the volume not analytically, but in small steps. At each step some function is computed, often contributing to the resulting color. In some cases we can exit early upon satisfying some condition. Ray marching is often performed within specific boundaries - often an axis-aligned box, which I'm going to use in the example ray marching implementation. For the sake of simplicity, let's assume our axis-aligned bounding box is at position _Position with a size of _Scale.
  To perform ray marching we have to find the entry point of the ray, and perform up to N steps through the volume until we exit out of the box. Before going further, I assume everyone has knowledge of what world, view, etc. coordinates are. To this set let's add one more coordinate system: volume coordinates. These coordinates go from 0.0 to 1.0 on all 3 axes, determining where we are inside the volume (just like 3D texture coordinates). Let's have the following function determining whether a given position is inside a sphere or not:

  ```hlsl
  // Are we inside of unit sphere bound to volume coordinates
  // position - given position on which we decide whether we're inside or
  // outside, in volume coordinates
  bool sphere(float3 position)
  {
      // Transform volume coordinates from [0 to 1] to [-1 to 1] and use sphere
      // equation (r^2 = x^2 + y^2 + z^2) to determine whether we're inside or
      // outside
      if (length(position * 2.0f - 1.0f) < 1.0f)
      {
          // Inside
          return true;
      }
      else
      {
          // Outside
          return false;
      }
  }
  ```

  Now, with the ray marching technique we should be able to render a sphere in this volume - simply by starting at the edge of the specified volume, marching through, and at each step determining whether we're inside or outside of the sphere. Whenever we're inside, we can stop and render white, otherwise continue (if we miss everything, render black):

  ```hlsl
  // Origin of our ray (the camera position; a few identifier names in this
  // snippet did not survive the forum formatting and are assumed here)
  float3 rayOrigin = _WorldSpaceCameraPos.xyz;

  // Origin in volume coordinates
  float3 rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

  // Direction along which ray will march through volume
  float3 rayDirection = normalize(i.worldPos.xyz - _WorldSpaceCameraPos.xyz);

  // Single step, the longest ray that is possible in volume is diagonal,
  // therefore we perform steps at size of diagonal length / N, where N
  // represents maximum number of steps possible during ray marching
  float rayStep = sqrt(_Scale.x * _Scale.x + _Scale.y * _Scale.y + _Scale.z * _Scale.z) / (float)_Samples;

  // Did we hit a sphere?
  bool hit = false;

  // Perform ray marching
  [loop]
  for (int i = 0; i < _Samples; i++)
  {
      // Determine whether we hit sphere or not
      hit = sphere(rayCoord);

      // If so, we can exit computation
      if (hit)
      {
          break;
      }

      // Move ray origin forward along direction by step size
      rayOrigin += rayDirection * rayStep;

      // Update volume coordinates
      rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

      // If we are out of the volume we can also exit
      if (rayCoord.x < 0.0f || rayCoord.x > 1.0f ||
          rayCoord.y < 0.0f || rayCoord.y > 1.0f ||
          rayCoord.z < 0.0f || rayCoord.z > 1.0f)
      {
          break;
      }
  }

  // Did we hit?
  float color = hit ? 1.0f : 0.0f;

  // Color output
  return float4(color, color, color, 1.0f);
  ```

  Which will yield a result like this:

  Fig. 01 - Rendered sphere using ray marching technique

  Now, if we look at how many steps we have performed to render this (per pixel):

  Fig. 02 - Number of steps performed before hitting the sphere or exiting out of the volume

  While this is one of the less efficient ways to render a sphere, ray marching allows us to step through a volume in small increments, accumulating values along the way, which makes it a highly effective method for rendering volumetric effects like fire, smoke, etc.

  Light Scattering Implementation

  Let's jump ahead to the light scattering implementation. Based on the theory this will be quite straightforward - there is just one catch: as we are specifying the volume with an axis-aligned bounding box, it is crucial to note that we need 2 different computations, one for when the camera is outside and one for when it is inside. Let's start with the one where the camera is inside.
  ```hlsl
  // Screen space coordinates allowing for fullscreen projection of camera
  // z-buffer
  float2 projCoord = i.projection.xy / i.projection.w;
  projCoord *= 0.5f;
  projCoord += 0.5f;
  projCoord.y = 1.0f - projCoord.y;

  // Read z-value from camera depth texture, and linearize it to 0.0 - 1.0
  float zvalue = LinearizeDepth(tex2D(ViewDepthTexture, projCoord).x);

  // Origin of our ray (camera position; a few identifier names in this
  // snippet did not survive the forum formatting and are assumed here)
  float3 rayOrigin = WorldSpaceCameraPosition;

  // Origin in volume coordinates
  float3 rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

  // Direction along which ray will march through volume
  float3 rayDirection = normalize(i.worldPos.xyz - WorldSpaceCameraPosition);

  // Push camera origin to near camera plane
  rayOrigin += rayDirection * CameraNearPlane;

  // Single step, the longest ray that is possible in volume is diagonal,
  // therefore we perform steps at size of diagonal length / N, where N
  // represents maximum number of steps possible during ray marching
  float rayStep = sqrt(_Scale.x * _Scale.x + _Scale.y * _Scale.y + _Scale.z * _Scale.z) / (float)_Samples;

  // Steps counter
  int steps = 0;

  // Resulting value of light scattering
  float3 L = float3(0.0f, 0.0f, 0.0f);

  // Perform ray marching
  [loop]
  for (int i = 0; i < _Samples; i++)
  {
      // Move ray origin forward along direction by step size
      rayOrigin += rayDirection * rayStep;

      // Update volume coordinates
      rayCoord = (rayOrigin - _Position.xyz) / _Scale.xyz + 0.5f;

      // Calculate linear z value for current position during the ray marching
      float z = -(mul(ViewMatrix, float4(rayOrigin, 1.0f)).z) / (CameraFarPlane - CameraNearPlane);

      // In case we are behind an object, terminate ray marching
      if (z >= zvalue)
      {
          break;
      }

      // Light scattering computation

      // Sample visibility for current position in ray marching, we use standard
      // shadow mapping to obtain whether current position is in shadow, in that
      // case returns 0.0, otherwise 1.0
      float v = SampleShadow(rayOrigin, mul(ViewMatrix, float4(rayOrigin, 1.0f)));

      // Calculate distance from light for in-scattering component of light
      float d = length(WorldSpaceLightPosition - rayOrigin);

      // Radiance reaching the sample position - depends on volume scattering
      // parameters, light intensity, visibility and attenuation function
      float L_in = exp(-d * _TauScattering) * v * LightIntensity / (4.0f * 3.141592f * d * d);

      // In-scattering term for given sample
      // Applies albedo (identifier assumed) and phase function
      float3 L_i = L_in * _TauScattering * _Albedo.rgb * Phase(normalize(rayOrigin - WorldSpaceLightPosition), normalize(rayOrigin - WorldSpaceCameraPosition));

      // Multiply by factors and sum into result
      L += L_i * exp(-length(rayOrigin - WorldSpaceCameraPosition) * _TauScattering) * rayStep;

      steps++;

      // If we are out of the volume we can also exit
      if (rayCoord.x < 0.0f || rayCoord.x > 1.0f ||
          rayCoord.y < 0.0f || rayCoord.y > 1.0f ||
          rayCoord.z < 0.0f || rayCoord.z > 1.0f)
      {
          break;
      }
  }

  // Output light scattering
  return float4(L, 1.0f);
  ```

  This computes the light scattering inside the volume, resulting in an image like this:

  Fig. 03 - Light scattering inside volume with shadowed objects

  For the computation from outside of the volume, one has to start with the origin not being the camera, but the actual point where the ray enters the volume. Which isn't anything different than:

  ```hlsl
  // Origin of our ray - the entry point into the volume, i.e. the input
  // world-space position of the volume's surface (identifier assumed)
  float3 rayOrigin = i.worldPos.xyz;
  ```

  Of course, further on, when computing the total radiance transferred from the step to the camera, the distance passed into the exponent has to be only the distance traveled inside of the volume, e.g.:

  ```hlsl
  // Multiply by factors and sum into result
  L += L_i * exp(-length(rayOrigin - rayEntry) * _TauScattering) * rayStep;
  ```

  Where rayEntry is the input world position in the shader program. To get good image quality, post-processing effects (like tone mapping) are required; the resulting image can look like:

  Fig. 04 - Image with post processing effects.

  What wasn't described in the algorithm yet is the Phase function. The phase function determines the probability density of scattering incoming photons into outgoing directions.
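  A commonly used phase function with controllable anisotropy is Henyey-Greenstein, mentioned below as an alternative. A small illustrative CPU-side sketch (not from the original post; the function name is an assumption):

  ```cpp
  #include <cassert>
  #include <cmath>

  // Henyey-Greenstein phase function. The parameter g in (-1, 1) is the mean
  // scattering cosine: g > 0 favors forward scattering, g < 0 backward,
  // g = 0 is isotropic (1 / 4pi).
  inline float PhaseHG(float cosAngle, float g)
  {
      const float pi = 3.141592f;
      const float denom = 1.0f + g * g - 2.0f * g * cosAngle;
      return (1.0f - g * g) / (4.0f * pi * std::pow(denom, 1.5f));
  }
  ```

  Like any phase function, it is a probability density over directions, so it integrates to 1 over the full sphere.
  
  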
  One of the most common is the Rayleigh phase function (which is commonly used for atmospheric scattering):

  ```hlsl
  float Phase(float3 inDir, float3 outDir)
  {
      float cosAngle = dot(inDir, outDir) / (length(inDir) * length(outDir));
      float nom = 3.0f * (1.0f + cosAngle * cosAngle);
      float denom = 16.0f * 3.141592f;
      return nom / denom;
  }
  ```

  Other common phase functions are Henyey-Greenstein or Mie.

  On Optimization

  There could be a huge chapter on optimization of light scattering effects. To obtain images like the ones above, one needs to calculate a large number of samples, which requires a lot of performance. Reducing them results in slicing, which isn't visually pleasant. In the next figure, a low number of samples is being used:

  Fig. 05 - Low number of samples for ray marching.

  By a simple modification - adding a small random offset to each step size - noise is introduced and the stepping artifacts are removed:

  Fig. 06 - With a low number of samples and randomization, the image quality can be improved.

  Further, the shader programs provided in the previous section were reference ones. They can be optimized a bit, reducing the number of computations within the ray marching loop - such as not working with the ray origin, but only with ray coordinates inside the loop. Also, rendering light scattering effects is often done at half or quarter resolution, or using interleaved sampling (for example, dividing the whole image into blocks of 2x2 pixels and calculating 1 of them each frame, thus reducing the required computation power). The actual difference is then hardly visible when the user moves around, due to other effects like motion blur. All optimization tricks are left for the reader to try and implement.

  Extensions

  I intentionally pushed for implementing this algorithm inside a specified volume. While doing it as a full screen pass seems more straightforward, it is actually somewhat limited.
  Using a specific volume can take us further, to simulating clouds or smoke and lighting them correctly with this algorithm. Of course, that would require a voxel texture representing density and albedo at each voxel in the volume. Applying noise as density could result in interesting effects like ground fog, which may be especially interesting in various caves, or even outside.

  Results

  Instead of talking about the results, I will go ahead and share both a picture and a video:

  Fig. 07 - Picture showing the resulting light scattering

  Fig. 08 - Video showing the end result.

  The video shows really basic usage - it is rendered using the code explained earlier, only with an albedo color for the fog.

  Conclusion

  This is by far one of the longest articles I've ever written - I actually didn't expect the blog post to be this long, and I can imagine some people here may be a bit scared by its size. Light scattering isn't easy, and even at this length the article describes just the basics. I hope you have enjoyed the 'Effect' post this time; if I made any mistake in the equations or source code, please let me know and I'll gladly update it. I'm also considering wrapping an improved version of this as a Unity package and making it available on the Asset Store at some point (although there are numerous similar products available there). Anyways, please let me know what you think!
  13. Vilem Otte

    Engine: Save/Load and other fun

    I'd like to make this a bit more regular hobby - releasing updates on my own game engine (among other projects) - yet sometimes my real work steps in and I tend to be quite busy. Anyways, back to the important part, which is what this article is about. Making a game engine with a WYSIWYG editor is quite a challenge: it takes years of practice and learning, resources are scarce, and therefore you have to improvise a lot. Sometimes it takes dozens of attempts before you get it right, and the feeling when you finally do is awesome. This time I'm going to dig a bit into how I designed the save, load & publish system, and which parts of it are done.

    Introduction

    To understand the save/load/publish system, one needs to understand how the engine is going to work. In Skye Cuillin (codename for my game engine) you have the editor application, which is used to create your game scenes, test them and possibly publish them. On the other hand, the planned runtime application takes an exported scene and plays it. There are 2 possible ways to save a scene - the first is something I call an Editable Scene, the second is what I call an Exported Scene. The two file formats serve different purposes and store the data in completely different ways.

    An Editable Scene saves primarily just the scenegraph, with all entities and their components, in a way the editor can easily load again. It doesn't store any resource files (meshes, textures, etc.) but uses the ones on the hard drive. This is mainly to keep the amount of data in the file to a minimum (after all, it is a text file, not a binary file).

    An Exported Scene stores both the scenegraph, with all entities and their components, and all resource files (meshes, textures, etc.) in a specific compressed, binary-encoded format, so it is fast to load, but loses the possibility of being loaded back into the editor.
    I won't get into the details of the Exported Scene type today, as that part is still in testing - yet I want to show some details of how the Editable Scene works. Further on, I will describe and talk only about the Editable Scene format.

    Saving & File Format

    In the engine, the scenegraph is an N-ary tree with a single root node. This node can't be edited in any way and can't have any components except a transform component, which sits at coordinates (0, 0, 0). It is intentionally shown in the scene graph view on the left side of the editor, and the user is unable to select it there. The only actions that can be performed on it are adding a new Entity as its child (either by assigning the Entity's parent as root, or adding a new entity into the scene under root) and removing an Entity as its child (either by re-assignment under a different parent Entity, or actual deletion). The Entity class has a Serialize method, which looks like this as of now:

    ```cpp
    std::string Entity::Serialize()
    {
        std::stringstream ss;

        ss << "Entity" << std::endl;

        // Store the name
        ss << mName << std::endl;

        // Note: each entity except Root will always have a parent!
        // Store the parent name
        if (mParent)
        {
            ss << mParent->mName << std::endl;
        }
        else
        {
            ss << std::endl;
        }

        // Store the transform
        ss << mTransform.Serialize();

        // Store components
        for (auto it = mObject.ComponentsItBegin(); it != mObject.ComponentsItEnd(); it++)
        {
            ss << it->second->Serialize();
        }

        ss << "(" << std::endl;

        // Store children (recursively)
        if (mChildren.size() > 0)
        {
            for (auto child : mChildren)
            {
                ss << child->Serialize();
            }
        }

        ss << ")" << std::endl;

        return ss.str();
    }
    ```

    Simply store a keyword, base entity data (name, parent and transformation), component data and then the list of children within parentheses. Note that each component also has a Serialize method which simply serializes it into text. On the side, all of this is also being used for the Undo/Redo functionality of the engine.
An example of a serialized entity (a spotlight) can look like this:

```
Entity
SpotLight
Root
Transformation
-450 90 -40
0 0 0 0
1 1 1
LightComponent
1 1 1 1 1 1 2000 1 0 0.01 256 1 5000 900 -0.565916 -0.16169 0.808452 0 45 0 0
(
)
```

Another example (with links to resources) is e.g. this cube:

```
Entity
cube_Cube_Cube.001_0
cube
Transformation
0 0 0
0 0 0 0
1 1 1
MaterialComponent
../Data/Shared/Models/Textures/Default_basecolor.tga
../Data/Shared/Models/Textures/Default_normal.tga
../Data/Shared/Models/Textures/Default_metallic.tga
../Data/Shared/Models/textures/Default_roughness.tga
../Data/Shared/Models/Textures/Default_height.tga
MeshComponent
cube_Cube_Cube.001_0
(
)
```

The names of resources are simply the names used in key-value databases (the key is the name, the value is a pointer to the resource). All resources in the project folder are loaded when starting the editor - therefore these reference names will always point to the correct resources in the key-value resource database (assuming the project folder matches the one the scene was saved for).

Loading

As saving was done with a Serialize method, it makes sense to implement a Deserialize method, which takes text as input and initializes everything for the given entity based on it. While it is a bit longer, I'm going to paste the whole source of it here:

```cpp
void Entity::Deserialize(Scene* scene, const std::string& s, Entity* parent)
{
    std::vector<std::string> lines = String::Split(s, '\n');

    // Read name and store it
    mName = lines[1];

    // We need to prevent creation of entity if it already exists in scene. This is mainly because root entity will be stored,
    // while at the same time it also already exists in scene (it always has to exist).
    if (scene->GetEntity(mName) == nullptr)
    {
        // If we didn't specify parent for deserialize through parameter
        if (parent == nullptr)
        {
            // Get entity from next line, locate it in scene and use it
            Entity* parsedParent = scene->GetEntity(lines[2]);
            parsedParent->AddChild(this);
        }
        else
        {
            // Parent is current node (only when passed through parameter - deserialize for some Undo/Redo cases)
            parent->AddChild(this);
        }
    }

    // Restore transformation, which is composed of 4 lines:
    // 1. keyword - "Transformation"
    // 2. translation X Y Z numbers representing 3D position - "0 0 0"
    // 3. rotation X Y Z W numbers representing quaternion - "0 0 0 1"
    // 4. scale X Y Z numbers - "1 1 1"
    // These 4 lines need to be joined, and passed in as a parameter for Deserialize method into Transform object
    std::vector<std::string> transformData;
    transformData.push_back(lines[3]);
    transformData.push_back(lines[4]);
    transformData.push_back(lines[5]);
    transformData.push_back(lines[6]);
    std::string transform = String::Join(transformData, '\n');
    mTransform.Deserialize(transform);

    // Deserialize each component one by one
    unsigned int lineID = 7;
    while (true)
    {
        if (lineID == lines.size()) { break; }
        if (lines[lineID].size() < 1) { lineID++; continue; }
        if (lines[lineID][0] == '(') { break; }

        ComponentId compId = ComponentTypeId::Get(lines[lineID]);
        Component* c = ComponentFactory::CreateComponent(compId);

        std::vector<std::string> componentData;
        componentData.push_back(lines[lineID]);

        unsigned int componentEnd = lineID + 1;
        while (true)
        {
            if (componentEnd == lines.size()) { break; }
            if (lines[componentEnd].size() < 1) { componentEnd++; continue; }
            if (lines[componentEnd][0] == '(') { break; }
            if (lines[componentEnd].find("Component") != std::string::npos) { break; }
            componentData.push_back(lines[componentEnd]);
            componentEnd++;
        }

        std::string component = String::Join(componentData, '\n');
        c->Deserialize(component);
        mObject.AddComponent(compId, c);

        lineID = componentEnd;
    }

    // If at this point we're not at the end yet, it means there are children for the node, parsing
    // those out is a bit tricky - we need to take whole list of entities within node, each into
    // separate string buffer

    // Children tree depth search (only level 1 has to be taken into account, all other levels have
    // to be included in string buffer for respective entity)
    int level = 0;
    // Should we instantiate entity?
    int instEntity = 0;
    // String buffer
    std::vector<std::string> entityData;

    // Note: When deserializing children we can always pass 'this' as parent, simply because due to
    // the format, we know the hierarchy of the entities
    while (true)
    {
        if (lineID == lines.size()) { break; }
        if (lines[lineID].size() < 1) { lineID++; continue; }

        if (lines[lineID][0] == '(')
        {
            level++;
            if (level != 1) { entityData.push_back(lines[lineID]); }
            lineID++;
            continue;
        }

        if (level == 0) { break; }

        if (lines[lineID][0] == ')')
        {
            level--;
            if (level != 1) { entityData.push_back(lines[lineID]); }
            lineID++;
            continue;
        }

        if (level == 1)
        {
            if (lines[lineID].find("Entity") != std::string::npos)
            {
                if (instEntity == 1)
                {
                    Entity* e = new Entity("_TempChild");
                    std::string entityString = String::Join(entityData, '\n');
                    e->Deserialize(scene, entityString, this);
                    entityData.clear();

                    unsigned int id = scene->GetIDGenerator()->Next();
                    e->mSceneID = id;
                    scene->GetSearchMap()->Add(id, e->GetName(), e);
                }

                // Offset line here by 1, in case name contained 'Entity' so we don't double-hit keyword
                instEntity = 1;
                entityData.push_back(lines[lineID]);
                lineID++;
            }

            // Push line
            entityData.push_back(lines[lineID]);
        }
        else
        {
            // Push line
            entityData.push_back(lines[lineID]);
        }

        lineID++;
    }

    // If there is one more, instantiate it
    if (instEntity == 1)
    {
        Entity* e = new Entity("_TempChild");
        std::string entityString = String::Join(entityData, '\n');
        e->Deserialize(scene, entityString, this);
        entityData.clear();

        unsigned int id = scene->GetIDGenerator()->Next();
        e->mSceneID = id;
        scene->GetSearchMap()->Add(id, e->GetName(), e);
    }
}
```

This will load the whole scene and instantiate it into the current scene graph (before loading, the scene is cleared so that only the root remains).

Showtime

And of course - the save/load system has to be shown off!
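As a side note on the resource references above: a minimal mock of the name-to-resource lookup might look like this (ResourceDatabase and Resource are my placeholder names, not the engine's actual classes):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Placeholder resource record - in a real engine this would be a texture,
// mesh, material, etc. loaded from the project folder at editor startup.
struct Resource
{
    std::string path;
};

// Key-value database: key is the resource name, value is a pointer to the
// already-loaded resource, so serialized lines like "cube_Cube_Cube.001_0"
// resolve directly to loaded objects.
class ResourceDatabase
{
public:
    void Add(const std::string& name, Resource* r)
    {
        mResources[name] = r;
    }

    // Returns nullptr when the project folder didn't contain the resource
    Resource* Get(const std::string& name) const
    {
        auto it = mResources.find(name);
        return it == mResources.end() ? nullptr : it->second;
    }

private:
    std::unordered_map<std::string, Resource*> mResources;
};
```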
  14. Vilem Otte

    Crowdsourcing projects.

    When targeting a very specific group of people, you often don't reach them through a Kickstarter or Indiegogo campaign - it is often far easier to reach them directly. You may get some funding from them for your project, but the main problem I have with any kind of funding is that you have to actually present them the project. At which point, you generally have enough resources to finish it without additional funding and sell it.
  15. Vilem Otte

    Multiplayer Bandwidth

    Whenever you do any project, you need to do the economic math correctly. Most of us (who do business) got some of it right and some wrong. The problem is that whatever you get wrong or miss - you're going to pay for it. The successful ones often tend to overestimate expenses a bit, while underestimating income.

    First, this: Not really. I can disclose some numbers for one of the testing servers, as they're publicly known. For personal projects and testing I pay for a virtual server - to put it straight, you pay CZK 90 per 'unit' (which is $3.93) per month. This unit has 1 virtual CPU and 1 GB of memory. Along with that you get 25 GB of space (as part of the price). The service has 'unlimited' bandwidth and an 'unlimited' amount of data transferred. In reality that means you can't be billed for using as much bandwidth and transmitting as much data as possible - because these units run virtually on multiple physical servers, each of them on at least a 1 Gbps network, which greatly reduces the actual bandwidth and the amount of data you can transfer monthly. Also, as a single physical machine holds multiple units, due to the FUP you may find your bandwidth rather limited.

    Of course you can house your own server (they do offer it), yet the pricing starts adding up (I believe it starts around $25 - $50 per connection per month just for the connection; you have to provide the hardware and work with it yourselves). So, assuming $50 per connection (1 Gbps) and an additional $50 for space, you get to $100 per server - and in this case you would clearly need 70 of them. That sums up to $7000 just for housing and connection.
    Keeping 70 servers running, with hardware and software on them, requires support staff - actually multiple people. I think the wiki states 25 - 50 devices per single tech support person; let's go with the lower estimate of 25, which means you need 3 tech supports. The average monthly salary for IT tech support in my country is CZK 36104 (CZK 26941 net) ~ stats from the Czech Statistical Office - that is CZK 48380 before all taxes, which is $2112, summing up to $6336. So currently you can keep the hardware running at roughly $13336 per month in my country, excluding the actual cost of the hardware. And excluding the cost of software development, of course.

    This literally means that unless each player pays you at least $0.54 each month, your business is going into red numbers really fast. If you add software development, feel free to double or even triple that amount. And with 25,000 players you're going to need some kind of support staff - additional employees. You can save some money by not having offices (and using contractors instead), which may or may not be better for you, depending on how good you are at managing remote workers. I intentionally didn't count in the hardware cost (yes, you can go for cloud services without having tech support for the hardware - you will still need them for software, so 2 people could handle it - yet there would be an additional cost for actually using the cloud, not just connection/housing, which would mix up the calculation a little).

    My point is, economic planning is a very critical part of the project - assuming you know what you're doing. The main problem I see is that you're trying to figure out the economics of your project while not having a prototype or anything - don't do that.

    Now, my 50c - I have done quite a bit in networking (and I still do in some projects from time to time - sadly, for most of them I can't disclose actual numbers, or source ... unless I'd want to get to jail real fast), and most of the networking business, even for larger-scale projects, was actually done on that $3.93 virtual server.
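The back-of-the-envelope math above can be condensed into a few lines (all inputs are the rough figures quoted above, not real price quotes):

```cpp
#include <cassert>
#include <cmath>

// Monthly cost of keeping 70 self-housed servers running: housing/connection
// plus tech-support salaries. Figures match the rough estimates in the post.
double MonthlyInfrastructureCost()
{
    const int servers = 70;
    const double costPerServer = 100.0;     // $50 connection + $50 space
    const int supportStaff = 3;             // ~25 machines per support person
    const double salaryPerSupport = 2112.0; // CZK 48380 incl. all taxes, in USD

    return servers * costPerServer + supportStaff * salaryPerSupport;
}

// Minimum each player has to pay per month just to break even on infrastructure
double BreakEvenPerPlayer(int players)
{
    return MonthlyInfrastructureCost() / players;
}
```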