

Popular Content

Showing content with the highest reputation since 09/21/18 in all areas

  1. 6 points
    The past few days I have been playing with the Godot engine with a view to using it for some Gamedev challenges. Last time, for Tower Defence, I used Unity, but updating it has broken my version, so I am going to try a different 'rapid development' engine. So far I've been very impressed by Godot: it installs down to a small size, isn't bloated and slow with a million files, and it is very easy to change versions of the engine (I compiled it from source to have the latest version as an option). Unfortunately, after working through a couple of small tutorials, I get the impression that Godot suffers from the same frame judder problem I had to deal with in Unity. Let me explain (tipping a hat to Glenn Fiedler's article).

Some of the first games ran on fixed hardware, so timing was not a big concern: each 'tick' of the game was a frame that was rendered to the screen. If the screen rendered at 30fps, the game ran at 30fps for everyone. This was used on PCs for a bit, but the problem was that some hardware was faster than others, and there were some games that ran too fast or too slow depending on the PC. Clearly something had to be done to enable games to deal with different CPU speeds and refresh rates.

Delta Time?

The obvious answer was to sample a timer at the beginning of each frame, and use the difference (delta) in time between the current frame and the previous one to decide how far to step the simulation. This is great, except that things like physics can produce different results when you give them shorter and longer timesteps; for instance, a long pause while jumping due to a hard disk whirring could give enough time for your player to jump into orbit. Physics (and other logic) tends to work best and be simpler when given fixed, regular intervals. Fixed intervals also make it far easier to get deterministic behaviour, which can be critical in some scenarios (lockstep multiplayer games, recorded gameplay etc.).
Fixed Timestep

If you know you want your gameplay to have a 'tick' every 100 milliseconds, you can calculate how many ticks should have completed by the start of any frame:

// some globals
int iCurrentTick = 0;

void Update()
{
    // Assuming our timer starts at 0 on level load:
    // (ideally you would use a higher resolution than milliseconds, and watch for overflow)
    int iMS = gettime();

    // ticks required since start of game
    int iTicksRequired = iMS / 100;

    // number of ticks that are needed this frame
    iTicksRequired -= iCurrentTick;

    // do each gameplay / physics tick
    for (int n = 0; n < iTicksRequired; n++)
    {
        TickUpdate();
        iCurrentTick++;
    }

    // finally, the frame update
    FrameUpdate();
}

Brilliant! Now we have a constant tick rate, and it deals with different frame rates. Providing the tick rate is high enough (say 60 ticks per second), the positions when rendered look kind of smooth. This, ladies and gentlemen, is about as far as Unity and Godot typically get.

The Problem

However, there is a problem, and it can be illustrated by taking the tick rate down to something that could be considered 'ridiculous', like 10 or fewer ticks per second. The problem is that frames don't coincide exactly with ticks. At a low tick rate, several frames will be rendered with dynamic objects in the same position before they 'jump' to the next tick position. The same thing happens at high tick rates: if the tick rate does not exactly match the frame rate, you will get some frames that have 1 tick, some with 0 ticks, some with 2. This appears as a 'jitter' effect. You know something is wrong, but you can't put your finger on it.

Semi-Fixed Timestep

Some games attempt to fix this by running as many fixed timesteps as possible within a frame, then a smaller timestep to make up the difference to the delta time. However, this brings with it many of the same problems we were trying to avoid by using a fixed timestep (especially the lack of deterministic behaviour).
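As a rough sketch of that semi-fixed approach (names and the variable-dt tick interface are hypothetical, not from the engines discussed): split the frame delta into whole fixed ticks plus one smaller remainder tick.

```cpp
#include <algorithm>
#include <vector>

// Semi-fixed timestep sketch: split a frame's delta into as many full fixed
// ticks as fit, plus one smaller tick for the remainder. Returns the list of
// step sizes that would be passed to a variable-dt TickUpdate().
// Note this is exactly the variant the text warns costs determinism.
std::vector<float> SemiFixedSteps(float frameDeltaSeconds, float tickSeconds = 0.1f)
{
    std::vector<float> steps;
    float remaining = frameDeltaSeconds;
    while (remaining > 1e-6f)
    {
        float step = std::min(remaining, tickSeconds); // last step may be partial
        steps.push_back(step);
        remaining -= step;
    }
    return steps;
}
```

With a 100 ms tick, a 250 ms frame would yield two full ticks and one 50 ms remainder tick.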
Interpolation

The established solution commonly used to deal with both these extremes is to interpolate, usually between the current and previous values for position, rotation etc. Here is some code:

// ticks required since start of game
iTicksRequired = iMS / 100;

// remainder
iMSLeftOver = iMS % 100;

// ... gameplay ticks

// finally, the frame update
float fInterpolationFraction = iMSLeftOver / 100.0f;
FrameUpdate(fInterpolationFraction);

.....

// very pseudocodey, just an example of translate for one object
void FrameUpdate(float fInterpolationFraction)
{
    // where pos is a Vector3 translate
    m_Pos_render = m_Pos_previous + ((m_Pos_current - m_Pos_previous) * fInterpolationFraction);
}

The more astute among you will notice that if we interpolate between the previous and current positions, we are actually interpolating *back in time*. We are in fact going back by exactly 1 tick. This results in smooth movement between positions, at the cost of a 1 tick delay.

'This delay is unacceptable!', you may be thinking. However, the chances are that many of the games you have played have had this delay, and you have not noticed. In practice, fast twitch games can set their tick rate higher to be more responsive. Games where this isn't so important (e.g. RTS games) can reduce processing by dropping the tick rate. My Tower Defence game runs at 10 ticks per second, for instance, and many networked multiplayer games will have low update rates and rely on interpolation and extrapolation.

I should also mention that some games attempt to deal with the 'fraction' by extrapolating into the future rather than interpolating back a tick. However, this can bring in new sets of problems, such as lerping into colliding situations, and snapping.

Multiple Tick Rates

Something which doesn't get mentioned much is that you can extend this concept and have different tick rates for different systems.
You could, for example, run your physics at 30tps (ticks per second) and your AI at 10tps (an exact multiple, for simplicity), or use tps to scale down processing for faraway objects.

How do I retrofit frame interpolation to an engine that does not support it fully?

With care, is the answer, unfortunately. There appears to be some support for interpolation in Unity for rigid bodies (Rigidbody.interpolation), so this is definitely worth investigating if you can get it to work; I ended up having to support it manually (ref 7) (if you are not using the internal physics, the internal mechanism may not be an option). Many people have had issues dealing with jitter in Godot, and I am as yet not aware of support for interpolation in 3.0 / 3.1, although there is some hope of allowing interpolation from the Bullet physics engine in the future.

One option for engine devs is to leave interpolation to the physics engine. This would seem to make a lot of sense (avoiding duplication of data, a global mechanism); however, there are many circumstances where you may not wish to use physics but still use interpolation (short of making everything a kinematic body). It would be nice to have internal support of some kind, but if this is not available, to support this correctly you should explicitly separate the following:

1. transform CURRENT (tick)
2. transform PREVIOUS (tick)
3. transform RENDER (where to render this frame)

The transform depends on the engine and object, but it will typically be things like translate, rotate and scale which would need interpolation. All of these should be accessible from the game code, as they all may be required, particularly 1 and 3. 1 would be used for most gameplay code, and 3 is useful for frame operations like following a player with a camera.
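A minimal sketch of that three-transform separation, translation only (the struct and method names are made up for illustration; rotation and scale would follow the same pattern, with slerp for rotations):

```cpp
struct Vec3 { float x, y, z; };

inline Vec3 Lerp(const Vec3 &a, const Vec3 &b, float f)
{
    return { a.x + (b.x - a.x) * f,
             a.y + (b.y - a.y) * f,
             a.z + (b.z - a.z) * f };
}

// Sketch of the CURRENT / PREVIOUS / RENDER separation for a translate-only
// "transform". RENDER is derived each frame and never fed back into physics.
struct InterpolatedTransform
{
    Vec3 current  {0, 0, 0}; // written by gameplay ticks
    Vec3 previous {0, 0, 0}; // snapshot taken at the start of each tick
    Vec3 render   {0, 0, 0}; // where to draw this frame

    void BeginTick(const Vec3 &newPos)
    {
        previous = current;
        current = newPos;
    }

    void UpdateRender(float interpolationFraction)
    {
        render = Lerp(previous, current, interpolationFraction);
    }
};
```

Gameplay code reads `current`; a follow-camera would read `render`.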
The problem that exists today in some engines is that in some situations you may wish to manually move a node (for interpolation), and this in turn throws the physics off, etc., so you have to be very careful shoehorning these techniques in.

Delta Smoothing

One final point to totally throw you. Consider that typically we have been relying on a delta (difference) in time measured from the start of one frame (as seen by the app) to the start of the next frame (as seen by the app). However, in modern systems, the frame is not actually rendered between these two points. The commands are typically issued to a graphics API but may not actually be rendered until some time later (consider the case of triple buffering). As such, the delta we measure is not actually the time difference between the 2 rendered frames, it is the delta between the 2 submitted frames. A dropped frame may, for instance, show very little difference in the delta between the submitted frames, but double the delta between the rendered frames.

This is somewhat a 'chicken and egg' problem: we need to know how long the frame will take to render in order to decide what to render, and where; but in order to know how long the frame will take to render, we need to decide what to render, and where! On top of this, a dropped frame 2 frames ago could cause an artificially high delta in later submitted frames if they are capped to vsync!

Luckily, in most cases the solution is to stay well within performance bounds and keep a steady frame rate at the vsync cap. But in any situation where we are on the border of getting dropped frames (perhaps on a high refresh rate monitor?) it becomes a potential problem. There are various strategies for trying to deal with this, for instance smoothing delta times, or working with multiples of the vsync interval, and I would encourage further reading on this subject (ref 3).
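One of the strategies mentioned, working with multiples of the vsync interval, could be sketched like this (just one possible approach, not a recommendation from the engines discussed):

```cpp
#include <cmath>

// Delta-smoothing sketch: snap the measured frame delta to the nearest whole
// multiple of the vsync interval, so small timer noise around the refresh
// period does not cause tick-count jitter. Assumes vsync is actually on.
float SnapDeltaToVsync(float measuredDelta, float vsyncInterval)
{
    float multiples = std::round(measuredDelta / vsyncInterval);
    if (multiples < 1.0f)
        multiples = 1.0f; // never report less than one refresh period
    return multiples * vsyncInterval;
}
```

At 60 Hz, a noisy 16.9 ms measurement snaps to exactly one refresh (16.67 ms), while a genuinely dropped frame (33.9 ms) snaps to two.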
References

1. https://gafferongames.com/post/fix_your_timestep/
2. http://www.kinematicsoup.com/news/2016/8/9/rrypp5tkubynjwxhxjzd42s3o034o8
3. http://frankforce.com/?p=2636
4. http://fabiensanglard.net/timer_and_framerate/index.php
5. http://lspiroengine.com/?p=378
6. http://www.koonsolo.com/news/dewitters-gameloop/
7. https://forum.unity.com/threads/case-974438-rigidbody-interpolation-is-broken-in-2017-2.507002/
  2. 3 points
    Edit: Oops, I did not read the paper well enough - my answer only covers how to relax UVs. I don't know anything about the indirection texture trick for POM. But for terrain, the resulting UVs should resolve the stretching well.

-----------

I do this a lot, but I have more complicated constraints, so I don't have a full example; I'll just describe the algorithm.

Preprocessing:

First you start with the planar projection you already have to get initial UVs for each triangle (each triangle has its own 3 UVs - they are not shared). For each UV triangle: calculate its area center and set this as the origin for the triangle. Also store a 2D projection of the 3D triangle on its normal plane (orientation does not matter) and set its area center as the origin as well. Calculate the average scaling from 3D to UV to obtain a target area for each UV triangle. Finally, for each triangle you have this data:

struct
{
    vec2 centerUV;
    vec2 UVs[3];
    vec2 projectedFrom3D[3];
    float targetUVArea;
};

Iterative solver:

1. For each triangle: update the UVs to the target shape and set its orientation so the angular momentum relative to the current UV coords is zero (see code snippet), and scale to match the target area.
2. For each vertex: calculate the averaged merged UV coord from all adjacent UV triangles (weighting by inverse area should work best). Keep the boundary vertices fixed if you want.
3. From those averaged vertex UVs, calculate new triangle centers and UVs. Continue with step 1 until converged.

The algorithm is quite simple but robust if the distortion is not too big (which for a heightmap should never happen). Notice that there is solid research in this field, e.g. 'Least Squares Conformal Maps' or 'As Rigid As Possible parametrization', if naive approaches like mine or the paper you've mentioned are not good enough.
static float Deformed2DTriangleRotation (const vec2 *posDeformed, const vec2 *posReference)
// both triangles must have their area center at (0,0)
{
    float aMc = 0;
    float aMs = 0;
    for (int i=0; i<3; i++)
    {
        float sql = (posDeformed[i].SqL() + posReference[i].SqL()) * 0.5f; // SqL means squared length, so just the dot product with itself

        vec2 a0 = posDeformed[i];
        vec2 a1 (a0[1], -a0[0]);

        float angle = atan2 (a1.Dot(posReference[i]), a0.Dot(posReference[i]));

        aMc += cos(angle) * sql; // * mass[i];
        aMs += sin(angle) * sql; // * mass[i];

        // alternative without trig - todo: faster
        //aMs += a1.Unit().Dot(posReference[i].Unit()) * sql;
        //aMc += a0.Unit().Dot(posReference[i].Unit()) * sql;
    }
    return atan2 (aMs, aMc);
}

posDeformed means the UV coords of the triangle, and posReference is its target shape, so the projection from 3D. The function returns just an angle, so step 1 looks like this:

for each triangle
{
    float rotationAngle = Deformed2DTriangleRotation (triangle.UVs, triangle.projectedFrom3D);
    triangle.UVs[0..2] = Rotate (triangle.projectedFrom3D[0..2], rotationAngle); // Edit: more likely use -rotationAngle
    SetTriangleArea (triangle.UVs, triangle.targetUVArea); // could optimize this away by doing it on the precomputed targetUVArea
}

So you set the UVs to their target shape in each iteration, but you minimize the amount of rotation caused by moving centers, which is the central idea of the algorithm.
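Solver step 2 (merging per-triangle UVs back into a single coord per shared vertex) might look roughly like this; the adjacency layout and names here are assumptions for illustration, since the original keeps per-triangle UVs unshared:

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Which corner of which triangle touches a given shared vertex, plus that
// triangle's weight (inverse area, as suggested in the post).
struct TriRef { std::size_t triangle; int corner; float weight; };

// Sketch of solver step 2: for one shared vertex, take the weighted average
// of the UV coords proposed by all adjacent triangles.
Vec2 AverageVertexUV(const std::vector<TriRef> &adjacent,
                     const std::vector<std::array<Vec2, 3>> &triangleUVs)
{
    Vec2 sum {0.0f, 0.0f};
    float totalWeight = 0.0f;
    for (const TriRef &ref : adjacent)
    {
        const Vec2 &uv = triangleUVs[ref.triangle][ref.corner];
        sum.x += uv.x * ref.weight;
        sum.y += uv.y * ref.weight;
        totalWeight += ref.weight;
    }
    return { sum.x / totalWeight, sum.y / totalWeight };
}
```

Step 3 would then recompute each triangle's center from its three averaged vertex UVs before the next iteration.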
  3. 2 points
    The cross product of (1,0,0) and (0,1,0) is (0,0,1) no matter what handedness you are using. Handedness only enters when you assign (1,0,0) the meaning of "to the right of the screen", (0,1,0) the meaning of "upwards along the screen" and (0,0,1) the meaning of "perpendicular to the screen away from the viewer", for example. Now in order to determine the sign of rotations and the sign of the cross product you can use a hand, and you'll find out whether this is a left-handed system or a right-handed system.
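A quick numeric check of the point above - the component formula for the cross product contains no handedness at all (a minimal sketch):

```cpp
struct Vec3 { float x, y, z; };

// The component formula for the cross product. Nothing in it depends on
// whether the coordinate system is interpreted as left- or right-handed;
// handedness only appears when you assign screen-space meanings to the axes.
Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```

(1,0,0) x (0,1,0) comes out as (0,0,1) regardless of interpretation.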
  4. 2 points
    Hi bilbo92, I read all of your post but the last optional section. I am not a "professional in the industry"; I have, however, been learning programming languages and writing software since the early 90s. I worked for 13 years as a Computer Systems Engineer for Internet Service Providers and Telephone Companies. I understand the networking side of programming fairly well, and I've been toying with game programming as a hobby since I started programming in the 90s.

I'm not going to try to tell you how long your projects could take; that's not possible. I will try to break down what I'm doing, and hopefully it will help you get the picture you are looking for. I am one developer, and I challenged myself to construct an MMO. Open world with procedural everything, so I don't have to make the environment myself. I'm using Unity for the game client, so I don't have to write my own engine. I'm using a 3rd party customizable avatar system, so I don't have to do any character modeling. So, in essence, I'm building a close analog to what you are talking about.

I've spent 2 months developing the client (Unity) and server (C# with VS), and I'm probably at 25% of where I would call it a bare-bones functional alpha/demo release. And that's 2 months of 8+ hours a day dedicated to this project. I don't have any degrees or formal training, but I do have hard-earned knowledge and a few decades of relevant experience to draw upon. I will be honestly and truly surprised if I make my bare-bones/alpha release by fall/winter 2019. And I'm taking nearly every shortcut to this goal I can imagine.

Good luck out there, and I hope this helps a bit. Don't give up on your dreams, and really, don't worry about how long they will take to build. That just gives you reasons to quit before you start.
Edit: And to be fair I should add, the only reason I've been able to develop in Unity so rapidly these past two months is because previous to this project I spent almost a full year working on another Unity Multi-Player game that I have shelved for the time being. I learned a LOT on that project though.
  5. 2 points
    First, a brief description of what we're trying to achieve. We're basically trying for a single-shard MMO that will accommodate all players on a giant planet, say 2000 km radius. In order to achieve this we will need .... well .... a giant planet. This presents a problem, since the classical way to store worlds is triangles on disk, and a world this big simply wouldn't fit on a player's local computer. Therefore we won't generate world geometry on disk; we will instead define the majority of the terrain with functions and realize them into a voxel engine to get our meshes, using a classic marching algorithm....... well, actually a not so classic marching algorithm, since it uses prisms instead of cubes. Our voxel engine will necessarily make use of octrees to allow for the many levels of LOD and the relatively low memory usage we need to achieve this.

The work so far......

So below we have some test code that generates a sphere using the initial code of our Dragon Lands game engine.....

void CDLClient::InitTest2()
{
    // Create virtual heap
    m_pHeap = new(MDL_VHEAP_MAX, MDL_VHEAP_INIT, MDL_VHEAP_HASH_MAX) CDLVHeap();
    CDLVHeap *pHeap = m_pHeap.Heap();

    // Create the universe
    m_pUniverse = new(pHeap) CDLUniverseObject(pHeap);

    // Create the graphics interface
    CDLDXWorldInterface *pInterface = new(pHeap) CDLDXWorldInterface(this);

    // Create spherical world
    double fRad = 4.0;
    CDLValuatorSphere *pNV = new(pHeap) CDLValuatorSphere(fRad);
    CDLSphereObjectView *pSO = new(pHeap) CDLSphereObjectView(pHeap, fRad, 1.1, 10, 7, pNV);
    //pSO->SetViewType(EDLViewType::Voxels);
    pSO->SetGraphicsInterface(pInterface);

    // Create an astral reference from the universe to the world and attach it to the universe
    CDLReferenceAstral *pRef = new(pHeap) CDLReferenceAstral(m_pUniverse(), pSO);
    m_pUniverse->PushReference(pRef);

    // Create the camera
    m_pCamera = new(pHeap) CDLCameraObject(pHeap, FDL_PI/4.0, this->GetWidth(), this->GetHeight());
    m_pCamera->SetGraphicsInterface(pInterface);

    // Create a world tracking reference from the universe to the camera
    double fMinDist = 0.0;
    double fMaxDist = 32.0;
    m_pBoom = new(pHeap) CDLReferenceFollow(m_pUniverse(), m_pCamera(), pSO, 16.0, fMinDist, fMaxDist);
    m_pUniverse->PushReference(m_pBoom());

    // Set zoom speed in the client
    this->SetZoom(fMinDist, fMaxDist, 3.0);

    // Create the god object (build point for LOD calculations)
    m_pGod = new(pHeap) CDLGodObject(pHeap);

    // Create a reference for the god object and attach it to the universe
    CDLReferenceDirected *pGodRef = new(pHeap) CDLReferenceDirected(m_pUniverse(), m_pGod());
    pGodRef->SetPosition(CDLDblPoint(0.0, 0.0, -6.0), CDLDblVector(0.0, 0.0, 1.0), CDLDblVector(0.0, 1.0, 0.0));
    m_pCamera->PushReference(pGodRef);

    // Set the main camera and god object for the universe
    m_pUniverse->SetMainCamera(m_pCamera());
    m_pUniverse->SetMainGod(m_pGod());

    // Load and compile the vertex shader
    CDLUString clVShaderName = L"VS_DLDX_Test.hlsl";
    m_pVertexShader = new(pHeap) CDLDXShaderVertexPC(this, clVShaderName, false, 0, 1);

    // Attach the camera to the vertex shader
    m_pVertexShader->UseConstantBuffer(0, static_cast<CDLDXConstantBuffer *>(m_pCamera->GetViewData()));

    // Create the pixel shader
    CDLUString clPShaderName = L"PS_DLDX_Test.hlsl";
    m_pPixelShader = new(pHeap) CDLDXShaderPixelGeneral(this, clPShaderName, false, 0, 0);

    // Create a rasterizer state and set to wireframe
    m_pRasterizeState = new(pHeap) CDLDXRasterizerState(this);
    m_pRasterizeState->ModifyState().FillMode = D3D11_FILL_WIREFRAME;

    m_pUniverse()->InitFromMainCamera();
    m_pUniverse()->WorldUpdateFromMainGod();
}

And here's what it generates. OK, so it looks pretty basic, but it actually is a lot more complex than it looks. What it looks like is a subdivided icosahedron, but this is only partially true. The icosahedron is extruded to form 20 prisms. Then each of those is subdivided into 8 sub-prisms, and its children are subdivided, and so forth. Subdivision only takes place where there is function data (i.e. the function changes sign).
    This presents a big problem: how do we know where sign changes exist? Well, we don't, so we have to go down the tree and look for them at the resolution we are currently displaying. Normally that would require us to build the trees all the way down just to look for sign changes, but since this is expensive we don't want to do it, so instead we do a "ghost walk". We build a lightweight tree structure which we use to find where data exists. We then use that structure to build the actual tree where voxels are needed.

So I guess the obvious question is: why is this any better than simply subdividing an icosahedron and perturbing the vertexes to get mountains, oceans and the like? The answer is that this system will support full-on 3D geometry. You can have caves, tunnels, overhangs, really whatever you can come up with. It's not just limited to height maps. Here's what our octree (or rather, 20 octrees) looks like: OK, it's a bit odd looking, but so far so good.

Now we come to the dreaded voxel LOD and .....gulp .... chunking.

So, as would be expected, we need a higher density of voxels close to the camera and a lower density farther away. We implement this by defining chunks. Each chunk is a prism somewhere in the tree(s) and contains sub-prism voxels. Chunks farther away from the camera are larger and those closer to the camera are smaller, the same as with the voxels they contain. In the following picture we set the god object (the object that determines our point for LOD calculation) to a fixed point so we can turn the world around and look at the chunk transitions. However, normally we would attach it to the camera or the player's character, so all chunking and LOD would be based on the player's perspective. As you can see, there are transitions from higher level chunks to lower level chunks and their corresponding voxels.
    In order to do this efficiently we divide the voxels in each chunk into two groups, center voxels and border voxels, and so each chunk can have two meshes that it builds. We do this so that if one chunk is subdivided into smaller chunks, we only need to update the border mesh of neighboring chunks, so there is less work to be done and less data to send back down to the GPU. Actual voxel level transitions are done by special algorithms that handle each possible case. The whole transition code was probably the hardest thing to implement, since there are a lot of possible cases.

What else..... Oh yeah, normals....... You can't see it with wireframe, but we do generate normals. We basically walk around mesh vertexes in the standard way and add up the normals of the faces. There are a couple of complexities, however. First, we have to account for vertexes along chunk boundaries, and second, we wanted some way to support ridge lines, like you might find in a mountain range. So each edge in a mesh can be set to sharp, which causes duplicate vertexes with different normals to be generated.

Guess that's it for now. We'll try to post more stuff when we have simplex noise in and can generate mountains and such. Also we have some ideas for trees and houses, but we'll get to that later.
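The "add up the normals of the faces" step above can be sketched like so; this is a minimal generic version with assumed types, ignoring the chunk-border bookkeeping and the sharp-edge vertex duplication the post describes:

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Smooth normal generation sketch: accumulate each face's (area-weighted)
// normal onto its three vertices, then normalize the sums.
std::vector<Vec3> BuildSmoothNormals(const std::vector<Vec3> &positions,
                                     const std::vector<std::array<std::uint32_t, 3>> &tris)
{
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (const auto &t : tris)
    {
        // Cross product of two edges gives a normal scaled by twice the face area
        Vec3 faceN = Cross(Sub(positions[t[1]], positions[t[0]]),
                           Sub(positions[t[2]], positions[t[0]]));
        for (std::uint32_t idx : t)
        {
            normals[idx].x += faceN.x;
            normals[idx].y += faceN.y;
            normals[idx].z += faceN.z;
        }
    }
    for (Vec3 &n : normals)
    {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```

Sharp ridge lines would then be handled by duplicating vertices along marked edges so each side accumulates its own normal, as the post notes.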
  6. 1 point
    I like your voice, but I would like to hear more than just a 4.5 second bit. The question you need to ask yourself is: can you keep doing that voice for 4+ hours? Keep at it though!
  7. 1 point
    I found the solution. I had to transform the normal vectors too, with this formula:

Normal = mat3(transpose(inverse(model))) * aNormal;
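The inverse-transpose matters whenever the model matrix has a non-uniform scale. A minimal CPU-side illustration of why (restricted to a pure diagonal scale, so the inverse is trivial; names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// For a model matrix that is a pure non-uniform scale (sx, sy, sz), the
// inverse-transpose is simply diag(1/sx, 1/sy, 1/sz). Transform the normal
// by that and renormalize, instead of scaling it like a position.
Vec3 TransformNormalDiagonalScale(const Vec3 &n, float sx, float sy, float sz)
{
    Vec3 r { n.x / sx, n.y / sy, n.z / sz };
    float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return { r.x / len, r.y / len, r.z / len };
}
```

For a surface with normal (1,1,0) scaled by 2 along x, the surface tangent (1,-1,0) becomes (2,-1,0); the transformed normal stays perpendicular to it, which the plain model matrix would not preserve.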
  8. 1 point
    Thanks for the feedback, glad it works for you. Albert
  9. 1 point
    @Alberth, just as a matter of feedback: worked fine. Thank you! I tested the idea with a Hangman game and I started coding the shoot 'em up game; the Aircraft is already on screen. Oh, and no more 20+ parameter lists 😆 Thanks again!

int main()
{
    World world;
    GameObject aircraft(sf::Vector2f(50.f, 50.f));

    // Sprite Component constructor takes two parameters directly,
    // without the nested constructors, and registers itself in the World's Renderer class
    aircraft.addComponent<SpriteComponent>(&aircraft, Sprites::Aircraft);

    // Same thing as Sprite Component
    aircraft.addComponent<TextComponent>(&aircraft, Fonts::Arial);

    // ... other components: no more deeply nested constructors with 20+ parameter lists :)

    while (world.getWindow().isOpen())
    {
        processInput();
        update(dt); // dt: per-frame sf::Time delta
        renderer.draw();
    }
    return 0;
}
  10. 1 point
    Usually via YouTube, if it's a lengthy video with minor interruption. That said, I do like to listen to CDs that do not have vocals - such as a film soundtrack. They are a good way to judge how much work you are actually getting done, by pausing every time there is an interruption.

Edit: sorry, I didn't realise there was a previous page and an actual question being asked. I honestly don't have a problem with running music on the same machine, unless you want background music to be uninterrupted when you are not always using the computer to do all your work. For example, you might be doing sketches by hand or trying out a tune on your geee-tarrr. That said, it's just given me an idea, as I do my development at home: if you have background music - not head-banging or too loud - it might be an indication to others that you are working and not to be interrupted...