
Search the Community

Showing results for tags 'Optimization'.







Found 66 results

  1. I've been experimenting with my own n-body simulation for some time, and I recently discovered how to optimize it for efficient multithreading and vectorization with the Intel compiler. It produced exactly the same results after I made it multithreaded, and it scaled very well on my ancient i7 3820 (4.3 GHz). Then I changed the interleaved xy coordinates to separate arrays for x and y to eliminate the strided loads and improve AVX scaling, and I copy the coordinates to an interleaved array for OpenTK to render as points. Now the physics is all wrong: the points form clumps that interact with each other, but they are unusually dense and accelerate faster than they decelerate, causing the clumps to randomly fly off into the distance, and after several seconds I get a NaN where two points somehow occupy exactly the same x and y float coordinates.

     This is the C++ DLL:

         #include "PPC.h"
         #include <thread>

         static const float G = 0.0000001F;
         const int count = 4096;
         __declspec(align(64)) float pointsx[count];
         __declspec(align(64)) float pointsy[count];

         void SetData(float* x, float* y) {
             memcpy(pointsx, x, count * sizeof(float));
             memcpy(pointsy, y, count * sizeof(float));
         }

         void Compute(float* points, float* velx, float* vely, long pcount, float aspect, float zoom) {
         #pragma omp parallel for
             for (auto i = 0; i < count; ++i) {
                 auto forcex = 0.0F;
                 auto forcey = 0.0F;
                 for (auto j = 0; j < count; ++j) {
                     if (j == i) continue;
                     const auto distx = pointsx[i] - pointsx[j];
                     const auto disty = pointsy[i] - pointsy[j];
                     //if(px != px) continue; //most efficient way to avoid a NaN failure
                     const auto force = G / (distx * distx + disty * disty);
                     forcex += distx * force;
                     forcey += disty * force;
                 }
                 pointsx[i] += velx[i] -= forcex;
                 pointsy[i] += vely[i] -= forcey;
                 if (zoom != 1) {
                     points[i * 2] = pointsx[i] * zoom / aspect;
                     points[i * 2 + 1] = pointsy[i] * zoom;
                 }
                 else {
                     points[i * 2] = pointsx[i] / aspect;
                     points[i * 2 + 1] = pointsy[i];
                 }
                 /*points[i * 2] = pointsx[i];
                 points[i * 2 + 1] = pointsy[i];*/
             }
         }

     This is the relevant part of the C# OpenTK GameWindow:

         private void PhysicsLoop() {
             while (true) {
                 if (stop) {
                     for (var i = 0; i < pcount; ++i) { velx[i] = vely[i] = 0F; }
                 }
                 if (reset) {
                     reset = false;
                     var r = new Random();
                     for (var i = 0; i < Startcount; ++i) {
                         do {
                             pointsx[i] = (float)(r.NextDouble() * 2.0F - 1.0F);
                             pointsy[i] = (float)(r.NextDouble() * 2.0F - 1.0F);
                         } while (pointsx[i] * pointsx[i] + pointsy[i] * pointsy[i] > 1.0F);
                         velx[i] = vely[i] = 0.0F;
                     }
                     NativeMethods.SetData(pointsx, pointsy);
                     pcount = Startcount;
                     buffersize = (IntPtr)(pcount * 8);
                 }
                 are.WaitOne();
                 NativeMethods.Compute(points0, velx, vely, pcount, aspect, zoom);
                 var pointstemp = points0;
                 points0 = points1;
                 points1 = pointstemp;
                 are1.Set();
             }
         }

         protected override void OnRenderFrame(FrameEventArgs e) {
             GL.Clear(ClearBufferMask.ColorBufferBit);
             GL.EnableVertexAttribArray(0);
             GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
             mre1.Wait();
             are1.WaitOne();
             GL.BufferData(BufferTarget.ArrayBuffer, buffersize, points1, BufferUsageHint.StaticDraw);
             are.Set();
             GL.VertexAttribPointer(0, 2, VertexAttribPointerType.Float, false, 0, 0);
             GL.DrawArrays(PrimitiveType.Points, 0, pcount);
             GL.DisableVertexAttribArray(0);
             SwapBuffers();
         }

     These are the array declarations:

         private const int Startcount = 4096;
         private readonly float[] pointsx = new float[Startcount];
         private readonly float[] pointsy = new float[Startcount];
         private float[] points0 = new float[Startcount * 2];
         private float[] points1 = new float[Startcount * 2];
         private readonly float[] velx = new float[Startcount];
         private readonly float[] vely = new float[Startcount];

     Edit 0: It seems that adding three zeros to G increases the accuracy of the simulation, but I'm at a loss as to why it behaves differently without interleaved coordinates.

     Edit 1: I somehow achieved an 8.3x performance increase with AVX over scalar with the new code above!
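     A possible explanation to investigate (an editor's sketch, not a confirmed diagnosis): in the loop above, pointsx[i] and pointsy[i] are updated in place while other iterations of the parallel loop are still reading them as pointsx[j]/pointsy[j], so each step mixes old and new positions and the result depends on thread scheduling; the lack of a softening term also lets the denominator reach zero when two points coincide, which produces the NaN. Below is a minimal C# sketch of the same inner loop with double-buffered positions and a softening constant (buffer and constant names are illustrative, not from the post):

         using System.Threading.Tasks;

         static class NBodyStep
         {
             const float G = 0.0000001f;
             const float Softening = 1e-9f;   // keeps the denominator from ever reaching zero

             // Reads only posxRead/posyRead and writes only posxWrite/posyWrite;
             // the caller swaps the two buffer pairs after each step.
             public static void Compute(float[] posxRead, float[] posyRead,
                                        float[] posxWrite, float[] posyWrite,
                                        float[] velx, float[] vely, int count)
             {
                 Parallel.For(0, count, i =>
                 {
                     float forcex = 0f, forcey = 0f;
                     for (int j = 0; j < count; ++j)
                     {
                         if (j == i) continue;
                         float dx = posxRead[i] - posxRead[j];
                         float dy = posyRead[i] - posyRead[j];
                         float f = G / (dx * dx + dy * dy + Softening);
                         forcex += dx * f;
                         forcey += dy * f;
                     }
                     velx[i] -= forcex;
                     vely[i] -= forcey;
                     // No iteration ever observes another iteration's write during this step.
                     posxWrite[i] = posxRead[i] + velx[i];
                     posyWrite[i] = posyRead[i] + vely[i];
                 });
             }
         }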
  2. For reference, I am using Unity as my game engine and the A* Pathfinding Project for pathfinding, as there is no chance I would be able to create anything close to as performant as that in any reasonable amount of time. I am looking to build a game that is going to have a very similar style to Prison Architect / Rim World / SimAirport / etc. One of the things that I assume is going to affect performance is pathfinding. Decisions about the game I have already made that I think relate to this are:
     1. While I am going to be using colliders, all of them will be trigger colliders so everything can pass through everything else, and I will not be using physics for anything else as it has no relevance for my game.
     2. I am going to want a soft cap on the map size of 300x300 (90,000 tiles); I might allow bigger sizes but do something like Rim World does in warning the player about possible side effects (whether performance or gameplay).
     3. The map will be somewhat dynamic in that the user will be able to build / gather stuff from the map, but outside of that it should not change very much.
     Now, I am going to build my game around the idea that users would be in control of no more than 50 pawns at any given time (which is something I can probably enforce through the gameplay), but I am also going to want a number of other pawns that are AI controlled on the map (NPCs, animals, etc.) that would also need pathfinding enabled. I did a basic test in which I have X number of pawns pick a random location in the 300 x 300 map, move towards it, and then change the location every 3-5 seconds. My initial test was pretty slow (not surprising, as I was calculating the path every frame for each pawn), so I decided to cache the calculated path results and only update them every 2 seconds, which got me:
     100 pawns: 250 - 450 FPS
     150 pawns: 160 - 300 FPS
     200 pawns: 90 - 150 FPS
     250 pawns: 50 - 100 FPS
     There is very little extra happening in the game outside of rendering the tilemap. I would imagine the most pawns on the map at a given time that need pathfinding might be 1000 (and I could probably make do with 500 - 600). Obviously I would not need all the pawns to be calculating paths every 2 seconds, nor would they need to be calculating paths that are so long, but even at a 5-second path refresh rate and paths that are up to 10 tiles long, I am still only able to get to about 400 pawns before I start to see some big performance issues. The issue with reducing the refresh rate is that there are going to be cases where, say, a wall is built before the pawn's path is refreshed, having them walk through the wall, and I'm not sure whether there is a clean way to update the path only when needed (one possible approach is sketched below). I am sure that when I don't run the game in the Unity editor I will see increased performance, but I am just trying to figure out what I could be doing to make sure pathfinding is as small a performance hit as possible, as there is a lot of other simulation stuff I am going to want to run on top of it.
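     A sketch of the "update only when needed" idea (an editor's outline, not something taken from the A* Pathfinding Project's API; class names are illustrative): keep a lookup from tile to the pawns whose cached path crosses that tile, and when construction blocks a tile, repath only those pawns instead of refreshing everyone on a timer.

         using System.Collections.Generic;
         using UnityEngine;

         // Stand-in for whatever the pawn script exposes; RequestRepath would queue
         // a new path request with the pathfinding system.
         public class PawnAgent : MonoBehaviour
         {
             public void RequestRepath() { /* queue a path request here */ }
         }

         public class PathInvalidationIndex
         {
             // Tile coordinate -> pawns whose cached path passes through that tile.
             readonly Dictionary<Vector2Int, HashSet<PawnAgent>> byTile
                 = new Dictionary<Vector2Int, HashSet<PawnAgent>>();

             public void RegisterPath(PawnAgent pawn, IEnumerable<Vector2Int> pathTiles)
             {
                 foreach (var tile in pathTiles)
                 {
                     if (!byTile.TryGetValue(tile, out var pawns))
                         byTile[tile] = pawns = new HashSet<PawnAgent>();
                     pawns.Add(pawn);
                 }
             }

             // Call this when a wall (or anything else blocking) is built on a tile.
             public void OnTileBlocked(Vector2Int tile)
             {
                 if (!byTile.TryGetValue(tile, out var pawns)) return;
                 foreach (var pawn in pawns)
                     pawn.RequestRepath();    // only the affected pawns pay the A* cost
                 pawns.Clear();
             }
         }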
  3. Hi, I want to learn to program games for PC and maybe Android. Sorry in advance if I don't know the correct terms. The game I have in mind will be relatively simple (at least that's what I hope). It should be turn based, with a map and different locations the player can switch between. At the different locations there will be simple minigames, sometimes with simple animations. There is a storyline, and the player can sometimes choose between different outcomes. I would like to include character progression in the form of attributes and an inventory, and I would also like to include a (turn-based) battle system. I've seen that there are some Flash games out there that have similar elements to what I have planned for my game. The thing is, I don't know anything about Flash and have read that it's not worth learning anymore. Since I have some basic skills in HTML and CSS, I thought it would be better to give HTML5 and JavaScript a try (I'd have to learn JavaScript, though). But I am not sure whether it's really a good choice, since all the HTML5 games I've seen so far are either action shooters and/or have really crappy graphics compared to Flash games. In addition, I have no clue whether the game I want to create is even possible with either Flash or HTML5/JavaScript. Another issue is that Flash needs Adobe Flash, and that's about $900 a year, money I don't have at the moment. It's been a while since I made something with HTML and CSS, but I remember that there were a lot of problems with compatibility across the different browsers. I assume the same problems still exist and apply to games too? I would be really happy if someone could enlighten me as to the best options for getting into the gaming business. What would be the best way, and what do I have to learn?
  4. A few years ago I started creating a procedural planet engine/renderer for a game in Unity, which after a couple of years I had to stop developing due to lack of time. At the time I didn't know too much about shaders, so I did everything on the CPU. Now that I have plenty of time and am more aware of what shaders can do, I'd like to resume development with the GPU in mind. For the terrain mesh I'm using a cubed sphere and chunked LODs. The way I calculate heights is rather complex, since it's based on a noise tree, where leaf nodes are noise generators like Simplex, Value, Sine, Voronoi, etc. and branch nodes are filters and modifiers (FBM, abs, neg, sum, ridged, billow, blender, selectors, etc.). To calculate the heights for a mesh you'd call void CalcHeights( Vector3[] meshVertices, out float[] heights ) on the noise's root node, somewhere in a Unity script. This approach offers a lot of flexibility but also puts a lot of load on the CPU. The first obvious thing to do would be (I guess) to move all generators to the GPU via compute shaders, then do the same for the rest of the filters. But, depending on the complexity of the noise tree, a single call to CalcHeights could potentially cause dozens of calls back and forth between the CPU and GPU, which I'm not sure is a good thing. How should I go about this? Thanks.
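     One way to keep it to a single round trip per chunk (an editor's sketch, not the only option): flatten the noise tree into an array of node records, upload that array once, and let one compute-shader kernel walk the whole tree for every vertex, so the CPU issues one dispatch and one readback per mesh regardless of tree depth. A C# outline of the CPU side, assuming a compute shader that declares a matching NoiseNode struct and a "CalcHeights" kernel (all names here are illustrative):

         using UnityEngine;

         struct NoiseNode                 // must match the struct declared in the compute shader
         {
             public int type;             // e.g. 0 = simplex, 1 = fbm, 2 = abs ... (illustrative encoding)
             public int childA, childB;   // indices of input nodes, -1 if unused
             public Vector4 param;        // frequency, amplitude, offsets, etc.
         }

         public class GpuHeightEvaluator
         {
             public ComputeShader shader;     // assumed to contain the "CalcHeights" kernel

             public float[] CalcHeights(NoiseNode[] flatTree, Vector3[] meshVertices)
             {
                 int kernel = shader.FindKernel("CalcHeights");

                 var nodeBuffer   = new ComputeBuffer(flatTree.Length, sizeof(int) * 3 + sizeof(float) * 4);
                 var vertexBuffer = new ComputeBuffer(meshVertices.Length, sizeof(float) * 3);
                 var heightBuffer = new ComputeBuffer(meshVertices.Length, sizeof(float));

                 nodeBuffer.SetData(flatTree);
                 vertexBuffer.SetData(meshVertices);
                 shader.SetBuffer(kernel, "_Nodes", nodeBuffer);
                 shader.SetBuffer(kernel, "_Vertices", vertexBuffer);
                 shader.SetBuffer(kernel, "_Heights", heightBuffer);
                 shader.SetInt("_VertexCount", meshVertices.Length);

                 // One dispatch evaluates the entire tree for every vertex of the chunk.
                 shader.Dispatch(kernel, (meshVertices.Length + 63) / 64, 1, 1);

                 var heights = new float[meshVertices.Length];
                 heightBuffer.GetData(heights);    // the single (synchronous) readback per chunk

                 nodeBuffer.Release();
                 vertexBuffer.Release();
                 heightBuffer.Release();
                 return heights;
             }
         }

     The readback shown here is still synchronous; spreading chunk generation over frames with AsyncGPUReadback can hide that latency.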
  5. Reading this post left a doubt in my mind: Doom (2016) just upgraded its OpenGL drivers from 4.5 to 4.6; is there any difference in performance? For example, if I create an OpenGL 3.3 context window (I'm following the learnopengl tutorials), render just a rotating triangle, and get 1000 fps, and then I change the context to 4.0 (without using shaders or any 3.4-4.0 features), will I get fewer fps (at least 1 less, i.e. 999)? Does the same apply to a 4.5 vs. 4.6 window/game? Edit: I'm asking about resource usage as well as fps.
  6. Hello GameDev users. I am new to this forum, and though I am quite proficient at understanding the basics of just about any form of science, I do not have the natural skill or patience needed to become a great programmer, so I ask that you be patient and bear with me. To get to the point, I have thought up a peripheral that could potentially be used for various PC games. Without giving away too many details to those not interested in contributing, I will give as informative an explanation as possible. In theory, it would be activated during intense danger sequences, such as an FPS firefight, melee combat and fight scenes in medieval-style RPGs, etc. The difficulty is not in designing the device, but in implementing the program that would detect these scenes in-game. The software would be for Windows PCs and made to work with as many games as possible. I have some experience with how modding works, and have considered how it could be used alongside currently released games such as Fallout 4, Battlefield, Call of Duty, Elder Scrolls Skyrim, etc. However, I simply do not know enough about programming to determine the viability of integrating features into the software that would allow the danger scenes and combat gameplay of already-released games to trigger the software to activate the peripheral. Essentially, I am wondering whether this is something that could be integrated. If so, how could these combat sequences be detected by the software? If you are interested and have something important to contribute, I can promise you a copy of the software and the option to purchase the peripheral at cost if you would like. Depending on your motivation, I am open to working together. Even if you are not highly experienced, everyone has to learn somewhere. Whether or not you are interested, I would be grateful for any advice you have to offer. So, any ideas for how something like this could be implemented?
  7. It's been almost a week since release, but we've managed to make several important updates to the game and are ready to give more. So what about the updates? There are 4 of them, and all of them are small preparatory steps toward the next big one, which is Customization. What we have now that we didn't have at the moment of the last update: statistics for victories and defeats in each game mode and the best result in the menu (before, you could only see the best result at the end of each match); usability improvements in some menus (such as the ability for the client to close statistics for themselves, where before it was done by the server; additional information so players know more about what they are doing; new visual controls; and much more); and some optimization improvements to the menu level. What next? We are now at update 1.4; when we reach 2.0 it will be Customization. Before that there will be several small updates concerning other things.
  8. Ruslan Sibgatullin

    How I halved apk size

    Originally posted on Medium. You coded your game hard for several months (or even years), your artist made a lot of high-quality assets, and the game is finally ready to be launched. Congratulations! You did a great job. Now take a look at the apk size and be prepared to be scared. What is the size: 60, 70 or even 80 megabytes? As strange as it might sound in the era of 128GB smartphones, I have some bad news: that size is too big. That's exactly what happened to me after I finished the game Totem Spirits. In this article I want to share several pieces of advice on how to reduce the size of a release apk file without losing quality. Please note that for development I used the quite popular game development engine Libgdx, but the tips below should be applicable to other frameworks as well. Moreover, my case is a rather simple 2D game with a lot of sprites (i.e. images), so it might not be that useful for large 3D products. To keep you motivated to read this article further, I want to share the final result: I managed to halve the apk size, from 64MB to 32.36MB. Memory management. The very first thing that needs to be done properly is memory management. You should always have only the necessary objects loaded into memory and release resources once they are not in use. This topic requires a lot of detail, so I'd rather cover it in a separate article. Next, I want to analyze the size of the current apk file. My game has four different types of game resources: 1. Intro: the resources for the intro screen (the intro background). Loaded before the game starts, disposed immediately after the loading is done. (~0.5MB) 2. In-menu resources: used in the menu only (location backgrounds, buttons, etc.). Loaded during the intro stage and when a player exits a game level, disposed during "in-game resources" loading. (~7.5MB images + ~5.4MB music) 3. In-game resources: used on game levels only (objects, game backgrounds, etc.). Loaded during game level loading, disposed when a player exits the game level. Note that those resources are not disposed when a player navigates between levels. (~4.5MB images + ~10MB music) 4. Common: used in all three above. Loaded during the intro stage, disposed only once the game is closed. This one also includes fonts. (~1.5MB) The summed size of all resources is ~30MB, so we can conclude that the size of the apk is basically the size of all its assets. The code base is only ~3MB. That's why I want to focus on the assets in the first place (still, the code will be discussed too). Images optimization. The first thing to do is to make the images smaller without harming their quality. Fortunately, there are plenty of services that offer exactly this. I used this one. This resulted in an 18MB reduction already! Compare the two images below: not optimized and optimized. The sizes are 312KB and 76KB respectively, so the optimized image is 4 times smaller, yet a human eye can't notice the difference. Images combination. You should combine similar images programmatically rather than shipping several almost identical images (especially if they are quite big). Consider the following example (before and after: God of Fire, God of Water). Rather than having four full-size images with different Gods but the same background, I have only one big background image and four smaller images of Gods that are then combined programmatically into one image. Although the reduction is not as big (~2MB), in some cases it can make a difference. Images format. I consider this my biggest mistake so far. I had several images without transparency saved in PNG format. The JPG version of those images is 6 times lighter! Once I converted all images without transparency to JPG, the apk became 5MB smaller. Music optimization. At first the music quality was 256 kbps. Then I reduced it to 128 kbps and saved 5MB more. I still think the tracks could be compressed even more; please share in the comments if you have ever used 64 kbps in your games. Texture packs. This item might be a bit Libgdx-specific, although I think similar functionality exists in other engines as well. A texture pack is a way to organize a bunch of images into one big pack. Then, in code, you treat each pack as one unit, so it's quite handy for memory management. But you should combine images wisely. As for my game, at first I had the resources packed quite badly. Once I separated all transparent and non-transparent images, I gained about 5MB more. Dependencies and an optimal code base. Now let's look at the other side of the development process: coding. I will not dive into too many details about code-writing here (since it deserves a separate article as well), but I still want to share some general rules that I believe can be applied to any project. The most important thing is to reduce the number of third-party dependencies in the project. Do you really need to add Apache Commons if you use only one method from StringUtils? Or Gson if you just don't like the built-in JSON functionality? Well, you do not. I used Libgdx as the game development engine and am quite happy with it; I'm quite sure that for the next game I'll use this engine again. Oh, and do I need to say that you should write your code in the most optimal way? :) Well, I mentioned it. Although most of the tips I've shared here can be applied at a late development stage, some of them (especially memory management) should be designed in right from the very beginning of a project. Stay tuned for more programming articles!
  9. Hi, my name is Carlos Coronado, and I am a gamedev. Recently (April 2018) I released Infernium for Steam, Humble, Switch and PS4 using the Unreal Engine. I've gotten a lot of questions via Twitter asking how I released the game solo on Switch, and I figured it would be cool to explain it in a video! Oh, and I also invited Alexander, ex community manager of Epic Games, to help me explain the feedback. Anyway, I hope you find it useful. Cheers, Carlos.
  10. MiniAlfa

    Horrible soundtrack

    I am now making a soundtrack for my game, but halfway through I realized it was horrible. Can somebody please give me some tips to improve it? PS: I used Bosca Ceoil to make it. PPS: I'm not English, so don't get upset about my English. 1.wav
  11. What a great week it's been on the development front. Completed the coding and testing of the Master/Login servers, built the standalone client, added in the new chat services we have been working on... the list goes on. But, as with anything great, you take the good with the bad. I messed up the repository by trying to sneak some changes into a file, accidentally deleted the repo copy aaand... lost a couple of days of work, but HEY! That's what makes this exciting, right? The development community has been amazingly helpful. A resource system is ready to implement, mounts are ready to implement, and updated GUI elements are now pending a push to Test. I spent a couple of hours today working with the community group testing an upgrade to the network layer. The results were outstanding. We capped out at 107 unique clients connected to the hosting server (a 4-CPU, 3.3 GHz, 8 GB RAM machine) and there were no errors or hiccups. This was with 100+ people in a tiny area, all updating each other with network packets. It was a beautiful sight to see. We then ran a similar test with the old network code, and the server ended up melting down at 80 clients in the same general area. We started to see errors at 50, but the whole thing went south for the winter at just over 80. So what does this mean for indie MMO development? Let's put it in perspective: Path of Exile never really has more than 20 people in a town at a time, World of Warcraft rarely has 100 people in close proximity (as in field of view, top LoD), even Elder Scrolls Online rarely sees 100+ player battles in close proximity, and Albion Online turns into a slideshow with 60+ people in a zone. I for one was very, very pleased with the network results. This tells us we can have hundreds of players in an instance and a large portion of them in very close proximity (towns and cities, anyone?). A massive amount of work has gone into the Unity HLAPI-CE network layer and it is really starting to show. Big props to vis2K and Paul and the rest of the development community for their work on that asset. This can change indie game development in such a positive way. Next steps? I am going to implement some of the new systems into the game, like mounts, GUI updates and harvesting. These are foundational, allow for testing and need time for debugging. I've had the servers up for 4 days now and everything is running awesome. The database is happy as a clam, the chat servers are good and... once I fix my boo-boo with the client (related to the chat system, but it desynced the entire client build, argh!) we'll be in great shape! At this point I am comfortable saying that I anticipate putting "Milestone 1: Servers and core infrastructure" behind us this weekend and moving on to feature implementation. The faction system is coming along well; I watched a test of AI fighting each other based on faction checks, very cool. Building an MMO is a massive, just a sec, need that to sink in... I mean MASSIVE, with triple bold capital flashing letters, M A S S I V E, undertaking. Taking a project-based approach, defining sprints and milestones and stabilizing your core game systems is, in my opinion, the only way to start. It's not about fireballs and story writing or anything else. Having the coolest fireball spell in the world means nothing if the server desyncs every time you cast it. I am hoping to put the website for the game back up "soon"(tm), but I'm really focused on the nuts and bolts right now and not trying to make the project look all snazzy.
I anticipate having some pretty cool screens and our first video footage in the next 2-3 weeks. That being said, I am out of town for a week shortly, so here's hoping I can get some stuff done. If any of you are experienced Unity3D world builders with a keen sense of poly optimization, LoD and occlusion, feel free to drop me a line. World building is tremendously fun, but... I will be the first to admit it's really not my forte. If you want a project to showcase your world-building talents and create some wicked in-game video of your worlds, we should talk. And remember... it's your world now!
  12. Awoken

    More Adventures in Robust Coding

    Hello GameDev, this entry is going to be a big one for me, and it's going to cover a lot. What I plan to cover on my recent development journey is the following: 1 - Goal of this blog entry. 2 - Lessons learned using Node.js for development and testing as opposed to the Chrome console. 3 - Linear path algorithm for any surface. 4 - Dynamic pathfinding using nodes for any surface, incorporating user-created, dynamic assets. 5 - Short-term goals for the game. -- - -- - -- - -- - -- - -- - Goal of this blog entry - -- - -- - -- - -- - -- - -- My goal for this adventure is to create a dynamic pathfinding algorithm so that: any AI that is to be moved will be able to compute the shortest path between any two points on the surface of the globe; the AI will navigate around bodies of water, vegetation, and dynamic user assets such as buildings and walls; and it will compute the path in less than 250 milliseconds. There are a few restrictions the AI will have to follow. In the image above you can see that land masses that are cut off from one another via rivers and bodies of water are uniquely colored. If an AI is on a land mass of one color, for now, it will only be able to move to a location on the same-colored land mass. However, there are some land masses that take up around 50% of the globe and have very intricate river systems. So the intended goal is to be able to have an AI start on one end of the larger land mass and find the shortest path to the opposite end within 250 milliseconds. Currently my pathfinding algorithm can find the shortest path in anywhere from 10 ms and up (and when I say up, I mean upwards of 30 seconds), and that's because of the way I built the algorithm, which is in the process of being optimised. -- - -- - -- - -- - -- - -- - Lessons learned using Node.js for development and testing - -- - -- - -- - -- - -- - -- As of this writing I am using Node.js to test the efficiency of my algorithms. This has slowed down my development. I am not a programmer by trade; I've taught myself the bulk of what I know, and I often spend my time re-inventing the wheel and learning things the hard way. Last year I made the decision to move my project over to Node.js for continued development; eventually it all had to be ported over to Node.js anyway. In hindsight I would have done things differently: I would have continued to use the Chrome console for testing and development, small scale, and only after the code was proven to be robust would I port it over to Node.js. If there is one lesson I'd like to pass on to aspiring and new programmers, it's this: use a language and development environment that allows you, the programmer, to jump into the code while it's running and follow each iteration, line by line, as it's being executed; basically, debugging. It is so easy to catch errors in logic that way. Right now I'm throwing darts at a dart board, guessing what I should be sending to the console for feedback to help me learn more about logical errors using Node.js (see "learning things the hard way"). -- - -- - -- - -- - -- - -- - Linear path algorithm for any surface - -- - -- - -- - -- - -- - -- In the blog entry above I go into detail explaining how I create a world. The important thing to take away from it is that every face of the world has information about all surrounding faces sharing vertex pairs. In addition, all vertices have information about the faces that use them in their draw order, and about all vertices that are adjacent to them.
    An example vertex and face object would look like the following:

        Vertices[ 566 ] = {
            ID: 566,
            x: -9.101827364,
            y: 6.112948791,
            z: 0.192387718,
            connectedFaceIDs: [ 90, 93, 94, 1014, 1015, 1016 ],   // clockwise order
            adjacentVertices: [ 64, 65, 567, 568, 299, 298 ]      // clockwise order
        }

        Face[ 0 ] = {
            ID: 0,
            a: 0,
            b: 14150,
            c: 14149,
            sharedEdgeVertices: [ { a: 14150, b: 14149 }, { a: 0, b: 14150 }, { a: 14149, b: 0 } ],   // named 'cv' in previous blog post
            sharedEdgeFaceIDs: [ 1, 645, 646 ],   // named 's' in previous blog post
            drawOrder: [ 1, 0, 2 ]                // named 'l' in previous blog post
        }

    It turns out the algorithm is speedy for generating shapes of large sizes. My buddy, who is a Solutions Architect, told me I'm a one-trick pony, HA! Anyway, this algorithm comes in handy because now, if I want to identify a linear path along all faces of a surface (marked as a white line in the picture above), I can reduce the number of faces to be tested during raycasting to the number of faces the path travels across * 2. To illustrate, imagine taking a triangular pizza slice which is made of two faces, back to back. The tip of the pizza slice touches the center of the shape you want to find a linear path along, and the two outer points of the slice protrude out from the surface of the shape some distance so as to entirely clear the shape. When I select my starting and ending points for the linear path, I also retrieve the face information those points fall on, respectively. Then I raycast between the sharedEdgeVertices, targeting the pizza slice. If, say, a hit happens along sharedEdgeVertices[ 2 ], then I know the next face to test for the subsequent raycast is face ID 646; I also know that since the pizza slice comes in at sharedEdgeVertices[ 2 ], it is most likely going out at sharedEdgeVertices[ 1 ] or [ 0 ]. If not [ 1 ], then I know it's 99% likely going to be [ 0 ], and vice versa. Being able to identify a linear path along any surface was the subject of my first Adventure in Robust Coding. Of course there are exceptions that need to be accounted for, such as when the pizza slice straddles the edge of a face, or when the pizza slice exits a face at a vertex. Sometimes, though, when I'm dealing with distances along the surface of a given shape where the pizza slice needs to be made up of more than one set of back-to-back faces, another problem can arise: I learned about the limitations of floating point numbers too, or at least that's what it appears to be to me. I'm sure most of you are familiar with some variation of the infinite chocolate bar puzzle. So with floating point numbers I learned that you can have two faces share two vertices along one edge, raycast at a point that is directly between the edges of the two connecting faces, and occasionally the raycast will miss hitting either of the two faces. I attribute this in large part to the fact that floating point numbers only capture an approximation of a point, not the exact point. Much like in the infinite chocolate bar puzzle, where there exists a tiny gap along the slice equal in size to the removed piece, that tiny gap sometimes causes a miss for the raycast. If someone else understands this better, please correct me. -- - -- - -- - -- - -- - -- - Dynamic pathfinding using nodes for any surface - -- - -- - -- - -- - -- - -- Now that I've got the linear path algorithm working in tip-top shape, I use it in conjunction with nodes to create the pathfinding algorithm. Firstly I identify the locations for all nodes.
    I do this using a class I created called Orientation Vector; I mention them in the blog post above. When they're created, they have a position vector, a pointTo vector, and an axis vector. The beauty of this class is that I can merge them, which averages their position, pointTo, and axis vectors; I can rotate them along any axis; and I can move them any distance along the axis of their pointTo vector. To create the shoreline collision geometry and node collision geometry illustrated above, and the node locations along shorelines illustrated below, I utilise the Orientation Vector class. Firstly, the water table for the world is set to an arbitrary value, right now 1.08, so if one vertex of a given face falls below the table and one or two vertices are above the table, then I know the face is a shoreline face. Then I use simple math to determine at which two points the face meets the water and create two OVectors, each pointing at the other. Then I rotate them along their y axis by 90 and -90 degrees respectively, so that they are now facing inland. Since the shoreline faces touch one another, there will be duplicate OVectors at each point along the shore. However, each OVector will have a pointTo vector relative to its sister OVector from creation. I merge the paired OVectors at each point along the shore, which averages their position, pointTo and axis, and I then move them inland a small distance. The result is the blue arrows above. The blue arrows are the locations of three of the thousands of nodes created for a given world. Each node has information about the shoreline collision geometry, the node collision geometry (the geometry connecting nodes), and the node to its left and the node to its right. Each face of collision geometry is given a node ID to refer to. So, to create the pathfinding algorithm, I first identify the linear path between the starting and ending points. I then test each segment of the linear path for collision geometry. If I get a hit, I retrieve the node ID. This gives me the location of the node associated with that face of collision geometry. I then travel left and right along connecting nodes, checking to see whether a new linear path to the end point is possible; if no immediate collision geometry is encountered, the process continues and is repeated as needed. Subsequently, a list of points is established, marking the beginning, the encountered nodes, and the end of the line of travel. The list is then trimmed by testing linear paths between every third point; if a valid path is found, the middle point is spliced out. Then all possible trimmed paths are measured for distance, and the shortest one wins. Below is the code for the algorithm I currently use. It's my first attempt at using classes to create an algorithm; previously I just relied on elaborate arrays. I plan on improving the process mentioned above by keeping track of distance as each path spreads out from its starting location. Only the path which is shortest in distance will go through its next iteration. With this method, once a path to the end is found, I can bet it will be the shortest, so I won't need to compute all possible paths like I am now. The challenge I've been facing for the past two months is that sometimes the nodes end up in the water. The picture above shows a shoreline where the distance the OVectors travel would place them in the water.
    Once a node is in the water, the AI is allowed to move to it; there is then no shoreline collision geometry for it to encounter, which would otherwise keep it on land, and so the AI just walks into the ocean. Big Booo! I've been writing variations of the same function to correct the location of the geometry shown in red and yellow below. But what a long process. I've rewritten this function time and time again. I want it to be, well, as the title of this blog states, robust, but it's slow going. As of today's date, it's not robust, and the optimised pathfinding algorithm hasn't been written either. I'll be posting updates in this blog entry as I make progress towards my goal. I'll also mention the shortest and longest times I achieve for pathfinding; hopefully they'll be below 250 ms. -- - -- - -- - -- - -- - -- - Short-term goals for the game - -- - -- - -- - -- - -- - -- Badly... SO BADLY I want to be focusing on game content; that's all I've been thinking about. Argh. But this all has to get wrapped up before I can. I got ahead of myself; I'm guilty of being too eager. But there is no sense building game content on top of an engine which is prone to errors. My immediate goals for the engine are as follows:

        // TO DO's //
        // Dec 26th 2017 //
        /*
         * << IN PROGRESS >> -update path node geometry so no errors occur
         * -improve path finding alg with new technique
         * -improve client AI display -only one geometry for high detail, and one for tetrahedron.
         * -create ability to select many AI at the same time by drawing a rectangle by holding mouse button.
         * -create animation server to receive a path and process animation, and test out in client with updates.
         * -re-write geometry merging function so that the client vertices and faces have a connected Target ID
         * -incorporate dynamic asset functionality into client.
         * -create a farm and begin writing AI.
         * -program model clusters
         * -synchronize server and client AI. Test how many AI and how quickly AI can be updated. Determine rough estimate of number of players the server can support.
         */

    See the third-last one! That's the one; oh, what a special day that'll be. I've created a Project page, please check it out. It gives my best description to date of what the game is going to be about. Originally I was going to name it 'Seed'; a family member made the logo I use as my avatar and came up with the name back in 2014. The project is no longer going to be called Seed; it's instead going to be called Unirule. [ edit: 02/02/18 Some new screen shots to show off. All the new models were created by Brandross. There are now three earth materials: clay, stone and marble. There are also many types of animals and more tree types. ] Thanks for reading, and if you've got anything to comment on, I welcome it all. Awoken
  13. GRASBOCK WindyOrange

    #2 New System Works

    I finally got the new system working. I have never made that many mistakes in an algorithm at once, which explains why it took so long for me to post an update. Anyway, now I can run more than 10,000 humans at once (though for now they only walk randomly). The world has multiple noise maps overlapping each other, generating much more interesting terrain, and the system allows for efficient pathfinding to be implemented. Now I will be able to work on much more actual content: UI, pathfinding and the systems which drive the simulation will be a big task. The 3 pictures show the world at different states of expansion, the red dots being the humans that explore the world and thus generate new chunks when necessary. In case you have seen the pictures from my last entry and are wondering where the grass went: sorry about that. It will come back eventually.
  14. I have sprites that will be turned into animation images for the game actors. What would be the best way to change the weapon / armor for each actor? I.e. walking with a sword, swinging a sword, then when he equips an axe, walking with the axe, swinging the axe, etc. The same goes for armor. Should I have sheets with the weapons and armor and then overlay them onto the base sprite when the user changes the weapon, or have premade sheets with all of the various combos of armor / weapons that the soldier can have and then just grab the ones needed for the current selection? I'm thinking the first option is better, but are there any other better ways?
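     For reference, a minimal sketch of the first (overlay) option, assuming the weapon and armor sheets are authored frame-for-frame against the base animations so the layers stay in sync; the types here are illustrative stubs, not any particular engine's API:

         using System.Collections.Generic;

         public interface ISpriteSheet { object GetFrame(string animation, int frame); }
         public interface IRenderer2D { void Draw(object frame, float x, float y); }

         public class LayeredActorRenderer
         {
             public ISpriteSheet Body;                                                  // base actor frames
             public readonly List<ISpriteSheet> Equipment = new List<ISpriteSheet>();   // weapon, armor, ...

             // Draw the body frame first, then each equipped layer with the same
             // animation name and frame index, so swapping a weapon is just
             // replacing one entry in Equipment rather than baking new sheets.
             public void Draw(IRenderer2D target, string animation, int frame, float x, float y)
             {
                 target.Draw(Body.GetFrame(animation, frame), x, y);
                 foreach (var layer in Equipment)
                     target.Draw(layer.GetFrame(animation, frame), x, y);
             }
         }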
  15. When I use the ProOptimizer tool in 3ds Max and then export the file as a .fbx file, the file size seems to be bigger than the .fbx file that hasn't been through the ProOptimizer tool. This has confused me, as I thought reducing the polygons would decrease the file size. I believe the problem comes down to having the mesh as an editable poly and then adding the modifier. So I have several meshes in a 3ds Max scene. The meshes need to be combined into one. I do this by turning the meshes into editable polygons. After I turn each mesh into an editable polygon, I add a modifier to each mesh; this modifier is ProOptimizer or MultiRes. I then reduce the polygon count of each mesh. Once I have done that, I use the Attach option from the editable poly to combine the meshes into one. The issue is that once I click on Edit Poly after adding the modifier (ProOptimizer or MultiRes), a message appears stating "Modifier depends on topology", and when I continue by pressing Yes, the polygon count goes back up; thus the file size hasn't been reduced. In short:
     1. 3ds Max scene with multiple meshes.
     2. Change each mesh to an editable poly.
     3. Add a modifier to each mesh.
     4. The modifier is ProOptimizer or MultiRes.
     5. Reduce the poly count of each mesh using the modifier from step 4.
     6. Use the Attach feature from Edit Poly to combine all meshes into one.
     7. Upon selecting Edit Poly, a message appears on screen stating "modifier depends on topology".
     8. When Yes is selected, the polygon count goes back up and, as a result, the file size hasn't been reduced.
     I am now looking into how to first reduce the poly count of each mesh and then combine them into one without the poly count going back up. Any support would be much appreciated.
  16. It's for a 2D game, but the question is broader... Let's say I want to have some object (e.g. a projectile) interact with some other object (e.g. a button), so the projectile thrown by the player can trigger the button. I know there could be several ways of doing this, like the brute-force O(n^2) method, or the 'optimized' method using a quadtree or spatial hash... But I thought about another method and was wondering whether it's a good idea or not. It consists of iterating two times over the active game object list: the first pass looks for projectile objects, storing their pointers in an array; the second looks for button objects, checking whether a collision occurs with one of the projectiles in the array. Other specific collision checks could be done with this method, but that would need multiple pointer lists. Do you know how old games (like those on the Genesis; we're talking about 8 MHz CPUs...) achieved this? Should I just implement some spatial hashing and check all the collisions inside the restrained area, avoiding storing pointer lists? My levels would have about 1000 objects. I'm not that concerned about performance, since I know how to optimize, but more about finding an elegant/simple way of doing this. I'd like to keep the code small and maintainable.
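     A minimal sketch of the two-pass idea described above (type and method names are illustrative): the first pass collects the few projectiles, and the second tests only buttons against that short list, so the cost is O(n + buttons x projectiles) rather than O(n^2) over all pairs.

         using System.Collections.Generic;

         public enum ObjKind { Projectile, Button, Other }

         public class GameObj
         {
             public ObjKind Kind;
             public float X, Y, W, H;   // axis-aligned bounding box
             public bool Overlaps(GameObj o) =>
                 X < o.X + o.W && o.X < X + W && Y < o.Y + o.H && o.Y < Y + H;
         }

         public static class ButtonTriggers
         {
             static readonly List<GameObj> projectiles = new List<GameObj>();   // reused every frame

             public static void Update(List<GameObj> active)
             {
                 // Pass 1: gather projectiles.
                 projectiles.Clear();
                 foreach (var o in active)
                     if (o.Kind == ObjKind.Projectile) projectiles.Add(o);

                 // Pass 2: test only buttons against the (small) projectile list.
                 foreach (var o in active)
                 {
                     if (o.Kind != ObjKind.Button) continue;
                     foreach (var p in projectiles)
                         if (o.Overlaps(p)) { /* trigger the button here */ }
                 }
             }
         }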
  17. How do I unpack the frame buffer when it has been packed with the Compact YCoCg Frame Buffer technique?
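     For context, an editor's note under the assumption that this refers to the "Compact YCoCg Frame Buffer" technique of Mavridis and Papaioannou: the buffer stores luma in one channel and alternates Co/Cg per pixel in a checkerboard, so unpacking is a two-step reconstruction. First, the chroma value missing at each pixel is estimated from neighbouring pixels of the opposite parity; the paper uses an edge-aware filter that favours neighbours whose luma is close to the current pixel's, to avoid chroma bleeding across edges. Second, the recovered (Y, Co, Cg) triple is converted back to RGB with the standard inverse transform:

         R = Y + Co - Cg
         G = Y + Cg
         B = Y - Co - Cg

     The original paper and its sample shaders are the authoritative reference for the exact filter weights.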
  18. I'm making a small 2D engine using Kha, and I have a timer class which basically either waits a certain amount of time to call a function, or repeatedly calls a certain function every x seconds. I simply want to know whether I should have timers run on different threads. I'm aware that makes sense, but I might use many timers in a game, for example; would that still be okay? Also, I'm currently writing an animation component, which waits x seconds before drawing the next image, using the timer class. In a normal 2D game I would have many objects with animations on them, on top of the other timers. So I just wanted to ask people who have more experience and knowledge than I do what I should do for timers: either leave them on the main thread, or make them run on different threads. Thanks in advance.
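     For what it's worth, the common alternative to one thread per timer is a single timer manager ticked from the main loop; hundreds of timers then cost only one list iteration per frame, and callbacks never fire off the main thread. A sketch of the pattern, written here in C# rather than Haxe/Kha (all names are illustrative):

         using System;
         using System.Collections.Generic;

         public class Timer
         {
             public float Remaining;   // seconds until the callback fires
             public float Interval;    // > 0 means repeat with this period, 0 means one-shot
             public Action Callback;
         }

         public class TimerManager
         {
             readonly List<Timer> timers = new List<Timer>();

             public Timer Schedule(float delay, Action callback, float repeatEvery = 0f)
             {
                 var t = new Timer { Remaining = delay, Interval = repeatEvery, Callback = callback };
                 timers.Add(t);
                 return t;
             }

             // Call once per frame from the main loop with the frame's delta time.
             public void Update(float dt)
             {
                 for (int i = timers.Count - 1; i >= 0; i--)
                 {
                     var t = timers[i];
                     t.Remaining -= dt;
                     if (t.Remaining > 0f) continue;
                     t.Callback();
                     if (t.Interval > 0f) t.Remaining += t.Interval;   // repeating timer
                     else timers.RemoveAt(i);                          // one-shot: remove it
                 }
             }
         }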
  19. I have been doing research into optimising 3D models in 3ds Max. There seem to be so many different ways to optimise 3D models. I am unsure which method is the best and have been trying different tools, such as the ProOptimizer tool in 3ds Max. Does anyone know the best way to optimise 3D models in 3ds Max? I am trying to reduce the file size whilst maintaining a high-quality model, i.e. producing a low-polygon model which looks like a high-polygon model.
  20. I've been doing some research on delta compression (used and described in the Quake 3 networking doc: http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking), and I'm looking for some clarification on the topic, please, if possible. I understand that the delta-compressed state is the difference between any two given world states, meaning that if we store the state of every entity in a world state at every tick, the difference is only the states of the entities that changed between those two world states. Now, when sending the delta state back to the client, do we go as far as only including the properties that changed? Let's say a character moved only on the X axis but didn't move on the Y axis between two states: are we sending the client the whole state (x and y) or only the new x position? If it's the latter, and assuming there are a few more properties that describe a character, how can the client identify which properties have actually changed when rebuilding the information from the binary data?
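     One common answer to that last question (a sketch of the general pattern, not a claim about exactly what Quake 3 writes for every field): each entity's delta starts with a bitmask saying which properties follow, so the reader knows which fields to pull out of the stream and copies the rest from its baseline. A minimal C# illustration with a hypothetical two-field state:

         using System.IO;

         public struct CharacterState
         {
             public float X, Y;
         }

         public static class DeltaCodec
         {
             const byte XChanged = 1 << 0;
             const byte YChanged = 1 << 1;

             // Sender: write only the fields that differ from the baseline,
             // preceded by a bitmask describing which ones were written.
             public static void WriteDelta(BinaryWriter w, CharacterState baseline, CharacterState current)
             {
                 byte mask = 0;
                 if (current.X != baseline.X) mask |= XChanged;
                 if (current.Y != baseline.Y) mask |= YChanged;
                 w.Write(mask);
                 if ((mask & XChanged) != 0) w.Write(current.X);
                 if ((mask & YChanged) != 0) w.Write(current.Y);
             }

             // Receiver: start from the same baseline and overwrite only the flagged fields.
             public static CharacterState ReadDelta(BinaryReader r, CharacterState baseline)
             {
                 var result = baseline;
                 byte mask = r.ReadByte();
                 if ((mask & XChanged) != 0) result.X = r.ReadSingle();
                 if ((mask & YChanged) != 0) result.Y = r.ReadSingle();
                 return result;
             }
         }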
  21. Hi, I am looking for a TCP or HTTP networking library similar to Lidgren (UDP). This is primarily for sending game map data and potentially other large messages from server to client. I do want to keep Lidgren for my chat messages, player positions, small fast updates, etc. I especially love the flow of data and the library usage in general, so any libraries of a similar style would be excellent. Preferably something open source, free and reliable. I also must be able to swap between localhost and an IP address with ease, like Lidgren, as I run a server for singleplayer/MP/LAN. My game maps are similar to Minecraft, but 2D and with only one Z-level, so I'm sending a jagged array of Tile object data (currently only the enum TileID.Grass) down the pipe to the client. The problem is that if I'm sending a large map of 1024 x 1024 tiles down to the client, that's quite a lot of data, and Lidgren is relatively slow to build the writes (before the message is even sent!). It is fine when I'm using smaller maps < 512 x 512 (xTiles * yTiles). I know about chunking and will look into implementing it later, whilst taking into account the user's position in the world to only send nearby chunks. An example of my code that can be slow:

         private void WriteWorld(NetOutgoingMessage outgoing)
         {
             try
             {
                 var world = WorldManager.Instance.CurrentWorld;
                 outgoing.Write(world.XTiles);
                 outgoing.Write(world.YTiles);
                 for (int x = 0; x < world.XTiles; x++)
                 {
                     for (int y = 0; y < world.YTiles; y++)
                     {
                         // Write Tile obj data
                         outgoing.Write((int)world.Tiles[x][y]); // <-------- Slow here when xTiles and yTiles are each > 512 !
                     }
                 }
             }
             catch (Exception ex)
             {
                 // log send error
             }
         }

     I'd love to hear from you guys, especially if any of you have come across a similar challenge.
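     Regardless of which transport ends up carrying the map, one way to take the per-tile overhead out of the write path is sketched below. It reuses the post's own types, assumes tile IDs fit in a byte, and relies on NetOutgoingMessage having a Write overload for byte arrays (Lidgren does expose one, but check the exact API against the version in use):

         // Serialize the whole map into one byte[] and hand it to the message in a single call
         // instead of a million individual Write((int)...) calls.
         private void WriteWorldBulk(NetOutgoingMessage outgoing)
         {
             var world = WorldManager.Instance.CurrentWorld;
             outgoing.Write(world.XTiles);
             outgoing.Write(world.YTiles);

             var payload = new byte[world.XTiles * world.YTiles];
             int i = 0;
             for (int x = 0; x < world.XTiles; x++)
                 for (int y = 0; y < world.YTiles; y++)
                     payload[i++] = (byte)world.Tiles[x][y];   // assumes fewer than 256 tile types

             outgoing.Write(payload.Length);
             outgoing.Write(payload);
         }

     A mostly uniform map also compresses extremely well, so running the payload through System.IO.Compression.GZipStream (or a simple run-length encoding) before writing it can shrink a 1024 x 1024 map dramatically.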
  22. So I have hundreds of moving objects that need to check their speed. One of the reasons they need to check their speed is so they don't accelerate into oblivion as more and more force is added to each object. At first I was just using the Unity vector3.magnitude. However, this is actually very slow when used hundreds of times. Next I tried the dot-product check: vector3.dot(this.transform.forward, ShipBody.velocity). The performance boost was fantastic, but this only measures speed in the forward direction, resulting in bouncing objects accelerating way past the allowed limit. I am hoping someone knows a good way for me to check the speed accurately that is fast on the CPU, or just any magnitude calculations that I can test when I get home later. What if I used vector3.dot(ShipBody.velocity.normalized, ShipBody.velocity)? How slow is it to normalize a vector compared to asking for its magnitude?
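     A common answer (a sketch, assuming a Unity Rigidbody named ShipBody as in the post): compare squared magnitudes against the squared limit. Vector3.sqrMagnitude is just x*x + y*y + z*z with no square root, and it measures the full velocity rather than only the forward component:

         using UnityEngine;

         public class SpeedLimiter : MonoBehaviour
         {
             public Rigidbody ShipBody;
             public float maxSpeed = 50f;

             void FixedUpdate()
             {
                 float maxSqr = maxSpeed * maxSpeed;
                 // sqrMagnitude avoids the square root entirely; the sqrt hidden inside
                 // .normalized is only paid on the frames where the limit is exceeded.
                 if (ShipBody.velocity.sqrMagnitude > maxSqr)
                     ShipBody.velocity = ShipBody.velocity.normalized * maxSpeed;
             }
         }

     Normalizing a vector costs a square root plus a divide, so it is in the same ballpark as magnitude; comparing squared values sidesteps both.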
  23. Hello, I am trying to make a GeometryUtil class that has methods to draw points, lines, polygons, etc. I am trying to make a method to draw a circle. There are many ways to draw a circle; I have found two. The first way:

         public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) {
             int x = 0, y = radius;
             int d = 3 - 2 * radius;
             while (y >= x) {
                 drawCirclePoints(batch, centerX, centerY, x, y, color);
                 if (d <= 0) {
                     d = d + 4 * x + 6;
                 } else {
                     y--;
                     d = d + 4 * (x - y) + 10;
                 }
                 x++;
                 //drawCirclePoints(batch,centerX,centerY,x,y,color);
             }
         }

         private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) {
             drawPoint(batch, centerX + x, centerY + y, color);
             drawPoint(batch, centerX - x, centerY + y, color);
             drawPoint(batch, centerX + x, centerY - y, color);
             drawPoint(batch, centerX - x, centerY - y, color);
             drawPoint(batch, centerX + y, centerY + x, color);
             drawPoint(batch, centerX - y, centerY + x, color);
             drawPoint(batch, centerX + y, centerY - x, color);
             drawPoint(batch, centerX - y, centerY - x, color);
         }

     The other way:

         public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) {
             Vector2[] vertices = new Vector2[segments];
             double increment = Math.PI * 2.0 / segments;
             double theta = 0.0;
             for (int i = 0; i < segments; i++) {
                 vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y);
                 theta += increment;
             }
             drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA);
         }

     In the render loop:

         polygonSpriteBatch.begin();
         Bitmap.drawBresenhamCircle(polygonSpriteBatch, 500, 300, 200, ColorRGBA.Blue);
         Bitmap.drawCircle(polygonSpriteBatch, new Vector2(500, 300), 200, 5, 50, 255, 0, 0, 255);
         polygonSpriteBatch.end();

     I am trying to choose one of them, so I thought I should go with the one that does not involve heavy calculations and is more efficient and faster. It is said that the use of floating point numbers, trigonometric operations, etc. slows things down a bit. What do you think would be the best method to use? When I compared the code by measuring the time taken from the start of the method to the end, it showed that the second one is faster (I think I am doing something wrong here). Please help! Thank you.
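     If the worry about the second method is the per-vertex sin/cos, one standard optimization is to call sin/cos once for the angular increment and rotate the previous point by that fixed angle each iteration. A sketch of the idea in C# (the same recurrence translates directly to the Java code above):

         using System;

         public static class CircleVertices
         {
             // Builds the polygon vertices for a circle using the rotation recurrence
             // x' = x*cos(a) - y*sin(a), y' = x*sin(a) + y*cos(a),
             // so Sin/Cos are evaluated only once regardless of the segment count.
             public static (float x, float y)[] Build(float cx, float cy, float radius, int segments)
             {
                 var pts = new (float x, float y)[segments];
                 double a = 2.0 * Math.PI / segments;
                 float cos = (float)Math.Cos(a), sin = (float)Math.Sin(a);
                 float x = radius, y = 0f;                 // start at angle 0
                 for (int i = 0; i < segments; i++)
                 {
                     pts[i] = (cx + x, cy + y);
                     float nx = x * cos - y * sin;         // rotate by the fixed increment
                     y = x * sin + y * cos;
                     x = nx;
                 }
                 return pts;
             }
         }

     Whether either per-frame cost matters at all depends on how often the circle changes; if the radius and segment count are fixed, the vertices can simply be computed once and reused every frame.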
  24. Hi, I am trying to implement a custom texture atlas creator tool in C++ and need suggestions regarding a fast, open-source API or library for image import and export. Also, this tool will compress the final output atlas image into multiple formats like DXT5, PVRTC and ETC based on user input; what would be the best way to implement this? Thanks
  25. Hi guys, there are many ways to do light culling in tile-based shading. I've been playing with this idea for a while and just want to throw it out there. Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test. On top of that, I use distance to camera rather than depth for the near/far test (aka sliced by spheres). This method can be naturally extended to clustered light culling as well. The following image shows the general idea. Performance-wise I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs in the following: from left to right, (1) standard rendering of a point light; then the tiles that passed (2) the sphere-frustum test, (3) the cone test, and (4) the spherical-sliced cone test. I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included! Eric