Showing results for tags 'Optimization'.

Found 60 results

  1. Hello GameDev users. I am new to this forum, and though I am quite proficient at understanding the basics of just about any form of science, I do not have the natural skill or patience needed to become a great programmer, so I ask that you be patient and bear with me. To get to the point, I have thought up a peripheral that could potentially be used with various PC games. Without giving away too many details to those not interested in contributing, I will give as informative an explanation as possible. In theory, it would be activated during intense danger sequences, such as an FPS firefight, melee combat, or fight scenes in medieval-style RPGs. The difficulty is not in designing the device, but in implementing the program that would detect these scenes in-game. The software would be for Windows PCs, and made to work with as many games as possible. I have some experience with how modding works, and have considered how it could be used alongside currently released games such as Fallout 4, Battlefield, Call of Duty, The Elder Scrolls: Skyrim, etc. However, I simply do not know enough about programming to determine the viability of integrating features into the software that would allow the danger scenes and combat gameplay of already released games to trigger the software to activate the peripheral. Essentially, I am wondering whether this is something that could be integrated. If so, how could these combat sequences be detected by the software? If you are interested and have something important to contribute, I can promise you a copy of the software and the option to purchase the peripheral at cost if you would like. Depending on your motivation, I am open to working together. Even if you are not highly experienced, everyone has to learn somewhere. Whether or not you are interested, I would be grateful for any advice you have to offer. So, any ideas for how something like this could be implemented?
  2. It's been almost a week since release, but we've already managed to ship several important updates to the game and are ready to give more. So what about the updates? There are four of them, and all of them are small preparatory steps towards the next big one, which is Customization. What we have now that we didn't have at the moment of release:
     - statistics of victories and defeats for each game mode, plus the best result in the menu (before, you could only see the best result at the end of each match)
     - usability improvements in some menus (such as the ability for the client to close statistics for themselves before it was done by the server, additional information so players know more about what they are doing, new visual controls, and many more)
     - some improvements to the menu level regarding optimization
     What next? Now we are at update 1.4; when we reach 2.0 it will be Customization. Before that there will be several small updates concerning other things.
  3. Ruslan Sibgatullin

    How I halved apk size

    Originally posted on Medium. You coded your game hard for several months (or even years), your artist made a lot of high-quality assets, and the game is finally ready to be launched. Congratulations! You did a great job. Now take a look at the APK size and be prepared to be scared. What is the size — 60, 70 or even 80 megabytes? As strange as it might sound (in the era of 128GB smartphones), I have some bad news — the size is too big. That's exactly what happened to me after I finished the game Totem Spirits. In this article I want to share several pieces of advice about how to reduce the size of a release APK file without losing quality. Please note that for development I used the quite popular game development engine libGDX, but the tips below should be applicable to other frameworks as well. Moreover, my case is a rather simple 2D game with a lot of sprites (i.e. images), so it might not be that useful for large 3D products. To keep you motivated to read this article further, I want to share the final result: I managed to halve the APK size, from 64MB to 32.36MB.

    Memory management
    The very first thing that needs to be done properly is memory management. You should always have only the necessary objects loaded into memory and release resources once they are no longer in use (a minimal sketch of this follows at the end of this post). This topic requires a lot of detail, so I'd rather cover it in a separate article. Next, I want to analyze the size of the current APK file. For my game I have four different types of game resources:
    1. Intro — the resources for the intro screen. Intro background. Loaded before the game starts, disposed immediately after the loading is done. (~0.5MB)
    2. In-menu resources — used in the menu only (location backgrounds, buttons, etc.). Loaded during the intro stage and when a player exits a game level. Disposed during "in-game resources" loading. (~7.5MB images + ~5.4MB music)
    3. In-game resources — used on game levels only (objects, game backgrounds, etc.). Loaded during a game level's loading, disposed when a player exits the game level. Note that those resources are not disposed when a player navigates between levels. (~4.5MB images + ~10MB music)
    4. Common — used in all three above. Loaded during the intro stage, disposed only once the game is closed. This one also includes fonts. (~1.5MB)
    The summed size of all resources is ~30MB, so we can conclude that the size of the APK is basically the size of all its assets. The code base is only ~3MB. That's why I want to focus on the assets in the first place (still, the code will be discussed too).

    Images optimization
    The first thing to do is to make the size of the images smaller while not harming the quality. Fortunately, there are plenty of services that offer exactly this. I used this one. This resulted in an 18MB reduction already! Compare the two images below: the sizes are 312KB (not optimized) and 76KB (optimized) respectively, so the optimized image is 4 times smaller! But a human eye can't notice the difference.

    Images combination
    You should combine near-identical images programmatically rather than shipping several almost identical images (especially if they are quite big). Consider the following example: rather than having four full-size images with different Gods but the same background, I have only one big background image and four smaller images of Gods that are then combined programmatically into one image. Although the reduction is not so big (~2MB), for some cases it can make a difference.

    Images format
    I consider this my biggest mistake so far. I had several images without transparency saved in PNG format. The JPG version of those images is 6 times more lightweight! Once I converted all images without transparency to JPG, the APK size became 5MB smaller.

    Music optimization
    At first the music quality was 256 kbps. Then I reduced it to 128 kbps and saved 5MB more. I still think the tracks can be compressed even more. Please share in the comments if you have ever used 64 kbps in your games.

    Texture packs
    This item might be a bit libGDX-specific, although I think similar functionality exists in other engines as well. A texture pack is a way to organize a bunch of images into one big pack. Then, in code, you treat each pack as one unit, so it's quite handy for memory management. But you should combine images wisely. In my game I at first had resources packed quite badly. Once I separated all transparent and non-transparent images, I gained about 5MB more.

    Dependencies and optimal code base
    Now let's see the other side of the development process — coding. I will not dive into too many details about code-writing here (since it deserves a separate article as well), but I still want to share some general rules that I believe can be applied to any project. The most important thing is to reduce the number of third-party dependencies in the project. Do you really need to add Apache Commons if you use only one method from StringUtils? Or Gson if you just don't like the built-in JSON functionality? Well, you do not. I used libGDX as the game development engine and am quite happy with it; I'm quite sure that for the next game I'll use this engine again. Oh, do I need to say that your code should be written in the most optimal way? :) Well, I mentioned it. Although most of the tips I've shared here can be applied at a late development stage, some of them (especially the memory management) should be designed in right from the very beginning of a project. Stay tuned for more programming articles!
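    For illustration, here is a minimal sketch of the load/dispose discipline from the memory-management section above, written against libGDX's standard AssetManager API; the class name and asset paths are made up for the example:

      import com.badlogic.gdx.assets.AssetManager;
      import com.badlogic.gdx.audio.Music;
      import com.badlogic.gdx.graphics.Texture;

      public class MenuResources {
          private final AssetManager assets = new AssetManager();

          // Called while the intro screen is shown, or when the player exits a level.
          public void load() {
              assets.load("menu/background.png", Texture.class); // placeholder paths
              assets.load("menu/theme.ogg", Music.class);
              assets.finishLoading(); // block until everything is in memory
          }

          public Texture background() {
              return assets.get("menu/background.png", Texture.class);
          }

          // Called while the "in-game" resources are being loaded.
          public void dispose() {
              assets.dispose(); // frees every asset this manager still holds
          }
      }

    Each resource group (intro, menu, in-game, common) gets its own manager or its own load/unload calls, so only what the current screen needs stays resident.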
  4. Hi, my name is Carlos Coronado, and I am a gamedev. Recently (April 2018) I released Infernium for Steam, Humble, Switch and PS4 using the Unreal Engine. I've gotten a lot of questions via Twitter asking how I released the game solo on Switch, and I figured it would be cool to explain it in a video! Oh, and I also invited Alexander, ex community manager of Epic Games, to help me explain the feedback. Anyway, I hope you find it useful. Cheers, Carlos.
  5. MiniAlfa

    Horrible soundtrack

    I am now making a soundtrack for my game, but halfway through I realized it was horrible. Can somebody please give me some tips to improve it? PS: I used Bosca Ceoil to make it. PPS: I'm not a native English speaker, so please don't get upset about my English. 1.wav
  6. What a great week it's been on the development front. Completed the coding and testing of the Master/Login servers, built the standalone client, added in the new chat services we have been working on... the list goes on. But, as with anything great, you take the good with the bad. I messed up the repository by trying to sneak some changes into a file, accidentally deleted the repo copy aaand... lost a couple of days of work, but HEY! That's what makes this exciting, right? The development community has been amazingly helpful. A resource system is ready to implement, mounts are ready to implement, and updated GUI elements are now pending a push to Test.
     I spent a couple of hours today working with the community group testing an upgrade to the network layer. The results were outstanding. We capped out at 107 unique clients connected to the hosting server (which was a 4-CPU, 3.3GHz, 8GB RAM machine) and there were no errors or hiccups. This was with 100+ people in a tiny area all updating each other with network packets. It was a beautiful sight to see. We then ran a similar test with the old network code, and the server ended up melting down at 80 clients in the same general area. We started to see errors at 50, but the whole thing went south for the winter at just over 80. So what does this mean for indie MMO development? Let's put it in perspective: Path of Exile never really has more than 20 people in a town at a time, World of Warcraft rarely has 100 people in close proximity (as in field of view at top LoD), even Elder Scrolls Online rarely sees 100+ player battles in close proximity, and Albion Online turns into a slideshow with 60+ people in a zone. I for one was very, very pleased with the network results. This tells us we can have hundreds of players in an instance and a large portion of them in very close proximity. (Towns and cities, anyone?) A massive amount of work has gone into the Unity HLAPI-CE network layer and it is really starting to show. Big props to vis2K and Paul and the rest of the development community for their work on that asset. This can change indie game development in such a positive way.
     Next steps? I am going to implement some of the new systems into the game, like mounts, GUI updates and harvesting. These are foundational and allow for testing, and they need time for debugging. I've had the servers up for 4 days now and everything is running awesome. The database is happy as a clam, the chat servers are good and... once I fix my boo-boo with the client (related to the chat system, but it desynced the entire client build, argh!) we'll be in great shape! At this point I am comfortable saying that I anticipate putting "Milestone 1: Servers and core infrastructure" behind us this weekend and moving on to feature implementation. The faction system is coming along well; I watched a test of AI fighting each other based on faction checks, very cool.
     Building an MMO is a massive, just a sec, need that to sink in... I mean MASSIVE, with triple bold capital flashing letters, M A S S I V E undertaking. Taking a project-based approach, defining sprints and milestones, and stabilizing your core game systems is, in my opinion, the only way to start. It's not about fireballs and story writing or anything else. Having the coolest fireball spell in the world means nothing if the server desyncs every time you cast it. I am hoping to put the website back up for the game "soon"(tm), but I'm really focused on the nuts and bolts right now and not trying to make the project look all snazzy.
     I anticipate having some pretty cool screens and our first video footage in the next 2-3 weeks. That being said, I am out of town for a week shortly, so here is hoping I can get some stuff done. If any of you are experienced Unity3D world builders with a keen sense of poly optimization, LoD and occlusion, feel free to drop me a line. World building is tremendously fun but... I will be the first to admit it's really not my forte. If you want a project to showcase your world-building talents and create some wicked in-game video of your worlds, we should talk. And remember... It's your world now!
  7. Awoken

    More Adventures in Robust Coding

    Hello GameDev, this entry is going to be a big one for me, and it's going to cover a lot. What I plan to cover on my recent development journey is the following:
     1 - Goal of this blog entry.
     2 - Lessons learned using Node.js for development and testing as opposed to the Chrome console.
     3 - Linear path algorithm for any surface.
     4 - Dynamic pathfinding using nodes for any surface, incorporating user-created, dynamic assets.
     5 - Short-term goals for the game.

     Goal of this blog entry
     My goal for this adventure is to create a dynamic pathfinding algorithm so that:
     - any AI that is to be moved will be able to compute the shortest path between any two points on the surface of the globe;
     - the AI will navigate around bodies of water, vegetation, and dynamic user assets such as buildings and walls;
     - it will compute the path in less than 250 milliseconds.
     There are a few restrictions the AI will have to follow. In the image above you can see that land masses cut off from one another via rivers and bodies of water are uniquely colored. If an AI is on a land mass of one color, for now, it will only be able to move to a location on the same colored land mass. However, there are some land masses that take up around 50% of the globe and have very intricate river systems. So the intended goal is to be able to have an AI on one end of the larger land mass find the shortest path to the opposite end within 250 milliseconds. Currently my pathfinding algorithm can find the shortest path in anywhere from 10 ms and up (and when I say up, I mean upwards of 30 seconds), and that's because of the way I built the algorithm, which is in the process of being optimised.

     Lessons learned using Node.js for development and testing
     As of this writing I am using Node.js to test the efficiency of my algorithms. This has slowed down my development. I am not a programmer by trade; I've taught myself the bulk of what I know, and I often spend my time re-inventing the wheel and learning things the hard way. Last year I made the decision to move my project over to Node.js for continued development, since eventually it all had to be ported over to Node.js anyway. In hindsight I would have done things differently: I would have continued to use the Chrome console for testing and development, small scale, and only after the code was proven to be robust would I port it over to Node.js. If there is one lesson I'd like to pass on to aspiring and new programmers, it's this: use a language and development environment that allows you, the programmer, to jump into the code while it's running and follow each iteration, line by line, as it's being executed — basically debugging. It is so easy to catch errors in logic that way. Right now I'm throwing darts at a dart board, guessing what I should be sending to the console for feedback to help me learn more about logical errors using Node.js — see "learning the hard way".

     Linear path algorithm for any surface
     In the blog entry above I go into detail explaining how I create a world. The important thing to take away from it is that every face of the world has information about all surrounding faces sharing vertex pairs. In addition, all vertices have information regarding those faces that use them in their draw order, and all vertices have information regarding all vertices that are adjacent to them.
     An example vertex and face object would look like the following:

      Vertices[ 566 ] = {
          ID: 566,
          x: -9.101827364,
          y: 6.112948791,
          z: 0.192387718,
          connectedFaceIDs: [ 90, 93, 94, 1014, 1015, 1016 ], // clockwise order
          adjacentVertices: [ 64, 65, 567, 568, 299, 298 ]    // clockwise order
      }

      Face[ 0 ] = {
          ID: 0,
          a: 0,
          b: 14150,
          c: 14149,
          sharedEdgeVertices: [ { a: 14150, b: 14149 }, { a: 0, b: 14150 }, { a: 14149, b: 0 } ], // named 'cv' in previous blog post
          sharedEdgeFaceIDs: [ 1, 645, 646 ], // named 's' in previous blog post
          drawOrder: [ 1, 0, 2 ]              // named 'l' in previous blog post
      }

     It turns out the algorithm is speedy for generating shapes of large sizes. My buddy who is a Solutions Architect told me I'm a one-trick pony, HA! Anyway, this algorithm comes in handy because now, if I want to identify a linear path along all faces of a surface (marked as a white line in the picture above), I can reduce the number of faces to be tested during raycasting to the number of faces the path travels across * 2. To illustrate, imagine taking a triangular pizza slice which is made of two faces, back to back. The tip of the pizza slice touches the center of the shape you want to find a linear path along, and the two outer points of the slice protrude out from the surface of the shape some distance so as to entirely clear the shape. When I select my starting and ending points for the linear path, I also retrieve the face information those points fall on, respectively. Then I raycast between the sharedEdgeVertices, targeting the pizza slice. If, say, a hit happens along sharedEdgeVertices[ 2 ], then I know the next face to test for the subsequent raycast is face ID 646. I also know that since the pizza slice comes in at sharedEdgeVertices[ 2 ], it is most likely going out at sharedEdgeVertices[ 1 ] or [ 0 ]. If not [ 1 ], then I know it's 99% likely going to be [ 0 ], and vice versa. Being able to identify a linear path along any surface was the subject of my first Adventure in Robust Coding. Of course there are exceptions that need to be accounted for, such as when the pizza slice straddles the edge of a face, or when the pizza slice exits a face at a vertex. Sometimes, though, when I'm dealing with distances along the surface of a given shape where the pizza slice needs to be made up of more than one set of back-to-back faces, another problem can arise: I learned about the limitations of floating point numbers too, or at least that's what it appears to be to me. I'm sure most of you are familiar with some variation of the infinite chocolate bar puzzle. With floating point numbers I learned that you can have two faces share two vertices along one edge, raycast at a point that is directly between the edges of the two connecting faces, and occasionally the raycast will miss hitting either of the two faces. I attribute this in large part to the fact that floating point numbers only capture an approximation of a point, not the exact point. Much like in the infinite chocolate bar puzzle, where there exists a tiny gap along the slice equal in size to the removed piece, likewise that tiny gap sometimes causes a miss for the raycast. If someone else understands this better please correct me.

     Dynamic pathfinding using nodes for any surface
     Now that I've got the linear path algorithm working in tip-top shape, I use it in conjunction with nodes to create the pathfinding algorithm. First I identify the locations for all nodes.
     I do this using a class I created called Orientation Vector; I mention it in the blog post above. When they're created, they have a position vector, a pointTo vector, and an axis vector. The beauty of this class is that I can merge instances, which averages their position, pointTo, and axis vectors; I can rotate them around any axis; and I can move them any distance along the axis of their pointTo vector. To create shoreline collision geometry, node collision geometry (illustrated above), and node locations along shorelines (illustrated below), I utilise the Orientation Vector class. First, the water table for the world is set to an arbitrary value, right now 1.08, so if a vertex of a given face falls below the table and one or two vertices are above the table, then I know the face is a shoreline face. Then I use simple math to determine at what two points the face meets the water and create two OVectors, each pointing at the other. Then I rotate them around their y axis by 90 and -90 degrees respectively so that they are now facing inland. Since the shoreline faces touch one another, there will be duplicate OVectors at each point along the shore. However, each OVector will have a pointTo vector relative to its sister OVector from creation. I merge the paired OVectors at each point along the shore, which averages their position, pointTo and axis, and I then move them inland a small distance. The result is the blue arrows above. The blue arrows are the locations of three of the thousands of nodes created for a given world. Each node has information about the shoreline collision geometry, the node collision geometry (the geometry connecting nodes), and the node to its left and the node to its right. Each face of collision geometry is given a Node ID to refer to.
     So, to create the pathfinding algorithm, I first identify the linear path between the starting and ending points. I then test each segment of the linear path against the collision geometry. If I get a hit, I retrieve the Node ID. This gives me the location of the node associated with that face of collision geometry. I then travel left and right along connecting nodes, checking to see whether a new linear path to the end point is possible; if no immediate collision geometry is encountered, the process continues and is repeated as needed. Subsequently, a list of points is established, marking the beginning, the encountered nodes, and the end of the line of travel. The list is then trimmed by testing linear paths between every third point; if a valid path is found, the middle point is spliced out. Then all possible trimmed paths are measured for distance, and the shortest one wins. Below is the code for the algorithm I currently use. It's my first attempt at using classes to create an algorithm; previously I just relied on elaborate arrays. I plan on improving the process mentioned above by keeping track of distance as each path spreads out from its starting location. Only the path which is shortest in distance will go through its next iteration. With this method, once a path to the end is found, I can bet it will be the shortest, so I won't need to compute all possible paths like I am now (see the sketch at the end of this entry).
     The challenge I've been facing for the past two months is that sometimes the nodes end up in the water. The picture above shows a shoreline where the distance the OVectors travel would place them in the water.
     Once a node is in the water, it allows the AI to move to it; then there is no shoreline collision geometry for it to encounter that would keep it on land, and so the AI just walks into the ocean. Big Booo! I've been writing variations of the same function to correct the location of the geometry shown in red and yellow below. But what a long process. I've rewritten this function time and time again. I want it to be, well, as the title of this blog states, robust, but it's slow going. As of today's date, it's not robust, and the optimised pathfinding algorithm hasn't been written either. I'll be posting updates in this blog entry as I make progress towards my goal. I'll also mention the shortest and longest times I achieve for pathfinding. Hopefully it'll be below 250 ms.

     Short-term goals for the game
     Badly... SO BADLY I want to be focusing on game content; that's all I've been thinking about. Argh. But this all has to get wrapped up before I can. I got ahead of myself; I'm guilty of being too eager. But there is no sense building game content on top of an engine which is prone to errors. My immediate goals for the engine are as follows:

      // TO DO's //
      // Dec 26th 2017 //
      /*
       * << IN PROGRESS >> -update path node geometry so no errors occur
       * -improve path finding alg with new technique
       * -improve client AI display - only one geometry for high detail, and one for tetrahedron
       * -create ability to select many AI at the same time by drawing a rectangle by holding mouse button
       * -create animation server to receive a path and process animation, and test out in client with updates
       * -re-write geometry merging function so that the client vertices and faces have a connected Target ID
       * -incorporate dynamic asset functionality into client
       * -create a farm and begin writing AI
       * -program model clusters
       * -synchronize server and client AI. Test how many AI and how quickly AI can be updated. Determine rough estimate of number of players the server can support.
       */

     See the third last one! That's the one; oh, what a special day that'll be. I've created a Project page, please check it out. It gives my best description to date of what the game is going to be about. Originally I was going to name it 'Seed'; a family member made the logo I use as my avatar and came up with the name back in 2014. The project is no longer going to be called Seed; it's instead going to be called Unirule. [ edit 02/02/18: Some new screenshots to show off. All the new models were created by Brandross. There are now three earth materials: clay, stone and marble. There are also many types of animals and more tree types. ] Thanks for reading, and if you've got anything to comment on, I welcome it all. Awoken
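     Going back to the planned pathfinding improvement above (expanding only the cheapest partial path so that the first complete path found is already the shortest): that idea is essentially uniform-cost search, i.e. Dijkstra's algorithm over the shoreline nodes. A rough Java sketch of the shape of it, with PathNode as a hypothetical stand-in for the engine's own node objects:

      import java.util.*;

      class PathNode {
          final int id;
          final List<PathNode> neighbours = new ArrayList<>();
          PathNode(int id) { this.id = id; }
          double distanceTo(PathNode other) { return 1.0; /* straight-line or surface distance */ }
      }

      public class ShortestPathSearch {
          private record Entry(PathNode node, double cost) {}

          // Always expand the cheapest partial path; the first time the goal is
          // popped, the route to it is already the shortest.
          public static List<PathNode> find(PathNode start, PathNode goal) {
              Map<PathNode, Double> best = new HashMap<>();
              Map<PathNode, PathNode> parent = new HashMap<>();
              PriorityQueue<Entry> open = new PriorityQueue<>(Comparator.comparingDouble(Entry::cost));

              best.put(start, 0.0);
              open.add(new Entry(start, 0.0));

              while (!open.isEmpty()) {
                  Entry current = open.poll();
                  if (current.cost() > best.getOrDefault(current.node(), Double.MAX_VALUE)) continue; // stale entry
                  if (current.node() == goal) break;

                  for (PathNode next : current.node().neighbours) {
                      double cost = current.cost() + current.node().distanceTo(next);
                      if (cost < best.getOrDefault(next, Double.MAX_VALUE)) {
                          best.put(next, cost);
                          parent.put(next, current.node());
                          open.add(new Entry(next, cost));
                      }
                  }
              }

              // Walk the parent links back from the goal to rebuild the path
              // (if the goal was never reached, the list holds only the goal).
              LinkedList<PathNode> path = new LinkedList<>();
              for (PathNode n = goal; n != null; n = parent.get(n)) path.addFirst(n);
              return path;
          }
      }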
  8. GRASBOCK WindyOrange

    #2 New System Works

    I finally got the new system working. I have never made that many mistakes in an algorithm at once, which explains why it took so long for me to post an update. Anyway, now I can run more than 10,000 humans at once (though only doing random walks so far). The world has multiple noise maps overlapping each other, generating much more interesting terrain, and the system allows for efficient pathfinding to be implemented. Now I will be able to work on much more actual content: UI, pathfinding and developing the systems which drive the simulation will be a big task. The three pictures show you the world at different states of expansion, the red dots being the humans that explore the world and thus generate new chunks when necessary. In case you have seen the pictures from my last entry and are wondering where the grass went: sorry about that, it will come back eventually.
  9. I have sprites that will be turned into animation images for the game actors. What would be the best way to change the weapon/armor for each actor? I.e. walking with sword, swinging sword, then when he equips an axe, walking with axe, swinging axe, etc., and the same for armor. Should I have sheets with the weapons and armor and then overlay them onto the base sprite when the user changes the weapon, or have premade sheets with all of the various combos of armor/weapons that the soldier can have and then just grab the ones needed for the current selection? I'm thinking the first option is better, but are there any other better ways?
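     For reference, the first option usually boils down to drawing the weapon frame over the base frame at the same origin each frame, as in this minimal sketch using libGDX's SpriteBatch (the class name and regions are made up; any 2D batch renderer works the same way):

      import com.badlogic.gdx.graphics.g2d.SpriteBatch;
      import com.badlogic.gdx.graphics.g2d.TextureRegion;

      public class ActorRenderer {
          // One frame of the base body animation plus one frame of the equipped
          // weapon, both cut so they line up when drawn at the same origin.
          public void drawActor(SpriteBatch batch, TextureRegion bodyFrame,
                                TextureRegion weaponFrame, float x, float y) {
              batch.draw(bodyFrame, x, y);       // base sprite: walking/swinging pose
              if (weaponFrame != null) {
                  batch.draw(weaponFrame, x, y); // overlay: sword, axe, armor, ...
              }
          }
      }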
  10. When I use the ProOptimizer tool in 3ds Max and then export the file as a .fbx file, the file size seems to be bigger than the .fbx file that hasn't used the ProOptimizer tool. This has confused me, as I thought reducing the polygons would decrease the file size. I believe the problem comes down to having the mesh as an editable poly and then adding the modifier tool. So I have several meshes in a 3ds Max scene. The meshes need to be combined into one. I do this by turning the meshes into editable polygons. After I turn each mesh into an editable polygon, I'll add a modifier to each mesh. This modifier would be ProOptimizer or MultiRes. I'll then reduce the polygon count of each mesh. Once I have done that, I use the attach option from the editable poly to combine the meshes into one. The issue is that once I click on edit poly after adding the modifier tool (ProOptimizer or MultiRes), a message appears stating "Modifier depends on topology", and when I continue by pressing yes, the polygon count goes back up; thus, the file size hasn't been reduced. The steps are:
      1. 3ds Max scene with multiple meshes
      2. Change each mesh to an editable poly
      3. Add a modifier tool to each mesh
      4. The modifier tool would be ProOptimizer or MultiRes
      5. Reduce the poly count of each mesh using the modifier tool from step 4
      6. Use the attach feature from edit poly to combine all meshes into one
      7. Upon selecting "edit poly", a message appears on screen stating "modifier depends on topology"
      8. When yes is selected, the polygon count goes back up, and as a result the file size hasn't been reduced
      I am now looking into how to first reduce the poly count of each mesh and then combine them into one without the problem of the poly count going back up. Any support would be much appreciated.
  11. It's for a 2D game, but the question is broader... Let's say I want to have some object (e.g. a projectile) interact with some other object (e.g. a button) so the projectile thrown by the player can trigger the button. I know there could be several ways of doing this, like the brute-force O(n^2) method, or the 'optimized' method using a quadtree or spatial hash... But I thought about another method, and was wondering if it's a good idea or not. It consists of iterating two times over the active game object list: the first pass looks for projectile objects, storing their pointers into some array; the second looks for button objects, checking if a collision occurs with one of the projectiles in the array (see the sketch below). Other specific collision checks could be done with this method, but that would need multiple pointer lists. Do you know how old games (like those on the Genesis, we're talking about 8 MHz CPUs...) achieved this? Should I just implement some spatial hashing and check all the collisions inside the restrained area, avoiding storing pointer lists? My levels would have about 1000 objects. I'm not that concerned about performance, since I know how to optimize, but more about finding an elegant/simple way of doing this. I'd like to keep the code small and maintainable.
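      A minimal sketch of the two-pass idea described above; the interfaces are invented placeholders for whatever object and rectangle types the game already has:

      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical minimal interfaces; substitute your own object and bounds types.
      interface Bounds { boolean overlaps(Bounds other); }
      interface GameObject {
          boolean isProjectile();
          boolean isButton();
          Bounds bounds();
          void onTriggered(GameObject cause);
      }

      public class ProjectileButtonPass {
          // Pass 1: collect projectile pointers; pass 2: test buttons only against
          // that short list instead of against every other active object.
          public static void resolve(List<GameObject> activeObjects) {
              List<GameObject> projectiles = new ArrayList<>();
              for (GameObject obj : activeObjects) {
                  if (obj.isProjectile()) projectiles.add(obj);
              }
              for (GameObject obj : activeObjects) {
                  if (!obj.isButton()) continue;
                  for (GameObject projectile : projectiles) {
                      if (obj.bounds().overlaps(projectile.bounds())) {
                          obj.onTriggered(projectile);
                      }
                  }
              }
          }
      }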
  12. How do you unpack the frame buffer when packing it with the Compact YCoCg Frame Buffer technique?
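      For context: in the compact YCoCg frame buffer approach, each pixel stores luma plus only one of the two chroma channels in a checkerboard pattern, so unpacking means reconstructing the missing chroma from neighbouring pixels (typically with an edge-aware filter driven by luma differences) and then converting YCoCg back to RGB. A rough CPU-side sketch of just the colour-space conversion, assuming Co and Cg have already had their storage bias removed (shader details omitted):

      public final class YCoCg {
          // Inverse YCoCg transform; co and cg are assumed to be centred around 0.
          public static float[] toRgb(float y, float co, float cg) {
              float tmp = y - cg;
              float r = tmp + co;
              float g = y + cg;
              float b = tmp - co;
              return new float[] { r, g, b };
          }
      }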
  13. I'm making a small 2D engine using Kha, and I have a timer class which either waits a certain amount of time before calling a function, or repeatedly calls a certain function every x seconds. I simply want to know if I should have timers run on different threads. I'm aware that makes sense, but I might use many timers in a game, for example; would that still be okay? Also, I'm currently writing an animation component, which waits x seconds before drawing the next image, using the timer class. In a normal 2D game I would have many objects with animations on them, in addition to the other timers. So I just wanted to ask people who have more experience and knowledge than I have what I should do for timers: either leave them on the same main thread, or make them run on different threads. Thanks in advance.
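      For what it's worth, a common alternative to threads is to keep all timers on the main thread and advance them from the game loop using the frame's delta time. A minimal sketch of that pattern (the names are made up, not Kha's API):

      import java.util.ArrayList;
      import java.util.Iterator;
      import java.util.List;

      public class TimerSystem {
          private static class Timer {
              double remaining;       // seconds until the callback fires
              final double interval;  // 0 = one-shot, otherwise repeat every interval
              final Runnable callback;
              Timer(double delay, double interval, Runnable callback) {
                  this.remaining = delay; this.interval = interval; this.callback = callback;
              }
          }

          private final List<Timer> timers = new ArrayList<>();

          public void after(double delaySeconds, Runnable callback)    { timers.add(new Timer(delaySeconds, 0, callback)); }
          public void every(double intervalSeconds, Runnable callback) { timers.add(new Timer(intervalSeconds, intervalSeconds, callback)); }

          // Call once per frame from the main loop with the elapsed time in seconds.
          public void update(double deltaSeconds) {
              for (Iterator<Timer> it = timers.iterator(); it.hasNext(); ) {
                  Timer t = it.next();
                  t.remaining -= deltaSeconds;
                  if (t.remaining <= 0) {
                      t.callback.run();
                      if (t.interval > 0) t.remaining += t.interval; // repeating timer
                      else it.remove();                              // one-shot timer
                  }
              }
          }
      }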
  14. I have been doing research into optimising 3D models in 3ds Max. There seem to be so many different ways to optimise 3D models. I am unsure which method is best and have been trying different tools, such as the ProOptimizer tool in 3ds Max. Does anyone know the best way to optimise 3D models in 3ds Max? I am trying to reduce the file size whilst maintaining a high-quality model; in other words, to produce a low-polygon model which looks like a high-polygon model.
  15. I've been doing some research on delta compression (used and described in the Quake 3 networking doc: http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking), and I'm looking for some clarifications on that topic, please, if possible. I understand that the delta-compressed state is the difference between any two given world states, meaning that if we store the state of every entity in a world state at every tick, the difference would be only the states of entities that changed between those two world states. Now, when sending back the delta state to the client, do we go as far as only mentioning the properties that changed? Let's say a character moved only on the X axis but didn't move on the Y axis between two states; are we sending the client the whole state (x and y) or only the new x position? If that's the case, and assuming there are a few more properties that describe a character, how can the client identify which properties have actually changed when rebuilding the information from the binary data?
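      One common way to answer the "which properties changed?" part is to prefix each entity's delta with a change mask, one bit per field, and then write only the fields whose bit is set; the client reads the mask first and knows exactly which values follow and which ones to copy from its baseline state. A hedged sketch with an invented two-field character state:

      import java.nio.ByteBuffer;

      public final class CharacterDelta {
          // Bit positions for an invented character state: one bit per field.
          private static final int X_CHANGED = 1 << 0;
          private static final int Y_CHANGED = 1 << 1;

          public static void write(ByteBuffer out, float oldX, float oldY, float newX, float newY) {
              int mask = 0;
              if (newX != oldX) mask |= X_CHANGED;
              if (newY != oldY) mask |= Y_CHANGED;
              out.put((byte) mask);                               // always send the mask
              if ((mask & X_CHANGED) != 0) out.putFloat(newX);    // then only the changed fields
              if ((mask & Y_CHANGED) != 0) out.putFloat(newY);
          }

          // The client applies the delta on top of the baseline state it already has.
          public static float[] read(ByteBuffer in, float baseX, float baseY) {
              int mask = in.get() & 0xFF;
              float x = (mask & X_CHANGED) != 0 ? in.getFloat() : baseX;
              float y = (mask & Y_CHANGED) != 0 ? in.getFloat() : baseY;
              return new float[] { x, y };
          }
      }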
  16. Hi, I am looking for a TCP or HTTP networking library similar to Lidgren (UDP). This is primarily for sending game map data and potentially other large messages from server to client. I do want to keep Lidgren for my chat messages, player positions, small fast updates, etc. I especially love the flow of data and the library usage in general, so any libraries of a similar style would be excellent. Preferably something open source, free and reliable. I also must be able to swap between localhost and an IP address with ease, like Lidgren, as I run a server for singleplayer/MP/LAN. My game maps are similar to Minecraft, but it is 2D and only one Z-level, so I'm sending a jagged array of Tile object data (currently only enum TileID.Grass) down the pipe to the client. The problem is that if I'm sending a large map of 1024 x 1024 tiles down to the client, that's quite a lot of data, and Lidgren is relatively slow to build the writes (before the message is even sent!). It is fine when I'm using smaller maps < 512 x 512 (xTiles * yTiles). I know about chunking and will look into implementing it later, whilst taking into account the user's position in the world to only send nearby chunks. An example of my code that can be slow:

      private void WriteWorld(NetOutgoingMessage outgoing)
      {
          try
          {
              var world = WorldManager.Instance.CurrentWorld;
              outgoing.Write(world.XTiles);
              outgoing.Write(world.YTiles);
              for (int x = 0; x < world.XTiles; x++)
              {
                  for (int y = 0; y < world.YTiles; y++)
                  {
                      // Write Tile obj data
                      outgoing.Write((int)world.Tiles[x][y]); // <-------- Slow here when xTiles and yTiles are each > 512 !
                  }
              }
          }
          catch (Exception ex)
          {
              // log send error
          }
      }

      I'd love to hear from you guys, especially if any of you have come across a similar challenge.
  17. So I have hundreds of moving objects that need to check their speed. One of the reasons they need to check their speed is so they don't accelerate into oblivion as more and more force is added to each object. At first I was just using Unity's vector3.magnitude. However, this is actually very slow when used hundreds of times. Next I tried the dot-product check: vector3.dot(this.transform.forward, ShipBody.velocity). The performance boost was fantastic, but this only measures speed in the forward direction, resulting in bouncing objects accelerating way past the allowed limit. I am hoping someone else knows a good way for me to check the speed with accuracy that is also fast on the CPU, or just any magnitude calculations that I can test when I get home later. What if I used vector3.dot(ShipBody.velocity.normalized, ShipBody.velocity)? How slow is it to normalize a vector, compared to asking for its magnitude?
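      If the goal is only to test against a maximum speed, the usual trick is to compare squared lengths, which gives the same result as comparing magnitudes but avoids the square root entirely (Unity exposes this as velocity.sqrMagnitude). A generic sketch of the idea:

      public final class SpeedCheck {
          // Compare |v|^2 against maxSpeed^2: same outcome as comparing magnitudes,
          // but with no square root per object per frame.
          public static boolean isTooFast(float vx, float vy, float vz, float maxSpeed) {
              float squaredSpeed = vx * vx + vy * vy + vz * vz; // dot(v, v)
              return squaredSpeed > maxSpeed * maxSpeed;
          }
      }

      Note that dot(velocity.normalized, velocity) does return the full magnitude, but normalizing already performs the square root you were trying to avoid, so it saves nothing.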
  18. Hello, I am trying to make a GeometryUtil class that has methods to draw points, lines, polygons, etc. I am trying to make a method to draw a circle. There are many ways to draw a circle; I have found two. The first way:

      public static void drawBresenhamCircle(PolygonSpriteBatch batch, int centerX, int centerY, int radius, ColorRGBA color) {
          int x = 0, y = radius;
          int d = 3 - 2 * radius;
          while (y >= x) {
              drawCirclePoints(batch, centerX, centerY, x, y, color);
              if (d <= 0) {
                  d = d + 4 * x + 6;
              } else {
                  y--;
                  d = d + 4 * (x - y) + 10;
              }
              x++;
              //drawCirclePoints(batch,centerX,centerY,x,y,color);
          }
      }

      private static void drawCirclePoints(PolygonSpriteBatch batch, int centerX, int centerY, int x, int y, ColorRGBA color) {
          drawPoint(batch, centerX + x, centerY + y, color);
          drawPoint(batch, centerX - x, centerY + y, color);
          drawPoint(batch, centerX + x, centerY - y, color);
          drawPoint(batch, centerX - x, centerY - y, color);
          drawPoint(batch, centerX + y, centerY + x, color);
          drawPoint(batch, centerX - y, centerY + x, color);
          drawPoint(batch, centerX + y, centerY - x, color);
          drawPoint(batch, centerX - y, centerY - x, color);
      }

      The other way:

      public static void drawCircle(PolygonSpriteBatch target, Vector2 center, float radius, int lineWidth, int segments, int tintColorR, int tintColorG, int tintColorB, int tintColorA) {
          Vector2[] vertices = new Vector2[segments];
          double increment = Math.PI * 2.0 / segments;
          double theta = 0.0;
          for (int i = 0; i < segments; i++) {
              vertices[i] = new Vector2((float) Math.cos(theta) * radius + center.x, (float) Math.sin(theta) * radius + center.y);
              theta += increment;
          }
          drawPolygon(target, vertices, lineWidth, segments, tintColorR, tintColorG, tintColorB, tintColorA);
      }

      In the render loop:

      polygonSpriteBatch.begin();
      Bitmap.drawBresenhamCircle(polygonSpriteBatch, 500, 300, 200, ColorRGBA.Blue);
      Bitmap.drawCircle(polygonSpriteBatch, new Vector2(500, 300), 200, 5, 50, 255, 0, 0, 255);
      polygonSpriteBatch.end();

      I am trying to choose one of them, so I thought I should go with the one that does not involve heavy calculations and is more efficient and faster. It is said that the use of floating point numbers, trigonometric operations, etc. slows things down a bit. What do you think would be the best method to use? When I compared the code by observing the time taken from the start of the method to the end, it showed that the second one is faster (I think I am doing something wrong here). Please help! Thank you.
  19. Hi, I am trying to implement a custom texture atlas creator tool in C++ and need suggestions regarding any fast open-source API or library for image import and export. Also, this tool will compress the final output atlas image into multiple formats like DXT5, PVRTC and ETC based on user input; what would be the best way to implement this? Thanks
  20. Hi guys, there are many ways to do light culling in tile-based shading. I've been playing with this idea for a while, and just want to throw it out there. Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test. On top of that, I use distance to camera rather than depth for the near/far test (aka sliced by spheres). This method can be naturally extended to clustered light culling as well. The following image shows the general idea. Performance-wise I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs in the following: from left to right, (1) standard rendering of a point light; then the tiles that passed (2) the sphere-frustum test, (3) the cone test, and (4) the spherical-sliced cone test. I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included! Eric
  21. Hi Forum, in terms of rendering a tiled game level, let's say the level is 3840x2208 pixels using 16x16 tiles. Which method is recommended?
      Method 1: draw the whole level once, store it in a texture object, and only render what's in view each frame.
      Method 2: each frame, loop through all the tiles, and only draw and render a tile to the window if it's in view.
      Are both of these methods valid? Are there other ways? I know method 1 is memory intensive, but method 2 is processing heavy. Thanks in advance.
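      A minimal sketch of method 2 with the loop restricted to the visible range, so only the on-screen tiles are touched each frame instead of all 240x138 of them (names and the drawTile placeholder are invented; camera position and sizes are in pixels):

      public final class TileCulling {
          // Draw only the tiles that overlap the view rectangle.
          public static void drawVisibleTiles(int[][] tiles, int tileSize,
                                              int cameraX, int cameraY, int viewWidth, int viewHeight) {
              int firstX = Math.max(0, cameraX / tileSize);
              int firstY = Math.max(0, cameraY / tileSize);
              int lastX = Math.min(tiles.length - 1, (cameraX + viewWidth) / tileSize);
              int lastY = Math.min(tiles[0].length - 1, (cameraY + viewHeight) / tileSize);
              for (int x = firstX; x <= lastX; x++) {
                  for (int y = firstY; y <= lastY; y++) {
                      drawTile(tiles[x][y], x * tileSize - cameraX, y * tileSize - cameraY);
                  }
              }
          }

          private static void drawTile(int tileId, int screenX, int screenY) {
              // Placeholder: hand the tile to whatever renderer is in use.
          }
      }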
  22. Hi there. I am really sorry to post this, but I would like to clarify the delta compression method. I've read the Quake 3 networking model (http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking), but I still have some questions. First of all, I am using LiteNetLib as the networking library; it works pretty well with Google.Protobuf serialization. But then I faced an issue when the server pushes a lot of data: with, say, 10 players, the server pushes 250 kB/s of data at a 30 Hz tick rate, so I realized that I have to compress it, for example with delta compression. As I understood it, the client and server both use an unreliable channel. The LiteNetLib meta file says that an unreliable packet can be dropped or duplicated, while the sequenced channel says that a packet can be dropped but never duplicated, so I think I have to use the sequenced channel for delta compression? And do I have to use the reliable channel for acknowledgment, or can I just go with sequenced and send the StateId with a snapshot rather than separately? Thank you.
  23. Welcome back colony managers, here comes a set of awesome new features improving your space colonization systems. This month we spent a lot of time pimping the user interface of the game into its 4th evolution, making it 4K ready along the way. With the remediation center, a very important infrastructure building made it into this release, and the logistic center also got a rework, enabling you to fully automate repairing and cleaning processes. And there's more...
      TL;DR
      - New user interface
      - Remediation center & carbon sequestration
      - Logistic center is now the maintenance station
      - Desertification threat and temple power
      - Solar park and wind farm alignment
      - Mountain variations
      - Fixes & improvements
      New User Interface
      The existing user interface clearly did not meet the demands of our colony & planet simulation. It was very clumsy and reminiscent of a casual mobile phone game. So we rebuilt it, using the smaller and much clearer Roboto font for all text elements, while keeping our futuristic fonts for headers and elements that need highlighting. The new interface has become much easier to read while taking up less screen space, thereby giving even more focus to the planet itself. If you are using a relatively small screen with a high resolution, you can still zoom the interface up to 150% in the options. A side effect of our work is that the new user interface has a higher resolution and thus is ready for your 4K display.
      Remediation Center & Carbon Sequestration
      The new remediation center building comes with an additional worker drone and uses this drone to automatically start the clean-up process for nearby fields. A powerful upgrade for the center is carbon sequestration: the technical separation and storage of CO2 emissions from surrounding buildings and power plants. Thanks to underground compression, the exhaust gases do not enter the atmosphere. The second upgrade for this building is "Advanced Remediation Process"; it halves the cost of soil clean-up in the area.
      Logistic Center is now the Maintenance Station
      The logistic center is now called the "maintenance station" and automatically repairs nearby damaged buildings by default. Its upgrades are the fire station and "Advanced Repair Process", which halves the cost of repair processes in the area.
      Desertification Threat and Temple Power
      In the course of global warming, new deserts will emerge next to existing ones, and thereby the infertile wasteland grows. The only way to prevent this is to plant forests on desert fields so it can't spread; trees will effectively stop the process of desertification. In the illuminati temple you can use gaian energy to create a field of desert anywhere around the world, except on fields with forests on them.
      Solar Panels and Wind Farms
      They now align to the sun and the wind direction.
      Mountain Variations
      Each mountain now has two versions to bring more variety into the game's look. These two versions are also placed at the three edges of big mountains to make it easier to see whether a field is blocked by mountains.
      Fixes & Improvements
      - Fixed animation problems: we had some seriously strange problems with sub-models and animations. Mystery finally concluded!
      - Ships and oil platforms are no longer visible under water while being built.
      - City expansion now also needs drones.
      - Diplomatic relation progress is shown as a ring (a full ring reaches the next diplomatic level).
      - Fixed the orientation angle of volcanoes and huge mountains on small planets.
      - Sandbox category for mushroom forests.
      As always we wish you good fun, and we hope you let us know if anything comes to mind about the new features! Jens & Martin
  24. Hello! As far as I understand, the traditional approach to the architecture of a game with different states or "screens" (such as a menu screen, a screen where you fly your ship in space, another screen where you walk around on the surface of a planet, etc.) is to make some sort of FSM with virtual update/render methods in the state classes, which in turn are called in the game loop; something similar to this:

      struct State {
          virtual void update()=0;
          virtual void render()=0;
          virtual ~State() {}
      };

      struct MenuState:State {
          void update() override { /*...*/ }
          void render() override { /*...*/ }
      };

      struct FreeSpaceState:State {
          void update() override { /*...*/ }
          void render() override { /*...*/ }
      };

      struct PlanetSurfaceState:State {
          void update() override { /*...*/ }
          void render() override { /*...*/ }
      };

      MenuState menu;
      FreeSpaceState freespace;
      PlanetSurfaceState planet;
      State * states[] = {&menu, &freespace, &planet};
      int currentState = 0;

      void loop() {
          while (!exiting) {
              /* Handle input, time etc. here */
              states[currentState]->update();
              states[currentState]->render();
          }
      }

      int main() {
          loop();
      }

      My problem here is that if the state changes only rarely, like every couple of minutes, then the very same update/render method will be called many times over that period, about 100 times per second in the case of a 100 FPS game. This seems to make dynamic dispatch, which has some performance penalty, a bit pointless. Of course, one may argue that a couple hundred virtual function calls per second is nothing for even a not-so-modern computer, and especially nothing compared to the complexity of the render/update functions in a real-life scenario. But I am not quite sure. Anyway, I might have become a bit too paranoid about virtual functions, so I wanted to somehow "move out" the virtual function calls from the game loop, so that the only time a virtual function is called is when the game enters a new state. This is what I had in mind:

      template<class TState>
      void loop(TState * state) {
          while (!exiting && !stateChanged) {
              /* Handle input, time etc. here */
              state->update();
              state->render();
          }
      }

      struct State {
          /* No update or render function declared here! */
          virtual void run()=0;
          virtual ~State() {}
      };

      struct MenuState:State {
          void update() { /*...*/ }
          void render() { /*...*/ }
          void run() override { loop<MenuState>(this); }
      };

      struct FreeSpaceState:State {
          void update() { /*...*/ }
          void render() { /*...*/ }
          void run() override { loop<FreeSpaceState>(this); }
      };

      struct PlanetSurfaceState:State {
          void update() { /*...*/ }
          void render() { /*...*/ }
          void run() override { loop<PlanetSurfaceState>(this); }
      };

      MenuState menu;
      FreeSpaceState freespace;
      PlanetSurfaceState planet;
      State * states[] = {&menu, &freespace, &planet};

      void run() {
          while (!exiting) {
              stateChanged = false;
              states[currentState]->run(); /* Runs until next state change */
          }
      }

      int main() {
          run();
      }

      The game loop is basically the same as the one before, except that it now also exits in case of a state change, and the containing loop() function has become a function template. Instead of loop() being called directly by main(), it is now called by the run() method of the concrete state subclasses, each instantiating the function template with the appropriate type. The loop runs until the state changes, in which case the run() method shall be called again for the new state. This is the task of the global run() function, called by main(). There are two negative consequences. First, it has become slightly more complicated and harder to maintain than the version above; but only slightly, as far as I can tell from this simple example. Second, code for the game loop will be duplicated for each concrete state; but that should not be a big problem, as a game loop in a real game should not be much more complicated than in this example. My question: is this a good idea at all? Does anybody else do anything like this, either in a scenario like this or for completely different purposes? Any feedback is appreciated!
  25. Hello, I want to optimize the used memory in my game so that it supports low end devices - for instance iPhone 4s. I know that some of the main things I should look into are memory leaks, big textures and some game specific things, which occupy a lot of memory. To detect all that I am using MTuner on Windows and Instruments (Allocations) on XCode. What are you generally looking for when optimizing memory? What instruments are you using? My target platform is iOS.