    1. Past hour
    2. I have a "free form" rotation method that looks like this:

       Vector3 screenVector = new Vector3(delta.X, -delta.Y, 0.0f);
       Vector3 rotationVector = Vector3.Cross(Vector3.UnitZ, screenVector);
       float angle = rotationVector.Length();
       rotation = Quaternion.Concatenate(rotation, Quaternion.CreateFromAxisAngle(Vector3.Normalize(rotationVector), angle));
       world = Matrix.CreateFromQuaternion(rotation);

       The values of delta.X/delta.Y are displacement values along the screen coordinates. This allows me to freely rotate an object in 3D, no matter how it is oriented. I wish to smooth out this effect by adding a tweener to the rotation. This seems straightforward: split up the X and Y components of the rotation vector (the cross product above), separate the angle value into two values X and Y, tween the angles, and then create a composite Quaternion. However, this doesn't seem to work in practice. When concatenating the two Quaternions, there is no movement along some arbitrary axis (depending on how much the object has been rotated). Furthermore, the object doesn't rotate freely anymore and instead starts to rotate in a manner similar to a third/first person camera (once the elevation goes past 90 degrees, the orientation is "flipped"). Instead of concatenating the Quaternions, I tried Matrix.CreateFromQuaternion for each one instead, and then multiplied the matrices. But this yields the exact same effect as described above. Going to the extremes, I tried to tween the rotation vector instead of manipulating the angles. This doesn't work as expected either. Whenever the direction changes, the normalization of the rotation vector causes it to rotate on the opposite axis to reverse direction. This means that if you're rotating along the X axis in the positive direction and then suddenly start rotating in the negative direction, at some point during the tweening effect there will be some rotation along the Y axis, as the rotation "falls over" in the opposite direction. Any recommendations on making this work correctly? Tweening the angle values does feel like the right way to go about it, but how do I construct the correct final rotation matrix if I separate them out?
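       One commonly suggested alternative (not from the original post; a minimal sketch assuming the XNA/MonoGame API): rather than tweening the axis/angle decomposition, accumulate input into a target quaternion and Slerp the displayed orientation toward it each frame. Slerp interpolates along the shortest arc between whole orientations, which avoids the "falling over" behavior. OnMouseDelta and Update are hypothetical hooks.

       Quaternion targetRotation = Quaternion.Identity;
       Quaternion displayRotation = Quaternion.Identity;
       Matrix world = Matrix.Identity;

       void OnMouseDelta(Vector2 delta)
       {
           Vector3 screenVector = new Vector3(delta.X, -delta.Y, 0.0f);
           Vector3 rotationVector = Vector3.Cross(Vector3.UnitZ, screenVector);
           float angle = rotationVector.Length();
           if (angle > 1e-6f) // avoid normalizing a near-zero vector
           {
               targetRotation = Quaternion.Concatenate(targetRotation,
                   Quaternion.CreateFromAxisAngle(Vector3.Normalize(rotationVector), angle));
           }
       }

       void Update(float dt)
       {
           // Exponentially ease the displayed orientation toward the target;
           // 10.0f is the smoothing speed, tune to taste.
           float blend = 1.0f - (float)Math.Exp(-10.0f * dt);
           displayRotation = Quaternion.Slerp(displayRotation, targetRotation, blend);
           world = Matrix.CreateFromQuaternion(displayRotation);
       }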
    3. Today
    4. The main problem with NoSQL overall, is that it requires you to know all your access patterns ahead of time. And because secondary indices are usually cumbersome or expensive, if you have more than the plain "given a key, find me a bag of data" then NoSQL starts showing its weakness. Another thing that NoSQL got first, but SQL is now getting too, is in-RAM databases. Depending on performance needs and cost / operations specifics, you may want to look at databases specific for RAM. On the other hand, if you have data with a "long tail" (old data that's seldom accessed,) then all-RAM is almost certainly the wrong choice. Also: Putting more RAM into a SQL database host, to make it cache better, often reduces the cost difference between in-RAM and on-disk. If you're familiar with MySQL, there's nothing wrong in just using that. It works great, and can scale far. If you need specific features of Redis (atomic push/pop, sets, pub/sub) then you might want to include that, too, with the caveat that you have to put a time-to-live on all data, because when Redis runs out of RAM, that's it -- no more data! Another interesting option is FoundationDB, which recently went open source when the company was bought by Apple. It's a key/value store that supports federation/distribution -- you can add nodes to get more capacity, without changing the semantics of the database. Redis, and most other NoSQL databases, by contrast, don't allow transactional work across multiple hosts.
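       For the Redis caveat above, the practical pattern is to give every cached key a TTL so eviction is harmless. A minimal sketch, assuming the redis-py client; the key name is hypothetical:

       import redis

       r = redis.Redis(host="localhost", port=6379)

       # Treat Redis as a cache, not the system of record: set a TTL on every
       # key so losing it (eviction, restart) only costs a recompute.
       r.setex("map:cell:1024", 300, "serialized-cell-data")  # expires in 300 seconds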
    5. Automated builds are a pretty important tool in a game developer's toolbox. If you're only testing your Unreal-based game in the editor (even in standalone mode), you're in for a rude awakening when new bugs pop up in a shipping build that you've never encountered before. You also don't want to manually package your game from the editor every time you want to test said shipping build, or to distribute it to your testers (or Steam, for that matter). Unreal already provides a pretty robust build system, and it's very easy to use it in combination with build automation tools. My build system of choice is Gradle, since I use it pretty extensively in my backend Java and Scala work. It's pretty easy to learn, runs everywhere, and gives you a lot of powerful functionality right out of the gate. This won't be a Gradle tutorial necessarily, so you can familiarize yourself with how Gradle works via the documentation on their site. Primarily, I use Gradle to manage a version file in my game's Git repository, which is compiled into the game so that I have version information in Blueprint and C++ logic. I use that version to prevent out-of-date clients from connecting to newer servers, and having the version compiled in makes it a little more difficult for malicious clients to spoof that build number, as opposed to having it stored in one of the INI files. I also use Gradle to automate uploading my client build to Steam via the use of steamcmd. Unreal's command line build tool is known as the Unreal Automation Tool. Any time you package from the editor, or use the Unreal Frontend Tool, you're using UAT on the back end. Epic provides handy scripts in the Engine/Build/BatchFiles directory to make use of UAT from the command line, namely RunUAT.bat. Since it's just a batch file, I can call it from a Gradle build script very easily. Here's the Gradle task snippet I use to package and archive my client:

       task packageClientUAT(type: Exec) {
           workingDir = "[UnrealEngineDir]\\Engine\\Build\\BatchFiles"

           def projectDirSafe = project.projectDir.toString().replaceAll(/[\\]/) { m -> "\\\\" }
           def archiveDir = projectDirSafe + "\\\\Archive\\\\Client"
           def archiveDirFile = new File(archiveDir)

           if(!archiveDirFile.exists() && !archiveDirFile.mkdirs()) {
               throw new Exception("Could not create client archive directory.")
           }
           if(!new File(archiveDir + "\\\\WindowsClient").deleteDir()) {
               throw new Exception("Could not delete final client directory.")
           }

           commandLine "cmd", "/c", "RunUAT", "BuildCookRun",
               "-project=\"" + projectDirSafe + "\\\\[ProjectName].uproject\"",
               "-noP4", "-platform=Win64",
               "-clientconfig=Development", "-serverconfig=Development",
               "-cook", "-allmaps", "-build", "-stage", "-pak",
               "-archive", "-noeditor",
               "-archivedirectory=\"" + archiveDir + "\""
       }

       My build.gradle file is in my project's directory, alongside the uproject file. This snippet will spit the packaged client out into [ProjectDir]\Archive\Client. For the versioning, I have two files that Gradle directly modifies. The first, a simple text file, just has a number in it. In my [ProjectName]\Source\[ProjectName] folder, I have a [ProjectName]Build.txt file with the current build number in it. Additionally, in that same folder, I have a C++ header file with the following in it:

       #pragma once

       #define [PROJECT]_MAJOR_VERSION 0
       #define [PROJECT]_MINOR_VERSION 1
       #define [PROJECT]_BUILD_NUMBER ###
       #define [PROJECT]_BUILD_STAGE "Pre-Alpha"

       Here's my Gradle task that increments the build number in that text file, and then replaces the value in the header file:

       task incrementVersion {
           doLast {
               def version = 0
               def ProjectName = "[ProjectName]"
               def vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Build.txt")
               if(vfile.exists()) {
                   String versionContents = vfile.text
                   version = Integer.parseInt(versionContents)
               }
               version += 1
               vfile.text = version

               vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Version.h")
               if(vfile.exists()) {
                   String pname = ProjectName.toUpperCase()
                   String versionContents = vfile.text
                   versionContents = versionContents.replaceAll(/_BUILD_NUMBER ([0-9]+)/) { m -> "_BUILD_NUMBER " + version }
                   vfile.text = versionContents
               }
           }
       }

       I manually edit the major and minor versions and the build stage as needed, since they don't need to update with every build. You can include that header into any C++ file that needs to know the build number, and I also have a few static methods in my game's Blueprint static library that wrap them so I can get the version numbers in Blueprint. I also have some tasks for automatically checking those files into the Git repository and committing them:

       task prepareVersion(type: Exec) {
           workingDir = project.projectDir.toString()
           commandLine "cmd", "/c", "git", "reset"
       }

       task stageVersion(type: Exec, dependsOn: prepareVersion) {
           workingDir = project.projectDir.toString()
           commandLine "cmd", "/c", "git", "add",
               project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Build.txt",
               project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Version.h"
       }

       task commitVersion(type: Exec, dependsOn: stageVersion) {
           workingDir = project.projectDir.toString()
           commandLine "cmd", "/c", "git", "commit", "-m", "\"Incrementing [ProjectName] version\""
       }

       And here's the task I use to actually push it to Steam:

       task pushBuildSteam(type: Exec) {
           doFirst {
               println "Pushing build to Steam..."
           }
           workingDir = "[SteamworksDir]\\sdk\\tools\\ContentBuilder"
           commandLine "cmd", "/c", "builder\\steamcmd.exe",
               "+set_steam_guard_code", "[steam_guard_code]",
               "+login", "\"[username]\"", "\"[password]\"",
               "+run_app_build", "..\\scripts\\[CorrectVDFFile].vdf",
               "+quit"
       }

       You can also spit out a generated VDF file with the build number in the build's description so that it'll show up in SteamPipe. I have a single Gradle task I run that increments the build number, checks in those version files, packages both the client and server, and then uploads the packaged client to Steam (a sketch of such an aggregate task follows below). Another great thing about Gradle is that Jenkins has a solid plugin for it, so you can use Jenkins to set up a nice continuous integration pipeline for your game to push builds out regularly, which you absolutely should do if you're working with a team.
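       The article doesn't show that aggregate task; here is a minimal sketch of what it might look like, using only the tasks defined above (dependsOn alone doesn't guarantee ordering in Gradle, so mustRunAfter pins the sequence):

       // Pin the pipeline order: bump version, commit, package, then upload.
       commitVersion.mustRunAfter incrementVersion
       packageClientUAT.mustRunAfter commitVersion
       pushBuildSteam.mustRunAfter packageClientUAT

       task releaseBuild(dependsOn: [incrementVersion, commitVersion, packageClientUAT, pushBuildSteam]) {
           doLast {
               println "Release build complete."
           }
       }

       Running "gradlew releaseBuild" then performs the whole version-bump, commit, package, and Steam-upload pipeline in one step; a server-packaging task would slot into the chain the same way.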
    6. So you have a lot of questions, which is understandable because this is all fairly complex and confusing. I think the first thing you should probably know is that DirectX shader programs generally only work in terms of 32-bit datatypes. So when you write HLSL code to add two values or transform a vector by a matrix, the actual ALU operations are mostly going to be working with 32-bit floating point values or 32-bit integers. These are the only data formats that are guaranteed to be supported by the hardware in your shader programs, and most games/programs out there work exclusively with 32-bit operations. There are ways to work with operations that run at lower-than-32-bit precision, as well as the "double" type that can be used to access double-precision operations. However, both of these are optional and are only supported by certain GPUs. For most of the DX10 and DX11 era, desktop GPUs only supported 32-bit operations, but recent AMD and Nvidia GPUs have started adding support for fp16 operations. Support for double precision can be rather patchy, since it's generally only intended for high-end compute usage rather than 3D graphics. So with that said, the other important thing you should know is that there's a difference between how your data is stored in buffers or textures, vs. how it's operated on in your shader programs. Your shaders have the ability to read or write data through various paths depending on the type of shader, and those paths can possibly support automatic conversion from its format in memory to the format that the shader program can consume it in. The classic case is textures: textures are almost never stored as a set of full 32-bit floats, and are instead typically stored as a fixed-point format like R8G8B8A8_UNORM. In order for a shader to sample a texture, the texture has to specify that its data is laid out according to one of the supported DXGI format types from the DXGI_FORMAT enum. This is key, because these formats specify not just how the data is stored in memory, but also how the data is interpreted by a shader program. If you go to the docs for DXGI_FORMAT and scroll down to the section entitled "Format Modifiers", you'll see what I'm talking about. If we go to the commonly-used UNORM format modifier, you'll see this: "Unsigned normalized integer; which is interpreted in a resource as an unsigned integer, and is interpreted in a shader as an unsigned normalized floating-point value in the range [0, 1]. All 0's maps to 0.0f, and all 1's maps to 1.0f. A sequence of evenly spaced floating-point values from 0.0f to 1.0f are represented. For instance, a 2-bit UNORM represents 0.0f, 1/3, 2/3, and 1.0f." The important part: it's basically saying that the shader will see a floating point value in the range [0, 1], not an integer value. This lets you write shader code like this:

       Texture2D<float4> ColorTexture;
       SamplerState ColorSampler;

       cbuffer Constants
       {
           float3 LightDir;
           float3 LightColor;
       }

       float4 PSMain(in float2 uv : UV, in float3 normal : NORMAL) : SV_Target0
       {
           float3 surfaceColor = ColorTexture.Sample(ColorSampler, uv).xyz;
           float3 lighting = surfaceColor * saturate(dot(normal, LightDir)) * LightColor;
           return float4(lighting, 1.0f);
       }

       So the shader is just working entirely in floats, and doesn't care about the data format of the texture. The texture might be R8G8B8A8_UNORM, it might be R16G16B16A16_FLOAT, or it might be BC1_UNORM. It doesn't really matter to the shader, as long as the format is ultimately interpreted as a float. Hence you don't see any unpacking or conversion code here; we just go right to multiplying the texture sample with the lighting. Generally the only time you do care is if you use a UINT or SINT format, since those require decorating the texture with <uint> or <int>. In those cases the integer values will be cast to a 32-bit int type on read, which lets the shader work with them using 32-bit operations. On a similar note, we can have data conversion happening when the pixel shader's output value gets written to a render target texture. The pixel shader outputs a float4, but the render target might be using R16G16B16A16_FLOAT, which is a common format for HDR rendering. If that's the case, the pixel shader output value will automatically get converted to that format on write, again without the shader really knowing or caring. The same concepts extend to a few other places in the pipeline. Shaders can read from the "Buffer<>" type in HLSL, which maps to a shader resource view with D3D11_SRV_DIMENSION_BUFFER. This kind of buffer also uses a DXGI_FORMAT, and supports automatic conversion from the packed format to 32-bit values. RWTexture and RWBuffer UAVs can also perform format conversion on write, and possibly also on load depending on the GPU and OS that you're using. The other place where format conversion is commonly used is the input assembler (IA), which is responsible for reading from your vertex and index buffers and passing the vertex data to your vertex shader. For the formats that the IA supports, conversion works the same way it does for textures: transparently. Your HLSL code just needs to declare the input variable according to the way that the format is interpreted as specified in the docs, which means float for FLOAT/UNORM/SNORM, int for SINT, and uint for UINT formats. So if you have this as your VS input struct:

       struct VertexInput
       {
           float3 pos;
           float2 tex;
           float3 normal;
       };

       Those 3 attributes could be using any of the FLOAT, UNORM, or SNORM formats in the vertex stream. You don't need to change your shader declaration if you change the vertex packing, as long as you're continuing to use formats with one of those 3 modifiers. This means that you can totally use lower-than-32-bit packing for your vertex attributes if you want to save on memory and bandwidth, and you absolutely should do this! You just have to make sure that the precision is adequate for your use cases. As for how the conversion happens, it's really up to the hardware. Texture format conversion is generally handled by dedicated hardware in the texture units, which is necessary because unpacking (and possibly decompression from a BC format) needs to happen before filtering can occur. On the flip side, some hardware (particularly mobile hardware) will patch code into the pixel shader to handle conversion from 32-bit floats to the render target format. For the IA, it depends. Some GPUs still have dedicated hardware and caches for fetching from vertex buffers. Some don't have any hardware for this, and instead will patch in a preamble to your vertex shader that reads and unpacks the vertex data. But even in those cases the decode cost tends to be pretty small compared to the cost of additional memory access, so tightly-packed data is most likely still going to be a win.
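       To make the IA point concrete, here is a minimal C++ sketch (not from the post; assumes plain D3D11, and the semantic names are assumptions since the post's struct omits them) of an input layout that packs the struct above tighter than 32-bit floats while the HLSL declaration stays untouched:

       #include <d3d11.h>

       // 20 bytes per vertex instead of 32: the shader still sees
       // float3/float2/float3, because the IA converts on fetch.
       const D3D11_INPUT_ELEMENT_DESC layout[] = {
           // 32-bit float position at offset 0 (12 bytes)
           { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
             D3D11_INPUT_PER_VERTEX_DATA, 0 },
           // 16-bit UNORM texcoords at offset 12: shader sees float2 in [0, 1]
           { "TEXCOORD", 0, DXGI_FORMAT_R16G16_UNORM, 0, 12,
             D3D11_INPUT_PER_VERTEX_DATA, 0 },
           // 8-bit SNORM normal at offset 16: shader sees floats in [-1, 1]
           { "NORMAL", 0, DXGI_FORMAT_R8G8B8A8_SNORM, 0, 16,
             D3D11_INPUT_PER_VERTEX_DATA, 0 },
       };

       // device->CreateInputLayout(layout, 3, vsBytecode, vsBytecodeSize, &inputLayout);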
    7. Hi, I am new to database design and am currently trying to build my mobile multiplayer game; it will be something like Pokémon GO. I have experience with MySQL, and I know some NoSQL engines like Redis. I saw some existing game projects which store their data in both a SQL database and a NoSQL database. Could anyone give some advice on what kind of data should be stored in SQL and what kind of data is better kept in NoSQL? It would be nice if you could give some real scenario examples. My understanding is that data like user profiles and purchase transactions should be in SQL, while field map information and enemy status can be NoSQL.
    8. Yesterday
    9. I've been working on my own Metroidvania via GameMaker Studio for the past few years. You play as a bat named Ralph as he goes on an adventure to obtain 7 Crystal Medallions hidden in dungeons, with the help of a cult known as the Crimson Fog. Along the way, quests will be unlocked in Cedrus Village as you progress through the game. I've managed to complete a demo of the game up to the first dungeon and boss fight. I have only a PC build available, and the only gamepads I managed to install were Logitech Precision and Xbox PC gamepads. I had some trouble with gamepad detection though, so they may have connection issues. The desktop controls are similar to Terraria's control scheme if it's too much trouble. I don't have any music at this point; I'll need to get someone else to compose it later on. The music I make isn't bad, but it doesn't fit the aesthetic that well. I'm really hoping I can get feedback regarding the general content. Crimson Fog.0.2.zip
    10. Hodgman

      alternative to NVidia FX Composer?

      Honestly I would recommend Unity. It's obviously a sledgehammer where a smaller tool is required... but it pretty much contains the feature set of something like FX Composer within its editor. They have their own wacky shader format, but it's based on Cg/HLSL originally, so it will be similar.
    11. You can try the Unity LateUpdate here. You could also use OnCollisionStay2D instead of OnCollisionEnter2D.
    12. Hodgman

      Why are member variables called out?

      No, that just assigns the arguments to themselves. The only place this works is in constructor initializer lists:

      struct foo
      {
          int x, y;
          foo(int x, int y) : x(x), y(y) {} // copies the arguments to the members
      };
    13. Been playing games since I was a kid, mostly PC strategy games. That was back in ~1995-2000. Fast forward to the year 2000, when I was in middle school: I thought about how cool it would be to make games, and have the character run in any world you create. But unfortunately, I was in Singapore, and it seemed to me like all the games were made in Japan/USA; it seemed like an impossible dream. Fast forward to 2008, when I was about to go to college/university, and ta-dah, DigiPen Singapore had just opened, and I was in the pioneer batch. And I was lucky after graduation to be able to work on games like XCOM 2 and The Evil Within :D. Now I'm out working it out on my own! Here's the game I am working on at my startup. AwayFromGuild.com
    14. ChaosEngine

      Why are member variables called out?

      I'm very skeptical of that. Are you claiming that x and y are both members and parameters? How would the compiler know which x is the parameter and which is the member? Does it assume that lvalues are members? Can you tell me which compiler does that so I can avoid it like the plague? Even on the off chance you could do that, don't. There is nothing good about that code. It's confusing, ambiguous and definitely non-portable.
    15. GRASBOCK WindyOrange

      #1 Grass and Animated Sprites introduced

      I think I understand. This will get fairly difficult.
    16. Zephyr Wong

      Away From Guild

      Game Website - www.awayfromguild.com Twitter - www.twitter.com/lightninggamesg Facebook - www.facebook.com/Away-From-Guild-215904675813418 Away From Guild is an idle mobile game where you are the Guild Master. Create, expand and customize your own guild. Recruit adventurers and send them out on quests. Witness their epic journeys and collect loot. Gain fame and become the best guild in the world!
    17. Hi all, Some of you may already be familiar with Scene Fusion for Unity. Well, we kept getting asked when the Unreal version would be coming. After many months of hard work, we were able to produce our first teaser! We will be going into a closed alpha test in the coming weeks. In the meantime, check it out!
    18. @Scouting Ninja Thank you so much for the feedback; everything has been taken on board and I will do that before posting to my ArtStation page. I will add some wires to this post too. Thanks!
    19. Don't allow the bricks to move down during the time the ball is that close and moving up. ED: Missed the Unity tag.
    20. I've got a bug with my brick breaker style game. The bricks move down one line at a time every 1.5 seconds. What appears to be happening is that occasionally the ball will be just about to hit the brick when the brick moves down a line, and now the ball is behind it. I'm not sure how to fix this. I have two ideas, but I'm not sure of the implementation. One solution would be to check where they were and where they are going to be before rendering the frame; then, if they crossed paths, register the brick as hit. Solution 2 would be to change how the bricks move: I could maybe slide them down line by line, instead of a jump down. I'm not sure if this will fix the issue or not. Any ideas?
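       A minimal sketch (not from the thread) of the first idea, assuming Unity 2D physics: before teleporting a brick down a row, sweep its collider across the gap it is about to jump; if the ball sits in that band, register the hit instead of letting the brick tunnel past. BrickRow, StepDown, and OnBallHit are hypothetical names.

       using UnityEngine;

       public class BrickRow : MonoBehaviour
       {
           public float rowHeight = 0.5f;
           public LayerMask ballLayer;

           // Called every 1.5 seconds by the game's step timer (hypothetical hook).
           public void StepDown()
           {
               foreach (Transform brick in transform)
               {
                   var box = brick.GetComponent<BoxCollider2D>();
                   // Sweep the brick's box downward across the distance it will jump.
                   RaycastHit2D hit = Physics2D.BoxCast(
                       box.bounds.center, box.bounds.size, 0f,
                       Vector2.down, rowHeight, ballLayer);
                   if (hit.collider != null)
                   {
                       // Ball is in the path: count it as a hit instead of tunneling.
                       brick.gameObject.SendMessage("OnBallHit", hit.collider);
                       continue;
                   }
                   brick.position += Vector3.down * rowHeight;
               }
           }
       }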
    21. Scouting Ninja

      What does MMORPG require?

      A massively multiplayer online game requires millions of dollars and a huge team. MMORPGs died because there are too many of them. Too much cost and too few players, simple as that. If you just want to make a multiplayer game then you don't need much, just time really. Grab an engine and get started.
    22. mmmax3d

      Add floor in skybox

      This thread has been moved to Graphics.
    23. mmmax3d

      Add floor in skybox

      OK, thanks, I moved it to Graphics. Thank you very much for your reply. Could you please copy and paste your reply there? Thanks!
    24. Hi everyone, I need some assistance from anyone who has similar experience or a nice idea! I have created a skybox (as a cube) and now I need to add a floor/ground. The skybox is created from a cubemap, and initially it was infinite. Now it is finite with a specific size. The floor is a quad in the middle of the skybox, like a horizon. I have two problems: When moving the skybox upwards or downwards, I need to sample from points even above the horizon while sampling from the bottom at the same time. I am trying to create a seamless blending of the texture at the points of the horizon, where the quad is connected to the skybox. However, I get skew effects. Has anybody done something similar? Is there any good practice? Thanks everyone!
    25. Actually, when you create a Canny image out of the original one, the cluster is already grouped into one color, so the mask could be just black and white, where white is vegetation area and black is not. I think it would work somehow... thanks, Jack
    26. Liudvikas Lazauskas

      What does MMORPG require?

      First of all, MMORPGs have always been an important part of video games for me. Whether it's roleplaying or grinding, I have always enjoyed MMORPGs from the bottom of my heart. Personally, I think the current world has rather drawn away from these games. Battle royale games and FPS shooters are currently taking over the industry. Therefore, current MMORPGs fall even further behind; when they fail to satisfy the gamer's needs, the game usually gets abandoned. On the other hand, I strongly believe that since our current society is making a rather big breakthrough with all the technological development, VR and AR are not that far from us. As a consequence, I think people will more likely be drawn back to these open world games and roleplay to the bottom of their hearts. What I want to ask (or just gain criticism of my foolishness) is: what does an MMORPG take? I have some experience in C++ and Python as coding languages, and some experience working on application development. I am currently writing a pretty detailed Game Design Document, so I know what I want from the game and what it should do. I know such a task is not for a single person to handle, so I'll just say I have a couple of friends willing to help me along. What programming languages should I learn, and how should I approach MMORPGs? I am currently pretty unsure where to start and what to prepare. I am fully aware it is not a short term project, but I am ready to improve and adapt for this.
    27. Hive Entertainment

      Hive Entertainment Technologies

      The name of the game is Bunny. You play as a female character in a dystopian open world environment. The world was devastated when a virus came around and mutated all living organisms. Upon character creation there are 3 main options for the character class: one that uses guns, one that uses knives and swords, and one that uses bows. The main objective is to find the cause behind the virus. Later in the story, Bunny (the character you play as) takes off her gas mask to reveal she is actually a robot with sophisticated A.I. The main objective then shifts to trying to find her Creator, who holds a very vital piece of information as to how the virus came to be. There is a simple sketch attached of what the character will look like. Every time she dies, the color of the jacket changes. As I said, it is a rough sketch. Also, my name is Christopher (I go by Chris). It is a pleasure to meet you all. I also have a few properties waiting to see if I get enough team members behind this project before they will go in. If more info is needed, please do not be afraid to ask.


    Recent Blogs

    1. This update of DRTS brings several improvements to the in-game interface. You can play the newest version at https://play.drtsgame.com Simpler Controls And Touch Screen Support: Controlling the game using a mouse no longer requires using different mouse buttons. You can select and send units, and scroll the camera, using any mouse button. As before, zooming works with the mouse wheel and keyboard. This update also introduces support for touchscreens. You can zoom in and out using two fingers. Sound Effects: I added sound effects to help keep track of in-game events. You can hear, for example, when you capture a node or a unit is defeated. Rendering: To avoid choppy movement on the screen, I made some improvements to rendering. The units' locations are animated using client-side prediction and easing. Easing is also applied to camera movements.
    2. What have I done this week: This week I have fixed a few bugs and finally implemented a fully working obstacle avoidance system, which makes my pathfinding and collision/obstacle avoidance system done. Some minor things which I did include:
       Fixed a bug where text rendering causes lighting issues
       Added calculation of a game entity's axis-aligned bounding box from data contained in the .obj file
       Added AABB to AABB collision detection and response
       Added Ray to AABB intersection detection
       Made the map size resizable
       My own solution to the obstacle avoidance problem: I had a really hard time finding information about an easy way to do obstacle avoidance in the way I wanted it to be. So instead I worked a few days and came up with my own solution, which works pretty well. I think I kind of reinvented the wheel, and someone might have a better approach to this problem than I do. But anyway, it's already done and it works the way I wanted it to, which is all I care about now. What I wanted from my obstacle avoidance/collision system: For the obstacle avoidance system I wanted it to do a few things:
       Stop the vehicle if it gets too close to another vehicle
       If vehicles are about to collide (traveling towards each other at an angle less than the threshold angle), make them steer away from each other
       Don't ever let two vehicles overlap
       I tried and I failed: Before I explain how it works, I will say which things I tried that didn't work that well. The first thing I tried was to create a separate AABB in front of the enemy's car and check if it collides with any other car. If it does, stop the vehicle. I thought this would make it so that the vehicles wouldn't get too close to each other and hence wouldn't overlap. Well, that didn't work, because when both vehicles collide with each other, they will both stop and get stuck. To fix this, I added an if statement which checks whether the vehicles are colliding with each other; if so, only one of the two vehicles stops and the other continues driving. But this made them overlap some of the time, which I didn't want. So after thinking for a while, I decided I should add AABB collision response, so that when cars hit each other they don't overlap but get pushed back. So I did that, and now it works pretty well, BUT if the vehicles are travelling towards each other there's no way of knowing which way to turn to avoid the collision. So I decided to scrap this AABB-in-front-of-the-vehicle approach and try casting rays. The approach which worked: My last and final try was to use rays instead of an AABB to check for collisions with other entities. This time I say entities because I also want to check if the ray intersects the towers as well; this way we will know which way the vehicle can turn to avoid collisions. The way I do it is pretty simple. The vehicle casts a number of rays from its center towards the front of the car, which check for intersection with towers' and enemies' bounding boxes. Then I have a function which does some magic and calculates (from the ray intersection information) which way the vehicle should steer (a sketch of the idea follows below). This steering is only done if two vehicles are facing each other and moving towards each other. If they are not, I check if the ray distance is smaller than the threshold value, and if it is, I just stop the vehicle. There are a few other tiny hacks and tricks which I did to polish the system, but this is mainly how it works. Next week: I still haven't decided what I am going to do this coming week, but I think I will add different tower types, add a GUI, and make it so that enemies will spawn inside a building or in a hidden area from where they will come to the game's map. You can see the state of the game here: My website: http://extrabitgames.com
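       The blog doesn't include that steering function; here is a minimal sketch of the ray-fan idea (not the author's code; C# with System.Numerics, where castRay stands in for whatever ray-vs-AABB query the engine provides and returns float.MaxValue on a miss):

       using System;
       using System.Numerics;

       static class RayFanSteering
       {
           // Returns a signed steering value: negative steers left, positive right.
           public static float Steer(Vector3 position, Vector3 forward, Vector3 right,
                                     Func<Vector3, Vector3, float> castRay)
           {
               const int rayCount = 5;
               const float fanAngle = 0.8f; // total fan width in radians
               float steer = 0f;
               for (int i = 0; i < rayCount; i++)
               {
                   // t runs from -0.5 (leftmost ray) to +0.5 (rightmost ray).
                   float t = i / (float)(rayCount - 1) - 0.5f;
                   float angle = t * fanAngle;
                   Vector3 dir = forward * MathF.Cos(angle) + right * MathF.Sin(angle);
                   float dist = castRay(position, dir);
                   if (dist < float.MaxValue)
                   {
                       // Nearer obstacles push harder; a hit on the right side
                       // (t > 0) steers us left, and vice versa. A dead-center
                       // hit defaults to steering left.
                       float weight = 1.0f / MathF.Max(dist, 0.1f);
                       steer -= (t >= 0f ? 1f : -1f) * weight;
                   }
               }
               return steer;
           }
       }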
    3. Another enemy I'm working on recently.
       Enjoy, and subscribe to my YouTube channel if you like my project. Thank you!
    4. Hello All! After some great feedback from members of the community, I decided to create a trailer for the game. I welcome your feedback!   _________________________________________________________________________________________ As always, thank you for reading. I would love to hear from you and would love to hear any comments or ideas you have. Feel free to leave a comment or email me at watermoongames@gmail.com. _________________________________________________________________________________________  
    5. Hey-o folks! Boy, it sure has been a while since the last update, huh? There are two main reasons for that: 1. We've been hard at work on the project and gotten down into the deep of it. And.. 2. I wanted to space these updates out to keep some elements of surprise as well as have some note-worthy content to showcase. Though, this is a dev-blog on a dev site, so y'know... spoilers ahead, capi-tahn. I do apologize for the quiet time and thank everyone that's liked the blog and sent in comments and PMs; it makes us feel amazing to see people are genuinely interested in our little game, and, during the cold winters of development, all it takes is a warm and fuzzy to keep us goin'. So thank you. Now, let's get on with the update! One of the major additions is we've migrated out of the Testing Scene and into actual maps! How exciting! Once we've made the area fairly polished, we'll be looking at video demos of the place to give you an idea of the area the main character will start out in. Another thing we've added in (though it could still use some polish) is the day/night cycle. Watching the area slowly transition from day to night is kind of peaceful! Though in areas where monsters are abundant, it could have some serious consequences if you're out after dark! But fear not, magic initiate! For two new abilities have been prototyped! First, the Ground Smash: using a combination of the Block and Dash buttons will send you mashing down on the ground with magical power and sending a shockwave out, hurting nearby monsters and, in certain places, destroying things that lay beneath you. But watch out, this uses up your Mana like cru-aaaazy, and you'll need to beat monsters the old fashioned way (ala: punching them in the face) if you run out. Don't worry, though, Mana is one of the drops from many creatures and objects (aside from that cash-money, yo) so you can keep smashing away at stuff like a Hulk OC. The second of our two new abilities is the Charge Dash, because why walk around walls when you can smash right through them? Holding down the attack button (whilst you're bursting with that Mana, that is!) will see you start a charge-up attack. After a brief charging period, releasing the attack button will see you fly through at a ridiculous speed, knocking enemies out of your way and, in the event that you hit a special wall, smashing right through it! Not enough bang for your buck, eh? Well, there's also a few things we've added that are kind of hard to get screenshots of (didn't stop me from trying though): Scene Transitions; Enemy Abilities; Menus upon menus upon menus; Saving/Loading (started but not yet complete); More Dialogue options; an Event System that we'll use to make Cutscenes. Oh, so many fixes, patches and bug squishin'. We're planning on having the entire first area done soon, so that we can make videos to show off instead of having to rely on screenshots, and who knows? Maybe a bit of commentary will come with them to better explain what we're showing off. I want to sign off by, again, thanking everyone for their comments, PMs and liking the blog posts; it really does help to know people like what we're building, so please keep them coming! I really want to show off more but for now, I'll see y'all next time!
    6. Hello, this time I have worked on the ability to animate sprites, small but essential system changes, growing grass (to get a gist of the basic procedure I might use), and developing a field of view algorithm for the entities that are exploring the world. Watch a short clip here: Video This took me longer than expected due to some bugs I encountered in the process. Next up I will probably have to carefully think about not only how I will proceed, but also whether my current setup is worth keeping, as retrieving positions with my current build is quite a hassle and will hurt performance once I start using field of view and pathfinding algorithms. I might have to give the world a fixed size (for which I have some good solutions to make that aspect a feature). It would really help me to know what you guys think has to be changed compared with Dwarf Fortress in terms of gameplay, as it decides how the game is built. So I would appreciate it if you could tell me, either via PM or here in the comments, your opinion on the matter.
    7. Welcome to this week’s From the Forum. In this post, we highlight a few Corona Community Forums posts that cover important topics. Do you use a game module? Corona community developer extraordinaire @roaminggamer posed an interesting but simple question in the forums: “Do you use a game module?”, asking about how you organize your code. This active thread covers how many of our experienced developers handle placing their game code in their projects. It’s well worth the read! Need sprites? While there are efficiencies in using sprite sheets (or image sheets), do you have to use them? Not always. This thread jumps into circumstances about when you need them and when you don’t. Taps or touches? And when to use them. This thread originates around the question of how to get a ship to fire a laser continuously. How do you get the event listener to make that effect possible? Learn more from our community developers! Do you have a particular forum thread that was helpful for you? Let us know about it! Email support@coronalabs.com, put FTF: and the forum title in the subject, and include the URL in the email. We will consider adding it to an upcoming edition of From the Forum.
    8. Hello gentlemen. I am currently wrapping up the concept of the game. The idea is to divide the functionality of the game into two groups: a) what an MVP should have; b) all other features. I am also very lucky to have a great designer who agreed to help with textures, sprites and all that stuff. She has no experience in making such stuff for games, but she draws really well, loves the idea and has proper equipment. We chat daily via Telegram. So, the goal is to finish the concept text of the game by next week and actually start the development. I will publish the doc here. Also, for you to have a perspective on how the game will look, I plan to ask my colleague to draw some concept art. Hopefully, she'll create some in a month. I'm not really sure I can do it, but I'll do my best. Best wishes to everybody, Nikita K.
    9. Crowdsourcer.io is a growing concept; it allows projects/small businesses to bring in collaborators on a revenue share basis to help them grow and expand. Felicity Toad is one of many projects that is successfully developing their game through this platform, and we want to share some insights with you. Felicity Toad started as a labour of love and grew into a team of co-operative people who are willing to work on the final vision of this original game. Its foundations lay with one Neil Badman, who, being 43, decided to go with a dream. He had spent many years in menial jobs, the most recent being in care, but working in the health sector gave him purpose and meaning, which would profoundly alter the way in which he saw his life. Unfortunately, after a time he fell into disarray, which ended with a breakdown, but without this event Neil would never have been diagnosed with schizophrenia, a much stigmatized illness, in part thanks to Hollywood and other uneducated outlets, he says. Unperturbed, and with a newfound and hopefully temporary freedom, he wondered what to do with his time out of work. He began a band, but couldn't quite find the sound he wanted despite many auditions and help from friends, so he put this project aside for the time being and pondered what was next. A gamer turned developer: Before his breakdown he had found himself playing various computer games in his spare time, something he had always loved, and stumbled across a game called Oolite, a cooperatively built game reverse engineered from an old classic called Elite, a 3D space game that ran on 32k, and even less on the Vic-20, a remarkable technical achievement. Oolite was a modern take on this classic, and being cooperatively built allowed you to construct your own content that would go up on the game's site for other people to download. Being a lifelong drawer and creative, he took to looking at what other people weren't particularly working on, settling on the look of the stars and the nebulas generated from various images within the file system. He spent a year perfecting the use of the nebula generator, which was extremely popular with the player base. After creating a few different assets for this game he moved on and discovered a game called Battle For Wesnoth, a tongue-in-cheek strategy game that was simple but very fun. It too was a labour of love by a very involved community, and was not only moddable but had its own programming language. It was around here he was diagnosed with schizophrenia and suddenly found himself with a lot of free time; a very frightening time, but free nonetheless. Neil needed to find something to occupy himself with, a distraction therapy from the nightmarish voices that were plaguing him; this was when he started to explore and started the band. He found himself returning to Battle For Wesnoth, as it was a perfect platform to learn something new whilst creating a story and the characters in it, the start of a therapeutic addiction that instantly rang true for him. This was nearly but not quite all his loves combined; he discovered the joy of creating through programming, despite his extremely messy first attempts. The code was however functional, and a story emerged. Neil wondered what it would be like to have total creative freedom, rather than modding someone else's build, so he researched and soon found Unity3D, a platform for developing games from scratch that is free up until you make a certain amount of money, if any.
       The start of something new: Now he needed an idea. He went through the motions of beginning to create the basics of a game, but soon found the technology wasn't quite there yet to do what he wanted, so it was back to the drawing board. This was the birth of Felicity Toad, a tongue-in-cheek adventure, but dark and gritty in places. This would be a labour of love, a therapy, and a possible path back into work. It seemed perfect.
       [Image: The Beginnings of Felicity Toad]
       Soon the idea was growing, and Neil realized that if he was going to build the game it would take years of learning all the different aspects, and as much as he had the drive to get on with it, technology is changing at such a rate that the game would be out of date by the time he finished, surrounded by up-and-coming virtual reality and holographic technology. Would a 2D platformer survive in this environment? He needed a team; he didn't have money, and finding people who would fall in love with his game and join up seemed extremely unlikely. After posting on numerous sites, sometimes in the wrong places, he garnered a small amount of interest, but it was through exploring the different sites he could try that he stumbled upon a suggestion posted by someone for someone else. A site called Crowdsourcer.io was up and coming, promising to set people such as himself on the right path with the right help. There was nothing to lose.
       [Image: Progress Made on Felicity Toad After Signing Up On Crowdsourcer.io]
       Crowdsourcer.io has helped the project get its creative foothold in a world already swimming in games, so what would be different about it? Well, firstly he decided he needed to be doing something doable as a first game, so he settled on a 2D platformer. But there are many 2D indie games out there and in development; even AAA companies still produce 2D platformers of unobtainable quality to compete with. There was still hope: a large sector of society has a love of independently made games, simply because they can take risks big AAA companies can't afford to, and the 2D style is reminiscent yet appealing to all age ranges. So what could be different? What needs to be familiar? This was a balance that needed to be assessed, and the Felicity Toad team have gone a way to addressing this interesting situation. Neil is still working on the game and making strides to finishing it every day. You can learn more about Neil's project, Felicity Toad, by following the link or finding his project on www.crowdsourcer.io. If you would like to contribute to Neil's project you can find it here and can apply to contribute!
    10. Hi everybody, I'm ready to announce that Xilvan Design has been building games since 1993: The Labyrinth v3.07, Soul of Sphere Platinum v2.75, Alako the Koala v1.01, Age of Dreams v2.50, Lights of Dreams IV v7.97, Candy World II v8.37, Candy Racing Cup v2.75, Candy World Adventures v5.17, Candy to the Rescue IV v5.97, Candy in Space III v5.47, Candy's Space Adventures v15.75, Candy's Space Mysteries II v6.27. Updates since the last edit: - Enhanced the space, missions & sceneries in Candy's Space Adventures. - Verified bonuses, options, maps & more tricks in Candy's Space Mysteries II. - Re-verified all of our old games like Candy World II, Candy to the Rescue IV, Candy World Adventures IV. - Maybe a Candy in Space III quest will be available soon. - We are planning Lights of Dreams V. - Worked on the "Level Select Screen" in Candy World Adventures IV. - We are currently debugging all of our games. They are available for download on my website: - Xilvan Design Websites - One new link is now available: a OneDrive link. But there are still the Google Drive & old CNET download.com links. Hope you'll appreciate them! If you want to watch the videos of our games: - My Youtube Channel - Please subscribe to my channel for more info about our new releases. Friendly, Xylvan, Xilvan Design.

    Latest News

    1. At this year’s Electronic Entertainment Expo (E3), taking place June 12-14 in Los Angeles, Epic Games will once again bestow the Unreal E3 Awards to highlight outstanding achievements of Unreal Engine developers. Epic is partnering with co-sponsors NVIDIA and Intel to provide each winning team with an NVIDIA GeForce GTX 1080 Ti graphics card and an Intel® Core™ i7-8700K Processor. Judges will evaluate hundreds of Unreal-powered games throughout the show and announce nominees on Thursday, June 14, with winners revealed the following week on Thursday, June 21. This year’s award categories aim to highlight a variety of accolades: Eye Candy: This award is given to the most visually impressive Unreal Engine game at E3 2018 and rewards the use of leading-edge graphics that push the medium of interactive entertainment forward. Most Addictive: This award is given to the experience that people simply can’t put down. Nominees will make players forget about their surroundings and lead to fun-induced sleep deprivation. Best Original Game: This award is given to the project with huge potential as an all-new IP. Nominees will spark interest not only through gameplay, but through original characters and worlds that are put on full display during E3 2018. Biggest Buzz: This award is given to the project that creates the most talked about moment of E3 2018. From a major game reveal to an undeniably impressive demo or a major twist that flips the industry on its ear, this award goes to the Unreal Engine team or project that produces the most buzz. Unreal Underdog: This award is given to the independent team that pushes the limits to achieve an amazing showing for their title at E3 2018. Focusing on not just the product, but the people and process behind it, this award acknowledges a team’s perseverance to make a big splash and stand out from the crowd. To alert the judging panel about an Unreal Engine team’s presence at E3, simply email e32018@unrealengine.com with a brief description of the game or experience, along with details around where it will be shown in or around E3. Epic will respect the confidentiality of any details that are shared. Read more on the Unreal Engine blog. Follow @UnrealEngine on Twitter throughout E3, June 12-14, to hear about the nominees and see developer interviews straight from the expo floor.
    2. Amazon announced Sumerian at AWS re:Invent 2017, but today announced its availability to developers. Sumerian is a web-based visual editor that allows creators to develop 2D, 3D, and VR experiences without having to master specialized tools. Experiences can be deployed across multiple platforms without having to write custom code or worry about deployment systems. Learn more from the Amazon blog here.
    3. Valve has released a new version of the OpenVR SDK. The 1.0.15 release introduces SteamVR Input, allowing users to build binding configurations for their favorite games with any controller, ultimately making it easier for developers to adapt their games to diverse controllers. Learn more from the SteamVR blog announcement here.
    4. Active Gaming Media Inc. and indie game platform PLAYISM have announced a new collaborative project with KADOKAWA Corporation (CEO: Masaki Matsubara; HQ: Tokyo, Chiyoda-ku) on the latest title in the Maker series: Pixel Game Maker MV. Pixel Game Maker MV is game creation software that lets you easily create your very own original action games, without any sort of programming skills or specialized knowledge whatsoever. The action games you can create are limited only by your imagination.
       
      The ability to use original materials and resources for character animations, background maps, sound, etc., allows you to create truly unique games. Games can include physics, multiplayer capabilities, particle systems, user interfaces, animations, and more. Learn more at the Pixel Game Maker MV website at https://tkool.jp/act/.
    5. Perforce Software, a provider of solutions to development teams requiring scale, visibility, and security along the development lifecycle, announced a new package offer for independent game development studios. The package, called the “Indie Studio Pack,” is designed to provide up-and-coming game development studios with the tools they need to plan and version their next game. It includes five free users on both its version control and project management solutions: Helix Core, a long-time industry standard, and Hansoft, an Agile project management tool built by game developers for game developers. Recognizing that independent studios come in many sizes, the package includes special pricing for those needing more than five users. The offer comes after real feedback from game developers at the industry’s largest global event, the 2018 Game Developers Conference (GDC) held last February in San Francisco. “What we heard at GDC is that indie game developers love Helix Core, which is currently available for free to teams of five or fewer, but they wanted similar access to our Hansoft project management product,” says Nico Krüger, Perforce General Manager. “So, we’ve bundled them together.” In a crowded field of project management tools, Hansoft is one that was built by game developers for developers. It’s currently used by AAA studios for its speed, flexibility (it supports Agile and hybrid methodologies), and scalability. “[Indie game developers] told us that what they end up doing is using free project management software they find online — and there’s a lot of it these days — but that those tools aren’t really built for game developers,” says Krüger. “They now have access to the same version control and project management tools used by AAA game studios.” Visit the webpage for more details about the Perforce Indie Studio Pack.
    6. Enhance has announced the launch of the EnhanceMyApp podcast where they will talk about valuable tips and advice on how to improve your mobile app in all stages of development and beyond. EnhanceMyApp will be picking the brains of some of the mobile industry’s most successful developers, publishers and service providers, discussing everything from cultivating your first app idea, all the way to generating more installs. The first episode of the show will focus on all the phases of building your first mobile game, from start to finish. Indie game developers Damien Crawford and Robert Podgorski join to discuss everything from how game jams can spark your creative juices, to how user feedback should be taken seriously in the development phase. Head over to Enhance.co/podcast and subscribe on your preferred listening platform. Enhance also has a Facebook, Twitter, and Instagram as well with the latest updates on the show.
    7. Unity has released the official 2018.1 build and provided it for download. As we reported here when we spoke to Unity at the Game Developers Conference in March, the 2018.1 release includes the Scriptable Render Pipeline (SRP) Preview, the new C# Job System and Entity Component System (ECS), a Package Manager, improvements to level design and shaders, and more. Watch the video for an overview. Scriptable Render Pipeline (SRP): The Scriptable Render Pipeline allows developers and technical artists to have the full power of modern GPU hardware using C# code and material shaders. Two render pipelines are being provided with Unity: the High-Definition Render Pipeline for developers with AAA aspirations, and the Lightweight Render Pipeline (LW RP) for those looking for a combination of beauty, speed, and optimized battery life on mobile devices and similar platforms. High Definition Render Pipeline (HD RP): This modern renderer will support a limited set of platforms: PC DX11+, PS4, Xbox One, Metal, Vulkan - no XR. It targets high-end PCs and consoles and prioritizes high-definition visuals. The tradeoff is that HD RP will not work with less powerful platforms, and some learning and re-tooling may be required. The renderer is a hybrid Tile/Cluster Forward/Deferred renderer. It features volumetric and unified lighting, new light shapes, and decals. Lightweight Render Pipeline (LW RP): This pipeline is a single-pass forward renderer intended to use fewer draw calls, which is an ideal solution for mobile and performance-hungry applications like XR. LW RP has its own process for rendering and requires shaders written with it in mind. C# Job System and Entity Component System (ECS): The Entity Component System combined with the Job System allows Unity developers to take full advantage of multicore processors without the programming headache. Developers can use the extra horsepower to add more effects, complexity, and behaviors to their games. Level Design and Shaders: More improvements have been added to help artists, designers, and developers collaborate more efficiently by making it possible to create levels, cinematic content, and gameplay sequences without coding. Packages: Unity 2018.1 builds on the Package Manager introduced in Unity 2017.2 with a new Package Manager User Interface, the Hub and Project Templates, and more to help developers get started faster and more efficiently. Unity's goal is to make the toolset more modular, where features can be installed, removed, enabled, and disabled quickly and easily. Much more is included in the 2018.1 release. Learn more and view all the details at the Unity announcement here.
    8. DreamHack and Game Jolt will showcase 10 games from this jam at the Game Jolt booth June 1-3, during DreamHack Austin! DreamHack is the world’s largest digital festival, bringing esports tournaments, BYOC LAN parties, cosplay competitions, music acts, technology exhibitions and panel discussions to all 11 of their events in 6 countries across the globe! You too can be a part of it all by entering this jam! Use this thread to find teams and discuss the jam, or join us on Discord: https://discord.gg/FA9YgEH
      Visit the Jam Page: http://jams.gamejolt.io/dreamhackaustinjam
    9. Qualcomm has released a new version of Snapdragon Profiler, the mobile performance profiling tool. The release includes various bug fixes, a new Analysis mode, new Trace metrics across various SoC subsystems, and an experimental feature allowing developers to use the latest profiler updates for graphics profiling without having to update mobile drivers. Here are the full release notes:
       Added Analysis mode to view statistics on captured data
       Added experimental feature allowing applications to use the latest profiling layer (Android N+)
       Added additional Trace metrics across various subsystems of the SoC
       Improved context reporting for graphics applications
       Improved metric calculations for OpenCL applications
       Improved render target information in Inspector view for Rendering Stages Trace captures
       Improved overhead performance of graphics profiling layer
       Fixed 'Used' resource filter not displaying some resources for some graphics applications
       Fixed crash when toggling Resources view 'Used' and 'Bound' buttons
       Fixed crash while performing a Snapshot capture with metrics enabled for some graphics applications
       Fixed issue where various Realtime metrics kept running after application exit on device
       Fixed various issues in Snapshot affecting EGL images
       Fixed issue affecting Snapshot captures that created empty columns in Data Explorer for per-drawcall metrics
       Learn more and download at https://developer.qualcomm.com/software/snapdragon-profiler/tools.
10. Bluezone Corporation is proud to present 'Alien Planet Ambiences and Sound Effects', a brand-new immersive SFX sound library available for immediate download. This inspiring downloadable sample pack offers a wide range of meticulously recorded nature ambience sound effects: immersive rainforest sounds, flowing water and mountain stream sounds, and blowing winds. It also includes textured night creatures, mysterious alien insects, and dark, disturbing animal/organic sounds. 'Alien Planet Ambiences and Sound Effects' has been designed to enhance science fiction, mystery, suspense, and fantasy video game and scoring projects. WAV files are provided at 24 Bit / 96 kHz. Like all Bluezone sound libraries, you can inject these inspirational elements into your commercial projects without having to worry about any additional licensing fees.
- Editor: Bluezone
- Reference: BC0247
- Delivery: Download link
- Download size: 984 MB
- Extracted size: 1010 MB
- Format: WAV
- Resolution: 24 Bit / 96 kHz
- Channel: Stereo
- License: Royalty free
- Total files: 113
- Total samples: 98 WAV
More information and download: Alien Planet Ambiences and Sound Effects

    Recent Articles and Tutorials

1. Automated builds are a pretty important tool in a game developer's toolbox. If you're only testing your Unreal-based game in the editor (even in standalone mode), you're in for a rude awakening when new bugs pop up in a shipping build that you've never encountered before. You also don't want to manually package your game from the editor every time you want to test said shipping build, or to distribute it to your testers (or Steam, for that matter). Unreal already provides a pretty robust build system, and it's very easy to use it in combination with build automation tools. My build system of choice is Gradle, since I use it pretty extensively in my backend Java and Scala work. It's pretty easy to learn, runs everywhere, and gives you a lot of powerful functionality right out of the gate. This won't be a Gradle tutorial, so you can familiarize yourself with how Gradle works via the documentation on their site.

Primarily, I use Gradle to manage a version file in my game's Git repository, which is compiled into the game so that I have version information in Blueprint and C++ logic. I use that version to prevent out-of-date clients from connecting to newer servers, and having the version compiled in makes it a little more difficult for malicious clients to spoof that build number, as opposed to having it stored in one of the INI files. I also use Gradle to automate uploading my client build to Steam via steamcmd.

Unreal's command line build tool is known as the Unreal Automation Tool. Any time you package from the editor, or use the Unreal Frontend Tool, you're using UAT on the back end. Epic provides handy scripts in the Engine/Build/BatchFiles directory to make use of UAT from the command line, namely RunUAT.bat. Since it's just a batch file, I can call it from a Gradle build script very easily. Here's the Gradle task snippet I use to package and archive my client:

task packageClientUAT(type: Exec) {
    workingDir = "[UnrealEngineDir]\\Engine\\Build\\BatchFiles"
    def projectDirSafe = project.projectDir.toString().replaceAll(/[\\]/) { m -> "\\\\" }
    def archiveDir = projectDirSafe + "\\\\Archive\\\\Client"
    def archiveDirFile = new File(archiveDir)
    if(!archiveDirFile.exists() && !archiveDirFile.mkdirs()) {
        throw new Exception("Could not create client archive directory.")
    }
    if(!new File(archiveDir + "\\\\WindowsClient").deleteDir()) {
        throw new Exception("Could not delete final client directory.")
    }
    commandLine "cmd", "/c", "RunUAT", "BuildCookRun",
        "-project=\"" + projectDirSafe + "\\\\[ProjectName].uproject\"",
        "-noP4", "-platform=Win64",
        "-clientconfig=Development", "-serverconfig=Development",
        "-cook", "-allmaps", "-build", "-stage",
        "-pak", "-archive", "-noeditor",
        "-archivedirectory=\"" + archiveDir + "\""
}

My build.gradle file is in my project's directory, alongside the uproject file. This snippet will spit the packaged client out into [ProjectDir]\Archive\Client. For the versioning, I have two files that Gradle directly modifies. The first, a simple text file, just has a number in it. In my [ProjectName]\Source\[ProjectName] folder, I have a [ProjectName]Build.txt file with the current build number in it.
Additionally, in that same folder, I have a C++ header file with the following in it:

#pragma once

#define [PROJECT]_MAJOR_VERSION 0
#define [PROJECT]_MINOR_VERSION 1
#define [PROJECT]_BUILD_NUMBER ###
#define [PROJECT]_BUILD_STAGE "Pre-Alpha"

Here's my Gradle task that increments the build number in that text file, and then replaces the value in the header file:

task incrementVersion {
    doLast {
        def version = 0
        def ProjectName = "[ProjectName]"
        def vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Build.txt")
        if(vfile.exists()) {
            String versionContents = vfile.text
            version = Integer.parseInt(versionContents)
        }
        version += 1
        vfile.text = version
        vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Version.h")
        if(vfile.exists()) {
            String pname = ProjectName.toUpperCase()
            String versionContents = vfile.text
            versionContents = versionContents.replaceAll(/_BUILD_NUMBER ([0-9]+)/) { m ->
                "_BUILD_NUMBER " + version
            }
            vfile.text = versionContents
        }
    }
}

I manually edit the major and minor versions and the build stage as needed, since they don't need to update with every build. You can include that header into any C++ file that needs to know the build number, and I also have a few static methods in my game's Blueprint static library that wrap them so I can get the version numbers in Blueprint. I also have some tasks for automatically checking those files into the Git repository and committing them:

task prepareVersion(type: Exec) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "reset"
}

task stageVersion(type: Exec, dependsOn: prepareVersion) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "add",
        project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Build.txt",
        project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Version.h"
}

task commitVersion(type: Exec, dependsOn: stageVersion) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "commit", "-m", "\"Incrementing [ProjectName] version\""
}

And here's the task I use to actually push it to Steam:

task pushBuildSteam(type: Exec) {
    doFirst {
        println "Pushing build to Steam..."
    }
    workingDir = "[SteamworksDir]\\sdk\\tools\\ContentBuilder"
    commandLine "cmd", "/c", "builder\\steamcmd.exe",
        "+set_steam_guard_code", "[steam_guard_code]",
        "+login", "\"[username]\"", "\"[password]\"",
        "+run_app_build", "..\\scripts\\[CorrectVDFFile].vdf",
        "+quit"
}

You can also spit out a generated VDF file with the build number in the build's description so that it'll show up in SteamPipe. I have a single Gradle task I run that increments the build number, checks in those version files, packages both the client and server, and then uploads the packaged client to Steam. Another great thing about Gradle is that Jenkins has a solid plugin for it, so you can use Jenkins to set up a nice continuous integration pipeline for your game to push builds out regularly, which you absolutely should do if you're working with a team.
2. If you are a software developer working in the video game industry and wondering what else you could do to improve the quality of your product or make the development process easier, and you don't use static analysis – it's just the right time to start doing so. You doubt that? OK, I'll try to convince you. And if you are just looking to see what coding mistakes are common with video-game and game-engine developers, then you're, again, at the right place: I have picked the most interesting ones for you.

Why you should use static analysis

Although video-game development includes a lot of steps, coding remains one of the basic ones. Even if you don't write thousands of lines of code, you have to use various tools whose quality determines how comfortable the process is and what the ultimate result will be. If you are a developer of such tools (such as game engines), this shouldn't sound new to you. Why is static analysis useful in software development in general? The main reasons are as follows:

- Bugs grow costlier and more difficult to fix over time. One of the principal advantages of static analysis is detecting bugs at early development stages (you can find an error as the code is being written). Therefore, by using static analysis, you could make the development process easier both for your coworkers and yourself, detecting and fixing lots of bugs before they become a headache.
- Static analysis tools can recognize a great variety of bug patterns (copy-paste, typos, incorrect use of functions, etc.).
- Static analysis is generally good at detecting those defects that defy dynamic analysis. However, the opposite is also true.
- Negative side effects of static analysis (such as false positives) are usually 'smoothed out' through means provided by the developers of powerful analyzers. These means include various mechanisms of warning suppression (individually, by pattern, and so on), switching off irrelevant diagnostics, and excluding files and folders from analysis. By properly tweaking the analyzer settings, you can reduce the amount of 'noise' greatly. As my colleague Andrey Karpov has shown in the article about the check of EFL Core Libraries, tweaking the settings helps cut down the number of false positives to 10-15% at most.

But it's all theory, and you are probably interested in real-life examples. Well then, I've got some.

Static analysis in Unreal Engine

If you have read this far, I assume you don't need me telling you about Unreal Engine or the Epic Games company – and if you don't hold these guys in high regard, I wonder whom you do. The PVS-Studio team has cooperated with Epic Games a few times to help them adopt static analysis in their project (Unreal Engine) and fix bugs and false positives issued by the analyzer. I'm sure both parties found this experience interesting and rewarding. One of the effects of this cooperation was adding a special flag into Unreal Engine allowing the developers to conveniently integrate static analysis into the build system of Unreal Engine projects. The idea is simple: the guys do care about the quality of their code and adopt various techniques available to maintain it, static analysis being one of them.

John Carmack on static analysis

John Carmack, one of the most renowned video-game developers, once called the adoption of static analysis one of his most important accomplishments as a programmer: "The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis."
The next time you hear someone say that static analysis is a tool for newbies, show them this quote. Carmack described his experience in this article, which I strongly recommend checking out – both for motivation and general knowledge.

Bugs found in video games and game engines with static analysis

One of the best ways to prove that static analysis is a useful method is probably through examples showing it in action. That's what the PVS-Studio team does while checking open-source projects. It's a practice that everyone benefits from:

- The project authors get a bug report and a chance to fix the defects. Ideally, it should be done in quite a different way, though: they should run the analyzer and check the warnings on their own rather than fix them relying on someone else's log or article. It matters, if only because the authors of articles might miss some important details or inadvertently focus on bugs that aren't very critical to the project.
- The analyzer developers can use the analysis results as the basis for improving the tool, as well as demonstrating its bug-detecting capabilities.
- The readers learn about bug patterns, gain experience, and get started with static analysis.

So, isn't that proof of the effectiveness of this approach?

Teams already using static analysis

While some are pondering introducing static analysis into their development process, others have long been using and benefiting from it! These are, among others, Rocksteady, Epic Games, ZeniMax Media, Oculus, Codemasters, and Wargaming (source).

Top 10 software bugs in the video-game industry

I should point out right off that this is not some ultimate top list, but simply bugs which were found by PVS-Studio in video games and game engines and which I found most interesting. As usual, I recommend trying to find the bug in each example on your own first and only then go on reading the warning and my comments. You'll enjoy the article more that way.

Tenth place

Source: Anomalies in X-Ray Engine

The tenth place is given to a bug in X-Ray Engine, employed by the S.T.A.L.K.E.R. game series. If you played them, you surely remember many of the funny (and not so funny) bugs they had. This is especially true for S.T.A.L.K.E.R.: Clear Sky, which was impossible to play without patches (I still remember the bug that 'killed' all my saves). The analysis revealed there were many bugs indeed. Here's one of them.

BOOL CActor::net_Spawn(CSE_Abstract* DC)
{
  ....
  m_States.empty();
  ....
}

PVS-Studio warning: V530 The return value of function 'empty' is required to be utilized.

The problem is quite simple: the programmer is not using the logical value returned by the empty method, which describes whether the container is empty or not. Since the expression contains nothing but the method call, I assume the programmer intended to clear the container but called the empty method instead of clear by mistake. You may argue that this bug is too plain for a Top-10 list, but that's the nice thing about it! Even though it looks straightforward to someone not involved in writing this code, 'plain' bugs like that still appear (and get caught) in various projects.

Ninth place

Source: Long-Awaited Check of CryEngine V

Going on with bugs in game engines. This time it's a code fragment from CryEngine V. The number of bugs I have encountered in games based on this engine was not as large as in games based on X-Ray Engine, but it turns out it has plenty of suspicious fragments too.

void CCryDXGLDeviceContext::
  OMGetBlendState(...., FLOAT BlendFactor[4], ....)
{
  CCryDXGLBlendState::ToInterface(ppBlendState, m_spBlendState);
  if ((*ppBlendState) != NULL)
    (*ppBlendState)->AddRef();
  BlendFactor[0] = m_auBlendFactor[0];
  BlendFactor[1] = m_auBlendFactor[1];
  BlendFactor[2] = m_auBlendFactor[2];
  BlendFactor[2] = m_auBlendFactor[3];
  *pSampleMask = m_uSampleMask;
}

PVS-Studio warning: V519 The 'BlendFactor[2]' variable is assigned values twice successively. Perhaps this is a mistake.

As we have mentioned many times in our articles, no one is safe from mistyping. Practice has also shown more than once that static analysis is very good at detecting copy-paste-related mistakes and typos. In the code above, the values of the m_auBlendFactor array are copied to the BlendFactor array, but the programmer made a mistake by writing BlendFactor[2] twice. As a result, the value at m_auBlendFactor[3] is written to BlendFactor[2], while the value at BlendFactor[3] remains unchanged.

Eighth place

Source: Unicorn in Space: Analyzing the Source Code of 'Space Engineers'

Let's change course a bit and take a look at some C# code. What we've got here is an example from the Space Engineers project, a 'sandbox' game about building and maintaining various structures in space. I haven't played it myself, but one guy said in the comments, "I'm not much surprised at the results". Well, we did manage to find some bugs worth mentioning, and here are two of them.

public void Init(string cueName)
{
  ....
  if (m_arcade.Hash == MyStringHash.NullOrEmpty &&
      m_realistic.Hash == MyStringHash.NullOrEmpty)
    MySandboxGame.Log.WriteLine(string.Format(
      "Could not find any sound for '{0}'", cueName));
  else
  {
    if (m_arcade.IsNull)
      string.Format(
        "Could not find arcade sound for '{0}'", cueName);
    if (m_realistic.IsNull)
      string.Format(
        "Could not find realistic sound for '{0}'", cueName);
  }
}

PVS-Studio warnings: V3010 The return value of function 'Format' is required to be utilized. V3010 The return value of function 'Format' is required to be utilized.

As you can see, it's a common problem, both in C++ code and C# code: programmers ignore methods' return values. The String.Format method forms the resulting string based on the format string and the objects to substitute, and then returns it. In the code above, the else branch contains two string.Format calls, but their return values are never used. It looks like the programmer intended to log these messages in the same way as in the then branch of the if statement, using the MySandboxGame.Log.WriteLine method.

Seventh place

Source: Analyzing the Quake III Arena GPL project

Did I tell you already that static analysis is good at detecting typos? Well, here's one more example.

void Terrain_AddMovePoint(....)
{
  ....
  x = ( v[ 0 ] - p->origin[ 0 ] ) / p->scale_x;
  y = ( v[ 1 ] - p->origin[ 1 ] ) / p->scale_x;
  ....
}

PVS-Studio warning: V537 Consider reviewing the correctness of 'scale_x' item's usage.

The variables x and y are assigned values, yet both expressions contain the p->scale_x subexpression, which doesn't look right. It seems the second subexpression should be p->scale_y instead.

Sixth place

Source: Checking the Unity C# Source Code

Unity Technologies recently made the code of their proprietary game engine, Unity, available to the public, so we couldn't ignore the event. The check revealed a lot of interesting code fragments; here's one of them:

public override bool IsValid()
{
  ....
  return base.IsValid() &&
         (pageSize >= 1 || pageSize <= 1000) &&
         totalFilters <= 10;
}

PVS-Studio warning: V3063 A part of conditional expression is always true if it is evaluated: pageSize <= 1000.

What we have here is an incorrect check of the range of pageSize. The programmer must have intended to check that the pageSize value was within the range [1; 1000] but made a sad mistake by typing the '||' operator instead of '&&'. The subexpression actually checks nothing.

Fifth place

Source: Discussing Errors in Unity3D's Open-Source Components

This place was given to a nice bug found in Unity3D's components. The article mentioned above was written a year prior to the release of Unity's source code, but there already were interesting defects to find there at the time.

public static CrawledMemorySnapshot Unpack(....)
{
  ....
  var result = new CrawledMemorySnapshot
  {
    ....
    staticFields = packedSnapshot.typeDescriptions
      .Where(t => t.staticFieldBytes != null & t.staticFieldBytes.Length > 0)
      .Select(t => UnpackStaticFields(t))
      .ToArray()
    ....
  };
  ....
}

PVS-Studio warning: V3080 Possible null dereference. Consider inspecting 't.staticFieldBytes'.

Note the lambda expression passed as an argument to the Where method. The code suggests that the typeDescriptions collection could contain elements whose staticFieldBytes member could be null – hence the check staticFieldBytes != null before accessing the Length property. However, the programmer mixed up the '&' and '&&' operators. It means that no matter the result of the left expression (true/false), the right one will also be evaluated, causing a NullReferenceException to be thrown when accessing the Length property if staticFieldBytes == null. Using the '&&' operator would help avoid this because the right expression won't be evaluated if staticFieldBytes == null. Although Unity was the only engine to hit this top list twice, that doesn't prevent enthusiasts from building wonderful games on it, including one about fighting bugs.

Fourth place

Source: Analysis of Godot Engine's Source Code

Sometimes we come across interesting cases that have to do with missing keywords. For example, an exception object is created but never used because the programmer forgot to add the throw keyword. Such errors are found both in C# projects and C++ projects. There was one missing keyword in Godot Engine as well.

Variant Variant::get(const Variant& p_index, bool *r_valid) const
{
  ....
  if (ie.type == InputEvent::ACTION) {
    if (str == "action") {
      valid = true;
      return ie.action.action;
    }
    else if (str == "pressed") {
      valid = true;
      ie.action.pressed;
    }
  }
  ....
}

PVS-Studio warning: V607 Ownerless expression 'ie.action.pressed'.

In this code fragment, it is obvious that the programmer wanted to return a certain value of the Variant type, depending on the values of ie.type and str. Yet only one of the return statements – return ie.action.action; – is written properly, while the other is missing the return keyword, which prevents the needed value from being returned and forces the method to keep executing.

Third place

Source: PVS-Studio: analyzing Doom 3 code

Now we've reached the Top-3 section. The third place is awarded to a small fragment of Doom 3's source code. As I already said, the fact that a bug may look straightforward to an outside observer and make you wonder how one could have made such a mistake at all shouldn't be confusing: there are actually all sorts of bugs to be found in the field...

void Sys_GetCurrentMemoryStatus( sysMemoryStats_t &stats )
{
  ....
  memset( &statex, sizeof( statex ), 0 );
  ....
}

PVS-Studio warning: V575 The 'memset' function processes '0' elements. Inspect the third argument.

To figure this error out, we should recall the signature of the memset function:

void* memset(void* dest, int ch, size_t count);

If you compare it with the call above, you'll notice that the last two arguments are swapped; as a result, some memory block that was meant to be cleared will stay unchanged.

Second place

The second place is taken by a bug found in the code of the Xenko game engine, written in C#.

Source: Catching Errors in the Xenko Game Engine

private static ImageDescription CreateDescription(TextureDimension dimension,
  int width, int height, int depth, ....)
{
  ....
}

public static Image New3D(int width, int height, int depth, ....)
{
  return new Image(CreateDescription(TextureDimension.Texture3D,
    width, width, depth, mipMapCount, format, 1),
    dataPointer, 0, null, false);
}

PVS-Studio warning: V3065 Parameter 'height' is not utilized inside method's body.

The programmer made a mistake when passing the arguments to the CreateDescription method. If you look at its signature, you'll see that the second, third, and fourth parameters are named width, height, and depth, respectively. But the call passes the arguments width, width, and depth. Looks strange, doesn't it? The analyzer, too, found it strange enough to point it out.

First place

Source: A Long-Awaited Check of Unreal Engine 4

This Top-10 list is led by a bug from Unreal Engine. Just as it was with the leader of "Top 10 Bugs in the C++ Projects of 2017", I knew this bug should be given first place the very moment I saw it.

bool VertInfluencedByActiveBone(
  FParticleEmitterInstance* Owner,
  USkeletalMeshComponent* InSkelMeshComponent,
  int32 InVertexIndex,
  int32* OutBoneIndex = NULL);

void UParticleModuleLocationSkelVertSurface::Spawn(....)
{
  ....
  int32 BoneIndex1, BoneIndex2, BoneIndex3;
  BoneIndex1 = BoneIndex2 = BoneIndex3 = INDEX_NONE;
  if(!VertInfluencedByActiveBone(
        Owner, SourceComponent, VertIndex[0], &BoneIndex1) &&
     !VertInfluencedByActiveBone(
        Owner, SourceComponent, VertIndex[1], &BoneIndex2) &&
     !VertInfluencedByActiveBone(
        Owner, SourceComponent, VertIndex[2]) &BoneIndex3)
  {
  ....
}

PVS-Studio warning: V564 The '&' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' operator.

I wouldn't be surprised if you read the warning, looked at the code, and wondered, "Well, where's the '&' used instead of '&&'?" But if we simplify the conditional expression of the if statement, keeping in mind that the last parameter of the VertInfluencedByActiveBone function has a default value, this will clear it all up:

if (!foo(....) && !foo(....) && !foo(....) & arg)

Take a close look at the last subexpression:

!VertInfluencedByActiveBone(Owner, SourceComponent, VertIndex[2]) &BoneIndex3

This parameter with the default value has messed things up: but for this value, the code would never have compiled at all. But since it's there, the code compiles successfully, and the bug blends in just as successfully. It's this suspicious fragment that the analyzer spotted – the infix operation '&' with a left operand of type bool and a right operand of type int32.

Conclusion

I hope I have convinced you that static analysis is a very useful tool when developing video games and game engines, and one more option to help you improve the quality of your code (and thus of the final product).
If you are a video game industry developer, you ought to tell your coworkers about static analysis and refer them to this article. Wondering where to start? Start with PVS-Studio.
3. Are you considering developing a mobile game? If you want to be successful, you should avoid making the most common mistakes. Trying to build a game without figuring out the right approach is a recipe for disaster. There are experienced developers like MyIsaak from Sweden, an expert in C# and Unity game development who frequently livestreams his Diablo III board game development process. The more you learn from professionals like him, who have gone through the process, the faster you can avoid making the common game development mistakes. Here are the top 5 game development mistakes to avoid.

1. Ignoring the target group

Creating a game without properly studying your target group is a huge barrier that will keep it from being downloaded and played. Who are you building the game for? What are their main interests? What activities do they like participating in? Can the target group afford the gaming app? Does your target audience use the iOS or Android operating system? Seeking answers to these questions and others can assist in correctly identifying your target group. Consequently, you can design its functionality around their preferences. Just like an ice cream vendor is likely to set up shop at the beach during summer, you should focus on consumers whose behaviors are likely to motivate them to play your game. For example, if you want to create a gun shooting game, you can target college-educated men in their 20s and 30s, while targeting other demographic groups secondarily.

2. Failure to study the competitors

To create a successful game that will increase positive reviews and retention, you should analyze the strengths and weaknesses of your competitors. Studying your competition will allow you to understand your capabilities to match or surpass the consumer demand for your mobile or web-based game. If you fail to do it, you will miss the opportunity to fill actual needs in the gaming industry and correct the mistakes made by the developers in your niche. You should ask questions like "What is their target audience?", "How many downloads does their gaming app receive per month?", and "What resources do they have?". Answering such questions will give you a good idea of the abilities of your competition, the feasibility of competing with them, and the kind of strategies to adopt to out-compete them. Importantly, instead of copying the strategies of your competitors, develop a game that is unique and provides added value to users.

3. Design failure

When building a mobile or web-based game, it's essential that you employ a unique art style and visually appealing design—without any unnecessary sophistication. People are attracted to games based on the user interface design and intuitiveness. So, instead of spending a lot of time trying to write elegant and complicated lines of code, take your time to provide a better design. No one will download a game because its code is beautiful. People download games to play them. And the design of the game plays a critical part in assisting them to make the download decision.

4. Trying to do everything

If you try to code, develop 3D models, create animations, and do voice-overs—all by yourself—then you are likely to create an unsuccessful game. The secret to succeeding is to complete tasks that align with your core competencies and outsource the rest of the work. Learn how to delegate work to other experts and save yourself the headaches. You should also avoid trying to reinvent the wheel.
Instead of trying to do everything by yourself, go for the robust tools available out there that can make your life easier. Trying to build something that is already provided by the open source community will consume a lot of your development time and make you feel frustrated. Furthermore, do not be the beta tester of your own game. If you ask someone else to do the beta testing, you'll get a useful outside perspective that will assist in discovering some hidden issues.

5. Having unrealistic expectations

Unrealistic expectations are very dangerous because they set your game development career up for failure. Do not put your expectations so high that you force things to work your way. For example, dreaming too big can make you include too many rewards in your game. As much as rewards are pivotal for improving engagement and keeping users motivated, gamers will not take you seriously if you incorporate rewards into every little achievement they make. Instead, you should select specific rewards for specific checkpoints; this way, the players will feel that they've reached major milestones.

Conclusion

The mistakes discussed in this article have made several game developers unsuccessful in their careers. So, be cautious so that you don't fall into the same traps. The best way to avoid making the common mistakes is by learning how to build games from the experts. Who knows? You could develop the next big game in the industry.
4. The Game Dev Loadout podcast (here and here) has shared their recent Cliff Harris interview with us. The original interview transcription is at Game Dev Loadout. You can also see more podcast interviews from Game Dev Loadout here on GameDev.net. Game designer, programmer, and one-man games business, Cliff Harris of Positech Games (@cliffski32 on GameDev.net) is behind strategy and simulation games such as Production Line, Democracy, and Gratuitous Space Battles. In this interview, Cliff talks about his journey in the game industry, emphasizes making sure you are taking advice from the right people, and explains why you need to invest in a great chair for yourself and your team.

How did you get started in the game industry?

I started programming as a kid when I was 11 on a tiny home computer, and then I kind of got out of computers and had all sorts of weird careers working on the stock market and in I.T. and boat building and playing the guitar. I taught myself C++ from a home study course on floppy disks, then I started making games, and that was 20 years ago, so it was before indie games were really a thing. I worked at Elixir Studios and Lionhead for 3 years, and then I quit that and have been full-time indie ever since.

What was the push that made you join the game industry?

At that time in the U.K., you couldn't get a job as a programmer unless you had a formal qualification as a programmer or you already had a job as a programmer. I always wanted to program video games, like a lot of people did, and it became possible with the internet that you could program games from home. It was a hobby that turned into a career and a proper business.

What is something we probably don't know about AI mechanics that we should know?

The thing that people don't realize about AI is that it's very easy to make something seem alive with a few lines of code. Like, I have two cats and they're both really predictable. Sometimes, the way my cats behave, I think you're just a few thousand lines of C++. When you break it down, it's not that difficult to program NPCs in games that behave and move in quite a natural way. It's funny because some stuff that you consider easy is almost impossible. So like finding your way out of a maze: we're pretty good at that, but computers are rubbish at it. If you want to program an NPC with text so that it seems to converse with you in a way that doesn't seem too scripted, that's not too hard. The reason people tend to encounter rubbish NPC AI in games is due to an obsession with having voice acting for everything. It's easy to program an NPC that can talk to you about 100 different topics with thousands of different variations that sound fluent and responsive. But if you want to record a thousand lines of dialogue and you have got some big-name actor, then you can't afford it. So that's actually the bottleneck. The other possibility is translation. The problem is, the minute you translate it to German, it's absolute chaos. The sentence structure is different, and you obviously have to pay a little bit to translate the text into German. And if you have got like 20 languages and there are tons of different phrases in each language, suddenly that's what becomes expensive and difficult. But as you program, it's fairly easy and fun.

Production Line

Would you suggest new developers stick to text or voice acting?

If you want to capture the whole audience and to have a successful indie game, you probably are going to want it in like 10 different languages.
So to get it professionally done is probably like 10 cents a word. So every time you type one line of dialogue, it's going to cost you like 50 dollars. If you want to record the audio and then you want that in 10 languages, that's going to cost you even more. And does it really add to the experience? I'm not sure it does. And the other thing is that I am very impatient and I value time a lot, so I'd rather read the dialogue personally than hear it.

I want to talk about the worst moment of your career, that one moment that's still vivid in your mind.

There are a few. I have left a game company in a very heated argument, which is funny because I recently bumped into the guy I had that argument with and he's fine, and I am fine, and we get along, but it was just really stressful. I have had bugs that are really bad. I had a bug that could potentially destroy someone's computer, and someone reported a problem to me and I was like, "No, I am sure this is not a bug of mine," and I looked into it and I did a lot of experimentation to get this bug to trigger on my PC. I thought, "Oh my God, I cannot believe I made this mistake." Yeah, I remember frantically coding a patch for it and putting it out immediately, and no one else got affected by it. But I was really worried. It was a very rare circumstance, but it would delete stuff on your hard drive. It started happening to me, but people had anti-virus software, so it was like a flag going, "Hey, what are you doing?" That's a reputation-destroying bug: the player goes to delete some content intentionally, and under certain circumstances the file name that would be passed in would be empty, and given the structure of that code, it would then start to delete everything. I had to add loads of checks in all of the areas of my game that can ever delete a file. There's so much code wrapped around this that I can never ever get into that position again, because that was bad.

From the heated argument story, how did you handle the situation and what was the outcome?

Well, I handled it badly; actually, everyone handled it badly. I mean, I had been in this company too long and I was very frustrated and sort of wanted to leave, but I had stayed on the assumption that things would change. I just lost it, I got into a very strong argument, and someone there stormed off. And that was it, that was how I left. Looking back on it, I stayed in the company too long. I am very indie, I don't like working for other people. I am not very easy to employ because I am quite outspoken and maybe not massively respectful of authority. So it was kind of a bad fit. To be honest, I have stormed out of another job as well.

So what should we take away from that experience?

The game industry is a lot like the music industry, because almost everyone in the music industry makes no money and some people make piles of money. And then there are many more people who want to be in games than the industry will support, which is a lot like music. So you end up with very stressed people who are doing what they love doing; they are very passionate about it and very intensely into it, and they are working very hard. It is basically a recipe for everyone to get into huge fights and hate each other. It's a bit better now, I think, but there were a lot of companies where people would work very long hours and they would work very late, and they go beyond what you would normally put into a job. They aren't massively well paid at any point.
Also, the game industry attracts people like me who are fairly introverted, so we have good technical skills but not very good people skills. You put all that together and it is going to be tough. But I don't think people hold grudges. I mean, I have had two huge arguments with people, some famous and some not, and always ended up getting along with them, because ultimately very few people come into the industry for money, because there isn't much. So generally you realize that everyone is here just to try to make cool games and work on great stuff, no matter how much we kind of get on each other's nerves.

What are bad recommendations that you hear in your profession?

There are a lot of people who will give you advice. Like if you go on to Reddit and sort of say, "Oh, I am thinking of making this game, I am thinking of porting to this platform," you will get a massive amount of advice and almost all of it will be rubbish. That's because the people who are always on Facebook, Twitter, and Reddit just sit there waiting to give advice to others. I am never like, "I have to go there and post and see if anyone is asking about something I know about." I've read a huge amount of stuff that says there is no point in advertising your indie game. Some of them might say, "I have spent like $100 on Facebook Ads, I didn't notice any difference in the number of downloads of my game," and what you can take from that is that one person who spent $100 did not see a direct difference. I've spent $265,000 on Facebook Ads over the years, so as you can imagine, I am pretty convinced they work. But what I am saying is that I have literally a thousand times more data points on that issue. So if you see me talking about the pros and cons of Facebook Ads, well, I know what I am talking about. If you hear me talking about whether Android or iPhone is a better platform, then just slap me and tell me, "Cliff, you have no idea, you don't know anything about it," because I have never made a mobile game. And I think that's the most important thing: you have to know whose advice you are taking and on what basis. We all have an intuition about games and what should work and what shouldn't work, and often our intuition is wrong. I honestly think that free-to-play should not work, but it clearly does. I think that there is no way you can run an entire games business based on selling virtual hats; I know that cannot possibly work. But I know it does. So you have to listen to specific people on specific issues.

What is one of the best investments you have ever made? It could be an investment in time, energy, or money.

Okay, I'll give you two: time and money. The time thing is learning how to code my own game engine. I learned to code everything like graphics, sound, and whatever from scratch because I had to. It gives you a big insight into performance and why your game may be slow. I get to code very fast stuff when I need it. Also, I am not relying on anything else. I am not paying any money to Unity, and if they update Unity or Unreal and it breaks everything, obviously I don't care because I am not using it. So that's given me an independence that's very helpful. Obviously, it takes a lot of time.

Gratuitous Space Battles

The physical thing, which I have told a lot of people about over the years, is this chair that I'm sitting in. If you've got a tech startup and you have coders and you have money, then buy these Herman Miller Aeron chairs for everyone. It was 800 pounds, which is a lot, $1100.
I am fully aware that it's a crazy amount of money. But I sit on it ten hours a day, and they last forever. It affects your health and mood. It really is a good investment, even if you think it's kind of over the top and unnecessary. Don't buy a $20 chair from a cheap shop. Make some sort of investment in your comfort, because in the end you spend a lot of time sitting in front of the keyboard, and it's so much better for you to be comfortable when you do that.

BETA PHASE: Rapid Fire Questions

What was holding you back from joining the game industry?

I don't know. Nothing would stop me right now, but back before I joined, the industry was tiny. It was not something people did. Nobody knew anyone in the games industry, but now nothing would hold me back. Back then it was just like, "That's not a real job."

What's the personal habit that contributes to your success?

Getting out of bed early. I was a real workaholic. I would be working at my desk by 8 o'clock, which doesn't sound that early, but for a computer programmer it is, because then we work till like 8 o'clock in the evening. Just learning how to get out of bed and go straight to work without messing around, that's the best thing.

What's the best piece of advice you've ever received?

Ask for more money. Most of the time you don't get it, but occasionally you do, and it's just like free money and you're like, "Hey, they weren't going to give me that unless I said it." It's really awkward, but do it.

What's a great marketing tip to make yourself and your game stand out?

Put faces in your games. Read into neuroscience: a disproportionate amount of your brain is dedicated to looking for faces and looking for emotions in faces, and if you have three images of games and one of them has a face in the image, that is the one everyone looks at first.

What resources should we game developers use to get started today?

If you are technically minded and you want to be a programmer more than anything else, then buy some good C++ books and learn how to code everything from scratch. If you're not, then use Unity, but don't put off the idea of coding from the ground up; it's very valuable.

Imagine you woke up the next morning in a brand new world and you knew no one. You still have all the experience and knowledge you currently have today, your food and shelter are taken care of, and you have a laptop. What would you do, step by step, on the path to join and become successful in the game industry?

I would make a PC strategy game and sell it on Steam. It would be 2D, top down. I would find an artist to do a revenue share on it, and I would do all my own marketing and game design and code it from scratch; I probably wouldn't even need Unity to do that, and it might even be easier. That's the safest and best route to actually making a game that will make money.

You can listen to the entire podcast and more interviews with developers and others in the industry at Game Dev Loadout.
5. I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but since I've already worked on Team Fortress 2 maps in the past, I was already familiar with navigation meshes (navmeshes) and their capabilities.

Why Not Waypoints?

As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory that became faster and cheaper. In other words, there was a shift from efficiency to flexibility. In a way, navigation meshes are the evolution of waypoint networks, because they fulfill the same need but in a different way. One of the advantages of using a navigation mesh is that an agent can go anywhere within a cell as long as the cell is convex (that is essentially the definition of convexity). It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be used by many types of agents of different sizes, rather than having many waypoint networks for agents of different sizes. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network that has enough points to cover the navigation mesh).

The Navigation Mesh Graph

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to each other through shared edges (called portal edges). In a navigation mesh, each cell can contain information about itself. For example, a cell may be labeled as toxic, so that only units capable of resisting the toxicity can move across it. Personally, because of my experience, I view navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice

A navigation mesh implementation is actually three algorithms:

- A graph navigation algorithm
- A string pulling algorithm
- And a steering/path-smoothing algorithm

In our case, we used A*, the simple stupid funnel algorithm, and a traditional steering algorithm that is still in development.

Finding our cells

Before doing any graph searches, we need to find two things:

- Our starting cell
- Our destination cell

For example, let's use this navigation mesh: in this navigation mesh, every edge that is shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination. Because we already have our starting point and our destination point, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells. In practice, we do a simple projection of both our starting and destination points onto the normal of their respective cells. Before snapping a projected point, we must first know whether that projected point is outside its cell. We do this by finding the difference between the area of the cell and the sum of the areas of the triangles formed by that point and each edge of the cell: if the latter is noticeably larger than the former, the point is outside its cell.
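As a rough illustration of that area test, here is a minimal sketch. It assumes jMonkeyEngine's Vector3f (which the article's own snippets use) and a hypothetical list of cell vertices in winding order; the class, method names, and epsilon are ours, not the project's actual code:

import com.jme3.math.Vector3f;
import java.util.List;

public class CellMath {

    // Area of the triangle (a, b, c), from the cross product of two edge vectors.
    static float triangleArea(Vector3f a, Vector3f b, Vector3f c) {
        return b.subtract(a).cross(c.subtract(a)).length() * 0.5f;
    }

    // Area of a convex polygon, fanned out from its first vertex.
    static float polygonArea(List<Vector3f> verts) {
        float area = 0f;
        for (int i = 1; i < verts.size() - 1; i++) {
            area += triangleArea(verts.get(0), verts.get(i), verts.get(i + 1));
        }
        return area;
    }

    // A point in the cell's plane lies inside the convex cell exactly when the
    // triangles formed by the point and each edge add up to the cell's own area;
    // outside the cell, the fanned sum is strictly larger.
    static boolean isInsideCell(List<Vector3f> verts, Vector3f point, float epsilon) {
        float fanned = 0f;
        for (int i = 0; i < verts.size(); i++) {
            Vector3f a = verts.get(i);
            Vector3f b = verts.get((i + 1) % verts.size());
            fanned += triangleArea(a, b, point);
        }
        return fanned <= polygonArea(verts) + epsilon;
    }
}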
The snapping then simply consists of interpolating between the vertices of the edge of the cell closest to the projected point. In terms of code, we do this:

Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
Vector3f returnedVector3f = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8, respectively.

Graph Search Algorithm

A navigation mesh is actually a 2D grid of unknown or infinite size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use 3D navigation meshes, like games with flying AI, but in our case it's a simple grid. For this reason, the A* algorithm is probably the right solution. We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know how our navigation mesh will be used, so going with something more generic can have its benefits...

A* works by assigning a cost and a heuristic to a cell. The closer the cell is to our destination, the less expensive it is. The heuristic is calculated similarly, but we also take into account the heuristic of the previous cell. This means that the longer a path is, the greater the resulting heuristic will be, and the more likely it becomes that this path is not an optimal one. We begin the algorithm by traversing the connections of each of the neighboring cells of the current cell until we arrive at the end cell, doing a sort of exploration/filling. Each cell begins with an infinite heuristic, but as we explore the mesh, it's updated according to the information we learn. In the beginning, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it. We keep a queue of cells ordered by heuristic, so that the next cell to use as the current cell is the best candidate for an optimal path. When a cell is being processed, it is moved from that queue into another one that contains the closed cells. While continuing to explore, we also keep a reference to the connection used to move from the current cell to its neighbor. This will be useful later. We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path. A* is a very popular algorithm, and pseudocode for it can easily be found; even Wikipedia has a version that is easy to understand. In our example, we find that this is our path: And here are highlighted (in pink) the traversed connections:
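Since the article doesn't include its actual A* code, here is a minimal sketch of the exploration step just described. The NavCell type (neighbor links plus a precomputed center) and the straight-line distance heuristic are our own illustrative assumptions:

import com.jme3.math.Vector3f;
import java.util.*;

public class NavAStar {

    // Hypothetical navmesh cell: a convex polygon with a center and neighbor links.
    static class NavCell {
        Vector3f center;
        List<NavCell> neighbors = new ArrayList<>();
    }

    static List<NavCell> findPath(NavCell start, NavCell goal) {
        Map<NavCell, Float> gScore = new HashMap<>();     // cheapest known cost to reach each cell
        Map<NavCell, NavCell> cameFrom = new HashMap<>(); // connection used to reach each cell
        // Open queue ordered by f = g + h, so the most promising cell is polled first.
        PriorityQueue<NavCell> open = new PriorityQueue<>(
            Comparator.comparingDouble((NavCell c) -> gScore.get(c) + c.center.distance(goal.center)));
        Set<NavCell> closed = new HashSet<>();

        gScore.put(start, 0f); // the agent is already inside the starting cell
        open.add(start);

        while (!open.isEmpty()) {
            NavCell current = open.poll();
            if (current == goal) {
                // "Reel" back up to the starting cell to recover the path.
                LinkedList<NavCell> path = new LinkedList<>();
                for (NavCell c = goal; c != null; c = cameFrom.get(c)) {
                    path.addFirst(c);
                }
                return path;
            }
            closed.add(current);
            for (NavCell neighbor : current.neighbors) {
                if (closed.contains(neighbor)) continue;
                float tentative = gScore.get(current) + current.center.distance(neighbor.center);
                if (tentative < gScore.getOrDefault(neighbor, Float.POSITIVE_INFINITY)) {
                    gScore.put(neighbor, tentative);
                    cameFrom.put(neighbor, current);
                    open.remove(neighbor); // re-insert so the queue re-evaluates its priority
                    open.add(neighbor);
                }
            }
        }
        return Collections.emptyList(); // no path between the two cells
    }
}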
The String Pulling Algorithm

String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells that describes an optimal path, we have to find a queue of points that an AI agent can travel to. This is where string pulling is needed. String pulling is in fact not linked to characters at all; it is rather a metaphor. Imagine a cross, and say that you wrap a silk thread around this cross and put tension on it. You will find that the string does not follow the inner corners of the cross, but rather goes from outer corner to outer corner. This is precisely what we're doing, but with a string that goes from one point to another.

There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple. To put it simply (no pun intended), we create a funnel that checks each time whether the next point is in the funnel or not. The funnel is composed of 3 points: a central apex, a left point (called the left apex), and a right point (called the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking). When a point is in the funnel, we continue the algorithm with the other side. If the point is outside the funnel, depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to a list of final waypoints. (A compact sketch of this funnel test appears at the end of this article.) The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. We just added an if for the moment, but we could come back later and optimize it. In our case, the funnel algorithm gives this path:

The Steering Algorithm

Now that we have a list of waypoints, we could simply run our character from point to point. But if there were walls in our geometry, then Buddy would run right into a corner wall. He wouldn't be able to reach his destination because he isn't small enough to avoid the corner walls. That's the role of the steering algorithm. Our algorithm is still in heavy development, but its main gist is that we check whether the next position of the agent is still on the navigation mesh. If it isn't, we change the agent's direction so that it doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use that at all... We relied on this good document to program the steering algorithm. It's a 1999 document, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to have a cyclic grid with orthogonal cells (i.e., cells on the walls and ceiling), but it looked like A* wasn't intended to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we need 3D cells for this kind of navigation mesh; otherwise, the heuristic value of each cell can change depending on the actual 3D length between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes, and we were able to move an AI agent through our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.
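As promised above, here is a compact sketch of the funnel test. It follows the widely published Simple Stupid Funnel pseudocode rather than the article's actual source; the Portal type and the triarea2 helper (a 2D signed area on the X/Z plane) are our own illustrative assumptions about the data layout:

import com.jme3.math.Vector3f;
import java.util.ArrayList;
import java.util.List;

public class FunnelSketch {

    // Hypothetical portal edge: the left and right vertices of a shared cell edge.
    // By convention the first portal duplicates the start point (left == right == start)
    // and the last portal duplicates the destination point.
    static class Portal {
        Vector3f left, right;
        Portal(Vector3f l, Vector3f r) { left = l; right = r; }
    }

    // Twice the signed area of triangle (a, b, c), projected onto the X/Z plane.
    static float triarea2(Vector3f a, Vector3f b, Vector3f c) {
        return (c.x - a.x) * (b.z - a.z) - (b.x - a.x) * (c.z - a.z);
    }

    // Simple Stupid Funnel: pull the string tight through the portal list.
    static List<Vector3f> stringPull(List<Portal> portals) {
        List<Vector3f> pts = new ArrayList<>();
        Vector3f apex = portals.get(0).left;
        Vector3f left = portals.get(0).left;
        Vector3f right = portals.get(0).right;
        int apexIndex = 0, leftIndex = 0, rightIndex = 0;
        pts.add(apex);

        for (int i = 1; i < portals.size(); i++) {
            Vector3f pLeft = portals.get(i).left;
            Vector3f pRight = portals.get(i).right;

            // Try to narrow the right side of the funnel.
            if (triarea2(apex, right, pRight) <= 0f) {
                if (apex.equals(right) || triarea2(apex, left, pRight) > 0f) {
                    right = pRight;
                    rightIndex = i;
                } else {
                    // Right crossed over left: the left apex becomes a path corner.
                    pts.add(left);
                    apex = left;
                    apexIndex = leftIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex; // restart the scan just after the new apex
                    continue;
                }
            }
            // Mirror image for the left side of the funnel.
            if (triarea2(apex, left, pLeft) >= 0f) {
                if (apex.equals(left) || triarea2(apex, right, pLeft) < 0f) {
                    left = pLeft;
                    leftIndex = i;
                } else {
                    pts.add(right);
                    apex = right;
                    apexIndex = rightIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;
                    continue;
                }
            }
        }
        // The destination point is duplicated in the last portal.
        pts.add(portals.get(portals.size() - 1).left);
        return pts;
    }
}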
6. As DMarket platform development continues, we would like to share a few case studies regarding the newest functionality on the platform. With these case studies, we would like to illuminate our development process, user requirements gathering and analysis, and much more. The first case study we're going to share is "DMarket Wallet Development": how, when, and why we decided to implement functionality which improved virtual item and DMarket Coin collection and transfer. DMarket cares about every user, no matter how big or small the user group is. And that's why we recently updated our virtual item purchase rules, bringing a brand new "DMarket Wallet" feature to our users. So let's take a retrospective look and find out what challenges this feature brought to the DMarket team and how these challenges were met.

DMarket and Blockchain Virtual Items Trading Rules

Within the first major release of the DMarket platform, we provided you with a wide range of possibilities and options: Steam account connection within the user profile, confirmation of account and device ownership via email for enhanced security, DMarket Coins and DMarket Tokens exchanging, and transactions with intermediaries on blockchain within our very own blockchain system called "Blockchain Explorer". And well, regarding blockchain... While it has totally proved itself as a working solution, we were having some issues with malefactors, as many of you may already know. DMarket specialists conducted an investigation which resulted in a perfect solution: we found out that a few users created bots to buy our Founder's Mark, a limited special edition memorabilia to commemorate the launch of the platform, for lower prices and then sell them at higher prices. Sure enough, there was no chance left for regular users. A month ago we fixed the issue, to our great relief. We received real feedback from the community, a real proof of concept. The whole DMarket ecosystem turned out to be truly resilient, proving all our detractors wrong.

And while we got that proof, we also studied how users feel about the platform UX, since blockchain requires additional effort when buying or selling an item. With our first release of the demo platform, we let users sign transactions with a private key from their wallet. In terms of user experience, that practice wasn't too good. Just think about it: you would have to enter the private key each time you wanted to buy or sell something. Every transaction required a lot of actions on the user's side, which is unacceptable for a great and user-friendly product like ours. That's why we decided to move away from that approach and create a single unified "wallet" on the DMarket side, in order to store all the DMarket Coins and virtual items on our side and let users buy or sell stuff with a few clicks instead of the previous lengthy process. In other words, every user received a public key which serves as a destination address, while private keys were held on the DMarket side in order to avoid transaction signing each time something is traded. This improved usability, and most of our users were satisfied with the update. But not all of them...

You Can't Make Everyone Happy... Can You?

By removing the transaction signing requirement, we made most of our users happy. Of course, within a large number of happy people, we can always find those who are worried about a wallet they hold only the public key for. When you don't hold your own private key, it may disturb you a little bit.
Sure, DMarket is a trusted company, but there are people who can't even trust themselves sometimes. So what were we going to do? Ignore them? Roll back to the previous way of buying virtual items and coins? No! We decided to go another way. Within the briefest timeline, the DMarket team decided to provide a completely new feature in Blockchain Explorer: wallet creation functionality. With this functionality, you can create a wallet in 2 clicks, getting both private and public keys and thereby ensuring the safety of your items and coins. Basically, we separated wallets on the marketplace from wallets on our blockchain, in order to keep the UX great while reassuring the small group of users who needed the option of keeping everything in a separate wallet. You can go shopping on DMarket with no additional effort of signing every transaction, and at the same time, you are free to transfer all your goods to your very own wallet any time you feel the need. Isn't it cool?

Outcome

After implementing the separate DMarket wallet creation feature, we killed two birds with one stone and made everyone satisfied. It wasn't easy, since we had a very limited amount of time, but the feature is there: if you need it, you can try it. Moreover, creating a DMarket wallet within Blockchain Explorer lets you manage your wallet even on mobile devices, because along with the private and public keys you also download a 12-word mnemonic phrase to restore your wallet on any mobile device, from smartphone to tablet. But that's another story -- a story about the DMarket Wallet application, which has recently become available for Android users on Google Play. Stay tuned for more case studies, and don't forget to check out our website and gain firsthand experience with in-game items trading!
7. I got into a conversation a while ago with some fellow game artists, and the prospect of signing bonuses was brought up. Out of the group, I was the only one who had negotiated any sort of sign-on bonus or payment above and beyond base compensation. My goal with this article, and possibly others, is to inform and motivate other artists to work on this aspect of their "portfolio" and start treating their career as a business.

What is a Sign-On Bonus?

Quite simply, a sign-on bonus is a sum of money offered to a prospective candidate in order to get them to join. It is quite common in other industries but rarely seen in games except at the executive level. Unfortunately, conversations centered around artist employment usually stop at base compensation, quite literally leaving money on the table.

Why Ask for a Sign-On Bonus?

There are many reasons to ask for a sign-on bonus. In my experience, it has been to compensate for some delta between how much I need and how much the company is offering. For example, a company has offered a candidate a position paying $50k/year. However, research indicates that the candidate requires $60k/year in order to keep in line with their personal financial requirements and long-term goals. Instead of turning down the offer wholesale, they may ask for a $10k sign-on bonus with actionable terms to partially bridge the gap. Whatever the reason may be, the ask needs to be reasonable. Would you like a $100k sign-on bonus? Of course! Should you ask for it? Probably not. A sign-on bonus is a tool to reduce risk, not a tool to help you buy a shiny new sports car.

Aspects to Consider

Before one goes and asks for a huge sum of money, there are some aspects of sign-on bonus negotiations the candidate needs to keep in mind.

- The more experience you have, the more leverage you have to negotiate.
- You must have confidence in your role as an employee.
- You must have done your research. This includes knowing your personal financial goals and how the prospective offer changes, influences, or diminishes those goals.

To the first point: the more experience one has, the better. If the candidate is a junior employee (roughly defined as having less than 3 years of industry experience) or looking for their first job in the industry, it is highly unlikely that a company will entertain a conversation about sign-on bonuses. Getting into the industry is highly competitive, and there is very little motivation for a company to pay a sign-on bonus to one candidate when there are dozens (or, in some cases, hundreds) of other candidates who will jump at the first offer.

Additionally, the candidate must have confidence in succeeding at the desired role in the company. They have to know that they can handle the day-to-day responsibilities as well as any extra demands that may come up during production. The company needs to be convinced of their ability to be a team player and, as a result, be willing to put a little extra money down to hire them. In other words, the candidate needs to reduce the company's risk in hiring them enough that an extra payment or two is negligible.

And finally, they must know where they sit financially and where they want to be in the short, mid, and long term. Having this information at hand is essential to the negotiation process.

The Role Risk Plays in Employment

The interviewing process is a tricky one for all parties involved, and it revolves around the idea of risk. Is this candidate low-risk or high-risk?
The risk level depends on a number of factors: portfolio quality, experience, soft skills, etc. Were you late for the interview? Your risk to the company just went up. Did you bring additional portfolio materials that were not online? Your risk just went down, and you became more hireable. If a candidate has an offer in hand, then the company sees enough potential to get a return on their investment with as little risk as possible. At this point, the company is confident in the candidate's ability as an employee (i.e. low risk) and is willing to give them money in return for that ability.

Asking for the Sign-On Bonus

So what now? The candidate has gone through the interview process, and the company has offered them a position and base compensation. Unfortunately, the offer falls below expectations. Here is where the knowledge and research of the position and of personal financial goals come in. The candidate has to know what their thresholds and limits are. If they require $60k/year and the company is offering $50k, how do they ask for the bonus?

Once again, it comes down to risk. Here is the point to remember: risk is not one-sided. The candidate takes on risk by changing companies as well. The candidate has to leverage the sign-on bonus as a way to reduce risk for both parties. Here is the important part: a sign-on bonus reduces the company's risk because they are not committing to an increased salary, and bonus payouts can be staggered and have terms attached to them. The sign-on bonus reduces the candidate's risk because it bridges the gap between the offered compensation and their personal financial requirements. If the sign-on bonus is reasonable and the company has the finances (explained further below), it is a win-win for both parties, and hopefully the beginning of a profitable business relationship.

A Bit about Finances

First off, I am not a business accountant, nor have I managed finances for a business. I am sure that it is much more complicated than my example below, and there are a lot of considerations to take into account. In my experience, however, I do know that base compensation (i.e. salary) will generally fall into a different line-item category on the financial books than a bonus payout. When companies determine how many open spots they have, it is usually done by department, with inter-departmental salary caps. For a simplified example: an environment department's total salary cap is $500k/year. They have 9 artists being paid $50k/year, leaving $50k/year remaining for the 10th member of the team. Remember the example I gave earlier asking for $60k/year? The company cannot offer that salary because it breaks the departmental cap. However, since bonuses typically do not affect departmental caps, the company can pull from a different pool of money without increasing their risk by committing to a higher salary.

Sweetening the Deal

Coming right out of the gate and asking for an upfront payment might be too aggressive a play (i.e. high risk for the company). One way around this is to attach terms to the bonus. What does this mean? Take the situation above. A candidate has an offer for $50k/year but would like a bit more. If, through the course of discussing compensation, they get the sense that $10k upfront is too high, they can offer to break up the payments based on terms.
For example, a counterpoint to the initial base compensation offer could look like this:

- $50k/year salary
- $5k bonus payout #1 after 30 days of successful employment
- $5k bonus payout #2 after 365 days (or any length of time) of successful employment

In this example, the candidate is guaranteed at least $55k/year in total compensation for the first 2 years. If they factor in a standard 3% cost-of-living raise, the first 3 years of employment look like this:

Year 0-1 = $55,000 ($50,000 + $5,000 payout #1)
Year 1-2 = $56,500 (($50,000 x 1.03) + $5,000 payout #2)
Year 2-3 = $53,045 ($51,500 x 1.03)

(A tiny script for checking this arithmetic appears at the end of the article.) Now it might not be the $60k/year they had in mind, but it is a great compromise to keep both parties comfortable.

If the Company Says Yes

Great news! The company said yes! What now? Personally, I always request at least a full 24 hours to crunch the final numbers. In the past, I've requested up to a week for full consideration. Even if you know you will say yes, doing due diligence with your finances one last time is always good practice. Plug the numbers into a spreadsheet, look at your bills and expenses again, and review the whole offer (base compensation, bonus, time off/sick leave, medical/dental/vision, etc.). Discuss the offer with your significant other as well. You will see the offer in a different light when you wake up, so make sure you are not rushing into a situation you will regret.

If the Company Says No

If the company says no, then you have a difficult decision to make. Request time to review the offer and crunch the numbers. If it is a lateral move (same position, different company), then you have to ask whether the switch is worth it. Only due diligence will offer that insight, and you have to give yourself enough time to let those insights arrive. You might find yourself accepting the new position due to other, non-financial reasons (which could be a whole separate article!).

Conclusion/Final Thoughts

When it comes to negotiating during the interview process, it is very easy to take what you can get and run. You might fear that by asking for more, you will disqualify yourself from the position. Keep in mind that the offer has already been extended to you, and a company will not rescind it simply because you came back with a counterpoint. Negotiations are expected at this stage, and by putting forth a creative compromise, your first impression is that of someone who conducts themselves in a professional manner. Also keep in mind that negotiations do not always go well. There are countless factors that influence whether or not someone gets a sign-on bonus. Sometimes it all comes down to being at the right place at the right time. Just make sure you do your due diligence and are ready when the opportunity presents itself. Hope this helps!
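As a footnote, the schedule above is easy to script when you're comparing offers. Here is a throwaway C# sketch of ours; the salary, raise, and bonus figures are just the example numbers from this article:

using System;

// Prints total yearly compensation for a base salary with an annual raise
// and one-off bonus payouts (bonuses[i] lands in year i).
static void ProjectCompensation(double baseSalary, double raise,
                                double[] bonuses, int years)
{
    double salary = baseSalary;
    for (int year = 0; year < years; year++)
    {
        double bonus = year < bonuses.Length ? bonuses[year] : 0.0;
        Console.WriteLine($"Year {year}-{year + 1}: {salary + bonus:C0}");
        salary *= 1.0 + raise;   // e.g. a 3% cost-of-living raise
    }
}

// ProjectCompensation(50000, 0.03, new[] { 5000.0, 5000.0 }, 3);
// -> Year 0-1: $55,000   Year 1-2: $56,500   Year 2-3: $53,045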
8. Recently, a long-awaited event happened: Unity Technologies uploaded the C# source code of the game engine to GitHub, available for free download. The code of the engine and the editor is available. Of course, we couldn't pass it up, especially since lately we haven't written many articles about checking C# projects. Unity allows the provided sources to be used for informational purposes only, and that's exactly how we'll use them. Let's try out the latest version, PVS-Studio 6.23, on the Unity code.

Introduction

Previously we wrote an article about checking Unity. At that time, not as much C# code was available for analysis: only some components, libraries, and usage examples. However, the author of the article managed to find quite interesting bugs. How did Unity please us this time? I'm saying "please" and hope not to offend the authors of the project. Especially since the amount of Unity C# source code presented on GitHub is about 400 thousand lines (excluding empty ones) in 2058 files with the "cs" extension. It's a lot, and the analyzer had quite a considerable scope to cover.

Now about the results. Before the analysis, I slightly simplified the work by enabling display of the found bugs according to the CWE classification. I also activated the suppression mechanism for warnings of the third level of certainty (Low). These settings are available in the drop-down menu of PVS-Studio in the Visual Studio development environment, and in the analyzer's parameters. Having set the low-certainty warnings aside, I ran the analysis of the Unity source code. As a result, I got 181 warnings of the first level of certainty (High) and 506 warnings of the second level of certainty (Medium).

I have not studied absolutely all the warnings, because there were quite a lot of them. Developers or enthusiasts can easily conduct an in-depth analysis by testing Unity themselves. To do this, PVS-Studio provides free trial and free usage modes. Companies can also buy our product and get quick and detailed support along with the license. Judging by the fact that I immediately managed to find a couple of real bugs in practically every group of warnings within one or two attempts, there are a lot of them in Unity. And yes, they are diverse. Let's review the most interesting errors.

Results of the check

Something's wrong with the flags

PVS-Studio warning: V3001 There are identical sub-expressions 'MethodAttributes.Public' to the left and to the right of the '|' operator. SyncListStructProcessor.cs 240

MethodReference GenerateSerialization()
{
    ....
    MethodDefinition serializeFunc = new MethodDefinition("SerializeItem",
        MethodAttributes.Public |
        MethodAttributes.Virtual |
        MethodAttributes.Public |    // <=
        MethodAttributes.HideBySig,
        Weaver.voidType);
    ....
}

When combining the MethodAttributes enumeration flags, an error was made: the Public value was used twice. Perhaps the reason for this is poor code formatting. A similar bug was also made in the code of the GenerateDeserialization method: V3001 There are identical sub-expressions 'MethodAttributes.Public' to the left and to the right of the '|' operator. SyncListStructProcessor.cs 309

Copy-Paste

PVS-Studio warning: V3001 There are identical sub-expressions 'format == RenderTextureFormat.ARGBFloat' to the left and to the right of the '||' operator.
RenderTextureEditor.cs 87

public static bool IsHDRFormat(RenderTextureFormat format)
{
    return (format == RenderTextureFormat.ARGBHalf ||
            format == RenderTextureFormat.RGB111110Float ||
            format == RenderTextureFormat.RGFloat ||
            format == RenderTextureFormat.ARGBFloat ||
            format == RenderTextureFormat.ARGBFloat ||    // <=
            format == RenderTextureFormat.RFloat ||
            format == RenderTextureFormat.RGHalf ||
            format == RenderTextureFormat.RHalf);
}

I've quoted the code after formatting it, so the error is easy to spot visually: the comparison with RenderTextureFormat.ARGBFloat is performed twice. In the original code it is laid out differently. Probably, another value of the RenderTextureFormat enumeration has to be used in one of the two identical comparisons.

Double work

PVS-Studio warning: V3008 CWE-563 The 'fail' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1633, 1632. UNetWeaver.cs 1633

class Weaver
{
    ....
    public static bool fail;
    ....
    static public bool IsValidTypeToGenerate(....)
    {
        ....
        if (....)
        {
            ....
            Weaver.fail = true;
            fail = true;
            return false;
        }
        return true;
    }
    ....
}

The true value is assigned twice in a row, since Weaver.fail and fail refer to one and the same static field of the Weaver class. Perhaps there is no crucial error here, but the code definitely needs attention.

No options

PVS-Studio warning: V3009 CWE-393 It's odd that this method always returns one and the same value of 'false'. ProjectBrowser.cs 1417

// Returns true if we should early out of OnGUI
bool HandleCommandEventsForTreeView()
{
    ....
    if (....)
    {
        ....
        if (....)
            return false;
        ....
    }
    return false;
}

The method always returns false. Pay attention to the comment at the beginning.

A developer forgot about the result

PVS-Studio warning: V3010 CWE-252 The return value of function 'Concat' is required to be utilized. AnimationRecording.cs 455

static public UndoPropertyModification[] Process(....)
{
    ....
    discardedModifications.Concat(discardedRotationModifications);
    return discardedModifications.ToArray();
}

When concatenating the two arrays discardedModifications and discardedRotationModifications, the author forgot to save the result. The programmer probably assumed that the result would immediately end up in the discardedModifications array, but that is not the case: Concat returns a new sequence, so the original discardedModifications array is returned from the method unchanged. The code needs to be corrected as follows:

static public UndoPropertyModification[] Process(....)
{
    ....
    return discardedModifications.Concat(discardedRotationModifications)
                                 .ToArray();
}

Wrong variable was checked

PVS-Studio warning: V3019 CWE-697 Possibly an incorrect variable is compared to null after type conversion using 'as' keyword. Check variables 'obj', 'newResolution'. GameViewSizesMenuItemProvider.cs 104

private static GameViewSize CastToGameViewSize(object obj)
{
    GameViewSize newResolution = obj as GameViewSize;
    if (obj == null)
    {
        Debug.LogError("Incorrect input");
        return null;
    }
    return newResolution;
}

In this method, the developers forgot to consider the situation where the variable obj is not null but cannot be cast to the GameViewSize type. Then the variable newResolution will be set to null, and the debug output will not be made.
A correct variant of the code would be like this:

private static GameViewSize CastToGameViewSize(object obj)
{
    GameViewSize newResolution = obj as GameViewSize;
    if (newResolution == null)
    {
        Debug.LogError("Incorrect input");
    }
    return newResolution;
}

Deficiency

PVS-Studio warning: V3020 CWE-670 An unconditional 'return' within a loop. PolygonCollider2DEditor.cs 96

private void HandleDragAndDrop(Rect targetRect)
{
    ....
    foreach (....)
    {
        ....
        if (....)
        {
            ....
        }
        return;
    }
    ....
}

The loop executes only one iteration, after which the method terminates. Various scenarios are probable. For example, the return should perhaps be inside the if block, or a continue statement is missing somewhere before the return. It may well be that there is no error here, but then the code should be made more understandable.

Unreachable code

PVS-Studio warning: V3021 CWE-561 There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless. CustomScriptAssembly.cs 179

public bool IsCompatibleWith(....)
{
    ....
    if (buildingForEditor)
        return IsCompatibleWithEditor();

    if (buildingForEditor)
        buildTarget = BuildTarget.NoTarget; // Editor
    ....
}

Two identical checks, one right after the other. It is clear that if buildingForEditor is true, the second check is meaningless, because the method returns at the first one. If buildingForEditor is false, neither of the then-branches will be executed. This is an erroneous construction that requires correction.

Unconditional condition

PVS-Studio warning: V3022 CWE-570 Expression 'index < 0 && index >= parameters.Length' is always false. AnimatorControllerPlayable.bindings.cs 287

public AnimatorControllerParameter GetParameter(int index)
{
    AnimatorControllerParameter[] param = parameters;
    if (index < 0 && index >= parameters.Length)
        throw new IndexOutOfRangeException(
            "Index must be between 0 and " + parameters.Length);
    return param[index];
}

The condition of the index check is incorrect: the result will always be false. However, if an incorrect index is passed to the GetParameter method, an IndexOutOfRangeException will still be thrown when the array element is accessed in the return statement, although the error message will be slightly different. The || operator has to be used in the condition instead of && so that the code works the way the developer expected:

public AnimatorControllerParameter GetParameter(int index)
{
    AnimatorControllerParameter[] param = parameters;
    if (index < 0 || index >= parameters.Length)
        throw new IndexOutOfRangeException(
            "Index must be between 0 and " + parameters.Length);
    return param[index];
}

Perhaps due to copy-paste, there is another identical error in the Unity code: V3022 CWE-570 Expression 'index < 0 && index >= parameters.Length' is always false. Animator.bindings.cs 711

And another similar error associated with an incorrect check of an array index: V3022 CWE-570 Expression 'handle.valueIndex < 0 && handle.valueIndex >= list.Length' is always false. StyleSheet.cs 81

static T CheckAccess<T>(T[] list, StyleValueType type, StyleValueHandle handle)
{
    T value = default(T);
    if (handle.valueType != type)
    {
        Debug.LogErrorFormat(....
);
    }
    else if (handle.valueIndex < 0 && handle.valueIndex >= list.Length)
    {
        Debug.LogError("Accessing invalid property");
    }
    else
    {
        value = list[handle.valueIndex];
    }
    return value;
}

In this case, too, an IndexOutOfRangeException can be thrown. As in the previous code fragments, the operator || has to be used instead of && to fix the error.

Simply strange code

Two warnings are issued for the code fragment below.

PVS-Studio warning: V3022 CWE-571 Expression 'bRegisterAllDefinitions || (AudioSettings.GetSpatializerPluginName() == "GVR Audio Spatializer")' is always true. AudioExtensions.cs 463
PVS-Studio warning: V3022 CWE-571 Expression 'bRegisterAllDefinitions || (AudioSettings.GetAmbisonicDecoderPluginName() == "GVR Audio Spatializer")' is always true. AudioExtensions.cs 467

// This is where we register our built-in spatializer extensions.
static private void RegisterBuiltinDefinitions()
{
    bool bRegisterAllDefinitions = true;
    if (!m_BuiltinDefinitionsRegistered)
    {
        if (bRegisterAllDefinitions ||
            (AudioSettings.GetSpatializerPluginName() == "GVR Audio Spatializer"))
        {
        }
        if (bRegisterAllDefinitions ||
            (AudioSettings.GetAmbisonicDecoderPluginName() == "GVR Audio Spatializer"))
        {
        }
        m_BuiltinDefinitionsRegistered = true;
    }
}

It looks like an unfinished method. It is unclear why it has been left like this and why the developers haven't commented out the useless code blocks. All the method does at the moment is this:

if (!m_BuiltinDefinitionsRegistered)
{
    m_BuiltinDefinitionsRegistered = true;
}

Useless method

PVS-Studio warning: V3022 CWE-570 Expression 'PerceptionRemotingPlugin.GetConnectionState() != HolographicStreamerConnectionState.Disconnected' is always false. HolographicEmulationWindow.cs 171

private void Disconnect()
{
    if (PerceptionRemotingPlugin.GetConnectionState() !=
        HolographicStreamerConnectionState.Disconnected)
        PerceptionRemotingPlugin.Disconnect();
}

To clarify the situation, it is necessary to look at the declaration of the method PerceptionRemotingPlugin.GetConnectionState():

internal static HolographicStreamerConnectionState GetConnectionState()
{
    return HolographicStreamerConnectionState.Disconnected;
}

Thus, calling the Disconnect() method leads to nothing.

One more error relates to the same method PerceptionRemotingPlugin.GetConnectionState(): V3022 CWE-570 Expression 'PerceptionRemotingPlugin.GetConnectionState() == HolographicStreamerConnectionState.Connected' is always false. HolographicEmulationWindow.cs 177

private bool IsConnectedToRemoteDevice()
{
    return PerceptionRemotingPlugin.GetConnectionState() ==
           HolographicStreamerConnectionState.Connected;
}

The result of the method is equivalent to the following:

private bool IsConnectedToRemoteDevice()
{
    return false;
}

As we can see, many interesting findings turned up among the V3022 warnings. Probably, with more time spent, the list could be extended. But let's move on.

Not on the format

PVS-Studio warning: V3025 CWE-685 Incorrect format. A different number of format items is expected while calling 'Format' function. Arguments not used: index. Physics2D.bindings.cs 2823

public void SetPath(....)
{
    if (index < 0)
        throw new ArgumentOutOfRangeException(
            String.Format("Negative path index is invalid.", index));
    ....
}

There is no error in this code, but, as the saying goes, the code "smells". Probably an earlier version of the message was more informative, something like "Negative path index {0} is invalid.". Then it was simplified, but the developers forgot to remove the index argument of the Format method.
Of course, this is not the same as forgetting an argument for a format item that is actually present in the string, i.e. a construction like String.Format("Negative path index {0} is invalid."); in that case an exception would be thrown. But in our case, neatness is also needed when refactoring. The code has to be fixed as follows:

public void SetPath(....)
{
    if (index < 0)
        throw new ArgumentOutOfRangeException(
            "Negative path index is invalid.");
    ....
}

Substring of the substring

PVS-Studio warning: V3053 An excessive expression. Examine the substrings 'UnityEngine.' and 'UnityEngine.SetupCoroutine'. StackTrace.cs 43

static bool IsSystemStacktraceType(object name)
{
    string casted = (string)name;
    return casted.StartsWith("UnityEditor.") ||
           casted.StartsWith("UnityEngine.") ||
           casted.StartsWith("System.") ||
           casted.StartsWith("UnityScript.Lang.") ||
           casted.StartsWith("Boo.Lang.") ||
           casted.StartsWith("UnityEngine.SetupCoroutine");
}

Searching for the substring "UnityEngine.SetupCoroutine" in the condition is meaningless, because the check for "UnityEngine." is performed before it. Therefore, the last check should be removed, or the correctness of the substrings should be double-checked.

Another similar error: V3053 An excessive expression. Examine the substrings 'Windows.dll' and 'Windows.'. AssemblyHelper.cs 84

static private bool CouldBelongToDotNetOrWindowsRuntime(string assemblyPath)
{
    return assemblyPath.IndexOf("mscorlib.dll") != -1 ||
           assemblyPath.IndexOf("System.") != -1 ||
           assemblyPath.IndexOf("Windows.dll") != -1 ||    // <=
           assemblyPath.IndexOf("Microsoft.") != -1 ||
           assemblyPath.IndexOf("Windows.") != -1 ||       // <=
           assemblyPath.IndexOf("WinRTLegacy.dll") != -1 ||
           assemblyPath.IndexOf("platform.dll") != -1;
}

Size does matter

PVS-Studio warning: V3063 CWE-571 A part of conditional expression is always true if it is evaluated: pageSize <= 1000. UNETInterface.cs 584

public override bool IsValid()
{
    ....
    return base.IsValid() &&
           (pageSize >= 1 || pageSize <= 1000) &&
           totalFilters <= 10;
}

The condition checking for a valid page size is erroneous. Instead of the || operator, && has to be used. The corrected code:

public override bool IsValid()
{
    ....
    return base.IsValid() &&
           (pageSize >= 1 && pageSize <= 1000) &&
           totalFilters <= 10;
}

Possible division by zero

PVS-Studio warning: V3064 CWE-369 Potential division by zero. Consider inspecting denominator '(float)(width - 1)'. ClothInspector.cs 249

Texture2D GenerateColorTexture(int width)
{
    ....
    for (int i = 0; i < width; i++)
        colors[i] = GetGradientColor(i / (float)(width - 1));
    ....
}

The problem can occur when the value width = 1 is passed to the method; the method does not check for this in any way. GenerateColorTexture is called in the code just once, with the parameter 100:

void OnEnable()
{
    if (s_ColorTexture == null)
        s_ColorTexture = GenerateColorTexture(100);
    ....
}

So there is no error here so far. But, just in case, the GenerateColorTexture method should guard against incorrect width values.

Paradoxical check

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_Parent'. EditorWindow.cs 449

public void ShowPopup()
{
    if (m_Parent == null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
    }
}

Probably, due to a typo, executing this code guarantees a dereference of the null reference m_Parent. The corrected code:

public void ShowPopup()
{
    if (m_Parent != null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
    }
}

The same error occurs later in the code: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_Parent'. EditorWindow.cs 470

internal void ShowWithMode(ShowMode mode)
{
    if (m_Parent == null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
}

And here's another interesting bug that can lead to a null dereference because of an incorrect check: V3080 CWE-476 Possible null dereference. Consider inspecting 'objects'. TypeSelectionList.cs 48

public TypeSelection(string typeName, Object[] objects)
{
    System.Diagnostics.Debug.Assert(objects != null || objects.Length >= 1);
    ....
}

It seems to me that Unity developers quite often make mistakes with the || and && operators in conditions. In this case, if objects is null, the second part of the condition (objects.Length >= 1) will still be evaluated, which will entail an unexpected exception. The error should be corrected as follows:

public TypeSelection(string typeName, Object[] objects)
{
    System.Diagnostics.Debug.Assert(objects != null && objects.Length >= 1);
    ....
}

Early nullifying

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_RowRects'. TreeViewControlGUI.cs 272

public override void GetFirstAndLastRowVisible(....)
{
    ....
    if (rowCount != m_RowRects.Count)
    {
        m_RowRects = null;
        throw new InvalidOperationException(string.Format("....",
            rowCount, m_RowRects.Count));
    }
    ....
}

In this case, the exception (a dereference of the null reference m_RowRects) will be thrown while generating the message string for another exception. The code might be fixed, for example, as follows:

public override void GetFirstAndLastRowVisible(....)
{
    ....
    if (rowCount != m_RowRects.Count)
    {
        var m_RowRectsCount = m_RowRects.Count;
        m_RowRects = null;
        throw new InvalidOperationException(string.Format("....",
            rowCount, m_RowRectsCount));
    }
    ....
}

One more error when checking

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'additionalOptions'. MonoCrossCompile.cs 279

static void CrossCompileAOT(....)
{
    ....
    if (additionalOptions != null & additionalOptions.Trim().Length > 0)
        arguments += additionalOptions.Trim() + ",";
    ....
}

Because the & operator is used in the condition, its second part will always be evaluated, regardless of the result of the first part. If the variable additionalOptions is null, an exception is inevitable. The error has to be corrected by using the && operator instead of &. As we can see, among the V3080 warnings there are some rather insidious errors.

Late check

PVS-Studio warning: V3095 CWE-476 The 'element' object was used before it was verified against null. Check lines: 101, 107. StyleContext.cs 101

public override void OnBeginElementTest(VisualElement element, ....)
{
    if (element.IsDirty(ChangeType.Styles))
    {
        ....
    }
    if (element != null && element.styleSheets != null)
    {
        ....
    }
    ....
}

The variable element is used without a preliminary null check, while later in the code this check is performed. The code probably needs to be corrected as follows:

public override void OnBeginElementTest(VisualElement element, ....)
{
    if (element != null)
    {
        if (element.IsDirty(ChangeType.Styles))
        {
            ....
        }
        if (element.styleSheets != null)
        {
            ....
        }
    }
    ....
}

There are 18 more such errors in the code. Let me give you a list of the first 10:

V3095 CWE-476 The 'property' object was used before it was verified against null.
Check lines: 5137, 5154. EditorGUI.cs 5137
V3095 CWE-476 The 'exposedPropertyTable' object was used before it was verified against null. Check lines: 152, 154. ExposedReferenceDrawer.cs 152
V3095 CWE-476 The 'rectObjs' object was used before it was verified against null. Check lines: 97, 99. RectSelection.cs 97
V3095 CWE-476 The 'm_EditorCache' object was used before it was verified against null. Check lines: 134, 140. EditorCache.cs 134
V3095 CWE-476 The 'setup' object was used before it was verified against null. Check lines: 43, 47. TreeViewExpandAnimator.cs 43
V3095 CWE-476 The 'response.job' object was used before it was verified against null. Check lines: 88, 99. AssetStoreClient.cs 88
V3095 CWE-476 The 'compilationTask' object was used before it was verified against null. Check lines: 1010, 1011. EditorCompilation.cs 1010
V3095 CWE-476 The 'm_GenericPresetLibraryInspector' object was used before it was verified against null. Check lines: 35, 36. CurvePresetLibraryInspector.cs 35
V3095 CWE-476 The 'Event.current' object was used before it was verified against null. Check lines: 574, 620. AvatarMaskInspector.cs 574
V3095 CWE-476 The 'm_GenericPresetLibraryInspector' object was used before it was verified against null. Check lines: 31, 32. ColorPresetLibraryInspector.cs 31

Wrong Equals method

PVS-Studio warning: V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. CurveEditorSelection.cs 74

public override bool Equals(object _other)
{
    CurveSelection other = (CurveSelection)_other;
    return other.curveID == curveID && other.key == key && other.type == type;
}

The overload of the Equals method was implemented carelessly. One has to take into account the possibility of receiving null as the parameter, since this can cause an exception that hasn't been considered in the calling code. In addition, if _other cannot be cast to the CurveSelection type, an InvalidCastException will be thrown. The code has to be fixed; a good example of an Object.Equals overload is given in the documentation, and a sketch of a safer variant follows the list of similar errors below:

V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. SpritePackerWindow.cs 40
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. PlatformIconField.cs 28
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ShapeEditor.cs 161
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ActiveEditorTrackerBindings.gen.cs 33
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ProfilerFrameDataView.bindings.cs 60
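Here is a minimal sketch of such a safer overload (our variant for illustration, assuming CurveSelection is a reference type; this is not the fix actually applied in Unity):

public override bool Equals(object _other)
{
    // 'as' yields null both for a null argument and for an argument of an
    // incompatible type, so neither case can throw here.
    CurveSelection other = _other as CurveSelection;
    if (other == null)
        return false;
    return other.curveID == curveID && other.key == key && other.type == type;
}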
Once again about the check for null inequality

PVS-Studio warning: V3125 CWE-476 The 'camera' object was used after it was verified against null. Check lines: 184, 180. ARBackgroundRenderer.cs 184

protected void DisableARBackgroundRendering()
{
    ....
    if (camera != null)
        camera.clearFlags = m_CameraClearFlags;

    // Command buffer
    camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_CommandBuffer);
    camera.RemoveCommandBuffer(CameraEvent.BeforeGBuffer, m_CommandBuffer);
}

When the camera variable is used the first time, it is checked against null, but further along the code the developers forget to do so. The correct variant could be like this:

protected void DisableARBackgroundRendering()
{
    ....
    if (camera != null)
    {
        camera.clearFlags = m_CameraClearFlags;

        // Command buffer
        camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_CommandBuffer);
        camera.RemoveCommandBuffer(CameraEvent.BeforeGBuffer, m_CommandBuffer);
    }
}

Another similar error: V3125 CWE-476 The 'item' object was used after it was verified against null. Check lines: 88, 85. TreeViewForAudioMixerGroups.cs 88

protected override Texture GetIconForItem(TreeViewItem item)
{
    if (item != null && item.icon != null)
        return item.icon;
    if (item.id == kNoneItemID)    // <=
        return k_AudioListenerIcon;
    return k_AudioGroupIcon;
}

This is an error that in some cases leads to a null dereference. If the condition in the first if block is satisfied, the method exits. However, if it is not, there is no guarantee that the reference item is non-null. Here is the corrected version of the code:

protected override Texture GetIconForItem(TreeViewItem item)
{
    if (item != null)
    {
        if (item.icon != null)
            return item.icon;
        if (item.id == kNoneItemID)
            return k_AudioListenerIcon;
    }
    return k_AudioGroupIcon;
}

There are 12 similar errors in the code. Let me give you a list of the first 10:

V3125 CWE-476 The 'element' object was used after it was verified against null. Check lines: 132, 107. StyleContext.cs 132
V3125 CWE-476 The 'mi.DeclaringType' object was used after it was verified against null. Check lines: 68, 49. AttributeHelper.cs 68
V3125 CWE-476 The 'label' object was used after it was verified against null. Check lines: 5016, 4999. EditorGUI.cs 5016
V3125 CWE-476 The 'Event.current' object was used after it was verified against null. Check lines: 277, 268. HostView.cs 277
V3125 CWE-476 The 'bpst' object was used after it was verified against null. Check lines: 96, 92. BuildPlayerSceneTreeView.cs 96
V3125 CWE-476 The 'state' object was used after it was verified against null. Check lines: 417, 404. EditorGUIExt.cs 417
V3125 CWE-476 The 'dock' object was used after it was verified against null. Check lines: 370, 365. WindowLayout.cs 370
V3125 CWE-476 The 'info' object was used after it was verified against null. Check lines: 234, 226. AssetStoreAssetInspector.cs 234
V3125 CWE-476 The 'platformProvider' object was used after it was verified against null. Check lines: 262, 222. CodeStrippingUtils.cs 262
V3125 CWE-476 The 'm_ControlPoints' object was used after it was verified against null. Check lines: 373, 361. EdgeControl.cs 373

The choice turned out to be small

PVS-Studio warning: V3136 CWE-691 Constant expression in switch statement. HolographicEmulationWindow.cs 261

void ConnectionStateGUI()
{
    ....
    HolographicStreamerConnectionState connectionState =
        PerceptionRemotingPlugin.GetConnectionState();
    switch (connectionState)
    {
        ....
    }
    ....
}

The method PerceptionRemotingPlugin.GetConnectionState() is to blame here. We already came across it when analyzing the V3022 warnings:

internal static HolographicStreamerConnectionState GetConnectionState()
{
    return HolographicStreamerConnectionState.Disconnected;
}

The method returns a constant, so only one branch of the switch is reachable. This code is very strange and deserves attention.

Conclusions

I think we can stop at this point, otherwise the article will become boring and overextended. Again, I have listed the errors that I just couldn't miss. Sure, the Unity code contains a large number of erroneous and incorrect constructions that need to be fixed.
The difficulty is that many of the issued warnings are very controversial, and only the author of the code is able to make an exact "diagnosis" in each case. Generally speaking about the Unity project, we can say that it is rich in errors, but taking into account the size of its code base (400 thousand lines), it's not so bad. Nevertheless, I hope the authors will not neglect code analysis tools for improving the quality of their product. Use PVS-Studio, and I wish you bug-free code!
9. Intention

This article is intended to give a brief look into the logistics of machine learning. Do not expect to become an expert in the field just by reading this. However, I hope that the article goes into just enough detail to spark your interest in learning more about AI and how it can be applied to various fields, such as games. Once you finish reading the article, I recommend looking at the resources posted below. If you have any questions, feel free to message me on Twitter @adityaXharsh.

How Neural Networks Work

Neural networks work by receiving inputs, sending outputs, and performing self-corrections based on the difference between the output and the expected output, also known as the cost. Neural networks are composed of neurons, which in turn compose layers, or collections of neurons. For example, there is an input layer and an output layer. In between these two layers are layers known as hidden layers, which allow for more complex and nuanced behavior by the neural network. A neural network can be thought of as a multi-tier cake: the first tier of the cake represents the input, the tiers in between, or lack thereof, represent the hidden layers, and the last tier represents the output.

The two mechanisms of learning are Forward Propagation and Backward Propagation. Forward Propagation uses linear algebra to calculate what the activation of each neuron of the next layer should be, and then pushes, or propagates, those values forward. Backward Propagation uses calculus to determine what values in the network need to be changed in order to bring the output closer to the expected output.

Forward Propagation

Each layer is composed of multiple neurons, and each neuron is connected to every neuron of the following and previous layer, save for the input and output layers, since they are not surrounded by layers on both sides. To put it simply, a neural network represents a collection of activations, weights, and biases. They can be defined as:

Activation: A value representing how strongly a neuron is firing.
Weight: How strong the connection is between two neurons. Affects how much of the activation is propagated onto the next layer.
Bias: A minimum threshold for whether or not the current neuron's activation and weight should affect the next neuron's activation.

Each neuron has an activation and a bias. Every connection to every neuron is represented as a weight. The activations, weights, biases, and connections can be represented using matrices. In the standard form, the activations of the next layer are calculated as a' = sigmoid(W*a + b), where W is the weight matrix, a is the vector of activations from the previous layer, and b is the bias vector. After the inner portion of the function has been computed, the resulting matrix gets pumped into a special function known as the Sigmoid Function, defined as sigmoid(z) = 1 / (1 + e^(-z)). The sigmoid function is handy since its output is locked to a range between zero and one. This process is repeated until the activations of the output neurons have been calculated.
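As a rough illustration of that formula, one layer's forward pass might look like this in C# (a sketch of ours, not code from the AI Chan library mentioned below):

using System;

static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

// One step of forward propagation: a' = sigmoid(W * a + b).
static double[] Forward(double[,] weights, double[] activations, double[] biases)
{
    int outSize = weights.GetLength(0);
    int inSize  = weights.GetLength(1);
    var next = new double[outSize];
    for (int i = 0; i < outSize; i++)
    {
        double z = biases[i];                     // Start from the neuron's bias.
        for (int j = 0; j < inSize; j++)
            z += weights[i, j] * activations[j];  // Weighted sum of the inputs.
        next[i] = Sigmoid(z);                     // Squash into the (0, 1) range.
    }
    return next;
}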
Backward Propagation

The process of a neural network performing self-correction is referred to as Backward Propagation, or backprop. This article will not go into detail about backprop, since it can be a confusing topic. To summarize, the algorithm uses a technique in calculus known as Gradient Descent: given a surface in a very high-dimensional space, the direction of change that minimizes the error must be found. The goal of using gradient descent is to modify the weights and biases such that the error in the network approaches zero. The cost, or error, of a network is commonly measured with the quadratic cost, C = (1/2) * sum((y - a)^2), where y is the expected output, a is the actual output, and the sum runs over the output neurons.

Unlike forward propagation, which is done from input to output, backward propagation goes from output to input. For every activation, find the error in that neuron, how much of a role it played in the error of the output, and adjust accordingly. This technique uses concepts such as the chain rule, partial derivatives, and multivariate calculus; therefore, it's a good idea to brush up on one's calculus skills.

High Level Algorithm

1. Initialize the matrices for the weights and biases of all layers to random decimal numbers between -1 and 1.
2. Propagate the input through the network.
3. Compare the output with the expected output.
4. Propagate the correction backwards into the network.
5. Repeat this for N training samples.

Source Code

If you're interested in looking into the guts of a neural network, check out AI Chan! It's a simple-to-integrate library for machine learning I wrote in C++. Feel free to learn from it and use it in your own projects. https://bitbucket.org/mrsaturnsan/aichan/

Resources

http://neuralnetworksanddeeplearning.com/
https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
10. The following is an excerpt from the book Unity 2017 Game Development Essentials - Third Edition, written by Tommaso Lintrami and published by Packt Publishing.

Unity makes the game production process simple by giving you a set of logical steps to build any conceivable game scenario. Renowned for being non-game-type specific, Unity offers you a blank canvas and a set of consistent procedures to let your imagination be the limit of your creativity.

Essential Unity concepts

By establishing the use of the Game Object concept, you are able to break down parts of your game into easily manageable objects, which are made of many individual component parts. By making individual objects within the game and introducing functionality to them with each component you add, you are able to infinitely expand your game in a logical, progressive manner. Component parts in turn have Variables, which are essentially properties of the component, or settings to control it with. By adjusting these variables, you'll have complete control over the effect that a component has on your object. The following diagram illustrates this:

Assets

These are the building blocks of all Unity projects. From textures in the form of image files, through 3D models for meshes, to sound files for effects, Unity refers to the files you'll use to create your game as assets. This is why, in any Unity project folder, all files used are stored in a child folder named Assets. This Assets folder is mirrored in the Project panel of the Unity interface; see the interface section in this chapter for more detail.

Scenes

In Unity, you should think of scenes as individual levels or areas of game content. However, some developers create entire games in a single scene, such as puzzle games, by dynamically loading content through code. By constructing your game with many scenes, you'll be able to distribute loading times and test different parts of your game individually. New scenes are often used separately from the game scene you may be working on, in order to prototype or test a piece of potential gameplay.

Any currently open scene is what you are working on. In Unity 2017, you can load more scenes into the hierarchy while editing, and even at runtime, through the new SceneManager API, where two or more scenes can be worked on simultaneously. Scenes can be manipulated and constructed by using the Hierarchy and Scene views.

OK, now that we know what assets and scenes are, let's start setting up a scene and building a game asset.

Setting up a scene and preparing game assets

Create a new scene from the main menu by navigating to Assets | Create | Scene, and name it ParallaxGame. In this new scene, we will set up, step by step, all the elements for our 2D game prototype. First of all, we will switch the camera setting in the Scene view to 2D by clicking on the button as shown by the red arrow in the following screenshot:

As you can see, the Scene view camera is now orthographic. You can't rotate it as you wish, as you can with the 3D camera. Of course, we will want to change this setting on our Main Camera as well. We also want to change the Orthographic size to 4.5 to have the correct view of the scene. For the Skybox, we will instead choose a very dark or black color as the clear color in the depth setting. This is how the Inspector should look when these settings are done:

While the Clipping Planes distances are important for setting the size of the frustum cone of a 3D Perspective camera (inside which everything will be rendered by the engine), for our 2D camera we only need to set the Orthographic Size to 4.5 to have the correct distance of the camera from the scene.

When these settings are done, proceed by importing Chapter2-3-4.unitypackage into the project. You can either double-click on the package file with Unity open, or use the top menu: Assets | Import | Custom Package. If you haven't imported all the materials from the book's code already, be sure to include the Sprites subfolder. After the import, look in the Sprites/Parallax/DarkCave folder in the Project view and you will find some images imported as textures (the default). The first thing we want to do now is to change the import settings of these images, in the Inspector, from Texture to Sprite (2D and UI). To do so, select all the images in the Project view in the Sprites/Parallax/DarkCave folder -- all except the _reference_main_post file, which is just a picture used as a reference of what the game level should look like:

The Import Settings shown in the Inspector after selecting the seven images in the Project view

The Max Size setting is hidden (-) because we have a multi-selection of image files. After having made the multiple selection, again in the Inspector, we will do the following:

Set the Texture Type option to Sprite (2D and UI). By default, images are imported as textures; to import them as Sprites, this type must be set.
Uncheck the Generate Mip Maps option, as we don't need MIP maps for this project: we are not going to look at the Sprites from a distant point of view. For example, games with a zoom-in/zoom-out feature (like the original 2D Grand Theft Auto) would need this setting checked.
Set Max Size to the maximum allowed. To ensure that you import all the images at their maximum resolution, set this to 8192. This is the maximum resolution size for an image on a modern PC, imported as a Sprite or texture. We set it so high because most of the background images we have in the collection are around 6,000 pixels wide.
Click on the Apply button to apply these changes to all the images that were selected.
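If you find yourself repeating these import settings for many folders, they can also be applied automatically from an editor script. This is a hypothetical sketch of ours, not from the book; the folder path and class name are made up:

using UnityEditor;

// Applies the same import settings to every texture under the DarkCave folder,
// instead of setting them by hand in the Inspector.
public class DarkCaveSpriteImporter : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("Sprites/Parallax/DarkCave"))
            return;

        var importer = (TextureImporter)assetImporter;
        importer.textureType = TextureImporterType.Sprite; // Sprite (2D and UI)
        importer.mipmapEnabled = false;  // No zooming, so MIP maps are not needed.
        importer.maxTextureSize = 8192;  // Keep the big backgrounds at full resolution.
    }
}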
The Project view showing the content of the folder after the images have been set to Sprite in the Import Settings

Placing the prefabs in the game

Prefabs can be placed in the game in many ways; the usual, visual method is to drag a stored prefab or another kind of file/object directly into the scene. Before dragging in the Sprites we imported, we will create an empty GameObject and rename it ParallaxCave. We will drag the layer images we just imported as Sprites, one by one, from the Project view (pointing at the Assets/Chapters2-3-4/Sprites/Background/DarkCave folder) into the Scene view, or, more simply, directly into the Hierarchy view as children of our ParallaxCave GameObject, resulting in a scene Hierarchy like the one illustrated here:

You can't drag all of them at once, because Unity will prompt you to save an animation filename for the selected collection of Sprites; we will see this later for our character and for the collectable graphics.

Importing and placing background layers

In any game engine, 2D elements, such as Sprites, are rendered following a sort order; this order is also called the z-order, because it is a way to express depth, or to cope with the missing z axis, in a two-dimensional context. The sort order is assigned as an integer number which can be positive or negative; 0 is the middle point of this draw order. Ideally, a sort order of zero expresses the middle ground, where the player will act, or a layer near it. Look at this image:

Image courtesy of Wikipedia: parallax scrolling

All positive numbers will render the Sprite element in front of the elements with a lower number. The graphic set we are going to use was taken from the Open Game Art website at http://opengameart.org. For simplicity, the provided background image files are named with a number within parentheses, for example, middleground(z1), which means that this image should be rendered with a z sort order of 1. Change the sort order property of the Sprite component on each child object under ParallaxCave according to the value in the parentheses at the end of their filenames. This will rearrange the graphics into the appropriately sorted order. (A hypothetical script for this step closes this excerpt.)

After we place and set the correct layer order for all the images, we should arrange and scale the layers properly, to end up with something like the reference image provided in the Assets/Chapters2-3-4/Sprites/Background/DarkCave/ folder. You can take a look at the final result for this part at any time by saving the current scene and loading the Chapter3_start.unity scene.

You just read an excerpt from the book Unity 2017 Game Development Essentials - Third Edition, written by Tommaso Lintrami and published by Packt Publishing. Use the code ORGDA10 at checkout to get the recommended eBook for $10 only, until April 30, 2018.
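As promised above, here is a hypothetical helper for the sort-order step (ours, not from the book): it parses the "(z1)"-style suffix in each child's name and applies it as the sorting order.

using UnityEngine;

public static class ParallaxSortOrder
{
    // Call with the ParallaxCave transform to set every child's sort order.
    public static void Apply(Transform parallaxRoot)
    {
        foreach (var sprite in parallaxRoot.GetComponentsInChildren<SpriteRenderer>())
        {
            string name = sprite.gameObject.name;
            int open = name.LastIndexOf("(z");
            int close = name.LastIndexOf(')');
            if (open >= 0 && close > open + 2 &&
                int.TryParse(name.Substring(open + 2, close - open - 2), out int order))
            {
                sprite.sortingOrder = order; // Higher values render in front.
            }
        }
    }
}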
11. This reference guide has now been proofread by @stimarco (Sean Timarco Baggaley). Please give your thanks to him. The guide should now be far easier to read and understand than previous revisions -- enjoy! Note: the normal mapping tutorial has been temporarily moved, to be added back as its own topic, to help separate the two for more clarity. If anyone has any corrections, please contact me.

3D Graphics Primer 1: Textures

This is a quick reference for artists who are starting out. The first topic revolves around textures and the many things a beginning artist needs to understand about them. I am primarily a 3D artist, and my focus will therefore be primarily on 3D art. However, some of this information is applicable to 2D artists as well.

Textures

What is a texture?

By classical definition, a texture is the visual and especially tactile quality of a surface (Dictionary.com).

Since current games lack the ability to convey tactile sensations, a texture in game terms simply refers to the visual quality of a surface, with an implied tactile quality. That is, a rock texture should give the impression of the surface of a rock and, depending on the type, a rough or smooth tactile quality. We see and feel these types of surfaces in real life, so when we see such a texture, we know from past experience what it would feel like without needing to touch it. But a lot more goes into making a convincing texture than this simple tactile quality.

As you will learn as you read on, texturing in games is a very complex topic, with many elements involved in creating textures for realtime rendering.

We will look at:

- Texture File Types & Formats
- Texture Image Size
- Texture File Size
- Texture Types
- Tiling Textures
- Texture Bit Depth
- Making Normal Maps (Brief overview only)
- Pixel Density and Fill Rate Limitation
- Mipmaps
- Further Reading

Further Reading

Creating and using textures is such a big subject that covering it entirely within this one primer is simply not sensible. All I can sensibly achieve here is a skimming over the surface, so here are some links to further reading matter.

Beautiful Yet Friendly - Written by Guillaume Provost, hosted by Adam Bromell. This is a very interesting article that goes into some depth about basic optimizations and the thought process when designing and modeling a level. It goes into the technical side of things to truly give you an understanding of what is going on in the background. You can use the information in this article to find out how to build models that use fewer resources -- polygons, textures, etc. -- for the same results.
      This is the reason why this is the first article I am linking to: it is imperative to understand the topics discussed in this article. If you need any extra explanation after reading it, you can PM me and I am more than happy to help. However, parts of this article go outside the texture realm of things and into the mesh side, so keep that in mind if you're focusing on learning textures at the moment.

      UVW Mapping Tutorial - by Waylon Brinck. This is about the best tutorial I have found for a topic that gives all 3D artists a headache: unwrapping your three-dimensional object into a two-dimensional plane for 2D painting. It is the process by which all 3D artists place a texture on a mesh (model). NOTE: while this tutorial is very good and will help you in learning the process, UVW mapping/unwrapping is just one of those things you must practice and experiment with for a while before you truly understand it.

Poop In My Mouth's Tutorials - By Ben Mathis. Probably the only professional I know of with such a scary website name, but don't be afraid! I swear there is nothing terrible beyond that link. He has a ton of excellent tutorials, short and long, that cover both the modeling and texturing processes, ranging from normal mapping to UVW unwrapping. You may want to read this reference first before delving into some of his tutorials.

Texture File Types & Formats

In the computer world, textures are really nothing more than image files applied to a model. Because of this, a variety of common computer image formats can be used. These include .TGA, .DDS, .BMP, and even .JPG (or .JPEG). Almost any digital image format can be used, but some things must be taken into consideration:

In the modern world of gaming, which relies heavily on shaders, formats like .JPG are rarely used. This is because .JPG, and others like it, are lossy formats: data in the image file is actually thrown away to make the file smaller. This process can result in compression artifacts. The problem is that these artifacts interfere with shaders, because shaders rely on having all the data contained within the image intact. Because of this, lossless formats are used instead -- formats like .DDS (if a lossless option is chosen), .BMP, and .TGA.

However, there is a compression scheme called S3TC (also known as "DXT"). This was a compression technique developed for use on S3's Savage 3D graphics cards, with the benefit that the texture stays compressed within video memory, whereas non-S3TC-compressed textures do not. This gives roughly a 6:1 to 8:1 compression ratio (depending on the variant and the source format) and allows either more textures to be used in a scene, or a higher resolution texture without using more memory. S3TC-compressed data can be stored in various containers, but is most commonly associated with the .DDS format.

Just like .JPG and other lossy formats, any texture using S3TC will suffer compression artifacts, and as such it is not suitable for normal maps (which we'll discuss a little later on).

Even with S3TC available, it is common to author textures in a lossless format and then apply S3TC only where appropriate. This gives an artist lossless textures when needed -- e.g. for normal maps -- while still providing a method of compression for textures that can benefit from it, such as diffuse textures.

Texture Image Size
The engineers who design computers and their component parts like us to feed data to the hardware in chunks whose dimensions are powers of two (e.g. 16 pixels, 32 pixels, 64 pixels, and so on). While it is possible to have a texture that is not a power of two, it is generally a good idea to stick to power-of-two sizes for compatibility reasons (especially if you're targeting older hardware). That is, if you're creating a texture for a game, you want image dimensions that are powers of two: 32x32, 16x32, 2048x1024, 1024x1024, 512x512, 512x32, etc. If, for example, you have a mesh/model that you're UV unwrapping for a game, you must work within dimensions that are a power of two.

      Powers of two include: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and so on.
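If you ever need to verify a dimension programmatically, a power of two has exactly one bit set, which makes the check a one-liner. A minimal C sketch (the function name is mine, purely illustrative):

#include <stdbool.h>

/* A power of two has exactly one bit set, so n & (n - 1) clears it to zero. */
static bool is_power_of_two(unsigned int n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

/* is_power_of_two(512) -> true, is_power_of_two(768) -> false */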

What if you want to use images that aren't powers of two? In such cases, you can often use uneven pixel ratios. This means you can create your texture at, say, 1024x768, and then save it as 1024x1024. When you apply the texture to your mesh, you stretch it back out in proportion. It is best to aim for a 1:1 pixel ratio and create the texture at a power of two from the start, but stretching is one method of getting around this if needed.

Please refer to the "Pixel Density and Fill Rate Limitation" section for more in-depth information on how to choose your image size.

Texture File Size
File size is important for a number of reasons. The file size is the actual amount of memory (permanent or temporary) that the texture requires.
For an obvious example, an uncompressed .BMP might be 6 MB; this is the space it requires both on a hard drive and within video and/or system RAM. Using compression we can squeeze the same image into, say, 400 KB, or possibly even less. However, compression like that used by .JPG and similar formats only helps on permanent storage media, such as hard drives. That is to say, when the image is loaded into video card memory it must be uncompressed, so you only benefit when storing the texture on its permanent medium, not in memory during use.

Enter S3TC
The key benefit of the S3TC compression system is that it compresses on hard drives, discs, and other media, while also staying compressed within video card memory.
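To make the numbers concrete, here is a rough C calculation of what one texture costs uncompressed versus DXT1-compressed. This assumes 32-bit RGBA source data and DXT1's fixed 8 bytes per 4x4 pixel block; exact figures vary with the format chosen:

#include <stdio.h>

int main(void)
{
    unsigned int w = 1024, h = 1024;

    /* Uncompressed 32-bit RGBA: 4 bytes per pixel. */
    unsigned int raw_bytes  = w * h * 4;             /* 4,194,304 bytes (~4 MB) */

    /* DXT1 stores each 4x4 pixel block in 8 bytes -> 8:1 vs 32-bit RGBA. */
    unsigned int dxt1_bytes = (w / 4) * (h / 4) * 8; /* 524,288 bytes (512 KB) */

    printf("raw: %u bytes, dxt1: %u bytes\n", raw_bytes, dxt1_bytes);
    return 0;
}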

      But why should you care what size it is within the video memory?

Video cards have onboard memory called, unsurprisingly enough, video memory, for storing data that needs to be accessed fast. This memory is limited, so considerations must be made on the artist's part. The good news is that video card manufacturers are constantly adding more of this memory -- known as Video RAM (or VRAM for short). Where once 64 MB was considered good, we can now find video cards with up to 1 GB of RAM.

The more textures you have, the more RAM will be used; the actual amount is based primarily on the file size of each texture. Other data takes up video memory too, such as the model data itself, but the majority goes to textures. For this reason, it is a good idea both to plan your video memory usage and to test how much you're using once in the engine. It is also advisable to define a minimum hardware configuration for what you want your game to run on; if you do, always make sure your video memory usage does not exceed the minimum target hardware's amount.

Another advantage of in-memory compression like S3TC is that it frees up bandwidth. If you know your engine on your target hardware may be swapping textures back and forth frequently (something that should be avoided if possible, but is a technique used on consoles), then you may want to keep the textures compressed and decompress them on the fly: the textures stay compressed, and when they're required, they're transported and then decompressed as they're added to video memory. This means less data has to be shunted across the graphics card's bus (transport line) to and from the computer's main memory, resulting in lower bandwidth utilization, at the penalty of a few wasted processing clocks.

Texture Types
Now we're going to discuss such things as diffuse, normal, specular, parallax and cube mapping.
Aside from the diffuse map, these mapping types are common in what have become known as 'next-gen' engines, where the artist is given more control over how the model is rendered through control maps for shaders. Diffuse maps are textures which simply provide the color and any baked-in details. Games before shaders used only diffuse textures (you could say this is the 'default' texture type). For example, when texturing a rock, the diffuse map would be just the color and detail of that rock. Diffuse textures can be painted by hand, made from photographs, or a mixture of both.

However, in any modern engine, it is preferred that most lighting and shadow detail not be 'baked' (i.e. pre-drawn directly) into the diffuse map; instead, the diffuse holds just the color data, and other control maps recreate things such as shadows, specular reflections, refraction effects and so on. This results in a more dynamic look for the texture overall, helping its believability in-game. Baking such details hinders the game engine's ability to produce dynamic results, which makes the end result look unrealistic.

There is sometimes an exception to this rule: baking (merging) "general" shading/lighting, such as ambient occlusion maps, into the diffuse is OK. These types of additions are general enough that they won't hinder the dynamic effect of running in a realtime engine, while achieving a higher level of realism.

      Another point to remember is that while textures sourced from photographs tend to work very well in 3D environment work, it is often frowned upon to use such 'photosourced textures' for humans. Human diffuse textures are usually hand-painted.

Normal maps are used to provide lighting detail for dynamic lighting; however, they play an even more important role, as we will discuss shortly. Normal maps get their name from the fact that they recreate normals on a mesh using a texture. A 'normal' is a vector extending from a triangle on a mesh. It tells the game engine how much light the triangle should receive from a particular light source: the engine compares the direction of the normal with the position and angle of the light, and from that calculates how the light strikes the triangle. Without a normal map, the game engine can only use the data available in the mesh itself; a triangle has only three normals for the engine to use -- one at each vertex -- regardless of how big that triangle is on screen, resulting in a flat look. A normal map, on the other hand, supplies normals right across a mesh's surface. The result is the capability to generate a normal map from a 2-million-poly mesh and have that lighting detail recreated on a 200-poly mesh. This allows an artist to recreate a high-poly mesh with comparatively few polygons.
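To see why per-surface normals matter, here is the core of that lighting comparison in plain C: the engine effectively takes the dot product of the normal and the direction to the light -- per vertex without a normal map, per pixel with one. A simplified sketch, not any specific engine's code:

typedef struct { float x, y, z; } Vec3;

/* Lambertian "N dot L" diffuse term: 1.0 when facing the light,
   0.0 when side-on or facing away. Both vectors assumed normalized. */
static float diffuse_term(Vec3 normal, Vec3 to_light)
{
    float d = normal.x * to_light.x + normal.y * to_light.y + normal.z * to_light.z;
    return d > 0.0f ? d : 0.0f;
}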

Such tricks are associated with 'next-gen' engines, and are heavily used in screenshots of the Unreal 3 Engine, where you can see large amounts of detail that nevertheless run in realtime due to the modest number of polys actually used. Normal maps use the color channels (red, green, and blue) as grayscale images storing the components of the normal direction at each point; however, there is no universal standard for how these channels are interpreted, so a channel (commonly the green channel) sometimes needs to be flipped to work correctly in a given engine.

Normal maps can be generated from high-poly meshes, or they can be image-generated, that is, generated from a grayscale map. High-poly-generated normal maps tend to be preferred, as they tend to have more depth. This is for various reasons, but the main one is that it is difficult for most people to paint a grayscale map that equals the quality you'll get straight from a model. You will also often find that image generators miss details, requiring the artist to edit the normal map by hand afterwards. Still, it is not impossible to get near-equal results from both methods; each has its own requirements, and it is up to you which to use in each situation.

      (Please refer to the tutorial section for extended information on normal maps.)

Specular maps are simple in creation. They are easy to paint by hand, because the map is simply a grayscale map that defines the specular level for any point on a mesh. However, in some engines and 3D modeling applications this map can be full color, so that brightness defines the specular level and color defines the color of the specular highlights. This gives the artist finer control to produce more lifelike textures, because the specific specular attributes of certain materials can be more defined and closer to reality. In this way you can give a plastic material a more plastic-like specular reflection, or simulate a stone's particular specular look.

Parallax mapping picks up where regular normal mapping fails. A parallax map does the same thing as a normal map, except it also samples a grayscale height map stored in the alpha channel. Parallax mapping works by using the angles recorded in the tangent-space normal map along with the height map to calculate which way to displace the texture coordinates: the grayscale map defines how much the texture should be extruded outwards, and the angle information recorded in the tangent-space normal map determines the direction in which to offset the texture. Done this way, a parallax map can recreate the extrusion of a normal map, but without the flattening visible in ordinary normal mapping due to the lack of data at different viewing angles. The technique gets its name from the parallax effect it exploits. The end results are cleaner, deeper extrusions with less of the flattening that occurs in normal mapping, because the texture coordinates are offset with your perspective.
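The texture-coordinate offset at the heart of the technique is only a couple of lines. A simplified C sketch of the common formula -- tangent-space view direction and a height sample in, shifted UVs out; real shaders refine this considerably, and all names here are illustrative:

typedef struct { float u, v; } UV;

/* Shift the texture lookup toward the viewer in proportion to the sampled
   height. view_ts_* is the tangent-space view direction (z assumed nonzero);
   scale controls the strength of the effect. */
static UV parallax_offset(UV uv, float view_ts_x, float view_ts_y, float view_ts_z,
                          float height, float scale)
{
    UV out = uv;
    out.u += view_ts_x / view_ts_z * height * scale;
    out.v += view_ts_y / view_ts_z * height * scale;
    return out;
}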

Parallax mapping is usually more costly than normal mapping. It also has the limitation of not being usable on deformable meshes, because of the tendency of the texture to "swim" due to the way the texture coordinates are offset.

Cube mapping uses a texture not unlike an unfolded box, and it acts just like a skybox when used. Basically, the texture is laid out like an unfolded box that is folded back together at runtime. This allows us to create a three-dimensional reflection of sorts; the result is a very realistic precomputed reflection. For example, you would use this technique to create a shiny metal ball: the cube map provides the reflection data. (In some engines, you can tell the engine itself to generate a cube map from a specific point in the scene and apply the result to a model. This is how we get shiny, reflective cars in racing games, for example -- the engine is constantly taking cube map 'photos' for each car to ensure it reflects its surroundings accurately.)

Tiling Textures
Now, while you can have unique textures made for specific models, a common way to save both time and video memory is tiling textures. These are textures which fit together like floor or wall tiles, producing a single, seamless image. The benefit is that you can texture an entire floor in an entire building with a single tiling texture, which saves both the artist's time and video memory, since fewer textures are needed.

A tiling texture is achieved by having the left and right sides of a texture blend into each other, and likewise the top and bottom. Such blending can be achieved with a specialist program or plugin, or by simply offsetting the texture a little to the left and down, cleaning up the seams, and then offsetting back to the original position.

After you create your first tiling texture and test it, you're bound to see that each texture produces a 'tiling artifact' which reveals how and where the texture is tiled. Such artifacts can be reduced by avoiding high-contrast detail, avoiding unique detail (such as a single rock on a sand texture), and by tiling the texture less.

Texture Bit Depth
A texture is just an image file. This means most of the theory you're familiar with when working with images also applies to textures. One such area is bit depth. Here you will see numbers such as 8, 16, 24, and 32 bits. These correspond to the amount of color data stored for each pixel of the image.

How do we get the number 24 for an image file? The number 24 refers to how many bits each pixel contains. You have three channels -- red, green, and blue -- each of which is simply a grayscale image; added together, they produce a full-color image. Black in the red channel means "no red" at that point, while white in the red channel means "lots of red"; the same applies to green and blue. When these are combined, they produce a full-color image, a mix of red, green and blue (if using the RGB color model). The bits come in because they define how many levels of gray each channel has: 8 bits per channel, over 3 channels, is 24 bits total.

8 bits gives you 256 levels of gray. Combining the idea that 8 bits gives you 256 levels of gray with the fact that each channel is simply a grayscale image whose levels define an intensity of that color, we can see that a 24-bit image gives us 16,777,216 different colors to play with. That is, 8 bits x 3 channels = 24 bits; 8 bits = 256 grayscale levels; so 256 x 256 x 256 = 16,777,216 colors. This knowledge comes in useful because at times it is easier to edit the RGB channels individually; with a correct understanding, you can delve deeper into editing your textures.
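The same arithmetic in C, if you want to sanity-check it:

#include <stdio.h>

int main(void)
{
    unsigned int levels = 1u << 8;                  /* 256 gray levels per 8-bit channel */
    unsigned int colors = levels * levels * levels; /* 256^3 = 16,777,216 colors at 24 bits */
    printf("%u levels per channel, %u total colors\n", levels, colors);
    return 0;
}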

However, with the increase in shader use, you'll often see a 32-bit image/texture file. These are image files containing 4 channels of 8 bits each: 4 x 8 = 32. This allows a developer to use the 4th channel to carry a unique control map or other extra data needed by shaders. Since each channel is grayscale, a 32-bit image is ideal for carrying a full-color texture along with one extra map. Depending on your engine, that 4th channel might hold a grayscale map for transparency (more commonly known as an "alpha channel"), a specular control map, or a height map alongside a normal map in the other channels, to be used for parallax mapping.
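In memory, those four channels are typically packed one byte each into a 32-bit value, which is why the fourth channel is "free" to carry whatever grayscale map you like. A hedged C illustration -- the channel order here is mine; real formats vary:

#include <stdint.h>

/* Pack RGB color plus an arbitrary 8-bit control map (alpha, specular,
   height...) into one 32-bit texel. Channel order is illustrative only. */
static uint32_t pack_texel(uint8_t r, uint8_t g, uint8_t b, uint8_t control)
{
    return (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16) | ((uint32_t)control << 24);
}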

As you paint your textures, you may start to realize that you're probably not using all of the colors available in a 24-bit image. You're probably right, and this is why artists can at times use a lower bit depth to achieve the same or nearly the same look with a smaller memory footprint. There are certain cases where you will more than likely need a 24-bit image: if your texture contains frequent gradations in color or shading, then 24 bits are required. However, if your texture contains solid colors, little detail, and little or no shading, you can probably get away with a 16-, 8-, or perhaps even a 4-bit texture.

Often, this type of memory conservation technique is best done when the artist is able to choose or make his own color palette. This is where you hand-pick the colors that will be saved with your image, instead of letting the computer choose automatically. With a careful eye, you can choose more optimal colors which fit your texture better. Basically, all you're doing is throwing out what would be considered useless colors that were being stored in the texture but never used.

Making Normal Maps
There are two methods for creating normal maps: using a detailed mesh/model, or generating a normal map from an image.
The first method is part of a common workflow used by nearly all modelers who support normal map generation. You can either generate the normal map straight out to a texture or, if you're generating it from a high-poly mesh, it is common to model the low-poly mesh around your high-poly mesh. (Some artists prefer modeling the low-poly version first, while others like to do the high then the low; in the end there is no perfect way, it's just preference.)

For example: you have a high-poly rock. You model/build a low-poly mesh around the high one, UVW unwrap it, and generate the normal map from the high-poly version. Virtual "rays" are cast from the high- to the low-poly model -- a technique known as "projecting". This allows for a better application of the high-poly mesh's normal map to your low-poly mesh, since you're projecting the detail from the high to the low. However, some applications switch the requirements and have your low-poly mesh sit inside your high-poly one, and others allow the rays to be cast both ways; refer to your application's tutorials, as this may vary.

Creating a normal map from an image can be quicker than the more common method described above. For this, all you need is an edited version of your diffuse map. The key to good-looking image-based normal maps is to edit out of your grayscale diffuse texture any information that does not define depth. If your diffuse has any baked-in specular, shadows, or similar information, it needs to be removed from the image. Anything extra, like strips of paint on a concrete texture, should be edited out too. This is because, just like bump maps and displacement maps, the brightness of the pixels defines depth, with lighter pixels being "higher" than darker pixels. So if you have specular highlights (which turn white when converted to grayscale), they will be interpreted as bumps in your normal map -- which you don't want if the specular actually lies on a flat surface. The same applies to shadows and everything else mentioned: it will all interfere with the normal map generation process. You want only various shades of gray representing various depths; any other data in the texture will not produce correct results.

For generating normal maps, you can always use the Nvidia plugin, though it takes a lot of tweaking to get a good-looking normal map. As such, I recommend CrazyBump, which produces very good normal maps if the source texture is good.

Combining the two methods. It is common, even when generating a normal map from a 3D high-poly mesh, to also generate an image-based normal map and overlay it over the high-poly-generated one. This is done by generating a normal map from your diffuse map, filling the resulting map's blue channel with neutral gray (128), and then overlaying it over the high-poly-generated one. The point is to add in those small details that only the image can provide; this way you get the high-frequency detail along with the clean mid-to-low-frequency detail from your high-poly-generated normal map.

Pixel Density and Fill Rate Limitation
Let's say you have a coin that you just finished UVW unwrapping. It will be very small in-game, but you decide it would be fine to use a 1024x1024 texture. What is wrong with this situation? Firstly, you shouldn't need to UVW unwrap a coin! Furthermore, you should not be applying a 1024x1024 texture. Not only is this wasteful of video memory, it will result in uneven pixel density and will increase your fill rate cost on that model for no reason. A good rule of thumb is to use only the amount of resources that makes sense for how much screen space an object will take up. A building takes up more of the screen than a coin, so it warrants a higher resolution texture and more polygons; a coin takes up less screen space and therefore needs fewer polys and a lower resolution texture to achieve a similar pixel density.
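One possible way to turn that rule of thumb into numbers, sketched in C -- this is purely illustrative; real projects budget per-asset by hand:

/* Pick the smallest power-of-two texture size that covers the largest
   size (in pixels) the object is expected to occupy on screen. */
static unsigned int texture_size_for(unsigned int max_screen_pixels)
{
    unsigned int size = 16; /* a sensible floor */
    while (size < max_screen_pixels && size < 2048)
        size *= 2;
    return size;
}

/* texture_size_for(40) -> 64 (a coin), texture_size_for(700) -> 1024 (a building) */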

So, what is pixel density? It is the density of the texture's pixels on a mesh. For example, take the UVW unwrapping tutorial linked earlier: there you will see a checkered pattern, which is used not only to make sure the texture is applied correctly, but also to keep track of pixel density. If you increase the pixel density, the checkered pattern gets more dense; if you decrease it, the pattern becomes less dense, with fewer squares showing.

Maintaining a consistent pixel density throughout a game helps all of the art fit together. How would you feel if your high-pixel-density character walked up to a wall with a significantly lower pixel density? Your eyes would compare the two and see that the wall looks like crap next to the character. Would the same thing happen if the character were at nearly the same pixel density as the wall? Probably not -- such things only become apparent (within reason) to the end user when they have something to compare against. If you keep a consistent pixel density throughout all of the game's assets, everything fits together better.

It is important to note that there is one other reason for this, but we'll come to it in a moment. First, we need to look at two related problems that can arise: transform(ation)-limited and fill-rate-limited models. A transform-limited model spends most of its rendering time processing polygons, while a fill-rate-limited model, with its higher pixel density per polygon, spends most of its time processing the actual pixel-dense surface. A model's rendering time is dominated by whichever takes longer. Knowing this, we can see that our coin, with very few polys, will have a giant pixel density per polygon, resulting in a fill-rate-limited mesh. However, it does not need to be fill-rate limited: lower the texture resolution, and the pixel density drops with it.

The point is that your mesh will be held back in rendering by whichever process takes longer: transform or fill rate. If your mesh is fill-rate limited, you can speed up its processing by decreasing its pixel density; its speed will increase until you reach transform limitation, at which point the mesh is taking longer to render because of the number of polygons it contains. In that case, you would speed up the model by decreasing its polygon count -- that is, until you decrease the count to the point where you're fill-rate limited once again! As you can see, it's a balancing act. The trick is to minimize the impact of both transform and fill rate as much as you can, to get the best possible processing speed for your mesh.

That said, being fill-rate limited can sometimes be a good thing. The majority of "next-gen" games are fill-rate limited, primarily because of their texture/shading techniques. So if you can't possibly get the fill rate cost any lower and you're still fill-rate limited, then you have a little wiggle room where you can actually introduce more polygons with no performance hit. However, you should always try to cut down on fill rate limitations when possible, for general performance reasons.

Some methods revolve around splitting a single polygon into multiple polygons on a single mesh (a wall, for example). This decreases the pixel density and processing (shaders) per polygon by splitting the work across multiple polygons. There are other methods for dealing with fill rate limitation, but mainly it is as simple as keeping your pixel density at a reasonable level.

Mipmaps
It is fitting that after discussing pixel density and fill rate limitation we discuss mipmapping. Mipmaps (or mip maps) are a collection of precomputed lower-resolution copies of a texture, contained within the texture. Let's say you have a 1024x1024 texture. If you generate mipmaps for it, it will contain the original 1024x1024 image, but also 512x512, 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, and 2x2 versions of the same texture (exactly how many levels there are is up to you). These smaller mipmap levels are then used in sequence, one after the other, according to the model's distance from the camera in the scene. If your model uses the 1024x1024 level up close, it may use the 256x256 level when further away, and an even smaller level when it's way off in the distance. This is done for several reasons:

- The further you are from your mesh, the less polygonal and texture detail is needed. All displays have a fixed resolution, and it is physically impossible for the player to make out the detail of a 1024x1024 texture and a 6,000-polygon mesh when the model takes up only 20 pixels on screen. The further we are from the mesh, the fewer polygons and the lower the texture resolution we need to render it.
- Because of the fill rate limitation described above, it is beneficial to use mipmaps, as less texture detail must be processed for distant meshes. The result is a less fill-rate-heavy scene, because only the closer models receive large textures while more distant models receive smaller ones.
- Texture filtering. What happens when the player views a 1024x1024 texture at such a distance that only 40 pixels are given to render the mesh on screen? You get noise and aliasing artifacts! Without mipmaps, any texture too large for the display resolution only produces unneeded and unwanted noise. Instead of filtering out this noise, mipmapping substitutes a lower resolution texture at distance, which results in less noise.

It is important to note that while mipmaps increase performance overall, they actually increase your texture memory usage: the mipmaps and the whole texture are loaded into memory. However, it is possible to have a system where the user or developer selects the highest mipmap level they want, and levels above that limit are not loaded into memory (as in the system we're using now), while levels at or below the limit still are. It is widely agreed that the benefits of mipmaps vastly outweigh the small memory hit.

NOTE: Mesh detail can also affect performance, so an equivalent method is used for meshes, known as LOD ("Level Of Detail"). Multiple versions of the mesh itself are stored at different levels of detail, and the less detailed mesh is rendered when the model is a long way away. Like mipmaps, a mesh can have as many levels of detail as you feel it requires.
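The memory cost of a full mip chain is easy to quantify: each level is a quarter the size of the one above, so the whole chain converges to roughly one third extra. A quick C check, assuming 4 bytes per pixel and a square power-of-two texture:

#include <stdio.h>

int main(void)
{
    unsigned int size = 1024, bytes_per_pixel = 4;
    unsigned long long total = 0;

    /* Sum every level: 1024x1024, 512x512, ..., down to 1x1. */
    for (unsigned int s = size; s >= 1; s /= 2)
        total += (unsigned long long)s * s * bytes_per_pixel;

    /* Base level alone: 4,194,304 bytes; full chain: ~5,592,404 bytes,
       i.e. roughly a third more memory. */
    printf("base: %u bytes, with mips: %llu bytes\n",
           size * size * bytes_per_pixel, total);
    return 0;
}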

The image below is of a level I created which makes use of most of what we've discussed: S3TC-compressed textures, normal mapping, diffuse mapping, specular mapping, and tiled textures.
Sounds
This is our final part, part 5 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and part 4 is here.

It's great that collecting the pickups works, but a silent game is pretty bland. It would be great to have a sound play whenever a pickup is collected. Start by configuring a sound:

[PickupSound]
Sound = pickup.ogg
KeepInCache = true

Then, as part of the collision detection in the PhysicsEventHandler function, we change the code to be:

if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstRecipientObject, 0);
  orxObject_AddSound(pstSenderObject, "PickupSound");
}
if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstSenderObject, 0);
  orxObject_AddSound(pstRecipientObject, "PickupSound");
}

In the code above, if the recipient is a pickup object, we use the orxObject_AddSound function to place our sound on the sender object -- there's little point adding a sound to an object that is about to be deleted. And of course, if the pickup object is the sender, we add the sound to the recipient object. Note that PickupSound, the name added to the object, is the config section name we just defined. Compile and run. Hit the pickups and a sound will play.

You can also use sounds without code. There is an AppearSound section already available in the config. We can use this sound on the ufo when it first appears in the game. This is as simple as adding a SoundList property to the ufo:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody
AngularVelocity = 200
SoundList = AppearSound

Re-run and a nice sound plays at the start of the game.

Adding a score
What's a game without a score? We need to earn points for every pickup that is collected. The great thing about Orx objects is that they don't have to contain a texture as a graphic: they can contain a font and text rendered to a graphic instead. This is perfect for making a score object. Start by adding some config for the ScoreObject:

[ScoreObject]
Graphic = ScoreTextGraphic
Position = (-380, -280, 0)

Next, add the ScoreTextGraphic section, which will not be a texture, but text instead:

[ScoreTextGraphic]
Text = ScoreText

Now define ScoreText, which is the section that contains the text information:

[ScoreText]
String = 10000

The String property contains the actual text characters. This will be the default text when a ScoreObject instance is created in code. Let's now create an instance of the ScoreObject in the Init() function:

orxObject_CreateFromConfig("ScoreObject");

So far, the Init() function should look like this:

orxSTATUS orxFASTCALL Init()
{
  orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
  camera = orxViewport_GetCamera(viewport);
  orxObject_CreateFromConfig("BackgroundObject");
  ufo = orxObject_CreateFromConfig("UfoObject");
  orxCamera_SetParent(camera, ufo);
  orxObject_CreateFromConfig("PickupObjects");
  orxObject_CreateFromConfig("ScoreObject");
  orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);
  orxEvent_AddHandler(orxEVENT_TYPE_PHYSICS, PhysicsEventHandler);
  return orxSTATUS_SUCCESS;
}

Compile and run. There should be a score object in the top left corner displaying: 10000

The score is pretty small, and it's fixed to the top left corner of the playfield. That's not really what we want. A score is an example of a User Interface (UI) element.
It should be fixed in the same place on the screen, not move around when the screen scrolls. The score should, in fact, be fixed as a child to the Camera: wherever the Camera goes, the score object should go with it. This can be achieved with the ParentCamera property, and then setting the position of the score relative to the camera's centre position:

[ScoreObject]
Graphic = ScoreTextGraphic
Position = (-380, -280, 0)
ParentCamera = Camera
UseParentSpace = false

With these changes, we've stated that we want the Camera to be the parent of the ScoreObject. In other words, we want the ScoreObject to travel with the Camera and appear to be fixed on the screen. Setting UseParentSpace to false means we specify relative world coordinates from the centre of the camera; if we said yes, we'd have to specify coordinates in another system. And Position, of course, is the position relative to the center of the camera -- in our case, moved to the top left corner.

Re-run and you'll see the score in much the same position as before, but when you move the ufo around and the screen scrolls, the score object remains fixed in place. The only thing is, it's still a little small. We can double its size using Scale:

[ScoreObject]
Graphic = ScoreTextGraphic
Position = (-380, -280, 0)
ParentCamera = Camera
UseParentSpace = false
Scale = 2.0
Smoothing = false

Smoothing has been set to false so that when the text is scaled up, it stays sharp and pixellated rather than smoothed, which looks odd. All objects in our project are smooth by default due to:

[Display]
Smoothing = true

So we need to explicitly set the score not to smooth. Re-run. That looks a lot better.

To actually make use of the score object, we need a variable in code of type int to keep track of the score. Every clock cycle, we'll take that value and change the text on the ScoreObject. That is another cool feature of Orx text objects: the text can be changed at any time, and the object will re-render. Finally, when the ufo collides with a pickup and the pickup is destroyed, the score variable will be increased. The clock will pick up the variable value and set it on the score object. Begin by creating a score variable at the very top of the code:

#include "orx.h"
orxOBJECT *ufo;
orxCAMERA *camera;
int score = 0;

Change the comparison code inside the PhysicsEventHandler function to increase the score by 150 points every time a pickup is collected:

if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstRecipientObject, 0);
  orxObject_AddSound(pstSenderObject, "PickupSound");
  score += 150;
}
if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstSenderObject, 0);
  orxObject_AddSound(pstRecipientObject, "PickupSound");
  score += 150;
}

Now we need a way to change the text of the score object. We declared the score object in the Init() function as:

orxObject_CreateFromConfig("ScoreObject");

But we really need to create it using an orxOBJECT variable:

scoreObject = orxObject_CreateFromConfig("ScoreObject");

And then declare the scoreObject at the top of the file:

#include "orx.h"
orxOBJECT *ufo;
orxCAMERA *camera;
orxOBJECT *scoreObject;
int score = 0;

Now it is possible to update the scoreObject using our score variable.
At the bottom of the Update() function, add the following code:

if (scoreObject)
{
  orxCHAR formattedScore[5];
  orxString_Print(formattedScore, "%d", score);
  orxObject_SetTextString(scoreObject, formattedScore);
}

First, the block will only execute if there is a valid scoreObject. If so, we create a 5-character string, print the score value into it (effectively converting an int into a string), and finally set the text on the scoreObject using the orxObject_SetTextString function. Compile and run. Move the ufo around and collect the pickups to increase the score 150 points at a time.

Winning the game
1200 is the maximum number of points that can be awarded, and reaching it will mean we've won the game. If we do win, we want a text label to appear above the ufo, saying “You win!”. Like the score object, we need to define a YouWinObject:

[YouWinObject]
Graphic = YouWinTextGraphic
Position = (0, -60, 0.0)
Scale = 2.0
Smoothing = false

Just like the camera, the YouWinObject is going to be parented to the ufo too. This will give the appearance that the YouWinObject is part of the ufo. The Scale is set to x2, and the Position is offset up the y axis so that it appears above the ufo. Next, the actual YouWinTextGraphic:

[YouWinTextGraphic]
Text = YouWinText
Pivot = center

And the text to render into the YouWinTextGraphic:

[YouWinText]
String = You Win!

We'll test it by creating an instance of the YouWinObject, putting it into a variable, and then parenting it to the ufo in the Init() function:

orxObject_CreateFromConfig("PickupObjects");
scoreObject = orxObject_CreateFromConfig("ScoreObject");
ufoYouWinTextObject = orxObject_CreateFromConfig("YouWinObject");
orxObject_SetParent(ufoYouWinTextObject, ufo);

Then the variable:

#include "orx.h"
orxOBJECT *ufo;
orxCAMERA *camera;
orxOBJECT *ufoYouWinTextObject;
orxOBJECT *scoreObject;
int score = 0;

Compile and run. The “You win!” text should appear above the ufo. Not bad, but the text is rotating with the ufo, much like the camera was before. We can ignore the rotation from the parent on this object too:

[YouWinObject]
Graphic = YouWinTextGraphic
Position = (0, -60, 0.0)
Scale = 2.0
Smoothing = false
IgnoreFromParent = rotation

Re-run. Interesting. It certainly isn't rotating with the ufo, but its position is still being taken from the ufo's rotation. We need to ignore this as well:

[YouWinObject]
Graphic = YouWinTextGraphic
Position = (0, -60, 0.0)
Scale = 2.0
Smoothing = false
IgnoreFromParent = position.rotation rotation

Good, that's working right. We want the “You Win!” text to appear once all pickups are collected. The YouWinObject is created on screen when the game starts, but we don't want it to appear yet -- only when we win.
Therefore, we need to disable the object immediately after it is created, using the orxObject_Enable function:

ufoYouWinTextObject = orxObject_CreateFromConfig("YouWinObject");
orxObject_SetParent(ufoYouWinTextObject, ufo);
orxObject_Enable(ufoYouWinTextObject, orxFALSE);

Finally, all that is left to do is add a small check in the PhysicsEventHandler function to test the current score after each pickup collision:

if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstRecipientObject, 0);
  orxObject_AddSound(pstSenderObject, "PickupSound");
  score += 150;
}
if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
{
  orxObject_SetLifeTime(pstSenderObject, 0);
  orxObject_AddSound(pstRecipientObject, "PickupSound");
  score += 150;
}
if (orxObject_IsEnabled(ufoYouWinTextObject) == orxFALSE && score == 1200)
{
  orxObject_Enable(ufoYouWinTextObject, orxTRUE);
}

We are checking two things: that the ufoYouWinTextObject is not yet enabled, using the orxObject_IsEnabled function, and that the score is 1200. If both conditions are met, we enable the ufoYouWinTextObject. Compile and run. Move the ufo around and collect all the pickups. When all are picked up and 1200 is reached, the “You Win!” text should appear above the ufo, signifying that the game is over and we have won.

And that brings us to the end! We have created a simple and complete game with some configuration and minimal code. Congratulations! I hope you enjoyed working through making the ufo game using the Orx Portable Game Engine. Of course, there are many little extras you can add to give your game that little extra polish. So, for just a bit more eye candy, there are a couple more sections that you can follow along with if you wish.

Shadows
There are many ways to do shadows. One method is to use shaders, though that method is a little beyond this simple guide. Another method, when making your graphics, would be to add an alpha shadow underneath. This is a good method if your object does not need to rotate or flip. The method I will show you in this chapter is to have a separate shadow object as a child of an object. And in order to remain independent of rotations, the children will ignore rotations from the parent. First, a shadow graphic for the ufo, and one for the pickups. Save these both into the data/texture folder. Then create config for the ufo shadow:

[UfoShadowGraphic]
Texture = ufo-shadow.png
Alpha = 0.3
Pivot = center

The only interesting part is the Alpha property: 0.1 would be almost completely see-through (or transparent), and 1.0 is not see-through at all, which is the regular default value for a graphic. 0.3 is fairly see-through.

[UfoShadowObject]
Graphic = UfoShadowGraphic
Position = (20, 20, 0.05)

Set the Position a bit to the right and downwards. Next, add the UfoShadowObject as a child of the UfoObject:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody
AngularVelocity = 200
UseParentSpace = position
SoundList = AppearSound
ChildList = UfoShadowObject

Run the project. The shadow child sits properly behind the ufo, but it rotates around the ufo until it ends up at the bottom left, which is not correct. We'll need to ignore the rotation from the parent:

[UfoShadowObject]
Graphic = UfoShadowGraphic
Position = (20, 20, 0.05)
IgnoreFromParent = position.rotation rotation

Not only do we need to ignore the rotation of the ufo, we also need to ignore the rotation position of the ufo.
Re-run, and the shadow sits nice and stable at the bottom right of the ufo. Now to do the same with the pickup shadow:

[PickupShadowGraphic]
Texture = pickup-shadow.png
Alpha = 0.3
Pivot = center

[PickupShadowObject]
Graphic = PickupShadowGraphic
Position = (20, 20, 0.05)
IgnoreFromParent = position.rotation

The only difference between this object and the ufo shadow is that we want the pickup shadow to take the rotation value from the parent, but not the position rotation. That way, the pickup shadow will remain at the bottom right of the pickup, but will rotate nicely in place. Now attach it as a child to the pickup object:

[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX
Body = PickupBody
ChildList = PickupShadowObject

Re-run, and the shadows should all be working correctly. And that really is it this time. I hope you made it this far and that you enjoyed this series of articles on the Orx Portable Game Engine. If you like what you see and would like to try out a few more things with Orx, head over to our learning wiki where you can follow more beginner guides, tutorials and examples. You can always get the latest news on Orx at the official website. If you need any help, you can get in touch with the community on gitter, or at the forum. They're a friendly, helpful bunch over there, always ready to welcome newcomers and assist with any questions.
Creating Pickup Objects
This is part 4 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and part 3 is here.

In our game, the player will be required to collect objects scattered around the playfield with the ufo. When the ufo collides with one, the object will disappear, giving the impression that it has been picked up. Begin by creating a config section for the graphic, and then the pickup object:

[PickupGraphic]
Texture = pickup.png
Pivot = center

[PickupObject]
Graphic = PickupGraphic

The graphic will use the image pickup.png, which is located in the project's data/object folder. It will also be pivoted in the center, which will be handy for a rotation effect later. Finally, the pickup object uses the pickup graphic. Nice and easy.

Our game will have eight pickup objects, so we need a simple way to place eight of these objects in various places. We will employ a nice trick to handle this: we will make an empty object, called PickupObjects, which holds eight copies of the pickup object as child objects. That way, wherever the parent is moved, the children move with it. Add that now:

[PickupObjects]
ChildList = PickupObject1 # PickupObject2 # PickupObject3 # PickupObject4 # PickupObject5 # PickupObject6 # PickupObject7 # PickupObject8
Position = (-400, -300, -0.1)

This object has no graphic. That's ok; it can still act like any other object. Notice the position: it is placed in the top left hand corner of the screen, and all of the child objects PickupObject1 to PickupObject8 will be positioned relative to the parent in that corner. Now to create the actual children. We'll use the inheritance trick again, and just use PickupObject as a template:

[PickupObject1@PickupObject]
Position = (370, 70, -0.1)

[PickupObject2@PickupObject]
Position = (210, 140, -0.1)

[PickupObject3@PickupObject]
Position = (115, 295, -0.1)

[PickupObject4@PickupObject]
Position = (215, 445, -0.1)

[PickupObject5@PickupObject]
Position = (400, 510, -0.1)

[PickupObject6@PickupObject]
Position = (550, 420, -0.1)

[PickupObject7@PickupObject]
Position = (660, 290, -0.1)

[PickupObject8@PickupObject]
Position = (550, 150, -0.1)

Each of the PickupObject* objects uses the properties defined in PickupObject; the only difference between them is their Position properties. The last thing to do is to create an instance of PickupObjects in code, in the Init() function:

orxObject_CreateFromConfig("PickupObjects");

Compile and run. Eight pickup objects should appear on screen. Looking good.

It would look good if the pickups rotated slowly on screen, just to make them more interesting. This is very easy to achieve in Orx using FX. FX can also be defined in config. FX allows you to affect an object's position, colour, rotation, scaling, etc; even sound can be affected by FX. Change the PickupObject by adding an FXList property:

[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX

Being a list, FXList lets you place many types of FX on an object at the same time; we will only have one. An FX is a collection of FX slots, and FX slots are the actual effects themselves. Confused? Let's work through it. First, the FX:

[SlowRotateFX]
SlotList = SlowRotateFXSlot
Loop = true

This simply means: use an effect called SlowRotateFXSlot, and when it is done, do it again in a loop. Next the slot (or effect):

[SlowRotateFXSlot]
Type = rotation
StartTime = 0
EndTime = 10
Curve = linear
StartValue = 0
EndValue = 360

That's a few properties.
First, the Type, which is a rotation FX. The total time for the FX is 10 seconds, which comes from the StartTime and EndTime properties. The Curve type is linear, so the value changes are applied in a strict and even manner. And the values which the curve uses over the 10-second period start at 0 and climb to 360. Re-run and notice the pickups now turning slowly over 10 seconds and then repeating.

Picking up the collectable objects
Time to make the ufo collide with the pickups. In order for this to work (just as for the walls), the pickups need a body, and the body needs to be set to collide with a ufo and vice versa. First, a body for the pickup template:

[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX
Body = PickupBody

Then the body section itself:

[PickupBody]
Dynamic = false
PartList = PickupPart

Just like the walls, the pickups are not dynamic. We don't want them bouncing and traveling around as a result of being hit by the ufo; they are static and need to stay in place if they are hit. Next, define the PickupPart:

[PickupPart]
Type = sphere
Solid = false
SelfFlags = pickup
CheckMask = ufo

The pickup is sort of roundish, so we're going with a spherical type. It is not solid: we want the ufo to be able to pass through it when it collides, without its travel being influenced at all. The pickup is given a label of pickup and will only collide with an object carrying a label of ufo. The ufo must reciprocate this arrangement (just like a good date) by adding pickup to its bodypart's check mask:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
Friction = 1.2
CheckMask = wall # pickup

Both flags and masks now match up, so collision events will occur when the ufo collides with a pickup. It's a little difficult to test this right now, but you can turn on the debug again to check the body parts:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run to see the body parts. Then switch it off again:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = false

To cause a code event to occur when the ufo hits a pickup, we need something new: a physics handler. The handler will run a function of our choosing whenever two objects collide. We can test these two objects to see if they are the ones we are interested in, and run some code if they are. First, add the physics handler to the end of the Init() function:

orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);
orxEvent_AddHandler(orxEVENT_TYPE_PHYSICS, PhysicsEventHandler);

This will create a physics handler, and should any physics event occur (like two objects colliding), a function called PhysicsEventHandler will be executed. Our new function will start as:

orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;
    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);
    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);
    orxLOG("Object %s has collided with %s", senderName, recipientName);
  }
  return orxSTATUS_SUCCESS;
}

Every handler function is passed an orxEVENT object. This structure contains a lot of information about the event.
The eID is tested to ensure that the physics event that occurred is an orxPHYSICS_EVENT_CONTACT_ADD, which indicates that objects have collided. If so, two orxOBJECT variables are declared and set from the orxEVENT structure; they are passed in as the hSender and hRecipient objects. Next, two orxSTRINGs are declared and set by getting the names of the objects using the orxObject_GetName function. The name returned is the section name from the config; potential candidates are UfoObject, BackgroundObject, and PickupObject1 to PickupObject8. The names are then sent to the console. Finally, the function returns orxSTATUS_SUCCESS, which is required of an event function.

Compile and run. If you drive the ufo into a pickup or the edge of the playfield, a message will be displayed on the console, so we know that all is working. Next is to add code to remove a pickup from the playfield when the ufo collides with it. Usually we would compare the name of one object to another and perform the action. In this case, however, the pickups are all named different things: PickupObject1, PickupObject2, PickupObject3… up to PickupObject8. So we will instead check whether the name contains “PickupObject”, which matches any of them. In fact, we don't need to test for the “other” object in the pair of colliding objects: the ufo is a dynamic object and everything else on screen is static, so if anything collides with a PickupObject*, it has to be the ufo. Therefore, we won't need to test for that.

First, remove the orxLOG line -- we don't need that anymore. Change the function to become:

orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;
    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);
    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);
    if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
    {
      orxObject_SetLifeTime(pstRecipientObject, 0);
    }
    if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
    {
      orxObject_SetLifeTime(pstSenderObject, 0);
    }
  }
  return orxSTATUS_SUCCESS;
}

You can see the new code additions after the object names are fetched. If an object name contains the word “PickupObject”, then the ufo must have collided with it, and we need to kill it off. The safest way to do this is by setting the object's lifetime to 0; this ensures the object is removed instantly and deleted by Orx in a safe manner. Notice that the test is performed twice: once in case the pickup object is the sender, and again in case it is the recipient. We need to check and handle both. Compile and run. Move the ufo over the pickups and they should disappear nicely. We'll leave it there for the moment. In the final part, Part 5, we'll cover adding sounds, a score, and winning the game.
Collisions
This is part 3 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and Part 2 is here.

There is one last requirement for the collision to occur: we need to tell the physics system who can collide with whom. This is done with flags and masks. Make a change to the ufo's body part by adding SelfFlags and CheckMask:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
CheckMask = wall

SelfFlags is the label you assign to an object, and CheckMask is the list of labels that your object can collide with. These labels don't have to match the names you give objects, but matching them will help you stay clean and organised. So in the config above, we are saying: the UfoBodyPart is a “ufo”, and it is expected to collide with any bodypart marked as a “wall”. But we haven't done that yet, so let's do it now. We only need to add it to the WallTopPart:

[WallTopPart]
Type = box
Solid = true
SelfFlags = wall
CheckMask = ufo
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)

Remember that the other three wall parts inherit the values from WallTopPart, so each now carries the label of “wall” and will collide with any body part carrying the label of “ufo”. Re-run, press the left arrow key, and drive the ufo into the left wall. It collides! And it stops. Now that the collision is working, we can flesh out the rest of the keyboard controls and test all four walls:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
  if (ufo)
  {
    const orxFLOAT FORCE = 0.8;
    orxVECTOR rightForce = { FORCE, 0, 0 };
    orxVECTOR leftForce = { -FORCE, 0, 0 };
    orxVECTOR upForce = { 0, -FORCE, 0 };
    orxVECTOR downForce = { 0, FORCE, 0 };
    if (orxInput_IsActive("GoLeft")) { orxObject_ApplyForce(ufo, &leftForce, orxNULL); }
    if (orxInput_IsActive("GoRight")) { orxObject_ApplyForce(ufo, &rightForce, orxNULL); }
    if (orxInput_IsActive("GoUp")) { orxObject_ApplyForce(ufo, &upForce, orxNULL); }
    if (orxInput_IsActive("GoDown")) { orxObject_ApplyForce(ufo, &downForce, orxNULL); }
  }
}

Now is a good time to turn off the physics debug, as we did earlier on. Compile and run. Try all four keys, and you should be able to move the ufo around the screen and collide with each wall.

The ufo is a little boring in the way that it doesn't spin when colliding with a wall. We need to ensure the UfoBody is not using fixed rotation. While this value defaults to false when not supplied, it makes things more readable if we set it explicitly:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
FixedRotation = false

The active ingredient here is to ensure that both the wall bodypart and the ufo bodypart have a little friction applied. This way, when they collide, they drag against each other and produce some spin:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
CheckMask = wall
Friction = 1.2

[WallTopPart]
Type = box
Solid = true
SelfFlags = wall
CheckMask = ufo
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)
Friction = 1.2

Re-run that and give it a try. Run against a wall at an angle to get some spin on the ufo. The next thing to notice is that neither the movement of the ufo nor the spin ever slows down; there is no friction to slow them. We'll deal with the spin first. By adding some AngularDamping to the UfoBody, the spin will slow down over time:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
AngularDamping = 2
FixedRotation = false

Re-run and check the spin.
The ufo should now slow down after leaving the wall. Now for the damping on the movement. That can be done with LinearDamping on the UfoBody:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
AngularDamping = 2
FixedRotation = false
LinearDamping = 5

Re-run, and the speed will drop after you release the arrow keys. But the ufo is slower overall as well, which is not quite what we want. You can increase the FORCE value in code (ufo.cpp), in the Update function, to compensate:

const orxFLOAT FORCE = 1.8;

Compile and run. The speed should now be more like what we expect.

It would be nice for the ufo to already be spinning a little when the game starts. For this, add a little AngularVelocity:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody
AngularVelocity = 200

Run this and the ship will have a small amount of spin at the start, until the AngularDamping on the ufo body slows it down again.

Following the UFO with the camera

While we could simply move the ufo around with the keys on a fixed background, it is a more pleasant experience to keep the ufo fixed and scroll the screen around instead. This effect can be achieved by parenting the camera to the ufo, so that wherever the ufo goes, the camera goes. Currently, our project is set up so that the viewport has a camera configured on it, but the camera is not available to our code. We need the camera in a variable so that it can be parented to the ufo object. To fix this, in the Init() function, extract the camera from the viewport by changing this line:

orxViewport_CreateFromConfig("Viewport");

to:

orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
camera = orxViewport_GetCamera(viewport);

And because the camera variable isn't defined yet, declare it at the top of the code:

#include "orx.h"

orxOBJECT *ufo;
orxCAMERA *camera;

Now it is time to parent the camera to the ufo in the Init() function using the orxCamera_SetParent function:

ufo = orxObject_CreateFromConfig("UfoObject");
orxCamera_SetParent(camera, ufo);

Compile and run. Woah, hang on. That's crazy: the whole screen just rotated around with the ufo, and it continues to rotate when the ufo hits the walls. See how the camera is a child of the ufo now? Not only does the camera move with the ufo, it rotates with it as well. We certainly want it to move with the ufo, but it would be nice to ignore the rotation from the parent ufo. Add the IgnoreFromParent property to the MainCamera section:

[MainCamera]
FrustumWidth = 1024
FrustumHeight = 720
FrustumFar = 2.0
FrustumNear = 0.0
Position = (0.0, 0.0, -1.0)
IgnoreFromParent = rotation

Re-run. That's got it fixed. Now when you move around, the playfield appears to scroll rather than the ufo moving, which makes for a more dramatic and interesting effect. (A consolidated sketch of Init() as it now stands appears at the end of this part.) In Part 4, we will give the ufo something to do. The goal is to collect several pickups.
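For reference, here is a rough sketch of how Init() might look after Parts 1 to 3, assembled from the snippets above (not a verbatim listing from the tutorial's source):

orxSTATUS orxFASTCALL Init()
{
    /* Create the viewport and grab its camera so it can be parented */
    orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
    camera = orxViewport_GetCamera(viewport);

    /* Create the playfield and the player's ufo */
    orxObject_CreateFromConfig("BackgroundObject");
    ufo = orxObject_CreateFromConfig("UfoObject");

    /* Keep the camera centred on the ufo */
    orxCamera_SetParent(camera, ufo);

    /* Call Update on every tick of the core clock */
    orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);

    return orxSTATUS_SUCCESS;
}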
15. The UFO

This is part 2 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here.

We have a playfield, and now we need a UFO character for the player to control. The first step is to create the configuration for the ufo object in ufo.ini:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)

This indicates that the UfoObject should use a graphic called UfoGraphic. Its position will be centered in the playfield with (x, y) = (0, 0). The -0.1 is the Z-axis value, which places the ufo above the BackgroundObject, whose Z-axis is set to 0. Next, define the UfoGraphic that the UfoObject needs:

[UfoGraphic]
Texture = ufo.png
Pivot = center

Unlike the background object, our ufo object will need to be assigned to a variable, which makes it possible to affect the ufo using code. Create the variable for our ufo object just under the orx.h include line:

#include "orx.h"

orxOBJECT *ufo;

And in the Init() function, create an instance of the ufo object with:

ufo = orxObject_CreateFromConfig("UfoObject");

Compile and run. You'll see a ufo object in front of the background. Excellent. Time to move on to something a little more fun: moving the ufo.

Controlling the UFO

The ufo is going to be controlled using the cursor arrow keys on the keyboard, and will be moved by applying forces. Physics will be set up in the project in order to do this. We will also use a clock to call an update function regularly; this function will read and respond to key presses.

Defining Direction Keys

Defining the keys is very straightforward. In the config file, expand the MainInput section in ufo.ini by adding the four cursor keys:

[MainInput]
KEY_ESCAPE = Quit
KEY_UP = GoUp
KEY_DOWN = GoDown
KEY_LEFT = GoLeft
KEY_RIGHT = GoRight

Each key is given a label name, like GoUp or GoDown. These label names are available in our code to test against. The next step is to create an update callback function in our code where the key presses are checked:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
}

And in order to tie this function to a clock (the clock will execute this function over and over), add the following to the bottom of the Init() function:

orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);

That looks scary and intimidating, but the only part that is important to you is the "Update" parameter. It means: tell the existing core clock to continually call our Update function. Of course, you can specify any function name here you like, as long as it exists.

Let's test a key to ensure that our callback is working well. Add the following code to the Update function:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
    if (ufo)
    {
        if (orxInput_IsActive("GoLeft"))
        {
            orxLOG("LEFT PRESSED!");
        }
    }
}

Every time Update runs, ufo is tested to ensure it exists, and then the input system is checked to see whether the label "GoLeft" is active (i.e. pressed). Remember how GoLeft is bound to KEY_LEFT in the MainInput config section? If that condition is true, "LEFT PRESSED!" is sent to the console output window while the key is pressed or held down. Soon we'll replace the orxLOG line with a function that applies force to the ufo. But before that, we need to add physics to the ufo. Compile and run. Press the left arrow key and take note of the console window. Every time you press or hold the key, the message is printed.
Good, so key presses are working.

Physics

In order to affect the ufo using forces, physics needs to be enabled. Begin by adding a Physics config section and setting Gravity:

[Physics]
Gravity = (0, 980, 0)

For an object in Orx to be affected by physics, it needs both a dynamic body and at least one body part. Give the ufo a body with the Body property:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody

Next, create the UfoBody section and define its PartList property:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart

The body is set to Dynamic, which means it is affected by gravity and collisions. A body needs at least one part, so we define the UfoBodyPart:

[UfoBodyPart]
Type = sphere
Solid = true

The body part's Type is set to sphere, which will automatically size itself around the object, and the part is solid so that if it collides with anything, it will not pass through it.

Compile and run. The ufo falls through the floor. This is because of the gravity setting of 980 on the Y-axis, which simulates real-world gravity. Our game is a top-down game, so change the Gravity property to:

[Physics]
Gravity = (0, 0, 0)

Re-run (no compile needed) and the ufo should remain in the centre of the screen.

The Physics section has another handy property for visually testing physics bodies on objects: ShowDebug. Add this property with true:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run, and you will see a pinkish sphere outline automatically sized around the ufo object. For now we'll turn that off again. You can do this by changing the ShowDebug value to false, adding a ; comment in front of the line, or simply deleting the line. We'll set ShowDebug to false:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = false

Let's add some force to the ufo when the left cursor key is pressed. Change the code in the Update function to:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
    if (ufo)
    {
        const orxFLOAT FORCE = 0.8;
        orxVECTOR leftForce = { -FORCE, 0, 0 };

        if (orxInput_IsActive("GoLeft"))
        {
            orxObject_ApplyForce(ufo, &leftForce, orxNULL);
        }
    }
}

The orxObject_ApplyForce function takes an orxVECTOR facing left and applies it to the ufo object. Compile and re-run. If you press and release the left arrow key, the ufo will move to the left. If you hold the key down, the ufo will pick up speed and move out of the left-hand side of the screen. Even if you tap the key once quickly, the ufo will still eventually drift out of the left of the screen. There is no friction yet to slow it down, nor any barriers to stop it leaving the screen.

Barrier Around The Border

Even though the background looks like it has a border, it is really only a picture. In order to create a barrier for the ufo, we will need to wrap the edges with some body parts. This means the background object will also be given a body, with four body parts, one for each wall. Start by adding a body to the object:

[BackgroundObject]
Graphic = BackgroundGraphic
Position = (0, 0, 0)
Body = WallBody

And then the body itself:

[WallBody]
Dynamic = false
PartList = WallTopPart # WallRightPart # WallBottomPart # WallLeftPart

This is different from the ufo body. This body is not dynamic, meaning it is a static body: one that is not affected by gravity, but that dynamic objects can still collide with. Also, there are four parts to this body, unlike the ufo, which only had one.
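One config detail worth calling out: the # characters in PartList are orx's list separator, so a single property can hold several values. A minimal, hypothetical example (section and part names are made up for illustration):

[ExampleBody]
; '#' separates the entries of a list value
Dynamic = false
PartList = PartOne # PartTwo

This is why WallBody can name all four wall parts on one line.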
Start with the WallTopPart first:

[WallTopPart]
Type = box
Solid = true
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)

In this part, the Type is box. It is set to solid for collisions, i.e. so that a dynamic object can collide with it but not pass through it. The box is stretched to cover the region from (-400, -300) to (400, -260).

At this point, it might be a good idea to turn on the physics debugging to check our work:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run the project. The top wall region should cover the top barrier squares.

Great. Next, we'll do the right-hand side. But rather than copy all the same values, we'll reuse some from the top wall:

[WallRightPart@WallTopPart]
TopLeft = (360, -260, 0)
BottomRight = (400, 260, 0)

Notice the @WallTopPart in the section name? This means: copy all the values from WallTopPart, but any properties defined in WallRightPart take priority. Therefore, the Type and Solid properties come from WallTopPart, while our own values are used for TopLeft and BottomRight in the WallRightPart section. This is called "Section Inheritance". It will come in very handy soon, when we tweak values or add new properties to all four wall parts.

Re-run the project, and there will now be two walls. Define the last two walls using the same technique:

[WallBottomPart@WallTopPart]
TopLeft = (-400, 260, 0)
BottomRight = (400, 300, 0)

[WallLeftPart@WallTopPart]
TopLeft = (-400, -260, 0)
BottomRight = (-360, 260, 0)

Now there are four walls for the ufo to collide with. Re-run and try moving the ufo left into the wall. Oops, it doesn't work. It still passes straight through. There is one last requirement for the collision to occur: we need to tell the physics system who can collide with whom. We'll cover that in Part 3.
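For easy reference, here is the full wall configuration from this part, gathered in one place (no new values; this is just the snippets above consolidated):

[WallBody]
Dynamic = false
PartList = WallTopPart # WallRightPart # WallBottomPart # WallLeftPart

[WallTopPart]
Type = box
Solid = true
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)

[WallRightPart@WallTopPart]
TopLeft = (360, -260, 0)
BottomRight = (400, 260, 0)

[WallBottomPart@WallTopPart]
TopLeft = (-400, 260, 0)
BottomRight = (400, 300, 0)

[WallLeftPart@WallTopPart]
TopLeft = (-400, -260, 0)
BottomRight = (-360, 260, 0)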
16. Overview

Welcome to the 2D UFO game guide using the Orx Portable Game Engine. My aim for this tutorial is to take you through all the steps to build a UFO game from scratch. The aim of our game is to allow the player to control a UFO by applying physical forces to move it around. The player must collect pickups to increase their score and win.

I should openly acknowledge that this series is cheekily inspired by the 2D UFO tutorial written for Unity. It makes an excellent comparison of the approaches between Orx and Unity. It is also a perfect way to highlight one of the major parts that makes Orx unique among game engines: its Data Driven Configuration System. You'll get very familiar with this system very soon. It's at the very heart of just about every game written using Orx.

If you are very new to game development, don't worry. We'll take it nice and slow and try to explain everything in very simple terms. The only knowledge you will need is some simple C++.

I'd like to say a huge thank you to FullyBugged for providing the graphics for this series of articles.

What are we making?

Visit the video below to see the look and gameplay of the final game:

Getting Orx

The latest up-to-date version of Orx can be cloned from GitHub and set up with:

git clone https://github.com/orx/orx.git

Once cloning has completed, the setup script in the root of the repository will start automatically for you. This script creates an $ORX environment variable for your system. The variable will point to the code subfolder where you cloned Orx. Why? I'll get to that in a moment, but it'll make your life easier. The setup script also creates several projects for various IDEs and operating systems: Visual Studio, Codelite, Code::Blocks, and gmake. You can pick one of these projects to build the Orx library.

Building the Orx Library

While the Orx headers are provided, you need to compile the Orx library so that your own games can link to it. Because the setup script has already created a suitable project for you (using premake), you can simply open one for your chosen OS/IDE and compile the Orx library yourself. There are three configurations to compile: Debug, Profile and Release. You will need to compile all three. For more details on compiling the Orx library, see http://orx-project.org/wiki/en/tutorials/cloning_orx_from_github at the Orx learning wiki.

The $ORX Environment Variable

I promised I would explain what this is for. Once you have compiled all three Orx library files, you will find them in the code/lib/dynamic folder:

orx.dll
orxd.dll
orxp.dll

Also, link libraries will be available in the same folder:

orx.lib
orxd.lib
orxp.lib

When it comes time to create our own game project, we would normally be forced to copy these library files and includes into every project. A better way is to have our projects point to the libraries and includes located in the folder that the $ORX environment variable points to (for example: C:\Dev\orx\code). This means that your projects will always know where to find the Orx library. And should you ever clone and re-compile a new version of Orx, your game projects can make immediate use of the newer version.

Setting up a 2D UFO Project

Now that you have the Orx libraries cloned and compiled, you will need a blank project for your game. Supported options are: Visual Studio, CodeLite, Code::Blocks, XCode or gmake, depending on your operating system. Once you have a game project, you can use it to work through the steps in this tutorial.
Orx provides a very nice system for auto-creating game projects for you. In the root of the Orx repo, you will find either the init.bat (for Windows) or init.sh (Mac/Linux) command. Create a project for our 2D game by opening a command line in the Orx folder and running:

init c:\temp\ufo

or:

init.sh ~/ufo

Orx will create a project for each IDE supported by your OS at the specified location. You can copy this folder anywhere, and your project will always compile and link thanks to the $ORX environment variable: it knows where the libraries and includes for Orx are.

Open your project using your favourite IDE from within the ufo/build folder. When the blank template loads, there are two main folders to note in your solution: config and src. The src folder contains a single source file, ufo.cpp. This is where we will add the C++ code for the game. The config folder contains configuration files for our game.

What is config?

Orx is a data-driven 2D game engine. Many of the elements in your game, like objects, spawners, music, etc., do not need to be defined in code. They can be defined (or configured) using config files. You can make a range of complex multi-part objects with special behaviours and effects in Orx, and bring them into your game with a single line of code. You'll see this in the following chapters of this guide.

There are three ufo config files in the config folder, but for this guide only one will actually be used in our game: ufo.ini. All our game configuration will be done there.

Over in the Orx library repo folder under orx/code/bin, there are two other config files:

CreationTemplate.ini
SettingsTemplate.ini

These are example configs, and they list all the properties and values that are available to you. We will mainly concentrate on referring to CreationTemplate.ini, which covers objects, sounds, etc. It's a good idea to include these two files in your project for easy reference. Alternatively, you can view these online at https://github.com/orx/orx/blob/master/code/bin/CreationTemplate.ini and here: https://github.com/orx/orx/blob/master/code/bin/SettingsTemplate.ini

The code template

Now let's take a look at the basic ufo.cpp and see what is contained there.

The first function is the Init() function. This function executes when the game starts up. Here you can create objects that have been defined in config, or perform other set-up tasks like registering handlers. We'll do both of these soon.

The Run() function is executed every main clock cycle. This is a good place to continually perform a task, though there are better alternatives for this, which we will cover later. This function is mainly used to check for the quit key.

The Exit() function is where memory is cleaned up when your game quits. Orx cleans up nicely after itself. We won't use this function as part of this guide.

The Bootstrap() function is an optional function to use. It is used to tell Orx where to find the first config file for use in our game (ufo.ini). There is another way to do this, but for now, we'll use this function to inform Orx of the config (a sketch of a typical Bootstrap() appears at the end of this section).

Then of course, the main() function. We do not need to touch this function in this guide.

Now that we have everything we need to get started, you should be able to compile successfully. Run the program and an Orx logo will appear, slowly rotating. Great. So now you have everything you need to start building the UFO game. If you experience an issue compiling, check the troubleshooting article for Orx projects for help.
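As an illustration of the Bootstrap() function mentioned above, here is roughly what a generated template contains. This is a sketch based on the standard orx project template; your generated ufo.cpp, and in particular the storage path, may differ:

orxSTATUS orxFASTCALL Bootstrap()
{
    /* Add a storage to the config resource group so Orx can find ufo.ini
       relative to the executable (exact path depends on the generated project) */
    orxResource_AddStorage(orxCONFIG_KZ_RESOURCE_GROUP, "../", orxFALSE);

    return orxSTATUS_SUCCESS;
}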
Setting up the game assets

Our game will have a background, a UFO which the player will control, and some pickups that the player can collect. The UFO will be controlled by the player using the cursor keys. First you'll need the assets to make the game. You can download the file assets-for-orx-ufo-game.zip which contains:

The background file (background.png)
The UFO and pickup sprite images (ufo.png and pickup.png)
A pickup sound effect (pickup.ogg)

Copy the .png files into your data/texture folder, and the .ogg file into your data/sound folder. Now these files can be accessed by your project and included in the game.

Setting up the Playfield

We will start by setting up the background object. This is done using config. Open the ufo.ini config file in your editor and add the following:

[BackgroundGraphic]
Texture = background.png
Pivot = center

The BackgroundGraphic defined here is called a Graphic Section. It has two properties defined. The first is Texture, which has been set to background.png. The Orx library knows where to find this image due to the properties set in the Resource section:

[Resource]
Texture = ../../data/texture

So any texture files that are required (just like in our BackgroundGraphic section) will be located in the ../../data/texture folder.

The second property is Pivot. A pivot is the handle (or sometimes "hotspot" in other frameworks). This is set to center. The position is (0, 0) by default, just like the camera. The effect is to ensure the background sits in the center of our game window. There are other values available for Pivot. To see the list of values, open the CreationTemplate.ini file in your editor, scroll to the GraphicTemplate section, and find Pivot in the list. There you can see all the possible values that could be used; top left is also a typical value.

We need to define an object that will make use of this graphic. This will be the actual entity that is used in the game:

[BackgroundObject]
Graphic = BackgroundGraphic
Position = (0, 0, 0)

The Graphic property points to the BackgroundGraphic section we defined earlier, so our object will use that graphic. The second property is the Position. In our world, this object will be created at (0, 0, 0). In Orx, the coordinates are (x, y, z). It may seem strange that Orx, being a 2D game engine, has a Z-axis. Actually, Orx is 2.5D: it respects the Z-axis for objects, and can use this for layering objects above or below each other in the game.

To make the object appear in our game, we will add a line of code in our source file to create it. In the Init() function of ufo.cpp, remove the default line:

orxObject_CreateFromConfig("Object");

and replace it with:

orxObject_CreateFromConfig("BackgroundObject");

Compile and run. The old spinning logo is now replaced with a nice tiled background object. Next, the ufo object is required. This is what the player will control. This will be covered in Part 2.
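To recap, after this part ufo.ini contains something like the following (paths as shown in the Resource section above; nothing here is new):

[Resource]
Texture = ../../data/texture

[BackgroundGraphic]
Texture = background.png
Pivot = center

[BackgroundObject]
Graphic = BackgroundGraphic
Position = (0, 0, 0)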
17. The more you know about a given topic, the more you realize that no one knows anything. For some reason (why God, why?) my topic of choice is game development. Everyone in that field agrees: don't add networked multiplayer to an existing game, you drunken clown. Well, I did it anyway because I hate myself. Somehow it turned out great. None of us know anything.

Problem #1: assets

My first question was: how do I tell a client to use such-and-such mesh to render an object? Serialize the whole mesh? Nah, they already have it on disk. Send its filename? Nah, that's inefficient and insecure. Okay, just a string identifier then?

Fortunately, before I had time to implement any of my own terrible ideas, I watched a talk from Mike Acton where he mentions the danger of "lazy decision-making". One of his points was: strings let you lazily ignore decisions until runtime, when it's too late to fix. If I rename a texture, I don't want to get a bug report from a player with a screenshot like this:

I had never thought about how powerful and complex strings are. Half the field of computer science deals with strings and what they can do. They usually require a heap allocation, or something even more complex like ropes and interning. I usually don't bother to limit their length, so a single string expands the possibility space to infinity, destroying whatever limited ability I had to predict runtime behavior. And here I am using these complex beasts to identify objects. Heck, I've even used strings to access object properties. What madness!

Long story short, I cultivated a firm conviction to avoid strings where possible. I wrote a pre-processor that outputs header files like this at build time:

namespace Asset
{
    namespace Mesh
    {
        const int count = 3;
        const AssetID player = 0;
        const AssetID enemy = 1;
        const AssetID projectile = 2;
    }
}

So I can reference meshes like this:

renderer->mesh = Asset::Mesh::player;

If I rename a mesh, the compiler makes it my problem instead of some poor player's problem. That's good! The bad news is, I still have to interact with the file system, which requires the use of strings. The good news is the pre-processor can save the day.

const char* Asset::Mesh::filenames[] =
{
    "assets/player.msh",
    "assets/enemy.msh",
    "assets/projectile.msh",
    0,
};

With all this in place, I can easily send assets across the network. They're just numbers! I can even verify them.

if (mesh < 0 || mesh >= Asset::Mesh::count)
    net_error(); // just what are you trying to pull, buddy?

Problem #2: object references

My next question was: how do I tell a client to please move/delete/frobnicate "that one object from before, you know the one". Once again, I was lucky enough to hear from smart people before I could shoot myself in the foot. From the start, I knew I needed a bunch of lists of different kinds of objects, like this:

Array<Turret> Turret::list;
Array<Projectile> Projectile::list;
Array<Avatar> Avatar::list;

Let's say I want to reference the first object in the Avatar list, even without networking, just on our local machine. My first idea is to just use a pointer:
Avatar* avatar;
avatar = &Avatar::list[0];

This introduces a ton of non-obvious problems. First, I'm compiling for a 64-bit architecture, which means that pointer takes up 8 whole bytes of memory, even though most of it is probably zeroes. And memory is the number one performance bottleneck in games. Second, if I add enough objects to the array, it will get reallocated to a different place in memory, and the pointer will point to garbage.

Okay, fine. I'll use an ID instead.

template<typename Type>
struct Ref
{
    short id;
    inline Type* ref()
    {
        return &Type::list[id];
    }
    // overloaded "=" operator omitted
};

Ref<Avatar> avatar = &Avatar::list[0];
avatar.ref()->frobnicate();

Next problem: if I remove that Avatar from the list, some other Avatar will get moved into its place without me knowing. The program will continue, blissfully and silently screwing things up, until some player sends a bug report that the game is "acting weird". I much prefer the program to explode instantly so I at least get a crash dump with a line number.

Okay, fine. Instead of actually removing the avatar, I'll put a revision number on it:

struct Avatar
{
    short revision;
};

template<typename Type>
struct Ref
{
    short id;
    short revision;
    inline Type* ref()
    {
        Type* t = &Type::list[id];
        return t->revision == revision ? t : nullptr;
    }
};

Instead of actually deleting the avatar, I'll mark it dead and increment the revision number. Now anything trying to access it will get a null pointer back. And serializing a reference across the network is just a matter of sending two easily verifiable numbers.

Problem #3: delta compression

If I had to cut this article down to one line, it would just be a link to Glenn Fiedler's blog. Which by the way is here: gafferongames.com

As I set out to implement my own version of Glenn's netcode, I read this article, which details one of the biggest challenges of multiplayer games. Namely, if you just blast the entire world state across the network 60 times a second, you could gobble up 17 mbps of bandwidth. Per client.

Delta compression is one of the best ways to cut down bandwidth usage. If a client already knows where an object is, and it hasn't moved, then I don't need to send its position again. This can be tricky to get right. The first part is the trickiest: does the client really know where the object is? Just because I sent the position doesn't mean the client actually received it. The client might send an acknowledgement back that says "hey I received packet #218, but that was 0.5 seconds ago and I haven't gotten anything since." So to send a new packet to that client, I have to remember what the world looked like when I sent out packet #218, and delta compress the new packet against that. Another client might have received everything up to packet #224, so I can delta compress the new packet differently for them. Point is, we need to store a whole bunch of separate copies of the entire world.

Someone on Reddit asked "isn't that a huge memory hog"? No, it is not. Actually I store 255 world copies in memory. All in a single giant array. Not only that, but each copy has enough room for the maximum number of objects (2048) even if only 2 objects are active.

If you store an object's state as a position and orientation, that's 7 floats: 3 for XYZ coordinates and 4 for a quaternion. Each float takes 4 bytes. My game supports up to 2048 objects.

7 floats * 4 bytes * 2048 objects * 255 copies = ... 14 MB.

That's like, half of one texture these days.
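To make that memory layout concrete, here is a minimal sketch of the statically allocated history described above (illustrative names, not the game's actual code):

// One object's state: 7 floats = 28 bytes
struct Transform
{
    float pos[3]; // XYZ position
    float rot[4]; // orientation quaternion
};

static const int MAX_OBJECTS = 2048;
static const int MAX_COPIES = 255;

struct WorldState
{
    Transform objects[MAX_OBJECTS];
};

// 28 bytes * 2048 objects * 255 copies ~= 14 MB: one flat array, no pointers
static WorldState history[MAX_COPIES];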
I can see myself writing this system five years ago in C#. I would start off immediately worried about memory usage, just like that Redditor, without stopping to think about the actual data involved. I would write some unnecessary, crazy fancy, bug-ridden compression system. Taking a second to stop and think about actual data like this is called Data-Oriented Design.

When I talk to people about DOD, many immediately say, "Woah, that's really low-level. I guess you want to wring out every last bit of performance. I don't have time for that. Anyway, my code runs fine." Let's break down the assumptions in this statement.

Assumption 1: "That's really low-level". Look, I multiplied four numbers together. It's not rocket science.

Assumption 2: "You sacrifice readability and simplicity for performance." Let's picture two different solutions to this netcode problem. For clarity, let's pretend we only need 3 world copies, each containing up to 2 objects. Here's the solution I just described. Everything is statically allocated in the .bss segment. It never moves around. Everything is the same size. No pointers at all. Here's the idiomatic C# solution. Everything is scattered randomly throughout the heap. Things can get reallocated or moved right in the middle of a frame. The array is jagged. 64-bit pointers all over the place. Which is simpler?

The second diagram is actually far from exhaustive. C#-land is a lot more complex in reality. Check the comments and you'll probably find someone correcting me about how C# actually works. But that's my point. With my solution, I can easily construct a "good enough" mental model to understand what's actually happening on the machine. I've barely scratched the surface with the C# solution. I have no idea how it will behave at runtime.

Assumption 3: "Performance is the only reason you would code like this." To me, performance is a nice side benefit of data-oriented design. The main benefit is clarity of thought. Five years ago, when I sat down to solve a problem, my first thought was not about the problem itself, but how to shoehorn it into classes and interfaces. I witnessed this analysis paralysis first-hand at a game jam recently. My friend got stuck designing a grid for a 2048-like game. He couldn't figure out if each number was an object, or if each grid cell was an object, or both. I said, "the grid is an array of numbers. Each operation is a function that mutates the grid." Suddenly everything became crystal clear to him.

Assumption 4: "My code runs fine". Again, performance is not the main concern, but it's important. The whole world switched from Firefox to Chrome because of it. Try this experiment: open up calc.exe. Now copy a 100 MB file from one folder to another. I don't know what calc.exe is doing during that 300ms eternity, but you can draw your own conclusions from my two minutes of research: calc.exe actually launches a process called Calculator.exe, and one of the command line arguments is called "-ServerName". Does calc.exe "run fine"? Did throwing a server in simplify things at all, or is it just slower and more complex?

I don't want to get side-tracked. The point is, I want to think about the actual problem and the data involved, not about classes and interfaces. Most of the arguments against this mindset amount to "it's different than what I know".

Problem #4: lag

I now hand-wave us through to the part of the story where the netcode is somewhat operational. Right off the bat I ran into problems dealing with network lag.
Games need to respond to players immediately, even if it takes 150ms to get a packet from the server. Projectiles were particularly useless under laggy network conditions. They were impossible to aim.

I decided to re-use those 14 MB of world copies. When the server receives a command to fire a projectile, it steps the world back 150ms to the way the world appeared to the player when they hit the fire button. Then it simulates the projectile and steps the world forward until it's up to date with the present. That's where it creates the projectile.

I ended up having the client create a fake projectile immediately, then as soon as it hears back from the server that the projectile was created, it deletes the fake and replaces it with the real thing. If all goes well, they should be in the same place due to the server's timey-wimey magic. Here it is in action. The fake projectile appears immediately but goes right through the wall. The server receives the message and fast-forwards the projectile straight to the part where it hits the wall. 150ms later the client gets the packet and sees the impact particle effect.

The problem with netcode is, each mechanic requires a different approach to lag compensation. For example, my game has an "active armor" ability. If players react quickly enough, they can reflect damage back at enemies. This breaks down in high-lag scenarios. By the time the player sees the projectile hitting their character, the server has already registered the hit 100ms ago. The packet just hasn't made it to the client yet. This means you have to anticipate incoming damage and react long before it hits. Notice in the gif above how early I had to hit the button.

To correct this, the server implements something I call "damage buffering". Instead of applying damage instantly, the server puts the damage into a buffer for 100ms, or whatever the round-trip time is to the client. At the end of that time, it either applies the damage, or, if the player reacted, reflects it back. Here it is in action. You can see the 200ms delay between the projectile hitting me and the damage actually being applied.

Here's another example. In my game, players can launch themselves at enemies. Enemies die instantly to perfect shots, but they deflect glancing blows and send you flying like this:

Which direction should the player bounce? The client has to simulate the bounce before the server knows about it. The server and client need to agree which direction to bounce or they'll get out of sync, and they have no time to communicate beforehand. At first I tried quantizing the collision vector so that there were only six possible directions. This made it more likely that the client and server would choose the same direction, but it didn't guarantee anything.

Finally I implemented another buffer system. Both client and server, when they detect a hit, enter a "buffer" state where the player sits and waits for the remote host to confirm the hit. To minimize jankiness, the server always defers to the client as to which direction to bounce. If the client never acknowledges the hit, the server acts like nothing happened and continues the player on their original course, fast-forwarding them to make up for the time they sat still waiting for confirmation.

Problem #5: jitter

My server sends out packets 60 times per second. What about players whose computers run faster than that? They'll see jittery animation. Interpolation is the industry-standard solution.
Instead of immediately applying position data received from the server, you buffer it a little bit, then you blend smoothly between whatever data you have. In my previous attempt at networked multiplayer, I tried to have each object keep track of its position data and smooth itself out. I ended up getting confused and it never worked well. This time, since I could already easily store the entire world state in a struct, I was able to write just two functions to make it work. One function takes two world states and blends them together. Another function takes a world state and applies it to the game.

How big should the buffer delay be? I originally used a constant until I watched a video from the Overwatch devs where they mention adaptive interpolation delay. The buffer delay should smooth out not only the framerate from the server, but also any variance in packet delivery time. This was an easy win. Clients start out with a short interpolation delay, and any time they're missing a packet to interpolate toward, they increase their "lag score". Once it crosses a certain threshold, they tell the server to switch them to a higher interpolation delay. Of course, automated systems like this often act against the user's wishes, so it's important to add switches and knobs to the algorithm!

Problem #6: joining servers mid-match

Wait, I already have a way to serialize the entire game state. What's the hold-up? Turns out, it takes more than one packet to serialize a fresh game state from scratch. And each packet may take multiple attempts to make it to the client. It may take a few hundred milliseconds to get the full state, and as we've seen already, that's an eternity. If the game is already in progress, that's enough time to send 20 packets' worth of new messages, which the client is not ready to process because it hasn't loaded yet.

The solution is—you guessed it—another buffer. I changed the messaging system to support two separate streams of messages in the same packet. The first stream contains the map data, which is processed as soon as it comes in. The second stream is just the usual fire-hose of game messages that come in while the client is loading. The client buffers these messages until it's done loading, then processes them all until it's caught up.

Problem #7: cross-cutting concerns

This next part may be the most controversial. Remember that bit of gamedev wisdom from the beginning? "Don't add networked multiplayer to an existing game"? Well, most of the netcode in this game is literally tacked on. It lives in its own 5000-line source file. It reaches into the game, pokes stuff into memory, and the game renders it. Just listen a second before stoning me. Is it better to group all network code in one place, or spread it out inside each game object? I think both approaches have advantages and disadvantages. In fact, I use both approaches in different parts of the game, for various reasons human and technical. But some design paradigms (*cough* OOP) leave no room for you to make this decision. Of course you put the netcode inside the object! Its data is private, so you'll have to write an interface to access it anyway. Might as well put all the smarts in there too.

Conclusion

I'm not saying you should write netcode like I do; only that this approach has worked for me so far. Read the code and judge for yourself. There is an objectively optimal approach for each use case, although people may disagree on which one it is.
You should be free to choose based on actual constraints rather than arbitrary ones set forth by some paradigm. Thanks for reading. DECEIVER is launching on Kickstarter soon. Sign up to play the demo here!
18. On the 2nd of November 2017 we launched a Kickstarter campaign for our game Nimbatus - The Space Drone Constructor, which aimed to raise $20,000. By the campaign's end, 3000 backers had supported us with a total of $74,478. All the PR and marketing was handled by our indie developer team of four people with a very low marketing budget. Our team decided to go for a funding goal we were sure we could reach and to extend the game's content through stretch goals. The main goal of the campaign was to raise awareness for the game and raise funds for the alpha version.

Part 1 - Before Launch

"You need a big community before you launch." That is what we believed when we launched our first Kickstarter campaign in 2016. For that first campaign, we had built up a very dedicated group of people before the Kickstarter's launch. Nimbatus also had a bit of a following before the campaign launched:

~ 300 likes on Facebook
~ 1300 followers on Twitter
~ 1000 newsletter subs
~ 3500 followers on Steam

However, there had been little interaction between players and us prior to the campaign's launch. This made us unsure whether or not the Nimbatus Kickstarter would reach its funding goal. A few weeks prior to launch, we started to look for potential ways to promote Nimbatus during the Kickstarter. We found our answer in social news sites. Reddit, Imgur and 9gag all proved to be great places to talk about Nimbatus. More about this in Part 3 - During the campaign.

As with our previous campaign, the reward structure and trailer were the most time-consuming aspects of the page setup. We realised early that Nimbatus looks A LOT better in motion and therefore decided that we should show all features in action with animated GIFs. In order to support the campaign's storytelling, "we built a ship, now we need a crew!", we named all reward tiers after open positions on the ship. We were especially interested in how the "Navigator" tier would do. This $95 tier would give backers free digital copies of ALL games our company EVER creates.

We decided against Early Bird and Kickstarter-exclusive rewards in order to avoid splitting backers into "winners and losers", based on the great advice from Stonemaier Games' book A Crowdfunder's Strategy Guide (EDS Publications Ltd., 2015). Their insights also convinced us to add a $1 reward tier, because it lets people join the update loop and build up trust in our efforts. Many of our $1 backers later increased their pledge to a higher tier. Two of our reward tiers featured games that are similar to Nimbatus. The keys for these games were provided by fellow developers. We think that this is really awesome and it helped the campaign a lot! A huge thanks to Avorion, Reassembly, Airships and Scrap Galaxy <3

Youtubers and streamers are important allies for game developers. They are in direct contact with potential buyers/backers and can significantly increase a campaign's reach. We made a list of content creators who'd potentially be interested in our game, selected mostly by browsing Youtube for "let's play" videos of games similar to Nimbatus. We sent out a total of 100 emails, each with a personalized intro sentence, no money involved. Additionally, we used Keymailer. Keymailer is a tool to contact Youtubers and streamers. At a cost of $150/month you can filter all available contacts by games they played and genres they enjoy. We personalized the message for each group. Messages automatically include an individual Steam key. With this tool, we contacted over 2000 Youtubers/Streamers who are interested in similar games.

How it turned out
- About 10 of the 100 Youtubers we contacted manually ended up creating a video/stream during the Kickstarter, including some big ones with 1 million+ subscribers.
- Over 150 videos resulted from the Keymailer outreach. Absolutely worth the investment!

Another very helpful tool to find Youtubers/Streamers is Twitter. Before, but also during the campaign, we sent out tweets stating that we were looking for Youtubers/Streamers who wanted to feature Nimbatus. We also encouraged people to tag potentially interested content creators in the comments. This brought in a lot of interested people and resulted in a couple dozen videos. We also used Twitter to follow up when people were not responding via email, which proved to be very effective.

In terms of campaign length, we decided to go with a 34-day Kickstarter, mainly because we thought it would take quite a while for word of the campaign to spread. In retrospect this was OK, but we think 30 days would have been enough too.
We were very unsure whether or not to release a demo of Nimbatus, mainly because we weren't certain the game offered enough to convince players in this early state, and we feared that our alpha access tier would lose value because everyone could already play. Thankfully, we decided to offer a demo in the end. More on this topic in Part 3 - During the campaign.

Since we are based in Switzerland, we were forced to use CHF as our campaign's currency. While the currency is automatically recalculated into $ for American backers, it was displayed in CHF for all other international backers. Even though CHF and $ are almost 1:1 in value, we believed this to be a hurdle. There is no way for us to tell how many backers were scared away because of this.

Part 2: Kickstarter Launch

We launched our Kickstarter campaign on a Thursday evening (UTC+1), which is midday in the US. In order to celebrate the launch, we did a short livestream on Facebook. We had previously opened an event page and invited all our Facebook friends to it. Only a few people were watching, and we were a bit stressed out.

In order to help us spread the word, we challenged our supporters with community goals. We promised that if all these goals were reached, each backer above $14 would receive an extra copy of Nimbatus. With most of the goals reached after the first week, we realized that we should have made the challenge a bit harder.

The first few days went better than expected. We announced the Kickstarter on Imgur, Reddit, 9gag, Instagram, Facebook, Twitter, in some forums, via our newsletter and on our Steam page. If you plan to release your game on Steam later on, we'd highly recommend that you set up your Steam page before the Kickstarter launches. Some people might not be interested in backing the game but will go ahead and wishlist it instead.

Part 3: During The Campaign

We tried to keep the campaign's momentum going. This worked out mostly thanks to the demo we had released. In order to download the Nimbatus demo, people needed to head over to our website and enter their email address. Within a few minutes, they received an automated email including a download link for the demo. We used Mailchimp for this process.

We also added a big pop-up in the demo to inform players about the Kickstarter. At first we were a bit reluctant to use this approach; it felt a bit sneaky. But after adding a line informing players that they would be added to the newsletter, and adding a huge unsubscribe button in the demo download mail, we felt that we could still sleep at night.

For our previous campaign we had also released a demo, but the approach was significantly different. For the Nimbatus Kickstarter, we used the demo as a marketing tool to inform people about the campaign. Our previous Kickstarter's demo was mainly an asset you could download if you were already checking out the campaign's page and wanted to try the game before backing.

We continued to post frequently on Imgur, Twitter, 9gag and Facebook. Simultaneously, people streamed Nimbatus on Twitch and released videos on Youtube. This led to a lot of demo downloads and therefore growth of our newsletter. A few hundred subs came in every day. Only about 10% of the people unsubscribed from the newsletter after downloading the demo.
Whenever we updated the demo or reached significant milestones in the campaign, such as being halfway to our goal, we sent out a newsletter. We also opened a Discord channel, which turned out to be a great way to stay in touch with our players. We were quite surprised to see a decent opening and link click rate, especially compared to our "normal" newsletter, which mostly includes people we personally met at events. Our normal newsletter took over two years to build up and includes about 4000 subs. With the Nimbatus demo, we gathered 50'000 subs within just 4 weeks and without travelling to any conferences. (Please note that around 2500 people subscribed to the normal newsletter during the Kickstarter.)

On the 7th day of the campaign, we asked a friend if she would give us a shoutout on Reddit. She agreed and posted in r/gaming. We will never forget what happened next. The post absolutely took off! In less than an hour, the post had reached the frontpage and continued to climb fast. It soon reached the top spot of all things on Reddit. Our team danced around in the office. Lots of people backed, a total of over $5000 came in from this post, and we reached our funding goal 30 minutes after hitting the front page.

We couldn't believe our luck. Then people started to accuse us of using bots to upvote the post. The post was reported multiple times until the moderators took it down. We were shocked and contacted them. They explained that they would need to investigate the post for bot abuse. A few hours later, they put the post back up, said they had found nothing wrong with it, and apologized for the inconvenience. Since the post had not received any upvotes while it was down, it very quickly dropped off the front page and the money flow stopped.

While this is a misunderstanding we can understand and accept, people's reactions hit us pretty hard. After the post was back up, many people on Reddit continued to accuse us and our friend. In the following days, our friend was constantly harassed when she posted on Reddit. Some people jumped over to our company's Twitter and Imgur accounts and kept on blaming us, asking if we were buying upvotes there too. It's really not cool to falsely accuse people. Almost two weeks later, we decided to start posting in smaller subreddits again. This proved to be no problem. But when we dared to post in r/gaming again later, people immediately reacted very aggressively. We took the new post down and decided to stop posting in r/gaming (at least during the Kickstarter).

After upgrading the demo with a new feature to easily export GIFs, we started to run competitions on Twitter. The coolest drones shared with #NimbatusGame would receive a free alpha key for the game. Lots of players participated and helped to increase Nimbatus' reach by doing so. We also gave keys to our most dedicated Youtubers/streamers, who then came up with all kinds of interesting challenges for their viewers. All these activities came together in a nice loop:
People downloaded the Nimbatus demo they heard about on social media/social news sites or from Youtubers/Streamers. By receiving newsletters and playing the demo, they learned about the Kickstarter. Many of them backed and participated in community goals/competitions, which brought in more new people.

Not much happened in terms of press. RockPaperShotgun and PCGamer wrote articles, each resulting in about $500, which was nice. A handful of small sites picked up the news too. We sent out a press release when Nimbatus reached its funding goal, both to manually picked editors of bigger sites and via gamespress.com.

Part 4: Last Days

Every person that hits the "Remind me" button on a Kickstarter page receives an email 48 hours before the campaign ends. This helpful reminder caused a flood of new pledges. We reached our last stretch goal a few hours before our campaign ended. Since we had already communicated this goal as the final one, we withheld announcing any further stretch goals.

We decided to do a Thunderclap 24 hours before the campaign ended. Even after having done quite a few Thunderclaps, we are still unsure how big an impact they have. A few minutes before the Kickstarter campaign was over, we cleaned up our campaign page and added links to our Steam page and website. Note that Kickstarter pages cannot be edited after the campaign ends!

The campaign ended on a Tuesday evening (UTC+1) and raised a total of $75'000, which is 369% of the original funding goal. After finishing up our "Thank you" image and sending it to our backers, it was time to rest.

Part 5: Conclusion

We are very happy with the campaign's results. We had not expected to surpass our funding goal by this much, especially since we didn't have an engaged community when the campaign started. Thanks to the demo, we were able to develop a community for Nimbatus on the go. The demo also allowed us to be less "promoty" when posting on social news sites. This way, interested people could get the demo and discover the Kickstarter from there, instead of us having to ask for support directly when posting. This, combined with the ever-growing newsletter, turned into a great campaign dynamic. We plan to use this approach again for future campaigns.
Growth

Facebook likes: 300 → 430
Twitter followers: 1300 → 2120
Newsletter signups: 1000 → 50'000
Steam followers: 3500 → 10'000
Subreddit readers: 0 → 320
Discord members: 0 → 468
Forum members: 0 → 300

More data

23% of our backers came directly from Kickstarter.
76% of our backers came from external sites.
For our previous campaign, the split was 36/64.
The average pledge amount of our backers was $26.
94 backers decided to choose the Navigator reward, which gives them access to all games our studio will create in the future. It makes us very happy to see that this kind of reward, which is basically an investment in us as a game company, was popular among backers.

Main sources of backers

Link inside demo / Newsletter: 22'000
Kickstarter: 17'000
Youtube: 15'000
Google: 3000
Reddit: 2500
Twitter: 2000
Facebook: 2000

TLDR:
- Keymailer is awesome, but also contact big Youtubers/streamers via email.
- Most money for the Kickstarter came in through the demo.
- Social news sites (Imgur, 9gag, Reddit, …) can generate a lot of attention for a game.
- It's much easier to offer a demo on social news sites than to ask for Kickstarter support.
- Collecting newsletter subs from demo downloads is very effective.
- It's possible to run a successful Kickstarter without having a big community beforehand.

We hope this insight helps you plan your future Kickstarter campaign. We believe you can do it and we wish you all the best.

About the author: Philomena Schwab is a game designer from Zurich, Switzerland. She co-founded Stray Fawn Studio together with Micha Stettler. The indie game studio recently released its first game, Niche - a genetics survival game, and is now developing its second game, Nimbatus - The Space Drone Constructor. Philomena wrote her master's thesis about community building for indie game developers and founded the nature gamedev collective Playful Oasis. As a chair member of the Swiss Game Developers association, she helps her local game industry grow.

https://www.nimbatus.ch/
https://strayfawnstudio.com/
https://www.kickstarter.com/projects/strayfawnstudio/nimbatus-the-space-drone-constructor

Related Reading: Algo-Bot: Lessons Learned from our Kickstarter failure.
19. Originally posted on Troll Purse development blog.

Unreal Engine 4 is an awesome game engine and the Editor is just as good. There are a lot of built-in tools for a game (especially shooters) and some excellent tutorials out there for it. So, here is one more. Today the topic is different methods to program player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, it can also easily translate to any game with a similar architecture.

Interaction via Overlaps

By and far, the most common tutorial approach for player-world interaction is to use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction and leverages classes already provided by the engine. Here is a simple example where overlap code is used to interact with the player:

Header

// Fill out your copyright notice in the Description page of Project Settings.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AInteractiveActor();

    virtual void BeginPlay() override;

protected:
    UFUNCTION()
    virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

    UFUNCTION()
    virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

    UFUNCTION()
    virtual void OnPlayerInputActionReceived();

    UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
    class UBoxComponent* InteractionTrigger;
};

This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character has to overlap with the InteractionTrigger component. This will then put the InteractiveActor into the input stack for that specific player. The player will then trigger the input action (via a keyboard key press, for example), and then the code in OnPlayerInputActionReceived will execute. Here is a layout of the executing code.

Source

// Fill out your copyright notice in the Description page of Project Settings.
#include "InteractiveActor.h" #include "Components/BoxComponent.h" // Sets default values AInteractiveActor::AInteractiveActor() { PrimaryActorTick.bCanEverTick = true; RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root")); RootComponent->SetMobility(EComponentMobility::Static); InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger")); InteractionTrigger->InitBoxExtent(FVector(128, 128, 128)); InteractionTrigger->SetMobility(EComponentMobility::Static); InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &ABTPEquipment::OnInteractionProxyBeginOverlap); InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &ABTPEquipment::OnInteractionProxyEndOverlap); InteractionTrigger->SetupAttachment(RootComponent); } void AInteractiveActor::BeginPlay() { if(InputComponent == nullptr) { InputComponent = ConstructObject<UInputComponent>(UInputComponent::StaticClass(), this, "Input Component"); InputComponent->bBlockInput = bBlockInput; } InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived); } void AInteractiveActor::OnPlayerInputActionReceived() { //this is where logic for the actor when it receives input will be execute. You could add something as simple as a log message to test it out. } void AInteractiveActor::OnInteractionProxyBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult) { if (OtherActor) { AController* Controller = OtherActor->GetController(); if(Controller) { APlayerController* PC = Cast<APlayerController>(Controller); if(PC) { EnableInput(PC); } } } } void AInteractiveActor::OnInteractionProxyEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex) { if (OtherActor) { AController* Controller = OtherActor->GetController(); if(Controller) { APlayerController* PC = Cast<APlayerController>(Controller); if(PC) { DisableInput(PC); } } } }   Pros and Cons The positives of the collision volume approach is the ease at which the code is implemented and the strong decoupling from the rest of the game logic. The negatives to this approach is that interaction becomes broad when considering the game space as well as the introduction to a new interactive volume for each interactive within the scene. Interaction via Raytrace Another popular method is to use the look at viewpoint of the player to ray trace for any interactive world items for the player to interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. This method eliminates the need for another collision volume for item usage and allows for more precise interaction targeting. Source AInteractiveActor.h // Fill out your copyright notice in the Description page of Project Settings. #pragma once #include "CoreMinimal.h" #include "GameFramework/Actor.h" #include "InteractiveActor.generated.h" UCLASS() class GAME_API AInteractiveActor : public AActor { GENERATED_BODY() public: virtual OnReceiveInteraction(class APlayerController* PC); }   AMyPlayerController.h // Fill out your copyright notice in the Description page of Project Settings. 
Interaction via Raytrace

Another popular method is to use the player's viewpoint to ray trace for any interactive world items the player can interact with. This method usually relies on inheritance for handling player interaction within the interactive object class. It eliminates the need for an extra collision volume per item and allows for more precise interaction targeting.

Source

AInteractiveActor.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "InteractiveActor.generated.h"

UCLASS()
class GAME_API AInteractiveActor : public AActor
{
	GENERATED_BODY()

public:
	virtual void OnReceiveInteraction(class APlayerController* PC);
};

AMyPlayerController.h

// Fill out your copyright notice in the Description page of Project Settings.

#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "AMyPlayerController.generated.h"

UCLASS()
class GAME_API AMyPlayerController : public APlayerController
{
	GENERATED_BODY()

public:
	AMyPlayerController();

	virtual void SetupInputComponent() override;

	float MaxRayTraceDistance;

private:
	AInteractiveActor* GetInteractiveByCast();

	void OnCastInput();
};

These header files define the functions minimally needed to set up raycast interaction. Note that there are two files here, as two classes need modification to support input. This is more work than the first method, which uses trigger volumes. However, all input binding is now constrained to a single class - the ACharacter or, if you design it differently, the APlayerController. Here, the latter was used. The logic flow is straightforward: the player points the center of the screen towards an object (ideally a HUD crosshair aids the aim) and presses the input button bound to Interact. From there, OnCastInput() is executed. It invokes GetInteractiveByCast(), which returns either the first interactive actor hit by the camera ray cast or nullptr if there are no collisions. Finally, AInteractiveActor::OnReceiveInteraction(APlayerController*) is invoked. That final function is where inherited classes implement interaction-specific code. The execution of the code is as follows in the class definitions.

AInteractiveActor.cpp

#include "InteractiveActor.h"

void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
{
	// Nothing in the base class (unless there is logic ALL interactive actors
	// will execute, such as cosmetics - sounds, particle effects, etc.)
}

AMyPlayerController.cpp

#include "AMyPlayerController.h"
#include "InteractiveActor.h"
#include "Engine/World.h"

AMyPlayerController::AMyPlayerController()
{
	MaxRayTraceDistance = 1000.0f;
}

void AMyPlayerController::SetupInputComponent()
{
	Super::SetupInputComponent();
	InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
}

void AMyPlayerController::OnCastInput()
{
	AInteractiveActor* Interactive = GetInteractiveByCast();
	if (Interactive != nullptr)
	{
		Interactive->OnReceiveInteraction(this);
	}
}

AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
{
	FVector CameraLocation;
	FRotator CameraRotation;
	GetPlayerViewPoint(CameraLocation, CameraRotation);

	FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

	// Ignore the player's own pawn when tracing
	FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
	TraceParams.bTraceAsyncScene = true;

	FHitResult Hit(ForceInit);
	GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

	AActor* HitActor = Hit.GetActor();
	if (HitActor != nullptr)
	{
		return Cast<AInteractiveActor>(HitActor);
	}

	return nullptr;
}

Pros and Cons

One pro of this method is that control of input stays in the player controller, while the implementation of input actions is still owned by the Actor that receives the input. One con is that the interaction fires as many times as the player clicks, and the controller does not repeatedly detect the interactive state (for highlighting or prompts, say) without a refactor using a Tick function override.
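To show where game-specific logic would live in the raytrace variant, here is a minimal sketch of a hypothetical pickup that overrides OnReceiveInteraction; the APickupActor name and its destroy-on-use behavior are illustrative assumptions, not from the original post:

// PickupActor.h (hypothetical example)
#pragma once

#include "CoreMinimal.h"
#include "InteractiveActor.h"
#include "PickupActor.generated.h"

UCLASS()
class GAME_API APickupActor : public AInteractiveActor
{
	GENERATED_BODY()

public:
	// Invoked by AMyPlayerController when its interaction ray hits this actor
	virtual void OnReceiveInteraction(class APlayerController* PC) override
	{
		UE_LOG(LogTemp, Log, TEXT("%s picked up by %s"), *GetName(), *GetNameSafe(PC));
		// For example, grant the item to the player's inventory here, then
		// remove the actor from the world.
		Destroy();
	}
};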
Conclusion

There are many methods for player-world interaction within a game world. In regards to creating Actors within Unreal Engine 4 that allow for player interaction, two of these potential methods are collision volume overlaps and ray tracing from the player controller. There are several other methods discussed out there that could also be used. Hopefully, the two implementations presented here help you decide how to go about player-world interaction within your game. Cheers!

Originally posted on Troll Purse development blog.
20. Original Post: Limitless Curiosity

Out of the various phases of a physics engine, constraint resolution was the hardest for me to understand personally. I had to read a lot of different papers and articles to fully understand how it works, so I decided to write this article to help me understand it more easily in the future if, for example, I forget how it works. This article tackles the problem by working through an example and then deriving a general formula from it. So let us delve into a pretty common scenario: two of our rigid bodies collide and penetrate each other, as depicted below.

From the scenario above we can formulate: we don't want our rigid bodies to intersect each other, so we construct a constraint where the penetration depth must be greater than or equal to zero: \(C: d \geq 0\). This is an inequality constraint; we can transform it into a simpler equality constraint by only solving it when the two bodies are penetrating each other. If the two rigid bodies don't collide, we don't need any constraint resolution. So:

if d >= 0, do nothing
else if d < 0, solve C: d = 0

Now we can solve this equation by calculating \( \Delta \vec{p1}, \Delta \vec{p2}, \Delta \vec{r1}\), and \( \Delta \vec{r2}\) that satisfy the constraint above. This is called the position-based method. It satisfies the constraint immediately in the current frame, but it might cause a jittery effect. A much more modern and preferable method, used in Box2D, Chipmunk, Bullet, and my physics engine, is the impulse-based method. In this method, we derive a velocity constraint equation from the position constraint equation above. We are working in 2D, so angular velocity and the cross product of two vectors are scalars.

Next, we need to find \(\Delta V\), the impulse, that satisfies the velocity constraint. This \(\Delta V\) is caused by a force we call the 'constraint force'. The constraint force only pushes in the direction of illegal movement, in our case the penetration normal; we don't want this force to do any work on, contribute to, or restrict any motion in a legal direction. \(\lambda\) is a scalar called the Lagrange multiplier. To understand why the constraint force acts in the \(J^{T}\) direction (remember J is a 1 by 12 matrix, so \(J^{T}\) is a 12 by 1 matrix, i.e. a 12-dimensional vector), recall the equation of a three-dimensional plane: \(\vec{n}^{T}\vec{v} = 0\). Drawing the similarity between the velocity constraint \(JV = 0\) and the plane equation, \(\vec{n}^{T}\) plays the role of J and \(\vec{v}\) the role of V, so we can interpret the velocity constraint as a 12-dimensional plane and conclude that \(J^{T}\) is the normal of that plane. If a point lies off a plane, the shortest path from the point back to the surface is along the normal direction. After we calculate the Lagrange multiplier, we have a way to get the impulse back and apply it to each rigid body, as summarized below.
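For reference, here is a sketch of the standard formulation this discussion relies on, following Erin Catto's Box2D notes (in full 3D the stacked velocity vector is 12-dimensional, which is where the 12-dimensional plane picture comes from; in the 2D case of this article's code the cross products are scalars):

\[
V = \begin{bmatrix} \vec{v}_{1} \\ \omega_{1} \\ \vec{v}_{2} \\ \omega_{2} \end{bmatrix}, \qquad
J = \begin{bmatrix} -\vec{n}^{T} & -(\vec{r}_{1} \times \vec{n}) & \vec{n}^{T} & \vec{r}_{2} \times \vec{n} \end{bmatrix}, \qquad
\dot{C}:\; JV = 0
\]

\[
F_{c} = J^{T}\lambda, \qquad \Delta V = M^{-1} J^{T} \lambda
\]

Requiring the post-impulse velocity to satisfy the constraint, \(J(V + \Delta V) = 0\), gives

\[
\lambda = -\frac{JV}{J M^{-1} J^{T}}
\]

where M is the block-diagonal mass matrix holding the masses and moments of inertia of both bodies, and \(\vec{r}_{i}\) is the contact point offset from each body's center of mass.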
Baumgarte Stabilization

Note that solving the velocity constraint doesn't mean we satisfy the position constraint; by the time we solve the velocity constraint, there is already a violation of the position constraint. We call this violation position drift. What we achieve is stopping the two bodies from penetrating any deeper (the penetration depth stops growing). This might be fine for a slow-moving object, where the position drift is not noticeable, but it becomes a problem as objects move faster. The animation below demonstrates what happens when we solve only the velocity constraint.

So instead of purely solving the velocity constraint, we add a bias term to fix any violation of the position constraint. So what is the value of the bias? As mentioned before, we need this bias to fix positional drift, so we want it to be proportional to the penetration depth. This method is called Baumgarte stabilization, and \(\beta\) is the Baumgarte term. The right value for this term differs between scenarios; we need to tweak it between 0 and 1 to find a value that keeps our simulation stable.

Sequential Impulse

If our world consisted of only two rigid bodies and one contact constraint, the above method would work decently. But in most games there are more than two rigid bodies. One body can collide and penetrate two or more other bodies, and we need to satisfy all the contact constraints simultaneously. For a real-time application, solving all these constraints simultaneously is not feasible. Erin Catto proposes a practical solution called sequential impulse. The idea is similar to projected Gauss-Seidel: we calculate \(\lambda\) and \(\Delta V\) for each constraint, one by one, from constraint 1 to constraint n (n = number of constraints). After we finish iterating through the constraints and calculating \(\Delta V\), we repeat the process from constraint 1 to constraint n until the specified number of iterations is reached. This algorithm converges to the actual solution: the more we repeat the process, the more accurate the result. In Box2D, Erin Catto sets ten as the default number of iterations.

Another thing to notice is that while satisfying one constraint we might unintentionally satisfy another. Say we have two different contact constraints on the same rigid body. When we solve \(\dot{C1}\), we might incidentally make \(\dot{d2} \geq 0\). Remember that our equality form solves \(\dot{C}: \dot{d} = 0\), not \(\dot{C}: \dot{d} \geq 0\), so we must not apply it to \(\dot{C2}\) anymore. We can detect this by looking at the sign of \(\lambda\): if it is negative, the constraint is already satisfied, and using the negative lambda as an impulse would pull the bodies closer together instead of pushing them apart. It is fine for an individual \(\lambda\) to be negative, but we need to make sure the accumulation of \(\lambda\) is not negative. In each iteration, we add the current lambda to normalImpulseSum, then clamp normalImpulseSum between 0 and positive infinity. The actual Lagrange multiplier that we use to calculate the impulse is the difference between the new normalImpulseSum and the previous normalImpulseSum.
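In symbols, with \(\lambda_{acc}\) standing for normalImpulseSum, the clamping step in each iteration amounts to:

\[
\lambda_{acc}^{new} = \max(\lambda_{acc} + \lambda,\; 0), \qquad
\lambda_{applied} = \lambda_{acc}^{new} - \lambda_{acc}
\]

This keeps the total normal impulse non-negative while still letting an individual iteration subtract impulse that earlier iterations over-applied.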
Restitution

Okay, now we have successfully resolved contact penetration in our physics engine. But what about simulating objects that bounce when a collision happens? The property of bouncing on collision is called restitution. The coefficient of restitution, denoted \(C_{r}\), is the ratio between the parting speed after the collision and the closing speed before the collision. The coefficient of restitution only affects the velocity along the normal direction, so we need to take the dot product with the normal vector. Notice that in this specific case \(V_{initial}\) is just JV.

If we look back at our constraint above, we set \(\dot{d}\) to zero because we assumed the object does not bounce back (\(C_{r}=0\)). So if \(C_{r} \neq 0\), instead of 0 we can modify our constraint so the desired velocity is \(V_{final}\). We can merge our old bias term with the restitution term to get a new bias value.

// init constraint
// Calculate J(M^-1)(J^T). This term is constant, so we can calculate it first.
for (int i = 0; i < constraint->numContactPoint; i++)
{
    ftContactPointConstraint *pointConstraint = &constraint->pointConstraint[i];

    pointConstraint->r1 = manifold->contactPoints[i].r1 - (bodyA->transform.center + bodyA->centerOfMass);
    pointConstraint->r2 = manifold->contactPoints[i].r2 - (bodyB->transform.center + bodyB->centerOfMass);

    real kNormal = bodyA->inverseMass + bodyB->inverseMass;

    // Calculate r X normal
    real rnA = pointConstraint->r1.cross(constraint->normal);
    real rnB = pointConstraint->r2.cross(constraint->normal);

    // Calculate J(M^-1)(J^T).
    kNormal += (bodyA->inverseMoment * rnA * rnA + bodyB->inverseMoment * rnB * rnB);

    // Save inverse of J(M^-1)(J^T).
    pointConstraint->normalMass = 1 / kNormal;

    pointConstraint->positionBias = m_option.baumgarteCoef * manifold->penetrationDepth;

    ftVector2 vA = bodyA->velocity;
    ftVector2 vB = bodyB->velocity;
    real wA = bodyA->angularVelocity;
    real wB = bodyB->angularVelocity;

    ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));

    // Calculate JV (the initial relative velocity along the contact normal)
    real jnV = dv.dot(constraint->normal);
    pointConstraint->restitutionBias = -restitution * (jnV + m_option.restitutionSlop);
}
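Putting the bias pieces together, the per-contact-point impulse that the solver below computes corresponds to the following (a sketch in the article's notation, where d is the penetration depth and slop is m_option.restitutionSlop):

\[
\lambda = \frac{-\,JV \;+\; \frac{\beta}{\Delta t}\,d \;-\; C_{r}\,(JV_{initial} + slop)}{J M^{-1} J^{T}}
\]

Here \(JV_{initial}\) is the relative normal velocity measured before the solver runs, which is why the restitution bias is precomputed in the init loop above rather than inside the iteration.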
// solve constraint
while (numIteration > 0)
{
    for (int i = 0; i < m_constraintGroup.nConstraint; ++i)
    {
        ftContactConstraint *constraint = &(m_constraintGroup.constraints[i]);

        int32 bodyIDA = constraint->bodyIDA;
        int32 bodyIDB = constraint->bodyIDB;
        ftVector2 normal = constraint->normal;
        ftVector2 tangent = normal.tangent();

        for (int j = 0; j < constraint->numContactPoint; ++j)
        {
            ftContactPointConstraint *pointConstraint = &(constraint->pointConstraint[j]);

            ftVector2 vA = m_constraintGroup.velocities[bodyIDA];
            ftVector2 vB = m_constraintGroup.velocities[bodyIDB];
            real wA = m_constraintGroup.angularVelocities[bodyIDA];
            real wB = m_constraintGroup.angularVelocities[bodyIDB];

            // Calculate JV (the relative velocity of the contact points along the normal)
            ftVector2 dv = (vB + pointConstraint->r2.invCross(wB) - vA - pointConstraint->r1.invCross(wA));
            real jnV = dv.dot(normal);

            // Calculate lambda
            real nLambda = (-jnV + pointConstraint->positionBias / dt + pointConstraint->restitutionBias) * pointConstraint->normalMass;

            // Add lambda to the accumulated normal impulse and clamp
            real oldAccumI = pointConstraint->nIAcc;
            pointConstraint->nIAcc += nLambda;
            if (pointConstraint->nIAcc < 0)
            {
                pointConstraint->nIAcc = 0;
            }

            // The lambda we actually apply is the clamped difference
            real I = pointConstraint->nIAcc - oldAccumI;

            // Calculate linear impulse
            ftVector2 nLinearI = normal * I;

            // Calculate angular impulse
            real rnA = pointConstraint->r1.cross(normal);
            real rnB = pointConstraint->r2.cross(normal);
            real nAngularIA = rnA * I;
            real nAngularIB = rnB * I;

            // Apply linear impulse
            m_constraintGroup.velocities[bodyIDA] -= constraint->invMassA * nLinearI;
            m_constraintGroup.velocities[bodyIDB] += constraint->invMassB * nLinearI;

            // Apply angular impulse
            m_constraintGroup.angularVelocities[bodyIDA] -= constraint->invMomentA * nAngularIA;
            m_constraintGroup.angularVelocities[bodyIDB] += constraint->invMomentB * nAngularIB;
        }
    }
    --numIteration;
}

General Steps to Solve a Constraint

In this article, we have learned how to resolve contact penetration by defining it as a constraint and solving it. But this framework is not only used to solve contact penetration; we can do many more cool things with constraints, for example hinge joints, pulleys, springs, etc. So this is the step-by-step of constraint resolution:

1. Define the constraint in the form \(\dot{C}: JV + b = 0\). V is always \(\begin{bmatrix} \vec{v1} \\ w1 \\ \vec{v2} \\ w2\end{bmatrix}\) for every constraint, so we need to find J, the Jacobian matrix, for that specific constraint.
2. Decide the number of iterations for the sequential impulse.
3. Find the Lagrange multiplier by inserting the velocity, the mass, and the Jacobian matrix into the equation \(\lambda = -\frac{JV + b}{J M^{-1} J^{T}}\).
4. Do step 3 for each constraint, and repeat the whole process for the chosen number of iterations.
5. Clamp the Lagrange multiplier if needed.

This marks the end of this article. Feel free to ask if something is still unclear, and please inform me if there are inaccuracies in my article. Thank you for reading.

NB: Box2D uses sequential impulse, but it no longer uses Baumgarte stabilization; it uses full NGS to resolve the position drift. Chipmunk still uses Baumgarte stabilization.

References
- Allen Chou's post on Constraint Resolution
- A Unified Framework for Rigid Body Dynamics
- An Introduction to Physically Based Modeling: Constrained Dynamics
- Erin Catto's Box2D and presentations on constraint resolution

    GameDev Contractors

1. We are a full-service 3D animation and art studio with over 10 years of experience in animation, game development, training simulators and other related industries. We are proud of our 3D team, consisting of some of the best Ukrainian 3D professionals with extensive experience in AAA game titles, animated movies, and cartoons. Our artists have backgrounds in gaming companies such as Crytek, Ubisoft, and 4A Games, as well as big animation studios.

Please find more info about us at tavo-art.com
2. Hello! My name is Mika Pilke and I've been a hobbyist musician and music writer for about 20 years. So far I haven't been very active in looking for projects that need sounds or music, but I'd like to change that.

I've also been an active gamer for 25 years, and creating games has been something of a dream of mine too. Since I like creating different atmospheres, it seems a logical move to offer my skills to someone who is developing them. Of course, I'm willing to write music for much more than games, if needed. I have to mention that I'm not a rock-hard pro, but I am very serious about my work. Two years ago there was a project for which I wrote 30 songs/ambiences, but the game was never finished. That doesn't really matter, though, since it was a fun journey and gave me some important experience nevertheless. I'm not sure of the total number of songs I've written, but it is well over 100. As an artist I am quite versatile. I tend towards somewhat darker atmospheres, but really, feel free to ask anything or offer any kind of project! I might have something up my sleeve. So far I have been mixing and mastering my work alone, but for commercial projects I have contacts I can use to master and enhance my creations to top quality (the two ambient songs listed below are mastered a bit too quiet, since I lacked some mastering tools back then). Right now I have a full-time job, so if you need 20 songs in two weeks, I probably won't be able to do them all for you. But if you give me a month or (better) a few months, it definitely shouldn't be a problem. :)

Anyway, here's some of my recent work. Everything you hear or see is my own work (photos, 3D modeling, animation, filming, music, mixing, singing, editing etc.), except the artwork in the video for the song Hellbreaker, where Jaime Jasso created all the visual art.

Dreamland Synthesis - Awaiting: made to serve as menu music or similar (horror etc.).
Dreamland Synthesis - Morbid Tomorrow: darkish ambient.
Dreamland Synthesis - Nebula Core: maybe suitable for a dark sci-fi-styled horror game.
Dreamland Synthesis - Hellbreaker: I think I wrote this imagining some sort of chase.
Dreamland Synthesis - In Peace: my goal was a song that would fit an RPG or JRPG. Village music?
Dead Cold Ground - Towards Eternity: my project at the beginning of this year. It kind of got out of hand and took almost a month to finish. :) Working alone takes its toll.
3. Sound design for games (soundtracks/SFX/VO). Your project will be filled with bright and juicy sound, and your music will be recognized from the very first notes! Our studio employs experienced specialists, each a professional in their field: musicians who compose hits, sound designers who create colorful sound effects, and voice actors who bring characters to life. Our team will help you implement any idea in the field of sound, from rich and unique sound effects to a fascinating, well-thought-out soundtrack! We love and appreciate our customers, providing an individual and responsible approach to every project. You will have full control over the process and get a great result, delivered precisely on deadline!

      Portfolio and contacts on our website: 
      http://www.onsoundwaves.com
      https://soundcloud.com/onsoundwaves
      https://www.youtube.com/user/OnSoundWaves/videos

      Contacts:
      https://www.facebook.com/yakovlev.j
      skype: yakovlev.j
      mail: info@onsoundwaves.com

Reasons to choose us:
- We have developed sound design for more than 100 projects;
- We can develop a unique sound design of the highest quality for you - your project will be implemented by professionals who work in game development and have extensive experience in sound design for a number of successful projects;
- We can help you develop the terms of reference;
- We will do a test job on your project, so you can make sure of the quality of our work and the advantages of working with us;
- We always meet deadlines;
- You can contact us anytime and learn about the work in progress on your project;
- Transfer of audio content is done by contract.
    4. I'm available for contract work!

I do environments, level design, props and characters, high-poly or low-poly:
      https://www.behance.net/thomasguillon

      You can contact me through my website or send me an email at: thomasguillon@yahoo.com
5. Hey there! Jannik - that's me. I've been a freelance web developer since March 2016 and was a regular, employed web developer before that - about five years of total experience. What can I help you with? Well, basically anything web development related. I bring a vast amount of experience working with PHP, MySQL, PostgreSQL, JavaScript, jQuery, CSS(3), HTML(5) and Dojo, amongst others. You can find a full list on my website. I've been working for a company called InnoGames (and many more) for quite a few years - one of the major browser/mobile game developers in Germany and Europe - and I'm always working on my own browser and client game projects, too. How much do I charge? It's a tough question. I usually charge about 33-35 EUR per hour (net) for remote work during prime time (Monday-Friday, 9-to-5). However, I'm aware that there are lots of decent projects and start-ups out there on a budget, so I'm offering fixed prices and (much) lower rates for non-priority work I can do in the evenings and on weekends. General rule: it's negotiable, and I'm more willing to find a good rate for both of us if the project sounds interesting. Anyway, should you be interested - don't shy away. I'll be available to answer your requests and find a good deal.

You can reach me by sending a mail to hello@janniklorenzen.de. Thanks!