Features

    2. I'm not 100% sure exactly what you are doing and I don't know Python, but years ago I wrote a Game of Life program. One common way of implementing wrapping for stuff like this is to use the modulus operator. For instance, say you have a range of 0 to 10 by 0 to 10. You can use the formula (location + 10) mod 10. So say you are at (0,0) and you want to address (x-1, y-1): that's ((-1+10) mod 10) for each coordinate, which gives you (9,9). Now let's say you are at (9,9) and you want to address (x+1, y+1): that's ((10+10) mod 10) for each coordinate, which gives you (0,0). So if you do that, the wrapping part should be super simple.
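       A minimal C++ sketch of that formula (illustrative; the function name is my own choosing):

           #include <cstdio>

           // Wrap a grid coordinate into [0, size); handles coord == -1 and coord == size.
           int wrap(int coord, int size)
           {
               return (coord + size) % size;
           }

           int main()
           {
               printf("%d\n", wrap(-1, 10)); // (0,0) addressing x-1 -> 9
               printf("%d\n", wrap(10, 10)); // (9,9) addressing x+1 -> 0
               return 0;
           }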
    3. Thank you for this very nice answer! I now store projectiles in a separate array and have added simple spatial partitioning for the game entities (a uniform grid of vectors storing pointers to entities), and it's working very well. I'm just not sure about using vectors for the spatial partitioning. I have a vector of vectors (for 2D) storing vectors of pointers to my game objects. When I need to move an object to another cell, I have to remove it from its current cell, and I can't find a way of doing this without iterating over all the objects in the cell. Should I use linked lists for all cells instead of vectors? It's probably not dramatic performance-wise since there are not a lot of objects changing cells at the same time, but it doesn't look right.
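       On the removal question: if the order of objects inside a cell doesn't matter, one common O(1) alternative to linked lists (a hedged sketch, not from the thread; the slot bookkeeping field is my own addition) is to store each object's index within its cell and remove with swap-and-pop:

           #include <vector>

           struct Entity
           {
               int slotInCell; // index of this entity inside its cell's vector
               // ... game data ...
           };

           // Remove 'e' from 'cell' in O(1): overwrite its slot with the last element.
           void removeFromCell(std::vector<Entity*>& cell, Entity* e)
           {
               Entity* last = cell.back();
               cell[e->slotInCell] = last;       // move the last entity into the hole
               last->slotInCell = e->slotInCell; // fix the moved entity's bookkeeping
               cell.pop_back();
           }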
    4. trojanfoe

      Migrating from Win32 (DX) to UWP

      I am also interested in targeting UWP but have never developed for it. You will get a flavour of what's involved if you install the Direct3D Game Templates for VS, create a solution with a "UWP DirectX 11 (or 12) DR C++/WinRT" project and a "Win32" version, and look at the implementation in Main.cpp and DeviceResources.cpp (Game.cpp is 99% the same in both, which is great). From what I can see you won't have to learn much about UWP and WinRT, as that's taken care of by the template (WinRT is the Windows Runtime subset of Win32 API calls that's supported on all Win10 platforms). Also check out this video about C++/WinRT from Microsoft, as it means you can write in ISO C++ and not C++/CX, and that's a good thing. Something else I found out recently is that if you want more of the GPU for your UWP game on Xbox One then it needs to be DirectX 12, which could be a show-stopper for some.
    5. JoeJ, thank you for the full information. From my point of view, I think the biggest decrease in performance when using OpenCL comes from CPU/GPU communication, and also from sharing graphics resources with OpenCL. Am I right? What do you think about OpenCL's future? For example, Khronos has plans for unified Vulkan and OpenCL interoperability. I try to use OpenCL for my 3D engine, but I get some performance decrease when using OpenCL for frustum culling, indirect drawing and particle system simulation. nVidia has some OpenCL driver bugs: https://devtalk.nvidia.com/default/topic/1035913/cuda-programming-and-performance/clcreatefromd3d11buffernv-returns-cl_invalid_d3d11_resource_nv-for-id3d11buffer-with-d3d11_resource_-/ But some projects still use OpenCL: Bullet Physics, AMD Radeon Rays, AMD Pro Render
    7. Short Bio: I come from a concept art and 3D background and have worked commercially on series like The Legend of Zelda, Warhammer and Magic: The Gathering. On the indie side, I've made some small, quick online games with other teams in the past for fun, and worked on some single-player personal projects.

       Availability: I'm available to work 7 days a week, pretty much all day, and I'm looking for someone who shares the same motivation, passion for games and work ethic.

       Interested in working on: Currently I'm mainly interested in developing 2D multiplayer games for PC and making PSX/N64-era style low-poly games.

       Favourite Games: Final Fantasy VII, Donkey Kong Country 2, Super Mario Bros 3, Megaman Legends, Ocarina of Time, Symphony of the Night, Phantasy Star Online, Mario 64, Cave Story, Resident Evil 2, Resident Evil 1 (REmake not ReREMake), Smash Bros: Melee, Nier: Automata, Splatoon, Dead Or Alive 2, Metal Gear Solid, Chrono Trigger, the list goes on.

       Looking for: I'm looking for someone who's really into programming and ideally has the same taste in games. Even if you're not the most experienced programmer, if you have similar taste and are really motivated then please feel free to email me; that's fine too. I'm not interested in joining pre-existing teams or making mobile games, though.

       Okay, show me some stuff: I've quickly put together some examples of 2D sprite work I've done here: https://musume.club/gallery.html Also, I'm interested in PSX/N64-era low-poly 3D, so if you're into that, I've put up some quick WebGL samples here: http://musume.club/pianopsx/ http://musume.club/snow/

       So if you're interested in teaming up, shoot me an email and let's make some awesome games! goodbyesquare@gmail.com
    8. This should all be a simple matter of vector addition.

           if (player.WalkRight() || player.WalkLeft())
           {
               double sign = player.WalkRight() ? 1.0 : -1.0;
               player.acceleration = sign * camera.RightVector.Normalize()
                                   * (player.WalkForce() / player.Mass()) * SecondsPerFrame();
           }
           // Repeat for up/down and forward/backward
           // ...
           // Now change the player's velocity and position
           player.velocity += player.acceleration;
           player.position += player.velocity;
    9. I have a very big project covering different algorithms, so this is an average factor, not coincidence. I've measured this mainly on AMD GPUs, but on NV it is similar. There are 3 reasons:
       1. The Vulkan compiler is better, maybe 15%. (CL is initially mostly faster, but after optimizing, VK wins. Exceptions are rare.)
       2. Prerecorded command buffers. In VK you can record the whole program flow (invoking programs and the memory barriers in between) to a GPU command buffer, and per frame you only say 'run this command buffer'. So there is almost no CPU<->GPU communication necessary.
       3. Indirect dispatch. With VK you can set the workload for a later shader from the current shader. With OpenCL you need to download the result of the current shader to the CPU and set the workload for the later shader from there (a real performance killer).
       The combination of points 2 and 3 allows you to do complex stuff without any CPU<->GPU communication. This means you often have recorded some dispatches that at runtime turn out to have zero work. This has a cost (memory barriers still being executed), but overall it explains the big speedup I see. (I think I have about 100 dispatches.) This is all about OpenCL 1.x (!) I assume CL 2.0 has similar / better things as well, but I've never used it. If you consider porting to Vulkan, the work is mainly on the host side. The Vulkan API results in 10 times more code than OpenCL. For the shaders I use a C preprocessor to make my CL code look like GLSL. Basically I only have to write function headers twice and there is a bit of clutter with #ifdefs, but it's no problem to work with. (I also solve things like missing pointers in GLSL with #defines, no downsides there.)
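       To illustrate point 3 concretely, a minimal Vulkan sketch (my own hedged example, not the poster's code): an earlier compute pass writes a VkDispatchIndirectCommand into a buffer, and the prerecorded command buffer consumes it with vkCmdDispatchIndirect, so no CPU round trip is needed.

           #include <vulkan/vulkan.h>

           // Record an indirect dispatch; 'indirectBuf' is assumed to be filled by an
           // earlier shader in this same command buffer (with a barrier in between).
           void recordIndirectDispatch(VkCommandBuffer cmd, VkPipeline pipeline, VkBuffer indirectBuf)
           {
               vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
               // Group counts {x, y, z} are read from the buffer at execution time.
               vkCmdDispatchIndirect(cmd, indirectBuf, 0);
           }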
    10. I want to stress my application by rendering lots of polygons. I use DX12 (SharpDX) and I upload all vertices to GPU memory via an upload buffer. In normal cases everything works fine, but when I try to create a large upload buffer (CreateCommittedResource) I get a non-informative exception from SharpDX, and the device is being removed. GetDeviceRemovedReason (DeviceRemovedReason in SharpDX) returns 0x887A0020: DXGI_ERROR_DRIVER_INTERNAL_ERROR. I'm guessing it's all because it can't find a consecutive chunk of memory big enough to create the buffer (I have 6GB on the card and I'm currently trying to allocate a buffer smaller than 1GB). So how do I deal with this? I guess I could create several smaller buffers instead, but I can't just put the CreateCommittedResource call in a try-catch section. The exception is serious enough to remove my device, so any further attempts will fail after the first try. Can I somehow know beforehand what size is OK to allocate?
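       One way to avoid guessing (a hedged sketch, not from the original post; requires DXGI 1.4 and an IDXGIAdapter3) is to query the process budget with QueryVideoMemoryInfo before allocating, and fall back to several smaller committed resources when the requested size exceeds the headroom:

           #include <dxgi1_4.h>

           // Returns the remaining local (on-card) budget for this process, in bytes.
           // 'adapter' is assumed to be the IDXGIAdapter3 the D3D12 device was created on.
           UINT64 LocalBudgetHeadroom(IDXGIAdapter3* adapter)
           {
               DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
               if (FAILED(adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
                   return 0;
               return (info.Budget > info.CurrentUsage) ? info.Budget - info.CurrentUsage : 0;
           }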
    11. Why? Do you have some tests or other information?
    12. Edgar Reynaldo

      Allegro Lives! Try it today!

      The forum is maintained by Matthew Leverton. Suffice it to say, he's been quite busy with RL. You can reach him by PM on allegro.cc. Which manuals are you referring to? I have two up to date manuals for Allegro 4.4.3 and Allegro 5.2.4 in CHM format. https://bitbucket.org/bugsquasher/unofficial-allegro-5-binaries/downloads/Allegro443.chm https://bitbucket.org/bugsquasher/unofficial-allegro-5-binaries/downloads/Allegro524.chm You're very welcome. Stop by the forums and say hello!
    13. Valach

      Allegro Lives! Try it today!

      I really like Allegro; I think it is a very good library to start with when you already know the basics of a programming language and you want to improve your skills. I chose Allegro because I liked the fact that I can use C, whose basics I know, and I don't necessarily need to learn C++ to use it. Another plus is that Allegro is cross-platform. There is a sufficient number of examples on the liballeg site. You can use the forum to get help, which is still quite active. All I want to change is the forum; the design looks old (some might say retro), but this is not the biggest problem. The biggest problem is, in my opinion, that all the answers are automatically locked quickly, and older topics are automatically hidden and cannot be easily viewed. Also, some manuals on the liballeg site are outdated. I would like to say thanks for this library, and for all the help and willingness of the developers and other contributors who help in the forum or in some other way. Keep going guys
    14. Edgar Reynaldo

      Allegro Lives! Try it today!

      Historically (I'm speaking of a few years ago) the Raspberry PI port worked. I don't know what happened, but newer models of the PI totally broke the graphics drivers. It was building on the PI since Trent G ported it to PI in 2013 ( https://www.allegro.cc/forums/thread/611802/975070#target ) . As of the beginning of the last year it was actually working, and working pretty well on the PI 2 and 3 for martinohanlon : ( https://www.allegro.cc/forums/thread/616673/1027519#target ) . He made a complete game called Mayhem 2 ( https://github.com/martinohanlon/mayhem-pi ) . @Chris Katko Weren't you involved in a thread about issues with the PI 2 and 3 being totally slow? Do you remember what the issue was or the thread where we discussed that? Here it is : https://www.allegro.cc/forums/thread/617378 and the issue on the rpi forums https://www.raspberrypi.org/forums/viewtopic.php?t=191791 Here's what he came up with for possible solutions : https://www.allegro.cc/forums/thread/617378/1037321#target
    15. Hello guys, we at Game Matter are looking for a level designer to join our team. Be sure to send me a message if you are interested 🙂
    16. It depends on the particular toolchain you are using. This isn't part of C++; a linker can do it any way it wants. Newer Visual Studio versions like you to have dependencies set up for each module, if I remember right, whereas some other build systems aren't so picky (perhaps they do this for you). If you think it has linked against the same thing multiple times (for the same exe / dll), it will deal with this duplication for you; compilers are clever like that (same as calling the same function from different bits of your code). Calling the same function from multiple DLLs is a different matter, however. As above, the linker will deal with it. It is with shared libraries (DLLs) that you are more likely to deal with duplication problems / library mismatches. If you want 2 DLLs to share code, usually they would both have to do something like link to the same 3rd DLL (like msvcrt, for example). DLLs have a specific list of functions that are marked as being callable from the outside world; anything else is internal. In static libraries, anything is potentially accessible. Think of a DLL as a pre-compiled executable, but with a small list of functions you can use to call into it, and a static library as a bunch of object files (compiled C++) ready for linking into a bigger program. Afaik with a DLL there's no search for 'functionality' *. There's a list of required DLLs for the .exe; if they are not there, the program fails to run. If they are there, they get loaded into memory, and the predefined list of functions can be called. There's some magic fixup for the functions, but this is OS / implementation specific; it could be done a number of ways, for example just using a jump table. This is all just general info as I understand it; I've not written one of these particular compilers / linkers. This kind of thing is not part of C++ or any language; it is down to how the operating system / build system likes to do things. * Actually you can query function addresses, and this is useful sometimes, but it is not the typical use case.
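        To illustrate the footnote about querying function addresses at runtime, a small hedged Win32 sketch (the DLL name and export are hypothetical):

            #include <windows.h>
            #include <cstdio>

            typedef int (*AddFn)(int, int);

            int main()
            {
                HMODULE lib = LoadLibraryA("mymath.dll"); // hypothetical DLL
                if (!lib)
                    return 1;
                // Look up an exported function by name instead of linking against it.
                AddFn add = (AddFn)GetProcAddress(lib, "add"); // hypothetical export
                if (add)
                    printf("%d\n", add(2, 3));
                FreeLibrary(lib);
                return 0;
            }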
    17. MikhailGorobets

      Compact YCoCg Frame Buffer

      Thank you
    18. Aqua Costa

      Compact YCoCg Frame Buffer

      You can download a demo with source code from the paper website, which includes code on how to unpack the buffer. The simplest method is to sample the current pixel to get the luminance (Y) and one of the chroma values, and then sample one of the neighbouring pixels to get the other chroma value:

          uint2 coord_neighbour = coord;
          coord_neighbour.x += 1;

          float4 sample0 = texture0.Load(uint3(coord, 0));
          float4 sample1 = texture0.Load(uint3(coord_neighbour, 0));

          float y  = sample0.r;
          float co = sample0.g;
          float cg = sample1.g;

          // switch which chroma value is Co/Cg based on pixel position
          if ((coord.x & 1) == (coord.y & 1))
          {
              co = sample1.g;
              cg = sample0.g;
          }

          float3 col = YCoCg2RGB(y, co, cg);

      You'll probably want to use one of the more complex reconstruction methods from the paper (check the source code), but this gives you the basic idea.
    19. You said that it is open source. Where is the link to the source code?
    20. buzylaxy

      Card game AI help

      Ah, you've got a point guys, thanks for your comprehension, and sorry. Yes, it's an existing game with no rule book; it's more of a folklore thing played with Spanish cards (it is not a Spanish game, however).

      The setup: 40 cards = 4 cards of each type, 1 to 7 plus jack / knight / king. The dealer deals 4 cards on the first round, then 3 each round after; the dealer always plays last. There can be 2, 3 or 4 players.

      What you do on your turn (you can play one card only):
      During the first round: on your first turn you play a card from your hand (this is no match, no point, ever); on your 2nd turn you try to make a match using cards in hand and the cards on the table.
      During the rounds after: you try to make a match using cards in hand and the cards on the table.
      During the final round: the last player to score points gets all the cards left on the table.

      Battle: a chance to make a hit (pair), triplet or quadruplet. When you play a card, if the next player makes a match with your card, that's a hit; if you respond with another hit, that's a triplet; if the other player responds with yet another hit, that's a quadruplet (a rare case in 2-player mode, as it means the dealer dealt a pair of the same type to each hand; it has more chance of happening in 4-player mode).

      When the game ends: when there are no cards left to deal.

      How you determine who won: the player with more cards wins, since each card is 1 point added to the final score.
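      For anyone sketching the AI, a possible C++ model of the deck described above (purely illustrative; names are my own):

          #include <vector>

          // 40 cards: ranks 1-7 plus jack, knight and king, 4 copies of each rank.
          enum class Rank { R1 = 1, R2, R3, R4, R5, R6, R7, Jack, Knight, King };

          struct Card
          {
              Rank rank;
              int copy; // 0..3, which of the 4 identical cards this is
          };

          std::vector<Card> makeDeck()
          {
              std::vector<Card> deck;
              for (int r = 1; r <= 10; ++r)
                  for (int c = 0; c < 4; ++c)
                      deck.push_back({ static_cast<Rank>(r), c });
              return deck; // 40 cards, matching the setup in the post
          }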
    21. Mussi

      3ds max reduce poly count

      Have you tried "collapsing" the modifiers? The option is available when you right click your modifier.
    22. Toastmastern

      Weirdness regarding Z-buffer/Pixel shader

      I was able to solve it. The issue was with my blend state. This question pointed me in the right direction: https://gamedev.stackexchange.com/questions/107866/directx11-alphablending-rendering-problem

      My blend state code now looks like this:

          blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
          blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;

      Instead of:

          blendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
          blendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_ONE;

      I believe it has to do with the .DestBlend only, due to the way my pixel colour is blended. But now it works anyway. Hope this thread can help anyone else in need //Toastmastern
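      For context, a fuller D3D11 blend-state setup around those two fields might look like the following (a hedged sketch; the fields other than SrcBlend/DestBlend are common defaults, not taken from the thread):

          #include <d3d11.h>

          // 'device' is assumed to be a valid ID3D11Device*.
          ID3D11BlendState* CreateAlphaBlendState(ID3D11Device* device)
          {
              D3D11_BLEND_DESC desc = {};
              desc.RenderTarget[0].BlendEnable           = TRUE;
              desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
              desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
              desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
              desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
              desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
              desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
              desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

              ID3D11BlendState* state = nullptr;
              device->CreateBlendState(&desc, &state);
              return state;
          }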
    23. Hi all, I've been working on a game for some time now, and it's getting close to completion. Thinking about the target market of the game, I'd like for it to be able to work on the X1 and Windows Store to make it more accessible. There's a screenshot below if you're interested; one has to get the fluid from the top to the bottom, solving various puzzles as you go along. The main advantage of the X1 would be sofa-based multiplayer where players are racing against each other, each with their own instance of the level. The game itself is structured so that there's a layer of abstraction between the Win32 inputs (keyboard etc.), the Win32 message queue, etc., so I now have, essentially, an orchestrating layer which gets initialised with the DX items (swap chain, RTVs) and presents the caller with a frame method to handle everything. I'd like to have both the UWP version for X1 as well as standard Win32 for Win7 machines / Steam etc. Looking at the documentation for UWP, I'm a little bit lost for a simple example which allows me to fully understand the way it works. By background, most of my coding is via WinForms / C#; I've only used the combination of Win32/C++ on this project. Does anyone know of a good tutorial site which I can use to educate myself, please? I'm looking to be able to: 1. Specify the use of full screen (presumably Win10 only). 2. Specify the screen resolution / refresh frequency; I've been lazy in the game and sync the game loop to the 60Hz refresh rate of the screen (the fluid is very discrete in its movement capabilities). 3. Handle key-presses outside of Win32 messages. I'm assuming that XInput functions exactly the same. The next follow-up question: looking at the following link (MS's ID@Xbox overview), there appears to be the implication that I have to allow users to sign in / interact with the user identity etc. Any good recommendations of where to start looking at these? And finally, if anyone knows of any good web sites / additional forums for this sort of thing, then please do let me know. Thanks Steve
    24. jstragie

      Invitation: participatory design study

      Almost 50 responses! Thanks already. Study open until July 8th!
    25. I've been trying different algorithms, and just yesterday I adapted one from the Graphics Programmer's Black Book (chapter 17), and it works... but it doesn't wrap around the edges like the other algorithms do. It does vertically, but not horizontally. I'm doing the wrapping by using an extra outer border of cells all around, where each border cell gets a copy of the opposite inner border. I've been trying for hours to figure out why it isn't wrapping around, but I've got nowhere so far. Meanwhile I also burned out. If someone else could take a look and see if they can figure out what's wrong, I'd appreciate it a lot. A fresh pair of eyes might see better than mine. I don't know if I should paste the code right here, as it's a little long (some 200 lines), so meanwhile it's in this repo right here. It's a simple console app; it works in the Windows console (I don't know about the Linux terminal). There are two generation algorithms there for comparison, and you can easily switch between them using the algo variable. The SUM one works fine; the BITS one is the one that doesn't. Not sure what else to say. I tried commenting the code for clarity. Well, if someone has 5 minutes to spare, I'll greatly appreciate it. Thanks in advance.

       EDIT: A specific symptom that I noticed (that I didn't think to mention earlier) is that when using the BITS algorithm (which uses bit manipulation, hence the name of the flag), the cells on the outer edges don't seem to be affected by this part of the code that kills a cell, or by the equivalent part that revives a cell (specifically the "[i-1]" and "[i+1]" lines, which should affect the cells to the sides of the cell being considered):

           # if it's alive
           if cellmaps[prev][j][i] & 0x01:
               # kill it if it doesn't have 2 or 3 neighbors
               if (n != 2) and (n != 3):
                   cellmaps[curr][j][i] &= ~0x01
                   alive_cells -= 1
                   # inform neighbors this cell is dead
                   cellmaps[curr][j-1][i-1] -= 2
                   cellmaps[curr][j-1][i  ] -= 2
                   cellmaps[curr][j-1][i+1] -= 2
                   cellmaps[curr][j  ][i-1] -= 2
                   cellmaps[curr][j  ][i+1] -= 2
                   cellmaps[curr][j+1][i-1] -= 2
                   cellmaps[curr][j+1][i  ] -= 2
                   cellmaps[curr][j+1][i+1] -= 2

       The actual effect is that the leftmost and rightmost edge columns are always clear (actually, as depicted, the ones on the opposite side to where the actual glider is are affected, but not the ones next to the glider). When a glider approaches the edge, for example, this edge should end up with 1 cell, like the second grid below, and the opposite side should be like the fourth grid, not the third:

           | . . . .    | . . . .    . . . .|    . . . .|
           | . . # .    | . . # .    . . . .|    . . . .|
           | . # . .    | # # . .    . . . #|    . . # #|
           | . # # .    | . # # .    . . . #|    . . . #|
           | . . . .    | . . . .    . . . .|    . . . .|
           | . . . .    | . . . .    . . . .|    . . . .|
    26. Dynamic link libraries (.dll, or the sister .so on Unix) differ from static libraries in that their code has to be fully resolved at the time you create them, so you need every dependency to be linked into a dynamic lib at compile time and you also have to provide every dependency at runtime. This is because the compiler adds bootstrapping code at the location where it expects the library to be at runtime. A call to .dll functions has a small performance impact (at least on Windows) because your code needs to look up the library on disk (via LoadLibrary) and then cache a reference pointer to the function you want to call from your code. For you, it looks like a normal function call, but under the hood it goes through a cached pointer. A static library contains the code you want to use during runtime, and the compiler links it together into one assembly. This has an impact on performance too (a little, someone told me ;) ), and I use static libs for my engine framework, except for those parts of the code that need a dynamic lib (like a bridge from C++ to C# via P/Invoke).
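        To make the export list concrete, a hedged sketch of how a function is typically marked as callable from outside a DLL with MSVC (the header and macro names are hypothetical):

            // mymath.h - MYMATH_EXPORTS would be defined only when building the DLL itself.
            #ifdef MYMATH_EXPORTS
            #define MYMATH_API __declspec(dllexport)
            #else
            #define MYMATH_API __declspec(dllimport)
            #endif

            // extern "C" avoids C++ name mangling, so GetProcAddress(lib, "add") can find it.
            extern "C" MYMATH_API int add(int a, int b);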
    27. lawnjelly

      Allegro Lives! Try it today!

      It is pretty much this, and shouldn't be a massive deal. Expect desktop OpenGL to be available on .. mostly desktops, and OpenGL ES to be available on .. embedded systems, phones, tablets, tv boxes, raspberry pis, low power things and stuff that isn't a desktop. That is pretty much the modern landscape. OpenGL ES may also be available on desktop (it is on my linux via the MESA drivers). And SDL2 works with OpenGL ES, I'm using it for my linux build, it was very easy to get working (caveat I am only using SDL2 for stuff like input etc, I've never used SDL2 for drawing stuff, you'd have to check that in the docs).
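      To show how little is involved (a minimal hedged sketch, not from the post): requesting an OpenGL ES context through SDL2 is just a matter of setting attributes before window creation.

          #include <SDL2/SDL.h>

          int main(int argc, char** argv)
          {
              SDL_Init(SDL_INIT_VIDEO);

              // Ask for an OpenGL ES 2.0 context instead of desktop OpenGL.
              SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
              SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
              SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

              SDL_Window* win = SDL_CreateWindow("GLES test", SDL_WINDOWPOS_CENTERED,
                                                 SDL_WINDOWPOS_CENTERED, 640, 480,
                                                 SDL_WINDOW_OPENGL);
              SDL_GLContext ctx = SDL_GL_CreateContext(win);

              // ... issue GLES calls, SDL_GL_SwapWindow(win) once per frame ...

              SDL_GL_DeleteContext(ctx);
              SDL_DestroyWindow(win);
              SDL_Quit();
              return 0;
          }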

    Recent Blogs

    1. Fixed numerous bugs from the last update. Showcasing some more features announced in that update. Also added the bat icon to the on-screen action button.
      2 comments
      By SOS-CC
    2. Hello again everyone! This week's start was kind of slow, but at the end of it I managed to add some features which I had been planning for quite a while. The main feature is: tower upgrades! You can now upgrade towers using a little menu I designed. It gives you all the information about the upgrade, such as its cost, damage, locking radius, shooting speed and gun rotation speed. Another thing I added is laser beams, which have their own light sources, so it looks cool when lots of turrets are shooting. It adds something to the feel of the game. There are lots of other small tweaks and fixes I did related to tower placement/editing. Here you can see the gameplay in action: As you might have noticed, as of now the flying objects are tanks; this is because I haven't yet modeled planes or helicopters, but I'm planning on doing so. As of now there is only one completely upgradable tower, again mainly because I don't have the models yet. I should really find someone to help me out. Anyway, that's all for today; I'd better go and continue working on the game now, since there's still LOTS of work to do. So see you next week
      3 comments
      By EddieK
    3. Hello GameDev, Here is a screen shot of the Cube Cannon.  You can only build one! It is fully operational and it packs a punch, for squares that is.  Anyways I've made so many changes to the code I can't even really recap all the things I've done.  So here is the code TD.06.22.zip If you fancy, give the project a go.  I've included some instructions to go over.
      I'm going to be plugging away come tomorrow to add the Hell-Fire Hex and the Tack Tower.
      Then I'll be incorporating points and upgrades and a start screen

      Yeish! Hopefully, fingers crossed.
       
      0 comments
      By Awoken
    4. Added a speed effect when sliding.

        Improved branch system.

        Enemies now die in a puff of smoke particles, so you no longer walk through them.



      CURRENT STATUS
        GAME DESIGN: DONE
        PROGRAMMING: DONE
        LEVEL DESIGN / BUILDING: JUST STARTED
        ENVIRONMENT ART: JUST STARTED
        CHARACTERS DESIGN / ANIMATION: JUST STARTED
        INTRO / OUTRO: TO DO
        SOUND DESIGN: JUST STARTED
        MUSIC: TO DO
        TEST / OPTIMIZATION: TO DO
      0 comments
    5. I've just come off a several-months-long jag of playing Path of Exile. PoE has influenced the (sporadic) development I've done through that time on Goblinson Crusoe by a great deal. While GC is turn-based, it shares a lot of the same DNA as PoE, specifically in the influence of Diablo and Diablo 2, so a lot of ideas and mechanics from those games have been seeded throughout GC. Like Diablo 2, Path of Exile implements a system of resistances for elemental damage (fire, ice, lightning). Resistance stat values are obtained primarily through gear, as percentage values that accumulate up to a maximum value. For example, you could get a belt with +35% lightning resistance. Resistance amounts from gear and other sources are accumulated, then capped to a maximum value (75% by default), with the option of obtaining small increases to this maximum value via other sources. Resistance either reduces or increases (it is possible to have negative resistances that actually boost the damage the player takes) the incoming damage by the given percentage. As one progresses through the game, at certain checkpoints the player's resistance value has a penalty imposed upon it. In the beginning, this penalty was imposed in smaller stages as one progressed through the difficulty levels. (Difficulty levels simply repeat the story of the game, with higher-level monsters as well as the resistance penalty coming into play.) In current PoE, the penalties are imposed at two checkpoints within the story: the first after completing Act 5 (character level approximately L45) and the second after completing Act 10 (around character level 67 to 70). The first checkpoint applies a -30% reduction to resistances, and the second checkpoint another -30%, for a total of -60%. I understand the thinking behind this design. The game is balanced around having maximum resistances. That is, any given encounter will be damage-tuned with the assumption that the player is at the resistance cap, and thus not having the resistance value at cap can bring extra punishment and pain. This provides constant pressure for equipment improvement as one progresses to end-game; gear that was fine before the checkpoint is now suddenly deficient, pressuring the player to seek upgrades. At a certain point, though, the player can obtain enough +res% to overcome the penalties and still raise their resistance to 75%, meaning they are effectively "done" with upgrading their resistances. (Further equipment upgrades for other stats must be chosen to maintain these resistance caps, but that is usually not too difficult.) Typical players are encouraged to obtain these caps as soon as possible to ease the leveling and progression process. While I understand the design, I've always been bothered by the implementation. Having gear that was "fine" at one point suddenly become "totally deficient" in one instant after beating a single specific boss feels too abrupt. Also, I kinda don't like that at a certain point the pressure to maintain resistances eases up. So in GC, I am exploring ideas for putting this resistance penalty system on a smooth curve, rather than having the abrupt steps. The current iteration of this system uses a logistic function, which is a type of sigmoid function. Instead of collecting gear that provides +X% resistance to a given damage type, you collect gear that gives +Y resistance rating. This resistance rating is plugged into a logistic function to obtain the actual amount of resistance % to apply against incoming damage.
The logistic function is structured like this:

    function res(rating, levelbase, levelslope)
        return (1.0 / (1.0 + math.pow(e, -levelslope * (rating - levelbase)))) * 2.0 - 1.0
    end

The function operates using the rating (granted by equipment and other buffs) as well as a base rating for a given level, levelbase. At level M, if the player's rating is equal to levelbase, then the granted resistance value will be 0%. A rating less than levelbase results in a negative resistance, while a rating greater than levelbase grants a positive resistance. The factor levelslope is used to affect the spread of the function at a given level; i.e., how quickly the resistance approaches 1.0 or -1.0. For example, if you use a levelslope of 1, the resistance value will be very close to -1.0 at a rating that is 6 points below base, and very close to 1.0 at a rating that is 6 points above base. This slope value determines the slope of the curve where it passes through the origin of the graph. By making the slope shallower as the character level increases, that spread can be made wider, granting a larger window within which the rating will grant a resistance value somewhere between -1 and 1. This way, as resistance ratings grow larger, the absolute difference between the rating and the levelbase has a more gradual effect on the value. These graphs show how this works: at a levelslope of 1, you can see that around 6 points below the levelbase the curve approaches -1, and at around 6 points above it approaches 1. So if the base resistance rating for that level were, say, 10, then if you had a rating of 4 you would have a resistance value of -100% (or close to it), meaning you would effectively take double damage, whereas if you had a rating of 16, you would have a resistance of 100%, meaning you would take no damage. Now, at a higher level, you might have a levelslope of, say, 1/3: here you can see that the spread is now approximately -16 to +16 from levelbase. If the levelbase rating for that level were, say, 100, then if you had 84 rating or below you would take double damage, whereas if you had 116 or higher you would take no damage. Of course, the base and slope ratings would be candidates for a great deal of tuning in the final game. The constant increase of levelbase applies constant pressure on the player to upgrade resistance, not simply at 1 or 2 main checkpoints, but all throughout the game, with that pressure growing larger the longer one plays and levels up without changing equipment. And this also doesn't account for having a resistance cap. In PoE, the default cap is 75%, which can be raised only through rare and special means, which is a sensible design decision in my opinion. A simple solution for this would be to multiply the output of the resistance function by the value of the cap if the output is positive (leaving the negative resistance value uncapped). Doing it this way, the positive side of the curve approaches the resistance cap rather than 1.0, while the negative side is untouched. I could even implement a negative resistance cap, if so desired, to allow the player to build a stat to reduce the amount of damage taken from having a negative resistance. In my preliminary tests (which include no playtesting so far by anyone but myself) it seems to work fairly well, but this is one of those kinds of systems that I will need to tinker with and explore more fully in the final testing phases.
Just because it works well now doesn't mean it won't be massively exploitable in the future. I have also been tweaking and experimenting with damage types. At the moment, I have a system of damage types somewhat similar to PoE's, though with quite a few differences. In my system, any given attack or spell can deal a certain combination of damage types selected from the set {Crush, Slash, Burn, Poison, Bleed, Void, Shock}. These damage types can additionally be tagged with modifiers drawn from the set {Projectile, Melee, Attack, Spell, Area, DoT, Siege}, which can be used to apply damage increases or reductions. So as an example, a basic fireball of some sort might deal 2 damages. The first would be tagged {Crush | Area | Spell} and the second would be tagged {Burn | DoT | Area | Spell}. Player stats exist that can amplify or reduce damage dealt by any of these various tags. So, for example, it might be possible to have a stat that increases Spell damage by 13%, meaning both damage rolls from this fireball spell will be boosted by 13%. Primary damage types all come with a secondary debuff effect. Crush causes a stun/slow effect, slowing the target by some amount for some period of time. Slash damage causes a Bleed debuff that causes damage over time (this damage bearing the {Bleed | DoT} tags). Poison damage also applies a stacking debuff to poison and burn damage resistance rating, meaning that poison and burn damages become more potent the more poison stacks there are. Void causes an increased chance to take a critical strike, and Shock increases all damage taken by a certain %. These effects are all subject to change as the game develops further. The idea, though, is that each damage type should be differentiated by a mechanic, and not just by a type. I've played games where there was no mechanical difference between, e.g., Fire and Lightning, merely cosmetic differences and the necessity of maintaining resistances against two types instead of just one. If a damage type doesn't lend itself to some mechanical difference from the others, then it will be altered or removed. At the moment, all damage types are mitigated in a similar manner, using the damage resistance calculations described earlier. That is, 'physical' types such as Crush and Slash are not mitigated using any kind of armor rating, but instead are mitigated by Crush or Slash resistance rating granted by certain equipment. Homogenizing the various damage mitigation strategies in this manner vastly simplifies the design of the character combat back-end and balancing, though again it is subject to change in the future.
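As a compact restatement of the capped variant described above, here is a hedged C++ sketch of the same math (function and parameter names are my own):

    #include <cmath>

    // Logistic resistance curve: positive output scaled by 'cap' (e.g. 0.75),
    // negative output (increased damage taken) left uncapped, as described above.
    double resistance(double rating, double levelbase, double levelslope, double cap)
    {
        double r = 2.0 / (1.0 + std::exp(-levelslope * (rating - levelbase))) - 1.0;
        return (r > 0.0) ? r * cap : r;
    }

    // resistance(16, 10, 1.0, 0.75) is roughly +0.746 (near the cap);
    // resistance(4, 10, 1.0, 0.75) is roughly -0.995 (close to double damage),
    // matching the worked example in the post.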
      0 comments
    6. Hi everyone!

      Two Magic Item Tech games have joined the "Summer Sale" on Steam and itch.io

      I Am Overburdened



      The silly roguelike full of crazy artifacts and a "hero" who has 20 inventory slots is 40% off, so currently it's only $2.99 (may vary based on region; base price is $4.99)!

      You can buy it at Steam or at itch.io.

         





      Go go go dungeon crawlers !!!

      Operation KREEP



      The best couch co-op multiplayer Alien satire action game is 83% off, so currently it's only $0.50 (may vary based on region; base price is $2.99)!!!
      If you are a sucker for retro party games like Bomberman (Dyna Blaster) or Battle City make sure to give it a try!

      You can buy it at Steam or at itch.io.

         





      Remember:
      In space no one can hear you KREEP...

      I hope you check them out and they will be to your liking ! 

      P.S.: Magic Item Tech also has a Steam developer page now. Feel free to follow me there to receive first hand news about my games, updates and sales.

      Thanks for taking the time to read my post, have an awesome summer full of fun
      Cheers!
      0 comments
      By Spidi
    7. So the game we're developing is growing steadily. We've already added a pause menu. Its function is to be a... pause menu. (I don't know what you expected...)

       The design

       Because our game is going to be really Vaporwave, we knew that the visuals had to be on point. We ended up trying a classic Windows-like design with header bars and everything, but with a twist... Here's the result: (Note that not all buttons are fully functioning. Also, the seed isn't actually used in the generation.) The idea is that this window appears when the player pauses. It takes the form of a popup with fully functional tabs. We also plan to let the player easily navigate through the tabs with keyboard shortcuts (or buttons, in the case of a controller). In addition, our game uses palettes, so the GUI elements are colored according to the active palette. Here's an example with a different palette:

       LESS-like color computations

       You may have noticed that there is a difference between each palette (for example, the title of the window has changed color). This is done by a beautiful library that I built for our project. Because I was a web developer for about 2 years, I already knew (and worked with) CSS compilers like SASS and LESS. My library is strongly inspired by these compilers. Especially LESS.

       The luma value

       For this reason, I knew there was a way to know whether a text of a given color would be readable when displayed on a given background. This feature is present in vanilla LESS: it's called "contrast". That function uses the luma values (sometimes called "relative lightness" or "perceived luminance") of colors. This small practical value describes the perceived luminance of a color, which means that particularly brightly perceived colors (such as lime green or yellow) get higher luma values than other colors (red or brown) despite their real lightness value. Here's how I compute a given color's luma value:

           Color color = Color.GREEN; // fictional class, but it stores each component as a floating-point value from 0 to 1
           float redComponent, blueComponent, greenComponent;

           if (color.r < 0.03928f) {
               redComponent = color.r / 12.92f;
           } else {
               redComponent = (float) Math.pow((color.r + 0.055f) / 1.055f, 2.4f);
           }

           if (color.g < 0.03928f) {
               greenComponent = color.g / 12.92f;
           } else {
               greenComponent = (float) Math.pow((color.g + 0.055f) / 1.055f, 2.4f);
           }

           if (color.b < 0.03928f) {
               blueComponent = color.b / 12.92f;
           } else {
               blueComponent = (float) Math.pow((color.b + 0.055f) / 1.055f, 2.4f);
           }

           float luma = (0.2126f * redComponent) + (0.7152f * greenComponent) + (0.0722f * blueComponent);

       The actual luma computation is fully described here. With that luma value, we can then check and compare the contrast between 2 colors:

           float backgroundLuma = getLuma(backgroundColor) + 0.05f;
           float textLuma = getLuma(textColor) + 0.05f;
           float contrast = Math.max(backgroundLuma, textLuma) / Math.min(backgroundLuma, textLuma);

       With that, we can choose between two colors by picking the one with the highest contrast against the background:

           Color chosenTextColor = getContrast(backgroundColor, lightTextColor) > getContrast(backgroundColor, darkTextColor) ? lightTextColor : darkTextColor;

       This gives us a lot of flexibility, especially since we use many different color palettes in our game, each with different luma values. This, along with more nice LESS color functions, can make coloring components a breeze. Just as an example, I've inverted our color palette texture and these are the results: yes, it looks weird, but notice how each component is still fully readable.
       Paired with our dynamic palette, color computation is now really easy and flexible.
      2 comments
    8. Yep Yep!! Back again, this time with more changes; it is probably the last stop (I hope) before I start to talk about the new game on its own. So what is new? Let's see the main function for an overview; here is a test of almost all the features of the game engine:

           BKP_Entity_Agent * G[3];
           BKP_PlatformMap * Map;
           BKP_Vec2i g_scr;
           BKP_Vec2 TWH; // screen ratio based on game artist: TWH.w = sprite.w * screen.w / expected.w, same with TWH.h

           int main(int argc, char **argv)
           {
               //bkp_setMouse(GLFW_CURSOR, GLFW_CURSOR_DISABLED, mouse_callback, scroll_callback);
               //bkp_setLogInfo("logdir/", BKP_LOGGER_DEBUG, BKP_LOGGER_FILEOUT | BKP_LOGGER_TERMOUT, BKP_LOGGER_OAPPEND); // change default log
               //bkp_setOpenGLInfo(3, 1); // minimal version 3.1
               //bkp_setWindowInfo(1024, 768, BKP_TRUE, "BKP Test"); // fullscreen 1024x768
               bkp_setWindowInfo(0, 0, BKP_FALSE, "BKP Test"); // window mode, auto detect screen resolution

               if(bkp_startEngine(argv) != BKP_TRUE)
                   return EXIT_FAILURE;

               bkp_input_Init();
               g_scr = bkp_graphics_getWindowDimensions();
               TWH = bkp_graphics_getTWH(1920, 1080);
               bkp_input_setKeyPauseTimer(.05);

               G[0] = (BKP_Entity_Agent *) bkp_memory_getTicket(sizeof(BKP_Entity_Agent));
               G[1] = (BKP_Entity_Agent *) bkp_memory_getTicket(sizeof(BKP_Entity_Agent));
               G[0]->spritesheet = bkp_graphics_2dloadSurface("data/graphics/h_none.png");
               G[1]->spritesheet = bkp_graphics_2dloadSurface("data/graphics/platform.png");

               init_player(G[0]);
               setMap();

               BKP_Font * myfont;
               myfont = bkp_graphics_newFont("data/fonts/DejaVuSans.ttf", 64, 128);
               BKP_ScreenText * fps = bkp_graphics_appendTextPool("do not care", 64, myfont,
                       bkp_vec2(20 * TWH.w, 64 * TWH.h), bkp_vec2(.3 * TWH.w, .3 * TWH.h),
                       bkp_vec4(0.98f, 0.98f, 0.2f, 1.0f));
               BKP_ScreenText * mem = bkp_graphics_appendTextPool("can't see me", 128, myfont,
                       bkp_vec2(20 * TWH.w, 96 * TWH.h), bkp_vec2(.3 * TWH.w, .3 * TWH.h),
                       bkp_vec4(1.0f, 1.0f, 1.0f, 1.0f));

               while(G[0]->input->Cancel == 0)
               {
                   bkp_input_capture(G[0]->input);
                   manage_platforms(Map); // rotating and moving platforms
                   manage_player(G[0]);
                   bkp_graphics_2dscroll(G[0]->dyn.gbox);
                   Ugp_draw();
                   _update_fps_counter(fps);
                   _update_memUsage(mem);
               }

               bkp_graphics_releaseTextPool(fps);
               bkp_graphics_releaseTextPool(mem);
               bkp_graphics_freeFont(myfont);
               unsetMap();
               bkp_memory_releaseTicket(G[0]->input);
               bkp_memory_releaseTicket(G[0]->spritesheet);
               bkp_memory_releaseTicket(G[1]->spritesheet);
               bkp_memory_releaseTicket(G[0]);
               bkp_memory_releaseTicket(G[1]);

               bkp_logger_write(BKP_LOGGER_INFO,"\t________________________________\n");
               bkp_logger_write(BKP_LOGGER_INFO,"\t*POOLS* Allocated : %.1f Kb\n",(float)bkp_memory_allocated() / 1024);
               bkp_logger_write(BKP_LOGGER_INFO,"\t*POOLS* Used : %.1f Kb\n",(float)bkp_memory_usage() / 1024);
               bkp_logger_write(BKP_LOGGER_INFO,"\t*POOLS* Free : %.1f Kb\n",(float)bkp_memory_free() / 1024);

               bkp_stopEngine();
               return EXIT_SUCCESS;
           }

       For those who have an allergy to global variables, don't worry, it is just for testing :P. So, before starting the engine, it is possible to change some engine states by using functions like bkp_set*(); here they are commented out. When the engine starts it will set up my memory pools. Those are a number of memory stacks of different sizes. Every time I require some memory it will give me a "ticket" (a pointer of size N) popped from stack #n. All allocations are made the first time an object of size N is required. For example, if I want 64 bytes and the pool is empty, it will malloc(64 * default_number_of_objects); later I just need to pop a ticket or push it back.
       There is no garbage collector, but memory leaks are detected and logged. PRO: the number of calls to malloc is very low, or at least computable before a game starts. CON: the default values may waste memory. However, using bkp_set_memory*() allows changing the parameters for pools of the same memory. Anyway, I will need to gather statistics for each game to calculate how to set up the memory manager.

           G[0] = (BKP_Entity_Agent *) bkp_memory_getTicket(sizeof(BKP_Entity_Agent));

       This line shows how to ask for a ticket (pop a block). I can show the details of the memory manager if asked. Next is the text engine (if I can call it that); it is still temporary, as I am still a beginner with FreeType and I am not sure I will continue with it at the moment. I used it because I want a result and don't want to reinvent the wheel. The function to draw is the following; it needs a font loaded by bkp_graphics_newFont (not sure if I want to separate the Text module and Graphics module yet):

           void bkp_graphics_renderText(const char * str, BKP_Font * font, BKP_Vec2 pos, BKP_Vec2 scale, BKP_Vec4 color)

       In the previous code you don't see any calls to that function, because I use an OnscreenTextBuffer: a buffer of linked lists where each node contains everything needed to draw a text on screen, such as Text, Position, Scale and Color. Any text can be added to the buffer with the following:

           BKP_ScreenText * fps = bkp_graphics_appendTextPool("do not care", 64, myfont,
                   bkp_vec2(20 * TWH.w, 64 * TWH.h), bkp_vec2(.3 * TWH.w, .3 * TWH.h),
                   bkp_vec4(0.98f, 0.98f, 0.2f, 1.0f));

       Here a pointer to the buffer entry is returned into "fps", so you can guess for which purpose. 64 is the size in bytes of the text length. Once set, the size cannot change, but the content can be changed, for example like this:

           sprintf(fps->text, "%s : %d", "fps = ", vfps);

           void _update_memUsage(BKP_ScreenText * stext)
           {
               ssize_t s = bkp_memory_using();
               sprintf(stext->text, "memory pool : %.2fMb | %ldKb | %ld bytes",
                       (float)(s / 1024) / 1024, s / 1024, s);
               return;
           }

       and the internal function will draw all the texts like this:

           void bkp_graphics_OnScreentext()
           {
               if(NULL == stc_Btext->head)
                   return;

               BKP_ScreenText * ps = stc_Btext->head; // if text is not initialized this will segfault. Warning
               for(; NULL != ps; ps = ps->next)
                   bkp_graphics_renderText(ps->text, ps->font, ps->pos, ps->scale, ps->color);
               return;
           }

       The last thing I did was scrolling. I didn't do anything fancy or special here, just adding a VIEW matrix in the shader. Every time I move the viewport it is updated in the GPU; otherwise it keeps its values. It is a single function that takes as a parameter a rectangle to focus on. In the following example, the center of the player's bounding box is used as the point of focus:

           bkp_graphics_2dscroll(G[0]->dyn.gbox);

       Something I didn't show here is that it is possible to set an autoscroll by just giving a default x/y speed to the scroller, once again with a function like bkp_set*(). If I comment out one of the bkp_memory_releaseTicket() lines at the end, we get this in the log file and/or on screen:

           [ INFO  ][ 2018-06-21 15:56:08 ] -> Graphics Engine stopped [OK]
           [WARNING][ 2018-06-21 15:56:08 ] -> Ticket for Memory pool are not all released, **MEMORY LEAK** P(4) s:320
           [ INFO  ][ 2018-06-21 15:56:08 ] -> Destroying Memory pools[OK]

       The memory leak is detected in pool number 4 (320 bytes/ticket); unfortunately I do not provide a way to find exactly where it is. However, it is already a good hint for finding it. Now a small video to show everything.
       I made the character bounce to show all the platforms' properties clearly.

       What will come next? So far in my learning process I have started to drift away from my main goal; indeed, while this demo shows a Mario-like game, that is not at all what I am making, but some features I needed are now present. As an example, the double jump will not be in my game, but I implemented it. It is time to pause the game engine development and move on to game dev. The next step should be creating the main character and his/her basic movements. I spent a lot of time separating tasks, making the project ready to receive other contributors. That's why I am creating and documenting all the functions. I also have an artist now, not official yet but... anyway, my experience has made me suspicious. Something I have to recognize I am bad at is levelling. I needed such a long time to pass all the basic levels of Soul of Mask; this game is difficult and the difficulty doesn't build gradually, it is too sudden. I didn't even finish the last level, it looks impossible to me. And as always, thanks for reading. Until next time.

       ps: I am thinking about making the game engine open-source; for those who read this, please tell me in the comments what you think about that idea.
      2 comments
    9. Notice a bit of a facelift? I worked on some of the Ultra Pony designs this week, just filling in those gaps in design I was drawing a blank on for the longest while. Now I can say I'm satisfied with how everypony looks and can clearly see them animated in my head.

           Actually animating the sprites is a step to come much later in development, but I can roll with these for now. Come see the steps toward a new playable build featuring a new level editor and a few more motivating surprises in this week's gamedev update!
      0 comments
    10. Corona can be used to make many different kinds of apps, but undoubtedly it's a great tool for games. There are of course many different types of games too. We would like to introduce you to OverRapid, a musical puzzle game. OverRapid, created by Byeong-Su Kang of Team ArcStar, is a mobile rhythm action game similar in style to famous titles like Rock Band and Guitar Hero. OverRapid lets you select songs; notes are sent down rows, and you tap on the various lines to have the note counted in your score. Some notes you just have to touch to count, and other notes you have to "scratch" to have counted. The game has over 250,000 installs on Google Play and over 116,000 installs on iOS. It also made the Google Indie Game Festival Top 20 in Korea. The game is available as a free download on both Google Play and the App Store, with in-app purchases.
      0 comments


    Latest News

    1. Today at Unite Berlin, GitHub announced the launch of GitHub for Unity 1.0, making Git even more accessible to game developers. GitHub for Unity is a Unity editor extension that brings Git into Unity 5.6, 2017.x, and 2018.x with an integrated sign-in experience for GitHub users. It introduces two key features for game development teams: support for large files using Git LFS, and file locking. These features allow teams to manage large assets and critical scene files using Git in the same way that they manage code files, all within Unity. The initial alpha version was made available in March 2017, and the extension is now publicly available to download at unity.github.com and from the Unity Asset Store.
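       For readers new to Git LFS, typical .gitattributes entries that route large binaries through LFS and mark scene files lockable look like this (illustrative patterns, not from the announcement; the extension handles this setup for you):

           # Track large binary assets with Git LFS
           *.psd  filter=lfs diff=lfs merge=lfs -text
           *.fbx  filter=lfs diff=lfs merge=lfs -text
           # Allow exclusive locks on scene files so teammates don't clobber edits
           *.unity lockable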
      0 comments
      By khawk
    2. Esoteric has announced the general availability of their new spine-cpp runtime, an idiomatic implementation of the Spine Runtimes API for C++, making the integration of Spine in your C++ based projects a breeze. The Spine Runtimes for Unreal Engine, Cocos2d-X and SFML have also been updated to use the new spine-cpp implementation. You can try out the new spine-cpp runtime as well as the new integrations for Unreal Engine, Cocos2d-X, and SFML in the 3.7-beta-cpp branch on Github. Check out the CHANGELOG for information about API additions and breaking changes. The Spine-C++ guide shows how to operate the new runtime and integrate it in your custom engines.  
      0 comments
      By khawk
    3. GameDev Townhall is a topic posed to the GameDev.net community for discussion of events and issues affecting games and the game industry. Participate in the comments below. The game store battle is being fought over content moderation. Valve yesterday announced in a blog post that they are going to allow everything onto the Steam Store, except for things that they decide are illegal or straight up trolling, explaining that Valve shouldn't be the ones deciding what games players can and cannot purchase and play. itch.io's creator followed that up, calling Steam's new hands-off curation policy 'ridiculous'. Discussion point: Will Valve's open market work? Is itch.io's moderated market the better option for developers and gamers? Is there another option?
      3 comments
      By khawk
    4. Today, Oculus announced Oculus Connect 5 (OC5) with a message that sounds as though it's a bit of a turning point for the company: Celebrate the last five. Believe in the next five. The event will take place on September 26-27 in San Jose, CA. There isn't much information right now, leaving plenty to speculation, but you can sign up for updates on the website at https://www.oculusconnect.com.
      0 comments
      By khawk
    5. Free tickets are available to The Business of Indie Games Virtual Summit happening next month from July 24-27, 2018. There will be 30+ well respected indie devs, producers, and industry veterans speaking about strategy, finance, and marketing of indie games. You can easily sign up on the website at https://businessofindiegames.com/info.
      0 comments
    6. Apple today introduced ARKit 2, a platform that allows developers to integrate shared experiences, persistent AR experiences tied to a specific location, object detection and image tracking to make AR apps even more dynamic. Apple is also unveiling the Measure app for iOS, which uses AR to quickly gauge the size of real-world objects, as well as a new open file format with iOS 12, usdz, which is designed to more deeply integrate AR throughout iOS and make AR objects available across the ecosystem of Apple apps. In related news, Epic announced support for ARKit 2 in the Unreal Engine 4.20 preview available this month, enabling developers with the latest AR features on iOS devices.  Learn more from the Apple Newsroom post.
      0 comments
      By khawk
    7. Raph Koster's new book Postmortems is now available for purchase. This first of three volumes collects many of the essays and writings Koster has shared over the last several decades. It focuses specifically on games he has worked on, from LegendMUD and beyond, and is a compendium of design history, lessons learned, and anecdotes from the games industry. The foreword for the book is written by Richard Garriott. Contents include:
       - Early days, creating board games, and the lessons learned
       - MUDs, including DikuMUD design, administrative practices on LegendMUD, and the struggles of MUD governance
       - The resource system, playerkilling, and the evolution of the game economy model of Ultima Online
       - Postmortems, design philosophy, and a design overview of Star Wars Galaxies
       - The transition of MMOs
       - Design diary and transcript of Andean Bird
       - Tech architecture and postmortem of Metaplace
       Here's an excerpt from the book discussing playerkilling in Ultima Online: A longer excerpt can be found here. Raph Koster is a veteran game designer and creative executive who has worked at EA, Sony, and Disney as well as run his own company. The lead designer and director of massive titles such as Ultima Online and Star Wars Galaxies, he's also contributed writing, art, music, and programming to many other titles. He is the author of the classic book A Theory of Fun for Game Design. In 2012, he was named an Online Game Legend at the Game Developers Conference Online. You can find the book on Amazon and other retailers.
      0 comments
      By khawk
    8. Interactive Gaming Ventures has joined forces with Epic Games to identify independent game developers building promising titles using Unreal Engine 4, and to bring teams meeting desirable criteria into their investment portfolio. Led by former PlayStation President and CEO Jack Tretton, Interactive Gaming Ventures plans to invest in two to three experienced indie teams per year, at $1 million to $5 million per project, over the next seven years. "We are looking to provide exceptional independent teams building games with Unreal Engine the support structure, cash infusion, marketing resources and relationships that will help them achieve incredible financial returns," said Tretton, Managing Partner, Interactive Gaming Ventures. When a studio takes investment from Interactive Gaming Ventures, it maintains control of its IP and creation process. Interactive Gaming Ventures helps fund milestone deliverables, manages promotion and distribution, and then shares in a project's success once it ships. "This partnership falls perfectly in line with Epic's philosophy, meaning that we only succeed when developers succeed," said Joe Kreiner, head of Unreal Engine business development at Epic. "From programs like Unreal Dev Grants to one-to-one conversations where we connect teams with strategic opportunities, we have an honest motivation to help our licensees get ahead. We couldn't be happier to make it even easier for Interactive Gaming Ventures to get behind Unreal indies." Interactive Gaming Ventures provides investment capital and management strategy to help independent developers force-multiply their scale and success, focusing on teams looking to ship first on PC, with the option of taking their game to console and mobile as well. Joining Tretton in leadership at Interactive Gaming Ventures is Studio Wildcard CEO Doug Kennedy, whose company is behind the Unreal Engine-powered ARK: Survival Evolved franchise, which has sold more than 13 million copies across PC, console, mobile and VR platforms. "Epic has been an incredibly supportive partner for Studio Wildcard over the years," said Kennedy. "ARK: Survival Evolved started off as an independent game released in early access and grew to be a phenomenon beyond our wildest dreams, thanks in part to the Unreal community and Epic's support. This is a foundation of stability and massive potential, and we're looking to build on it in collaboration with even more Unreal Engine developers." To contact Interactive Gaming Ventures, visit interactivegamingventures.com. Download Unreal Engine and get started for free at unrealengine.com.
      By khawk
9. The EnhanceMyApp podcast returns! In this week's episode, we discuss mobile app monetization strategy with Trevor Williams, Sr. Director of Monetization at Hi-Rez Studios. From general monetization tips to targeting and retaining users, this is another episode you do not want to miss! Check it out now and subscribe today! https://goo.gl/DzPhbb
10. WILD WEST Bandit for FUSE. The pack contains the following clothing items, whose material type, substance, and colour you can easily alter. Check out the promo pics on this product page, which depict the clothing. The pack contains:
Hat
Scarf
Sweater
Belt
Trousers
Trouser Over
ChestBelt
The pack is already imported and set up in FUSE; simply follow the easy instructions, and a scene is provided that shows the avatar in FUSE with the full costume assembled. Purchase for just $14 from the Arteria3d website link below: WildWest Bandit - arteria3d
      By arteria

    Recent Articles and Tutorials

1. This is an excerpt from the book Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe, and published by Packt Publishing. This book will show you how to use Unity 2017 to create fun and unbelievable AI entities in your games with A*, fuzzy logic, and NavMesh.
Path following and steering
Sometimes, we want our AI characters to roam around the game world, following a roughly-guided or thoroughly-defined path. For example, in a racing game, the AI opponents need to navigate the road. In an RTS game, your units need to be able to get from wherever they are to the location you tell them, navigating through the terrain and around each other. To appear intelligent, our agents need to be able to determine where they are going and, if they can reach that point, route the most efficient path and modify that path if an obstacle appears as they navigate. Obstacle avoidance is a simple behavior that allows AI entities to reach a target point. It's important to note that the specific behavior implemented in this post is meant to be used for behaviors such as crowd simulation, where the main objective of each agent is just to avoid the other agents and reach the target. There is no consideration of what the most efficient and shortest path would be.
Technical Requirements
You will need Unity 2017 installed on a system running either Windows 7 SP1+, 8, or 10 (64-bit versions) or Mac OS X 10.9+. The code in this book will not run on Windows XP and Vista, and server versions of Windows and OS X are not tested. The code files of this post can be found on GitHub. Check out this video to see the code in action.
Navigation mesh
Let's learn how to use Unity's built-in navigation mesh generator, which can make pathfinding for AI agents a lot easier. Early in the Unity 5.x cycle, NavMesh was made available to all users, including personal edition licensees, whereas it was previously a Unity Pro-only feature. Before the release of 2017.1, the system was upgraded to allow a component-based workflow, but as that requires an additional downloadable package that, at the time of writing, is only available as a preview, we will stick to the default scene-based workflow. Don't worry: the concepts carry over, and when the final implementation eventually makes its way to 2017.x, there shouldn't be drastic changes. For more information on Unity's NavMesh component system, head over to GitHub. Now, we will dive in and explore all that this system has to offer. AI pathfinding needs a representation of the scene in a particular format; we've seen this done with a 2D grid (array) for A* pathfinding on a 2D map. AI agents need to know where the obstacles are, especially the static obstacles. Dealing with collision avoidance between dynamically moving objects is another subject, primarily known as steering behaviors. Unity has a built-in tool for generating a NavMesh that represents the scene in a context that makes sense for our AI agents to find the optimum path to the target. Pop open the demo project and navigate to the NavMesh scene to get started.
Inspecting our map
Once you have the demo scene, NavMesh, open, it should look something like this screenshot: A scene with obstacles and slopes This will be our sandbox for explaining and testing the NavMesh system functionality. The general setup is similar to an RTS (real-time strategy) game. You control the blue tank. Simply click at a location to make the tank move to that location.
The yellow indicator is the current target location for the tank.
Navigation Static
The first thing to point out is that you need to mark any geometry in the scene that will be baked into the NavMesh as Navigation Static. You may have encountered this elsewhere, such as in Unity's light-mapping system. Setting game objects as static is easy. You can toggle the Static flag on for all purposes (navigation, lighting, culling, batching, and so on), or you can use the dropdown to select specifically what you want. The toggle is found in the top-right corner of the inspector for the selected object(s). Look at this screenshot for a general idea of what you're looking for: The Navigation Static property You can do this on a per-object basis, or, if you have a nested hierarchy of game objects, you can apply the setting to the parent and Unity will prompt you to apply it to all children.
Baking the navigation mesh
The settings for the navigation mesh are applied via the Navigation window on a scene-wide basis. You can open the window by navigating to Window | Navigation in the menu bar. Like any other window, you can detach it to be free-floating, or you can dock it. Our screenshots show it docked as a tab next to the hierarchy, but you can place this window anywhere you please. With the window open, you'll notice four separate tabs. It'll look something like this screenshot: The Navigation window In our case, the preceding screenshot shows the Bake tab selected, but your editor might have one of the other tabs selected by default. Let's take a look at each tab, starting from the left and working our way to the right, beginning with the Agents tab, which looks like the following screenshot: The Agents tab If you're working on a different project, you may find that some of these settings differ from what we set them to in the sample project from which the preceding screenshot was taken. At the top of the tab, you can see a list where you can add additional agent types by pressing the "+" button. You can remove any of these additional agents by selecting it and pressing the "-" button. The window provides a nice visual of what the various settings do as you tweak them. Let's take a look at what each setting does: Name: The name of the agent type to be displayed in the Agent Types dropdown. Radius: Think of it as the agent's "personal space". Agents will try to avoid getting too cozy with other agents based on this value, as it is used for avoidance. Height: As you may have guessed, it dictates the height of the agent, which can be used for vertical avoidance (passing under things, for example). Step Height: This value determines how high an obstacle the agent can climb over. Max Slope: As we'll see in the coming section, this value determines the maximum angle up which an agent can climb. It can be used to make steep areas of the map inaccessible to the agent. Next, we have the Areas tab, which looks like the following screenshot: As you can see in the preceding screenshot, Unity provides some default area types that cannot be edited: Walkable, Not Walkable, and Jump. In addition to naming and creating new areas, you can assign default costs to these areas. Areas serve two purposes: making areas accessible or inaccessible per agent, and marking areas as less desirable in terms of navigation cost. For example, you may have an RPG where demon enemies cannot enter areas marked as "holy ground."
You could also have areas of your map marked as something like "marsh" or "swamp," which your agents could avoid based on the cost. The third tab, Bake, is probably the most important. It allows you to create the actual NavMesh for your scene. You'll recognize some of the settings. The Bake tab looks like this: The Bake tab The agent size settings in this tab dictate how agents interact with the environment, whereas the settings in the Agents tab dictate how they interact with other agents and moving objects; they control the same parameters, though, so we'll skip those here. The Drop Height and Jump Distance settings control how far an agent can "jump" to reach a portion of the NavMesh that is not directly connected to the one the agent is currently on. We'll go over this in more detail up ahead, so don't sweat it if you're not quite sure what that means yet. There are also some advanced settings that are generally collapsed by default. Simply click the drop-down triangle by the Advanced heading to unfold these options. You can think of the Manual Voxel Size setting as the "quality" setting: the smaller the size, the more detail you can capture in the mesh. The Min Region Area is used to skip baking platforms or surfaces below the given threshold. The Height Mesh gives you more detailed vertical data when baking the mesh; for example, it will help preserve the proper placement of your agent when climbing up stairs. The Clear button will clear any NavMesh data for the scene, and the Bake button will create the mesh for your scene. The process is fairly fast. As long as you have the window selected, you'll be able to see the NavMesh generated by the Bake button in your scene view. Go ahead and hit the Bake button to see the results. In our sample scene, you should end up with something that looks like the following screenshot: The blue areas represent the NavMesh. We'll revisit this up ahead. For now, let's move on to the final tab, the Object tab, which looks like the following screenshot: The three buttons pictured in the preceding screenshot, All, Mesh Renderers, and Terrains, act as filters for your scene. These are helpful when working in complex scenes with lots of objects in the hierarchy. Selecting an option will filter your hierarchy down to that type to make objects easier to select. You can use this when digging through your scene looking for objects to mark as navigation static.
Using the NavMesh agent
Now that we have our scene set up with a NavMesh, we need a way for our agent to use this information. Luckily for us, Unity provides a Nav Mesh Agent component we can throw onto our character. The sample scene has a game object named Tank with the component already attached to it. Take a look at it in the hierarchy, and it should look like the following screenshot: There are quite a few settings here, and we won't go over all of them, since they're fairly self-explanatory and you can find the full descriptions in the official Unity documentation, but let's point out a few key things: Agent Type: Remember the Agents tab in the Navigation window? The agent types you define there will be selectable here. Auto Traverse Off Mesh Link: We'll get into Off Mesh Links up ahead, but this setting allows the agent to automatically use that feature. Area Mask: The areas you set up in the Areas tab of the Navigation window will be selectable here. These same area costs and masks can also be driven per agent from a script at runtime, as in the sketch below.
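For instance, here is a minimal sketch of that runtime tuning; the component name (AgentAreaTuning) and the area names ("Swamp" and "HolyGround") are illustrative and not part of the book's sample project:

using UnityEngine;
using UnityEngine.AI;

// Hypothetical helper, not from the sample project: tunes how one agent
// treats two areas that were created in the Areas tab.
public class AgentAreaTuning : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();

        // Per-agent cost override for an area named "Swamp" (assumed to exist).
        int swampIndex = NavMesh.GetAreaFromName("Swamp");
        if (swampIndex >= 0)
        {
            agent.SetAreaCost(swampIndex, 10.0f);
        }

        // Clearing an area's bit in the mask makes it off-limits to this agent,
        // mirroring the "holy ground" example from earlier.
        int holyIndex = NavMesh.GetAreaFromName("HolyGround");
        if (holyIndex >= 0)
        {
            agent.areaMask &= ~(1 << holyIndex);
        }
    }
}

The area index doubles as the bit position in the agent's areaMask, which is why the bit shift works.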
That's it. The component handles 90% of the heavy lifting for you: placement on the path, pathfinding, obstacle avoidance, and so on. The only thing you need to do is provide the agent with a target destination. Let's look at that next.
Setting a destination
Now that we've set up our AI agent, we need a way to tell it where to go. Our sample project provides a script named Target.cs that does just that. This is a simple class that does three things:
Shoots a ray from the camera through the mouse position into the world
Updates the marker position
Updates the destination property of all the NavMesh agents
The code is fairly straightforward. The entire class looks like this:

using UnityEngine;
using UnityEngine.AI;

public class Target : MonoBehaviour
{
    private NavMeshAgent[] navAgents;
    public Transform targetMarker;

    private void Start()
    {
        navAgents = FindObjectsOfType(typeof(NavMeshAgent)) as NavMeshAgent[];
    }

    private void UpdateTargets(Vector3 targetPosition)
    {
        foreach (NavMeshAgent agent in navAgents)
        {
            agent.destination = targetPosition;
        }
    }

    private void Update()
    {
        if (GetInput())
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;

            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                UpdateTargets(targetPosition);
                targetMarker.position = targetPosition;
            }
        }
    }

    private bool GetInput()
    {
        if (Input.GetMouseButtonDown(0))
        {
            return true;
        }
        return false;
    }

    private void OnDrawGizmos()
    {
        Debug.DrawLine(targetMarker.position, targetMarker.position + Vector3.up * 5, Color.red);
    }
}

There are a few things happening here. In the Start method, we initialize our navAgents array using the FindObjectsOfType() method. The UpdateTargets() method runs through our navAgents array and sets their target destination to the given Vector3. This is really the key to making it work. You can use any mechanism you wish to actually get the target destination, and all you need to do to get the agent to move there is set the NavMeshAgent.destination property; the agent will do the rest. Our sample uses a click-to-move approach, so whenever the player clicks, we shoot a ray from the camera into the world towards the mouse cursor, and if we hit something, we assign that hit position as the new targetPosition for the agent. We also set the target marker accordingly for easy in-game visualization of the target destination. To test it out, make sure you baked the NavMesh as described in the previous section, then enter play mode, and select any area on the map. If you go click-happy, you may notice there are some areas your agent can't reach: the top of the red cubes, the top-most platform, and the platform towards the bottom of the screen. In the case of the red cubes, they're too far up. The ramp leading up to the top-most platform is too steep, as per our Max Slope settings, and the agent can't climb up to it. The following screenshots illustrate how the Max Slope settings affect the NavMesh: NavMesh with the max slope value set to 45 If you tweak the Max Slope to something like 51, then hit the Bake button again to re-bake the NavMesh, it will yield results like this: NavMesh with the max slope value set to 51 As you can see, you can tweak your level design to make entire areas inaccessible by foot with a simple value tweak. An example where this would be helpful is if you had a platform or ledge that you need a rope, ladder, or elevator to get to. Maybe even a special skill, such as the ability to climb? I'll let your imagination do the work and think of all the fun ways to use this. If you want to detect from code that a clicked point is unreachable, you can also test the path before committing to it, as in the sketch below.
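Here is a minimal sketch of such a check; the helper (NavMeshReachability) is illustrative and not part of the book's sample project:

using UnityEngine;
using UnityEngine.AI;

// Hypothetical helper, not from Target.cs: only accept a destination
// if a complete path to it exists.
public static class NavMeshReachability
{
    public static bool TryMoveTo(NavMeshAgent agent, Vector3 targetPosition)
    {
        NavMeshPath path = new NavMeshPath();

        // CalculatePath fills 'path' synchronously without moving the agent.
        if (agent.CalculatePath(targetPosition, path) &&
            path.status == NavMeshPathStatus.PathComplete)
        {
            agent.SetPath(path); // reuse the path we just computed
            return true;
        }

        // A partial or invalid path means the point is unreachable:
        // too far up, too steep for Max Slope, or on a disconnected mesh.
        return false;
    }
}

You could call TryMoveTo() from Target.cs instead of assigning agent.destination directly, so that clicks on unreachable spots (like the tops of the red cubes) are simply ignored.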
Making sense of Off Mesh Links
You may have noticed that our scene features two gaps. The first one is accessible to our agent, but the one near the bottom of the screen is too far away. This is not completely arbitrary. Unity's Off Mesh Links effectively bridge the gap between segments of the NavMesh that are not connected. You can see these links in the editor, as shown in the next screenshot: The blue circles with the connecting lines are links There are two ways that Unity can generate these links. The first we've already covered. Remember the Jump Distance value in the Bake tab of the Navigation window? Unity will automatically use that value to generate the links for us when baking the NavMesh. Try tweaking the value in our test scene to 5 and re-baking. Notice how the platforms are now linked? That's because the meshes are within the newly-specified threshold. Set the value back to 2 and re-bake. Now, let's look at the second method. Create spheres that will be used to connect the two platforms. Place them roughly as shown in the following screenshot: You may already see where this is going, but let's walk through the process to get these connected. In this case, I've named the sphere on the right start, and the sphere on the left end. You'll see why in a second. Next up, add the Off Mesh Link component to the platform on the right (relative to the preceding screenshot). You'll notice the component has start and end fields. As you may have guessed, we're going to drop the spheres we created earlier into their respective slots: the start sphere in the start field, and the end sphere in the end field. Our inspector will look something like this: The Cost Override value kicks in when you set it to a positive number. It will apply a cost multiplier to using this link, as opposed to a potentially more cost-effective route to the target. The Bi Directional value allows the agent to move in both directions when set to true. You can turn this off to create one-way links in your level design. The Activated value is just what it says. When set to false, the agent will ignore this link. You can turn it on and off to create gameplay scenarios where the player has to hit a switch to activate it, for example; a sketch of driving this flag from a script follows below. You don't have to re-bake to enable this link. Take a look at your NavMesh and you'll see that it looks like the following screenshot: As you can see, the smaller gap is still automatically connected, and now we have a new link generated by our Off Mesh Link component between the two spheres. Enter play mode and click on the far platform, and, as expected, the agent can now navigate to the detached platform, as you can see in the following screenshot: In your own levels, you may need to tweak these settings to get the exact results you expect, but combining these features gives you a lot of power out of the box. You can have a simple game up and running fairly quickly using Unity's NavMesh feature.
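Returning to the switch idea mentioned above, here is a minimal sketch of driving the Activated flag from a script; the component (LinkSwitch) is illustrative and not part of the sample scene:

using UnityEngine;
using UnityEngine.AI;

// Hypothetical switch behaviour, not from the sample scene: flips the
// Activated flag on an Off Mesh Link at runtime.
public class LinkSwitch : MonoBehaviour
{
    public OffMeshLink bridgeLink; // assign the Off Mesh Link in the inspector

    // Call this from your interaction code (a trigger volume, a button, etc.).
    public void Toggle()
    {
        // While deactivated, agents simply ignore the link when pathfinding;
        // no re-bake is needed when it changes.
        bridgeLink.activated = !bridgeLink.activated;
    }
}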
This tutorial is an excerpt from the book Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe, and published by Packt Publishing. Use the code ORGDA09 at checkout to get the eBook for just $9 (down from the recommended retail price) until July 15, 2018.
      By khawk
2. Raph Koster is a veteran game designer and creative executive who has worked at EA, Sony, and Disney as well as run his own company. The lead designer and director of massive titles such as Ultima Online and Star Wars Galaxies, he's also contributed writing, art, music, and programming to many other titles. He is the author of the classic book A Theory of Fun for Game Design. In 2012, he was named an Online Game Legend at the Game Developers Conference Online. The content below is an excerpt from his book Postmortems, published in June 2018 and now available on Amazon and other retailers. This is adapted from a speech given in Spanish entitled "Una carrera" (the word means both "a career" and "a race"), delivered at GameDay Peru in early 2015. I was given the Online Game Legend award at GDC Online in 2010. The citation read in part: "Raph Koster has led a prolific career. As the lead designer on Ultima Online and the creative director on Star Wars Galaxies, his contributions helped lay the foundation for the many massively multiplayer games that followed. Koster's professional credits span nearly every facet of game development, including writing, art, soundtrack music, programming and design. Raph Koster is considered a thought leader, as a frequent lecturer and published author on topics of game design, community management, storytelling and ethics in game development. His A Theory of Fun, published in 2004, is considered seminal by educators and members of the art game movement, as well as being one of the most popular books ever written about games..." I worry that people look at a career and see only this: the jobs, successes, awards, and invitations to give talks. But that's not how life is, and that's not what I feel like inside. Upon hearing about the award, my mother asked me for a favor. "Please," she told me, "mention that you are Latino. People don't know, and you have a duty to encourage kids with dreams." I lived in Lima, Peru, as a boy. Videogames were not common. Oh, I had played them in the States before arriving in Peru in 1979, and I continued to see them there when I went back to spend summers with my dad. My dad also sent me a copy of Dungeons & Dragons, the red box Basic set, as a birthday present. I already read like crazy: a book a day, usually science fiction or fantasy or murder mysteries. And I don't mean just Hardy Boys or kids' books; those, yes, but also Theodore Sturgeon and Isaac Asimov and Robert Ludlum. I was indiscriminate. Of course, I started to create worlds, to draw little maps. Thus began my course of study: I wanted to play those videogames that weren't making it to Peru. Pengo and Q*Bert and others weren't in the list of the ten or so games within a few miles of where I lived. In fact, there were so few that I can still recite the list of what was available: Spider at the rotisserie chicken place; Asteroids, Gorf, Berzerk, and Star Castle at the mall. On the bus we played the Game & Watch series: the dual-screen Donkey Kong and Donald & Mickey. I had a Casio game watch, the one with the falling triangle blocks where you had to make a pyramid. With the help of videogame magazines, I made board game versions of the videogames I didn't have access to. Porting a digital game to the world of tabletop play taught me the most basic thing: that games can manifest in many ways. Pengo could be decomposed into turn-based strategy. AI could be mimicked with dice or simple rules. I started out like any other apprentice in the arts, by copying things.
On my first original game, one called Jungle Climb, I basically took the ideas from the various Donkey Kong games and drew a crude vertical platformer board. You moved a space at a time, and dodged just like you did on the LCD Game & Watch games... which got pretty dull pretty quickly in a tabletop setting. I then started to challenge myself on the visual front; Egyptian Graverobber required you to play against another player who controlled all the "AI" monsters, and to try to get down to grab all the treasures and then escape. It still used basically the same mechanics. It took several games, but eventually I took on the challenge of actually trying to create something with rules of my own invention. Based in part on the massive Gary Jennings novel Aztec (a decidedly mature read for someone who was probably thirteen at the time), The Hunt for the Treasure of Quetzalcoatl spanned a dozen tall thin boards, with countless enemies, a randomized event generator from shuffled event decks, and a randomized quest order based on drawing cards from a separate deck. It took hours to play, and supported a bunch of kids all playing different creatures and monsters. I look back on it now, and with hindsight, I say to myself "wow, it's almost as if I knew at age 12 that I was going to be a game designer as an adult." But that's a lie. I thought I was going to be a writer or teacher, you see. Everyone in my family had always told me so. I started to do things like take existing games and make revamped versions, to try to improve them. One in particular was a remake of TSR's Dungeon! I'm really not sure that my take was any better, but there it was. I rather quickly found to my dismay that my previous games had mostly just been about level design, map design, incident decks, and movement rules. There was a whole new world at work in more sophisticated games: there was math, and statistics behind everything, stuff I just didn't understand. My try at a state-machine game based on sword fighting, creatively entitled Swordplay, was a miserable failure because just about everything led everywhere. I soon discovered the issue of "degenerate strategies" without knowing the term. This basically led me to the library. I didn't think I was "studying." For me, reading up on this stuff was just part of the game of making games. By the time I hit high school I was researching ship-to-ship combat for a game called Legal Pirates, which was actually what I turned in for a class assignment in history class. It came with an annotated bibliography; as it happens, this was not the last time I had a bibliography for a game design, even though it was never required of me again. It was around when I was thirteen that I discovered that computers would actually let me make my own games. Oh, I had started playing with a Pong home console well before moving to Peru, and we had an Atari 2600 with quite a bunch of cartridges. But it was clear, from reading Video Games and Electronic Games and Compute! and Creative Computing, that computer gaming was where the real action was. I started learning some MS-BASIC on my dad's CP/M-based Osborne 1, hacking Colossal Cave and playing ASCII versions of Pac-Man and Wari. I begged and begged, and my great-uncle got me a 16K Atari 8-bit computer. I don't have that one anymore, because I upgraded three times before I was 16, ending up with the 130XE. I still have it, along with my collection of games on floppy disks.
And so I threw myself into trying to create games again, just in a new medium. My friends and I took ourselves very seriously. We actually called ourselves a company, and we put a copyright symbol by the company name (which is legally nonsensical, of course). It took a few tries, but I made a game called Orion. And it was actually fun. It was imitative, for sure, featuring light cycles and spaceships. It consisted of linked games, so you could keep score of who won in each of the challenges. You can actually trace the progression of my programming skill over the course of the games: the first one was the light cycles; in the second, I could move ships vertically but not freely; the third had free movement but not independent shots; and so on. I even had the brainstorm to bring the kids' game of Capture the Flag into video game form. After five linked games, I felt like I was a game developer. I wrote a second one, a pretty terrible one where you flew around over a moonscape and shot down aliens before they reached the ground, while dodging explosive satellites. We managed to sell a copy of that one to a friend, in a Ziploc baggie. But then I stopped. I finished high school. I went to college. I thought I would go be that writer, teacher, artist. I studied poetry, and music, and art, but this time "for real." I even got an MFA in poetry. A sheepskin doesn't make a poet, it turns out. While in college, I ran a play-by-email roleplay campaign, but otherwise didn't really do much with game development. Macintosh computers were everywhere on campus; I was out of touch with programming and found the complexities of working with the newfangled graphical interfaces impenetrable. I could help fellow students with their Pascal homework, but I couldn't put a sprite on screen. I could max out the campus high score in Crystal Quest, but I couldn't make so much as a text adventure. My roleplay campaign was presented at a conference on academic computing by the head of the computer program, but all my creative energies went to writing. I got pretty unhappy in grad school. There were academic politics. The writing that was in vogue felt utterly disconnected from most people's lived experiences to me: a sort of hermetic and self-referential body of work infatuated with other academic writers. I recall huge arguments over whether Stephen King was a more important writer than whoever we happened to be studying that week. I am pretty sure I was proven right by time. But the Internet was starting to boom. To stay in touch with friends, we started using email. And after email, a friend pointed out that there were these crazy games, reminiscent of the D&D campaigns I had run as a kid, but run over the Internet. They were called MUDs. MUDs were text-based virtual realities, but I didn't know that yet. I started out playing them, then in less than a year, making them. I could use the writing skills I had acquired for doing MUD development. And MUDs were communities. Managing them, I had to study politics and sociology. The result was that the industry knocked at my door. And so I got lucky, helping lead Ultima Online by the time I was 25. I say luck, and it was indeed luck. But it also happened because we dreamt of fantastic worlds and the future of cyberspace. That wasn't something that we were equipped to build, but we tried anyway. Even on a giant project like that, I still found myself drawing little maps with pencil.
They don't look that different from the ones from grade school, if I am honest with myself. In some ways, it feels like I ran to stand still. And as I ran, I ran with more ambition, because you have to challenge yourself; you have to beat the boss. By the time I did Star Wars Galaxies, I was inventing new technology around procedural generation techniques to do something that wasn't quite possible: ship a 4 gigabyte game on a CD. I was teaching myself all the disciplines of all my colleagues, so I could do things like all of the interface design for the game. I finally understood those mathematics behind everything, and now I was trying to turn them into magic, to allow other people to live improbable lives in impossible places. I came to see games as gifts. I have a daughter. She lives with Type 1 diabetes. When she was young, I made a videogame for her called Watersnake. The snake lives under the water, and the landscape scrolls by, all starfish with cute eyes and seaweed. The snake is always drifting down. If it hits the bottom, well, that's a seizure and coma and possible death from a hypoglycemic event. If it goes up too high, it pokes above the water where it can't breathe, and suffers slow damage that can never be healed, equivalent to the slow damage caused by constant high blood sugars, the slow neuropathies and circulatory damage that occur. You have to toss food to the snake (cupcakes, steaks, pizza, fried chicken, juice boxes: kids' foods, things that she would want to eat herself), and you do it in order to pick up prizes under the water. Each food uses the real-world glycemic index of that food to cause the snake to swim upwards and then slowly come down. Fast carbohydrates cause the snake to shoot up to the surface and possibly the sky; slower proteins and fats cause gentle arcs. Watersnake was a gift to my daughter. I have a mother. She always worries that I will forget the cultures from which I sprang, my heritage. I made a game called Andean Bird, one of the very first "art games," for her. In it you fly over the littoral islands off the coast of Peru, in the form of a sea bird of some sort. You fly, and you experience the wind and the sunrise and the sunset, and you listen to music and you flap your wings and read a small poem about the ways in which our memories of cultures and heritages erode. Andean Bird was a gift to my mother. I realize now that games themselves have been my teachers, all this time. There came a moment when I realized it was my turn to teach. After all, you take turns in games. The result was a book called A Theory of Fun for Game Design, where I tried to share back what I had learned by ranging widely over other fields. My tools for making abstruse topics easy to swallow were the same little cartoons that I drew when I was twelve. But now, of course, I take it all so, so seriously. I have a tall shelf reserved just for books that are about games, and for fields that impinge directly on the sorts of games I make. Books about hypertext, books about virtual law. Books about industry history, and books about cultural anthropology. Books about the way in which virtualizing our world provides opportunities for constant surveillance, and books about how societies find ways out of pickles like that. Books about chance, and books about economics, and books about cognition, and yes, still books about poetry.
Because games deserve to be taken seriously, and players deserve to be taken seriously, and most of the worst mistakes I have made over my career, the worst game design mistakes, have happened because I failed to do that. Ultimately, what I do deserves to be taken seriously. So I set out to help the world create their own games, without having to go through that study process. I created a platform called Metaplace which was intended to democratize the creation of online worlds, so that we could get back the creative explosion of them that had existed in the days before World of Warcraft. Spoiler: it didn't work. But working to create tools was yet another new design challenge, another new way to look at the problem. Even though the platform didn't do what we hoped, people still made amazing things. Many of our best users are developers in the industry now, some of them on award-winning titles. The platform hosted a President speaking by video, arcade games and Nordic myth and lectures and parties and games about 9/11 and games about fuzzballs. In the end, though, you really can't skip past the learning, I think. Those who seem to (say, the lucky ones who lead a major title when they are 23) do so because they are learning in public, running forward like mad, because it is their passion. It's their art. Yes, I said "art." The world has changed a lot since I started. Now everyone plays games. I've made some for that "everyone," like Island Life, My Vineyard, and Jackpot Trivia. There is also now a bit of a science to making games; more is understood about verbs and loops and arcs and grammars, and I helped that to happen. There are even classes inside virtual worlds, and you can go to a games program and learn how to make games from an actual teacher. And if you want to get creative with games, you don't have to know how to make them yourself anymore. The games themselves are canvases and brushes, tools of creativity in their own right, and yeah, I helped make that happen too. Lately, it has brought me all full circle. I'm back to working with cards and tokens and cluttered tabletops these days. It's fun and challenging to work with few moving parts, without the crutches, but also with a limitless field of possibility, the way it was when I was thirteen. It's fun to be able to go back to the prototype, back to experimentation, back to the heart of design. With a tad more confidence than before, perhaps, but never with too much. I know what pitfalls are out there now, after all. But I also know that one learns from failures, from trying. That you dive into the thicket in order to understand it. In the end, games connect us and teach us. They carry us from the simple to the complex; but only if we are willing to play. Willing to play in our lives, willing to play with learning, willing to play and challenge ourselves. This is the race we run. We are our own opponent. It's not a bad thing to be a kid inside, if we never stop learning and never stop in that process of slowly growing up. If we as game designers sometimes feel like we don't fit into society, well, it's because games form cultures, all the way in our youth, and sometimes someone is needed who can stand outside the culture and impart lessons. That is, in a sense, your cultural heritage. You probably grew up with games. You can make them, and not be ashamed of it. You can love them, and not be ashamed of it.
You can look at them, see their flaws, the ways in which people misuse them for exclusion, and work to make a change. Every year from now on, games will be a larger part of our world. They will change society as a whole. Learning to design them will prepare you for this new tech-mad planet. It's a career. It's a long road. You had better start running now. Read more anecdotes, stories, and game design tips from Raph Koster's Postmortems. Available on Amazon.
      By khawk
3. Aaron is hosting an AMA in the GameDev.net Business and Law forum. Click here to participate! "$100. Gone." Jonas leaned back in his chair, staring at his screen in disbelief. His social media ads had failed. A few weeks before, he had launched the beta for his first game, Startup Company, and planned to use the ads to drive pre-release sales, but to no avail. Frustrated and out $100, Jonas started looking for another marketing method, one that could successfully generate the excitement and sales he needed for Startup Company's launch. And that's when he found influencer marketing. His plan was simple: gather a list of YouTube and Twitch influencers, send them free Startup Company keys, cross his fingers, and hope they play it on stream/video. After hours of searching and sending 500+ emails, Jonas waited. The result? The game took off. Within two weeks of its launch, hundreds of influencers were playing Startup Company and sharing it with their viewers. His success began to snowball: as more people started playing the game, more content creators started making videos about it. With the help of those creators, Startup Company sold over 50,000 copies within its first two weeks on Steam. Jonas had made a hit. After seeing successes like Startup Company's, many game devs have begun looking at Twitch influencer marketing as a means of spreading their game across the gaming community. The only problem? They have no idea how to start. The world of Twitch influencer marketing is frightening. But by educating yourself on the platform and learning the proper methods for conducting sponsorships, you can use Twitch to achieve your sales goals just like Jonas. But before you do anything....
1. You must formulate detailed goals. To succeed on Twitch, you have to know why you want to work with influencers in the first place. Are you trying to... Drive beta users for QA testing? Collect feedback? Generate hype around your launch? Develop a tight-knit community? Promote a new patch/feature? Or blast your game to as many people as possible? Be sure to set your goals early. They'll provide a framework for the rest of the campaign you'll build shortly.
2. Next, set a budget. How much money can you realistically spend promoting your game? Your budget should reflect your goals: if you want to maximize awareness around your launch, you'll have to hire more influencers than someone looking to drive a few beta users. We'll talk more about promotion strategies and pricing shortly. But for now, go ahead and map out your available funds.
3. Now brainstorm promotion ideas and their requirements. Many game devs think there's only one way to work with Twitch influencers: send out free keys and hope for the best. Don't get me wrong, that strategy will work occasionally (just look at Jonas). But if you want to run long-lasting campaigns that help you reach your specific goals, you'll have to go deeper. There are thousands of ways to promote your game on Twitch, too many to list. But here are a few to jog your mind: Sponsoring an event between streamers from the same Twitch community (e.g. the "Binding of Isaac" game directory) would work great for developing your game's community within a tight-knit group. Paying a large streamer to play your game for 1-2 hours would allow you to generate brand awareness, hype an upcoming launch, and/or increase sales. You could even give them a discount code to share with their viewers if your goals are sales focused.
Offering social media promotion to streamers in exchange for on-stream promotion could be a great way to generate buzz on a low budget. On top of promotion ideas, you'll also need to plan the smaller aspects of your promotions. For instance, do you want your streamer(s) to: Place your branded graphic in their info section? (A streamer's "info section" is a small section below their stream where they place links to social media pages, gear lists, and, most importantly, sponsored graphics, like in the image above.) Post timed discount codes in their chat? (Most chat bots have this capability, so ask your streamer which one they prefer.) Promote sponsored content on their social media channels (e.g. post to Twitter announcing your partnership)? This is your time to get creative. The more engaging, entertaining, and easy your promotion ideas, the faster you'll reach your goals.
4. Gather a list of streamers. After you've set your goals, defined a budget, and planned a promotion strategy, it's time to find the streamers who will spearhead your campaign. Streamer delivering sponsored content to their viewers, circa 2018. ...but before you start searching, it's important you understand some key Twitch influencer marketing metrics: Followers: How many users have chosen to see a streamer's broadcasts in their "Following" list. Average Concurrent Viewership (ACV): The average number of viewers in a streamer's channel. Follower Growth: How many followers a streamer is gaining daily. This number should always be positive. Monthly Impressions: The number of unique visits a streamer had on their broadcasts throughout the month. Engagements: The number of chat messages sent during a given stream or over a period of days or months. The higher the engagement, the better. ACV is the main determinant of how much money you have to pay a streamer for sponsored content: as their ACV increases, so must your budget (generally). There are a few ways you can discover new streamers and measure their analytics: 1. Do it manually. Head to Twitch, click on a game, and start watching streamers that pique your interest. Measure how many viewers they receive on a daily basis and how many followers they gain. Observe how active and positive their chat rooms are. Determine whether you like their personalities. If everything matches up with your goals and your budget, you'll know the streamer is a good fit to promote your game. This method is pretty monotonous, but it can work if you're just starting out. 2. Use a tool. Twinge.tv is great for discovering new streamers and viewing their metrics. Or, if you're looking for something more powerful, PowerSpike is a good option. It has all the metric measurement features of Twinge and more. The platform also allows you to post a "campaign" to a marketplace where streamers can apply (like a job board), which is great if you don't feel like manually searching for streamers. Full transparency: I work with PowerSpike, so I'm biased towards our platform, but any tool will work for your needs. Once you have a list of potential streamers...
5. Find their contact information. If you manually searched for your list of streamers, you'll have to manually find each of their points of contact. There are a few common places you can look for contact info: 1. The info section.
This is where most streamers link to their emails or Discord servers. If a streamer's info section is crowded, just hit Control + F and search for "@," "gmail," or "email." If nothing comes up, you'll have to look elsewhere. 2. Twitter descriptions. If the contact info isn't in their info section, there's a good chance they've linked it in their Twitter bio. You can usually find a streamer's Twitter account from their info section. If it's not there, however, you can Google "[streamer name] + Twitter" and (if they have an account) it will appear.
6. Send a sponsorship proposal. We're finally getting to the good stuff. A "proposal" is an email that introduces you to a streamer and informs them of your sponsorship offer. It usually acts as your first impression, so it's important to get it right. Here's the process I use to write proposals for custom-managed campaigns at PowerSpike: Greet the streamer and tell them a bit about yourself and your game. Briefly mention how you discovered their stream. Make it personal. Next, tell them you want to send them a free copy of your game and let them know you want to sponsor them. Give a brief description of your promotion idea. Then, provide an offer for how much you'd pay them for completing the sponsorship. Let them know when you're looking to start the deal. Lastly, encourage ongoing communication by inviting them to a short voice call to further discuss the deal. Once your proposal is completed, send it to the streamer on Discord, Twitter, or email. Then wait. If the streamer accepts your proposal, great! You can move on to the next step. If they want to negotiate your price or requirements, that's fine too. Talk it out with them. Be honest about what you're able to offer and how far you can go in terms of pricing. If the offer goes out of your range or they decline to accept, it's no big deal: thank them for taking the time and move on.
7. Send the necessary deal and promotion materials. Once a streamer accepts your proposal, there are only a few things left to do: If money is involved, send a contract. (You can skip this step if you're using PowerSpike.) Set a time and date for them to complete the sponsorship. It's best to let them choose this time, but don't hesitate to propose your own time frame if it's important. Send the necessary resources (e.g. game keys, branded info section graphics, tracking links, documents that restate your requirements, etc.). Lastly, ensure the streamer knows to include #ad or #sponsored in their stream titles or social media posts during sponsored content. If you're unsure whether this FTC rule applies to your sponsorship, more info can be found here. Almost done!
8. Watch the sponsorship. There are several reasons why you'd want to watch your sponsored content live: Viewers like to interact with devs. You'll make them feel like they're a part of your project by talking with them in the chat, and that's cool. You can collect feedback and answer questions. The streamers and the viewers will know you care. Just be sure you aren't micromanaging from the chat. Let your streamers do their thing while you interact with their communities.
9. Record results, pay the streamer, and restart. It's done. And now it's time to measure the results. How many clicks did your website get? How many game copies did you sell? How much feedback did you receive? Did the streamer provide high-quality content? Were they professional?
Did you set the grounds for an ongoing relationship? And most importantly: did you achieve the goals you set in step one? I hope so. But if not, you can always learn from your mistakes and try again later. Once all your requirements have been fulfilled, you can pay your streamers and restart the process! By now, you should have a great understanding of how you can sponsor Twitch streamers to achieve your marketing goals as a game developer. To quickly recap the process: Formulate your goals. Set your budget. Brainstorm promotion ideas. Gather a list of streamers. Find their contact information. Reach out and propose the promotion ideas and sponsorship offer. Send the necessary deal and promotion materials if they accept your offer. Observe the sponsored content. Record results, pay the streamer, and restart. And that's it. Good luck out there! If you're interested in trying PowerSpike for free to kickstart your influencer marketing efforts, feel free to DM me and I'll help you out! Originally posted on Medium at https://medium.com/@aaronmarsden/a17045c32611.
4. Byron Atkinson-Jones is a game designer, writer, speaker, and teacher from the United Kingdom (Byron's Twitter). He has been in the games industry for 21 years and continues to expand his own knowledge, as well as that of others, on the art of video game design. He's worked on a number of games, including FIFA, Football Manager, and NHL. He is also a tutor in his spare time and has students of various ages from around the world. Want to know the best ways to get a job at Ubisoft? Or are you unsure which field of game design you should become an expert in? In this interview, Byron discusses the best ways for young video game designers to get into the games industry. Hi Byron, thanks a lot for speaking to me. Firstly, did you study game design at university? There weren't any games courses when I went to university - I did computer science. Has your degree been important for you in order to work in the industry? If you were studying now, would you do a game design course instead? The degree is the first foot in the door of most companies; it certainly helped me, as the job I was going for required a degree. I wouldn't do a games course these days; I've not really been that impressed by the courses I've seen up close. Do you think it is difficult for young wannabe game designers to get into the industry? I guess a degree isn't enough; they need to do something that makes them stand out, right? Starting out as a designer is tough enough anywhere; unless you've got a proven track record of published games, it's a hard call to allow somebody very new to be at the helm of a product that's going to cost a lot of money. Most designers I knew got in via a different route, such as QA. What would you recommend to university graduates trying to make their way in the industry? Make as many games as they can to add to their portfolio; it's important to finish those games and get them in front of as many people as possible. It's never been easier to do that now that we have tools like Unity. I guess they don't necessarily have to show they can come up with any original ideas, do they? Only that they can do what is likely to be required of them at a studio? Originality isn't necessary, but the ability to commit to and finish a game is. When I was interviewing for a AAA studio (as in, being the one interviewing potential candidates), I was always more impressed by those who came in with games they were making, either by themselves or with others. Are there certain elements of game design that are more in demand than others? For example, should a student concentrate on one element more than others in order to get a step ahead? No, early on in their careers they should be generalists. Chances are they are not going to get to work on what they want from day one, so why restrict what you can apply for? So it is best to narrow your field as you grow into the industry, I guess? Although it is important to have a broad range of skills? Better to go with the flow; see where your career takes you. I never imagined when I started out as a coder that I would end up doing stand-up comedy, for instance. I guess young wannabe game designers shouldn't put so much pressure on themselves then? Yeah, totally, no need to rush! Looking back, is there anything different you would have done in your career? No, I'm pretty happy with the way it's turned out. Is the teaching going well? I love teaching - it's amazing seeing somebody who thinks they can't make a game leave at the end of the week having made a game.
Could you tell me a little bit more about the course you offer? It is a course we do in various locations around London. The class size is usually 22 people, mostly in the age range of 16 to 22, and it's open to all. You said earlier that you have not been that impressed with game design courses that you have seen; why is this the case? In general, I wouldn't recommend a games course, mainly because games courses are in their relative infancy and the wider world hasn't caught up - for instance, what if you try to get a non-games job? There could be bias against you if the recruiter doesn't consider a games course a 'real' degree. Of course that bias is ridiculous, but it's a possibility currently. Also, as with all universities, the quality level varies vastly. This is down to funding and how integrated the university is with the industry. This is perhaps something we have to change as an industry. I guess the games industry is constantly changing, so the lecturers themselves also need to continue learning in order to stay up to date. It is not like a history lecturer, for example, teaching about Ancient Greece. That's one aspect, yes. Any other reasons why you think the quality is lacking sometimes? It's complicated. I'm sure they will get there, but at the moment a lot of work needs to be done. Would you be able to recommend a potential route for a student who wants to work for Ubisoft, for example? That's a tough one. The best thing is to look at what they are after. What current jobs do they have? Also, try to meet up with them when they attend games conferences like Develop, GDC, etc. Nothing beats meeting the actual people doing the recruiting and being able to ask them questions. Would you recommend unpaid internships so they can get their foot in the door? Never work for free, never. If somebody has an unpaid position, run a mile. Everybody should be paid for their work, be it actual money or a revenue share in the product. I know it seems like a good way to get experience, but it isn't. Have you ever been tempted to work as a developer for casino games? Would you ever advise a game designer to work for an online casino games company to build experience, or is that completely different? Very early in my career I worked as a coder on slot machines, and it was a fundamentally toxic environment to work in. It was a completely male-dominated workplace; it became Lord of the Flies very rapidly and just was not a pleasant work environment. As a result, I try not to work in male-only environments. It's not really game design working on slot machines - it's more about statistics and art (to make them look flashy). So you wouldn't recommend it as a stepping stone? Personally - no.
      By khawk
5. This is an excerpt from the book Mastering C++ Game Development, written by Mickey Macdonald and published by Packt Publishing. With this book, learn high-end game development with advanced C++17 programming techniques. One of the most common uses for shaders is creating lighting and reflection effects. Lighting effects achieved through the use of shaders help provide a level of polish and detail that every modern game strives for. In this post, we will look at some of the well-known models for creating different surface appearance effects, with examples of shaders you can implement to replicate the discussed lighting effects.
Per-vertex diffuse
To start with, we will look at one of the simpler lighting vertex shaders, the diffuse reflection shader. Diffuse is considered simple since we assume that the surface we are rendering scatters the light in all directions equally. With this shader, the light makes contact with the surface and slightly penetrates before being cast back out in all directions. This means that some of the light's wavelengths will be at least partially absorbed. A good example of what a diffuse shader looks like is matte paint: the surface has a very dull look with no shine. Let's take a quick look at the mathematical model for a diffuse reflection. This reflection model takes two vectors: one is the direction from the surface contact point to the initial light source, and the second is the normal vector of that same surface contact point. This would look something like the following: It's worth noting that the amount of light that strikes the surface partially depends on the surface's orientation relative to the light source: the amount of light reaching a single point is at its maximum along the normal vector, and at its lowest when the incoming light is perpendicular to the normal vector. Dusting off our physics knowledge toolbox, we can express the amount of light making contact with a point as the dot product of the point's normal vector and the incoming light vector. This can be expressed by the following formula: \(LightDensity = SourceVector \cdot NormalVector\) The source and normal vectors in this equation are assumed to be normalized. For example, with normalized vectors, light arriving at 60 degrees to the normal gives a dot product of 0.5, so the point receives half the light it would from a head-on source. As mentioned before, some of the light striking the surface will be absorbed before it is re-cast. To add this behavior to our mathematical model, we can add a reflection coefficient, also referred to as the diffuse reflectivity. This coefficient value becomes the scaling factor for the incoming light. Our new formula specifying the outgoing intensity of the light looks like the following: \(OutgoingLight = DiffuseCoefficient \times LightDensity \times (SourceVector \cdot NormalVector)\) where LightDensity here is the intensity arriving from the light source (the LightSourceIntensity uniform in the shader that follows). With this new formula, we now have a lighting model that represents an omnidirectional, uniform scattering. OK, now that we know the theory, let's take a look at how we can implement this lighting model in a GLSL shader.
OK, now that we know the theory, let's take a look at how we can implement this lighting model in a GLSL shader. The full source for this example can be found in the Chapter07 folder of the GitHub repository, starting with the vertex shader shown as follows:

#version 410

in vec3 vertexPosition_modelspace;
in vec2 vertexUV;
in vec3 vertexNormal;

out vec2 UV;
out vec3 LightIntensity;

uniform vec4 LightPosition;
uniform vec3 DiffuseCoefficient;
uniform vec3 LightSourceIntensity;
uniform mat4 ModelViewProjection;
uniform mat3 NormalMatrix;
uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
    vec3 tnorm = normalize(NormalMatrix * vertexNormal);
    vec4 CameraCoords = ModelViewMatrix * vec4(vertexPosition_modelspace, 1.0);
    vec3 IncomingLightDirection = normalize(vec3(LightPosition - CameraCoords));
    LightIntensity = LightSourceIntensity * DiffuseCoefficient *
                     max(dot(IncomingLightDirection, tnorm), 0.0);
    gl_Position = ModelViewProjection * vec4(vertexPosition_modelspace, 1.0);
    UV = vertexUV;
}

We'll go through this shader block by block. To start out, we have our attributes: vertexPosition_modelspace, vertexUV, and vertexNormal. These will be set by our game application, which we will look at after we go through the shader. Then we have our out variables, UV and LightIntensity. These values will be calculated in the shader itself. Finally, we have our uniforms. These include the values needed for our reflection calculation, as discussed, as well as all the necessary matrices. Like the attributes, these uniform values will be set by our game.

Inside the main function of this shader, the diffuse reflection is calculated in camera-relative coordinates. To accomplish this, we first normalize the vertex normal by multiplying it by the normal matrix and storing the result in a vec3 variable named tnorm. Next, we convert the vertex position from model space to camera coordinates by transforming it with the model view matrix. We then calculate the normalized incoming light direction by subtracting the vertex position in camera coordinates from the light's position. Next, we calculate the outgoing light intensity using the formula we went through earlier. A point to note here is the use of the max function: it handles the situation where the angle between the light direction and the normal is greater than 90 degrees, that is, when the light is coming from behind the surface. Since in our case we don't need to support this situation, we simply use a value of 0.0 when it arises. To close out the shader, we transform the vertex position to clip space with the model view projection matrix and store it in the built-in output variable gl_Position. We also pass along the UV of the texture, unchanged, although we are not actually using it in this example.

Now that we have the shader in place, we need to provide the values needed for the calculations. We do this by setting the attributes and uniforms. We built an abstraction layer to help with this process, so let's take a look at how we set these values in our game code. Inside the GamePlayScreen.cpp file, we set these values in the Draw() function. I should point out that this is for the example; in a production environment, you would only want to set the values that actually change inside the draw loop, for performance reasons.
Since this is an example, I wanted to make it slightly easier to follow:

GLint DiffuseCoefficient = shaderManager.GetUniformLocation("DiffuseCoefficient");
glUniform3f(DiffuseCoefficient, 0.9f, 0.5f, 0.3f);

GLint LightSourceIntensity = shaderManager.GetUniformLocation("LightSourceIntensity");
glUniform3f(LightSourceIntensity, 1.0f, 1.0f, 1.0f);

glm::vec4 lightPos = m_camera.GetView() * glm::vec4(5.0f, 5.0f, 2.0f, 1.0f);
GLint lightPosUniform = shaderManager.GetUniformLocation("LightPosition");
glUniform4f(lightPosUniform, lightPos[0], lightPos[1], lightPos[2], lightPos[3]);

glm::mat4 modelView = m_camera.GetView() * glm::mat4(1.0f);
GLint modelViewUniform = shaderManager.GetUniformLocation("ModelViewMatrix");
glUniformMatrix4fv(modelViewUniform, 1, GL_FALSE, &modelView[0][0]);

glm::mat3 normalMatrix = glm::mat3(glm::vec3(modelView[0]),
                                   glm::vec3(modelView[1]),
                                   glm::vec3(modelView[2]));
GLint normalMatrixUniform = shaderManager.GetUniformLocation("NormalMatrix");
glUniformMatrix3fv(normalMatrixUniform, 1, GL_FALSE, &normalMatrix[0][0]);

// MatrixID holds the location of the ModelViewProjection uniform,
// obtained earlier via shaderManager.GetUniformLocation("ModelViewProjection").
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &m_camera.GetMVPMatrix()[0][0]);

I won't go through each line since I am sure you can see the pattern. We first use the shader manager's GetUniformLocation() method to return the location of the uniform. Next, we set the value of that uniform using the OpenGL glUniform*() call that matches the value type. We do this for all the uniform values needed. We also have to set our attributes, and as discussed at the beginning of the chapter, we do this between the compilation and linking steps. In this example, we set these values in the OnEntry() method of the GamePlayScreen class:

shaderManager.AddAttribute("vertexPosition_modelspace");
shaderManager.AddAttribute("vertexUV");
shaderManager.AddAttribute("vertexNormal");

That takes care of the vertex shader and the values passed into it, so next let's look at the fragment shader for this example:

#version 410

in vec2 UV;
in vec3 LightIntensity;

// Output data
out vec3 color;

// Values that stay constant for the whole mesh.
uniform sampler2D TextureSampler;

void main()
{
    color = vec3(LightIntensity);
}

For this example, our fragment shader is extremely simple. To begin, we have the in values for our UV and LightIntensity, and we will only use LightIntensity this time. We then declare our out color value, specified as a vec3. Next, we have the sampler2D uniform that we use for texturing, but again we won't be using this value in the example. Finally, we have the main function. This is where we set the final output color by simply passing LightIntensity through to the next stage in the pipeline.

If you run the example project, you will see the diffuse reflection in action. The output should look like the following screenshot. As you can see, this reflection model works well for surfaces that are very dull, but has limited use in a practical environment.
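One caveat about the uniform-setting code above, worth flagging before moving on: building the normal matrix by simply taking the upper-left 3x3 of the model view matrix is only correct while the transform is limited to rotations, translations, and uniform scaling; under non-uniform scaling, the normals get skewed. A common, more robust alternative is the inverse transpose of the model view matrix. Here is a minimal sketch using GLM, assuming the same modelView variable as in the snippet above (the MakeNormalMatrix helper is my own name, not from the book):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>
#include <glm/gtc/type_ptr.hpp>

// Derive the normal matrix as the inverse transpose of the model view
// matrix, which remains correct even under non-uniform scaling.
glm::mat3 MakeNormalMatrix(const glm::mat4& modelView)
{
    return glm::mat3(glm::inverseTranspose(modelView));
}

// Usage, mirroring the example above:
// glm::mat3 normalMatrix = MakeNormalMatrix(modelView);
// glUniformMatrix3fv(normalMatrixUniform, 1, GL_FALSE, glm::value_ptr(normalMatrix));

For the identity model matrix used in this example, the two approaches produce the same result, so this is purely a robustness note.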
Next, we will look at a reflection model that will allow us to depict more surface types.

Per-vertex ambient, diffuse, and specular

The ambient, diffuse, and specular (ADS) reflection model, also commonly known as the Phong reflection model, provides a method of creating a reflective lighting shader. This technique models the interaction of light on a surface using a combination of three different components. The ambient component models the light that comes from the environment; it is intended to model what would happen if the light were reflected many times, so that it appears to emanate from everywhere. The diffuse component, which we modeled in our previous example, represents an omnidirectional reflection. The last component, the specular component, is meant to represent reflection in a preferred direction, providing the appearance of a light glare or bright spot. This combination of components can be visualized using the following diagram: Source: Wikipedia

This process can be broken down into separate components for discussion. First, we have the ambient component, which represents the light that illuminates all surfaces equally and reflects uniformly in all directions. This lighting effect does not depend on the incoming or outgoing light vectors, since it is uniformly distributed, and it can be expressed simply by multiplying the light source's intensity by the surface reflectivity. This is shown in the mathematical formula \(I_a = L_aK_a\).

The next component is the diffuse component we discussed earlier. The diffuse component models a dull or rough surface that scatters light in all directions. Again, this can be expressed with the mathematical formula \(I_d = L_dK_d(s \cdot n)\).

The final component is the specular component, and it is used to model the shininess of the surface. This creates a glare or bright spot that is common on surfaces with glossy properties. We can visualize this reflection effect using the following diagram:

For the specular component, ideally we would like the reflection to be at its most apparent when viewed aligned with the reflection vector, and then to fade off as the angle increases or decreases from this alignment. We can model this effect using the cosine of the angle between our viewing vector and the reflection vector, raised to some power, as shown in this equation: \((r \cdot v)^p\). In this equation, p represents the specular highlight, the glare spot. The larger the value of p, the smaller the spot will appear, and the shinier the surface will look. After adding the values that represent the reflectivity of the surface and the specular light intensity, the formula for calculating the specular effect for the surface looks like so: \(I_s = L_sK_s(r \cdot v)^p\).

So, now, if we take all of our components and put them together in one formula, we come up with \(I = I_a + I_d + I_s\), or, breaking it down further, \(I = L_aK_a + L_dK_d(s \cdot n) + L_sK_s(r \cdot v)^p\).

With our theory in place, let's see how we can implement this in a per-vertex shader, beginning with our vertex shader as follows:

#version 410

// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec2 vertexUV;
in vec3 vertexNormal;

// Output data; will be interpolated for each fragment.
out vec2 UV;
out vec3 LightIntensity;

struct LightInfo
{
    vec4 Position; // Light position in eye coords.
    vec3 La; // Ambient light intensity
    vec3 Ld; // Diffuse light intensity
    vec3 Ls; // Specular light intensity
};
uniform LightInfo Light;

struct MaterialInfo
{
    vec3 Ka; // Ambient reflectivity
    vec3 Kd; // Diffuse reflectivity
    vec3 Ks; // Specular reflectivity
    float Shininess; // Specular shininess factor
};
uniform MaterialInfo Material;

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 ModelViewProjection;

void main()
{
    vec3 tnorm = normalize(NormalMatrix * vertexNormal);
    vec4 CameraCoords = ModelViewMatrix * vec4(vertexPosition_modelspace, 1.0);
    vec3 s = normalize(vec3(Light.Position - CameraCoords));
    vec3 v = normalize(-CameraCoords.xyz);
    vec3 r = reflect(-s, tnorm);
    float sDotN = max(dot(s, tnorm), 0.0);
    vec3 ambient = Light.La * Material.Ka;
    vec3 diffuse = Light.Ld * Material.Kd * sDotN;
    vec3 spec = vec3(0.0);
    if (sDotN > 0.0)
        spec = Light.Ls * Material.Ks *
               pow(max(dot(r, v), 0.0), Material.Shininess);
    LightIntensity = ambient + diffuse + spec;
    gl_Position = ModelViewProjection * vec4(vertexPosition_modelspace, 1.0);
}

Let's take a look at what is different, to start with. In this shader, we are introducing a new concept: the uniform struct. We declare two structs, one to describe the light, LightInfo, and one to describe the material, MaterialInfo. This is a very useful way of grouping the values that represent a portion of the formula into a collection. We will see how we can set the values of these struct elements from the game code shortly.

Moving on to the main function of the shader. First, we start as we did in the previous example: we calculate tnorm, CameraCoords, and the light source vector (s). Next, we calculate the vector in the direction of the viewer/camera (v), which is the negative of the normalized CameraCoords. We then calculate the direction of the pure reflection using the built-in GLSL function reflect; for normalized vectors, this is equivalent to \(r = 2(s \cdot n)n - s\). Then we move on to calculating the values of our three components. The ambient is calculated by multiplying the light's ambient intensity by the surface's ambient reflectivity. The diffuse is calculated using the light's diffuse intensity, the surface's diffuse reflectivity, and the result of the dot product of the light source vector and tnorm, which we calculated just before the ambient value. Before computing the specular value, we check the value of sDotN. If sDotN is zero, then no light reaches the surface, so there is no point in computing the specular component. If sDotN is greater than zero, we compute the specular component. As in the previous example, we use the GLSL max function to clamp the dot product at zero, keeping its value in the range [0, 1] for normalized vectors. The GLSL function pow raises the dot product to the power of the surface's shininess exponent, which we defined as p in our equation previously.

Finally, we add all three of our component values together and pass their sum to the fragment shader in the out variable LightIntensity. We end by transforming the vertex position to clip space and passing it off to the next stage by assigning it to the gl_Position variable.

For setting the attributes and uniforms needed for our shader, we handle the process just as we did in the previous example. The main difference is that we need to specify the element of the struct we are assigning when getting the uniform location.
An example would look similar to the following, and again you can see the full code in the example solution in the Chapter07 folder of the GitHub repository:

GLint Kd = shaderManager.GetUniformLocation("Material.Kd");
glUniform3f(Kd, 0.9f, 0.5f, 0.3f);

The fragment shader used for this example is the same as the one we used for the diffuse example, so I won't cover it again here. When you run the ADS example from the Chapter07 code solution of the GitHub repository, you will see our newly created shader in effect, with an output looking similar to the following:

In this example, we calculated the shading equation within the vertex shader; this is referred to as a per-vertex shader. One issue that can arise from this approach is that our glare spots, the specular highlights, might appear to warp or disappear. This is caused by the shading being interpolated rather than calculated for each point across the face. For example, a spot that was set near the middle of a face might not appear at all, because the equation was evaluated at the vertices, where the specular component was near zero.

You have just enjoyed an excerpt from the book Mastering C++ Game Development, written by Mickey Macdonald and published by Packt Publishing. Use the code ORGDB10 at checkout to get the recommended eBook for $10 only, until May 31, 2018.
      By khawk
6. Automated builds are a pretty important tool in a game developer's toolbox. If you're only testing your Unreal-based game in the editor (even in standalone mode), you're in for a rude awakening when new bugs pop up in a shipping build that you've never encountered before. You also don't want to manually package your game from the editor every time you want to test said shipping build, or to distribute it to your testers (or Steam, for that matter).

Unreal already provides a pretty robust build system, and it's very easy to use it in combination with build automation tools. My build system of choice is Gradle, since I use it pretty extensively in my backend Java and Scala work. It's pretty easy to learn, runs everywhere, and gives you a lot of powerful functionality right out of the gate. This won't be a Gradle tutorial as such, so you can familiarize yourself with how Gradle works via the documentation on their site.

Primarily, I use Gradle to manage a version file in my game's Git repository, which is compiled into the game so that I have version information in Blueprint and C++ logic. I use that version to prevent out-of-date clients from connecting to newer servers, and having the version compiled in makes it a little more difficult for malicious clients to spoof that build number, as opposed to having it stored in one of the INI files. I also use Gradle to automate uploading my client build to Steam via steamcmd.

Unreal's command line build tool is known as the Unreal Automation Tool (UAT). Any time you package from the editor, or use the Unreal Frontend tool, you're using UAT on the back end. Epic provides handy scripts in the Engine/Build/BatchFiles directory to make use of UAT from the command line, namely RunUAT.bat. Since it's just a batch file, I can call it from a Gradle build script very easily. Here's the Gradle task snippet I use to package and archive my client:

task packageClientUAT(type: Exec) {
    workingDir = "[UnrealEngineDir]\\Engine\\Build\\BatchFiles"
    def projectDirSafe = project.projectDir.toString().replaceAll(/[\\]/) { m -> "\\\\" }
    def archiveDir = projectDirSafe + "\\\\Archive\\\\Client"
    def archiveDirFile = new File(archiveDir)
    if(!archiveDirFile.exists() && !archiveDirFile.mkdirs()) {
        throw new Exception("Could not create client archive directory.")
    }
    if(!new File(archiveDir + "\\\\WindowsClient").deleteDir()) {
        throw new Exception("Could not delete final client directory.")
    }
    commandLine "cmd", "/c", "RunUAT", "BuildCookRun",
        "-project=\"" + projectDirSafe + "\\\\[ProjectName].uproject\"",
        "-noP4", "-platform=Win64",
        "-clientconfig=Development", "-serverconfig=Development",
        "-cook", "-allmaps", "-build", "-stage",
        "-pak", "-archive", "-noeditor",
        "-archivedirectory=\"" + archiveDir + "\""
}

My build.gradle file is in my project's directory, alongside the uproject file. This snippet will spit the packaged client out into [ProjectDir]\Archive\Client.

For the versioning, I have two files that Gradle directly modifies. The first, a simple text file, just has a number in it. In my [ProjectName]\Source\[ProjectName] folder, I have a [ProjectName]Build.txt file with the current build number in it.
Additionally, in that same folder, I have a C++ header file with the following in it:

#pragma once

#define [PROJECT]_MAJOR_VERSION 0
#define [PROJECT]_MINOR_VERSION 1
#define [PROJECT]_BUILD_NUMBER ###
#define [PROJECT]_BUILD_STAGE "Pre-Alpha"

Here's my Gradle task that increments the build number in that text file and then replaces the value in the header file:

task incrementVersion {
    doLast {
        def version = 0
        def ProjectName = "[ProjectName]"
        def vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Build.txt")
        if(vfile.exists()) {
            String versionContents = vfile.text
            version = Integer.parseInt(versionContents)
        }
        version += 1
        vfile.text = version
        vfile = new File("Source\\" + ProjectName + "\\" + ProjectName + "Version.h")
        if(vfile.exists()) {
            String pname = ProjectName.toUpperCase()
            String versionContents = vfile.text
            versionContents = versionContents.replaceAll(/_BUILD_NUMBER ([0-9]+)/) { m ->
                "_BUILD_NUMBER " + version
            }
            vfile.text = versionContents
        }
    }
}

I manually edit the major and minor versions and the build stage as needed, since they don't need to update with every build. You can include that header in any C++ file that needs to know the build number, and I also have a few static methods in my game's Blueprint static library that wrap them so I can get the version numbers in Blueprint.

I also have some tasks for automatically checking those files into the Git repository and committing them:

task prepareVersion(type: Exec) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "reset"
}

task stageVersion(type: Exec, dependsOn: prepareVersion) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "add",
        project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Build.txt",
        project.projectDir.toString() + "\\Source\\[ProjectName]\\[ProjectName]Version.h"
}

task commitVersion(type: Exec, dependsOn: stageVersion) {
    workingDir = project.projectDir.toString()
    commandLine "cmd", "/c", "git", "commit", "-m", "\"Incrementing [ProjectName] version\""
}

And here's the task I use to actually push a build to Steam:

task pushBuildSteam(type: Exec) {
    doFirst {
        println "Pushing build to Steam..."
    }
    workingDir = "[SteamworksDir]\\sdk\\tools\\ContentBuilder"
    commandLine "cmd", "/c", "builder\\steamcmd.exe",
        "+set_steam_guard_code", "[steam_guard_code]",
        "+login", "\"[username]\"", "\"[password]\"",
        "+run_app_build", "..\\scripts\\[CorrectVDFFile].vdf", "+quit"
}

You can also spit out a generated VDF file with the build number in the build's description so that it'll show up in SteamPipe. I have a single Gradle task I run that increments the build number, checks in those version files, packages both the client and the server, and then uploads the packaged client to Steam. Another great thing about Gradle is that Jenkins has a solid plugin for it, so you can use Jenkins to set up a nice continuous integration pipeline for your game to push builds out regularly, which you absolutely should do if you're working with a team.
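As a footnote to the versioning scheme above, here is a hedged C++ sketch of how the generated header might be consumed in game code. The MYGAME_* macros stand in for the [PROJECT]_* placeholders, and the BuildVersionString helper is an illustrative name of my own, not something from the article:

#include <string>

// Illustrative stand-ins for the generated [PROJECT]_* macros above.
#define MYGAME_MAJOR_VERSION 0
#define MYGAME_MINOR_VERSION 1
#define MYGAME_BUILD_NUMBER 42
#define MYGAME_BUILD_STAGE "Pre-Alpha"

// Compose a human-readable version string, e.g. "0.1.42 (Pre-Alpha)".
// A wrapper like this is the kind of thing a Blueprint static library
// method could expose so the version is visible in Blueprint as well.
std::string BuildVersionString()
{
    return std::to_string(MYGAME_MAJOR_VERSION) + "." +
           std::to_string(MYGAME_MINOR_VERSION) + "." +
           std::to_string(MYGAME_BUILD_NUMBER) +
           " (" MYGAME_BUILD_STAGE ")";
}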
7. If you are a software developer working in the video game industry, wondering what else you could do to improve the quality of your product or make the development process easier, and you don't use static analysis - it's just the right time to start doing so. You doubt that? OK, I'll try to convince you. And if you are just looking to see what coding mistakes are common among video-game and game-engine developers, then you're, again, in the right place: I have picked the most interesting ones for you.

Why you should use static analysis

Although video-game development includes a lot of steps, coding remains one of the basic ones. Even if you don't write thousands of lines of code, you have to use various tools whose quality determines how comfortable the process is and what the ultimate result will be. If you are a developer of such tools (such as game engines), this shouldn't sound new to you.

Why is static analysis useful in software development in general? The main reasons are as follows:

Bugs grow costlier and more difficult to fix over time. One of the principal advantages of static analysis is detecting bugs at early development stages (you can find an error as the code is being written). Therefore, by using static analysis, you can make the development process easier both for your coworkers and yourself, detecting and fixing lots of bugs before they become a headache.

Static analysis tools can recognize a great variety of bug patterns (copy-paste, typos, incorrect use of functions, and so on).

Static analysis is generally good at detecting defects that defy dynamic analysis. However, the opposite is also true.

Negative side effects of static analysis (such as false positives) are usually 'smoothed out' through means provided by the developers of powerful analyzers. These means include various mechanisms of warning suppression (individually, by pattern, and so on), switching off irrelevant diagnostics, and excluding files and folders from analysis. By properly tweaking the analyzer settings, you can greatly reduce the amount of 'noise'. As my colleague Andrey Karpov has shown in the article about the check of EFL Core Libraries, tweaking the settings helps cut the number of false positives down to 10-15% at most.

But that's all theory, and you are probably interested in real-life examples. Well then, I've got some.

Static analysis in Unreal Engine

If you have read this far, I assume you don't need me telling you about Unreal Engine or Epic Games - and if you don't hold these guys in high regard, I wonder whom you do. The PVS-Studio team has cooperated with Epic Games a few times to help them adopt static analysis in their project (Unreal Engine) and fix bugs and false positives issued by the analyzer. I'm sure both parties found this experience interesting and rewarding. One of the results of this cooperation was a special flag added to Unreal Engine that allows developers to conveniently integrate static analysis into the build system of Unreal Engine projects. The idea is simple: these guys care about the quality of their code and adopt the various techniques available to maintain it, static analysis being one of them.

John Carmack on static analysis

John Carmack, one of the most renowned video-game developers, once called the adoption of static analysis one of his most important accomplishments as a programmer: "The most important thing I have done as a programmer in recent years is to aggressively pursue static code analysis."
The next time you hear someone say that static analysis is a tool for newbies, show them this quote. Carmack described his experience in this article, which I strongly recommend checking out, both for motivation and general knowledge.

Bugs found in video games and game engines with static analysis

One of the best ways to prove that static analysis is a useful method is probably through examples showing it in action. That's what the PVS-Studio team does while checking open-source projects. It's a practice that everyone benefits from:

The project authors get a bug report and a chance to fix the defects. Ideally, though, it should be done quite differently: they should run the analyzer and check the warnings on their own rather than fix them relying on someone else's log or article. It matters, if only because the authors of articles might miss some important details or inadvertently focus on bugs that aren't that critical to the project.

The analyzer developers can use the analysis results as a basis for improving the tool, as well as for demonstrating its bug-detecting capabilities.

The readers learn about bug patterns, gain experience, and get started with static analysis.

So, isn't that proof of the effectiveness of this approach?

Teams already using static analysis

While some are still pondering introducing static analysis into their development process, others have long been using and benefiting from it! These include, among others, Rocksteady, Epic Games, ZeniMax Media, Oculus, Codemasters, and Wargaming (source).

Top 10 software bugs in the video-game industry

I should point out right off that this is not some ultimate top list, but simply the bugs found by PVS-Studio in video games and game engines that I found most interesting. As usual, I recommend trying to find the bug in each example on your own first, and only then reading the warning and my comments. You'll enjoy the article more that way.

Tenth place

Source: Anomalies in X-Ray Engine

The tenth place goes to a bug in the X-Ray Engine, used by the S.T.A.L.K.E.R. game series. If you played those games, you surely remember many of the funny (and not-so-funny) bugs they had. This is especially true for S.T.A.L.K.E.R.: Clear Sky, which was impossible to play without patches (I still remember the bug that 'killed' all my saves). The analysis revealed there were indeed many bugs. Here's one of them:

BOOL CActor::net_Spawn(CSE_Abstract* DC)
{
    ....
    m_States.empty();
    ....
}

PVS-Studio warning: V530 The return value of function 'empty' is required to be utilized.

The problem is quite simple: the programmer is not using the logical value returned by the empty method, which describes whether the container is empty or not. Since the expression contains nothing but the method call, I assume the programmer intended to clear the container but called the empty method instead of clear by mistake. You may argue that this bug is too plain for a Top 10 list, but that's the nice thing about it! Even though it looks straightforward to someone not involved in writing this code, 'plain' bugs like this still appear (and get caught) in various projects.

Ninth place

Source: Long-Awaited Check of CryEngine V

Going on with bugs in game engines. This time it's a code fragment from CryEngine V. The number of bugs I have encountered in games based on this engine was not as large as in games based on the X-Ray Engine, but it turns out it has plenty of suspicious fragments too.

void CCryDXGLDeviceContext::OMGetBlendState(...., FLOAT BlendFactor[4], ....)
{
    CCryDXGLBlendState::ToInterface(ppBlendState, m_spBlendState);
    if ((*ppBlendState) != NULL)
        (*ppBlendState)->AddRef();
    BlendFactor[0] = m_auBlendFactor[0];
    BlendFactor[1] = m_auBlendFactor[1];
    BlendFactor[2] = m_auBlendFactor[2];
    BlendFactor[2] = m_auBlendFactor[3];
    *pSampleMask = m_uSampleMask;
}

PVS-Studio warning: V519 The 'BlendFactor[2]' variable is assigned values twice successively. Perhaps this is a mistake.

As we have mentioned many times in our articles, no one is safe from typos. Practice has also shown more than once that static analysis is very good at detecting copy-paste-related mistakes and typos. In the code above, the values of the m_auBlendFactor array are copied to the BlendFactor array, but the programmer made a mistake by writing BlendFactor[2] twice. As a result, the value of m_auBlendFactor[3] is written to BlendFactor[2], while the value of BlendFactor[3] remains unchanged.

Eighth place

Source: Unicorn in Space: Analyzing the Source Code of 'Space Engineers'

Let's change course a bit and take a look at some C# code. What we've got here is an example from the Space Engineers project, a 'sandbox' game about building and maintaining various structures in space. I haven't played it myself, but one guy said in the comments, "I'm not much surprised at the results". Well, we did manage to find some bugs worth mentioning, and here are two of them:

public void Init(string cueName)
{
    ....
    if (m_arcade.Hash == MyStringHash.NullOrEmpty &&
        m_realistic.Hash == MyStringHash.NullOrEmpty)
        MySandboxGame.Log.WriteLine(string.Format(
            "Could not find any sound for '{0}'", cueName));
    else
    {
        if (m_arcade.IsNull)
            string.Format("Could not find arcade sound for '{0}'", cueName);
        if (m_realistic.IsNull)
            string.Format("Could not find realistic sound for '{0}'", cueName);
    }
}

PVS-Studio warnings:

V3010 The return value of function 'Format' is required to be utilized.
V3010 The return value of function 'Format' is required to be utilized.

As you can see, ignoring a method's return value is a common problem in both C++ and C# code. The String.Format method builds the resulting string from the format string and the objects to substitute, and then returns it. In the code above, the else branch contains two string.Format calls, but their return values are never used. It looks like the programmer intended to log these messages in the same way as in the then branch of the if statement, using the MySandboxGame.Log.WriteLine method.

Seventh place

Source: Analyzing the Quake III Arena GPL project

Did I tell you already that static analysis is good at detecting typos? Well, here's one more example:

void Terrain_AddMovePoint(....)
{
    ....
    x = (v[0] - p->origin[0]) / p->scale_x;
    y = (v[1] - p->origin[1]) / p->scale_x;
    ....
}

PVS-Studio warning: V537 Consider reviewing the correctness of 'scale_x' item's usage.

The variables x and y are assigned values, yet both expressions contain the p->scale_x subexpression, which doesn't look right. It seems the second subexpression should be p->scale_y instead.

Sixth place

Source: Checking the Unity C# Source Code

Unity Technologies recently made the code of their proprietary game engine, Unity, available to the public, so we couldn't ignore the event. The check revealed a lot of interesting code fragments; here's one of them:

public override bool IsValid()
{
    ....
    return base.IsValid() &&
           (pageSize >= 1 || pageSize <= 1000) &&
           totalFilters <= 10;
}

PVS-Studio warning: V3063 A part of conditional expression is always true if it is evaluated: pageSize <= 1000.

What we have here is an incorrect range check of pageSize. The programmer must have intended to verify that the pageSize value was within the range [1; 1000], but made a sad mistake by typing the '||' operator instead of '&&'. The subexpression actually checks nothing.

Fifth place

Source: Discussing Errors in Unity3D's Open-Source Components

This place goes to a nice bug found in Unity3D's components. The article mentioned above was written a year before Unity's source code was revealed, but there were already interesting defects to find there at the time.

public static CrawledMemorySnapshot Unpack(....)
{
    ....
    var result = new CrawledMemorySnapshot
    {
        ....
        staticFields = packedSnapshot.typeDescriptions
            .Where(t => t.staticFieldBytes != null & t.staticFieldBytes.Length > 0)
            .Select(t => UnpackStaticFields(t))
            .ToArray()
        ....
    };
    ....
}

PVS-Studio warning: V3080 Possible null dereference. Consider inspecting 't.staticFieldBytes'.

Note the lambda expression passed as an argument to the Where method. The code suggests that the typeDescriptions collection could contain elements whose staticFieldBytes member could be null - hence the check staticFieldBytes != null before accessing the Length property. However, the programmer mixed up the '&' and '&&' operators. It means that no matter the result of the left expression (true/false), the right one will also be evaluated, causing a NullReferenceException to be thrown when accessing the Length property if staticFieldBytes == null. Using the '&&' operator would avoid this, because the right expression won't be evaluated if staticFieldBytes == null.

Although Unity was the only engine to make this top list twice, that doesn't stop enthusiasts from building wonderful games on it, including one(s) about fighting bugs.

Fourth place

Source: Analysis of Godot Engine's Source Code

Sometimes we come across interesting cases that have to do with missing keywords. For example, an exception object is created but never used because the programmer forgot to add the throw keyword. Such errors are found both in C# projects and C++ projects. There was a missing keyword in Godot Engine as well:

Variant Variant::get(const Variant& p_index, bool *r_valid) const
{
    ....
    if (ie.type == InputEvent::ACTION)
    {
        if (str == "action")
        {
            valid = true;
            return ie.action.action;
        }
        else if (str == "pressed")
        {
            valid = true;
            ie.action.pressed;
        }
    }
    ....
}

PVS-Studio warning: V607 Ownerless expression 'ie.action.pressed'.

In the given code fragment, it is obvious that the programmer wanted to return a certain value of the Variant type, depending on the values of ie.type and str. Yet only one of the return statements, return ie.action.action;, is written properly, while the other is missing the return keyword, which prevents the needed value from being returned and forces the method to keep executing.

Third place

Source: PVS-Studio: analyzing Doom 3 code

Now we've reached the Top 3 section. The third place is awarded to a small fragment of Doom 3's source code. As I said before, the fact that a bug may look straightforward to an outside observer, and make you wonder how anyone could make such a mistake at all, shouldn't be confusing: there really are all sorts of bugs to be found in the field...
void Sys_GetCurrentMemoryStatus(sysMemoryStats_t &stats)
{
    ....
    memset(&statex, sizeof(statex), 0);
    ....
}

PVS-Studio warning: V575 The 'memset' function processes '0' elements. Inspect the third argument.

To figure this error out, we should recall the signature of the memset function:

void* memset(void* dest, int ch, size_t count);

If you compare it with the call above, you'll notice that the last two arguments are swapped; as a result, the memory block that was meant to be cleared stays unchanged.

Second place

The second place is taken by a bug found in the code of the Xenko game engine, written in C#.

Source: Catching Errors in the Xenko Game Engine

private static ImageDescription CreateDescription(TextureDimension dimension,
    int width, int height, int depth, ....)
{
    ....
}

public static Image New3D(int width, int height, int depth, ....)
{
    return new Image(CreateDescription(TextureDimension.Texture3D,
                                       width, width, depth,
                                       mipMapCount, format, 1),
                     dataPointer, 0, null, false);
}

PVS-Studio warning: V3065 Parameter 'height' is not utilized inside method's body.

The programmer made a mistake when passing the arguments to the CreateDescription method. If you look at its signature, you'll see that the second, third, and fourth parameters are named width, height, and depth, respectively. But the call passes the arguments width, width, and depth. Looks strange, doesn't it? The analyzer, too, found it strange enough to point it out.

First place

Source: A Long-Awaited Check of Unreal Engine 4

This Top 10 list is led by a bug from Unreal Engine. Just as with the leader of "Top 10 Bugs in the C++ Projects of 2017", I knew this bug should be given first place the very moment I saw it.

bool VertInfluencedByActiveBone(
    FParticleEmitterInstance* Owner,
    USkeletalMeshComponent* InSkelMeshComponent,
    int32 InVertexIndex,
    int32* OutBoneIndex = NULL);

void UParticleModuleLocationSkelVertSurface::Spawn(....)
{
    ....
    int32 BoneIndex1, BoneIndex2, BoneIndex3;
    BoneIndex1 = BoneIndex2 = BoneIndex3 = INDEX_NONE;
    if(!VertInfluencedByActiveBone(
            Owner, SourceComponent, VertIndex[0], &BoneIndex1) &&
       !VertInfluencedByActiveBone(
            Owner, SourceComponent, VertIndex[1], &BoneIndex2) &&
       !VertInfluencedByActiveBone(
            Owner, SourceComponent, VertIndex[2]) &BoneIndex3)
    {
        ....
    }

PVS-Studio warning: V564 The '&' operator is applied to bool type value. You've probably forgotten to include parentheses or intended to use the '&&' operator.

I wouldn't be surprised if you read the warning, looked at the code, and wondered, "Well, where's the '&' used instead of '&&'?" But if we simplify the conditional expression of the if statement, keeping in mind that the last parameter of the VertInfluencedByActiveBone function has a default value, it all becomes clear:

if (!foo(....) && !foo(....) && !foo(....) & arg)

Take a close look at the last subexpression:

!VertInfluencedByActiveBone(Owner, SourceComponent, VertIndex[2]) & BoneIndex3

This parameter with the default value has messed things up: were it not for this default value, the code would never have compiled at all. But since it's there, the code compiles successfully, and the bug blends in just as successfully. It's this suspicious fragment that the analyzer spotted: the infix operation '&' with a left operand of type bool and a right operand of type int32.
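For reference, here is a hedged sketch of what corrected forms of two of these defects might look like. The surrounding declarations are assumed from the fragments above, so these are illustrative fixes rather than actual patches from the projects:

// Doom 3: memset's signature is memset(dest, value, count),
// so the fill byte and the size must appear in that order.
memset(&statex, 0, sizeof(statex));

// Unreal Engine 4: pass &BoneIndex3 inside the call's parentheses so it becomes
// the fourth argument, instead of being bitwise-AND'ed with the negated bool result.
if (!VertInfluencedByActiveBone(Owner, SourceComponent, VertIndex[0], &BoneIndex1) &&
    !VertInfluencedByActiveBone(Owner, SourceComponent, VertIndex[1], &BoneIndex2) &&
    !VertInfluencedByActiveBone(Owner, SourceComponent, VertIndex[2], &BoneIndex3))
{
    ....
}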
Conclusion

I hope I have convinced you that static analysis is a very useful tool for developing video games and game engines, and one more option to help you improve the quality of your code (and thus of the final product). If you are a developer in the video game industry, you ought to tell your coworkers about static analysis and refer them to this article. Wondering where to start? Start with PVS-Studio.
8. Are you considering developing a mobile game? If you want to be successful, you should avoid making the most common mistakes. Trying to build a game without figuring out the right approach is a recipe for disaster. There are experienced developers like MyIsaak from Sweden, an expert in C# and Unity game development who frequently livestreams his Diablo III board game development process. The more you learn from professionals like him, who have gone through the process, the faster you can avoid making the common game development mistakes. Here are the top five game development mistakes to avoid.

1. Ignoring the target group

Creating a game without properly studying your target group is a huge barrier that will keep it from being downloaded and played. Who are you building the game for? What are their main interests? What activities do they like participating in? Can the target group afford the gaming app? Does your target audience use the iOS or Android operating system? Seeking answers to these questions and others can assist in correctly identifying your target group. Consequently, you can design the game's functionality around their preferences. Just as an ice cream vendor is likely to set up shop at the beach during summer, you should focus on consumers whose behaviors are likely to motivate them to play your game. For example, if you want to create a gun shooting game, you can target college-educated men in their 20s and 30s, while targeting other demographic groups secondarily.

2. Failure to study the competitors

To create a successful game that earns positive reviews and good retention, you should analyze the strengths and weaknesses of your competitors. Studying your competition will allow you to understand your capability to match or surpass the consumer demand for your mobile or web-based game. If you fail to do it, you will miss the opportunity to fill the actual needs in the gaming industry and correct the mistakes made by the developers in your niche. You should ask questions like "What is their target audience?", "How many downloads does their gaming app receive per month?", and "What resources do they have?". Answering such questions will give you a good idea of the abilities of your competition, the feasibility of competing with them, and the kinds of strategies to adopt to out-compete them. Importantly, instead of copying the strategies of your competitors, develop a game that is unique and provides added value to users.

3. Design failure

When building a mobile or web-based game, it's essential that you employ a unique art style and a visually appealing design, without any unnecessary sophistication. People are attracted to games based on the user interface design and its intuitiveness. So, instead of spending a lot of time trying to write elegant and complicated lines of code, take your time to provide a better design. No one will download a game because its code is beautiful. People download games to play them, and the design of the game plays a critical part in helping them make the download decision.

4. Trying to do everything

If you try to code, develop 3D models, create animations, and do voice-overs, all by yourself, then you are likely to create an unsuccessful game. The secret to succeeding is to complete the tasks that align with your core competencies and outsource the rest of the work. Learn how to delegate work to other experts and save yourself the headaches. You should also avoid trying to reinvent the wheel.
Instead of trying to do everything by yourself, go for the robust tools available out there that can make your life easier. Trying to build something that is already provided by the open source community will consume a lot of your development time and leave you frustrated. Furthermore, do not be the beta tester of your own game. If you ask someone else to do the beta testing, you'll get a useful outside perspective that will assist in discovering hidden issues.

5. Having unrealistic expectations

Unrealistic expectations are very dangerous because they set your game development career up for failure. Do not set your expectations so high that you force things to work your way. For example, dreaming too big can make you include too many rewards in your game. As much as rewards are pivotal for improving engagement and keeping users motivated, gamers will not take you seriously if you incorporate rewards into every little achievement they make. Instead, you should select specific rewards for specific checkpoints; this way, the players will feel that they've reached major milestones.

Conclusion

The mistakes discussed in this article have made several game developers unsuccessful in their careers. So, be cautious and keep your head up so that you don't fall into the same trap. The best way to avoid making the common mistakes is to learn how to build games from the experts. Who knows? You could develop the next big game in the industry.
9. The Game Dev Loadout podcast (here and here) has shared their recent Cliff Harris interview with us. The original interview transcription is at Game Dev Loadout. You can also see more podcast interviews from Game Dev Loadout here on GameDev.net.

A game designer, programmer, and one-man games business, Cliff Harris of Positech Games (@cliffski32 on GameDev.net) is behind strategy and simulation games such as Production Line, Democracy, and Gratuitous Space Battles. In this interview, Cliff talks about his journey in the game industry, emphasizes making sure you are taking advice from the right people, and explains why you need to invest in a great chair for yourself and your team.

How did you get started in the game industry?

I started programming as a kid, when I was 11, on a tiny home computer, and then I kind of got out of computers and had all sorts of weird careers working on the stock market and in IT and boat building and playing the guitar. I taught myself C++ from a home study course on floppy disks, then I started making games, and that was 20 years ago, so it was before indie games were really a thing. I worked at Elixir Studios and Lionhead for 3 years, and then I quit that and have been full-time indie ever since.

What was the push that made you join the game industry?

At that time in the U.K., you couldn't get a job as a programmer unless you had a formal qualification as a programmer or you already had a job as a programmer. I always wanted to program video games, like a lot of people did, and it became possible with the internet that you could program games from home. It was a hobby that turned into a career and a proper business.

What is something about AI mechanics that we probably don't know but should?

The thing that people don't realize about AI is that it's very easy to make something seem alive with a few lines of code. Like, I have two cats, and they really are both predictable. Sometimes, from the way my cats behave, I think, you are just a few thousand lines of C++. When you break it down, it's not that difficult to program NPCs in games that behave and move in quite a natural way. It's funny because some stuff that you consider easy is almost impossible. Finding your way out of a maze: we're pretty good at that, but computers are rubbish at it. If you want to program an NPC with text so that it seems to converse with you in a way that doesn't seem too scripted, that's not too hard.

The reason people tend to encounter rubbish NPC AI in games is an obsession with having voice acting for everything. It's easy to program an NPC that can talk to you about 100 different topics with thousands of different variations that sound fluent and responsive. But if you want to record a thousand lines of dialogue and you have got some big-name actor, then you can't afford it. So that's actually the bottleneck. The other issue is translation. The problem is, the minute you translate it to German, it's absolute chaos. The sentence structure is different, and you obviously have to pay a bit to translate the text into German. And if you have got like 20 languages and there are tons of different phrases in each language, suddenly that's what becomes expensive and difficult. But the programming itself is fairly easy and fun.

Production Line

Would you suggest new developers stick to text or voice acting?
If you want to capture the whole audience and have a successful indie game, you are probably going to want it in like 10 different languages. Getting it professionally done is probably like 10 cents a word, so every time you type one line of dialogue, it's going to cost you like 50 dollars. If you want to record the audio and then you want that in 10 languages, it's going to cost you even more. And does it really add to the experience? I'm not sure it does. The other thing is that I am very impatient and I value time a lot, so I'd rather read the dialogue personally than hear it.

I want to talk about the worst moment of your career, that one moment that's still vivid in your mind.

There are a few. I have left a game company after a very heated argument, which is funny because I recently bumped into the guy I had that argument with and he's fine, and I am fine, and we get along, but it was just really stressful. I have had bugs that were really bad. I had a bug that could potentially destroy someone's computer. Someone reported a problem to me and I was like, "No, I am sure this is not a bug of mine," and I looked into it and did a lot of experimentation to get this bug to trigger on my PC. I thought, "Oh my God, I cannot believe I made this mistake." I remember frantically coding a patch for it and putting it out immediately, and no one else got affected by it. But I was really worried. It was a very rare circumstance, but it would delete stuff on your hard drive. It started happening to me, but people had anti-virus software, so it was like a flag going, "Hey, what are you doing?" That's a reputation-destroying bug: the player would go to delete some content intentionally, and under certain circumstances the file name passed in would be empty, and given the structure of that code, it would then start to delete everything. I had to add loads of checks to all of the areas of my game that delete files. There's so much code wrapped around this now, and I can never ever get into that position again, because that was bad.

From the heated argument story, how did you handle the situation and what was the outcome?

Well, I handled it badly; actually, everyone handled it badly. I mean, I had been at this company too long, and I was very frustrated and sort of wanted to leave, but I had stayed on the assumption that things would change. I just lost it, I got into a very strong argument, and someone stormed off. And that was it; that was how I left. Looking back on it, I stayed at the company too long. I am very indie; I don't like working for other people. I am not very easy to employ because I am quite outspoken and maybe not massively respectful of authority. So it was kind of a bad fit. To be honest, I have stormed out of another job as well.

So what should we take away from that experience?

The game industry is a lot like the music industry, because almost everyone in the music industry makes no money and some people make piles of money. And there are many more people who want to be in games than the industry will support, which is a lot like music. So you end up with very stressed people who are doing what they love doing; they are very passionate about it and very intensely into it, and they are working very hard. It is basically a recipe for everyone to get into huge fights and hate each other.
It's a bit better now, I think, but there were a lot of companies where people would work very long hours, and they would work very late, and they would go beyond what you would normally put into a job. They aren't massively well paid at any point. Also, the game industry attracts people like me who are fairly introverted, so we have good technical skills but not very good people skills. You put all that together and it is going to be tough. But I don't think people hold grudges. I mean, I have had two huge arguments with people, some famous and some not, and always ended up getting along with them, because ultimately very few people come into the industry for the money, because there isn't much. So generally you realize that everyone is here just to try to make cool games and work on great stuff, no matter how much we kind of get on each other's nerves.

What are bad recommendations that you hear in your profession?

There are a lot of people who will give you advice. Like, if you go on to Reddit and say, "Oh, I am thinking of making this game; oh, I am thinking of porting to this platform," you will get a massive amount of advice, and almost all of it will be rubbish. That's because the people who are always on Facebook, Twitter, and Reddit just sit there waiting to give advice to others. I am never like, "I have to go there and post and see if anyone is asking about something I know about."

I've read a huge amount of stuff that says there is no point in advertising your indie game. Someone might say, "I have spent like $100 on Facebook Ads and I didn't notice any difference in the number of downloads of my game," and what you can take from that is that one person who spent $100 did not see a direct difference. I've spent $265,000 on Facebook Ads over the years, so as you can imagine, I am pretty convinced they work. But what I am saying is that I have literally a thousand times more data points on that issue. So if you see me talking about the pros and cons of Facebook Ads, well, I know what I am talking about. If you hear me talking about whether Android or iPhone is a better platform, then just slap me and tell me, "Cliff, you have no idea; you don't know anything about it," because I have never made a mobile game. And I think that's the most important thing: you have to know whose advice you are taking and on what basis.

We all have intuitions about games and what should work and what shouldn't work, and often our intuition is wrong. I honestly think that free-to-play should not work, but it clearly does. I think that there is no way you can run an entire games business based on selling virtual hats; I know that cannot possibly work. But I know it does. So you have to listen to specific people on specific issues.

What is one of the best investments you have ever made? It could be an investment of time, energy, or money.

Okay, I'll give you two: time and money. The time one is learning how to code my own game engine. I learned to code everything, like graphics, sound, and whatever, from scratch because I had to. It gives you a big insight into performance and why your game may be slow. I get to code very fast stuff when I need it. Also, I am not relying on anything else. I am not paying any money to Unity, and if they update Unity or Unreal and it breaks everything, obviously I don't care because I am not using it. So that's given me an independence that's very helpful. Obviously, it takes a lot of time.
Gratuitous Space Battles

The physical thing, which I have told a lot of people about over the years, is the chair that I'm sitting in. If you've got a tech startup and you have coders and you have money, then buy these Herman Miller Aeron chairs for everyone. It was 800 pounds, which is a lot: $1100. I am fully aware that it's a crazy amount of money. But I sit in it ten hours a day, and they last forever. It affects your health and mood. It really is a good investment, even if you think it's kind of over the top and unnecessary. Don't buy a $20 chair from a cheap shop. Make some sort of investment in your comfort, because in the end you spend a lot of time sitting in front of the keyboard, and it's so much better for you to be comfortable when you do that.

BETA PHASE: Rapid Fire Questions

What was holding you back from joining the game industry?

I don't know. Nothing would stop me right now, but back before I joined, the industry was tiny. It was not something people did. Nobody knew anyone in the games industry, but now nothing would hold me back. Back then it was just like, "that's not a real job".

What's the personal habit that contributes to your success?

Getting out of bed early. I was a real workaholic. I would be working at my desk by 8 o'clock, which doesn't sound that early, but for a computer programmer it is, because then we work till like 8 o'clock in the evening. Just learning how to get out of bed and go straight to work without messing around: that's the best thing.

What's the best piece of advice you've ever received?

Ask for more money. Most of the time you don't get it, but occasionally you do, and it's just like free money, and you're like, "Hey, they weren't going to give me that unless I said it". It's really awkward, but do it.

What's a great marketing tip to make yourself and your game stand out?

Put faces in your games. Read into neuroscience: a disproportionate amount of your brain is dedicated to looking for faces and looking for emotions in faces, and if you have three images of games and one of them has a face in the image, that is the one everyone looks at first.

What resources should we game developers use to get started today?

If you are technically minded and you want to be a programmer more than anything else, then buy some good C++ books and learn how to code everything from scratch. If you're not, then use Unity, but don't put off the idea of coding from the ground up; it's very valuable.

Imagine you woke up the next morning in a brand new world and you knew no one; you still have all the experience and knowledge you currently have today, your food and shelter are taken care of, and you have a laptop. What would you do, step by step, on the path to join and become successful in the game industry?

I would make a PC strategy game and sell it on Steam. It would be 2D, top down. I would find an artist to do a revenue share on it, and I would do all of the marketing and game design myself and code it from scratch; I probably wouldn't even need Unity to do that, and it might even be easier. That's the safest and best route to actually making a game that will make money.

You can listen to the entire podcast and more interviews with developers and others in the industry at Game Dev Loadout.
      By khawk
10. I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon. It's not an easy task, but because I had already worked on Team Fortress 2 maps in the past, I already knew about navigation meshes (navmeshes) and their capabilities.

Why Not Waypoints?

As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers got more memory, and that memory became faster and cheaper. In other words, there was a shift from efficiency to flexibility. In a way, navigation meshes are the evolution of waypoint networks, because they fulfill the same need in a different way. One of the advantages of a navigation mesh is that an agent can go anywhere inside a cell as long as the cell is convex: by definition, any straight line between two points of a convex polygon stays inside it. It also means that the agent is not limited to a specific waypoint network, so if the destination is outside the waypoint network, it can go directly to it instead of going to the nearest point in the network. A navigation mesh can also be used by many types of agents of different sizes, rather than having a separate waypoint network for each agent size. Using a navigation mesh also speeds up graph exploration because, technically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same area).

The Navigation Mesh Graph

To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Each cell can be connected to another through a connection defined by an edge shared between them (a portal edge). In a navigation mesh, each cell can contain information about itself. For example, a cell may be labeled as toxic, so that only units capable of resisting that toxicity can move across it. Personally, because of my experience, I picture navigation meshes like the ones found in most Source games. However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

Navigation Meshes in Practice

A navigation mesh implementation is actually three algorithms: a graph navigation algorithm, a string pulling algorithm, and a steering/path-smoothing algorithm. In our case, we used A*, the simple stupid funnel algorithm, and a traditional steering algorithm that is still in development.

Finding Our Cells

Before doing any graph searches, we need to find two things: our starting cell and our destination cell. For example, let's use this navigation mesh: in this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on. Also, let's use these points as our starting and destination points: where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination. Because we already have our starting and destination points, we just need to find the cell closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto the planes of their respective closest cells, along each cell's normal.
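For illustration, projecting a point onto a cell's plane along the cell's normal can be sketched like this in C# (a minimal sketch using System.Numerics; the method and parameter names are ours, not the project's):

using System.Numerics;

// Projects "point" onto the plane of a cell, given any point on the cell
// (e.g. one of its vertices) and the cell's unit-length normal.
static Vector3 ProjectOntoCellPlane(Vector3 point, Vector3 cellVertex, Vector3 cellNormal)
{
    // Signed distance from the point to the cell's plane.
    float distance = Vector3.Dot(point - cellVertex, cellNormal);
    // Slide the point back along the normal until it lies on the plane.
    return point - distance * cellNormal;
}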
Before snapping a projected point, we must first know whether that projected point is outside its cell. We do this by comparing the area of the cell with the sum of the areas of the triangles formed by the point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell. Snapping then simply consists of interpolating between the vertices of the cell edge closest to the projected point. In terms of code, we do this:

Vector3f lineToPoint = pointToProject.subtract(start);
Vector3f line = end.subtract(start);
Vector3f projected = new Vector3f().interpolateLocal(start, end, lineToPoint.dot(line) / line.dot(line));

In our example, the starting and destination cells are C1 and C8 respectively.

Graph Search Algorithm

A navigation mesh is conceptually a 2D grid of unknown or unbounded size. In a 3D game, it is common to represent a navigation mesh graph as a graph of flat polygons that aren't orthogonal to each other. There are games that use true 3D navigation meshes, like games with flying AI, but in our case it's a simple grid. For this reason, the A* algorithm is probably the right solution. We chose A* because it's the most generic and flexible algorithm. Technically, we still do not know exactly how our navigation mesh will be used, so going with something generic has its benefits...

A* works by assigning each cell a cost and a heuristic. The heuristic estimates the remaining distance to the destination, so the closer a cell is to the destination, the smaller its heuristic. A cell's score also takes into account the cost accumulated along the path leading to it, which means that the longer a path is, the greater the resulting score, and the less likely that path is to be optimal. We begin the algorithm by traversing the connections of each neighboring cell of the current cell until we arrive at the end cell, doing a sort of exploration/flood fill. Each cell begins with an infinite score but, as we explore the mesh, scores are updated according to the information we learn. At the start, our starting cell gets a cost and a heuristic of 0 because the agent is already inside it. We keep an open queue of cells ordered by score, so the next cell to use as the current cell is always the best candidate for an optimal path. When a cell has been processed, it is moved from that queue into another one containing the closed cells. While exploring, we also keep a reference to the connection used to move from each cell to its neighbor; this will be useful later. We do this until we end up in the destination cell. Then we "reel" back up to our starting cell, saving each cell we landed on, which gives an optimal path. A* is a very popular algorithm, and pseudocode for it is easy to find; even Wikipedia has pseudocode that is easy to understand. In our example, we find that this is our path, and here are the traversed connections, highlighted in pink:

The String Pulling Algorithm

String pulling is the next step in the navigation mesh pipeline. Now that we have a queue of cells describing an optimal path, we have to find a queue of points that an AI agent can actually travel to. This is where string pulling is needed. String pulling is in fact not linked to characters at all: it is a metaphor. Imagine a cross, and say that you wrap a silk thread around this cross and put tension on it.
You will find that the string does not follow the inner corners of the cross, but instead goes from outer corner to outer corner. This is precisely what we're doing, but with a string that goes from one point to another. There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually... ...stupidly simple (a sketch of its core test appears at the end of this post). To put it simply (no pun intended), we maintain a funnel and check each time whether the next point is inside the funnel or not. The funnel is composed of 3 points: a central apex, a left point (the left apex) and a right point (the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking). When a point is inside the funnel, we continue the algorithm with the other side. If the point is outside the funnel, then, depending on which side the tested point belongs to, we take the apex from the other side of the funnel and add it to a list of final waypoints. The algorithm works correctly most of the time. However, it had a bug that added the last point twice if none of the vertices of the last connection before the destination point were added to the list of final waypoints. We just added an if check for now, but we may come back later to clean it up. In our case, the funnel algorithm gives this path:

The Steering Algorithm

Now that we have a list of waypoints, we could simply run our character toward every point in turn. But if there were walls in our geometry, Buddy would run right into a corner wall: he isn't small enough to clear the corner walls on his own. That's the role of the steering algorithm. Our algorithm is still in heavy development, but its main gist is that we check whether the next position of the agent is outside the navigation mesh. If it is, we change the agent's direction so that it doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know whether we'll use it at all... We relied on this good document to program the steering algorithm. It's a 1999 document, but it's still interesting... With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

So, this is the navigation mesh algorithm. I must say that, throughout my research, there wasn't much pseudocode or code that described the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one. We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it looked like A* wasn't intended to be used in a 3D environment with flat orthogonal cells. My hypothesis is that we would need 3D cells for that kind of navigation mesh; otherwise, the heuristic value of each cell can be off, depending on the actual 3D distance between the center of a flat cell and the destination point. So we reduced the scope of our navigation meshes, and we were able to move an AI agent through our organic dungeon. Here's a picture: each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.
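As promised, here is the core test at the heart of the funnel algorithm: a 2D signed-area check. This is a C# sketch of the idea under our own naming, not the project's actual code, and it assumes the portal vertices have been projected onto the ground plane:

using System.Numerics;

// Twice the signed area of triangle (a, b, c). The sign tells us on which
// side of the directed segment a->b the point c lies, which is exactly
// what the funnel algorithm needs to decide whether a candidate vertex
// narrows the funnel or crosses over to the other side.
static float TriArea2(Vector2 a, Vector2 b, Vector2 c)
{
    return (b.X - a.X) * (c.Y - a.Y) - (c.X - a.X) * (b.Y - a.Y);
}

In the usual formulation, when advancing the right side, a candidate vertex replaces the right apex only while it narrows the funnel without crossing the left side; once it crosses, the left apex is appended to the waypoint list and becomes the new central apex.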
11. As DMarket platform development continues, we would like to share a few case studies regarding the newest functionality on the platform. With these case studies we would like to illuminate our development process, user requirements gathering and analysis, and much more. The first case study we're going to share is "DMarket Wallet Development": how, when and why we decided to implement functionality which improved the collection and transfer of virtual items and DMarket Coins.

DMarket cares about every user, no matter how big or small the user group is. That's why we recently updated our virtual item purchase rules, bringing a brand new "DMarket Wallet" feature to our users. So let's take a retrospective look and find out what challenges this feature brought to the DMarket team and how those challenges were met.

DMarket and Blockchain Virtual Items Trading Rules

Within the first major release of the DMarket platform, we provided you with a wide range of possibilities and options: Steam account connection within the user profile, confirmation of account and device ownership via email for enhanced security, DMarket Coins and DMarket Tokens exchange, and transactions with intermediaries on blockchain within our very own blockchain system called "Blockchain Explorer".

And well, regarding blockchain... While it has totally proved itself as a working solution, we were having some issues with malefactors, as many of you may already know. DMarket specialists conducted an investigation, which resulted in a solution: we found out that a few users had created bots to buy our Founder's Mark, a limited special edition memorabilia commemorating the launch of the platform, at lower prices and then sell them at higher prices. Sure enough, that left no chance for regular users. A month ago we fixed the issue, to our great relief. We received real feedback from the community, a real proof of concept. The whole DMarket ecosystem turned out to be truly resilient, proving all our detractors wrong.

And while we had that proof, we also studied how users feel about the platform UX, since blockchain requires additional effort when buying or selling an item. With the first release of the Demo platform, we let users sign transactions with a private key from their wallet. In terms of user experience, that practice wasn't great. Just think about it: you had to enter the private key each time you wanted to buy or sell something. Every transaction required a lot of actions from the user's side, which is unacceptable for a great and user-friendly product like ours. That's why we decided to move away from that approach and create a single unified "wallet" on the DMarket side, storing all the DMarket Coins and virtual items for the user and letting them buy or sell with a few clicks instead of the previous lengthy process. In other words, every user received a public key which serves as a destination address, while the private keys were held on the DMarket side to avoid signing each transaction. This improved usability, and most of our users were satisfied with the update. But not all of them...

You Can't Make Everyone Happy... Can You?

By removing the transaction signing requirement we made most of our users happy. Of course, within a large number of happy people, we can always find those who are worried about a wallet for which they hold only the public key. When you don't own the private key, it may disturb you a little bit.
Sure, DMarket is a trusted company, but there are people who can't trust even themselves sometimes. So what were we going to do? Ignore them? Roll back to the previous way of buying virtual items and coins? No! We decided to go another way. Within the briefest timeline, the DMarket team decided to provide a completely new feature in Blockchain Explorer: wallet creation functionality. With this functionality, you can create a wallet in 2 clicks, getting both private and public keys and thereby ensuring the safety of your items and coins. Basically, we separated wallets on the marketplace from wallets on our blockchain in order to keep the UX great while giving a small group of users the option they needed to keep everything in a separate wallet. You can go shopping on DMarket with no additional effort of signing every transaction, and at the same time you are free to transfer all your goods to your very own wallet any time you feel the need. Isn't that cool?

Outcome

After implementing the separate DMarket wallet creation feature, we killed two birds with one stone and made everyone satisfied, though it wasn't easy since we had a very limited amount of time. So if you need it, give it a try. Moreover, creating a DMarket wallet within Blockchain Explorer lets you manage your wallet even on mobile devices, because along with downloading the private and public keys you also get a 12-word mnemonic phrase to restore your wallet on any mobile device, from smartphone to tablet. But that's another story, a story about the DMarket Wallet application, which has recently become available for Android users on Google Play. Stay tuned for more case studies, and don't forget to check out our website and gain firsthand experience with in-game item trading!
12. I got into a conversation a while ago with some fellow game artists, and the prospect of signing bonuses got brought up. Out of the group, I was the only one who had negotiated any sort of sign-on bonus or payment above and beyond base compensation. My goal with this article, and possibly others, is to inform and motivate other artists to work on this aspect of their "portfolio" and start treating their career as a business.

What is a Sign-On Bonus?

Quite simply, a sign-on bonus is a sum of money offered to a prospective candidate in order to get them to join. It is quite common in other industries but rarely seen in games unless it is at the executive level. Unfortunately, conversations centered around artist employment usually stop at base compensation, quite literally leaving money on the table.

Why Ask for a Sign-On Bonus?

There are many reasons to ask for a sign-on bonus. In my experience, it has been to compensate for some delta between how much I need and how much the company is offering. For example, a company has offered a candidate a position paying $50k/year. However, research indicates that the candidate requires $60k/year to keep in line with their personal financial requirements and long-term goals. Instead of turning down the offer wholesale, they may ask for a $10k sign-on bonus with actionable terms to partially bridge the gap. Whatever the reason may be, the ask needs to be reasonable. Would you like a $100k sign-on bonus? Of course! Should you ask for it? Probably not. A sign-on bonus is a tool to reduce risk, not a tool to help you buy a shiny new sports car.

Aspects to Consider

Before one goes and asks for a huge sum of money, there are some aspects of sign-on bonus negotiations the candidate needs to keep in mind.

- The more experience you have, the more leverage you have to negotiate.
- You must have confidence in your role as an employee.
- You must have done your research. This includes knowing your personal financial goals and how the prospective offer changes, influences or diminishes those goals.

To the first point, the more experience one has, the better. If the candidate is a junior employee (roughly defined as having less than 3 years of industry experience) or looking for their first job in the industry, it is highly unlikely that a company will entertain a conversation about sign-on bonuses. Getting into the industry is highly competitive, and there is very little motivation for a company to pay a sign-on bonus to one candidate when there are dozens (or hundreds in some cases) of other candidates who will jump at the first offer. Additionally, the candidate must have confidence in succeeding at the desired role in the company. They have to know that they can handle the day-to-day responsibilities as well as any extra demands that may come up during production. The company needs to be convinced of their ability to be a team player and, as a result, be willing to put a little extra money down to hire them. In other words, the candidate needs to reduce the company's risk in hiring them enough that an extra payment or two is negligible. And finally, they must know where they sit financially and where they want to be in the short, mid and long term. Having this information at hand is essential to the negotiation process.

The Role Risk Plays in Employment

The interviewing process is a tricky one for all parties involved, and it revolves around the idea of risk. Is this candidate low-risk or high-risk?
The risk level depends on a number of factors: portfolio quality, experience, soft skills, etc. Were you late for the interview? Your risk to the company just went up. Did you bring additional portfolio materials that were not online? Your risk just went down and you became more hireable. If a candidate has an offer in hand, then the company sees enough potential to get a return on their investment with as little risk as possible. At this point, the company is confident in the candidate's ability as an employee (i.e., low risk) and is willing to give them money in return for that ability.

Asking for the Sign-On Bonus

So what now? The candidate has gone through the interview process, and the company has offered them a position and base compensation. Unfortunately, the offer falls below expectations. Here is where the knowledge and research of the position and of personal financial goals come in. The candidate has to know what their thresholds and limits are. If they ask for $60k/year and the company is offering $50k, how do you ask for the bonus? Once again, it comes down to risk. Here is the point to remember: risk is not one-sided. The candidate takes on risk by changing companies as well. The candidate has to leverage the sign-on bonus as a way to reduce risk for both parties. Here is the important part: a sign-on bonus reduces the company's risk because they are not committing to an increased salary, and bonus payouts can be staggered and have terms attached to them. The sign-on bonus reduces the candidate's risk because it bridges the gap between the offered compensation and their personal financial requirements. If the sign-on bonus is reasonable and the company has the finances (explained further below), it is a win-win for both parties and hopefully the beginning of a profitable business relationship.

A Bit about Finances

First off, I am not a business accountant, nor have I managed finances for a business. I am sure it is much more complicated than my example below, with a lot of considerations to take into account. In my experience, however, I do know that base compensation (i.e., salary) will generally fall into a different line-item category on the financial books than a bonus payout. When companies determine how many open spots they have, it is usually done by department, with per-department salary caps. For a simplified example, an environment department's total salary cap is $500k/year. They have 9 artists being paid $50k/year, leaving $50k/year remaining for the 10th member of the team. Remember the example I gave earlier asking for $60k/year? The company cannot offer that salary because it breaks the departmental cap. However, since bonuses typically do not count against departmental caps, the company can pull from a different pool of money without increasing its risk by committing to a higher salary.

Sweetening the Deal

Coming right out of the gate and asking for an upfront payment might be too aggressive a play (i.e., high risk for the company). One way around this is to attach terms to the bonus. What does this mean? Take the situation above. A candidate has an offer for $50k/year but would like a bit more. If, through the course of discussing compensation, they get the sense that $10k up front is too high, they can offer to break up the payments based on terms.
For example, a counterpoint to the initial base compensation offer could look like this:

- $50k/year salary
- $5k bonus payout #1 after 30 days of successful employment
- $5k bonus payout #2 after 365 days (or any length of time) of successful employment

In this example, the candidate is guaranteed at least $55k in each of the first two years. If they factor in a standard 3% cost-of-living raise, the first 3 years of employment look like this:

Year 0-1 = $55,000 ($50,000 + $5,000 payout #1)
Year 1-2 = $56,500 (($50,000 x 1.03) + $5,000 payout #2)
Year 2-3 = $53,045 ($51,500 x 1.03)

Now, it might not be the $60k/year they had in mind, but it is a great compromise to keep both parties comfortable.

If the Company Says Yes

Great news! The company said yes! What now? Personally, I always request at least a full 24 hours to crunch the final numbers. In the past, I've requested up to a week for full consideration. Even if you know you will say yes, doing due diligence with your finances one last time is always good practice. Plug the numbers into a spreadsheet, look at your bills and expenses again, and review the whole offer (base compensation, bonus, time off/sick leave, medical/dental/vision, etc.). Discuss the offer with your significant other as well. You will see the offer in a different light when you wake up, so make sure you are not rushing into a situation you will regret.

If the Company Says No

If the company says no, then you have a difficult decision to make. Request time to review the offer and crunch the numbers. If it is a lateral move (same position, different company), then you have to ask whether the switch is worth it. Only due diligence will offer that insight, and you have to give yourself enough time to let those insights arrive. You might find yourself accepting the new position due to other, non-financial reasons (which could be a whole separate article!).

Conclusion/Final Thoughts

When it comes to negotiating during the interview process, it is very easy to take what you can get and run. You might fear that by asking for more, you will disqualify yourself from the position. Keep in mind that the offer has already been extended to you, and a company will not rescind it simply because you came back with a counterpoint. Negotiations are expected at this stage, and by putting forth a creative compromise, your first impression is that of someone who conducts themselves in a professional manner. Also keep in mind that negotiations do not always go well. There are countless factors that influence whether or not someone gets a sign-on bonus. Sometimes it all comes down to being at the right place at the right time. Just make sure you do your due diligence and be ready when the opportunity presents itself. Hope this helps!
      By RyRyB
13. Recently, a long-awaited event happened: Unity Technologies uploaded the C# source code of the game engine to GitHub, available for free download. The code of both the engine and the editor is available. Of course, we couldn't pass it up, especially since we haven't written many articles lately about checking C# projects. Unity allows the provided sources to be used for informational purposes only, and that's exactly how we'll use them. Let's try out the latest version, PVS-Studio 6.23, on the Unity code.

Introduction

We've previously written an article about checking Unity. At that time, not nearly as much C# code was available for analysis: some components, libraries and usage examples. However, the author of that article managed to find quite a few interesting bugs. How did Unity please us this time? I say "please" and hope not to offend the authors of the project, especially since the amount of Unity C# source code presented on GitHub is about 400 thousand lines (excluding empty ones) in 2058 files with the "cs" extension. That's a lot, and the analyzer had quite a considerable scope.

Now about the results. Before the analysis, I simplified the job slightly by enabling the mode that displays found bugs according to the CWE classification. I also activated the suppression mechanism for warnings of the third level of certainty (Low). These settings are available in the PVS-Studio drop-down menu in the Visual Studio development environment and in the analyzer's parameters. Having filtered out the low-certainty warnings, I ran the analysis on the Unity source code. As a result, I got 181 warnings of the first level of certainty (High) and 506 warnings of the second level of certainty (Medium). I have not studied absolutely all of the warnings, because there were quite a lot of them. Developers or enthusiasts can easily conduct an in-depth analysis by testing Unity themselves. To do this, PVS-Studio provides free trial and free usage modes. Companies can also buy our product and get quick and detailed support along with the license. Judging by the fact that I immediately managed to find a couple of real bugs in practically every group of warnings within one or two attempts, there are a lot of them in Unity. And yes, they are diverse. Let's review the most interesting errors.

Results of the Check

Something's wrong with the flags

PVS-Studio warning: V3001 There are identical sub-expressions 'MethodAttributes.Public' to the left and to the right of the '|' operator. SyncListStructProcessor.cs 240

MethodReference GenerateSerialization()
{
    ....
    MethodDefinition serializeFunc = new MethodDefinition("SerializeItem",
        MethodAttributes.Public |
        MethodAttributes.Virtual |
        MethodAttributes.Public |    // <=
        MethodAttributes.HideBySig,
        Weaver.voidType);
    ....
}

When combining the MethodAttributes enumeration flags, an error was made: the Public value was used twice. Perhaps the reason for this is bad code formatting. A similar bug was also made in the code of the GenerateDeserialization method: V3001 There are identical sub-expressions 'MethodAttributes.Public' to the left and to the right of the '|' operator. SyncListStructProcessor.cs 309

Copy-Paste

PVS-Studio warning: V3001 There are identical sub-expressions 'format == RenderTextureFormat.ARGBFloat' to the left and to the right of the '||' operator.
RenderTextureEditor.cs 87

public static bool IsHDRFormat(RenderTextureFormat format)
{
    return (format == RenderTextureFormat.ARGBHalf ||
            format == RenderTextureFormat.RGB111110Float ||
            format == RenderTextureFormat.RGFloat ||
            format == RenderTextureFormat.ARGBFloat ||
            format == RenderTextureFormat.ARGBFloat ||    // <=
            format == RenderTextureFormat.RFloat ||
            format == RenderTextureFormat.RGHalf ||
            format == RenderTextureFormat.RHalf);
}

I have quoted the code after formatting it, so the error is easy to spot visually: the comparison with RenderTextureFormat.ARGBFloat is performed twice. Probably, another value of the RenderTextureFormat enumeration should be used in one of the two identical comparisons.

Double work

PVS-Studio warning: V3008 CWE-563 The 'fail' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 1633, 1632. UNetWeaver.cs 1633

class Weaver
{
    ....
    public static bool fail;
    ....
    static public bool IsValidTypeToGenerate(....)
    {
        ....
        if (....)
        {
            ....
            Weaver.fail = true;
            fail = true;
            return false;
        }
        return true;
    }
    ....
}

The value true is assigned twice in a row, since Weaver.fail and fail are one and the same static field of the Weaver class. Perhaps there is no serious error here, but the code definitely needs attention.

No options

PVS-Studio warning: V3009 CWE-393 It's odd that this method always returns one and the same value of 'false'. ProjectBrowser.cs 1417

// Returns true if we should early out of OnGUI
bool HandleCommandEventsForTreeView()
{
    ....
    if (....)
    {
        ....
        if (....)
            return false;
        ....
    }
    return false;
}

The method always returns false. Pay attention to the comment at the beginning.

A developer forgot about the result

PVS-Studio warning: V3010 CWE-252 The return value of function 'Concat' is required to be utilized. AnimationRecording.cs 455

static public UndoPropertyModification[] Process(....)
{
    ....
    discardedModifications.Concat(discardedRotationModifications);
    return discardedModifications.ToArray();
}

When concatenating the two arrays discardedModifications and discardedRotationModifications, the author forgot to save the result. The programmer probably assumed that the result would appear immediately in the discardedModifications array, but that is not the case. As a result, the original discardedModifications array is returned from the method. The code should be corrected as follows:

static public UndoPropertyModification[] Process(....)
{
    ....
    return discardedModifications.Concat(discardedRotationModifications)
                                 .ToArray();
}

Wrong variable was checked

PVS-Studio warning: V3019 CWE-697 Possibly an incorrect variable is compared to null after type conversion using 'as' keyword. Check variables 'obj', 'newResolution'. GameViewSizesMenuItemProvider.cs 104

private static GameViewSize CastToGameViewSize(object obj)
{
    GameViewSize newResolution = obj as GameViewSize;
    if (obj == null)
    {
        Debug.LogError("Incorrect input");
        return null;
    }
    return newResolution;
}

In this method, the developers forgot to consider the situation where the variable obj is not null but cannot be cast to the GameViewSize type. In that case the variable newResolution is set to null, and no debug output is made.
A corrected version of the code could look like this:

private static GameViewSize CastToGameViewSize(object obj)
{
    GameViewSize newResolution = obj as GameViewSize;
    if (newResolution == null)
    {
        Debug.LogError("Incorrect input");
    }
    return newResolution;
}

Deficiency

PVS-Studio warning: V3020 CWE-670 An unconditional 'return' within a loop. PolygonCollider2DEditor.cs 96

private void HandleDragAndDrop(Rect targetRect)
{
    ....
    foreach (....)
    {
        ....
        if (....)
        {
            ....
        }
        return;
    }
    ....
}

The loop executes only one iteration, after which the method terminates. Various scenarios are plausible. For example, the return should be inside the if block, or a continue is missing somewhere before the return. It may well be that there is no error here, but then the code should be made more understandable.

Unreachable code

PVS-Studio warning: V3021 CWE-561 There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless. CustomScriptAssembly.cs 179

public bool IsCompatibleWith(....)
{
    ....
    if (buildingForEditor)
        return IsCompatibleWithEditor();

    if (buildingForEditor)
        buildTarget = BuildTarget.NoTarget; // Editor
    ....
}

Two identical checks follow one after the other. Clearly, if buildingForEditor is true, the second check is meaningless, because the first one terminates the method. If buildingForEditor is false, neither branch executes. This is an erroneous construction that requires correction.

Unconditional condition

PVS-Studio warning: V3022 CWE-570 Expression 'index < 0 && index >= parameters.Length' is always false. AnimatorControllerPlayable.bindings.cs 287

public AnimatorControllerParameter GetParameter(int index)
{
    AnimatorControllerParameter[] param = parameters;
    if (index < 0 && index >= parameters.Length)
        throw new IndexOutOfRangeException(
            "Index must be between 0 and " + parameters.Length);
    return param[index];
}

The condition of the index check is incorrect: the result is always false. However, if an invalid index is passed to the GetParameter method, an IndexOutOfRangeException is still thrown when the array element is accessed in the return statement, although the error message is slightly different. The || operator has to be used in the condition instead of && for the code to work the way the developer expected:

public AnimatorControllerParameter GetParameter(int index)
{
    AnimatorControllerParameter[] param = parameters;
    if (index < 0 || index >= parameters.Length)
        throw new IndexOutOfRangeException(
            "Index must be between 0 and " + parameters.Length);
    return param[index];
}

Perhaps due to copy-paste, there is an identical error elsewhere in the Unity code: PVS-Studio warning: V3022 CWE-570 Expression 'index < 0 && index >= parameters.Length' is always false. Animator.bindings.cs 711

And another similar error associated with an incorrect check of an array index: PVS-Studio warning: V3022 CWE-570 Expression 'handle.valueIndex < 0 && handle.valueIndex >= list.Length' is always false. StyleSheet.cs 81

static T CheckAccess<T>(T[] list, StyleValueType type, StyleValueHandle handle)
{
    T value = default(T);
    if (handle.valueType != type)
    {
        Debug.LogErrorFormat(....);
    }
    else if (handle.valueIndex < 0 && handle.valueIndex >= list.Length)
    {
        Debug.LogError("Accessing invalid property");
    }
    else
    {
        value = list[handle.valueIndex];
    }
    return value;
}

In this case an IndexOutOfRangeException is also possible. As in the previous code fragments, the || operator has to be used instead of && to fix the error.

Simply strange code

Two warnings are issued for the code fragment below. PVS-Studio warning: V3022 CWE-571 Expression 'bRegisterAllDefinitions || (AudioSettings.GetSpatializerPluginName() == "GVR Audio Spatializer")' is always true. AudioExtensions.cs 463 PVS-Studio warning: V3022 CWE-571 Expression 'bRegisterAllDefinitions || (AudioSettings.GetAmbisonicDecoderPluginName() == "GVR Audio Spatializer")' is always true. AudioExtensions.cs 467

// This is where we register our built-in spatializer extensions.
static private void RegisterBuiltinDefinitions()
{
    bool bRegisterAllDefinitions = true;
    if (!m_BuiltinDefinitionsRegistered)
    {
        if (bRegisterAllDefinitions ||
            (AudioSettings.GetSpatializerPluginName() == "GVR Audio Spatializer"))
        {
        }
        if (bRegisterAllDefinitions ||
            (AudioSettings.GetAmbisonicDecoderPluginName() == "GVR Audio Spatializer"))
        {
        }
        m_BuiltinDefinitionsRegistered = true;
    }
}

It looks like an incomplete method. It is unclear why it has been left like this and why the developers haven't commented out the useless code blocks. All the method does at the moment is:

if (!m_BuiltinDefinitionsRegistered)
{
    m_BuiltinDefinitionsRegistered = true;
}

Useless method

PVS-Studio warning: V3022 CWE-570 Expression 'PerceptionRemotingPlugin.GetConnectionState() != HolographicStreamerConnectionState.Disconnected' is always false. HolographicEmulationWindow.cs 171

private void Disconnect()
{
    if (PerceptionRemotingPlugin.GetConnectionState() !=
        HolographicStreamerConnectionState.Disconnected)
        PerceptionRemotingPlugin.Disconnect();
}

To clarify the situation, it is necessary to look at the declaration of the method PerceptionRemotingPlugin.GetConnectionState():

internal static HolographicStreamerConnectionState GetConnectionState()
{
    return HolographicStreamerConnectionState.Disconnected;
}

Thus, calling the Disconnect() method leads to nothing. One more error relates to the same method PerceptionRemotingPlugin.GetConnectionState(): PVS-Studio warning: V3022 CWE-570 Expression 'PerceptionRemotingPlugin.GetConnectionState() == HolographicStreamerConnectionState.Connected' is always false. HolographicEmulationWindow.cs 177

private bool IsConnectedToRemoteDevice()
{
    return PerceptionRemotingPlugin.GetConnectionState() ==
           HolographicStreamerConnectionState.Connected;
}

The result of the method is equivalent to the following:

private bool IsConnectedToRemoteDevice()
{
    return false;
}

As we can see, many interesting findings turned up among the V3022 warnings. Probably, with more time spent, the list could be extended. But let's move on.

Not on the format

PVS-Studio warning: V3025 CWE-685 Incorrect format. A different number of format items is expected while calling 'Format' function. Arguments not used: index. Physics2D.bindings.cs 2823

public void SetPath(....)
{
    if (index < 0)
        throw new ArgumentOutOfRangeException(
            String.Format("Negative path index is invalid.", index));
    ....
}

There is no error in this code, but as the saying goes, the code "smells". Probably, an earlier message was more informative, something like "Negative path index {0} is invalid.". Then it was simplified, but the developers forgot to remove the index argument passed to Format.
Of course, this is not the same as a format string whose specifier has no matching argument, i.e., a construction like String.Format("Negative path index {0} is invalid.") with no argument passed; in that case an exception would be thrown. But neatness during refactoring matters here too. The code has to be fixed as follows:

public void SetPath(....)
{
    if (index < 0)
        throw new ArgumentOutOfRangeException(
            "Negative path index is invalid.");
    ....
}

Substring of the substring

PVS-Studio warning: V3053 An excessive expression. Examine the substrings 'UnityEngine.' and 'UnityEngine.SetupCoroutine'. StackTrace.cs 43

static bool IsSystemStacktraceType(object name)
{
    string casted = (string)name;
    return casted.StartsWith("UnityEditor.") ||
           casted.StartsWith("UnityEngine.") ||
           casted.StartsWith("System.") ||
           casted.StartsWith("UnityScript.Lang.") ||
           casted.StartsWith("Boo.Lang.") ||
           casted.StartsWith("UnityEngine.SetupCoroutine");
}

Searching for the substring "UnityEngine.SetupCoroutine" in the condition is meaningless, because the search for "UnityEngine." is performed before it. Therefore, the last check should be removed, or the correctness of the substrings should be double-checked. Another similar error: PVS-Studio warning: V3053 An excessive expression. Examine the substrings 'Windows.dll' and 'Windows.'. AssemblyHelper.cs 84

static private bool CouldBelongToDotNetOrWindowsRuntime(string assemblyPath)
{
    return assemblyPath.IndexOf("mscorlib.dll") != -1 ||
           assemblyPath.IndexOf("System.") != -1 ||
           assemblyPath.IndexOf("Windows.dll") != -1 ||  // <=
           assemblyPath.IndexOf("Microsoft.") != -1 ||
           assemblyPath.IndexOf("Windows.") != -1 ||     // <=
           assemblyPath.IndexOf("WinRTLegacy.dll") != -1 ||
           assemblyPath.IndexOf("platform.dll") != -1;
}

Size does matter

PVS-Studio warning: V3063 CWE-571 A part of conditional expression is always true if it is evaluated: pageSize <= 1000. UNETInterface.cs 584

public override bool IsValid()
{
    ....
    return base.IsValid() &&
           (pageSize >= 1 || pageSize <= 1000) &&
           totalFilters <= 10;
}

The condition checking for a valid page size is wrong. Instead of the || operator, && should be used. The corrected code:

public override bool IsValid()
{
    ....
    return base.IsValid() &&
           (pageSize >= 1 && pageSize <= 1000) &&
           totalFilters <= 10;
}

Possible division by zero

PVS-Studio warning: V3064 CWE-369 Potential division by zero. Consider inspecting denominator '(float)(width - 1)'. ClothInspector.cs 249

Texture2D GenerateColorTexture(int width)
{
    ....
    for (int i = 0; i < width; i++)
        colors[i] = GetGradientColor(i / (float)(width - 1));
    ....
}

The problem may occur if the value width = 1 is passed into the method; the method does not check for it in any way. GenerateColorTexture is called in the code just once, with the argument 100:

void OnEnable()
{
    if (s_ColorTexture == null)
        s_ColorTexture = GenerateColorTexture(100);
    ....
}

So there is no error here so far. But, just in case, the GenerateColorTexture method should guard against invalid width values.

Paradoxical check

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_Parent'. EditorWindow.cs 449

public void ShowPopup()
{
    if (m_Parent == null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
    }
}

Probably due to a typo, executing this code guarantees a dereference of the null reference m_Parent. The corrected code:

public void ShowPopup()
{
    if (m_Parent != null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
    }
}

The same error occurs later in the code: PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_Parent'. EditorWindow.cs 470

internal void ShowWithMode(ShowMode mode)
{
    if (m_Parent == null)
    {
        ....
        Rect r = m_Parent.borderSize.Add(....);
        ....
    }
}

And here's another interesting bug that can lead to a null reference access due to an incorrect check: PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'objects'. TypeSelectionList.cs 48

public TypeSelection(string typeName, Object[] objects)
{
    System.Diagnostics.Debug.Assert(objects != null || objects.Length >= 1);
    ....
}

It seems to me that Unity developers quite often make errors related to the misuse of the || and && operators in conditions. In this case, if objects is null, the second part of the condition (objects.Length >= 1) is evaluated, which entails an unexpected exception. The error should be corrected as follows:

public TypeSelection(string typeName, Object[] objects)
{
    System.Diagnostics.Debug.Assert(objects != null && objects.Length >= 1);
    ....
}

Early nullifying

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'm_RowRects'. TreeViewControlGUI.cs 272

public override void GetFirstAndLastRowVisible(....)
{
    ....
    if (rowCount != m_RowRects.Count)
    {
        m_RowRects = null;
        throw new InvalidOperationException(string.Format("....",
            rowCount, m_RowRects.Count));
    }
    ....
}

In this case, an exception (a dereference of the null reference m_RowRects) is thrown while generating the message string for another exception. The code might be fixed, for example, as follows:

public override void GetFirstAndLastRowVisible(....)
{
    ....
    if (rowCount != m_RowRects.Count)
    {
        var m_RowRectsCount = m_RowRects.Count;
        m_RowRects = null;
        throw new InvalidOperationException(string.Format("....",
            rowCount, m_RowRectsCount));
    }
    ....
}

One more error when checking

PVS-Studio warning: V3080 CWE-476 Possible null dereference. Consider inspecting 'additionalOptions'. MonoCrossCompile.cs 279

static void CrossCompileAOT(....)
{
    ....
    if (additionalOptions != null & additionalOptions.Trim().Length > 0)
        arguments += additionalOptions.Trim() + ",";
    ....
}

Because the & operator is used in the condition, the second part of the condition is always evaluated, regardless of the result of the first part. If the variable additionalOptions is null, an exception is inevitable. The error has to be corrected by using the && operator instead of &. As we can see, there are rather insidious errors among the V3080 warnings.

Late check

PVS-Studio warning: V3095 CWE-476 The 'element' object was used before it was verified against null. Check lines: 101, 107. StyleContext.cs 101

public override void OnBeginElementTest(VisualElement element, ....)
{
    if (element.IsDirty(ChangeType.Styles))
    {
        ....
    }
    if (element != null && element.styleSheets != null)
    {
        ....
    }
    ....
}

The variable element is used without a preliminary null check, even though such a check is performed later in the code. The code probably needs to be corrected as follows:

public override void OnBeginElementTest(VisualElement element, ....)
{
    if (element != null)
    {
        if (element.IsDirty(ChangeType.Styles))
        {
            ....
        }
        if (element.styleSheets != null)
        {
            ....
        }
    }
    ....
}

There are 18 more such errors in the code. Let me give you a list of the first 10:
V3095 CWE-476 The 'property' object was used before it was verified against null. Check lines: 5137, 5154. EditorGUI.cs 5137
V3095 CWE-476 The 'exposedPropertyTable' object was used before it was verified against null. Check lines: 152, 154. ExposedReferenceDrawer.cs 152
V3095 CWE-476 The 'rectObjs' object was used before it was verified against null. Check lines: 97, 99. RectSelection.cs 97
V3095 CWE-476 The 'm_EditorCache' object was used before it was verified against null. Check lines: 134, 140. EditorCache.cs 134
V3095 CWE-476 The 'setup' object was used before it was verified against null. Check lines: 43, 47. TreeViewExpandAnimator.cs 43
V3095 CWE-476 The 'response.job' object was used before it was verified against null. Check lines: 88, 99. AssetStoreClient.cs 88
V3095 CWE-476 The 'compilationTask' object was used before it was verified against null. Check lines: 1010, 1011. EditorCompilation.cs 1010
V3095 CWE-476 The 'm_GenericPresetLibraryInspector' object was used before it was verified against null. Check lines: 35, 36. CurvePresetLibraryInspector.cs 35
V3095 CWE-476 The 'Event.current' object was used before it was verified against null. Check lines: 574, 620. AvatarMaskInspector.cs 574
V3095 CWE-476 The 'm_GenericPresetLibraryInspector' object was used before it was verified against null. Check lines: 31, 32. ColorPresetLibraryInspector.cs 31

Wrong Equals method

PVS-Studio warning: V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. CurveEditorSelection.cs 74

public override bool Equals(object _other)
{
    CurveSelection other = (CurveSelection)_other;
    return other.curveID == curveID && other.key == key && other.type == type;
}

The overload of the Equals method was implemented carelessly. It has to account for the possibility of receiving null as the parameter, since this can lead to an exception that hasn't been considered in the calling code. In addition, the situation where _other cannot be cast to the CurveSelection type also leads to an exception. The code has to be fixed. A good example of implementing an Object.Equals overload is given in the documentation. There are other similar errors in the code:

V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. SpritePackerWindow.cs 40
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. PlatformIconField.cs 28
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ShapeEditor.cs 161
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ActiveEditorTrackerBindings.gen.cs 33
V3115 CWE-684 Passing 'null' to 'Equals' method should not result in 'NullReferenceException'. ProfilerFrameDataView.bindings.cs 60

Once again about the check for null inequality

PVS-Studio warning: V3125 CWE-476 The 'camera' object was used after it was verified against null. Check lines: 184, 180. ARBackgroundRenderer.cs 184

protected void DisableARBackgroundRendering()
{
    ....
    if (camera != null)
        camera.clearFlags = m_CameraClearFlags;

    // Command buffer
    camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_CommandBuffer);
    camera.RemoveCommandBuffer(CameraEvent.BeforeGBuffer, m_CommandBuffer);
}

The first time the camera variable is used, it is checked against null, but further along the code the developers forgot to do it. The correct variant could be like this:

protected void DisableARBackgroundRendering()
{
    ....
    if (camera != null)
    {
        camera.clearFlags = m_CameraClearFlags;

        // Command buffer
        camera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, m_CommandBuffer);
        camera.RemoveCommandBuffer(CameraEvent.BeforeGBuffer, m_CommandBuffer);
    }
}

Another similar error: PVS-Studio warning: V3125 CWE-476 The 'item' object was used after it was verified against null. Check lines: 88, 85. TreeViewForAudioMixerGroups.cs 88

protected override Texture GetIconForItem(TreeViewItem item)
{
    if (item != null && item.icon != null)
        return item.icon;
    if (item.id == kNoneItemID) // <=
        return k_AudioListenerIcon;
    return k_AudioGroupIcon;
}

This is an error that in some cases leads to a null reference access. If the condition in the first if block holds, the method returns. However, if it does not, there is no guarantee that the reference item is non-null. Here is the corrected version of the code:

protected override Texture GetIconForItem(TreeViewItem item)
{
    if (item != null)
    {
        if (item.icon != null)
            return item.icon;
        if (item.id == kNoneItemID)
            return k_AudioListenerIcon;
    }
    return k_AudioGroupIcon;
}

There are 12 similar errors in the code. Let me give you a list of the first 10:

V3125 CWE-476 The 'element' object was used after it was verified against null. Check lines: 132, 107. StyleContext.cs 132
V3125 CWE-476 The 'mi.DeclaringType' object was used after it was verified against null. Check lines: 68, 49. AttributeHelper.cs 68
V3125 CWE-476 The 'label' object was used after it was verified against null. Check lines: 5016, 4999. EditorGUI.cs 5016
V3125 CWE-476 The 'Event.current' object was used after it was verified against null. Check lines: 277, 268. HostView.cs 277
V3125 CWE-476 The 'bpst' object was used after it was verified against null. Check lines: 96, 92. BuildPlayerSceneTreeView.cs 96
V3125 CWE-476 The 'state' object was used after it was verified against null. Check lines: 417, 404. EditorGUIExt.cs 417
V3125 CWE-476 The 'dock' object was used after it was verified against null. Check lines: 370, 365. WindowLayout.cs 370
V3125 CWE-476 The 'info' object was used after it was verified against null. Check lines: 234, 226. AssetStoreAssetInspector.cs 234
V3125 CWE-476 The 'platformProvider' object was used after it was verified against null. Check lines: 262, 222. CodeStrippingUtils.cs 262
V3125 CWE-476 The 'm_ControlPoints' object was used after it was verified against null. Check lines: 373, 361. EdgeControl.cs 373

The choice turned out to be small

PVS-Studio warning: V3136 CWE-691 Constant expression in switch statement. HolographicEmulationWindow.cs 261

void ConnectionStateGUI()
{
    ....
    HolographicStreamerConnectionState connectionState =
        PerceptionRemotingPlugin.GetConnectionState();
    switch (connectionState)
    {
        ....
    }
    ....
}

The method PerceptionRemotingPlugin.GetConnectionState() is to blame here. We already came across it when analyzing the V3022 warnings:

internal static HolographicStreamerConnectionState GetConnectionState()
{
    return HolographicStreamerConnectionState.Disconnected;
}

The method returns a constant. This code is very strange and needs attention.

Conclusions

I think we can stop at this point; otherwise, the article would become boring and overextended. Again, I listed only the errors that I just couldn't miss. Sure, the Unity code contains a large number of erroneous and incorrect constructions that need to be fixed.
The difficulty is that many of the issued warnings are very controversial, and only the author of the code can make the exact "diagnosis" in each case. Generally speaking about the Unity project, we can say that it is rich in errors, but taking into account the size of its code base (400 thousand lines), it's not so bad. Nevertheless, I hope the authors will not neglect code analysis tools to improve the quality of their product. Use PVS-Studio, and I wish you bugless code!
14. Intention

This article is intended to give a brief look into the logistics of machine learning. Do not expect to become an expert in the field just by reading this. However, I hope the article goes into just enough detail to spark your interest in learning more about AI and how it can be applied to various fields, such as games. Once you finish reading the article, I recommend looking at the resources posted below. If you have any questions, feel free to message me on Twitter @adityaXharsh.

How Neural Networks Work

Neural networks work by receiving inputs, sending outputs, and performing self-corrections based on the difference between the output and the expected output, also known as the cost. Neural networks are composed of neurons, which in turn compose layers, or collections of neurons. For example, there is an input layer and an output layer. In between these two layers are layers known as hidden layers. These layers allow for more complex and nuanced behavior by the neural network. A neural network can be thought of as a multi-tier cake: the first tier represents the input, the tiers in between, or lack thereof, represent the hidden layers, and the last tier represents the output. The two mechanisms of learning are Forward Propagation and Backward Propagation. Forward Propagation uses linear algebra to calculate what the activation of each neuron of the next layer should be, and then pushes, or propagates, those values forward. Backward Propagation uses calculus to determine what values in the network need to change in order to bring the output closer to the expected output.

Forward Propagation

As can be seen from the gif above, each layer is composed of multiple neurons, and each neuron is connected to every neuron of the following and previous layers, save for the input and output layers, since they are not surrounded by layers on both sides. To put it simply, a neural network is a collection of activations, weights, and biases. They can be defined as:

Activation: a value representing how strongly a neuron is firing.
Weight: how strong the connection between two neurons is. It affects how much of the activation is propagated to the next layer.
Bias: a minimum threshold for whether or not the current neuron's activation and weight should affect the next neuron's activation.

Each neuron has an activation and a bias. Every connection between two neurons is represented as a weight. The activations, weights, biases, and connections can be represented using matrices. Activations are calculated using this formula:

a^{(l)} = \sigma\left( W^{(l)} a^{(l-1)} + b^{(l)} \right)

After the inner portion of the function has been computed, the resulting matrix gets pumped into a special function known as the sigmoid function. The sigmoid is defined as:

\sigma(x) = \frac{1}{1 + e^{-x}}

The sigmoid function is handy since its output is locked between zero and one. This process is repeated until the activations of the output neurons have been calculated.

Backward Propagation

The process of a neural network performing self-correction is referred to as Backward Propagation, or backprop. This article will not go into detail about backprop, since it can be a confusing topic. To summarize, the algorithm uses a technique in calculus known as gradient descent. Given a plane in an infinite number of dimensions, the direction of change that minimizes the error must be found. The goal of using gradient descent is to modify the weights and biases such that the error in the network approaches zero.
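In symbols, one gradient descent step nudges each weight and bias against the gradient of the cost, scaled by a learning rate \eta (this is the standard update rule, not something specific to this article):

w \leftarrow w - \eta \frac{\partial C}{\partial w}, \qquad b \leftarrow b - \eta \frac{\partial C}{\partial b}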
Furthermore, you can find the cost, or error, of a network using a cost function; a common choice is the quadratic cost:

      $C = \frac{1}{2n}\sum_x \lVert y(x) - a(x) \rVert^2$

      where $y(x)$ is the expected output, $a(x)$ is the network's actual output, and $n$ is the number of training samples. Unlike forward propagation, which is done from input to output, backward propagation goes from output to input. For every activation, find the error in that neuron and how much of a role it played in the error of the output, and adjust accordingly. This technique uses concepts such as the chain rule, partial derivatives, and multivariate calculus; therefore, it's a good idea to brush up on one's calculus skills.

      High Level Algorithm
      1. Initialize the matrices for weights and biases for all layers to random decimal numbers between -1 and 1.
      2. Propagate the input through the network.
      3. Compare the output with the expected output.
      4. Backward propagate the correction into the network.
      5. Repeat this for N training samples.

      Source Code If you're interested in looking into the guts of a neural network, check out AI Chan! It's a simple-to-integrate machine learning library I wrote in C++. Feel free to learn from it and use it in your own projects. https://bitbucket.org/mrsaturnsan/aichan/ Resources http://neuralnetworksanddeeplearning.com/ https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A
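      To make forward propagation concrete, here is a minimal, self-contained sketch of a single layer's computation in C++. The names and the use of std::vector are illustrative assumptions; this is not code from the AI Chan library:

      #include <cmath>
      #include <cstddef>
      #include <vector>

      // Sigmoid squashes any real value into the (0, 1) range.
      double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

      // Forward-propagate one layer: a' = sigmoid(W * a + b).
      // weights[j][k] is the connection strength from input neuron k to output neuron j.
      std::vector<double> forward(const std::vector<std::vector<double>>& weights,
                                  const std::vector<double>& biases,
                                  const std::vector<double>& activations) {
          std::vector<double> next(biases.size());
          for (std::size_t j = 0; j < biases.size(); ++j) {
              double z = biases[j];                       // start from the bias
              for (std::size_t k = 0; k < activations.size(); ++k)
                  z += weights[j][k] * activations[k];    // weighted sum of inputs
              next[j] = sigmoid(z);                       // squash into (0, 1)
          }
          return next;
      }

      Calling forward once per layer, feeding each result into the next call, is exactly the "propagating those values forward" step described above.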
    15. The following is an excerpt from the book Unity 2017 Game Development Essentials - Third Edition, written by Tommaso Lintrami and published by Packt Publishing. Unity makes the game production process simple by giving you a set of logical steps to build any conceivable game scenario. Renowned for being non-game-type specific, Unity offers you a blank canvas and a set of consistent procedures to let your imagination be the limit of your creativity. Essential Unity concepts By establishing the use of the GameObject concept, you are able to break down parts of your game into easily manageable objects, which are made of many individual component parts. By making individual objects within the game and introducing functionality to them with each component you add, you are able to expand your game infinitely in a logical, progressive manner. Component parts in turn have Variables, which are essentially properties of the component, or settings to control it with. By adjusting these variables, you'll have complete control over the effect that a component has on your object. Assets These are the building blocks of all Unity projects. From textures in the form of image files, through 3D models for meshes, to sound files for effects, Unity refers to the files you'll use to create your game as assets. This is why, in any Unity project folder, all files used are stored in a child folder named Assets. This Assets folder is mirrored in the Project panel of the Unity interface; see the interface section in this chapter for more detail. Scenes In Unity, you should think of scenes as individual levels or areas of game content. However, some developers create entire games in a single scene, such as puzzle games, by dynamically loading content through code. By constructing your game with many scenes, you'll be able to distribute loading times and test different parts of your game individually. New scenes are often used separately from the game scene you may be working on, in order to prototype or test a piece of potential gameplay.

      Any currently open scene is what you are working on. In Unity 2017, you can load additional scenes into the hierarchy while editing, and even at runtime, through the SceneManager API, so two or more scenes can be worked on simultaneously. Scenes can be manipulated and constructed using the Hierarchy and Scene views. OK, now that we know what assets and scenes are, let's start setting up a scene and building a game asset. Setting up a scene and preparing game assets Create a new scene from the main menu by navigating to Assets | Create | Scene, and name it ParallaxGame. In this new scene, we will set up, step by step, all the elements for our 2D game prototype. First of all, we will switch the Scene view camera to 2D by clicking the 2D button in the Scene view toolbar. As you can see, the Scene view camera is now orthographic: you can't rotate it freely, as you can with the 3D camera. Of course, we will want to change this setting on our Main Camera as well. We also want to change the Orthographic size to 4.5 to have the correct view of the scene, and for the camera background, instead of a Skybox, we will choose a very dark or black clear color. This is how the Inspector should look when these settings are done. While the Clipping Planes distances are important for setting the size of the view frustum of a 3D Perspective camera (inside which everything will be rendered by the engine), for our 2D camera we only need to set the Orthographic Size to 4.5 to have the correct distance of the camera from the scene. When these settings are done, proceed by importing Chapter2-3-4.unitypackage into the project. You can either double-click on the package file with Unity open, or use the top menu: Assets | Import | Custom Package. If you haven't imported all the materials from the book's code already, be sure to include the Sprites subfolder. After the import, look in the Sprites/Parallax/DarkCave folder in the Project view and you will find some images imported as textures (the default). The first thing we want to do now is to change the import settings of these images, in the Inspector, from Texture to Sprite (2D and UI). To do so, select all the images in the Sprites/Parallax/DarkCave folder in the Project view -- all except the _reference_main_post file, which is just a picture used as a reference of what the game level should look like. The Import Settings are shown in the Inspector after selecting the seven images in the Project view. The Max Size setting is hidden (-) because we have a multi-selection of image files. Having made the multiple selection, in the Inspector we will do the following: Set the Texture Type option to Sprite (2D and UI). By default, images are imported as textures; to import them as Sprites, this type must be set. Uncheck the Generate Mip Maps option, as we don't need MIP maps for this project: we are not going to look at the Sprites from a distant point of view. (Games with a zoom-in/zoom-out feature, like the original top-down Grand Theft Auto, would need this setting checked.) Set Max Size to the maximum allowed, 8192, to ensure that you import all the images at their maximum resolution. This is the maximum size for an image imported as a Sprite or texture on a modern PC. We set it so high because most of the background images in the collection are around 6,000 pixels wide.
Click on the Apply button to apply these changes to all the images that were selected. The Project view then shows the content of the folder after the images have been set to Sprite in the Import Settings. Placing the prefabs in the game Unity can place prefabs in the game in many ways; the usual, visual method is to drag a stored prefab, or another kind of file/object, directly into the scene. Before dragging in the Sprites we imported, we will create an empty GameObject and rename it ParallaxCave. We will drag the layer images we just imported as Sprites, one by one, from the Project view (pointing at the Assets/Chapters2-3-4/Sprites/Background/DarkCave folder) into the Scene view, or more simply, directly into the Hierarchy view as children of our ParallaxCave GameObject, resulting in a scene Hierarchy like the one illustrated here. You can't drag all of them at once, because Unity will prompt you to save an animation filename for the selected collection of Sprites; we will see this later for our character and for the collectable graphics. Importing and placing background layers In any game engine, 2D elements such as Sprites are rendered following a sort order; this order is also called the z-order, because it is a way to express depth, or to cope with the missing z axis, in a two-dimensional context. The sort order is assigned an integer number which can be positive or negative; 0 is the middle point of this draw order. Ideally, a sort order of zero expresses the middle ground, where the player will act, or a layer near it. (Image courtesy of Wikipedia: parallax scrolling.) All positive numbers will render a Sprite element in front of the elements with a lower number. The graphic set we are going to use was taken from the Open Game Art website at http://opengameart.org. For simplicity, the provided background image files are named with a number within parentheses, for example middleground(z1), which means that this image should be rendered with a z sort order of 1. Change the sort order property of the Sprite component on each child object under ParallaxCave according to the value in the parentheses at the end of its filename. This will rearrange the graphics into the appropriate sorted order. After we place and set the correct layer order for all the images, we should arrange and scale the layers so that we end up with something like the reference image provided in the Assets/Chapters2-3-4/Sprites/Background/DarkCave/ folder. You can take a look at the final result for this part at any time by saving the current scene and loading the Chapter3_start.unity scene. You just read an excerpt from the book Unity 2017 Game Development Essentials - Third Edition, written by Tommaso Lintrami and published by Packt Publishing. Use the code ORGDA10 at checkout to get the recommended eBook for just $10 until April 30, 2018.
      By khawk
    16. This reference guide has now been proofread by @stimarco (Sean Timarco Baggaley). Please give your thanks to him. The guide should now be far easier to read and understand than previous revisions. Enjoy! Note: The normal mapping tutorial has been temporarily moved, to be added back as its own topic, to help separate the two for more clarity. If anyone has any corrections, please contact me. 3D Graphics Primer 1: Textures. This is a quick reference for artists who are starting out. The first topic revolves around textures and the many things an artist who is starting out needs to understand about them. I am primarily a 3D artist, so my focus will be on 3D art; however, some of this information is applicable to 2D artists as well. Textures What is a texture?

      By classical definition, a texture is the visual and especially tactile quality of a surface (Dictionary.com).

      Since current games lack the ability to convey tactile sensations, a texture in game terms simply refers to the visual quality of a surface, with an implicit tactile quality. That is, a rock texture should give the impression of the surface of a rock, and depending on the type, a rough or smooth tactile quality. We see these types of surfaces in real life and feel them in real life. So when we see the texture, we know what it feels like without needing to touch it due to our past experiences. But a lot more goes into making a convincing texture beyond the simple tactile quality.

      As you will learn as you read on, textures in games are a complex topic, with many elements involved in creating them for realtime rendering.

      We will look at:
      Texture File Types & Formats
      Texture Image Size
      Texture File Size
      Texture Types
      Tiling Textures
      Texture Bit Depth
      Making Normal Maps (Brief overview only)
      Pixel Density and Fill Rate Limitation
      Mipmaps
      Further Reading

      Further Reading Creating and using textures is such a big subject that covering it entirely within this one primer is simply not sensible. All I can sensibly achieve here is a skim over the surface, so here are some links to further reading matter.

      Beautiful Yet Friendly - Written by Guillaume Provost, hosted by Adam Bromell. This is a very interesting article that goes into some depth about basic optimizations and the thought process when designing and modeling a level. It goes into the technical side of things to truly give you an understanding of what is going on in the background. You can use the information in this article to find out how to build models that use fewer resources -- polygons, textures, etc. -- for the same results.
      This is the reason why this is the first article I am linking to: it is imperative to understand the topics discussed in this article. If you need any extra explanation after reading it, you can PM me and I am more than happy to help. However, parts of this article go outside the texture realm of things and into the mesh side, so keep that in mind if you're focusing on learning textures at the moment.

      UVW Mapping Tutorial - by Waylon Brinck. This is about the best tutorial I have found for a topic that gives all 3D artists a headache: unwrapping your three-dimensional object into a two-dimensional plane for 2D painting. It is the process by which all 3D artists place a texture on a mesh (model). NOTE: while this tutorial is very good and will help you in learning the process, UVW mapping/unwrapping is just one of those things you must practice and experiment with for a while before you truly understand it.

      Poop In My Mouth's Tutorials - By Ben Mathis. Probably the only professional I know who has such a scary website name, but don't be afraid! I swear there is nothing terrible beyond that link. He has a ton of excellent tutorials, short and long, that cover both the modeling and texturing processes, ranging from normal-mapping to UVW unwrapping. You may want to read this reference first before delving into some of his tutorials. Texture File Types & Formats In the computer world, textures are really nothing more than image files applied to a model. Because of this, a variety of common computer image formats can be used. These include .TGA, .DDS, .BMP, and even .JPG (or .JPEG). Almost any digital image format can be used, but some things must be taken into consideration:

      In the modern world of gaming, being heavily reliant on shaders, formats like .JPG are rarely used. This is because .JPG, and others like it, are lossy formats, where data in the image file is actually thrown away to make the file smaller. This process can result in compression artifacts. The problem is that these artifacts will interfere with shaders, because shaders rely on having all the data contained within the image intact. Because of this, lossless formats are used -- formats like .DDS (if a lossless option is chosen), .BMP, and .TGA.

      However, there is such a thing as S3TC (also known as "DXT") compression. This was a compression technique developed for use on S3's Savage 3D graphics cards, with the benefit of keeping a texture compressed within video memory, whereas non-S3TC-compressed textures are stored there uncompressed. This gives a compression ratio of up to 8:1, and can allow either more textures to be used in a scene, or the resolution of a texture to be increased without using more memory. S3TC compression can be applied to almost any source format, but is most commonly associated with the .DDS format.
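      To give a feel for the numbers, here is a small, self-contained sketch of the block arithmetic behind DXT compression. The 4x4-block sizes (8 bytes for DXT1, 16 bytes for DXT3/DXT5) are standard; the example texture size is arbitrary:

      #include <cstdio>

      // S3TC/DXT stores texels in 4x4 blocks: DXT1 uses 8 bytes per block,
      // DXT3/DXT5 use 16 bytes per block.
      unsigned dxtSizeBytes(unsigned width, unsigned height, unsigned bytesPerBlock) {
          unsigned blocksX = (width  + 3) / 4;   // round up to whole blocks
          unsigned blocksY = (height + 3) / 4;
          return blocksX * blocksY * bytesPerBlock;
      }

      int main() {
          // A 1024x1024 32-bit texture is 4 MB uncompressed...
          unsigned uncompressed = 1024 * 1024 * 4;
          // ...but only 512 KB as DXT1: the 8:1 ratio mentioned above.
          unsigned dxt1 = dxtSizeBytes(1024, 1024, 8);
          std::printf("%u vs %u bytes\n", uncompressed, dxt1);
      }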

      Just like .JPG and other lossy formats, any texture using S3TC will suffer compression artifacts, and as such is not suitable for normal maps (which we'll discuss a little later on).

      Even with S3TC it is common to author textures in a lossless format, and then apply S3TC where appropriate. This gives an artist lossless textures when needed -- e.g. for normal maps -- while still providing a method of compression for textures that can benefit from S3TC, such as diffuse textures. Texture Image Size The engineers who design computers and their component parts like us to feed data to their hardware in chunks that have dimensions defined as powers of two (e.g. 16 pixels, 32 pixels, 64 pixels, and so on). While it is possible to have a texture that is not a power of two, it is generally a good idea to stick to power-of-two sizes for compatibility reasons (especially if you're targeting older hardware). That is, if you're creating a texture for a game, you want to use image dimensions that are powers of two. Examples: 32x32, 16x32, 2048x1024, 1024x1024, 512x512, 512x32, etc. Say, for example, you have a mesh/model you're UV unwrapping for a game: you must work within dimensions that are a power of two.

      Powers of two include: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and so on.
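      As an aside, whether a dimension is a power of two can be checked with a classic bit trick. This is a small illustrative helper, not part of any particular engine or tool:

      // A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
      bool isPowerOfTwo(unsigned n) {
          return n != 0 && (n & (n - 1)) == 0;
      }
      // isPowerOfTwo(512) == true, isPowerOfTwo(768) == false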

      What if you want to use images that aren't powers of two? In such cases, you can often use uneven pixel ratios. This means you can create your texture at, say, 1024x768, and then save it as 1024x1024. When you're applying the texture to your mesh, you can stretch it back out in proportion. However, it is best to go for a 1:1 pixel ratio and create the texture at a power of two from the start; the stretching is just one method for getting around this if needed.

      Please refer to the "Pixel Density and Fill Rate Limitation" section for more in-depth information on how exactly to choose your image size. Texture File Size File size is important for a number of reasons. The file size is the actual amount of memory (permanent or temporary) that the texture requires.
      For an obvious example, an uncompressed .BMP could be 6 MB; this is the space it requires to be saved on a hard drive and within video and/or system RAM. Using compression we can squeeze the same image into a file size of, say, 400 KB, or possibly even smaller. Compression like that used by .JPG and similar formats only helps on permanent storage media, such as hard drives. That is to say, when the image is stored in video card memory it must be uncompressed, so you only get a benefit when storing the texture on its permanent medium, not in memory during use. Enter S3TC. The key benefit of the S3TC compression system is that it compresses on hard drives, discs, and other media, while also staying compressed within video card memory.

      But why should you care what size it is within the video memory?

      Video cards have onboard memory called, unsurprisingly enough, video memory, for storing data that needs to be accessed fast. This memory is limited, so some consideration on the artist's part is required. The good news is that video card manufacturers are constantly adding more of this video memory -- known as Video RAM (or VRAM for short). Where once 64 MB was considered good, we can now find video cards with up to 1 GB of RAM.

      The more textures you have, the more RAM will be used. The actual amount is based primarily on the file size of each texture. Other data takes up video memory too, such as the model data itself, but the majority is used for textures. For this reason, it is a good idea both to plan your video memory usage and to test how much you're using once in the engine. It is also advisable to define a minimum hardware configuration for what you want your game to run on. If you do, then you should always make sure your video memory usage does not exceed the minimum target hardware's amount.
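      As a rough illustration of that kind of planning, the sketch below estimates the VRAM cost of a handful of textures. The sizes and 32-bit assumption are examples only, and real engines add overhead this ignores:

      #include <cstdio>

      // Rough VRAM cost of one uncompressed texture, in bytes.
      // The extra factor of 4/3 accounts for a full mipmap chain
      // (see the Mipmaps section below).
      unsigned long textureBytes(unsigned w, unsigned h, unsigned bpp, bool mipmaps) {
          unsigned long base = (unsigned long)w * h * (bpp / 8);
          return mipmaps ? base * 4 / 3 : base;
      }

      int main() {
          unsigned long total = 0;
          total += textureBytes(1024, 1024, 32, true);  // character diffuse
          total += textureBytes(1024, 1024, 32, true);  // character normal map
          total += textureBytes(512, 512, 32, true);    // tiling wall texture
          std::printf("Approx. %.2f MB of VRAM\n", total / (1024.0 * 1024.0));
      }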

      Another advantage of in-memory compression like S3TC is that it can increase available bandwidth. If you know your engine on your target hardware may be swapping textures back and forth frequently (something that should be avoided if possible, but is a technique used on consoles), then you may want to consider keeping the textures compressed and decompressing them on the fly. That is to say, you keep the textures compressed, and when they're required, they're transported and then decompressed as they're added to video memory. This results in less data having to be shunted across the graphics card's bus (transport line) to and from the computer's main memory, resulting in less bandwidth utilization, but with the added penalty of a few wasted processing clocks. Texture Types Now we're going to discuss such things as diffuse, normal, specular, parallax, and cube mapping.
      Aside from the diffuse map, these mapping types are common in what have become known as 'next-gen' engines, where the artist is given more control over how the model is rendered by use of control maps for shaders. Diffuse maps are textures which simply provide the color and any baked-in details. Games before shaders simply used diffuse textures. (You could say this is the 'default' texture type.) For example, when texturing a rock, the diffuse map of the rock would be just the color and data of that rock. Diffuse textures can be painted by hand or made from photographs, or a mixture of both.

      However, in any modern engine it is preferred that most lighting and shadow detail is not 'baked' (i.e. pre-drawn directly into the texture) into the diffuse map; instead, the diffuse holds just the color data, and other control maps recreate such things as shadows, specular reflections, refraction effects and so on. This results in a more dynamic look for the texture overall, helping its believability once in-game. 'Baking' such details will hinder the game engine's ability to produce "dynamic" results, which will cause the end result to look unrealistic.

      There is sometimes an exception to this rule: if you're baking (merging) "general" shading/lighting, like an ambient occlusion map, into the diffuse, then that is OK. These types of additions to the diffuse are general enough that they won't hinder the dynamic effect of running in a realtime engine, while achieving a higher level of realism.

      Another point to remember is that while textures sourced from photographs tend to work very well in 3D environment work, it is often frowned upon to use such 'photosourced textures' for humans. Human diffuse textures are usually hand-painted.

      Normal maps are used to provide lighting detail for dynamic lighting; however, they also play an even more important role, as we will discuss shortly. Normal maps get their name from the fact that they recreate normals on a mesh using a texture. A 'normal' is a vector extending from a triangle on a mesh. This tells the game engine how much light the triangle should receive from a particular light source -- the engine simply compares the angle of the normal with the position and angle of the light itself, and thus calculates how the light strikes the triangle. Without a normal map, the game engine can only use the data available in the mesh itself; a triangle would only have three normals for the engine to use -- one at each vertex -- regardless of how big that triangle is on the screen, resulting in a flat look. A normal map, on the other hand, creates normals right across a mesh's surface. The result is the capability to generate a normal map from a 2 million poly mesh and have this lighting detail recreated on a 200 poly mesh. This allows an artist to recreate a high-poly mesh with relatively few polygons in comparison.
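      The comparison between a normal and a light is, at its simplest, a dot product. Here is a minimal sketch of per-sample diffuse (Lambertian) lighting; the three-component vector type is an assumption for illustration, not any particular engine's math library:

      #include <algorithm>

      struct Vec3 { float x, y, z; };

      float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

      // Diffuse light intensity for one point on a surface.
      // 'normal' comes from the normal map (or the mesh); 'toLight' points at
      // the light source. Both are assumed to be unit length.
      float diffuse(Vec3 normal, Vec3 toLight) {
          // Facing the light gives 1, edge-on gives 0, facing away clamps to 0.
          return std::max(0.0f, dot(normal, toLight));
      }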

      Such tricks are associated with 'next-gen' engines, and are heavily used in screenshots of the Unreal 3 Engine, where you can see large amounts of detail that is still able to run in realtime due to the low actual polygon counts used. Normal maps use the three color channels (red, green, and blue) to store the components of the surface normals; however, there is no set standard for how these channels are interpreted, and as such a channel sometimes needs to be flipped for correct results in a given engine.

      Normal maps can be generated from high-poly meshes, or they can be image-generated, that is, generated from a gray scale height map. However, normal maps generated from high-poly meshes tend to be preferred, as they tend to have more depth. This is for various reasons, but the main one is that it is difficult for most artists to paint a grayscale map that equals the quality you'll get straight from a model. Also, you will often find that the image generators tend to miss details, requiring the artist to edit the normal map by hand later. However, it is not impossible to get near-equal results using both methods; each has its own requirements, and it is up to you which to use in each situation.

      (Please refer to the tutorial section for extended information on normal maps.)

      Specular maps are simple to create and easy to paint by hand, because the map is simply a gray scale map that defines the specular level for any point on a mesh. However, in some engines and in 3D modeling applications this map can be full-color, so that the brightness defines the specular level and the color defines the color of the specular highlights. This gives the artist finer control to produce more lifelike textures, because the specular response of specific materials can be matched more closely to reality. In this way you can give a plastic material a more plastic-like specular reflection, or simulate a stone's particular specular look.

      Parallax mapping picks up where regular normal mapping fails. We create normals on a model by use of a normal map. A parallax map does the same thing as a normal map, except it also samples a grayscale height map stored in the alpha channel. Parallax mapping works by using the angles recorded in the tangent space normal map, along with the height map, to calculate which way to displace the texture coordinates. It uses the grayscale map to define how much the texture should be extruded outwards, and uses the angle information recorded in the tangent normal map to determine the angle at which to offset the texture. Doing things this way, a parallax map can recreate the extrusion of a normal map, but without the flattening that is visible in ordinary normal mapping due to lack of data at different perspectives. It gets its name from the parallax effect that it recreates. The end results are cleaner and deeper extrusions, with less of the flattening that occurs in normal mapping, because the texture coordinates are offset with your perspective.
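      A minimal sketch of the texture-coordinate offset at the heart of this technique follows. The vector types and the scale value are illustrative assumptions, and real implementations run this per pixel in a shader:

      struct Vec2 { float u, v; };
      struct Vec3 { float x, y, z; };  // view direction in tangent space

      // Offset the texture coordinates along the view direction, by an amount
      // proportional to the height sampled at this point (0 = low, 1 = high).
      // 'scale' controls the strength of the effect; ~0.04 is a typical guess.
      Vec2 parallaxOffset(Vec2 uv, Vec3 viewTS, float height, float scale) {
          // Shift UVs toward the viewer for high areas, away for low ones.
          uv.u += viewTS.x * height * scale;
          uv.v += viewTS.y * height * scale;
          return uv;
      }

      The shifted coordinates are then used to sample the diffuse and normal maps, which is what makes the surface appear to shift with your viewpoint.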

      Parallax mapping is usually more costly than normal mapping. Parallax mapping also has the limitation that it cannot be used on deformable meshes, because of the tendency for the texture to "swim" due to the way the texture coordinates are offset.

      Cube mapping uses a texture that is not unlike an unfolded box, and it acts just like a skybox when used. Basically, this texture is designed like an unfolded box that is folded back together when used. This allows us to create a three-dimensional reflection of sorts. The result is very realistic precomputed reflections. For example, you would use this technique to create a shiny, metal ball; the cube map would provide the reflection data. (In some engines, you can tell the engine itself to generate a cube map from a specific point in the scene and have it apply the result to a model. This is how we get shiny, reflective cars in racing games, for example. The engine is constantly taking cube map 'photos' for each car to ensure it reflects its surroundings accurately.) Tiling Textures Now, while you can have unique, specific textures made for specific models, a common way to save both time and video memory is tiling textures. These are textures which can be fitted together like floor or wall tiles, producing a single, seamless image. The benefit is that you can texture an entire floor in an entire building using a single tiling texture, which saves both the artist's time and video memory, due to fewer textures being needed.

      A tiling texture is achieved by having the left and right sides of a texture blend into each other, and the same for the bottom and top. Such blending can be achieved by the use of a specialist program, a plugin, or by simply offsetting the texture a little to the left and down (with wraparound), cleaning up the seams, and then offsetting back to the original position.
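      That offset step relies on wraparound: pixels pushed off one edge re-enter on the opposite edge, which moves the seams into the middle of the image where they can be painted out. A small sketch of that wraparound shift, assuming a simple row-major pixel buffer:

      #include <vector>

      // Shift an image by (dx, dy) with wraparound, so the old seams land in
      // the middle of the canvas where they are easy to retouch.
      std::vector<unsigned> wrapShift(const std::vector<unsigned>& src,
                                      int w, int h, int dx, int dy) {
          std::vector<unsigned> dst(src.size());
          for (int y = 0; y < h; ++y)
              for (int x = 0; x < w; ++x) {
                  int nx = (x + dx + w) % w;  // adding w/h first lets small negative offsets wrap too
                  int ny = (y + dy + h) % h;
                  dst[ny * w + nx] = src[y * w + x];
              }
          return dst;
      }
      // Typical use: wrapShift(image, 512, 512, 256, 256), paint out the seams,
      // then shift back by the same amount.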

      After you create your first tiling texture and test it, you're bound to see that each texture produces a 'tiling artifact' which shows how and where the texture is tiled. Such artifacts can be reduced by avoiding high-contrast detail, avoiding unique detail (such as a single rock on a sand texture), and by tiling the texture less. Texture Bit Depth A texture is just an image file. This means most of the theory you're familiar with when it comes to working with images also applies to textures. One such topic is bit depth. Here you will see such numbers as 8, 16, 24, and 32 bits. These each correspond to the amount of color data that is stored for each pixel of the image.

      How do we get the number 24 for an image file? Well, the number 24 refers to how many bits each pixel contains. That is, you have 3 channels -- red, green, and blue -- all of which are simply channels containing a gray scale image, but which are added together to produce a full color image. So black in the red channel means "no red" at that point, while white in the red channel means "lots of red". The same applies to blue and green. When these are combined, they produce a full color image: a mix of red, green, and blue (if using the RGB color model). The bits come in by the fact that they define how many levels of gray each channel has: 8 bits per channel, over 3 channels, is 24 bits total.

      8 bits gives you 256 levels of gray. Combining the idea that 8 bits gives you 256 levels of gray, and that each channel is simply a gray scale where different levels of gray define a level within that color, we can see that a 24 bit image gives us 16,777,216 different colors to play with. That is, 8 bits x 3 channels = 24 bits; 8 bits = 256 gray scale levels; so 256 x 256 x 256 = 16,777,216 colors. This knowledge comes in useful because at certain times it is easier to edit the RGB channels individually; with a correct understanding you can then delve deeper into editing your textures.
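      The same arithmetic, spelled out as a quick check (purely a worked example of the numbers above):

      #include <cstdio>

      int main() {
          unsigned levels = 1u << 8;  // 8 bits -> 256 levels per channel
          unsigned long colors = (unsigned long)levels * levels * levels;
          std::printf("%lu\n", colors);  // prints 16777216
      }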

      However, with the increase in shader use, you'll often see a 32 bit image/texture file. These are image files which contain 4 channels, each of 8 bits: 4 x 8 = 32. This allows a developer to use the 4th channel to carry a unique control map or extra data needed for shaders. Since each channel is gray scale, a 32 bit image is ideal to carry a full color texture along with an extra map within it. Depending on your engine you may see a full color texture with the extra 4th channel being used to hold a gray scale map for transparency (more commonly known as an "alpha channel"), specular control map, or a gray scale map along with a normal map in the other channels to be used for parallax mapping.

      As you paint your textures, you may start to realize that you're probably not using all of the colors available to you in a 24 bit image. And you're probably right; this is why artists can at times use a lower bit depth texture to achieve the same, or nearly the same, look with a smaller memory footprint. There are certain cases where you will more than likely need a 24 bit image, however: if your image/texture contains frequent gradations in color or shading, then a 24 bit image is required. But if your image/texture contains solid colors, little detail, little or no shading, and so on, you can probably get away with a 16, 8, or perhaps even a 4 bit texture.

      Often, this type of memory conservation technique is best done when the artist is able to choose or make his own color palette. This is where you hand-pick the colors that will be saved with your image, instead of letting the computer choose automatically. With a careful eye, you can choose more optimal colors which fit your texture better. Basically, all you're doing is throwing out what would be considered useless colors which are being stored in the texture but not being used. Making Normal Maps There are two methods for creating normal maps: using a detailed mesh/model, or creating a normal map from an image.
      The first method is part of a common workflow that nearly all modelers who support normal map generation use. For generating a normal map from a model, you can either generate it straight out to a texture, or, if you're generating your normal map from a high-poly mesh, it is common to then model the low-poly mesh around your high-poly mesh. (Some artists prefer modeling the low-poly version first, while others like to do the high then the low; in the end there is no perfect way, it's just preference.)

      For example: you have a high-poly rock; you then model/build a low-poly mesh around the high one, UVW unwrap it, and generate the normal map from your high-poly version. Virtual "rays" will be cast from the high- to the low-poly model -- a technique known as "projecting". This allows for a better application of your high-poly mesh's normal map to your low-poly mesh, since you're projecting the detail from the high to the low. However, some applications switch the requirements and have your low-poly mesh sit inside your high-poly one, and others allow the rays for generating the normal map to be cast both ways, so refer to your application's tutorials, as this may vary. Creating a normal map from an image. This method can be quicker than the more common method described above. For this, all you need is an edited version of your diffuse map. The key to good-looking image-based normal maps is to edit out any unneeded information from your grayscale diffuse texture. If your diffuse has any baked-in specular, shadows, or any information that does not define depth, this needs to be removed from the image. Also, anything that is extra, like strips of paint on a concrete texture, should be edited out too. This is because, just like bump maps and displacement maps, the color of the pixels defines depth, with lighter pixels being "higher" than darker pixels. So, if you have specular highlights (which turn white when made gray), they will be interpreted as bumps in your normal map; you don't want this if the specular in fact lies on a flat surface. The same applies to shadows and everything else mentioned: it will all interfere with the normal map generation process. You simply want various shades of gray to represent various depths; any other data in the texture will not produce the correct results.
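      For the curious, image-based generators typically derive each normal from the slope of the height map, using something like central differences. This is a simplified sketch of that idea, with a made-up height() accessor and an arbitrary strength factor:

      #include <cmath>

      struct Vec3 { float x, y, z; };

      // Turn a grayscale height map into a normal at pixel (x, y).
      // height(x, y) is assumed to return 0..1; 'strength' exaggerates the slopes.
      Vec3 heightToNormal(float (*height)(int, int), int x, int y, float strength) {
          // Central differences approximate the slope in each direction.
          float dx = (height(x + 1, y) - height(x - 1, y)) * strength;
          float dy = (height(x, y + 1) - height(x, y - 1)) * strength;
          float len = std::sqrt(dx * dx + dy * dy + 1.0f);
          // The normal leans away from the slope; (0, 0, 1) means "flat".
          return Vec3{ -dx / len, -dy / len, 1.0f / len };
      }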

      For generating normal maps you can always use the Nvidia plugin; however, it takes a lot of tweaking to get a good-looking normal map. As such, I recommend Crazy Bump!, which will produce some very good normal maps if the texture it is generating from is good. Combining the two methods. It is common, even if you're generating a normal map from a 3D high-poly mesh, to also generate an image-based normal map and overlay it over the high-poly generated one. This is done by generating one from your diffuse map, filling the resulting normal map's blue channel with 128 neutral gray, and then overlaying this over your high-poly generated one, to add in those small details that only the image can provide. This way you get the high-frequency detail along with the nice, cleanly generated mid-to-low-frequency detail from your high-poly generated normal map. Pixel Density and Fill Rate Limitation Let's say you have a coin that you just finished UVW unwrapping. It will be very small once in-game; however, you decide it would be fine to use a 1024x1024 texture. What is wrong with this situation? Firstly, you shouldn't need to UVW unwrap a coin! Furthermore, you should not be applying a 1024x1024 texture! Not only is this wasteful of video memory, but it will result in uneven pixel density and will increase your fill rate cost on that model for no reason. A good rule of thumb is to use only the amount of resources that makes sense based on how much screen space an object will take up. A building will take up more of the screen than a coin, so it needs a higher resolution texture and more polygons. A coin takes up less screen space, and therefore needs fewer polys and a lower resolution texture to obtain a similar pixel density.
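      One way to make that rule of thumb concrete is to size a texture from the largest area an object is expected to cover on screen, rounded up to the next power of two. The numbers here are illustrative, not a standard:

      // Round up to the next power of two (e.g. 300 -> 512).
      unsigned nextPowerOfTwo(unsigned n) {
          unsigned p = 1;
          while (p < n) p <<= 1;
          return p;
      }

      // Pick a texture dimension from the object's expected maximum on-screen
      // size, aiming for roughly one texel per screen pixel.
      unsigned pickTextureSize(unsigned maxScreenPixels) {
          return nextPowerOfTwo(maxScreenPixels);
      }
      // A coin covering at most ~40 pixels needs only a 64x64 texture,
      // not 1024x1024.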

      So, what is pixel density? It is the density of a texture's pixels on a mesh. For example, take the UVW unwrapping tutorial linked to in the "Texture Image Size" section: there you will see a checkered pattern. This is not only used to make sure the texture is applied correctly, but also to keep track of pixel density. If you increase the pixel density, you will see the checkered pattern become more dense; if you decrease the density, the checkered pattern becomes less dense, with fewer squares showing.

      Maintaining a consistent pixel density in a game helps all of the art fit together. How would you feel if your high-pixel-density character walked up to a wall with a significantly lower pixel density? Your eyes would be able to compare the two and see that the wall looks like crap compared to the character. Would the same thing happen if the character were near the same pixel density as the wall? Probably not -- such things only become apparent (within reason) to the end user if they have something to compare it to. If you keep a consistent pixel density throughout all of the game's assets, you will see that all of it fits together better.

      It is important to note that there is one other reason for this, but we'll come to it in a moment. First, we need to look at two related problems that can arise: transform-limited and fill-rate-limited models. A transform-limited model has a lower pixel density per polygon; a fill-rate-limited model has a higher pixel density per polygon. The idea is that rendering a model is held back either by processing the polygons or by processing the actual pixel-dense surface. Knowing this, we can see that our coin, with very few polys, will have a giant pixel density per polygon, resulting in a fill-rate-limited mesh. However, it does not need to be fill rate limited: if we lower the texture resolution, we get a lower pixel density.

      The point is that your mesh will be held back when rendering by whichever process takes longer: transform or fill rate. If your mesh is fill rate limited, then you can speed up its processing by decreasing its pixel density, and its speed will increase until you reach transform limitation, at which point your mesh is taking longer to render based on the number of polygons it contains. In the latter case, you would then speed up the processing of the model by decreasing the number of polygons it contains -- that is, until you decrease the polygon count to the point where you're fill rate limited once again! As you can see, it's a balancing act. The trick is to maximize the speed of both the transform and fill rate processing (minimize the impact of each as much as you can), to get the best possible processing speed for your mesh.

      That said, being fill rate limited can sometimes be a good thing. The majority of "next-gen" games are fill rate limited, primarily because of their use of texture/shading techniques. So, if you can't possibly get any lower on the fill rate limitation and you're still fill rate limited, then you have a little bit of wiggle room where you can actually introduce more polygons with no performance hit. However, you should always try to cut down on fill rate limitations when possible, because of general performance concerns.

      Some methods revolve around splitting a single polygon into multiple polygons on a single mesh (like a wall, for example). This decreases the pixel density and shader processing per polygon by splitting the work across multiple polygons. There are other methods for dealing with fill rate limitation, but mainly it is as simple as keeping your pixel density at a reasonable level.

      MipMaps It is fitting that after discussing pixel density and fill rate limitation we discuss a thing called mipmapping. Mipmaps (or mip maps) are a collection of precomputed lower-resolution copies of a texture, contained within the texture. Let's say you have a 1024x1024 texture. If you generate mipmaps for it, it will contain the original 1024x1024 texture, but also 512x512, 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, and 2x2 versions of the same texture (exactly how many levels there are is up to you). These smaller mipmaps (textures) are then used in sequence, one after the other, according to the model's distance from the camera in the scene. If your model uses a 1024x1024 texture up close, it may be using a 256x256 texture when further away, and an even smaller mipmap level when it's way off in the distance. This is done for several reasons:
      The further away you are from your mesh, the less polygonal and texture detail is needed. All displays have a fixed resolution, and it is physically impossible for the player to make out the detail of a 1024x1024 texture and a 6,000 polygon mesh when the model takes up only 20 pixels on screen. The further away we are from the mesh, the fewer polygons and the lower the texture resolution we need to render it.
      Because of the fill rate limitation described above, it is beneficial to use mipmaps, as less texture detail must be processed for distant meshes. This results in a less fill-rate-heavy scene, because only the closer models receive larger textures, whereas more distant models receive smaller ones.
      Texture filtering. What happens when the player views a 1024x1024 texture at such a distance that only 40 pixels are available to render the mesh on screen? You get noise and aliasing artifacts! Without mipmaps, any texture too large for the display resolution will only produce unneeded and unwanted noise. Instead of filtering out this noise, mipmapping uses a lower-resolution texture at different distances, which results in less noise.
      It is important to note that while mipmaps will increase performance overall, you're actually increasing your texture memory usage: the mipmaps and the whole texture will be loaded into memory. However, it is possible to have a system where the user or developer selects the highest mipmap level they want, and the levels above this limit are not loaded into memory (as in the system we're using now), while the mipmaps at or below the limit still are. It is widely agreed that the benefits of mipmaps vastly outweigh the small memory hit. NOTE: Mesh detail can also affect performance, so the equivalent method used for mesh detail is known as LOD -- "Level Of Detail". Multiple versions of the mesh itself are stored at different levels of detail, and the less-detailed mesh is rendered when it's a long way away. Like mipmaps, a mesh can have any number of levels of detail you feel it requires.
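      A small sketch of what a full mip chain costs: each level has a quarter of the pixels of the one above it, so the whole chain converges to about 4/3 of the base texture's size. This is illustrative arithmetic, not any engine's API:

      #include <cstdio>

      int main() {
          unsigned long total = 0;
          // Walk the chain 1024 -> 512 -> ... -> 1, summing pixel counts.
          for (unsigned size = 1024; size >= 1; size /= 2)
              total += (unsigned long)size * size;
          unsigned long base = 1024ul * 1024ul;
          // Prints roughly 1.333: the whole chain adds only ~33% extra memory.
          std::printf("chain/base = %.3f\n", (double)total / base);
      }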

      The image below is of a level I created which makes use of most of what we've discussed. It makes use of S3TC compressed textures, normal mapping, diffuse mapping, specular mapping, and tiled textures.
         
    17. Sounds This is our final part, part 5 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and part 4 is here. It's great that collecting the pickups works, but a silent game is pretty bland. It would be great to have a sound play whenever a pickup is collected. Start by configuring a sound:

      [PickupSound]
      Sound = pickup.ogg
      KeepInCache = true

      Then, as part of the collision detection in the PhysicsEventHandler function, we change the code to be:

      if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstRecipientObject, 0);
        orxObject_AddSound(pstSenderObject, "PickupSound");
      }

      if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstSenderObject, 0);
        orxObject_AddSound(pstRecipientObject, "PickupSound");
      }

      In the code above, if the recipient is a pickup object, we use the orxObject_AddSound function to place our sound on the sender object; there's little point adding a sound to an object that is about to be deleted. And of course, if the pickup object is the sender, we add the sound to the recipient object. The PickupSound that is added to the object is the config section name we just defined. Compile and run. Hit the pickups and a sound will play. You can also use sounds without code. There is an AppearSound section already available in the config. We can use this sound on the ufo when it first appears in the game. This is as simple as adding a SoundList property to the ufo:

      [UfoObject]
      Graphic = UfoGraphic
      Position = (0, 0, -0.1)
      Body = UfoBody
      AngularVelocity = 200
      SoundList = AppearSound

      Re-run, and a nice sound plays at the start of the game.

      Adding a score What's a game without a score? We need to earn points for every pickup that is collected. The great thing about Orx objects is that they don't have to contain a texture as a graphic: they can contain a font and text rendered to a graphic instead. This is perfect for making a score object. Start by adding some config for the ScoreObject:

      [ScoreObject]
      Graphic = ScoreTextGraphic
      Position = (-380, -280, 0)

      Next, add the ScoreTextGraphic section, which will not be a texture, but text instead:

      [ScoreTextGraphic]
      Text = ScoreText

      Now to define the ScoreText, which is the section that contains the text information:

      [ScoreText]
      String = 10000

      The String property contains the actual text characters. This will be the default text when a ScoreObject instance is created in code. Let's now create an instance of the ScoreObject in the Init() function:

      orxObject_CreateFromConfig("ScoreObject");

      So far, the Init() function should look like this:

      orxSTATUS orxFASTCALL Init()
      {
        orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
        camera = orxViewport_GetCamera(viewport);

        orxObject_CreateFromConfig("BackgroundObject");

        ufo = orxObject_CreateFromConfig("UfoObject");
        orxCamera_SetParent(camera, ufo);

        orxObject_CreateFromConfig("PickupObjects");
        orxObject_CreateFromConfig("ScoreObject");

        orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);
        orxEvent_AddHandler(orxEVENT_TYPE_PHYSICS, PhysicsEventHandler);

        return orxSTATUS_SUCCESS;
      }

      Compile and run. There should be a score object in the top left corner displaying: 10000. The score is pretty small, and it's fixed in the top left corner of the playfield. That's not really what we want.
A score is an example of a User Interface (UI) element. It should be fixed in the same place on the screen, and not move around when the screen scrolls. The score should, in fact, be fixed as a child to the Camera: wherever the Camera goes, the score object should go with it. This can be achieved with the ParentCamera property, and then setting the position of the score relative to the camera's centre position:

      [ScoreObject]
      Graphic = ScoreTextGraphic
      Position = (-380, -280, 0)
      ParentCamera = Camera
      UseParentSpace = false

      With these changes, we've stated that we want the Camera to be the parent of the ScoreObject. In other words, we want the ScoreObject to travel with the Camera and appear to be fixed on the screen. Setting UseParentSpace to false means that we want to specify relative world coordinates from the centre of the camera; if we said yes, we'd have to specify coordinates in another system. And Position, of course, is the position relative to the center of the camera -- in our case, moved to the top left corner. Re-run and you'll see the score in much the same position as before, but when you move the ufo around and the screen scrolls, the score object remains fixed in the same place. The only thing is, it's still a little small. We can double its size using Scale:

      [ScoreObject]
      Graphic = ScoreTextGraphic
      Position = (-380, -280, 0)
      ParentCamera = Camera
      UseParentSpace = false
      Scale = 2.0
      Smoothing = false

      Smoothing has been set to false so that when the text is scaled up, it will be sharp and pixellated rather than smoothed, which looks odd. All objects in our project are smooth by default due to:

      [Display]
      Smoothing = true

      So we need to explicitly set the score not to smooth. Re-run. That looks a lot better. To actually make use of the score object, we will need a variable in code of type int to keep track of the score. Every clock cycle, we'll take that value and change the text on the ScoreObject. That is another cool feature of Orx text objects: the text can be changed at any time, and the object will re-render. Finally, when the ufo collides with a pickup and the pickup is destroyed, the score variable will be increased. The clock will pick up the variable value and set it on the score object. Begin by creating a score variable at the very top of the code:

      #include "orx.h"

      orxOBJECT *ufo;
      orxCAMERA *camera;
      int score = 0;

      Change the comparison code inside the PhysicsEventHandler function to increase the score by 150 points every time a pickup is collected:

      if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstRecipientObject, 0);
        orxObject_AddSound(pstSenderObject, "PickupSound");
        score += 150;
      }

      if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstSenderObject, 0);
        orxObject_AddSound(pstRecipientObject, "PickupSound");
        score += 150;
      }

      Now we need a way to change the text of the score object. We declared the score object in the Init() function as:

      orxObject_CreateFromConfig("ScoreObject");

      But we really need to create it using an orxOBJECT variable:

      scoreObject = orxObject_CreateFromConfig("ScoreObject");

      And then declare the scoreObject at the top of the file:

      #include "orx.h"

      orxOBJECT *ufo;
      orxCAMERA *camera;
      orxOBJECT *scoreObject;
      int score = 0;

      Now it is possible to update the scoreObject using our score variable.
At the bottom of the Update() function, add the following code:

      if (scoreObject)
      {
        orxCHAR formattedScore[5];
        orxString_Print(formattedScore, "%d", score);
        orxObject_SetTextString(scoreObject, formattedScore);
      }

      First, the block will only execute if there is a valid scoreObject. If so, we create a five-character buffer (just enough for the maximum score of 1200 plus a null terminator), print the score value into it -- effectively converting an int into a string -- and finally set the text on the scoreObject using the orxObject_SetTextString function. Compile and run. Move the ufo around and collect the pickups to increase the score 150 points at a time.

      Winning the game 1200 is the maximum number of points that can be awarded, and reaching it will mean we've won the game. If we do win, we want a text label to appear above the ufo, saying "You Win!". Like the score object, we need to define a YouWinObject:

      [YouWinObject]
      Graphic = YouWinTextGraphic
      Position = (0, -60, 0.0)
      Scale = 2.0
      Smoothing = false

      Just like the camera, the YouWinObject is going to be parented to the ufo too. This will give the appearance that the YouWinObject is part of the ufo. The Scale is set to 2, and the Position is offset up the y axis so that it appears above the ufo. Next, the actual YouWinTextGraphic:

      [YouWinTextGraphic]
      Text = YouWinText
      Pivot = center

      And the text to render into the YouWinTextGraphic:

      [YouWinText]
      String = You Win!

      We'll test it by creating an instance of the YouWinObject, putting it into a variable, and then parenting it to the ufo in the Init() function:

      orxObject_CreateFromConfig("PickupObjects");
      scoreObject = orxObject_CreateFromConfig("ScoreObject");

      ufoYouWinTextObject = orxObject_CreateFromConfig("YouWinObject");
      orxObject_SetParent(ufoYouWinTextObject, ufo);

      Then the variable:

      #include "orx.h"

      orxOBJECT *ufo;
      orxCAMERA *camera;
      orxOBJECT *ufoYouWinTextObject;
      orxOBJECT *scoreObject;
      int score = 0;

      Compile and run. The "You Win!" text should appear above the ufo. Not bad, but the text is rotating with the ufo, much like the camera was before. We can ignore the rotation from the parent on this object too:

      [YouWinObject]
      Graphic = YouWinTextGraphic
      Position = (0, -60, 0.0)
      Scale = 2.0
      Smoothing = false
      IgnoreFromParent = rotation

      Re-run. Interesting. It certainly isn't rotating with the ufo, but its position is still being taken from the ufo's rotation. We need to ignore this as well:

      [YouWinObject]
      Graphic = YouWinTextGraphic
      Position = (0, -60, 0.0)
      Scale = 2.0
      Smoothing = false
      IgnoreFromParent = position.rotation rotation

      Good, that's working right. We want the "You Win!" text to appear once all pickups are collected. The YouWinObject is created on screen when the game starts, but we don't want it to appear yet -- only when we win.
      Therefore, we need to disable the object immediately after it is created, using the orxObject_Enable function:

      ufoYouWinTextObject = orxObject_CreateFromConfig("YouWinObject");
      orxObject_SetParent(ufoYouWinTextObject, ufo);
      orxObject_Enable(ufoYouWinTextObject, orxFALSE);

      Finally, all that is left to do is add a small check in the PhysicsEventHandler function to test the current score after each pickup collision:

      if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstRecipientObject, 0);
        orxObject_AddSound(pstSenderObject, "PickupSound");
        score += 150;
      }

      if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
      {
        orxObject_SetLifeTime(pstSenderObject, 0);
        orxObject_AddSound(pstRecipientObject, "PickupSound");
        score += 150;
      }

      if (orxObject_IsEnabled(ufoYouWinTextObject) == orxFALSE && score == 1200)
      {
        orxObject_Enable(ufoYouWinTextObject, orxTRUE);
      }

      We are checking two things: that the ufoYouWinTextObject is not yet enabled, using the orxObject_IsEnabled function, and that the score is 1200. If both conditions are met, we enable the ufoYouWinTextObject. Compile and run. Move the ufo around and collect all the pickups. When all are picked up and 1200 is reached, the "You Win!" text should appear above the ufo, signifying that the game is over and we have won. And that brings us to the end! We have created a simple and complete game with some configuration and minimal code. Congratulations! I hope you enjoyed working through making the ufo game using the Orx Portable Game Engine. Of course, there are many little extras you can add to give your game that little extra polish. So, for just a bit more eye candy, there are a couple more sections that you can follow along with if you wish.

      Shadows There are many ways to do shadows. One method is to use shaders... though this method is a little beyond this simple guide. Another method, when making your graphics, would be to add an alpha shadow underneath. This is a good method if your object does not need to rotate or flip. The method I will show you in this chapter is to have a separate shadow object as a child of an object. And in order to remain independent of rotations, the children will ignore rotations from the parent. First, a shadow graphic for the ufo, and one for the pickups:

      Save these both into the data/texture folder. Then create config for the ufo shadow:

      [UfoShadowGraphic]
      Texture = ufo-shadow.png
      Alpha = 0.3
      Pivot = center

      The only interesting part is the Alpha property: 0.1 would be almost completely see-through (or transparent), and 1.0 is not see-through at all, which is the regular default value for a graphic. 0.3 is fairly see-through.

      [UfoShadowObject]
      Graphic = UfoShadowGraphic
      Position = (20, 20, 0.05)

      Set the Position a bit to the right and downwards. Next, add the UfoShadowObject as a child of the UfoObject:

      [UfoObject]
      Graphic = UfoGraphic
      Position = (0, 0, -0.1)
      Body = UfoBody
      AngularVelocity = 200
      UseParentSpace = position
      SoundList = AppearSound
      ChildList = UfoShadowObject

      Run the project. The shadow child is sitting properly behind the ufo, but it rotates around the ufo until it ends up at the bottom left, which is not correct. We'll need to ignore the rotation from the parent:

      [UfoShadowObject]
      Graphic = UfoShadowGraphic
      Position = (20, 20, 0.05)
      IgnoreFromParent = position.rotation rotation

      Not only do we need to ignore the rotation of the ufo, we also need to ignore the rotation position of the ufo.
Re-run, and the shadow sits nice and stable to the bottom right of the ufo. Now to do the same with the pickup shadow:

      [PickupShadowGraphic]
      Texture = pickup-shadow.png
      Alpha = 0.3
      Pivot = center

      [PickupShadowObject]
      Graphic = PickupShadowGraphic
      Position = (20, 20, 0.05)
      IgnoreFromParent = position.rotation

      The only difference between this object and the ufo shadow is that we want the pickup shadow to take the rotation value from the parent, but not the position rotation. That way, the pickup shadow will remain at the bottom right of the pickup, but will rotate nicely in place. Now attach it as a child to the pickup object:

      [PickupObject]
      Graphic = PickupGraphic
      FXList = SlowRotateFX
      Body = PickupBody
      ChildList = PickupShadowObject

      Re-run, and the shadows should all be working correctly. And that really is it this time. I hope you made it this far and that you enjoyed this series of articles on the Orx Portable Game Engine. If you like what you see and would like to try out a few more things with Orx, head over to our learning wiki where you can follow more beginner guides, tutorials and examples. You can always get the latest news on Orx at the official website. If you need any help, you can get in touch with the community on gitter, or at the forum. They're a friendly, helpful bunch over there, always ready to welcome newcomers and assist with any questions.
18. Creating Pickup Objects

This is part 4 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and part 3 is here.

In our game, the player will be required to collect objects scattered around the playfield with the ufo. When the ufo collides with one, the object will disappear, giving the impression that it has been picked up. Begin by creating a config section for the graphic, and then the pickup object:

[PickupGraphic]
Texture = pickup.png
Pivot = center

[PickupObject]
Graphic = PickupGraphic

The graphic uses the image pickup.png, which is located in the project's data/object folder. It is pivoted in the center, which will be handy for a rotation effect later. Finally, the pickup object uses the pickup graphic. Nice and easy.

Our game will have eight pickup objects, so we need a simple way to place eight of them in various spots. We will employ a nice trick to handle this: an empty object, called PickupObjects, which holds eight copies of the pickup object as children. That way, wherever the parent is moved, the children move with it. Add that now:

[PickupObjects]
ChildList = PickupObject1 # PickupObject2 # PickupObject3 # PickupObject4 # PickupObject5 # PickupObject6 # PickupObject7 # PickupObject8
Position = (-400, -300, -0.1)

This object has no graphic. That's OK; it can still act like any other object. Notice the position: it is placed in the top left-hand corner of the screen, and all of the child objects PickupObject1 to PickupObject8 are positioned relative to the parent in that corner.

Now to create the actual children. We'll use the inheritance trick again, with PickupObject as the template:

[PickupObject1@PickupObject]
Position = (370, 70, -0.1)

[PickupObject2@PickupObject]
Position = (210, 140, -0.1)

[PickupObject3@PickupObject]
Position = (115, 295, -0.1)

[PickupObject4@PickupObject]
Position = (215, 445, -0.1)

[PickupObject5@PickupObject]
Position = (400, 510, -0.1)

[PickupObject6@PickupObject]
Position = (550, 420, -0.1)

[PickupObject7@PickupObject]
Position = (660, 290, -0.1)

[PickupObject8@PickupObject]
Position = (550, 150, -0.1)

Each of the PickupObject* sections uses the properties defined in PickupObject; the only difference between them is their Position property. The last thing to do is to create an instance of PickupObjects in code, in the Init() function:

orxObject_CreateFromConfig("PickupObjects");

Compile and run. Eight pickup objects should appear on screen. Looking good.

It would look even better if the pickups rotated slowly on screen, just to make them more interesting. This is very easy to achieve in Orx using FX, which can also be defined in config. FX allow you to affect an object's position, colour, rotation, scaling and more; even sound can be affected by FX. Change PickupObject by adding an FXList property:

[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX

Being an FXList, you can place several FX on an object at the same time; we will only have one. An FX is a collection of FX slots, and the FX slots are the actual effects themselves. Confused? Let's work through it. First, the FX:

[SlowRotateFX]
SlotList = SlowRotateFXSlot
Loop = true

This simply means: use an effect called SlowRotateFXSlot, and when it is done, do it again in a loop.
Next, the slot (the effect itself):

[SlowRotateFXSlot]
Type = rotation
StartTime = 0
EndTime = 10
Curve = linear
StartValue = 0
EndValue = 360

That's a few properties. First the Type, which is a rotation FX. The total time for the FX is 10 seconds, which comes from the StartTime and EndTime properties. The Curve type is linear, so the value changes evenly over time. And the value the curve produces over the 10-second period starts at 0 and climbs to 360. Re-run and notice the pickups now turning slowly for 10 seconds and then repeating.

Picking up the collectable objects

Time to make the ufo collide with the pickups. For this to work (just as for the walls), the pickups need a body, and the body needs to be set to collide with the ufo and vice versa. First, a body for the pickup template:

[PickupObject]
Graphic = PickupGraphic
FXList = SlowRotateFX
Body = PickupBody

Then the body section itself:

[PickupBody]
Dynamic = false
PartList = PickupPart

Just like the walls, the pickups are not dynamic. We don't want them bouncing and travelling around as a result of being hit by the ufo; they are static and need to stay in place when hit. Next, define the PickupPart:

[PickupPart]
Type = sphere
Solid = false
SelfFlags = pickup
CheckMask = ufo

The pickup is roughly round, so we're going with a spherical type. It is not solid: we want the ufo to be able to pass through it when they collide, without the pickup influencing the ufo's travel at all. The pickup is given the label pickup and will only collide with objects labelled ufo. The ufo must reciprocate this arrangement (just like a good date) by adding pickup to its body part's check mask:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
Friction = 1.2
CheckMask = wall # pickup

The pickups are static and non-solid, so nothing will visibly change yet, which makes this a little difficult to test right now. However, you can turn on the physics debug again to check the body parts:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run to see the body parts, then switch the debug display off again:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = false

To make a code event occur when the ufo hits a pickup, we need something new: a physics handler. The handler will run a function of our choosing whenever two objects collide. We can then test those two objects to see if they are the ones we are interested in, and run some code if they are. First, add the physics handler at the end of the Init() function, just after the clock registration:

orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);
orxEvent_AddHandler(orxEVENT_TYPE_PHYSICS, PhysicsEventHandler);

This registers a physics handler: should any physics event occur (like two objects colliding), a function called PhysicsEventHandler will be executed.
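One small detail worth noting: because Init() now refers to PhysicsEventHandler before the function is defined, the compiler needs to have seen a declaration of it first. A minimal sketch, assuming the handler is defined further down in ufo.cpp (place this near the top of the file, next to the ufo variable):

/* Forward declaration: lets Init() reference the handler defined later in the file */
orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent);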
Our new function will start as:

orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;

    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);

    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);

    orxLOG("Object %s has collided with %s", senderName, recipientName);
  }

  return orxSTATUS_SUCCESS;
}

Every handler function is passed an orxEVENT structure, which contains a lot of information about the event. The eID is tested to ensure the physics event that occurred is orxPHYSICS_EVENT_CONTACT_ADD, which indicates that two objects have collided. If so, two orxOBJECT variables are declared and set from the orxEVENT structure's hSender and hRecipient handles. Next, two orxSTRINGs are set by getting the names of the objects with the orxObject_GetName function. The name returned is the section name from the config; potential candidates are UfoObject, BackgroundObject, and PickupObject1 to PickupObject8. The names are then sent to the console. Finally, the function returns orxSTATUS_SUCCESS, which an event handler is required to return.

Compile and run. If you drive the ufo into a pickup or the edge of the playfield, a message is displayed on the console, so we know that everything is working.

Next is to add code that removes a pickup from the playfield when the ufo collides with it. Usually we would compare the name of one object to another and act on a match. In this case, however, the pickups all have different names: PickupObject1 through PickupObject8. So instead we check whether the name contains "PickupObject", which matches any of them. In fact, we don't need to test the "other" object in the colliding pair at all: the ufo is the only dynamic object, and everything else on screen is static, so if anything collides with a PickupObject*, it has to be the ufo.

First, remove the orxLOG line; we don't need it anymore. Change the function to become:

orxSTATUS orxFASTCALL PhysicsEventHandler(const orxEVENT *_pstEvent)
{
  if (_pstEvent->eID == orxPHYSICS_EVENT_CONTACT_ADD)
  {
    orxOBJECT *pstRecipientObject, *pstSenderObject;

    /* Gets colliding objects */
    pstRecipientObject = orxOBJECT(_pstEvent->hRecipient);
    pstSenderObject = orxOBJECT(_pstEvent->hSender);

    const orxSTRING recipientName = orxObject_GetName(pstRecipientObject);
    const orxSTRING senderName = orxObject_GetName(pstSenderObject);

    if (orxString_SearchString(recipientName, "PickupObject") != orxNULL)
    {
      orxObject_SetLifeTime(pstRecipientObject, 0);
    }

    if (orxString_SearchString(senderName, "PickupObject") != orxNULL)
    {
      orxObject_SetLifeTime(pstSenderObject, 0);
    }
  }

  return orxSTATUS_SUCCESS;
}

You can see the new code after the object names are retrieved. If an object's name contains "PickupObject", the ufo must have collided with it, so we need to kill it off. The safest way to do this is to set the object's lifetime to 0, which ensures the object is removed instantly and deleted by Orx in a safe manner. Notice that the test is performed twice: once in case the pickup object is the recipient, and again in case it is the sender. We need to check and handle both.
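Since the sender and recipient branches are identical apart from which object they act on, the check could be factored into a small helper. A minimal sketch (the HandlePickup name is hypothetical, not part of the tutorial code):

/* Hypothetical helper: removes an object if its config name contains "PickupObject" */
static void HandlePickup(orxOBJECT *_pstObject)
{
  if (orxString_SearchString(orxObject_GetName(_pstObject), "PickupObject") != orxNULL)
  {
    /* A lifetime of 0 lets Orx delete the object safely */
    orxObject_SetLifeTime(_pstObject, 0);
  }
}

The handler body would then reduce to HandlePickup(pstRecipientObject); HandlePickup(pstSenderObject);. Note that this refactor becomes less attractive in Part 5, where the score and pickup sound act on the other object of the colliding pair.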
Compile and run. Move the ufo over the pickups and they should disappear nicely. We'll leave it there for the moment. In the final part, Part 5, we'll cover adding sounds, a score, and winning the game.
19. Collisions

This is part 3 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here, and Part 2 is here.

There is one last requirement for the collision to occur: we need to tell the physics system who can collide with whom. This is done with flags and masks. Make a change to the ufo's body part by adding SelfFlags and CheckMask:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
CheckMask = wall

SelfFlags is the label you assign to an object, and CheckMask is the list of labels that object can collide with. These labels don't have to match the names you give objects, but keeping them similar helps you stay clean and organised. So in the config above we are saying: the UfoBodyPart is a "ufo", and it is expected to collide with any body part marked as a "wall".

But we haven't marked anything as a wall yet, so let's do it now. We only need to add it to WallTopPart:

[WallTopPart]
Type = box
Solid = true
SelfFlags = wall
CheckMask = ufo
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)

Remember that the other three wall parts inherit the values from WallTopPart, so each now carries the label wall and will collide with any body part that carries the label ufo.

Re-run, press the left arrow key, and drive the ufo into the left wall. It collides! And it stops. Now that the collision is working, we can flesh out the rest of the keyboard controls and test all four walls:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
  if (ufo)
  {
    const orxFLOAT FORCE = 0.8;
    orxVECTOR rightForce = { FORCE, 0, 0 };
    orxVECTOR leftForce = { -FORCE, 0, 0 };
    orxVECTOR upForce = { 0, -FORCE, 0 };
    orxVECTOR downForce = { 0, FORCE, 0 };

    if (orxInput_IsActive("GoLeft"))
    {
      orxObject_ApplyForce(ufo, &leftForce, orxNULL);
    }
    if (orxInput_IsActive("GoRight"))
    {
      orxObject_ApplyForce(ufo, &rightForce, orxNULL);
    }
    if (orxInput_IsActive("GoUp"))
    {
      orxObject_ApplyForce(ufo, &upForce, orxNULL);
    }
    if (orxInput_IsActive("GoDown"))
    {
      orxObject_ApplyForce(ufo, &downForce, orxNULL);
    }
  }
}

Now is a good time to turn off the physics debug as we did earlier. Compile and run. Try all four keys: you should be able to move the ufo around the screen, and it collides with each wall.

The ufo is a little boring in that it doesn't spin when colliding with a wall. We need to ensure the UfoBody is not using fixed rotation. While this value defaults to false when not supplied, setting it explicitly makes the config more readable:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
FixedRotation = false

The active ingredient here is ensuring that both the wall body part and the ufo body part have a little friction applied. That way, when they collide, they drag against each other and produce some spin:

[UfoBodyPart]
Type = sphere
Solid = true
SelfFlags = ufo
CheckMask = wall
Friction = 1.2

[WallTopPart]
Type = box
Solid = true
SelfFlags = wall
CheckMask = ufo
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)
Friction = 1.2

Re-run that and give it a try. Drive against a wall at an angle to get some spin on the ufo.

The next thing to notice is that neither the movement of the ufo nor its spin ever slows down; nothing is damping them once the ufo leaves the wall. We'll deal with the spin first. By adding some AngularDamping to the UfoBody, the spin will slow down over time:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
AngularDamping = 2
FixedRotation = false

Re-run and check the spin.
The ufo's spin should now slow down after it leaves the wall. Next, the damping on the movement, which is done with LinearDamping on the UfoBody:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart
AngularDamping = 2
FixedRotation = false
LinearDamping = 5

Re-run, and the ufo's speed now drops off after you release the arrow keys. But it is slower overall as well, which is not quite what we want. You can increase the FORCE value in code (in the Update function in ufo.cpp) to compensate:

const orxFLOAT FORCE = 1.8;

Compile and run. The speed should now be more like what we expect.

It would be nice for the ufo to already be spinning a little when the game starts. For this, add a little AngularVelocity:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody
AngularVelocity = 200

Run this, and the ship will have a small amount of spin at the start, until the AngularDamping on the ufo body slows it down again.

Following the UFO with the camera

While we could simply move the ufo around with the keys over a fixed background, it is a more pleasant experience to keep the ufo fixed and have the screen scroll around it instead. This effect can be achieved by parenting the camera to the ufo, so that wherever the ufo goes, the camera goes.

Currently, our project is set up so that the viewport has a camera configured on it, but the camera is not available to our code. We need the camera in a variable so that it can be parented to the ufo object. To fix this, in the Init() function, extract the camera from the viewport by changing this line:

orxViewport_CreateFromConfig("Viewport");

to:

orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
camera = orxViewport_GetCamera(viewport);

And because the camera variable isn't defined yet, do so at the top of the code:

#include "orx.h"

orxOBJECT *ufo;
orxCAMERA *camera;

Now it is time to parent the camera to the ufo in the Init() function, using the orxCamera_SetParent function:

ufo = orxObject_CreateFromConfig("UfoObject");
orxCamera_SetParent(camera, ufo);

Compile and run. Whoa, hang on: the whole screen now rotates along with the ufo, and keeps rotating when the ufo bounces off the walls. See how the camera is a child of the ufo now? Not only does the camera move with the ufo, it rotates with it as well. We certainly want it to move with the ufo, but it would be nice to ignore the rotation from the parent. Add the IgnoreFromParent property to the MainCamera section:

[MainCamera]
FrustumWidth = 1024
FrustumHeight = 720
FrustumFar = 2.0
FrustumNear = 0.0
Position = (0.0, 0.0, -1.0)
IgnoreFromParent = rotation

Re-run. That's got it fixed. Now when you move around, the playfield appears to scroll rather than the ufo moving, which makes for a more dramatic and interesting effect.

In Part 4, we will give the ufo something to do: collect several pickups.
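As a checkpoint before moving on, here is roughly how the Init() function might look by the end of this part. This is a sketch assembled from the snippets above, not the tutorial's verbatim source: the surrounding template boilerplate (config loading, the Run and Exit functions, orx_Execute) is assumed and omitted, and the creation line for BackgroundObject is assumed to carry over from Part 1:

orxOBJECT *ufo;
orxCAMERA *camera;

orxSTATUS orxFASTCALL Init()
{
  /* Creates the viewport and grabs its camera so it can be parented below */
  orxVIEWPORT *viewport = orxViewport_CreateFromConfig("Viewport");
  camera = orxViewport_GetCamera(viewport);

  /* Creates the playfield and the player's ufo */
  orxObject_CreateFromConfig("BackgroundObject");
  ufo = orxObject_CreateFromConfig("UfoObject");

  /* The camera follows the ufo; IgnoreFromParent in MainCamera drops its rotation */
  orxCamera_SetParent(camera, ufo);

  /* Ties the Update function to the core clock */
  orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);

  return orxSTATUS_SUCCESS;
}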
20. The UFO

This is part 2 of a series on creating a game with the Orx Portable Game Engine. Part 1 is here.

We have a playfield, and now we need a UFO character for the player to control. The first step is to create the configuration for the ufo object in ufo.ini:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)

This indicates that the UfoObject should use a graphic called UfoGraphic. Its position will be centered in the playfield with (x, y) = (0, 0). The -0.1 is the Z-axis value, which places it above the BackgroundObject, whose Z-axis value is 0. Then the UfoGraphic that the UfoObject needs:

[UfoGraphic]
Texture = ufo.png
Pivot = center

Unlike the background object, our ufo object needs to be assigned to a variable, which makes it possible to affect the ufo from code. Create the variable for our ufo object just under the orx.h include line:

#include "orx.h"

orxOBJECT *ufo;

And in the Init() function, create an instance of the ufo object with:

ufo = orxObject_CreateFromConfig("UfoObject");

Compile and run. You'll see a ufo object in front of the background. Excellent. Time to move on to something a little more fun: moving the ufo.

Controlling the UFO

The ufo will be controlled with the cursor arrow keys on the keyboard, and it will be moved by applying forces, so physics will be set up in the project. We will also use a clock to call an update function regularly; this function will read and respond to key presses.

Defining Direction Keys

Defining the keys is very straightforward. In the config file, expand the MainInput section in ufo.ini by adding the four cursor keys:

[MainInput]
KEY_ESCAPE = Quit
KEY_UP = GoUp
KEY_DOWN = GoDown
KEY_LEFT = GoLeft
KEY_RIGHT = GoRight

Each key is given a label name, like GoUp or GoDown; these labels are available in our code to test against. The next step is to create an update callback function in our code where the key presses are checked:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
}

And in order to tie this function to a clock (the clock will execute this function over and over), add the following to the bottom of the Init() function:

orxClock_Register(orxClock_FindFirst(orx2F(-1.0f), orxCLOCK_TYPE_CORE), Update, orxNULL, orxMODULE_ID_MAIN, orxCLOCK_PRIORITY_NORMAL);

That looks scary and intimidating, but the only parameter that matters to you here is Update: it tells the existing core clock to continually call our Update function. You can of course use any function name you like here, as long as the function exists.

Let's test a key to ensure that our callback is working. Add the following code to the Update function:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
  if (ufo)
  {
    if (orxInput_IsActive("GoLeft"))
    {
      orxLOG("LEFT PRESSED!");
    }
  }
}

Every time Update runs, ufo is tested to ensure it exists, and then the input system is checked for the label "GoLeft" (whether it is active, i.e. pressed). Remember how GoLeft is bound to KEY_LEFT in the MainInput config section? If the condition is true, "LEFT PRESSED!" is sent to the console output window while the key is pressed or held down. Soon we'll replace the orxLOG line with a function that applies force to the ufo, but before that we need to add physics to the ufo.

Compile and run. Press the left arrow key and take note of the console window.
Every time you press or hold the key, the message is printed. Good: key presses are working.

Physics

In order to affect the ufo using forces, physics needs to be enabled. Begin by adding a Physics config section and setting Gravity:

[Physics]
Gravity = (0, 980, 0)

For an object in Orx to be affected by physics, it needs both a dynamic body and at least one body part. Give the ufo a body with the Body property:

[UfoObject]
Graphic = UfoGraphic
Position = (0, 0, -0.1)
Body = UfoBody

Next, create the UfoBody section and define its PartList property:

[UfoBody]
Dynamic = true
PartList = UfoBodyPart

The body is set to Dynamic, which means it is affected by gravity and collisions. A body needs at least one part, so we define the UfoBodyPart:

[UfoBodyPart]
Type = sphere
Solid = true

The body part Type is set to sphere, which automatically sizes itself around the object, and the part is solid so that if it collides with anything, it will not pass through it.

Compile and run. The ufo falls through the floor. This is because of the gravity setting of 980 in the y-axis, which simulates real-world gravity. Our game is a top-down game, so change the Gravity property to:

[Physics]
Gravity = (0, 0, 0)

Re-run (no compile needed) and the ufo should remain in the centre of the screen.

The Physics section has another handy property for visually testing physics bodies on objects: ShowDebug. Add this property set to true:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run, and you will see a pinkish sphere outline automatically sized around the ufo object. For now we'll turn that off again; you can do this by changing the ShowDebug value to false, adding a ; comment in front of the line, or simply deleting the line. We'll set ShowDebug to false:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = false

Let's add some force to the ufo when the left cursor key is pressed. Change the code in the Update function to:

void orxFASTCALL Update(const orxCLOCK_INFO *_pstClockInfo, void *_pContext)
{
  if (ufo)
  {
    const orxFLOAT FORCE = 0.8;
    orxVECTOR leftForce = { -FORCE, 0, 0 };

    if (orxInput_IsActive("GoLeft"))
    {
      orxObject_ApplyForce(ufo, &leftForce, orxNULL);
    }
  }
}

The orxObject_ApplyForce function takes an orxVECTOR pointing left and applies it to the ufo object. Compile and re-run. If you press and release the left arrow key, the ufo moves to the left. If you hold the key down, the ufo keeps gaining speed and drifts out the left-hand side of the screen. Even a single quick tap will eventually carry the ufo off screen: there is no friction yet to slow it down, and no barriers to stop it leaving the screen.

Barrier Around The Border

Even though the background looks as if it has a border, it is really only a picture. In order to create a barrier for the ufo, we need to line the edges with some body parts. This means the background object will also be given a body, with four body parts, one for each wall. Start by adding a body to the object:

[BackgroundObject]
Graphic = BackgroundGraphic
Position = (0, 0, 0)
Body = WallBody

And then the body itself:

[WallBody]
Dynamic = false
PartList = WallTopPart # WallRightPart # WallBottomPart # WallLeftPart

This is different from the ufo body: this body is not dynamic, meaning it is a static body, one that is not affected by gravity. But dynamic objects can still collide with it.
Also, this body has four parts, unlike the ufo's, which had only one. Start with WallTopPart:

[WallTopPart]
Type = box
Solid = true
TopLeft = (-400, -300, 0)
BottomRight = (400, -260, 0)

This part's type is box. It is set to solid for collisions, i.e. so that a dynamic object can collide with it but not pass through it. The box is stretched to cover the region from (-400, -300) to (400, -260).

At this point, it is a good idea to turn the physics debug back on to check our work:

[Physics]
Gravity = (0, 0, 0)
ShowDebug = true

Re-run the project. The top wall region should cover the top barrier squares.

Great. Next, the right-hand side. Rather than copying all the same values, we'll reuse some from the top wall:

[WallRightPart@WallTopPart]
TopLeft = (360, -260, 0)
BottomRight = (400, 260, 0)

Notice the @WallTopPart in the section name? It means: copy all the values from WallTopPart, but let any properties set in WallRightPart take priority. So the Type and Solid properties come from WallTopPart, while our own values are used for TopLeft and BottomRight in the WallRightPart section. This is called "Section Inheritance", and it will come in very handy soon when we tweak values or add new properties to all four wall parts.

Re-run the project, and there will now be two walls. Define the last two walls using the same technique:

[WallBottomPart@WallTopPart]
TopLeft = (-400, 260, 0)
BottomRight = (400, 300, 0)

[WallLeftPart@WallTopPart]
TopLeft = (-400, -260, 0)
BottomRight = (-360, 260, 0)

Now there are four walls for the ufo to collide with. Re-run and try moving the ufo left into a wall. Oops, it doesn't work: the ufo still passes straight through. There is one last requirement for a collision to occur: we need to tell the physics system who can collide with whom. We'll cover that in Part 3.

    GameDev Contractors

1. We are a full-service 3D animation and art studio with over 10 years of experience in animation, game development, training simulators and other related industries. We are proud of our 3D team, consisting of some of the best Ukrainian 3D professionals, with extensive experience on AAA game titles, animated movies and cartoons. Our artists have backgrounds at gaming companies such as Crytek, Ubisoft and 4A Games, as well as big animation studios. Please find more info about us at tavo-art.com
2. Hi, my name is Clint, and I run a small design studio specializing in 2D game illustration and animation. If you're looking for a reliable studio to finish your game project, I am your guy. I have worked with clients from all corners of the world for over 18 years. My style is fun and colorful. I can help with most of your design needs, from character development, UI, scenes, backgrounds and menus through to final sprite animations, in any format your project requires, as well as any marketing material such as banners, ads or YouTube videos. I like to work closely with my clients to get to know the goals and ideas that will help you achieve success. We can discuss your project's needs and I will work out pricing accordingly. I look forward to hearing about your next crazy project.
Contact me: Website - http://www.clintsuttonillustration.co.za
Email - studio@clintsuttonillustration.co.za
3. Freelance 3D Modeler and Texture Artist with several years of experience in the movie (The Expendables, Survivor), gaming and 3D printing industries. I have worked on games that have been published on Steam. Available for freelance work - contact me at nickeydimchev3d@gmail.com Portfolio: nickeydimchev3d.myportfolio.com
    4. Hi, I'm an industry professional with a decade of freelancing experience. I specialize in projects that seek further funding through the creation of a demo/prototype/POC and have a limited budget. I've worked on a number of highlight projects in the past and deliberately choose to focus on indie projects now. Past projects include: Resident Evil: Revelations II, Yu-Gi-Oh BAM!, Space Engineers and Seas of Fortune (IndieDB's Players Choice 2017). If you have a great idea but are lacking a developer that can put it together, look no further! I'll even get you through publishing! https://www.linkedin.com/in/michelmony/ Cheers!
5. Hi. Our team handles all kinds of graphics outsourcing. We work with any budget and are open to cooperation with indie developers. If you want high-quality graphics for your games, you are welcome. We specialize in:
- GUI design
- 2D characters with animations
- 2D locations
Our portfolio can be found at: https://kalitrom.artstation.com Contact us: kalitrom@gmail.com