
Blogs

Featured Entries

  • Components and Messages in Unity

    By nbrosz

    Up to now, I have had a tendency toward monolithic classes when developing in Unity. My experience has always been with the typical object-oriented approach (with the exception of when I was developing using batari Basic), but I've been trying to train myself toward small, reusable components with focused purposes. I've had some good success lately, breaking larger scripts into smaller ones and using interfaces as a means of communicating between components where possible:

    public class ReportAttack : MonoBehaviour, IDamageable
    {
        public Team team;

        void Start()
        {
            team = GetComponent<Team>();
        }

        void IDamageable.TakeDamage(MonoBehaviour from, DamageType type, float amount)
        {
            var attackerTeam = from.GetComponent<Team>();
            // Only report attacks from a component on a different team.
            if (team && attackerTeam && team.team != attackerTeam.team)
                Debug.Log(gameObject.name + " says: I've been attacked by " + from.gameObject
                    + " on team " + attackerTeam.team
                    + " with " + System.Enum.GetName(typeof(DamageType), type)
                    + " (" + (int)type + ")");
        }
    }

    While I've been fairly satisfied with the use of interfaces for calls to multiple or unknown components, I recall fondly the rapid development and flexible approach provided by utilizing messages in my 2017 Global Game Jam submission, Metalmancer. However, since Unity's message passing uses reflection (or at least probably does, given that it takes the string name of the event to call), it does not perform particularly well. With that in mind, I hoped to make my own alternative messaging system, used much like the existing messaging system but with delegates and event handlers under the hood. This was the result.

    While I felt that I succeeded in my goal of providing a useful interface that hid the reflection-based old messaging system, I was crestfallen once I began running tests. On average, I see a performance increase of about 33% over Unity's built-in SendMessage, with the complication that all components using the new system must inherit from the new MessagingBehavior abstract class, rather than directly from MonoBehaviour. Still, given that a direct call (as would be the case using an interface) is still about ten times faster, I wasn't particularly encouraged by these results. On the other hand, as tomvds said in the Unity forums:

    Still, stubborn as I am, it'll be hard to convince myself to use even my own message passing architecture in lieu of more efficient interfaces. Or maybe I should just use an adaptation of wmiller's Events system. Or I should just stop worrying about it.
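    The post links out to the implementation rather than inlining it, but the core idea (replacing reflection-based SendMessage with a dictionary of delegates) can be sketched roughly as below. This is an illustration only, not the author's actual MessagingBehavior; the names RegisterMessage and ReceiveMessage are invented here:

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    // Components inherit from this instead of MonoBehaviour and register
    // handlers once, so dispatch is a dictionary lookup plus a delegate
    // call instead of reflection.
    public abstract class MessagingBehavior : MonoBehaviour
    {
        private readonly Dictionary<string, Action<object>> _handlers =
            new Dictionary<string, Action<object>>();

        protected void RegisterMessage(string name, Action<object> handler)
        {
            _handlers[name] = handler;
        }

        public void ReceiveMessage(string name, object arg = null)
        {
            Action<object> handler;
            if (_handlers.TryGetValue(name, out handler))
                handler(arg);
        }
    }

    A component would call RegisterMessage("TakeDamage", ...) in Awake(), and a sender would call GetComponent<MessagingBehavior>().ReceiveMessage("TakeDamage", info). The string lookup and boxing still cost something, which is consistent with the author's finding that this approach lands well short of a direct interface call.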
    View the full article
    • 1 comment
    • 516 views
  • Games Look Bad, Part 1: HDR and Tone Mapping

    By Promit

    This is Part 1 of a series examining techniques used in game graphics and how those techniques fail to deliver a visually appealing end result. See Part 0 for a more thorough explanation of the idea behind it.

    High dynamic range. First experienced by most consumers in late 2005, with Valve's Half Life 2: Lost Coast demo. Largely faked at the time due to technical limitations, but it laid the groundwork for something we take for granted in nearly every blockbuster title. The contemporaneous reviews were nothing short of gushing. We've been busy making a complete god awful mess of it ever since.

    Let's review, very quickly. In the real world, the total contrast ratio between the brightest highlights and darkest shadows during a sunny day is on the order of 1,000,000:1. We would need 20 bits of just luminance to represent those illumination ranges, before even including color in the mix. A typical DSLR can record 12-14 bits (16,000:1 in ideal conditions). A typical screen can show 8 (curved to 600:1 or so). Your eyes... well, it's complicated. Wikipedia claims 6.5 (100:1) static. Others disagree.

    Graphics programmers came up with HDR and tone mapping to solve the problem. Both film and digital cameras have this same issue, after all. They have to take enormous contrast ratios at the input, and generate sensible images at the output. So we use HDR to store the giant range for lighting computations, and tone maps to collapse the range to screen. The tone map acts as our virtual "film", and our virtual camera is loaded with virtual film to make our virtual image. Oh, and we also throw in some eye-related effects that make no sense in cameras and don't appear in film for good measure. Of course we do.

    And now, let's marvel at the ways it goes spectacularly wrong. In order: Battlefield 1, Uncharted: Lost Legacy, Call of Duty: Infinite Warfare, and Horizon Zero Dawn. HZD is a particular offender in the "terrible tone map" category and it's one I could point to all day long. And so we run head first into the problem that plagues games today and will drive this series throughout: at first glance, these are all very pretty 2017 games and there is nothing obviously wrong with the screenshots. But all of them feel videogamey and none of them would pass for a film or a photograph. Or even a reasonably good offline render. Or a painting. They are instantly recognizable as video games, because only video games try to pass off these trashy contrast curves as aesthetically pleasing. These images look like a kid was playing around in Photoshop and maxed the Contrast slider. Or maybe that kid was just dragging the Curves control around at random.

    The funny thing is, this actually has happened to movies before. Hahaha. Look at that Smaug. He looks terrible. Not terrifying. This could be an in-game screenshot any day. Is it easy to pick on Peter Jackson's The Hobbit? Yes, it absolutely is. But I think it serves to highlight that while technical limitations are something we absolutely struggle with in games, there is a fundamental artistic component here that is actually not that easy to get right even for film industry professionals with nearly unlimited budgets.

    Allow me an aside here into the world of film production.
    In 2006, the founder of Oakley sunglasses decided the movie world was disingenuous in their claims of what digital cameras could and could not do, and set out to produce a new class of cinema camera with higher resolution, higher dynamic range, higher everything than the industry had - one that would exceed the technical capabilities of film in every regard. The RED One 4K was born, largely accomplishing its stated goals and being adopted almost immediately by one Peter Jackson. Meanwhile, a cine supply company founded in 1917 called Arri decided they don't give a damn about resolution, and shipped the 2K Arri Alexa camera in 2010. How did it go? 2015 Oscars: four of the five nominees in the cinematography category were photographed using the ARRI Alexa. Happy belated 100th birthday, Arri.

    So what gives? Well, in the days of film there was a lot of energy expended on developing the look of a particular film stock. It's not just chemistry; color science and artistic qualities played heavily into designing film stocks, and good directors/cinematographers would (and still do) choose particular films to get the right feel for their productions. RED focused on exceeding the technical capabilities of film, leaving the actual color rendering largely in the hands of the studio. But Arri? Arri focused on achieving the distinctive feel and visual appeal of high quality films. They better understood that even in the big budget world of motion pictures, color rendering and luminance curves are extraordinarily difficult to nail. They perfected that piece of the puzzle and it paid off for them.

    Let's bring it back to games. The reality is, the tone maps we use in games are janky, partly due to technical limitations. We're limited to a 1D luminance response where real film produces both hue and saturation shifts. The RGB color space is a bad choice to be doing this in the first place. And because nobody in the game industry has an understanding of film chemistry, we've all largely settled on blindly using the same function that somebody somewhere came up with. It was Reinhard in years past, then it was Hable, now it's ACES RRT. And it's stop #1 on the train of "Why does every game this year look exactly the goddamn same?"

    The craziest part is we're now at the point of real HDR televisions showing game renders with wider input ranges. Take this NVIDIA article, which sees the real problem and walks right past it. The ACES tone map is destructive to chroma. Then they post a Nikon DSLR photo of a TV in HDR mode as a proxy for how much true HDR improves the viewing experience. Which is absolutely true - but then why does the LDR photo of your TV look so much better than the LDR tone map image? There's another tone map in this chain which nobody thought to examine: Nikon's. They have decades of expertise in doing this. Lo and behold, their curve makes a mockery of the ACES curve used in the reference render. Wanna know why that is? It's because the ACES RRT was never designed to be an output curve in the first place. Its primary design goal is to massage differences between cameras and lenses used on set so they match better. You're not supposed to send it to screen! It's a preview/baseline curve which is supposed to receive a film LUT and color grading over top of it. "Oh, but real games do use a post process LUT color grade!" Yeah, and we screwed that up too.
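    For concreteness, the two older operators named above are short enough to write down. Here is a minimal sketch of Reinhard and of Hable's filmic curve, using the constants Hable published for Uncharted 2 (the ACES RRT is a much larger system and is not reproduced here):

    using System;

    public static class ToneMapCurves
    {
        // Classic Reinhard: compresses [0, infinity) into [0, 1).
        public static float Reinhard(float x)
        {
            return x / (1.0f + x);
        }

        // Hable's "Uncharted 2" filmic operator with his published constants:
        // shoulder, linear, and toe sections controlled by A..F.
        private static float HableCurve(float x)
        {
            const float A = 0.15f, B = 0.50f, C = 0.10f,
                        D = 0.20f, E = 0.02f, F = 0.30f;
            return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
        }

        public static float Hable(float x, float linearWhite = 11.2f)
        {
            // Normalize so that the chosen "white point" maps to 1.0.
            return HableCurve(x) / HableCurve(linearWhite);
        }
    }

    Note that applying either curve per RGB channel is exactly the "1D luminance response" limitation mentioned above: any hue or saturation shift is an accidental by-product rather than a designed one, which is part of why these curves desaturate the way the rest of this piece complains about.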
We don’t have the technical capability to run real film industry LUTs in the correct color spaces, we don’t have good tools to tune ours, they’re stuck doing double duty for both “filmic look” as well as color grading, the person doing it doesn’t have the training background, and it’s extraordinary what an actual trained human can do after the fact to fix these garbage colors. Is he cheating by doing per-shot color tuning that a dynamic scene can’t possibly accomplish? Yes, obviously. But are you really going to tell me that any of these scenes from any of these games look like they are well balanced in color, contrast, and overall feel? Of course while we’re all running left, Nintendo has always had a fascinating habit of running right. I can show any number of their games for this, but Zelda: Breath of the Wild probably exemplifies it best when it comes to HDR.  No HDR. No tone map. The bloom and volumetrics are being done entirely in LDR space. (Or possibly in 10 bit. Not sure.) Because in Nintendo’s eyes, if you can’t control the final outputs of the tone mapped render in the first place, why bother? There’s none of that awful heavy handed contrast. No crushed blacks. No randomly saturated whites in the sunset, and saturation overall stays where it belongs across the luminance range. The game doesn’t do that dynamic exposure adjustment effect that nobody actually likes. Does stylized rendering help? Sure. But you know what? Somebody would paint this. It’s artistic. It’s aesthetically pleasing. It’s balanced in its transition from light to dark tones, and the over-brightness is used tastefully without annihilating half the sky in the process. Now I don’t think that everybody should walk away from HDR entirely. (Probably.) There’s too much other stuff we’ve committed to which requires it. But for god’s sake, we need to fix our tone maps. We need to find curves that are not so aggressively desaturating. We need curves that transition contrast better from crushed blacks to mid-tones to blown highlights. LUTs are garbage in, garbage out and they cannot be used to fix bad tone maps. We also need to switch to industry standard tools for authoring and using LUTs, so that artists have better control over what’s going on and can verify those LUTs outside of the rendering engine. In the meantime, the industry’s heavy hitters are just going to keep releasing this kind of over-contrasty garbage. Before I finish up, I do want to take a moment to highlight some games that I think actually handle HDR very well. First up is Resident Evil 7, which benefits from a heavily stylized look that over-emphasizes contrast by design. That’s far too much contrast for any normal image, but because we’re dealing with a horror game it’s effective in giving the whole thing an unsettling feel that fits the setting wonderfully. The player should be uncomfortable with how the light and shadows collide. This particular scene places the jarring transition right in your face, and it’s powerful. Next, at risk of seeming hypocritical I’m going to say Deus Ex: Mankind Divided (as well as its predecessor). The big caveat with DX is that some scenes work really well. The daytime outdoors scenes do not. The night time or indoor scenes that fully embrace the surrealistic feeling of the world, though, are just fantastic. Somehow the weird mix of harsh blacks and glowing highlights serves to reinforce the differences between the bright and dark spots that the game is playing with thematically throughout. 
It’s not a coincidence that Blade Runner 2049 has many similarities. Still too much contrast though. Lastly, I’m going to give props to Forza Horizon 3.   Let’s be honest: cars are “easy mode” for HDR. They love it. But there is a specific reason this image works so well. It is low contrast. Nearly all of it lives in the mid-tones, with only a few places wandering into deep shadow (notably the trees) and almost nothing in the bright highlights. But the image is low contrast because cars themselves tend to use a lot of black accents and dark regions which are simply not visible when you crush the blacks as we’ve seen in other games. Thus the toe section of the curve is lifted much more than we normally see. Similarly, overblown highlights mean whiting out the car in the specular reflections, which are big and pretty much always image based lighting for cars. It does no good to lose all of that detail, but the entire scene benefits from the requisite decrease in contrast. The exposure level is also noticeably lower, which actually leaves room for better mid-tone saturation. (This is also a trick used by Canon cameras, whose images you see every single day.) The whole image ends up with a much softer and more pleasant look that doesn’t carry the inherent stress we find in the images I criticized at the top. If we’re looking for an exemplar for how to HDR correctly in a non-stylized context, this is the model to go by. Where does all this leave us? With a bunch of terrible looking games, mostly. There are a few technical changes we need to make right up front, from basic decreases in contrast to simple tweaks to the tone map to improved tools for LUT authoring. But as the Zelda and Forza screenshots demonstrate, and as the Hobbit screenshot warns us, this is not just a technical problem. Bad aesthetic choices are being made in the output stages of the engine that are then forced on the rest of the creative process. Engine devs are telling art directors that their choices in tone maps are one of three and two are legacy options. Is it bad art direction or bad graphics engineering? It’s both, and I suspect both departments are blaming the other for it. The tone map may be at the end of graphics pipeline, but in film production it’s the first choice you make. You can’t make a movie without loading film stock in the camera, and you only get to make that choice once (digital notwithstanding). Don’t treat your tone map as something to tweak around the edges when balancing the final output LUT. Don’t just take someone else’s conveniently packaged function. The tone map’s role exists at the beginning of the visual development process and it should be treated as part of the foundation for how the game will look and feel. Pay attention to the aesthetics and visual quality of the map upfront. In today’s games these qualities are an afterthought, and it shows. UPDATE: User “vinistois” on HackerNews shared a screenshot from GTA 5 and I looked up a few others. It’s very nicely done tone mapping. Good use of mid-tones and contrast throughout with great transitions into both extremes. You won’t quite mistake it for film, I don’t think, but it’s excellent for something that is barely even a current gen product. This is proof that we can do much better from an aesthetic perspective within current technical and stylistic constraints. Heck, this screenshot isn’t even from a PC – it’s the PS4 version.

    View the full article
    • 0 comments
    • 979 views
  • Day 33 of 100 Days of VR: Implementing the High Score System

    By Josh Chang

    Side Note: I've been feeling sick recently and progress has been slow, but I'm feeling better and ready to get back to it!

    Welcome back to day 33! Yesterday, we looked at 3 ways we can save and load data in Unity: PlayerPrefs, data serialization, and saving our data to a server. Today we're going to use what we learned yesterday to save our score in our simple FPS. Here's the goal for today:

    • Implement our SaveManager to help us save the score
    • Update our UI to show our high score when the game is over

    So, let's get started!

    Step 1: Saving our Data

    Of the methods we've talked about, I'm going to use PlayerPrefs to save our data. While we can technically use PlayerPrefs anywhere we want in our scripts, it's better to create a manager that centralizes all our saving/loading work, so that when we need to make changes we don't have to comb through every script to fix things.

    Step 1.1: Creating our SaveManager

    The first step to saving our score is to create our SaveManager script and attach it to our GameManager game object.

    • Select our GameManager game object in the hierarchy.
    • In the Inspector, click Add Component and create a new SaveManager.

    In our SaveManager, we want to be able to save and load our high score. Here's what we'll have:

    using UnityEngine;

    public class SaveManager : MonoBehaviour
    {
        private string _highScoreKey = "highscore";

        public void SaveHighScore(float score)
        {
            PlayerPrefs.SetFloat(_highScoreKey, score);
        }

        public float LoadHighScore()
        {
            if (PlayerPrefs.HasKey(_highScoreKey))
            {
                return PlayerPrefs.GetFloat(_highScoreKey);
            }
            // No saved value yet: return a very high score, because in
            // this game a lower score (time) is better.
            return 99999999999;
        }
    }

    Variables Used

    For our SaveManager, we only create a string _highScoreKey that holds the key we use to store our score. We never want to type the key in manually, as that might lead to a mistyped key and many hours spent debugging a single spelling mistake.

    Walking Through the Code

    Our SaveManager script is only used to help us access our PlayerPrefs in one centralized location. The beauty of this system is that if one day we decide we don't want to use PlayerPrefs and want data serialization instead, we can just change SaveManager, and everything else that uses it can stay the same. Here's the code flow:

    • In SaveHighScore() we save the score that we're given as the high score.
    • In LoadHighScore() we return the high score that we saved. It's important to note that if we don't have a saved value yet, we would normally return 0; however, in our case a lower score is better, so we return a very high score instead.

    Step 1.2: Modifying our ScoreManager to Expose Our Score

    Previously, our ScoreManager only changed the score text shown on screen; now we also want to show our high score at the end of the game. To do that, we need to use the Victory and GameOver panels that we made in the past. Luckily for us, our GameManager already has some code that uses them. Now, for our GameManager to access our time (and save it with our SaveManager), we need to expose the score for other scripts to access.
    Here are our changes to ScoreManager:

    using System;
    using UnityEngine;
    using UnityEngine.UI;

    public class ScoreManager : MonoBehaviour
    {
        public Text Score;

        private string _time;
        private bool _gameOver;
        private float _score;

        void Start()
        {
            _time = "";
            _gameOver = false;
            _score = 9999999999;
        }

        void Update()
        {
            if (!_gameOver)
            {
                UpdateTime();
            }
        }

        private void UpdateTime()
        {
            _score = Time.time;
            _time = ScoreManager.GetScoreFormatting(Time.time);
            Score.text = _time;
        }

        public void GameOver()
        {
            _gameOver = true;
        }

        public float GetScore()
        {
            return _score;
        }

        // We can call this function anywhere we want; we don't need an
        // instance of this class.
        public static string GetScoreFormatting(float time)
        {
            int minutes = Mathf.FloorToInt(time / 60);
            int seconds = Mathf.FloorToInt(time % 60);
            float milliseconds = (time * 100) % 100;
            return string.Format("{0:0}:{1:00}:{2:00}", minutes, seconds, milliseconds);
        }
    }

    New Variables Used

    We create a new float _score that we'll use to keep track of the time that has passed for the player.

    Walking Through the Code

    Most of the code is the same; we just wrote some new functions. Here's what we did:

    • In Start() we set _score to a starting value.
    • In UpdateTime() we update _score to be the current time in the game.
    • I moved the code that formats the time as minutes:seconds:milliseconds into a static function called GetScoreFormatting(), which we now use to set our _time. Because it's static, GetScoreFormatting() can be called anywhere else in our game, even without an instance of ScoreManager. It's just a copy and paste of what we originally had in UpdateTime().
    • Finally, we create a public function GetScore() so that GameManager can get the score the player earned when they win.

    That's it! Now let's combine everything we've worked on to create our high score system.

    Step 1.3: Use Everything in Our GameManager

    Now that we have everything we want, let's use it in our GameManager script! We're going to write some code that saves our score when the player wins. Here's what we'll have:

    using UnityEngine;

    public class GameManager : MonoBehaviour
    {
        public Animator GameOverAnimator;
        public Animator VictoryAnimator;

        private GameObject _player;
        private SpawnManager _spawnManager;
        private ScoreManager _scoreManager;
        private SaveManager _saveManager;

        void Start()
        {
            _player = GameObject.FindGameObjectWithTag("Player");
            _spawnManager = GetComponentInChildren<SpawnManager>();
            _scoreManager = GetComponent<ScoreManager>();
            _saveManager = GetComponent<SaveManager>();
        }

        public void GameOver()
        {
            GameOverAnimator.SetBool("IsGameOver", true);
            DisableGame();
            _spawnManager.DisableAllEnemies();
        }

        public void Victory()
        {
            VictoryAnimator.SetBool("IsGameOver", true);
            DisableGame();
            // A lower time is a better score, so only save if we beat it.
            if (_scoreManager.GetScore() < _saveManager.LoadHighScore())
            {
                _saveManager.SaveHighScore(_scoreManager.GetScore());
            }
        }

        private void DisableGame()
        {
            _player.GetComponent<PlayerController>().enabled = false;
            _player.GetComponentInChildren<MouseCameraContoller>().enabled = false;
            PlayerShootingController shootingController = _player.GetComponentInChildren<PlayerShootingController>();
            shootingController.GameOver();
            shootingController.enabled = false;
            Cursor.lockState = CursorLockMode.None;
            _scoreManager.GameOver();
        }
    }

    New Variables Used

    The first thing we did was get ourselves an instance of SaveManager: _saveManager. With our SaveManager in GameManager, we can now save and load scores.
    Walking Through the Changes

    The code that we're adding uses our SaveManager to save our score.

    • In Start() we grab our SaveManager, which is also attached to the GameManager game object.
    • When Victory() is called, we take the current score and compare it with the high score from our SaveManager; if our time is lower than the high score, then we save our score as the new high score.

    With our changes to the GameManager, we now have a working high score system. Now the problem? We have no way of seeing if any of this works! Worry not, that's going to be the next step of our work!

    Step 2: Update the Game Over Panels to Show Our Score

    Now that we have the code to change our high score, we're going to work on displaying it in our UI. To do this, we're going to make a couple of changes to our scripts:

    • Change our GameOverUIManager to change the Text that we show.
    • Change our GameManager to get the GameOverUIManager from our game over panels and then set the high score to show when the game is over.

    Step 2.1: Making Changes to GameOverUIManager

    If you recall, our GameOverUIManager was created to help us detect when the player clicks on the start over button in our panel. We're going to make some changes so that the text in our panel also says what our high score is. Let's get to it! Here are the changes that were made:

    using UnityEngine;
    using UnityEngine.UI;
    using UnityEngine.SceneManagement;

    public class GameOverUIManager : MonoBehaviour
    {
        private Button _button;
        private Text _text;

        void Start()
        {
            _button = GetComponentInChildren<Button>();
            _button.onClick.AddListener(ClickPlayAgain);
            _text = GetComponentInChildren<Text>();
        }

        public void ClickPlayAgain()
        {
            SceneManager.LoadScene("Main");
        }

        public void SetHighScoreText(string score, bool didWin)
        {
            if (didWin)
            {
                _text.text = "You Win! \n" + "High Score: " + score;
            }
            else
            {
                _text.text = "Game Over! \n" + "High Score: " + score;
            }
        }
    }

    New Variables Used

    The only new variable is a Text UI element that we call _text. Specifically, this is the UI element that we use to tell the player that they won (or lost) the game.

    Walking Through the Changes

    The only changes we made were:

    • Instantiate our Text UI in Start(). In the case of our panel, the Text was a child of the panel that GameOverUIManager was attached to, so we have to look for the component in our children.
    • Once we have an instance of our text, I created a new SetHighScoreText() that, depending on whether we won or lost, changes our text to show the high score.

    One important thing I want to mention. Do you notice the "\n"? \n is an escape sequence for a new line, which means that in our UI we'll see something like:

    Game Over!
    High Score: 1:21:12

    Step 2.2: Calling our GameOverUIManager from GameManager

    Next up, we want to be able to set the score in our GameOverUIManager from our GameManager. We must make some pretty big changes to the game panels that we use. Before, we just grabbed the Animator component; now we need that and the GameOverUIManager. Besides that, we just need to call our GameOverUIManager script and set the text with our high score.
Here’s what we’ve done: using UnityEngine; using UnityEngine.UI; public class GameManager : MonoBehaviour { public GameObject GameOverPanel; public GameObject VictoryPanel; private GameObject _player; private SpawnManager _spawnManager; private ScoreManager _scoreManager; private SaveManager _saveManager; void Start() { _player = GameObject.FindGameObjectWithTag("Player"); _spawnManager = GetComponentInChildren<SpawnManager>(); _scoreManager = GetComponent<ScoreManager>(); _saveManager = GetComponent<SaveManager>(); } public void GameOver() { DisableGame(); _spawnManager.DisableAllEnemies(); ShowPanel(GameOverPanel, false); } public void Victory() { DisableGame(); if (_scoreManager.GetScore() < _saveManager.LoadHighScore()) { _saveManager.SaveHighScore(_scoreManager.GetScore()); } ShowPanel(VictoryPanel, true); } private void ShowPanel(GameObject panel, bool didWin) { panel.GetComponent<Animator>().SetBool("IsGameOver", true); panel.GetComponent<GameOverUIManager>().SetHighScoreText(ScoreManager.GetScoreFormatting(_saveManager.LoadHighScore()), didWin); } private void DisableGame() { _player.GetComponent<PlayerController>().enabled = false; _player.GetComponentInChildren<MouseCameraContoller>().enabled = false; PlayerShootingController shootingController = _player.GetComponentInChildren<PlayerShootingController>(); shootingController.GameOver(); shootingController.enabled = false; Cursor.lockState = CursorLockMode.None; _scoreManager.GameOver(); } } New Variables Used The biggest change is with the GameOverPanel and VictoryPanel. Before these were Animators that we used to show our panel, now we have the game objects themselves because we need to access more than just the animator. New Functions Created We created a new function: ShowPanel(), which takes in the GamePanel that we’re changing the text to and whether or not we won the game. From this information, we play our animator and get the GameOverUIManager and call SetHighScoreText() to change the text. Walking Through the Code Here’s how our new code gets used: Whenever the game is over, either the player lost or won, we would call GameOver() and Victory(). From there, we would disable our game and then call our new function ShowPanel() Depending on whether we won or not, we would pass in the correct Panel and state we’re into ShowPanel() Finally, in ShowPanel(), we would play the animation to show our Panel and call setHighScoreText() from our GameOverUIManager to change the text to display our high score. Step 2.3: Attaching our Panels back into GameManager Now with everything in place, we need to add our GameOverPanel and VictoryPanel, because when we changed them, we got rid of any reference to our previous models. Here’s what to do: Select our GameManager game object from the hierarchy. Look for the GameManager script component, drag our Panels (Victory and GameOver) from HUD in our game hierarchy, and put them in the appropriate slots. With that done, we should have something like this: Step 2.4: Fixing our Text UI Display Now with all of this implemented, we can finally play our game! After winning (or losing) in our game, we’ll get our BRAND-NEW panel: New… right? Wrong! It looks the same as what we had! What happened? It turns out, the text is still there, however, we didn’t have enough space in our Text UI to show the remaining text! This problem can be solved easily. We just need to increase the size of our Text UI in our GameOver and Victory Panel in our hierarchy. 
    Step 2.4: Fixing our Text UI Display

    Now with all of this implemented, we can finally play our game! After winning (or losing), we'll get our BRAND-NEW panel: New... right? Wrong! It looks the same as what we had! What happened? It turns out the text is there; we just didn't have enough space in our Text UI to show the rest of it! This problem can be solved easily: we just need to increase the size of the Text UI in our GameOver and Victory panels in the hierarchy.

    The first thing we need to do is reveal the panels that we hid. In each panel, find the CanvasGroup component; we previously set its alpha to 0 to hide the panel. Change that back to 1 so we can see the panel again. Just don't forget to change it back to 0 afterwards.

    In the same panel, select its child game object, Text. We want to play around with the Width and Height fields in its Rect Transform component. I ended up with:

    • Width: 300
    • Height: 80

    I also added some new text for a demo of what to expect. Make sure that you make this change to both our GameOver panel and our Victory panel in the hierarchy. Now if we were to play the game, we'll see the high score when we win, and when we lose. Don't ask how I won in 2 seconds. I modified our scripts a bit, okay?

    Conclusion

    With all of this, we're done with day 33! That means we're officially 1/3 of the way through the 100 days of VR challenge! Not only that, now that we have this high score system, I'm going to officially call our simple FPS finished!

    Tomorrow, I'm finally going to start looking more into how to do VR development! Until then, I'll see you all on day 34!

    Day 32 | 100 Days of VR | Day 34

    Side topic: Choosing a phone and platform to develop VR in
    • 0 comments
    • 476 views
  • Ludum Dare 40

    By Vilem Otte

    Since Ludum Dare 35 I have participated regularly in every one of them, and this one was no exception. My post-release thoughts are positive - this time I again worked with a friend (with whom I've also worked on past Ludum Dares), and I enjoyed it a lot. As this is not a post mortem yet, I won't go into details about what went right or wrong - for now I'll just show the results and put out a few notes...

    Yes, that's one of the screenshots from the game (without UI). I'm using this one as a "cover" screenshot - so it should also be included here.

    Anyway, this was another experience with Unity for me, and maybe one of my last Ludum Dare experiences with it. While I do like it, if I can think of a game suitable for my own game engine for the theme next time, it's possible that I won't use Unity again.

    Ludum Dare

    Every 4 months or so, this large game jam happens. It's a sort of competition, well... there are no prizes, and I honestly do it just for fun (and to force myself to do some "real" game development from time to time). You have 48 or 72 hours to create your game (depending on whether you go for the compo or jam category), and there are just a few basic rules (which you can read on the site - https://ldjam.com/). Then for a few weeks you play and rate other games, and the more you play, the more people will play and rate your game. While ratings aren't that important in my opinion, you do get some feedback through the comments. Actually, I was wrong about no prizes - your prize is your finished game and the feedback of the other people participating in Ludum Dare.

    Unity...

    I've used Unity for quite a long time - and I have 2 things to complain about this time. The majority of the shaders used in Air Pressure (yes, that is the game's name) are actually custom - and I might bump into some of them in the post mortem. The combination of Unity and custom shaders is actually quite a pain, especially compared to my own engine (while it isn't as generic as Unity - actually my engine is far less complex, and maybe because of that, shader editing and the workflow around it are a lot more pleasant... although these are my own subjective feelings, influenced by knowing the whole internal structure of my own engine in detail).

    The second thing is particularly annoying and related to Visual Studio. The Unity extension for Visual Studio was broken (although I believe the patch released during Ludum Dare fixed it - there was just no time to update during the jam): each time a C# file was created, the project broke (IntelliSense worked weirdly, Visual Studio reported errors everywhere, etc.). The only workaround was to delete the project files (solution and csproj), and re-open Visual Studio from Unity (which re-created them).

    Unity!

    On the other hand, it was good for the task - we finished the game with it, and it was fun. Apart from the Visual Studio struggles, we didn't hit any other problem (and it crashed on us just once - during the whole 72 hours of the jam - once between the two of us). So I'm actually quite looking forward to using it next time for some project.

    Anyway, I enjoyed it a lot this time; now it's time to get back to work (not really game development related). Oh, and before I forget, here they are - the first gameplay video and a link to the game on the Ludum Dare site: https://ldjam.com/events/ludum-dare/40/air-pressure

    PS: And yes, I was actually tweeting progress during the jam, which left me with the feeling that I've probably surpassed the number of tweets generated by Donald Trump in the past 3 days.
    • 0 comments
    • 743 views
  • GameDev Challenges - January 2018 (Missile Command)

    By GoliathForge

    This challenge was a good run because of:

    • Implementing coroutines with IEnumerator and yield return (see the sketch below)
    • A simple Dictionary-based texture manager
    • A basic particle engine
    • A GameStateMachine ripped from a previous MonoGame project (Joust)

    I had fun making the flare particles. Did I mention it's not cool unless it has lightning bolts? Browsed current sci-fi Google images... nice... inspired.

    Source: MarkK_LightningCommand.zip
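    Of those, the coroutine trick is the most reusable. Outside Unity (for example in a MonoGame project like the one mentioned above), the same IEnumerator/yield return pattern can be driven by a tiny scheduler. A minimal sketch, with invented names, assuming one update call per frame:

    using System.Collections;
    using System.Collections.Generic;

    // Each coroutine is an IEnumerator advanced once per frame;
    // "yield return null" means "wait one frame".
    public class CoroutineRunner
    {
        private readonly List<IEnumerator> _routines = new List<IEnumerator>();

        public void StartRoutine(IEnumerator routine)
        {
            _routines.Add(routine);
        }

        // Call once per game update (e.g., from Game.Update in MonoGame).
        public void Update()
        {
            for (int i = _routines.Count - 1; i >= 0; i--)
            {
                if (!_routines[i].MoveNext())
                {
                    _routines.RemoveAt(i); // routine finished
                }
            }
        }
    }

    A multi-frame effect then becomes a plain method, e.g. IEnumerator Flash() { for (int f = 0; f < 10; f++) { /* emit flare particles */ yield return null; } } started with runner.StartRoutine(Flash()).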
    • 5 comments
    • 693 views

Our community blogs

  1. The free mobile zombie road-kill action racing game 'Dead Run : Road of Zombie' has been updated to version 1.0.6

    Enjoy more fun with the cinematic camera and missions.

    Download on Google Play

     

    **************** v1.0.5 Update ****************

    • Cinematic camera movement
    • Play time extension
    • Added control tips
    • Added slopes
    • Display combo kill numbers


  2. Silverpath Online is currently in alpha testing, and it has an active event that spawns bosses in the event room.

    You can join for free and report bugs, glitches, or whatever other problems you encounter.

    A friends and community system is currently in progress, so your feedback is important to me. Alpha users will also get prizes in the beta based on their end-of-alpha rankings.

    https://play.google.com/apps/testing/com.ogzzmert.online.game

     


  3. Time for an update.  


    So what have I been working on in the last couple of weeks? Firstly, the lighting and particle systems are activated. The particle system is pretty unintrusive, with the most notable aspect being the chimney smoke rising from the different steampunk engines. Alongside this there is now a bit of splashing water and a few sparks flying around.

    Much more noticeable is the lighting system, as demonstrated in the new screenshots. There is now a day / night cycle - I spent quite a long time making sure that the night was not too dark, and I already have a game setting allowing this to be turned off (while this loses a lot of the atmosphere, having daylight only slightly improves performance ... no other lights need to be active ... and maximises visibility).

    Introducing other lights was a bit more problematic than expected. Firstly, it took a while to fine-tune the light fall-off correctly, and secondly I upgraded the code quite a bit. Originally, the light manager would always choose the lights nearest to the player, meaning that a maximum of 7 lights (beyond the sunlight) could be active in any scene. Okay, but it did mean that more distant lights would suddenly flick on. The new logic activates the lights nearest to each game object or map tile currently being drawn, allowing a much greater number of lights to be shown in any scene. In general the lists of lights to activate are pre-calculated as each map section is loaded, with only lighting for moving objects being calculated on the fly. So far this seems to be working nicely - if I overloaded a particular area with lights there could still be light pop-up, but with sensible level design this can be avoided. I did consider pre-baking the lighting, but with the day/night cycle and the desire to alter light intensity and even colour on the fly this was going to be too complex, and the performance of the current solution seems to be very good.
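    The per-object light selection described above can be sketched in a few lines. This is an invented illustration (the blog doesn't show its engine code), assuming a fixed light budget per drawn object or tile:

    using System.Collections.Generic;
    using System.Linq;

    public struct PointLight
    {
        public float X, Y, Z;
        public float Intensity;
    }

    public static class LightSelector
    {
        // Choose up to maxLights lights nearest to the object being drawn,
        // instead of nearest to the player. Pre-compute this per map tile
        // at load time; run it per frame only for moving objects.
        public static List<PointLight> SelectNearest(
            List<PointLight> allLights, float ox, float oy, float oz, int maxLights = 7)
        {
            return allLights
                .OrderBy(l => (l.X - ox) * (l.X - ox)
                            + (l.Y - oy) * (l.Y - oy)
                            + (l.Z - oz) * (l.Z - oz))
                .Take(maxLights)
                .ToList();
        }
    }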

    (screenshots: Blog2shot1.jpg, Blog2shot3.jpg)

    The other task I've been working on is the introduction of two new map zones. The objective was to introduce something distinct from what has been done so far, and to this end I have been working on a wilderness zone and an industrial zone. The wilderness zone completely failed to work. It's a beginner zone, so there wasn't any intention to overload it with complex gameplay, but even so it's just empty and uninteresting - back to the drawing board on that one.

    As for the industrial zone, this one is going better. There are a number of new models being used, and a few more to add, with a couple of objectives in mind. First off, the aim is to create a little of the confusion of a steampunk factory - pipes, big machines, smoke and steam. Secondly, to hint at the downside of (steampunk) industrialisation, with grimier texturing and even the addition of waste stacks (handily blocking off the player's progression, requiring them to navigate their way round more carefully). An early draft is shown in the screenshot below - the ground texturing needs to be changed, with green grass being replaced by rock and sand, and I will also be working on the lighting and fog - to draw in the view and create a darker scene even in the middle of the day. The scene may be a bit too busy at the moment, but I will see how I feel once these changes are made.

    (screenshot: Blog2shot2.jpg)
    Hope the update was interesting - as before, any feedback is most welcome.

  4. Originally posted on Troll Purse development blog.

    Unreal Engine 4 is an awesome game engine and the Editor is just as good. There are a lot of built-in tools for a game (especially shooters) and some excellent tutorials out there for it. So, here is one more. Today's topic is different methods of programming player-world interaction in Unreal Engine 4 in C++. While the context is specific to UE4, the ideas easily translate to any game with a similar architecture.


    Interaction via Overlaps

    By far the most common tutorials for player-world interaction use Trigger Volumes or Trigger Actors. This makes sense: it is a decoupled way to set up interaction and leverages most of the work using classes already provided by the engine. Here is a simple example where the overlap code is used to interact with the player:

    Header

    // Fill out your copyright notice in the Description page of Project Settings.

    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"

    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
        GENERATED_BODY()

    public:
        // Sets default values for this actor's properties
        AInteractiveActor();

        virtual void BeginPlay() override;

    protected:
        UFUNCTION()
        virtual void OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult);

        UFUNCTION()
        virtual void OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex);

        UFUNCTION()
        virtual void OnPlayerInputActionReceived();

        UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = Interaction)
        class UBoxComponent* InteractionTrigger;
    };
    

    This is a small header file for a simple base Actor class that can handle overlap events and a single input action. From here, one can start building up the various entities within a game that will respond to player input. For this to work, the player pawn or character will have to overlap with the InteractionTrigger component. This will then put the InteractiveActor into the input stack for that specific player. The player will then trigger the input action (via a keyboard key press for example), and then the code in OnPlayerInputActionReceived will execute. Here is a layout of the executing code.

    Source

    // Fill out your copyright notice in the Description page of Project Settings.

    #include "InteractiveActor.h"
    #include "Components/BoxComponent.h"
    #include "GameFramework/Pawn.h"

    // Sets default values
    AInteractiveActor::AInteractiveActor()
    {
        PrimaryActorTick.bCanEverTick = true;

        RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
        RootComponent->SetMobility(EComponentMobility::Static);

        InteractionTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Interaction Trigger"));
        InteractionTrigger->InitBoxExtent(FVector(128, 128, 128));
        InteractionTrigger->SetMobility(EComponentMobility::Static);
        // Bind this class's own overlap handlers.
        InteractionTrigger->OnComponentBeginOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerBeginOverlap);
        InteractionTrigger->OnComponentEndOverlap.AddUniqueDynamic(this, &AInteractiveActor::OnInteractionTriggerEndOverlap);

        InteractionTrigger->SetupAttachment(RootComponent);
    }

    void AInteractiveActor::BeginPlay()
    {
        Super::BeginPlay();

        if (InputComponent == nullptr)
        {
            InputComponent = NewObject<UInputComponent>(this, TEXT("Input Component"));
            InputComponent->bBlockInput = bBlockInput;
        }

        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AInteractiveActor::OnPlayerInputActionReceived);
    }

    void AInteractiveActor::OnPlayerInputActionReceived()
    {
        // This is where the actor's logic for receiving input will execute.
        // Something as simple as a log message works to test it out.
    }

    void AInteractiveActor::OnInteractionTriggerBeginOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor, UPrimitiveComponent* OtherComp, int32 OtherBodyIndex, bool bFromSweep, const FHitResult& SweepResult)
    {
        // Only pawns have controllers, so cast before asking for one.
        APawn* Pawn = Cast<APawn>(OtherActor);
        if (Pawn)
        {
            APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
            if (PC)
            {
                EnableInput(PC);
            }
        }
    }

    void AInteractiveActor::OnInteractionTriggerEndOverlap(UPrimitiveComponent* OverlappedComp, class AActor* OtherActor, class UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
    {
        APawn* Pawn = Cast<APawn>(OtherActor);
        if (Pawn)
        {
            APlayerController* PC = Cast<APlayerController>(Pawn->GetController());
            if (PC)
            {
                DisableInput(PC);
            }
        }
    }
    

    Pros and Cons

    The positives of the collision volume approach are the ease with which the code is implemented and the strong decoupling from the rest of the game logic. The negatives are that interaction becomes broad when considering the game space, and that a new trigger volume is introduced for each interactive object within the scene.

    Interaction via Raytrace

    Another popular method is to ray trace from the player's viewpoint, looking for any interactive world items for the player to interact with. This method usually relies on inheritance, handling player interaction within the interactive object class. It eliminates the need for another collision volume for item usage and allows for more precise interaction targeting.

    Source

    AInteractiveActor.h

    // Fill out your copyright notice in the Description page of Project Settings.

    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Actor.h"
    #include "InteractiveActor.generated.h"

    UCLASS()
    class GAME_API AInteractiveActor : public AActor
    {
        GENERATED_BODY()

    public:
        virtual void OnReceiveInteraction(class APlayerController* PC);
    };
    

    AMyPlayerController.h

    // Fill out your copyright notice in the Description page of Project Settings.

    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/PlayerController.h"
    #include "AMyPlayerController.generated.h"

    UCLASS()
    class GAME_API AMyPlayerController : public APlayerController
    {
        GENERATED_BODY()

    public:
        AMyPlayerController();

        virtual void SetupInputComponent() override;

        float MaxRayTraceDistance;

    private:
        class AInteractiveActor* GetInteractiveByCast();

        void OnCastInput();
    };
    

    These header files define the functions minimally needed to set up raycast interaction. Also note that there are two files here, as two classes need modification to support input. This is more work than the first method, which uses trigger volumes. However, all input binding is now constrained to the single ACharacter class or - if you designed it differently - the APlayerController class. Here, the latter was used.

    The logic flow is straightforward. A player points the center of the screen towards an object (ideally a HUD crosshair aids the coordination) and presses the desired input button bound to Interact. From here, the function OnCastInput() is executed. It will invoke GetInteractiveByCast(), returning either the first interactive actor hit by the camera ray cast or nullptr if there are no collisions. Finally, the AInteractiveActor::OnReceiveInteraction(APlayerController*) function is invoked. That final function is where inherited classes will implement interaction-specific code.

    The simple execution of the code is as follows in the class definitions.

    AInteractiveActor.cpp

    void AInteractiveActor::OnReceiveInteraction(APlayerController* PC)
    {
        //nothing in the base class (unless there is logic ALL interactive actors will execute, such as cosmetics (i.e. sounds, particle effects, etc.))
    }
    

    AMyPlayerController.cpp

    AMyPlayerController::AMyPlayerController()
    {
        MaxRayTraceDistance = 1000.0f;
    }

    void AMyPlayerController::SetupInputComponent()
    {
        Super::SetupInputComponent();
        InputComponent->BindAction("Interact", EInputEvent::IE_Pressed, this, &AMyPlayerController::OnCastInput);
    }

    void AMyPlayerController::OnCastInput()
    {
        AInteractiveActor* Interactive = GetInteractiveByCast();
        if (Interactive != nullptr)
        {
            Interactive->OnReceiveInteraction(this);
        }
    }
    
    AInteractiveActor* AMyPlayerController::GetInteractiveByCast()
    {
        FVector CameraLocation;
        FRotator CameraRotation;

        GetPlayerViewPoint(CameraLocation, CameraRotation);
        FVector TraceEnd = CameraLocation + (CameraRotation.Vector() * MaxRayTraceDistance);

        FCollisionQueryParams TraceParams(TEXT("RayTrace"), true, GetPawn());
        TraceParams.bTraceAsyncScene = true;

        FHitResult Hit(ForceInit);
        GetWorld()->LineTraceSingleByChannel(Hit, CameraLocation, TraceEnd, ECC_Visibility, TraceParams);

        // Cast<> returns nullptr both when nothing was hit and when the hit
        // actor is not an AInteractiveActor.
        return Cast<AInteractiveActor>(Hit.GetActor());
    }
    

    Pros and Cons

    One pro of this method is that control of input stays in the player controller, while the implementation of the input action is still owned by the actor that receives it. Some cons are that the interaction fires every time the player clicks, and the player cannot continuously detect interactive state (for example, to show a prompt) without a refactor using a Tick function override.

    Conclusion

    There are many methods of player-world interaction within a game world. With regard to creating Actors within Unreal Engine 4 that allow for player interaction, two of these potential methods are collision volume overlaps and ray tracing from the player controller. There are several other methods discussed out there that could also be used. Hopefully, the two implementations presented help you decide how to go about player-world interaction within your game. Cheers!

     

     


  5. Hi there,

    this week I was working on the following stuff.

    Forest Strike - Dev Blog 4


    Scaling issues

    I was struggling with the scaling issues. As you might have seen, the "pixels" displayed on the screen did not always have the same size. Now this issue is fixed and it looks way better. Check it out:

    Forest Strike - Scaling issue fix

    Mouse dragging

    Because of the scaling fix, not all tiles are visible on one screen anymore. In order to navigate through the map, you can now use the mouse to drag the camera around. You're going to use this feature on larger maps to navigate your characters and get an overview. (A rough sketch of how such a drag can be implemented follows below.)

    Forest Strike - Mouse dragging
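    The blog doesn't say which engine Forest Strike uses, so purely as an illustration of the general technique, here is a minimal Unity-style C# sketch of a mouse camera drag; all names here are invented:

    using UnityEngine;

    // While the mouse button is held, move the camera opposite to the
    // mouse movement so the world appears to be dragged.
    public class CameraDrag : MonoBehaviour
    {
        public float PixelsToWorld = 0.01f; // screen-to-world scale factor (assumed)

        private Vector3 _lastMousePos;

        void Update()
        {
            if (Input.GetMouseButtonDown(0))
            {
                _lastMousePos = Input.mousePosition;
            }
            else if (Input.GetMouseButton(0))
            {
                Vector3 delta = Input.mousePosition - _lastMousePos;
                transform.position -= new Vector3(delta.x, delta.y, 0f) * PixelsToWorld;
                _lastMousePos = Input.mousePosition;
            }
        }
    }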

    Title screen

    Finally, I implemented a title screen. From here, you can start a new game, open your settings, and exit the game. It is currently under development, so it might change a bit. The background image should stay the same.

    Forest Strike - Title Screen


    That's it for this update. Be sure to follow this blog in order to stay up2date. :3

    Thank you for reading! :D

    As always, if you have questions or any kind of feedback feel free to post a comment or contact me directly.

    Additionally, if you want to know where I get my ideas regarding pixel arts from, you can check out my Pinterest board.

     
  6. Hello everyone!

    Oh, I'm so delighted with the number of views! And gamedev.net even featured our entry on their Facebook page! Thank you for finding this blog interesting! 

    In the last entry, I made a brief introduction of our game Egypt: Old Kingdom. It's not just based on history; we're basically trying to recreate history in game form. Of course, that requires a tremendous amount of research!


    Sometimes people ask us: "Why did you choose Hierakonpolis/Memphis as the main location, and not Thinis or some other important settlements?"

    The answer is: because in order to make the game really historical, our location of choice has to be very well researched. We need a lot of information about the location: events, personalities, buildings, lifestyle.

    The research was done by our game designer, Mikhail, and I think he could now get his master's degree as an Egyptologist, because he knows A LOT about Ancient Egypt thanks to this research! xD He did the research by himself for Bronze Age and Marble Age, but then it got too hard to keep up with both research and game design. For the next game, Predynastic Egypt, we contacted the scientists from the Center for Egyptian Studies of the Russian Academy of Sciences (CES RAS). We're lucky they agreed to help! Predynastic Egypt was the first game made with their support.

    For Egypt: Old Kingdom, Mikhail created a huge database containing most of the known events, places and personalities of the Old Kingdom period:


    Every little thing about the period is studied thoroughly in order to immerse the player deeper in the game. We learn about the kings' deeds and their authority, whether they properly worshipped the gods, and whether they started any wars. We study the climate, soil, vegetation and natural disasters of that period. We learn about the appearance of the ancient Egyptians, their dress, their food, their houses.

    Sketches of Egyptians' appearance:


    When the database is ready, Mikhail goes over it with the scientists. They check everything, correct what's necessary, and provide more information and details. Like every other science, history has a lot of controversial points. For example, "The White Walls" of Memphis is something that scientists can't agree about. There are two major opinions about what it could be:

    1. It is the walls of a palace. 

    2. It is the walls of burial grounds.


    In our game, we don't want to take sides, so the scientists of CES RAS inform us about such "dangerous" topics as well. This way we can avoid the controversy and let the player decide which theory he prefers.

    This is Mikhail (on the left) discussing the game events with the scientists :) In the middle is Galina Belova, one of the most famous Russian Egyptologists. The director of CES RAS is on the right.


    During this part of the work we sort out all of the events and divide them into groups: the most important events, which must be in the game; less important events, which can be beneficial for the atmosphere of the game; and insignificant events.

    When this part of the work is done and all of the information is sorted out, the design of the game begins. In the process we still keep in touch with the scientists, because some events are not easy to turn into a game at all.

    For example, one of our goals is to make the player fully experience the life of Ancient Egypt. We want to make the player think like the Ancient Egyptians did, and to experience the same difficulties. In order to do that, we have to know what the Egyptians were thinking, and through the gameplay we have to put the player in the same conditions the Egyptians had.

    The ancient Egyptians strongly believed that if they did not worship their ancestors and gods properly, the country would experience all kinds of disasters. This belief was unconscious and unconditional; that's why they built all those funeral complexes and made sacrifices, trying to please their ancestors. Even cities were built only as a way to please the gods and ancestors! They were sure that if they stopped worshipping properly, the country would be doomed, because the ancestors would stop protecting them.

    We wanted to nudge the player to build all these pyramids for the same reasons the Egyptians did, and this is how the "Divine favor" stat appeared. This stat is mostly needed to worship the gods' cults, and the player can earn it by working in temples and worshipping ancestors. But what really makes the player feel like the Egyptians did is a feature of the "Divine favor" stat: it degrades by 0.1 every turn. This happens because people are dying; hence, there are more and more ancestors that must be worshipped. If the player does not pay attention to this stat and lets it degrade too much, more and more disasters start to happen, such as fires, earthquakes, droughts, etc. This greatly influences the economy and the result of the game. (A small sketch of this mechanic in code follows below.)
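    As a rough illustration of the mechanic (the actual game logic isn't public; everything here besides the 0.1-per-turn decay is an invented assumption):

    using System;

    // Hypothetical sketch of the "Divine favor" stat described above:
    // it decays by 0.1 per turn, and low values raise disaster risk.
    public class DivineFavor
    {
        public float Value { get; private set; } = 5.0f; // starting value: assumption
        private readonly Random _rng = new Random();

        // Earned by working in temples and worshipping ancestors.
        public void Earn(float amount)
        {
            Value += amount;
        }

        // Called once per turn; returns true if a disaster strikes.
        public bool EndTurnAndCheckDisaster()
        {
            Value -= 0.1f; // more ancestors each turn means more upkeep

            // Assumed risk model: the further below a threshold the stat
            // falls, the likelier a fire, earthquake, drought, etc.
            float risk = Math.Max(0f, (2.0f - Value) * 0.2f);
            return _rng.NextDouble() < risk;
        }
    }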

That's how we turn history into a game. It can be fun and challenging! There are many other examples of similar transformations. We'll definitely keep working with scientists, not only Russian but also foreign. In fact, we hope to engage more and more people in the process of game making.

    That's it for now. Thank you for reading! Comments are very welcome!

If you would like to know more about the game and follow our social media, here are the links:

    Egypt: Old Kingdom on Steam;

    Predynastic Egypt on Steam;

    Our community on Facebook;

    Our Twitter.

7. Three weeks have passed, and a few new people have installed the game. I also had the chance to deploy the game to junior's x86 tablet, where it currently crashes. The tablet does have a gyro sensor, which I thought my code handled fine when using it on Windows Phone, but apparently the code isn't crash-proof enough.

I've been looking mostly for crashes now, and a handful (well, two in the last 3 days) did appear. The crash info is all over the place. Some have a really great stack trace, others nothing. Some seem to be in between. I reckon this is heavily affected by users fiddling with their telemetry settings.

What I'm also missing at first glance is the configuration of the crashed app. Since the build produces executables for x86, x64 and ARM, it'd be nice to know which of these was the culprit. What's always there is the name, IP, Windows build version and device type of the device that was running the game.

While the stack traces sometimes help, they are only that. You don't get a full dump or local watch info. So you can get lucky and the location of the problem is clear, or you're out of luck. In these last two crashes the stack trace hints at a null pointer in a method that is used throughout the game (the GUI displaying a texture section). I suspect that it happens during startup and a code path entered a function before it was ready. In these cases I can only add a safety check and cleanly jump out of the function. Build, upload, re-certify, and the next try is 1 to 3 days away.
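For illustration, such a safety check could look like the sketch below. It assumes a MonoGame/XNA-style SpriteBatch API; the class and member names are hypothetical, not the game's actual code:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical guard for the suspected startup crash: bail out cleanly if the
// GUI is asked to draw a texture section before its assets are ready.
class Gui
{
    SpriteBatch spriteBatch;  // created during initialization
    Texture2D atlas;          // may still be null while the game starts up

    public void DrawTextureSection(Rectangle source, Vector2 position)
    {
        // Safety check: a startup code path may reach this too early.
        if (spriteBatch == null || atlas == null)
            return;  // skip this draw instead of dereferencing null

        spriteBatch.Draw(atlas, position, source, Color.White);
    }
}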

     

Currently I'm struggling to get remote debugging working on the tablet. I can deploy the app via the enabled web portal, but the remote debugger is not properly recognized by Visual Studio. I was hoping for USB debugging, as that works nicely on the phone, but I had no luck with it.

     

    Well, here's to the next version hoping to get those crashes fixed!

     


    Recent Entries

    Latest Entry


    Here goes my game.

This challenge is sooooo convenient for me, since I have no drawing ability... :(


By finishing it I learned a lot about Cocos Creator, which is good at UI effects.

    Thanks a lot for the Challenge!

     

    Download (Windows only):

    https://www.dropbox.com/s/q0l37r5urhqgtup/MissileCommandRelease.zip?dl=0

     

    Source code:

    https://github.com/surevision/Missile_Command_Challenge

     

Screenshots: title screen and gameplay.

     

8. For QLMesh (and some other projects), I am running my own fork of the Open Asset Import Library (Assimp). The difference: it is an amalgamated build - all sources are merged into one file (including dependencies). Since Assimp recently switched to miniz, I have replaced the remaining references to zlib with miniz - so zlib is not required either.

    drwxr-xr-x  85 piecuchp  staff     2890 Jan 17 23:34 assimp
    -rw-r--r--   1 piecuchp  staff  4921627 Jan 17 23:34 assimp.cpp
-rw-r--r--   1 piecuchp  staff  2893785 Jan 17 23:34 private/assimp.h

     

Everything you need to build assimp is:

    g++ -c -std=c++11 code-portable/assimp.cpp
    

or just add assimp.cpp to your project/IDE (you can find the code-portable directory in my repo).

One disclaimer: I have only tested this amalgamation under OSX with QLMesh. The main reason for the amalgamation is that it makes compilation/recompilation on different platforms with different configurations much easier.

A side effect is that the single-file assimp.cpp compiles really fast (like 10x faster on my MacBook than the original project files).

(http://pawelp.ath.cx/) (http://komsoft.ath.cx/) (https://itunes.apple.com/us/app/qlmesh/id1037909675)



    Recent Entries

    I have released my first free prototype!

    https://yesindiedee.itch.io/is-this-a-game

    How terrifying!

It is strange that I have been working toward the moment of releasing something to the public for all of my adult life, and now that I have, I find it pretty scary.

    I have been a developer now for over 20 years and in that time I have released a grand total of 0 products.

    The Engine

The engine is designed to be flexible with its components, but so far it uses:

OpenGL, OpenAL, Python (scripting) and Cg; everything else is built in.

    The Games

When I started developing a game I had a pretty grand vision: a 3D exploration game. It was called Cavian (image attached), and yep, it was far too complex for my first release. Maybe I will go back to it one day.

I took a year off after that; I had to sell most of my stuff anyway, as not releasing games isn't great for your financial situation.

    THE RELEASE

When I came back I was determined to actually release something! I lowered my sights to a car game. It is basically finished, but unfortunately my laptop is too old to handle the deferred lighting (Thinkpad X220, Intel graphics), so I can't really test it. I'm going to wait until I can afford a better computer before releasing it.

Still determined to release something, I decided to focus more on the gameplay than the graphics.

    Is This A Game?

Now I have created an experimental prototype. It's released and everything: https://yesindiedee.itch.io/is-this-a-game

    So far I don't know if it even runs on another computer. Any feedback would be greatly appreciated!

If you have any questions about any part of the creation of this game (design, coding, scripting, graphics, deployment), just ask and I will try to make a post on it.

    Have a nice day, I have been lurking on here for ages but never really said anything.....

    I like my cave

     



    Recent Entries

    Last week we made a draft design document to visualize the global structure of the game. 

Because the levels are linear, with a lot of verticality and a free camera, we are trying to find a good solution for the boundaries. Of course the levels will be heavily filled with environment deco, but we don't want to spend a lot of time making sure there are no holes! And we don't want invisible walls that are not obvious. So I tried a depth-faded magic wall (using depth fade for the intersection between the wall and the other objects, plus a camera depth fade).


Now the chantilly path automatically conforms to the ground.


As much as possible, we try to save time by creating little tools.

     

9. This will be a short technical one, for anyone else facing the same problem. I can't pretend to have a clue what I was doing here; I can only relate the procedure I followed in the hope it will help others, as I found little information online on this subject.

I am writing an Android game and want to put in gamepad support for analogue controllers. This has proved incredibly difficult, because the Android Studio emulator has no built-in support for trying out gamepad functionality. So I bought a Tronsmart Mars G02 wireless gamepad (it comes with a USB wireless dongle). It also supports Bluetooth.

The problem I faced was that the gamepad worked fine on my Android TV box, but wasn't working under Linux Mint, let alone in the emulator, and wasn't working via Bluetooth on my tablet and phone. I needed it working in the emulator, ideally, to be able to debug (as the Android TV box was too far away).

Here is how I solved it, for anyone else facing the same problem: first the problem of getting the gamepad working and seen under Linux, and then the separate problem of getting it seen by the Android emulator (this may work under Windows too).

    Under Linux

Unfortunately I couldn't get Bluetooth working, as my Bluetooth stack wasn't up to date and none of my devices were seeing the gamepad. I plugged in the USB wireless dongle, but no joy.

It turns out the way to find out what is going on with USB devices is the command:

    lsusb

This gives a list of attached devices, along with a vendor ID and product ID (in the form 20bc:5500).

It was identifying my dongle as an Xbox 360 controller. Yay! That was something at least, so I installed an Xbox 360 gamepad driver, following this guide:

    https://unixblogger.com/2016/05/31/how-to-get-your-xbox-360-wireless-controller-working-under-your-linux-box/

    sudo apt-get install xboxdrv

    sudo xboxdrv --detach-kernel-driver

It still didn't seem to do anything, but I needed to test whether it worked, so I installed a joystick test app, 'jstest-gtk', using apt-get.

The Xbox gamepad showed up but didn't respond.

Then I remembered reading in the gamepad manual that I might have to switch the controller mode for PC from D-input to X-input. I did this and it appeared as a PS3 controller (with a different USB ID), and it was working in the jstest app!! :)

    Under Android Emulator

The next stage was to get it working in the emulator. I gather the emulator used with Android Studio is QEMU, and I found this article:

    https://stackoverflow.com/questions/7875061/connect-usb-device-to-android-emulator

    I followed the instructions here, basically:

Navigate to the emulator directory in the Android SDK.

Then run it from the command line:

    ./emulator -avd YOUR_VM -qemu -usb -usbdevice host:1234:abcd

where the two values after host: are your USB vendor and product IDs from the lsusb command.

This doesn't work straight off; you need to give it a udev rule to be able to talk to the USB port. I think this grants the permission, but I'm not sure.

    http://reactivated.net/writing_udev_rules.html

Navigate to the /etc/udev/rules.d folder.

You will need to create a file in there with your rules, and you will need root privileges to do it (choose to open the folder as root in Nemo, or use the appropriate method for your OS).

    I created a file called '10-local.rules' following the article.

    In this I inserted the udev rule suggested in the stackoverflow article:

    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="2563", ATTRS{idProduct}=="0575", TAG+="uaccess"
    LABEL="end_skip_usb"
    SUBSYSTEM!="usb", GOTO="end_skip_usb"
    ATTRS{idVendor}=="20bc", ATTRS{idProduct}=="5500", TAG+="uaccess"
    LABEL="end_skip_usb"

Note that I actually put in rules for two IDs, because the USB vendor ID seemed to change once I had the emulator running; it originally gave me an UNKNOWN USB DEVICE error or some such in the emulator, so watch out in case the USB ID changes on you. I suspect only the latter rule was needed in the end.

To get the udev rules 'refreshed', I unplugged and replugged the USB dongle. This may be necessary.

Once all this was done and the emulator was 'cold booted' (you may need to wipe the data first for it to work), the emulator started, connected to the USB gamepad, and it worked! :)

This whole procedure was a bit daunting for me as a Linux newbie, but if at first you don't succeed, keep trying and googling. Because the USB device is simply passed through to the emulator, the first step of getting it recognised by Linux itself may not be necessary; I'm not sure. And a modified version of the technique may work for getting a gamepad working under Windows.

10. NeutrinoParticles is a real-time particle effects editor, and a new, extraordinary editor on the market: www.neutrinoparticles.com

What makes this editor recognisably different from other editors is that it allows you to export effects to source code in JavaScript or C#, which makes them extremely compact and fast. And it is absolutely FREE.

    MacOS and Linux users may use WINE to run the editor for now. Native packages will be available soon.


The software provides renderers for JavaScript (PIXI engine, generic HTML) and for C# (Unity, generic C#).

For example, if you use PIXI in your projects, you only need to copy/paste several lines of code to make it work.


11. We've uploaded a video to our Dailymotion account that is a full-length recording of the intro text scene we've been working on for the game. This was interesting to do because we've never done animations that are timed and sequenced. The intro quickly details the lore a bit: who you are (the good guy), who the bad guy is, and some history behind your powers and training. This is one of the first things we've made for the game that is actually a playable part (not a test level, or rigging to get controls right). Follow for more updates on the game; we hope to show the tutorial section of the game quite soon.

    Crystal Dissention Intro Text Trailer - Dailymotion video


    Recent Entries

    Game Programming Resources


Rodrigo Monteiro, who has been making games for twenty years now, started a thread on Twitter to share his favorite game programming resources. I then collected those and a few responses and indexed them into a Twitter moment.

    Here’s what was in the thread: 

    Game Networking: https://gafferongames.com/categories/game-networking/

Development and Deployment of Multiplayer Online Games by IT Hare / No Bugs' Hare is a multiplayer game programming resource split into nine volumes, the first of which is available on Amazon.

    Linear Algebra: 

     

    Geometry – Separating Axis Theorem (for collision detection): http://www.metanetsoftware.com/technique/tutorialA.html

    How to implement 2D platformer games: http://higherorderfun.com/blog/2012/05/20/the-guide-to-implementing-2d-platformers/

    Pathfinding: https://www.redblobgames.com/pathfinding/a-star/introduction.html

    OpenGL Tutorial: https://learnopengl.com/

    Audio Programming: https://jackschaedler.github.io/circles-sines-signals/index.html

    OpenAL Effects Extension Guide (for game audio): http://kcat.strangesoft.net/misc-downloads/Effects%20Extension%20Guide.pdf

    Entity Component Systems provide an alternative to object-oriented programming.

    Entity Systems are the future of MMOG development: http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/

    What is an entity system framework for game development? http://www.richardlord.net/blog/ecs/what-is-an-entity-framework.html

    Understanding Component-Entity-Systems: https://www.gamedev.net/articles/programming/general-and-gameplay-programming/understanding-component-entity-systems-r3013/

    Alan Zucconi blogs about shaders and game math for developers on his site: https://www.alanzucconi.com/tutorials/

    AI Steering Behaviours: http://www.red3d.com/cwr/boids/

    Bartosz Olszewski blogs about game programming here: gamesarchitecture.com

    How to write a shader to scale pixel art: https://colececil.io/blog/2017/scaling-pixel-art-without-destroying-it/

Here's a podcast on C++ programming: http://cppcast.com/archives/

    http://gameprogrammingpatterns.com/

    Note: This post was originally published on my blog as game programming resources.

12. Main character: Zoile.

First shared concept art! Please give me your thoughts about him (comment below, share and subscribe). My first intention was to reveal the plot, but I felt like the blog was missing some visual support. 10 followers and we will unlock a new character next week :)

Today's question: who were your favorite game, cartoon or comic book heroes, and why?


Zoile is the main character of a group of three. In our game, you will have the chance to control 3 main characters, each with individual weaknesses and strengths. You will be able to control them all at the same time, all the time! How you will control them will be covered in an upcoming post, but it promises to be a fun and unique way of handling different skill sets. I remember one of my all-time favorite arcade games, the 1989 Teenage Mutant Ninja Turtles. My favorite character was Michelangelo, but Donatello had this long stick that could reach enemies from a much greater distance, making him the best choice to fight the first boss, Rocksteady, who I felt was the strongest in the game besides Shredder. It was always annoying to have to choose between the character I liked and the one most capable of handling the situation. I felt the same thing playing many RPGs like Diablo, Diablo II and WoW, where you just can't invest enough time in all the classes, and you always dreamed of mixing and matching class skills to build the best possible character for your play style. How we approach that problem will be revealed soon.

     

     


At this point in time, you may feel that ETs are outdated, unrefreshing and filled with clichés. Think again. I swear that our anti-heroes are different, and that the plot and the philosophical, humorous twists that will be explored in this universe are very different and refreshing from what you have in mind.

Zoile is the self-proclaimed leader of our squad. He is particularly strong physically for his race and possesses a quick wit; few have dared to challenge him. His descent gave him a high level of self-confidence, and Zoile quickly became full of himself. Afraid of no one, he sees himself as one of the best fighters of his race and has an unwavering pride in his homeland. He was born to a war hero and has always dreamed of becoming a full member of the prestigious flying squad of the 452b Pekler Interplanetary Army. After several failures in the admission exams, Zoile became very mean and bitter towards others. After 4 years of attempts, he was finally accepted as a low-rank recruit. It is said that his father intervened with the council so he could have the chance to demonstrate his value. Unfortunately, many disciplinary problems and conflicts with other members confined him to low-rank duties. After several difficult years with the squad, the high council allowed him to create a small squadron with two members of his choice for his first mission. Unsurprisingly, he selected the only two members with whom he had developed a sincere friendship. The two accepted the honor after a convincing patriotic speech from Zoile, and since then he has taken up his role with great honor. Despite his strong will and frequent fights with his two friends, he is willing to give everything to show everyone that his team is the best, because he is the best leader, and a good leader can bring weak soldiers to great honor.

You can see Zoile with and without his armor set, along with a typical ray-gun weapon. The Zarin are a skinny humanoid race from a nearby planet.

The Zarin, as a race, are mostly peaceful, minding their own business and seeking knowledge. Their evolution has run as long as humanity's, but they took a different path. They are scared of a recent discovery they made that puts their world at risk of what they call the "multicolored tall people invasion". More info about the Zarin will be shared in an upcoming post!

     

Thanks, and please share your opinion; it's very valuable. 5 comments and we'll publish a video tutorial on how to draw him.

     

     

     

  13. I'm a man on a Mobile Gaming Quest (MGQ) to play a new mobile game every day, and document my first impressions here and on YouTube. Below is the latest episode.

    Run, swipe, die. Rinse and repeat. Seriously, Glitch Dash looks gorgeous but might just be the most difficult arcade game I've played on mobile (well, apart from Flappy Bird). Avoiding the swinging hammers and laser beams is pure torture, but extremely satisfying when you finally complete each level. 

The game's currently in beta, but I decided to include it as I was having a lot of fun with it, and I figured some of you might want to sign up for the beta.

In terms of monetization, you start out with ten lives, which you'll quickly burn through, and you get ten new lives after 120 seconds, or immediately by watching an ad. Luckily, you can also remove the life system entirely through a $2 IAP.

    My thoughts on Glitch Dash:


    Google Play: https://www.signupanywhere.com/signup/nvip99qq
    iOS: https://www.signupanywhere.com/signup/nvip99qq

    Subscribe on YouTube for more commentaries: https://goo.gl/xKhGjh
    Or join me on Facebook: https://www.facebook.com/mobilegamefan/
    Or Instagram: https://www.instagram.com/nimblethoryt/
    Or Twitter: https://twitter.com/nimblethor

  14. Latest Entry

    New inventory interface framework

    DOMEN KONESKI

I established a new inventory interface framework by writing a new logical layer on top of the current system. By doing this we can now save a lot of coding and design time. The system is generic, which means UI elements are built on the fly and asynchronously. For example, we load sprites from the pool if they already exist in memory; otherwise we load them asynchronously from the resources on disk. Everything feels seamless now, which was the primary goal while recreating the system.
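As an illustration of that pool-then-async pattern, here is a minimal Unity-style sketch. The names are hypothetical; this is a sketch of the general technique, not Floatlands' actual code:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: serve a sprite from an in-memory pool when possible,
// otherwise load it asynchronously from Resources (disk) and cache it.
public class SpritePool : MonoBehaviour
{
    readonly Dictionary<string, Sprite> pool = new Dictionary<string, Sprite>();

    public void GetSprite(string spriteName, System.Action<Sprite> onReady)
    {
        if (pool.TryGetValue(spriteName, out var cached))
        {
            onReady(cached);  // already in memory: no loading hitch
            return;
        }
        StartCoroutine(LoadAsync(spriteName, onReady));
    }

    System.Collections.IEnumerator LoadAsync(string spriteName,
                                             System.Action<Sprite> onReady)
    {
        var request = Resources.LoadAsync<Sprite>(spriteName);  // non-blocking
        yield return request;
        var sprite = request.asset as Sprite;
        pool[spriteName] = sprite;  // cache for next time
        onReady(sprite);
    }
}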


The new logical layer introduced new item manipulation techniques to use while your inventory is open:

• By hovering over an item, a description panel shows up, with information about the item such as its name, description and quality;
• By dragging one item onto another, swapping or merging becomes possible;
• If you right-click on an item, you can split it if there is enough room in your inventory;
• If you left-click while holding left-shift, you can transfer an item instantly to another inventory panel.


    Harvesting world resources

    DOMEN KONESKI

World resources such as stone veins, metal veins, niter veins, crystals and trees now have a harvesting feature: items (wooden logs, ores) drop while you mine/harvest the resource, which significantly improves resource gathering in Floatlands.


    Critters

    ANDREJ KREBS

We started adding some more critters to the game. These will be harmless small animals and robots that will add more life to the game. So far I've modeled, rigged and animated the hermit crab and the spherical robot. I also added a missing animation to the rabbit.

    Foliage

    ANDREJ KREBS

I have prepared some undergrowth foliage to make the nature more varied and interesting. It's made in a similar way to the grass, bushes and treetops, with alpha-transparent textures on planes.

     


     

    Companions

    MITO HORVAT

Roaming around the world can sometimes feel a bit lonely, so we decided to implement a companion to follow you and keep you company as you play. Companions won't just follow you around and be useless; they'll be a valuable asset early in the game. For instance, they'll help you gather resources with their set of utilities. They'll even provide a source of light during the night!

As with any concept design task I'm given, I usually sketch a bunch of ideas in a small collage. In this case we then printed it, and everyone in the office picked their 3 favourites.


Based on that decision, I combined the elements of the companions everyone picked and came up with the following concept. It's also important that the companion is built from elements you can find in the game, to give it the game's unique aesthetic look.

  15. I took the last two weeks of December off for holidays, so no production was done for Spellbound during that time. I met up with my friend Russel (Magforce7) for an afternoon at my office and gave him a demo of Spellbound in VR. He works for Firaxis, so it was interesting to compare notes on development and production. Without a doubt, he's a lot more experienced with production and development, so I tried to glean as many tips and tricks as I could. It was also his first time trying VR, so I gave him a bunch of quick VR demos so that he could get familiar with the medium and how to interface with it. It's interesting to compare the differences between producing a traditional video game vs. a room scale VR video game.

In terms of production, I've written out the complete narrative manuscript for Episode 1 of Spellbound and have begun shopping it around to anyone willing to read it. It's not "done" by any stretch; it's just the first draft, and the first draft is always going to be susceptible to lots of revisions. Currently, it's about 40 pages in length. That's about what I had expected. Now, I need to go through and do a ton of polishing passes. I think of the story sort of like one of those JPG images which load over a slow internet connection. The very first version of the image is a highly artifacted mess which barely holds a semblance to the actual image, but with each pass the resolution improves and the details get more refined, until you end up with a perfectly clear image.

With regards to writing narrative for a VR game, I think the pass process is going to be a lot more convoluted. The first pass is just trying to write the story itself and figure out what the story even is. The writer explores a bunch of different directions, and the final product consists of the choices which yield the most interesting story. But you can't just take a writer's story, plop it into a VR game and call it perfect. In fact, the writer must keep in mind the medium they're writing for and what the capabilities of that medium are. If you're writing a script for a movie, you have to think about what scenes you're going to create and possibly consider a shot list, and also think about the actors who will portray your characters and the acting style. You can effectively frame the shot of the scene to show exactly what you want the audience to see. That's a great amount of power and control over the audience experience. Writing for VR is completely backwards. I have to preface this by saying that I'm a novice writer and have never written a script, much less a script for VR, so take my words with a hefty grain of salt. My writing technique mostly consists of putting myself into the body of the character. I am that character. That character has a personal history. A personality. A style. Stated interests, and unstated secret interests and ambitions. Character flaws, and character strengths. I see the scene from the eyes of the character, see the state of the world, listen to what was just said, and then react based on the traits of my embodied character. The character should always be trying to progress their ambitions. Character conflict should happen when ambitions collide. When it comes to VR games, the protagonist is the player themselves, so you have to keep in mind that the protagonist has agency which the writer can't control. They experience the story from the first person perspective, through the eyes of the character they embody. So, whatever happens to the main character also happens to the player. With VR, the player brings their own body and hands into the scene, so those are things the writer can interface with. Maybe the player gives a bow to a king? What if they don't bow before royalty? Maybe when you meet a new character, they extend a hand to give a handshake? What happens if you don't shake their hand? Maybe a character comes forward to give the player a huge hug? The secret sauce for VR is finding these new ways to develop interpersonal connections with characters in the world and using that to drive story and player experience. I try to keep this at the forefront of my mind when writing for VR -- first-hand player experience is king. I also want to give my characters depth, so I do this mostly through subtle narrative exposition, mostly in the form of ambient banter between characters. For the sake of simplicity of production, the main character doesn't have narrative conversation choices. This means I don't have to create conversation trees or user interfaces for dialogue choices, and the flow of dialogue can be seamless and uninterrupted.

I am starting auditions for character voices. I've got a list of local voice talent and am asking a few of them to send me a few demo lines from the manuscript and a quote for their day rates. It's hugely inspiring to hear the voices of the characters saying the lines I've written. It feels like these characters might actually exist somewhere outside of my imagination, and I owe it to them to give them the very best lines I can come up with to portray their nature and ambitions correctly. A few people have read my manuscript and given mostly positive feedback, so that suggests that I'm roughly on the right track. I'm going to spend a few days taking it to various writers' meetup groups and getting critical feedback, which will help me immensely in getting to a higher level of polish and clarity. If you're interested in reading the manuscript and my production notes, feel free to read the google doc and supply feedback in the comments below:

    https://docs.google.com/document/d/1IvNYNf9NqtdikD6UZuGq-rUo9yU5LVqqgIlWsz6n2Qs/edit?usp=sharing

    (Note: It's a work in progress, so you can see changes happening live as I edit it.)

The ideal is to write a story which is so compelling that it grabs people and makes them want to read it. I want to be able to drop a 40 page manuscript in someone's lap and tell them to read it. They'll be thinking, "oh god, more bullshit. I don't want to read this crappy novice writing. I'll humor them and read two pages." So, they read two pages. It's good. They decide to read another page. It's also good. In fact, it's getting better. They turn to the next page to keep going. Wow. It's actually a decent story. They keep turning pages. Forty pages later, they're surprised to have read the whole thing and they are left wanting more. It's a pleasant surprise. The story should be good enough that it stands strongly on its own legs. It doesn't need anything else to be a compelling experience. Now, if you experience the same story in VR, and characters act out their lines, and the voice acting is stellar, the experience of the story is just multiplied by the talent and quality. This is the ideal I'm shooting for. Spellbound will be a story-centered VR game, rather than a game which happens to have a shallow story layered on top. It's worth taking the time to nail the story and get it right, so I'm taking my time.

When the manuscript is complete, I'll have voice actors voice each of the characters. I really don't want to have to do a lot of dialogue resamples, so I need to make sure that the first time a line is voiced is also the last time it's voiced. The goal is to avoid revisions. So, how do I do this? My current plan is to polish the story and get it as close to perfection as possible. Hiring voice actors costs money. When I drop voiced lines into the game, I am going to need to know whether the line works in the current scene with the current context. So, a part of the creative writing process will require me to experience the scene and adapt the writing for context and length through a bunch of iterations. I'm going to voice act my own characters with a crappy headphone mic and use these assets as placeholders. It'll be a really good way for me to quickly iterate on the character interactions and player experience. I kind of feel silly, like I'm just playing with dolls who are having a conversation with each other. But hey, maybe that's really the core of script writing in Hollywood too?

On a personal note, I've decided to give up all social media for a month. No Facebook, no Twitter, no Reddit, no YouTube, etc. The primary reason is that it costs me too much time. Typically, my day starts by waking up, pulling out my laptop and checking Twitter and Facebook for updates to my news feed. That costs me about 30-45 minutes before I get out of bed. Then I go to work. I get to work an hour later, start a build or compile, and since it's going to take 5 minutes to complete, I decide, "Hey, I'll spend five minutes checking Facebook while I wait." That five minutes turns into twenty minutes without me realizing it. And this happens ten times a day. I can easily waste hours of my day on social media without consciously realizing it. It adds up, especially over the course of days and weeks. And for what? To stay updated and informed on the latest developments in my news feeds? Why do I actually care about that? What value does it add to my life? How is my life better? Or, is my life actually better? What if social media is actually unhealthy? What if it's like cigarettes? Cigarettes cause lung cancer with prolonged use, so maybe social media causes mental health problems like depression, low self-worth and narcissism with prolonged use? What if social media is inherently an anti-social activity? Anyways, I've consciously decided to abstain from social media for a full month as an experiment. So far, I'm five days in and realizing how much I was using it as an outlet for self-expression. Something happens to me and my default reaction is, "Oh, this would be good to share in a post!", and now I realize, "Oh, I can't share this on social media. Who am I actually trying to share this with? Why am I trying to share this? Can I just forget about sharing and just relish the experience in this fleeting moment?" The secondary effect of abstaining from social media is that I'm also trying to pull away from technology a bit more so I can find a healthier balance between technology and life. Currently, if I'm not staring at a screen, I'm at a loss for what to do with my time. Should I really live my whole life staring at glowing rectangles? Is there more to life than that? How would I feel if I'm lying on my deathbed and reflecting on my life, realizing that I spent most of it looking at screens?

I need new hobbies and passions outside of screens. So, I've picked up my old love for reading by starting in on some fantasy books. Currently, I'm well on my way through "The Way of Kings" by Brandon Sanderson. I'm reading his first book slowly, digesting it sentence by sentence, and thinking about it from the eyes of a writer instead of a reader. It's an amazingly different experience. He's got some very clever lines in his book, and there are some great pieces of exposition which he uses as a proxy to share his own attitudes and life philosophies. I am going to steal some of these writing techniques and use them myself.

I'm also still doing VR contract work on the side in order to make money to finance my game project. The side work is picking up slightly and I'm getting better at it. I have this ambitious idea for a new way to create VR content using 360 video and pictures. Most clients are trying to capture an experience or create a tour of something in VR and take audiences through it. Essentially, it's mostly just video captured in 360 and then projected onto the inside of a sphere, with the player camera set at the center of the sphere. It's somewhat simple to implement. My critique is that this isn't a very compelling virtual reality experience, because it's really just a passive experience in a movie theater where the screen wraps all around the viewer. There's very little interaction. So, my idea is to flip this around. I'd like to take a 360 camera and place it at various locations, take a photograph/video, and then move the camera. Instead of having a cut to the next scene, the viewer decides when to cut and where to cut. So, let's pretend that we're creating a virtual reality hike. We incrementally move the 360 camera down the trail, 50 feet at a time, for the entire length of the hike. A hike may not be perfectly linear; there may be areas where you take a detour to experience a lookout on the side of the trail. So, on the conceptual data structure level, we are going to have a connected node graph arranged spatially, and the viewer will transition between connected nodes based on what direction they want to go on the hiking trail. I'll have ambisonic audio recordings, so you'll be able to hear birds chirping in the trees and a babbling brook on the side of the trail, etc. The key difference here is that the viewer drives the pace of the experience, so they can spend as much or as little time as they want experiencing an environment/scene, and since they can control which nodes to visit next, they have agency over their entire experience. This is the magic of VR, and if I get a prototype proof of concept working, I think it can be a new type of service to sell to clients. I can go around Washington State and create virtual recreations of hikes for people to experience. There are some beautiful hikes through the Cascade mountains. We have a desert on the eastern half of Washington, filled with sage brush and basalt lava rocks. We also have a temperate rainforest on the Olympic peninsula, where we get 300+ inches of rain a year, with six feet of moss hanging off of tree branches. The geography, flora and fauna are somewhat unique to Washington state, so if I can create a library of interactive virtual reality tours of various parts of our state, it would be a pretty cool experience. If it is a popular form of content, I can expand my content library by offering virtual reality tours of other parts of the world people wouldn't otherwise be able to visit. Would you like to explore the tropical jungles of Costa Rica? Would you like to climb the mountains of Nepal? Would you like to walk around in Antarctica? Would you like to go to the Eiffel Tower? If I do this right, I could create a fun VR travel channel and add some educational elements to the experience. It would also be a good way for me to get out of the office and experience the world.
I'm currently working on building a prototype proof of concept to figure out the technical side and user interface, and will probably have something rough built out by the end of the month. This could turn into a cool new way to do interactive cinema in VR. I haven't seen anyone else do something like this before, but I may just be under-informed.
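To make the node-graph idea concrete, here is a minimal Unity-flavored sketch of the data structure described above. All names are hypothetical and the details (input, audio, video playback) are omitted; it's a sketch of the concept, not the actual prototype:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the spatially arranged node graph for a 360 tour:
// each node is one 360 capture point; the viewer picks which neighbour to visit.
public class TourNode
{
    public string Name;                 // e.g. "Trailhead", "Lookout"
    public Material Sphere360;          // 360 photo/video projected on a sphere
    public List<TourNode> Neighbours = new List<TourNode>();
}

public class TourPlayer : MonoBehaviour
{
    public Renderer sphereRenderer;     // inside-out sphere around the camera
    TourNode current;

    public void Visit(TourNode node)
    {
        current = node;
        sphereRenderer.material = node.Sphere360;  // swap in the new capture
        // the node's ambisonic audio would be swapped in here as well
    }

    // Called by the UI when the viewer picks a direction to go.
    public void Go(int neighbourIndex)
    {
        if (current != null && neighbourIndex >= 0 &&
            neighbourIndex < current.Neighbours.Count)
            Visit(current.Neighbours[neighbourIndex]);
    }
}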

  16. Meteor Bombardment 1 Devblog 01

    • Genre: Fixed Shooter
    • Engine: Unity
    • Platform: PC
    • Art Style: 8-bit Pixel Graphics
    • Current State: Technical Design Phase - 30% Complete

     

    Game Description

Aliens from a distant planet have begun redirecting meteors and attack ships at Earth in order to wipe out as much of the population as possible before they invade. Using the only salvaged alien attack ship, you must work to destroy the meteors before they impact Earth and kill off its population.

     

    Development Status Overview

Conceptual design for the game is complete. Technical design has begun, which involves defining how meteors and attack ships will travel and how many hits are needed to destroy meteors and attack ships, as well as level design theory.

Attached to this blog is the album for the game, which includes the conceptual design image. When technical design is completed, images regarding the technical aspects will be uploaded to the same album, and a new developer blog will be posted.

     

    Project General Goals

    1. Concept Design
    2. Technical Design
    3. Recruit Team
    4. Develop Game
    5. Test Game
    6. Launch as Free Title
  17. 2018 has already been a busy year. The Gears of Eden team has been hard at work as we prep for our Alpha 2 release. For our art and dev team, that means designing and implementing in-game resources. For our writing team, that means research and planning. But, combined, that means we get the chance to test our cool new toys and show them off for everyone to see.

    To accomplish this, our team recently held our first Gears of Eden Discord Day. In case you don't know, Discord is an app that allows gamedevs to make their own chat servers and share images and info with people who join (here's YOUR invite). We shared our new rover design, talked about influences for the game, explained how this project got started, and streamed the first meeting between our old rover and the new one. Check out the first meeting: https://www.twitch.tv/videos/208097308?t=12m04s

Since then, the art team has been hard at work on base design and updating our UI. We've gone over quite a few models looking to find the right fit for our game. Because the bases are to be used by rovers, and are modular (read: expandable!), we decided that the design must be functional rather than just aesthetically pleasing.


    The images above were just a few samples that we reviewed, and we are getting closer and closer to deciding what the base design will be for Alpha 2. Based on these images, our team was able to render a sample of the base in-game.


As mentioned, our UI is being updated to be more intuitive and provide better information to players. Instead of clicking on the gears to craft using the inventory, there will be separate tabs. The Crafting tab will show all blueprints collected, which you will then be able to craft from if you have the resources.

     

    With that in mind, we've been doing some more Twitch streaming. Some of that has been development videos, and some of that has been members of our team showing off some games we enjoy playing. Here you can see Sledge going over the new UI, testing out the new rover, and doing some crafting: https://www.twitch.tv/videos/210491947?t=02m54s

     

We're making a lot of progress, but Alpha 2 is going to be a critical phase for us. Right now, we're doing all our development at our own cost, with a small team. Once Alpha 2 is out, we're going to have to find a way to secure some financial backing if we want to finish our demo in a reasonable timeframe. That's where you come in. We really, really need your help in growing our audience. Please engage with us, and follow us on our various social media accounts to help spread the word to others. Like, comment, share. And, if you're able, you could always support our endeavors at our donation rewards page, or through Patreon. We literally cannot make this game without you. Thank you so much!

    This is going to be so fun! We can't wait to show you everything we've been working on these past few months, and it'll be a great stamp on this stage of development! If you want to see how we get all this done as we get it done, follow us on Twitter, Twitch, and Facebook for all the latest and greatest news on Gears of Eden.

18. Corona's engineers have slipped a couple of really cool features into recent daily builds that may not have caught your attention.

    Emitter Particles and Groups

Previously, the particles emitted from an emitter became part of the stage until they expired. This would create problems with the relative positioning of the particles if your app needed to move the emitter: if you moved the emitter as part of a parent group, it didn't create a natural look. Emitters can now have their particles be part of the parent group the emitter is in. This was added in daily build 2018.3199.

To use this feature, you can set emitter.absolutePosition to the parent group of the emitter. Previously you had the options of true or false to determine whether the positioning was absolute or relative to the emitter. By passing a group, positioning is now relative to that group. You can download a sample project to see the feature in action.

    Controlling iOS system gestures

On iOS, when you swipe down from the top, you get the Notifications panel. When you swipe up, you get the control panel. If you have UI elements in your game near the top or bottom of the screen, in areas that are likely to attract swipe gestures, it would be nice to be able to control that behavior. Now you can! Starting with daily build 2018.3193, you can use native.setProperty( "preferredScreenEdgesDeferringSystemGestures", true ) to have those swipes just show a swipe arrow, with a second swipe required to activate the panels.

    We have more great things in the pipeline so watch this space for news and updates.


    View the full article

  19.  

Mobile games still represent the fastest-growing niche among apps. The mobile game market worldwide is expected to reach $46 billion in the present year. In spite of this staggering growth, just 10% of game apps can actually be called commercially successful in terms of the growth and ROI they achieve. Naturally, the rethinking of strategy and the search for more effective ways to market mobile games will continue. Based on the experience of the recent past, what are the key marketing tips for mobile games we can consider in 2018? Let us have a look.


    1) Localization will be key

To make your game connect with its audience in different parts of the world, it needs to speak the language of your audience in each local market. Many markets are far from dominated by any single language, and often these markets offer the bigger untapped potential for new game apps. While localising the game's language is crucial, there are other considerations as well.

Localisation should also extend to the selection of payment methods, which should be offered according to the penetration of those methods in the respective markets. For instance, markets with lower credit card penetration should be given other methods of payment for game players. In some markets, third-party payment solutions or third-party publishing rights may be good solutions, while in others they may not.

2) Consider the latest in-app purchase (IAP) trends

Throughout 2017, in-app purchases dominated the mobile app marketing space, and they have been the most effective revenue-earning strategy. In-app purchases earned $37 billion in app revenue in 2017 alone. In spite of the fact that just 5% of game players actually end up spending money through IAPs, this monetisation avenue is credited with 2000% more profit compared to other avenues.

In the months to come, in-app purchases will become more specific in targeting game players, with new tools and tweaks like limited-period events, behavioral incentives and dynamic pricing. We can also expect more games to adopt several different types of virtual currency for payment. Specially targeted offers for certain game-playing audiences will also be a key trend.

    3) Consider these social media hacks

Social media will continue to feature prominently in mobile game marketing. A few effective social media hacks and strategies will dominate mobile game marketing in 2018 and beyond.

When planning the marketing for your new game app on social media, you need to prioritise social platforms based on the type of user your game app is intended for. There are plenty of social platforms, but an app that works well on Facebook may not work well on Pinterest. It obviously depends on the audience.

When it comes to marketing your game on Facebook, you need to build up excitement for the game for several months prior to launch, and then launch the game app on the back of that reaction to generate maximum buzz.

Pinterest can be a great medium if you can add various screenshots and app-related images to the platform's visual database in an appealing manner. Pinterest works great if you have a separate website for the app to draw and engage traffic.

Reddit, on the other hand, can be a good platform for tracking information and spotting marketing opportunities for your game app. Lastly, make use of social analytics to track and monitor your game-playing audience and their activities.

    4) Paid games

You may have already discarded paid apps as a monetisation strategy, but in the last year alone there have been several high-grossing paid mobile games. In fact, paid apps generated $29 billion in revenue. Yes, we know that nearly 98% of the apps in the Play Store are free, but to your surprise, many of these free apps now come with a mix of strategies, offering paid sister apps. Value additions like new graphic content in these paid sister apps can actually boost engagement from the audience.

    5) Game ads rewarding players

Mobile game players are more accustomed to promotional content than other app users. For in-app ads to garner any substantial revenue, a game often needs a huge audience of players. This is why game ads need to be thought of in a completely new light. Rewarding game players for watching game ads has emerged as a really effective strategy: around 80% of game players prefer watching video ads in exchange for game rewards.

    6) In-game sponsorship

Sponsored content within mobile games has remained another popular aspect of many mobile games in recent years. It started back in 2015, when Angry Birds players were allowed to kill the awful pigs with Honey Nut Cheerios for 2 whole weeks as a cross-promotion. Soon, several other games followed this trend, incorporating elements of other apps into the gameplay for sponsorship purposes. It works especially well for developers who have multiple game apps with a few successful ones across the board. In the present year, we can expect mobile game developers to reduce their dependence on in-app purchases by embracing these rewarded and sponsored ads.

    7) Merchandising game products

Merchandising game-related products to players is still effective for many mobile games, but it requires at least a certain level of commercial success for the game app. Only when your game has a widespread following and enjoys niche branding can you market in-game characters shaped into real-life products like t-shirts, stuffed toys, game-inspired cars, and even notebooks or coffee mugs.

    In conclusion

All these strategies and avenues seem to have one thing in common: connecting with the audience more specifically and in a targeted manner. In 2018, we can expect these strategies to evolve further.


    Recent Entries

It's been a while since I've been on this site. I've been busy at work, but as with all contracting, sometimes work gets light, which is the case as of the new year. So I saw this challenge and thought it might be fun to brush up on my skills. I've been working mainly with embedded systems and C#, so I haven't touched C++ in a while, and when I have, it's been with an old compiler that's not even C++11 compliant. So I installed Visual Studio 2017 and decided to make the best use of it.

Time is short, and I don't exactly have media to use, so I decided to just go out and start to learn Direct2D. I have little experience with any modern form of C++, and zero experience with Direct2D and XAudio2. Whereas I didn't mind learning Direct2D, I fully admit XAudio2 presented a bit of a problem. In the end, I blatantly stole Microsoft's tutorial and have a barebones sound system working. And unlike the Direct2D part, I didn't bother to spend much time learning what it does, so it's still a mystery to me. I'm not entirely sure I released everything correctly. The documentation said releasing IXAudio2 would release its objects, and when I tried to manually delete buffers, things blew up, so I just let it be. There are most likely memory leaks there.

As you can plainly tell, this is by far the worst entry in the challenge. This is as much a learning experience as an attempt to get something out the door. I figured if I couldn't be anything close to modern, I could at least be efficient. And I failed at that miserably. Originally I wrote this in C. Excluding the audio files, it came out to a whopping 16 KB in size, and memory usage was roughly 6 MB. Then I decided to start cleaning up my spaghetti code (I said start, I never said I finished), and every time I thought I was getting more clever, the program grew in size and memory usage. As of right now, it's 99 KB and takes up roughly 30 MB of RAM at 720p resolution. I haven't really checked for memory leaks yet, and I'm sure they exist (beyond just the audio). In reality, I'd prefer to clean up a lot of the code. (And I found a few errors with memory management, so I need to track down where I went wrong. I removed memory allocation for the time being and pushed everything onto the stack.)

    The other thing is, this code is ugly.  Towards the end, I just started taking a patchwork approach rather than keeping it clean.  I was originally hoping for modularity, but that got destroyed later on.  And I'd love to replace the pointers that are thrown liberally throughout the code with smart pointers.

Unlike the other entries, I only have missiles for the gameplay. I didn't include UFOs, airplanes, smart bombs or warheads; I just don't feel I had enough time. Yes, there are still a couple of weeks to go, but I'd prefer to clean up what I have rather than add new features. And unfortunately, I was a bit shortsighted, which caused problems later on. There are multiple places where the code is far more verbose than it needs to be, because I wasn't properly focused on the correct areas. I wanted to make it scalable, and I made the game a 1:1 ratio internally yet displayed it as 16:9 to the user, which caused massive problems later on. I ended up having to do math on pretty much every piece of graphics and game logic, whereas if I had just displayed it as 1:1, or handled the internals in 16:9, I could have shaved off a thousand lines of code. It also caused problems with hit detection, which is another reason I didn't bother adding anything but missiles.

The hit detection was a mess. I had everything mapped out. The game was going to work whether a missile went 1 pixel a second or 1000 pixels a nanosecond. Calculating moving objects and collisions with circles or boxes is easy. Unfortunately, I was using ellipses. And while there are formulas for that, I'll admit my eyes started to glaze over at the amount of math that would be required. In the end, I decided to leave it buggy and only detect whether a missile is currently inside a static ellipse, which is easy and fast enough to calculate. I mean, I guess if the program freezes up, the user was going to lose a city/silo anyway, or lose it if the missile was traveling at light speed, but it's still a bug, and it still annoys me, especially since everything else is calculated regardless of what the user sees. (*EDIT* Thinking about this more, the solution was right in front of me the entire time: just squish the world back to 1:1 and do the hit detection that way.)
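For what it's worth, the squish trick from the EDIT is only a couple of lines. Here is a hedged sketch of the idea (written in C# rather than the entry's C++, since the math is language-agnostic; the names are illustrative):

// Sketch of the "squish back to 1:1" ellipse test: scale the offsets so the
// ellipse becomes a unit circle, then do a cheap point-in-circle check.
static bool InsideEllipse(float px, float py,   // point (missile position)
                          float cx, float cy,   // ellipse centre
                          float rx, float ry)   // ellipse radii
{
    float dx = (px - cx) / rx;  // squish x by the horizontal radius
    float dy = (py - cy) / ry;  // squish y by the vertical radius
    return dx * dx + dy * dy <= 1f;  // unit-circle test
}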

    Controls:

1, 2 and 3 control the missiles, and the arrow keys control the cursor. Escape brings up the menu, and Enter selects. I've only tested this on Windows 10, as I'm pretty sure it requires those libraries. It's a 64-bit executable.

    [Screenshot: MCDemo.png]

  • Blog Comments

    • Since I've given away so much of the spine of the Astral Invasion story that I wasn't originally meaning to, I'll add this as well, especially considering the Triumph songs I posted and how perfect the final lines of this one are.  I'll also mention that this is all closely related to the Time of the Titans chess set in Armageddon Chess, and only someone who has gotten into the story spread across this whole blog, and really taken it in, is likely to make much sense of all of this beyond what is apparent on the surface.  Although after reading just Space Hockey, and what I have given away about Astral Invasion in it and since posting it, the trailer and theme songs of Fallen Angel Rising found in the Armageddon Chess blog post will have a lot more meaning to you.
    • That's really just another way of saying "they don't hire game designers in this business," which they've been insisting isn't true for 35 years.  They get downright offended by it.  But like I said in an earlier post, I've understood for a long time now that it is because they have a completely different definition of the term.  I might, as a hobby after I give up on this, mess with something like GameMaker just to do something for myself.  That would be a very long road to making my games, and I don't have a long road anymore.  I will be 50 later this year, and I was born with a genetic condition that makes me already older than I should expect to live.  But my family has a history of living to an old age for someone with this problem, so I probably only have 10 or 15 years left.  Making simple board-game-like things in a generic editor is not going to get me to making PDU games in whatever time I have left.  Really, I should have been making computer games in the early 1990s; the computer game industry has never liked board game designers, right from the beginning.  I know, I was there.  So I was really just born into the wrong career at exactly the wrong time in history.  And I do have skills to help other than just writing it.  I did all 30 levels of Sinistar: Unleashed, across four difficulty levels, through the raw data files in under 3 months.  As soon as I have something to work with... "a game is never finished, someone wearing a suit eventually rips it from your hands and puts it on a shelf."  I would never be happy with it or consider it "finished"; there would always be more than I could possibly do before it shipped.  And finishing the story of any one game is a monumental task when there are 11 other games intricately woven through it as well.  I'm not going to get to the PDU in any amount of time that I have by playing with a generic editor.  If those kinds of things had existed 20 years ago, I'm sure I would have done a lot of things with them, but in 2018 it's a little late for me for that.
    • In the business I used to work in, none of us was great at what he was doing, really.  And again, times have changed.  Today artists with some basic scripting skills create games, some of them very good.  We are heading toward creating professional games just by clicking.  If you really want to create computer games, you can.  You could start with something simpler, like Game Maker.  I have been working for 4 months on a problem that I'm still unable to solve.  I'm not good at this stuff.  There are a few people who finally developed seemingly good solutions to the problem after a decade of research - maybe.  Academic experts, so I do not understand what they say in their scientific papers.  I really don't like working on the problem, it's kind of boring, and most of the code I've already written is useless.  But I must succeed - otherwise I'm totally stuck.  And nobody will join me to help even if I claim my vision is worth it.  I have to show it really works first, I guess.  So, I really think you should do the same, even if it's just for fun and still won't work after a year.  Writing design documents is not enough.  What you do is like going to a record company without a demo tape and not being willing to sing, compose, or play an instrument.
    • It's not my thing; I would never become truly good at it.  You need to be great at it to be of any help in your business.  I've worked with "AAA" programmers before.  I would never become good enough at it to be more of a help to them than a hindrance.  Someone good would be in an endless state of cleaning up my mess.  I wouldn't try to become an artist, either, because I have no talent for it.  I believe there are only two ways for a game designer to find a way into the computer game industry: you either need to be a programmer or an artist, and then you can become part of the committee.  Not really a game designer, but at the same time you will be "designing games".  To be a true game designer, designing the games and creating the background/story, you need to be both a businessman and a designer.  This was as true in the hobbyist game industry as it is in yours; it was just a lot cheaper and easier to do with board games.  Often a one-man operation running out of a spare bedroom.  There is a little more to it than that if you want to make computer games, but either way you have to be two things if you want to be a game designer who is "creating their own art".  And that is my problem: I only do one thing.  I am as terrible a businessman as I am a programmer or artist.  Space Hockey is actually the fourth time that I have tried to start my own company to make games.  The only time I ever came close was when my father, who unlike me is a businessman at heart, devoted a tiny bit of his time to help and almost did it.  But I won't ever pull that off on my own, as you can probably see from my two-post attempt, which is all I can think to do along those lines.  I really am very good at what I do, but there really is only one thing that I do well.  What the heck, another Astral Invasion Cindy song...
    • Thank you! A mobile export is tentatively planned after the desktop version is finished.  Actually, it should not be a big issue, as I am using Game Maker to develop the game, which comes with a mobile export.