# Blogs

## Frogger - programming day 2

Just a quick update to show how my Frogger challenge entry is coming along. I had a day off yesterday, and today I put in collision detection and a few other things. You don't die yet if you drown, the crocs and turtles are going the wrong way, and the collision detection has lots of bugs, but it is getting there. My plan is to first get a working version, then flesh it out as time allows. I will put in UI soon and work out how to do menus etc. Once it is playable and meets the challenge requirements, I will put in animation and improve backgrounds, cameras, etc.

## Battle Gem Ponies DevLog #191 (Not a Single Thing Checked Off the List)

I am shocked at how quickly this Wednesday snuck up on me. Guess that's a good thing. Another week closer to my short-term goals, but on the other hand there's still next to no progress being made on BGP. Caught up in the daily routine and mentally drained after hours of office work, it's incredibly easy to understand how people fall into "the grind" trap, you know?

You just wanna drag yourself home, flop down somewhere (preferably with pizza at hand), watch stuff, endlessly scroll social media, wish you were happier and doing awesome things, and fall asleep. But I gotta stay on track. Even if it's just 2-3 hours at a time. Even if little tasks like "Just write 8 YouTube scripts already!" end up taking forever, just like every other small self-assigned task.

Pretty sure I'm waist-deep in burnout, but not in any particular situation to be able to alleviate that stress, so... Onward! Here's today's blog: https://www.yotesgames.com/2018/10/battle-gem-ponies-devlog-191-not-single.html

## End of the Line Progress (10/10/2018)

"End of the Line" development has been at its fastest since the beginning of development, trying to reach its mid-late December Alpha release! Some amazing work has gone into the planning of such a complex story universe with many characters, groups, and locations all coming together to form one story. Expect the hints of the large mysteries involved in the series to start coming out around Halloween! Gameplay-wise, the game has received some major fixes and work on the code since the beginning of the month. Most of the voice actors planned have recorded their necessary lines so we're waiting on some last minute stragglers to finish up! It's very exciting to see all the effort that people are putting in for this game. A new story route is being finalized which will change the end drastically and allow for more choices to affect gameplay. The interactivity between the enemy and environment is being implemented as I type! It has been an amazing week for development and I can't wait to update all of you guys!

## Seeing ships and chain gauges in game

We're really progressing! I was just looking at the new release and wanted to share a few screenshots of what the models look like in game. Originally posted on the Rank: Warmaster Dev Blog.

/jan.

## SFX Magma Chamber Sound Design for Video Game.

Hi community! Check out my new "Magma Chamber" sound design track. What do you think? Would you use it for your projects involving volcano scenery? You can find this sound and more volcano sounds on my website: http://www.ogsoundfx.com/

## BSP split plane determination

It's been a while since my last blog entry, but the problem I posed in my earlier entry still persists: how to efficiently choose a good split plane for an n-vector data structure.

To summarize the structure: geographic points are stored as n-vectors (unit vectors) in a binary tree. Branch nodes of this tree define a plane that bisects the unit sphere - one child of the branch contains the points below the plane, the other child contains the points above it. As in most trees, leaf nodes are turned into branch nodes when they fill beyond a threshold. That is the basic idea of it.

Split plane determination must occur when a leaf becomes a branch or when a branch becomes unbalanced. In either case, the method called to determine the split plane is agnostic as to why it was called - it simply receives a set of points for which it must determine a good split. My initial implementation was naive:

```
bestsplit = ...
bestscore = infinite;
for (i = 0; i < numpoints; i++) {
    for (j = i + 1; j < numpoints; j++) {
        split = (points[i] - points[j]).normalize();
        score = 0;
        for (k = 0; k < numpoints; k++) {
            dot = split.dot(points[k]);
            // accumulate some scoring heuristics, such as whether each point
            // is above/below the split, how far above/below, etc.
            score += ...;
        }
        if (bestscore > score) {
            bestscore = score;
            bestsplit = split;
        }
    }
}
```

So basically: for each pair of points, take the difference between them, normalize it, and consider this difference as the normal of a candidate split plane, then test the candidate to see how it "scores" using some heuristics. You can probably see how this will perform miserably for any significant number of points. The complexity is roughly O(n³), and that doesn't even include the expensive normalization that is called O(n²) times.
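For concreteness, here is a compilable sketch of this naive search. The real scoring heuristics are elided in the post, so this stand-in scores only how evenly the candidate plane balances the point counts (an assumption for illustration, not the post's actual heuristic; all names here are hypothetical):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 {
    double x, y, z;
};

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));  // assumes points are distinct (len != 0)
    return {v.x / len, v.y / len, v.z / len};
}

// Naive O(n^3) split search, shaped like the pseudocode in the post.
// Stand-in heuristic: penalize an uneven point count on either side.
Vec3 naiveBestSplit(const std::vector<Vec3>& points) {
    Vec3 bestSplit{1, 0, 0};
    double bestScore = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < points.size(); ++i) {
        for (std::size_t j = i + 1; j < points.size(); ++j) {
            Vec3 split = normalize(sub(points[i], points[j]));
            int balance = 0;
            for (const Vec3& p : points)
                balance += dot(split, p) >= 0 ? 1 : -1;
            double score = std::fabs(static_cast<double>(balance));  // lower = more even
            if (score < bestScore) {
                bestScore = score;
                bestSplit = split;
            }
        }
    }
    return bestSplit;
}
```

Even with this trivial heuristic, the triple loop costs O(n³) scoring work plus O(n²) normalizations, which is exactly the cost the rest of the post tries to eliminate.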
Several other methods were tested, such as fixing the normal of the split plane to be perpendicular to the x, y, or z axis, but these also proved too expensive and/or had test cases where the split determination was unsatisfactory.

### Heuristics

Enter calculus. If we can represent each of the heuristics as a mathematical function, we can determine where the function reaches a "critical point". Specifically, we are interested in the critical point that is the global maximum or global minimum, depending on the heuristic. So far we have three of these.

### 1. Similar Distance

We don't want a split plane where, for example, all points above are nearby and all points below are far away. The points should be distributed as evenly as possible on either side. Given that the dot product of a plane's normal and any other point is negative when the point is below the plane and positive when the point is above, and that the absolute value of the dot product increases as distance to the plane increases, the sum of all dot products for a good split will be at or close to zero. If we let $$P$$ be the array of points, $$N$$ the number of points in the array, and $$S$$ the split plane normal, the following function adds up all the dot products:

$$SumOfDots = \displaystyle\sum_{i=1}^{N} P_i \cdot S$$

The summation here is not really part of a mathematical function, at least not one we can perform meaningful calculus on, since the calculation must be done in the code. We don't know ahead of time how many points there will be or what their values are, so the function should be agnostic in this regard.
As written we cannot use it without inordinate complication, but consider that it is really doing this:

$$SumOfDots = \displaystyle\sum_{i=1}^{N} P_{ix} * S_x + P_{iy} * S_y + P_{iz} * S_z$$

Expanded, this summation will look something like:

$$SumOfDots = P_{1x} * S_x + P_{1y} * S_y + P_{1z} * S_z + P_{2x} * S_x + P_{2y} * S_y + P_{2z} * S_z + P_{3x} * S_x + P_{3y} * S_y + P_{3z} * S_z + \cdots$$

We can then rewrite this as:

$$SumOfDots = S_x*(P_{1x} + P_{2x} + P_{3x} + \cdots) + S_y*(P_{1y} + P_{2y} + P_{3y} + \cdots) + S_z*(P_{1z} + P_{2z} + P_{3z} + \cdots)$$

Or, equivalently:

$$SumOfDots = S_x*(\displaystyle\sum_{i=1}^{N} P_{ix}) + S_y*(\displaystyle\sum_{i=1}^{N} P_{iy}) + S_z*(\displaystyle\sum_{i=1}^{N} P_{iz})$$

As far as the mathematical function is concerned, the sums are constants, and we can replace them with single characters to be concise:

$$A = \displaystyle\sum_{i=1}^{N} P_{ix}$$ $$B = \displaystyle\sum_{i=1}^{N} P_{iy}$$ $$C = \displaystyle\sum_{i=1}^{N} P_{iz}$$

We can pre-calculate these in the code like so:

```
double A = 0, B = 0, C = 0;
for (i = 0; i < N; i++) {
    A += points[i].x;
    B += points[i].y;
    C += points[i].z;
}
```

We can now rewrite the function with these constants:

$$SumOfDots = S_x*A + S_y*B + S_z*C$$

This is great so far. We are interested in when this function reaches zero. To make it simpler, we can square it, which makes the negative values positive, and then we become interested in when this function reaches a global minimum:

$$SquaredSumOfDots = (S_x*A + S_y*B + S_z*C)^2$$

So again, when this function reaches zero it means that the points on either side of the split plane - as denoted by $$S$$ - are spread apart evenly. This does not mean that $$S$$ is a good split plane overall, as the points could all lie on the plane, or some other undesirable condition could occur. For that we have the other heuristics.
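As a quick sanity check on this derivation, here is a small self-contained sketch (hypothetical names, not the blog's actual code) that pre-calculates $$A$$, $$B$$, $$C$$ and evaluates $$SquaredSumOfDots$$. For a point cloud that is symmetric about the origin, the sums are all zero, so the heuristic is zero for every candidate $$S$$:

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Evaluate SquaredSumOfDots = (Sx*A + Sy*B + Sz*C)^2 for a candidate
// split normal S, pre-calculating the constants A, B, C as in the post.
double squaredSumOfDots(const std::vector<Vec3>& points, const Vec3& S) {
    double A = 0, B = 0, C = 0;
    for (const Vec3& p : points) {
        A += p.x;
        B += p.y;
        C += p.z;
    }
    double sum = S.x * A + S.y * B + S.z * C;
    return sum * sum;
}
```

Note that in a real implementation the constants would be computed once per point set, after which the heuristic is O(1) per candidate plane - that is the whole payoff of the rewrite.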
As a final note, since the vector $$S$$ represents a unit normal to a plane, any determination of it must be constrained to the surface of the unit sphere:

$$S_x^2 + S_y^2 + S_z^2 = 1$$

### 2. Large Distance

Practically speaking, the points will not be random; they will originate from a grid of points or a combination of grids. But random or not, if the points form a shape that is not equilateral - in other words, if they form a rectangle instead of a square, or an ellipse instead of a circle - the larger axis of the shape should be split so that the child areas do not become even less equilateral. To achieve this we want the sum of the absolute values of all the dot products to be large, meaning that the points are far away from the split plane. In other words, we want to find the global maximum of a function that calculates this sum:

$$SumOfAbsoluteDots=\displaystyle\sum_{i=1}^{N} |P_i \cdot S|$$

Unfortunately there is no way, that I know of, to handle absolute values and still determine critical points, so we need to rewrite this function without the absolute value operator. Really, all we are interested in is when this function reaches a maximum, so we can replace the absolute value with a square:

$$SumOfSquaredDots=\displaystyle\sum_{i=1}^{N} (P_i \cdot S)^2$$

As with the previous heuristic function, we need to rewrite this so that $$S$$ is not contained in the summation, and extract the constant values.
If we skip some of the expanding and reducing of the squared dot product, we arrive at this step:

$$SumOfSquaredDots=(S_x^2*\displaystyle\sum_{i=1}^{N} P_{ix}^2) + (S_y^2*\displaystyle\sum_{i=1}^{N} P_{iy}^2) + (S_z^2*\displaystyle\sum_{i=1}^{N} P_{iz}^2) + (S_x * S_y * 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iy})+ (S_x * S_z * 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iz})+ (S_y * S_z * 2 * \displaystyle\sum_{i=1}^{N} P_{iy}*P_{iz})$$

Again we will create some named constants to be concise:

$$D = \displaystyle\sum_{i=1}^{N} P_{ix}^2$$ $$E = \displaystyle\sum_{i=1}^{N} P_{iy}^2$$ $$F = \displaystyle\sum_{i=1}^{N} P_{iz}^2$$ $$G = 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iy}$$ $$H = 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iz}$$ $$I = 2 * \displaystyle\sum_{i=1}^{N} P_{iy}*P_{iz}$$

As before, these can be pre-calculated:

```
double D = 0, E = 0, F = 0, G = 0, H = 0, I = 0;
for (i = 0; i < N; i++) {
    D += points[i].x * points[i].x;
    E += points[i].y * points[i].y;
    F += points[i].z * points[i].z;
    G += points[i].x * points[i].y;
    H += points[i].x * points[i].z;
    I += points[i].y * points[i].z;
}
G *= 2.0;
H *= 2.0;
I *= 2.0;
```

And then the function becomes:

$$SumOfSquaredDots=(S_x^2*D) + (S_y^2*E) + (S_z^2*F) + (S_x * S_y * G)+ (S_x * S_z * H) + (S_y * S_z * I)$$

When this function is maximized, the points are farthest away from the split plane, which is what we want.

### 3. Similar number of points

A good split plane will also have a similar number of points on either side. We can again use the dot product, since it is negative for points below the plane and positive for points above. But we cannot simply sum the dot products themselves, since a large difference for a point on one side would cancel out several smaller differences on the other.
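The $$SumOfSquaredDots$$ rewrite can likewise be checked numerically: the constant-based form must agree with the direct sum $$\sum (P_i \cdot S)^2$$. A minimal sketch (hypothetical helper names, not the blog's code):

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Constant-based form of SumOfSquaredDots, using the pre-calculated
// D, E, F, G, H, I from the post.
double sumOfSquaredDots(const std::vector<Vec3>& points, const Vec3& S) {
    double D = 0, E = 0, F = 0, G = 0, H = 0, I = 0;
    for (const Vec3& p : points) {
        D += p.x * p.x;
        E += p.y * p.y;
        F += p.z * p.z;
        G += p.x * p.y;
        H += p.x * p.z;
        I += p.y * p.z;
    }
    G *= 2.0; H *= 2.0; I *= 2.0;
    return S.x * S.x * D + S.y * S.y * E + S.z * S.z * F
         + S.x * S.y * G + S.x * S.z * H + S.y * S.z * I;
}

// Direct evaluation of the same quantity, for comparison.
double sumOfSquaredDotsDirect(const std::vector<Vec3>& points, const Vec3& S) {
    double total = 0;
    for (const Vec3& p : points) {
        double d = p.x * S.x + p.y * S.y + p.z * S.z;
        total += d * d;
    }
    return total;
}
```

The two evaluations should match to within floating-point rounding, since the expansion of $$(P_i \cdot S)^2$$ is an exact algebraic identity.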
To account for this we normalize each distance to be either +1 or -1:

$$SumOfNormalizedDots=\displaystyle\sum_{i=1}^{N} \frac{P_i \cdot S}{\sqrt{(P_i \cdot S)^2}}$$

Expanded, this becomes:

$$SumOfNormalizedDots=\displaystyle\sum_{i=1}^{N} \frac{P_{ix} * S_x + P_{iy} * S_y + P_{iz} * S_z}{\sqrt{(S_x^2*P_{ix}^2) + (S_y^2*P_{iy}^2) + (S_z^2*P_{iz}^2) + (S_x*S_y*2*P_{ix}*P_{iy})+ (S_x*S_z*2*P_{ix}*P_{iz})+ (S_y*S_z*2*P_{iy}*P_{iz})}}$$

Unfortunately there is no way to reduce this function so that we can extract all $$S$$ references out of the summation and, as with the previous heuristics, put all $$P$$ references into constants that we pre-calculate and use to simplify the function. The problem is that each term of the sum depends on the values of $$S$$ in a way that cannot be extracted. So at present this heuristic cannot be used; I am still working on it.

### Putting it all together

What we ultimately want to do is combine the heuristic functions into one function, then use this one function to find the critical point - either the global minimum or the global maximum, depending on how we combine them. The issue is that for $$SquaredSumOfDots$$ we want the global minimum and for $$SumOfSquaredDots$$ we want the global maximum. We can account for this by negating the former, so that we instead want the global maximum for it as well.
The combined function then becomes:

$$Combined = SumOfSquaredDots - SquaredSumOfDots$$

Applying the terms from each function, we get:

$$Combined = (S_x^2*D) + (S_y^2*E) + (S_z^2*F) + (S_x * S_y * G)+ (S_x * S_z * H) + (S_y * S_z * I) - (S_x*A + S_y*B + S_z*C)^2$$

Expanding the square on the right side and combining some terms gives us:

$$Combined = (S_x^2*(D-A^2)) + (S_y^2*(E - B^2)) + (S_z^2*(F - C^2)) + (S_x * S_y * (G - (2 * A * B)))+ (S_x * S_z * (H - (2 * A * C))) + (S_y * S_z * (I - (2 * B * C)))$$

Again we can combine/pre-calculate some constants to simplify (note that some of these letters, such as $$N$$, were used earlier for other quantities - here they are new constants):

$$J = D-A^2$$ $$K = E-B^2$$ $$L = F-C^2$$ $$M = G - (2 * A * B)$$ $$N = H - (2 * A * C)$$ $$O = I - (2 * B * C)$$

```
double J = D - (A * A);
double K = E - (B * B);
double L = F - (C * C);
double M = G - (2 * A * B);
double N = H - (2 * A * C);
double O = I - (2 * B * C);
```

And then apply these to the function:

$$Combined = (S_x^2*J) + (S_y^2*K) + (S_z^2*L) + (S_x * S_y * M)+ (S_x * S_z * N) + (S_y * S_z * O)$$

Because we are lazy, we then plug this into a tool that does the calculus for us. And this is where I am currently blocked, as I have found no program that can perform this computation. I actually purchased Wolfram Mathematica and attempted it with the following command:

```
Maximize[{((x^2)*j) + ((y^2)*k) + ((z^2)*l) + (x*y*m) + (x*z*n) + (y*z*o),
  ((x^2) + (y^2) + (z^2)) == 1}, {x, y, z}]
```

After 5 or 6 days this had not finished and I had to restart the computer it was running on. I assumed that if it took that long, it would not complete in a reasonable amount of time. I will update this blog entry if I make any progress on this problem or on the 3rd heuristic function.

### Further Optimizations

While I have not gotten this far yet, it may ultimately be necessary to emphasize (or de-emphasize) one heuristic over the others in order to further optimize the split determination.
This could be done by simply multiplying each heuristic function by a scalar value to increase or decrease its emphasis on the final result. The reason I haven't researched this yet is that if I cannot find the global maximum without these extra variables, I certainly cannot do it with them.
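One closing observation (mine, not the post's): $$Combined$$ is a quadratic form $$S^T Q S$$, where $$Q$$ is the symmetric 3x3 matrix with diagonal $$J, K, L$$ and off-diagonal entries $$M/2, N/2, O/2$$. The extrema of a quadratic form over the unit sphere are eigenvectors of $$Q$$, with the global maximum at the eigenvector of the largest eigenvalue, so a small numeric eigen-solver may sidestep the symbolic Maximize entirely. A sketch of that idea using shifted power iteration (hypothetical helper; assumes $$Q$$ is not the zero matrix and the starting vector is not orthogonal to the dominant eigenvector):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Find a unit vector S maximizing Combined(S) = S^T Q S, where
// Q = [[J, M/2, N/2], [M/2, K, O/2], [N/2, O/2, L]].
// Shifting the diagonal by a Gershgorin bound makes all eigenvalues
// positive, so power iteration converges to the eigenvector of the
// largest *algebraic* eigenvalue of the original Q.
Vec3 maximizeCombined(double J, double K, double L,
                      double M, double N, double O) {
    double q[3][3] = {{J, M / 2, N / 2},
                      {M / 2, K, O / 2},
                      {N / 2, O / 2, L}};
    double shift = 0;
    for (int i = 0; i < 3; ++i) {
        double row = std::fabs(q[i][0]) + std::fabs(q[i][1]) + std::fabs(q[i][2]);
        if (row > shift) shift = row;
    }
    for (int i = 0; i < 3; ++i) q[i][i] += shift;

    double v[3] = {0.577, 0.577, 0.577};  // arbitrary non-degenerate start
    for (int it = 0; it < 500; ++it) {
        double w[3] = {0, 0, 0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                w[i] += q[i][j] * v[j];
        double len = std::sqrt(w[0] * w[0] + w[1] * w[1] + w[2] * w[2]);
        for (int i = 0; i < 3; ++i) v[i] = w[i] / len;
    }
    return {v[0], v[1], v[2]};
}
```

With the constants pre-calculated as described above, this runs in microseconds per split, independent of the point count.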

## Quick Terrain Renders

I had some extra time this evening and wanted to do some terrain renders for fun. I know that a lot of people use procedural tools to generate terrain layouts, but I wanted to toy around with just sculpting on a square with some brushes I created to make a few quick terrain renders. The first step was to use my custom brushes to sculpt out some landscape as shown below. This mesh is 1.5 million polygons. I then had two lower-poly versions made up because I wanted to see how low I could go and still keep a decent bake result. The first test was at 90k polygons: I then took this mesh, hand-painted it, and rendered the following result: My next test was to see if I could go lower. I tried my mesh at 16k polygons. Once I baked and hand-painted everything, I was able to get this result from rendering: I then took the render into Photoshop to add softer snow on top: The snow is still pretty "rough", but I didn't have a lot of time left today. I'm pretty happy with the bake and render for my 16k poly mesh, though. Anyhow! This was just a little thing I wanted to do for fun this evening. Thanks for stopping by!

## Warfront Infinite Dev Blog #21: The First Enemy (Animations)

This week my artist and I were working hard on the new alien model and its animations. My job was to implement the animated models in the game engine, manage transitions between different states, and rewrite the enemy controller script. Since I hadn't worked with skeletal animations in Unity before, it was a good learning experience, and I found out that Unity has an Animator component which handles transitions between animations and is really easy to use. All I had to do was use the Animator editor to create a few different states (for walking, dying, getting hurt and attacking) and draw some lines between the states to mark transitions. Then you can click on each of the connecting lines to edit the transition between those states, and it will bring up this window: Here you can edit how fast the transition occurs, when it occurs, and more. To control the animations from a C# script, all you have to do is use GetComponent<Animator>() and then call its .Play(<animationName>) method. It will automatically do all the transitions which you created in the Animator tab. This is how the animations look in game: As you can see, I added some glowing which I thought would look cool. There are still 5 more alien enemies left to do, and after that we'll be working on the environment, adding buildings and new textures, and overall changing the look of the levels.

## This Week in Game-Guru - 10/08/2018

Official Game-Guru News: There is a new survey available which offers users the chance to win a free DLC pack as an incentive for completing it, so give them your thoughts! https://www.thegamecreators.com/post/gameguru-users-enter-our-survey-to-win-a-dlc

There were also some AI improvements made to the most recent public preview. Apparently some of the input came from smallg (one of the resident scripting experts), who cleaned it up.

They updated the PBR materials to include mega-pack 3:
Details on the above here: https://www.thegamecreators.com/post/gameguru-mega-pack-3-dlc-updated-2

And lastly, there was ALSO an update to EAI's weaponry: https://www.thegamecreators.com/post/gameguru-mega-pack-3-dlc-updated-2

What's Good in the Store:
Tarkus's music.  I usually like his stuff but this isn't my speed.  That said it would work well for many games seeking some well priced work that can fit a wide range of modern/post modern genre games: https://www.tgcstore.net/pack/11055
Pasquill's PBR construction vehicles were completed and are now available at a very reasonable price in a pack! https://www.tgcstore.net/pack/11054

Free Stuff:
https://forum.game-guru.com/thread/220103 - This beautiful gateway by Lafette II
https://forum.game-guru.com/thread/220124 - This Bizarre Spongebob/Domo-kun esque character for cartoon style games.  Free, follows waypoints, has Gtox's quality on it.  Nice stuff. Characters are always a fairly expensive proposition so it's particularly noteworthy when one is available for free.

Third Party Tools and Tutorials:
There's this interesting texturing tutorial (care of Bugsy): https://www.youtube.com/watch?v=a8d6p-E4KSE

Random Acts of Creativity:
Amenmoses has been working on physics and particles a lot this past week.  He put together a nice demo reel of his physics-based leaves here: https://vimeo.com/292789619

There's this campfire with smoke physics and fire particles: https://vimeo.com/293305766
What a busy man!

Duchenkuke detailed his picture below (I promised not to name names, but he made a video, so it kind of betrays him a bit!): https://www.youtube.com/watch?v=WWsEVUgcYIw&feature=youtu.be
He's also updated his web presence, check out his new site here:

In My Own Works:
I created a screenshot for an impromptu private contest for other GG Forumites.
Came in dead last.  Was based on 'forests'.  I gambled on going Alien Fungal and lost big!

That said, it is what it is.  Next time, I guess and no, I will not tell you where this contest was as it was fairly secret/private.  I can't post all of the pictures at this time as it will make this page unreasonably long to load for some.  So here's mine (dead last), the runner up, and the winner!

Mine:

Runner-Up:

Winner:

Congrats to the winner :)

That said, I also got about 4500 words done thanks to said contest on how to make a forest, though admittedly I wrote it post-picture as a sort of 'lessons learned from my failure'.  Still, it came out well overall.  I plan on doing city and desert/tundra-style ones as well.

See you next week!

## OOP is dead, long live OOP

### Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below): he shows some terrible OOP code, then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular, instead of the hundred other ECS posts that have been made on the interwebs, is that he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to concretely demonstrate my points, so, thanks Aras! You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground. I'm not going to analyse the final ECS architecture from that talk (yet?); instead I'm going to focus on the straw-man "bad OOP" code from the start, and show what it would look like if we actually fixed all of the OOD rule violations.
Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it uses less RAM and requires fewer lines of code than the ECS version!
TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too). I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but mostly because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:

1. Show some terrible OOP code, with a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
2. Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
3. Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because:
(A) it's a straw-man argument: it compares apples to oranges (bad code vs. good code), which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good,
but more importantly:
(B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research. The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers. Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral, and very quickly the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own.
I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers... yet knowledge of OOD lagged behind. I argue that code that uses OOP language features but does not follow OOD design rules is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code.
OOP code has a very bad reputation, I assert, in part because most OOP code does not follow OOD rules and thus isn't actually "true" OO code.

### Background

As mentioned above, the 1990's were the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "the 4 pillars of OOP":

- Abstraction
- Encapsulation
- Polymorphism
- Inheritance

I'd prefer to call these the "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough though; you need to know when you should be using it. It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them.

In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second wave of OOD thought. Out of this came the SOLID mnemonic as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules:

- **Single responsibility principle.** Every class should have one reason to change. If class "A" has two responsibilities, create a new class "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".
- **Open/closed principle.** Software changes over time (i.e. maintenance is important). Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).
- **Liskov substitution principle.** Every implementation of an interface needs to 100% comply with the requirements of that interface, i.e. any algorithm that works on the interface should continue to work for every implementation.
- **Interface segregation principle.** Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" as little of the code-base as possible, i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.
- **Dependency inversion principle.** Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.

Not included in the SOLID acronym, but I would argue just as important, is the:
**Composite reuse principle.** Composition is the right default™. Inheritance should be reserved for when it's absolutely required. This gives us SOLID-C(++).

A few other notes:

- In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc. You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding.
- Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software. As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition. Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.
- Inheritance actually has (at least) two types: interface inheritance and implementation inheritance. In C++, interface inheritance includes abstract base classes with pure-virtual functions, PIMPL, and conditional typedefs. In Java, interface inheritance is expressed with the implements keyword. In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword. OOD has a lot to say about interface inheritance, but implementation inheritance should usually be treated as a bit of a code smell!
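As a tiny concrete illustration of the single-responsibility and composite-reuse advice above ("compose A out of B and C"), here is a sketch with hypothetical class names, not from any real codebase:

```cpp
// Two single-responsibility parts...
struct Transform {
    float x = 0, y = 0;
    void move(float dx, float dy) { x += dx; y += dy; }
};

struct Health {
    int hp = 100;
    void damage(int amount) { hp -= amount; }
    bool dead() const { return hp <= 0; }
};

// ...composed into a larger object, rather than inherited from.
struct Player {
    Transform transform;  // Player has-a Transform...
    Health health;        // ...and has-a Health, rather than is-a either.
};
```

Each part can now change, be tested, or be reused independently, which is the payoff the single responsibility principle is after.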
And lastly, I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and in OOP's bad reputation). When you were learning about hierarchies / inheritance, you probably had a task something like this:
> Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!
Nope, nope nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class-hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism) then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first. When you were learning about hierarchies / inheritance, you probably had a task something like:
> Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?
This is actually a good one to demonstrate the difference between implementation-inheritance and interface-inheritance. If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool.
From this perspective, the following makes perfect sense:
```cpp
struct Square { int width; };
struct Rectangle : Square { int height; };
```
A square just has a width, while a rectangle has a width and a height, so extending the square with a height member gives us a rectangle! As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever.
A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width".
By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square must also work correctly with a rectangle. Take the following algorithm:

```cpp
std::vector<Square*> shapes;
int area = 0;
for (auto s : shapes)
    area += s->width * s->width;
```
This will work correctly for squares (producing the sum of their areas), but will not work for rectangles.
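To make the failure concrete, here's a small, self-contained version of the scenario above. It reuses the struct names from the text; `total_area` is a helper name introduced here just for illustration.

```cpp
#include <cassert>
#include <vector>

// The implementation-inheritance hierarchy from the text.
struct Square { int width; };
struct Rectangle : Square { int height; };

// An algorithm written against Square's interface: it is entitled to
// assume that area == width * width for anything it is handed.
int total_area(const std::vector<Square*>& shapes) {
    int area = 0;
    for (auto s : shapes)
        area += s->width * s->width;
    return area;
}
```

Hand it a 4x2 Rectangle (upcast to `Square*`) and it reports an area of 16, while the rectangle's real area is 8 -- correct for every genuine Square, silently wrong for the substituted Rectangle.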
Therefore, Rectangle violates the LSP rule. If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interfaces of a square and a rectangle are actually different, and one is not a superset of the other. So OOD actually discourages the use of implementation-inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go! For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:
```cpp
struct Shape {
    virtual int area() const = 0;
};

struct Square : public virtual Shape {
    virtual int area() const { return width * width; }
    int width;
};

struct Rectangle : private Square, public virtual Shape {
    virtual int area() const { return width * height; }
    int height;
};
```

"public virtual" is the C++ equivalent of "implements" in Java -- for use when implementing an interface. "private" inheritance allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's implemented in terms of one. I don't recommend writing this kind of code, but if you do like to use implementation-inheritance, this is the way you're supposed to be doing it!

TL;DR: your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

### Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so-called "typical OOP" starting point.
However, it doesn't fare well with the DIP either -- many of the components have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

### Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
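For a rough idea of what "just write them in C++" can mean, here's a minimal sketch -- the component and object names below are illustrative, not copied from the repo. Composition is expressed with plain member variables, so there's no Component base class, no virtual dispatch, and no FindOfType lookups.

```cpp
#include <cassert>

// Plain structs as components -- no common base class required.
struct PositionComponent { float x = 0, y = 0; };
struct MoveComponent     { float vx = 0, vy = 0; };

// A concrete game-object type built by composition: it *has* the
// components it needs as ordinary members, known at compile time.
struct Bullet {
    PositionComponent position;
    MoveComponent     move;

    void update(float dt) {
        position.x += move.vx * dt;  // direct member access --
        position.y += move.vy * dt;  // no FindOfType<MoveComponent>()
    }
};
```

Each distinct kind of object gets its own concrete type, and the compiler can see straight through every call.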

## Quest a day challenge

The first week of October is over. I was writing quests and NPCs. The main goal was to write at least one quest a day, and it's been really interesting and fun! My total result for this week is 15 quests and 20 NPCs. There's a lot of work left in polishing ‘em all, but at least I captured the main idea of every quest and NPC. Today I skipped writing and did nothing but analyze the overall result. I tried to make a document where I can store quest data (conditions, dialogues, rewards, stages and thoughts). Now I'm planning on prototyping the quests via the Creation Kit (Skyrim). But there are still some relatively negative moments. I'm not satisfied because (as I see it) there's no real motivation for a player to complete those quests, and the quests are simple as hell. They're actually more MMO-like. Yes, they're not about killing n amount of slimes, but they still don't force the player to use the game's features. Like, why should anyone get fun from a quest that is fully independent from the game? Or maybe a good narrative is enough for an engaging & fun adventure? This made me think about the main game mechanics -- the ones the player will use really often regardless of playstyle. Of course I should start from movement, as my game features an extended movement system which includes crawling and climbing mechanics. Example: Assassin's Creed, where the player MUST use those mechanics to climb up to a viewpoint and open quest markings on the map, and MAY use them to avoid enemy NPCs in different types of situations. Those mechanics were also used in puzzle-solving and get-to-the-right-place quests. Well, we'll see how all of these thoughts, quests and NPCs evolve a week from now 😛

## Fourth Entry - October 7, 2018

Greetings readers! This is the fourth entry of my development blog for my project 'Tracked Mind'. This month I've been focusing on learning how animations work in Unreal Engine 4, especially how different character stances would look and how different animations blend together, and how to make it all look good, of course. I am very pleased with the result and that I will be able to use what I have learned in this project. I have also been doing some research on the lore of the game and how non-human creatures would fit into the story. The idea of adding an ability to do a short backwards dodge does not seem to be something that fits the game's pace, and it might make the game too easy in certain situations as well, but I will keep testing it to see if I change my mind. Sadly, no new screenshots or development videos yet, but I'm planning on uploading some during October, so stay tuned. Some changes this month:
General changes:
• Added new effects for some enemies that "evaporate" when killed.

## Frogger GameDev Challenge - Part 1 - Frog GFX


Group, North, Oblique, Leg, Level, Subdivision: GNOLLS! (Yes, I had to think a while to make the acronym work.)
• does the lava slosh around?
• Wow, that all looks very awesome.  Please keep the pictures coming.
• @Gnollrunner, can you explain this part "We also use our unit sphere to help the horizontal part of our voxel subdivision operation. By referencing the unit sphere we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates.  Finally our unit-sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry.  Without it, our ghost-walking would be more computationally expensive as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate as they are all generated by simply averaging two other heights."  A little more?
• Good stuff dood! How (roughly) are you doing your lava, or is it top secret lol? Look forward to your frogger post!