Search the Community

Showing results for tags 'C++'.



Found 721 results

  1. Gnollrunner

    Mountain Ranges

For this entry we implemented the ubiquitous Ridged Multi-fractal function. It's not so interesting in and of itself, but it does highlight a few features that were included in our voxel engine. First, as we mentioned, being a voxel engine it supports full 3D geometry (caves, overhangs and so forth) and not just height-maps. However, if we look at a typical world, these features are the exception rather than the rule. It therefore makes sense to optimize the height-map portion of our terrain functions. This is especially true since our voxels are vertically aligned, which means there will be many places where the same height calculation is repeated. Even if we look at a single voxel, nearly the same calculation is used for a lower corner and its corresponding upper corner; the only difference is the subtraction from the voxel vertex position.

Enter the unit sphere! In our last entry we talked about explicit voxels, with edges and faces and vertexes. However, all edges and faces are not created equal. Horizontal faces (in our case the triangular faces) and horizontal edges contain a special pointer that references their corresponding parts in a unit sphere. The unit sphere can be thought of as residing in the center of each planet. Like our world octree, it is formed from a subdivided icosahedron, only it is not extruded and is organized into a quadtree instead of an octree, being more 2D in nature. Vertexes in our unit sphere can be used to cache height-map function values to avoid repeated calculations. We also use our unit sphere to help with the horizontal part of our voxel subdivision operation. By referencing the unit sphere, we only have to multiply a unit sphere vertex by a height value to generate voxel vertex coordinates (see the sketch at the end of this entry). Finally, our unit sphere is also used to provide coordinates during the ghost-walking process we talked about in our first entry. Without it, our ghost-walking would be more computationally expensive, as it would have to calculate spherical coordinates on each iteration instead of just calculating heights, which are quite simple to calculate since they are all generated by simply averaging two other heights.

Ownership of unit sphere faces is a bit complex. Ostensibly they are owned by all voxel faces that reference them (and therefore add to their reference counter). However, this presents a bit of a problem, as they are also used in ghost-walking, which happens every LOD/re-chunking iteration, and in fact they may or may not end up being referenced by voxel faces, depending on whether mesh geometry is found. Even if no geometry is found, we may want to keep them for the next ghost-walk search. To solve this problem, we implemented undead objects. Unit sphere faces can become undead, and can even be created that way if they are built by the ghost-walker. When they are undead they are kept in a special list which keeps them pseudo-alive. They also have an undead life value associated with them. When they are touched by the ghost-walker that value is renewed; however, if they go untouched for a few iterations, they become truly dead and are destroyed.

Picture time again. So here is our Ridged Multi-fractal in wireframe. We'll flip it around to show our level transition. Here's a place that needs a bit of work: the chunk level transitions are correct, but they are probably a bit more complex than they need to be. We use a very general voxel tessellation algorithm, since we have to handle various combinations of vertical and horizontal transitions. We will probably optimize this later, especially for the common cases, but for now it serves its purpose. Next up we are going to try to add threads. We plan to use a separate thread (or threads) for the LOD/re-chunk operations, and another one for the graphics.
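
To make the height-caching idea above concrete, here is a minimal sketch of how a unit-sphere vertex can memoize the terrain function and generate voxel vertex coordinates. The names (Vec3, heightFunction, UnitSphereVertex) are hypothetical stand-ins, not the engine's actual types:

struct Vec3 { float x, y, z; };
Vec3 operator*(const Vec3& v, float s) { return { v.x * s, v.y * s, v.z * s }; }

float heightFunction(const Vec3& unit); // stand-in for the ridged multi-fractal

struct UnitSphereVertex
{
    Vec3  unit;                 // normalized direction from the planet center
    float cachedHeight = -1.0f; // < 0 means "not yet computed"
};

// A voxel vertex is just the cached height scaled along the unit direction,
// so the expensive noise function runs at most once per unit-sphere vertex.
Vec3 voxelVertexPosition(UnitSphereVertex& v)
{
    if (v.cachedHeight < 0.0f)
        v.cachedHeight = heightFunction(v.unit);
    return v.unit * v.cachedHeight;
}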
  2. Hodgman

    OOP is dead, long live OOP

Inspiration

This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below): he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular, instead of the hundred other ECS posts that have been made on the interwebs, is that he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to actually, concretely demonstrate my points, so, thanks Aras!

You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground.

I'm not going to analyse the final ECS architecture from that talk (yet?); I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fix all of the OOD rule violations. Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!

TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but also because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:

  • Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
  • Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
  • Show that the relational model is a great fit for games (but call it "ECS").

This structure grinds my gears because: (A) it's a straw-man argument... it's apples to oranges (bad code vs good code)... which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good; but more importantly: (B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research. The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginner questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers. Object oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)!
However, it was in the 1990's that OO became a fad - hyped, viral and, very quickly, the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own. I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers... yet knowledge of OOD lagged behind. I argue that code that uses OOP language features, but does not follow OOD design rules, is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code. OOP code has a very bad reputation, I assert, in part because most OOP code does not follow OOD rules, and thus isn't actually "true" OO code.

Background

As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "The 4 pillars of OOP": abstraction, encapsulation, polymorphism and inheritance. I'd prefer to call these the "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough though; you need to know when you should be using them... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second wave of OOD thought. Out of this came the SOLID mnemonic to use as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules:

  • Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create new classes "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".
  • Open/closed principle. Software changes over time (i.e. maintenance is important). Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).
  • Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface, i.e. any algorithm that works on the interface should continue to work for every implementation.
  • Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" the least amount of the code-base as possible, i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.
  • Dependency inversion principle. Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.

Not included in the SOLID acronym, but I would argue just as important, is the:

  • Composite reuse principle. Composition is the right default™. Inheritance should be reserved for use when it's absolutely required.

This gives us SOLID-C(++).

A few other notes:

  • In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software. As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition. Even if you just make a single class in isolation, with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.
  • Inheritance actually has (at least) two types -- interface inheritance and implementation inheritance. In C++, interface inheritance includes abstract base classes with pure-virtual functions, PIMPL and conditional typedefs. In Java, interface inheritance is expressed with the implements keyword. In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword. OOD has a lot to say about interface inheritance, but implementation inheritance should usually be treated as a bit of a code smell!

And lastly, I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and OOP's bad reputation). When you were learning about hierarchies / inheritance, you probably had a task something like: "Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!" Nope, nope, nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism), then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.

When you were learning about hierarchies / inheritance, you probably also had a task something like: "Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?"
This is actually a good one to demonstrate the difference between implementation inheritance and interface inheritance. If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool. From this perspective, the following makes perfect sense:

struct Square { int width; };
struct Rectangle : Square { int height; };

A square just has a width, while a rectangle has a width + height, so extending the square with a height member gives us a rectangle! As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever. A square always has the same height as its width, so from the square's interface it's completely valid to assume that its area is "width * width". By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square must also work correctly with a rectangle. Take the following algorithm:

std::vector<Square*> shapes;
int area = 0;
for (auto s : shapes)
  area += s->width * s->width;

This will work correctly for squares (producing the sum of their areas), but will not work for rectangles. Therefore, Rectangle violates the LSP rule. If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interfaces for a square and a rectangle are actually different, and one is not a super-set of the other. So OOD actually discourages the use of implementation inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go! For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:

struct Shape { virtual int area() const = 0; };
struct Square : public virtual Shape
{
  virtual int area() const { return width * width; }
  int width;
};
struct Rectangle : private Square, public virtual Shape
{
  virtual int area() const { return width * height; }
  int height;
};

"public virtual" means "implements" in Java -- for use when implementing an interface. "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it. I don't recommend writing this kind of code, but if you do like to use implementation inheritance, this is the way that you're supposed to be doing it!

TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

Entity / Component frameworks

With all that background out of the way, let's jump into Aras' starting point -- the so-called "typical OOP" starting point. Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but, as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional. I'm going to start from the earliest commit, before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

// -------------------------------------------------------------------------------------------------
// super simple "component system"

class GameObject;
class Component;

typedef std::vector<Component*> ComponentVector;
typedef std::vector<GameObject*> GameObjectVector;

// Component base class. Knows about the parent game object, and has some virtual methods.
class Component
{
public:
  Component() : m_GameObject(nullptr) {}
  virtual ~Component() {}
  virtual void Start() {}
  virtual void Update(double time, float deltaTime) {}

  const GameObject& GetGameObject() const { return *m_GameObject; }
  GameObject& GetGameObject() { return *m_GameObject; }
  void SetGameObject(GameObject& go) { m_GameObject = &go; }
  bool HasGameObject() const { return m_GameObject != nullptr; }

private:
  GameObject* m_GameObject;
};

// Game object class. Has an array of components.
class GameObject
{
public:
  GameObject(const std::string&& name) : m_Name(name) { }
  ~GameObject()
  {
    // game object owns the components; destroy them when deleting the game object
    for (auto c : m_Components) delete c;
  }

  // get a component of type T, or null if it does not exist on this game object
  template<typename T>
  T* GetComponent()
  {
    for (auto i : m_Components)
    {
      T* c = dynamic_cast<T*>(i);
      if (c != nullptr)
        return c;
    }
    return nullptr;
  }

  // add a new component to this game object
  void AddComponent(Component* c)
  {
    assert(!c->HasGameObject());
    c->SetGameObject(*this);
    m_Components.emplace_back(c);
  }

  void Start() { for (auto c : m_Components) c->Start(); }
  void Update(double time, float deltaTime) { for (auto c : m_Components) c->Update(time, deltaTime); }

private:
  std::string m_Name;
  ComponentVector m_Components;
};

// The "scene": array of game objects.
static GameObjectVector s_Objects;

// Finds all components of given type in the whole scene
template<typename T>
static ComponentVector FindAllComponentsOfType()
{
  ComponentVector res;
  for (auto go : s_Objects)
  {
    T* c = go->GetComponent<T>();
    if (c != nullptr)
      res.emplace_back(c);
  }
  return res;
}

// Find one component of given type in the scene (returns first found one)
template<typename T>
static T* FindOfType()
{
  for (auto go : s_Objects)
  {
    T* c = go->GetComponent<T>();
    if (c != nullptr)
      return c;
  }
  return nullptr;
}

Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc... This is implementation inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence OOD has the "composition over inheritance" rule, above. So, in the 2000's the "composition over inheritance" rule became popular, and gamedevs started writing this kind of code instead.

What does this code do? Well, nothing good. To put it in simple terms, this code is re-implementing the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x.

What does it actually do though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons).
It formalizes several "EC" rules:

  • The game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
  • GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
  • Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
  • Composition may only be one level deep (Components may not own child Components, GameObjects may not own child GameObjects).
  • A GameObject may only have one component of each type (some frameworks enforced this, others did not).
  • Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
  • GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless numbers of games from that time and still today. However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist then? Well, to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great for allowing game/level designers to create their own kinds of objects... However, in most game projects you have a very small number of designers and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game objects, without the need for this kind of bloated framework!

Let's evaluate this code according to OOD:

  • GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong. I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.
  • GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern... but, going beyond OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern, and explicit communication channels should be preferred. This same argument applies to the bloated "event frameworks" that sometimes appear in games...
  • I would argue that Component is an SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control, and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, are the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side effect.
  • Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.
  • The one good part is that the example game code is bending over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use. However, it's not great at DIP -- many of the components do have direct knowledge of each other.

So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

Frameworkless composition (AKA using the features of the #*@!ing programming language)

If we delete our composition framework, and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

The gist of the changes is:

  • Removing ": public Component" from each component type.
  • Adding a constructor to each component type. OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true... or, in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth us adding the constructors to enforce a simple invariant -- all values must be initialized.
  • Renaming the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
  • Removing the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types -- and replacing them with three C++ classes.
  • Fixing the "virtual void Update" anti-pattern.
  • Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction.
The objects

So, instead of this "VM" code:

// create regular objects that move
for (auto i = 0; i < kObjectCount; ++i)
{
  GameObject* go = new GameObject("object");

  // position it within world bounds
  PositionComponent* pos = new PositionComponent();
  pos->x = RandomFloat(bounds->xMin, bounds->xMax);
  pos->y = RandomFloat(bounds->yMin, bounds->yMax);
  go->AddComponent(pos);

  // setup a sprite for it (random sprite index from first 5), and initial white color
  SpriteComponent* sprite = new SpriteComponent();
  sprite->colorR = 1.0f;
  sprite->colorG = 1.0f;
  sprite->colorB = 1.0f;
  sprite->spriteIndex = rand() % 5;
  sprite->scale = 1.0f;
  go->AddComponent(sprite);

  // make it move
  MoveComponent* move = new MoveComponent(0.5f, 0.7f);
  go->AddComponent(move);

  // make it avoid the bubble things
  AvoidComponent* avoid = new AvoidComponent();
  go->AddComponent(avoid);

  s_Objects.emplace_back(go);
}

We now have this normal C++ code:

struct RegularObject
{
  PositionComponent pos;
  SpriteComponent sprite;
  MoveComponent move;
  AvoidComponent avoid;

  RegularObject(const WorldBoundsComponent& bounds)
    : move(0.5f, 0.7f)
    // position it within world bounds
    , pos(RandomFloat(bounds.xMin, bounds.xMax), RandomFloat(bounds.yMin, bounds.yMax))
    // setup a sprite for it (random sprite index from first 5), and initial white color
    , sprite(1.0f, 1.0f, 1.0f, rand() % 5, 1.0f)
  {
  }
};

...

// create regular objects that move
regularObject.reserve(kObjectCount);
for (auto i = 0; i < kObjectCount; ++i)
  regularObject.emplace_back(bounds);

The algorithms

Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. The original code has a main loop algorithm that consists of just:

// go through all objects
for (auto go : s_Objects)
{
  // Update all their components
  go->Update(time, deltaTime);
}

You might argue that this is nice and simple, but IMHO it's so, so bad. It's completely obfuscating both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire. Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

// Update all positions
for (auto& go : s_game->regularObject)
  UpdatePosition(deltaTime, go, s_game->bounds.wb);
for (auto& go : s_game->avoidThis)
  UpdatePosition(deltaTime, go, s_game->bounds.wb);

// Resolve all collisions
for (auto& go : s_game->regularObject)
  ResolveCollisions(deltaTime, go, s_game->avoidThis);

The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series.

Performance

There's still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them in the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!

Next steps

There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional style programming) and the benefits they can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real-world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when...
  3. Hi, I have a C++ Vulkan-based project using the Qt framework. QVulkanInstance and QVulkanWindow do a lot of things for me, like validation etc., but because Vulkan is such a low-level API I can't figure out how to troubleshoot Vulkan errors. I am trying to render terrain using tessellation shaders. I am learning from SaschaWillems' tutorial for tessellation rendering. I think I am setting some value for the render pass wrong in MapTile.cpp, but I'm unable to find which one, because I don't know how to troubleshoot it.

What's the problem? The app freezes on the second end-draw call.
Why? QVulkanWindow: Device lost

Validation layers debug output:

qt.vulkan: Vulkan init (vulkan-1.dll)
qt.vulkan: Supported Vulkan instance layers: QVector(QVulkanLayer("VK_LAYER_NV_optimus" 1 1.1.84 "NVIDIA Optimus layer"), QVulkanLayer("VK_LAYER_RENDERDOC_Capture" 0 1.0.0 "Debugging capture layer for RenderDoc"), QVulkanLayer("VK_LAYER_VALVE_steam_overlay" 1 1.1.73 "Steam Overlay Layer"), QVulkanLayer("VK_LAYER_LUNARG_standard_validation" 1 1.0.82 "LunarG Standard Validation Layer"))
qt.vulkan: Supported Vulkan instance extensions: QVector(QVulkanExtension("VK_KHR_device_group_creation" 1), QVulkanExtension("VK_KHR_external_fence_capabilities" 1), QVulkanExtension("VK_KHR_external_memory_capabilities" 1), QVulkanExtension("VK_KHR_external_semaphore_capabilities" 1), QVulkanExtension("VK_KHR_get_physical_device_properties2" 1), QVulkanExtension("VK_KHR_get_surface_capabilities2" 1), QVulkanExtension("VK_KHR_surface" 25), QVulkanExtension("VK_KHR_win32_surface" 6), QVulkanExtension("VK_EXT_debug_report" 9), QVulkanExtension("VK_EXT_swapchain_colorspace" 3), QVulkanExtension("VK_NV_external_memory_capabilities" 1), QVulkanExtension("VK_EXT_debug_utils" 1))
qt.vulkan: Enabling Vulkan instance layers: ("VK_LAYER_LUNARG_standard_validation")
qt.vulkan: Enabling Vulkan instance extensions: ("VK_EXT_debug_report", "VK_KHR_surface", "VK_KHR_win32_surface")
qt.vulkan: QVulkanWindow init
qt.vulkan: 1 physical devices
qt.vulkan: Physical device [0]: name 'GeForce GT 650M' version 416.64.0
qt.vulkan: Using physical device [0]
qt.vulkan: queue family 0: flags=0xf count=16 supportsPresent=1
qt.vulkan: queue family 1: flags=0x4 count=1 supportsPresent=0
qt.vulkan: Using queue families: graphics = 0 present = 0
qt.vulkan: Supported device extensions: QVector(QVulkanExtension("VK_KHR_8bit_storage" 1), QVulkanExtension("VK_KHR_16bit_storage" 1), QVulkanExtension("VK_KHR_bind_memory2" 1), QVulkanExtension("VK_KHR_create_renderpass2" 1), QVulkanExtension("VK_KHR_dedicated_allocation" 3), QVulkanExtension("VK_KHR_descriptor_update_template" 1), QVulkanExtension("VK_KHR_device_group" 3), QVulkanExtension("VK_KHR_draw_indirect_count" 1), QVulkanExtension("VK_KHR_driver_properties" 1), QVulkanExtension("VK_KHR_external_fence" 1), QVulkanExtension("VK_KHR_external_fence_win32" 1), QVulkanExtension("VK_KHR_external_memory" 1), QVulkanExtension("VK_KHR_external_memory_win32" 1), QVulkanExtension("VK_KHR_external_semaphore" 1), QVulkanExtension("VK_KHR_external_semaphore_win32" 1), QVulkanExtension("VK_KHR_get_memory_requirements2" 1), QVulkanExtension("VK_KHR_image_format_list" 1), QVulkanExtension("VK_KHR_maintenance1" 2), QVulkanExtension("VK_KHR_maintenance2" 1), QVulkanExtension("VK_KHR_maintenance3" 1), QVulkanExtension("VK_KHR_multiview" 1), QVulkanExtension("VK_KHR_push_descriptor" 2), QVulkanExtension("VK_KHR_relaxed_block_layout" 1), QVulkanExtension("VK_KHR_sampler_mirror_clamp_to_edge" 1), QVulkanExtension("VK_KHR_sampler_ycbcr_conversion" 1), QVulkanExtension("VK_KHR_shader_draw_parameters" 1), QVulkanExtension("VK_KHR_storage_buffer_storage_class" 1), QVulkanExtension("VK_KHR_swapchain" 70), QVulkanExtension("VK_KHR_variable_pointers" 1), QVulkanExtension("VK_KHR_win32_keyed_mutex" 1), QVulkanExtension("VK_EXT_conditional_rendering" 1), QVulkanExtension("VK_EXT_depth_range_unrestricted" 1), QVulkanExtension("VK_EXT_descriptor_indexing" 2), QVulkanExtension("VK_EXT_discard_rectangles" 1), QVulkanExtension("VK_EXT_hdr_metadata" 1), QVulkanExtension("VK_EXT_inline_uniform_block" 1), QVulkanExtension("VK_EXT_shader_subgroup_ballot" 1), QVulkanExtension("VK_EXT_shader_subgroup_vote" 1), QVulkanExtension("VK_EXT_vertex_attribute_divisor" 3), QVulkanExtension("VK_NV_dedicated_allocation" 1), QVulkanExtension("VK_NV_device_diagnostic_checkpoints" 2), QVulkanExtension("VK_NV_external_memory" 1), QVulkanExtension("VK_NV_external_memory_win32" 1), QVulkanExtension("VK_NV_shader_subgroup_partitioned" 1), QVulkanExtension("VK_NV_win32_keyed_mutex" 1), QVulkanExtension("VK_NVX_device_generated_commands" 3), QVulkanExtension("VK_NVX_multiview_per_view_attributes" 1))
qt.vulkan: Enabling device extensions: QVector(VK_KHR_swapchain)
qt.vulkan: memtype 0: flags=0x0
qt.vulkan: memtype 1: flags=0x0
qt.vulkan: memtype 2: flags=0x0
qt.vulkan: memtype 3: flags=0x0
qt.vulkan: memtype 4: flags=0x0
qt.vulkan: memtype 5: flags=0x0
qt.vulkan: memtype 6: flags=0x0
qt.vulkan: memtype 7: flags=0x1
qt.vulkan: memtype 8: flags=0x1
qt.vulkan: memtype 9: flags=0x6
qt.vulkan: memtype 10: flags=0xe
qt.vulkan: Picked memtype 10 for host visible memory
qt.vulkan: Picked memtype 7 for device local memory
qt.vulkan: Color format: 44 Depth-stencil format: 129
qt.vulkan: Creating new swap chain of 2 buffers, size 600x370
qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
qt.vulkan: Creating new swap chain of 2 buffers, size 600x368
qt.vulkan: Releasing swapchain
qt.vulkan: Actual swap chain buffer count: 2 (supportsReadback=1)
qt.vulkan: Allocating 1027072 bytes for transient image (memtype 8)
QVulkanWindow: Device lost
qt.vulkan: Releasing all resources due to device lost
qt.vulkan: Releasing swapchain

I am not so sure this debug output helps somehow :(( I don't want you to debug it for me. I just want to learn how I should debug it and find where the problem is located. Could you give me a guide please?

Source code
Source code rendering just a few vertices (working)

The differences between the links are:
  • Moved from the Qt math libraries to glm
  • Moved from QImage to gli for the Texture class
  • Added tessellation shaders
  • Disabled window sampling
  • Rendering terrain using a heightmap and texture array (added normals and UV)

Thanks
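
For anyone hitting the same wall, a minimal sketch of one common way to surface more detail: register a VK_EXT_debug_utils messenger so every validation message arrives in your own callback. This assumes the raw VkInstance (with Qt it can be obtained from QVulkanInstance::vkInstance()) and that the VK_EXT_debug_utils extension was enabled at instance creation; installDebugMessenger is a hypothetical helper, not the project's actual code:

#include <vulkan/vulkan.h>
#include <iostream>

// Print every validation message; returning VK_FALSE lets the offending call continue.
static VKAPI_ATTR VkBool32 VKAPI_CALL debugCallback(
    VkDebugUtilsMessageSeverityFlagBitsEXT severity,
    VkDebugUtilsMessageTypeFlagsEXT type,
    const VkDebugUtilsMessengerCallbackDataEXT* data,
    void* /*userData*/)
{
    std::cerr << "validation: " << data->pMessage << std::endl;
    return VK_FALSE;
}

void installDebugMessenger(VkInstance instance) // hypothetical helper
{
    VkDebugUtilsMessengerCreateInfoEXT info = {};
    info.sType = VK_STRUCTURE_TYPE_DEBUG_UTILS_MESSENGER_CREATE_INFO_EXT;
    info.messageSeverity = VK_DEBUG_UTILS_MESSAGE_SEVERITY_WARNING_BIT_EXT
                         | VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT;
    info.messageType = VK_DEBUG_UTILS_MESSAGE_TYPE_GENERAL_BIT_EXT
                     | VK_DEBUG_UTILS_MESSAGE_TYPE_VALIDATION_BIT_EXT
                     | VK_DEBUG_UTILS_MESSAGE_TYPE_PERFORMANCE_BIT_EXT;
    info.pfnUserCallback = debugCallback;

    // Extension entry points must be fetched at runtime.
    auto create = reinterpret_cast<PFN_vkCreateDebugUtilsMessengerEXT>(
        vkGetInstanceProcAddr(instance, "vkCreateDebugUtilsMessengerEXT"));
    VkDebugUtilsMessengerEXT messenger = VK_NULL_HANDLE;
    if (create)
        create(instance, &info, nullptr, &messenger);
}

Worth noting: a "device lost" with no preceding validation error usually points at a GPU-side fault (for example, an out-of-range read in a shader) rather than an API misuse, which is where a frame-capture tool such as RenderDoc (already present as a layer in the log above) tends to help more than the validation layers.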
  4. Hello! I'm trying to understand how to load models with Assimp. Learning how to use the library isn't that hard; the thing is how to use the data. From what I understand so far, each model consists of several meshes, which you can render individually in order to get the final result (the model). Also, from what Assimp says:

One mesh uses only a single material everywhere - if parts of the model use a different material, this part is moved to a separate mesh at the same node

The only thing that confuses me is how to create the shader that will use this data to draw a mesh. Let's say I have all the information about a mesh, like this:

class Meshe
{
    std::vector<Texture> diffuse_textures;
    std::vector<Texture> specular_textures;
    std::vector<Vertex> vertices;
    std::vector<unsigned int> indices;
};

And let's make the simplest shaders:

Vertex shader:

#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec2 TextureCoordinate;
out vec3 Normals;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0f);
    TextureCoordinate = aTexCoord;
    Normals = normalize(mat3(transpose(inverse(model))) * aNormal);
}

Fragment shader:

#version 330 core
out vec4 Output;

in vec2 TextureCoordinate;
in vec3 Normals;

uniform sampler2D diffuse;
uniform sampler2D specular;

void main()
{
    Output = texture(diffuse, TextureCoordinate);
}

Will this work? I mean, Assimp says that each mesh has only one material that covers it, but how many diffuse and specular textures can that material have? Does it make sense for a material to have more than one diffuse or more than one specular texture? If each material has only two textures, one for the diffuse and one for the specular, then it's easy: I'm using the specular texture in the lighting calculations and the diffuse on the actual output. But what happens if there are more textures? How am I defining them in the fragment shader without knowing the actual number? Also, how do I use them?
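
One common convention (an assumption/pattern, not something Assimp itself mandates) is to name the samplers with an index suffix -- diffuse0, diffuse1, specular0, ... -- declare as many in the fragment shader as you plan to support, and bind from the C++ side only whatever the material actually has. A minimal sketch; bindMaterial is a hypothetical helper and the GL loader choice is an assumption:

#include <glad/glad.h> // any GL loader works; glad is assumed here
#include <string>
#include <vector>

// Binds each texture to its own texture unit and points the matching
// "diffuse<N>" / "specular<N>" sampler uniform at that unit.
void bindMaterial(GLuint program,
                  const std::vector<GLuint>& diffuse,
                  const std::vector<GLuint>& specular)
{
    int unit = 0;
    auto bindAll = [&](const std::vector<GLuint>& textures, const char* base)
    {
        for (size_t i = 0; i < textures.size(); ++i)
        {
            std::string name = base + std::to_string(i); // e.g. "diffuse0"
            glActiveTexture(GL_TEXTURE0 + unit);
            glBindTexture(GL_TEXTURE_2D, textures[i]);
            glUniform1i(glGetUniformLocation(program, name.c_str()), unit);
            ++unit;
        }
    };
    bindAll(diffuse, "diffuse");
    bindAll(specular, "specular");
}

In practice, for simple Phong-style rendering, one diffuse and one specular map per material is by far the most common case, so many renderers just use the first of each and ignore any extras.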
  5. Hello everyone. While I do have a B.S. in Game Development, I am currently unable to answer a very basic programming question. In the eyes of OpenGL, does it make any difference if the program uses integers or floats? More specifically, characters, models, and other items have coordinates. Right now, I am very tempted to use integers for the coordinates. The two reasons for this are accuracy and perhaps optimizing calculations. If multiplying two floats is more expensive in the eyes of the CPU, then this is a very powerful reason not to use floats to contain the positions or vectors of game objects. Please forgive my naivety and my not knowing the preferences of the GPU. I hope this thread becomes a way for us to learn how to program better as a community. -Kevin
  6. I'm having trouble with GLEW sharing some defines, and I can't resolve it. Does anyone know of a way to get the following statements working without an include of GLEW (including GLEW resolves the red squigglies too)?

glColor3f(0, 1, 0.);
glRasterPos2i(10, 10);

I really want to use a quick GLUT command for now. The command uses the statements above. Thank you, Josheir
  7. As the title says, I'm a bit stumped on this. I'm not sure what to do for the write_callback of libFLAC++. I have implemented the rest of the callbacks correctly (I think), so that libFLAC can decode using an ifstream rather than a C-style FILE*. Below is my implementation of the decoder and its callbacks.

#include <FLAC++/decoder.h>
#include <fstream>
#include <iostream>

class FLACStreamDecoder : public FLAC::Decoder::Stream
{
private:
    std::ifstream& input;
    uint32_t sample_rate;
    uint32_t channels;
    uint32_t bits_per_sample;

public:
    ~FLACStreamDecoder();

    // The FLAC decoder will take ownership of the ifstream.
    FLACStreamDecoder(std::ifstream& arg) : FLAC::Decoder::Stream(), input(arg) {}

    uint32_t getSampleRate() { return sample_rate; }
    uint32_t getChannels() { return channels; }
    uint32_t getBitsPerSample() { return bits_per_sample; }

    virtual void metadata_callback(const FLAC__StreamMetadata *);
    virtual ::FLAC__StreamDecoderReadStatus read_callback(FLAC__byte *, size_t *);
    virtual ::FLAC__StreamDecoderWriteStatus write_callback(const FLAC__Frame *, const FLAC__int32 * const *);
    virtual void error_callback(FLAC__StreamDecoderErrorStatus);
    virtual ::FLAC__StreamDecoderSeekStatus seek_callback(FLAC__uint64 absolute_byte_offset);
    virtual ::FLAC__StreamDecoderTellStatus tell_callback(FLAC__uint64 *absolute_byte_offset);
    virtual ::FLAC__StreamDecoderLengthStatus length_callback(FLAC__uint64 *stream_length);
    virtual bool eof_callback();
};

FLACStreamDecoder::~FLACStreamDecoder()
{
    input.close();
}

void FLACStreamDecoder::metadata_callback(const FLAC__StreamMetadata * metadata)
{
    std::cerr << "metadata callback called!" << std::endl;
    if (FLAC__METADATA_TYPE_STREAMINFO == metadata->type)
    {
        std::cerr << "streaminfo found!" << std::endl;
        sample_rate = metadata->data.stream_info.sample_rate;
        channels = metadata->data.stream_info.channels;
        bits_per_sample = metadata->data.stream_info.bits_per_sample;
    }
}

static_assert(sizeof(char) == sizeof(FLAC__byte), "invalid char size");

FLAC__StreamDecoderReadStatus FLACStreamDecoder::read_callback(FLAC__byte * buffer, size_t * nbytes)
{
    if (nbytes && *nbytes > 0)
    {
        input.read(reinterpret_cast<char *>(buffer), *nbytes);
        *nbytes = input.gcount();
        if (input.fail())
            return FLAC__STREAM_DECODER_READ_STATUS_ABORT;
        else if (input.eof())
            return FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM;
        else
            return FLAC__STREAM_DECODER_READ_STATUS_CONTINUE;
    }
    else
    {
        return FLAC__STREAM_DECODER_READ_STATUS_ABORT;
    }
}

::FLAC__StreamDecoderSeekStatus FLACStreamDecoder::seek_callback(FLAC__uint64 absolute_byte_offset)
{
    if (input.is_open())
    {
        input.seekg(absolute_byte_offset);
        return FLAC__StreamDecoderSeekStatus::FLAC__STREAM_DECODER_SEEK_STATUS_OK;
    }
    return FLAC__StreamDecoderSeekStatus::FLAC__STREAM_DECODER_SEEK_STATUS_ERROR;
}

::FLAC__StreamDecoderTellStatus FLACStreamDecoder::tell_callback(FLAC__uint64 *absolute_byte_offset)
{
    if (input.is_open())
    {
        *absolute_byte_offset = input.tellg();
        return FLAC__StreamDecoderTellStatus::FLAC__STREAM_DECODER_TELL_STATUS_OK;
    }
    return FLAC__StreamDecoderTellStatus::FLAC__STREAM_DECODER_TELL_STATUS_ERROR;
}

::FLAC__StreamDecoderLengthStatus FLACStreamDecoder::length_callback(FLAC__uint64 *stream_length)
{
    if (input.is_open())
    {
        std::streampos currentPos = input.tellg();
        input.seekg(0, std::ios::end); // seeking to the end needs both an offset and a direction
        *stream_length = input.tellg();
        input.seekg(currentPos);
        return FLAC__StreamDecoderLengthStatus::FLAC__STREAM_DECODER_LENGTH_STATUS_OK;
    }
    return FLAC__StreamDecoderLengthStatus::FLAC__STREAM_DECODER_LENGTH_STATUS_ERROR;
}

bool FLACStreamDecoder::eof_callback()
{
    return input.eof();
}

// This is called for every audio frame.
FLAC__StreamDecoderWriteStatus FLACStreamDecoder::write_callback(const FLAC__Frame * frame, const FLAC__int32 * const * buffer)
{
    // The size of a FLAC frame is frame->header.channels * frame->header.blocksize.
    // That is, the size of the buffer array is the number of channels in the current
    // frame, times the number of samples per channel (blocksize).
    return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}

void FLACStreamDecoder::error_callback(FLAC__StreamDecoderErrorStatus status)
{
    std::string msg;
    switch (status)
    {
    case FLAC__STREAM_DECODER_ERROR_STATUS_BAD_HEADER:          msg = "BAD HEADER"; break;
    case FLAC__STREAM_DECODER_ERROR_STATUS_LOST_SYNC:           msg = "LOST SYNC"; break;
    case FLAC__STREAM_DECODER_ERROR_STATUS_FRAME_CRC_MISMATCH:  msg = "FRAME CRC MISMATCH"; break;
    case FLAC__STREAM_DECODER_ERROR_STATUS_UNPARSEABLE_STREAM:  msg = "UNPARSEABLE STREAM"; break;
    default:                                                    msg = "UNKNOWN ERROR"; break;
    }
    std::cerr << msg << std::endl;
}

As you see, I have no idea what to do for the write_callback. Any help with regards to that would be appreciated. To be a bit more clear, the problem is that WASAPI has a frame size of numChannels * bitsPerSample bits, or numChannels * bitsPerSample / 8 bytes. I can't seem to figure out how to go from a FLAC frame to a WASAPI frame. I'll also paste my WASAPI playback code below:

#pragma once

#include <iostream>
#define NOMINMAX
#include <Mmdeviceapi.h>
#include <Audioclient.h>
#include <fstream>
#include <algorithm>

class WASAPIBackend
{
public:
    WASAPIBackend();
    ~WASAPIBackend();

private:
    HRESULT hr;
    IMMDeviceEnumerator* pDeviceEnumerator;
    IMMDevice* pDevice;
    IAudioClient3* pAudioClient;
    IAudioRenderClient* pAudioRenderClient;
    //WAVEFORMATEX* pMixFormat;
    WAVEFORMATEX mixFormat;
    uint32_t defaultPeriodInFrames, fundamentalPeriodInFrames, minPeriodInFrames, maxPeriodInFrames;
    HANDLE audioSamplesReadyEvent;
};

#include "WASAPIBackend.h"

constexpr void SafeRelease(IUnknown** p)
{
    if (p) { (*p)->Release(); }
}

WASAPIBackend::WASAPIBackend()
    : hr(0), pDeviceEnumerator(nullptr), pDevice(nullptr), pAudioClient(nullptr), pAudioRenderClient(nullptr)/*, pMixFormat(nullptr)*/
{
    try
    {
        // COM result
        hr = S_OK;
        hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        if (FAILED(hr)) throw std::runtime_error("CoInitialize error");

        hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                              __uuidof(IMMDeviceEnumerator), reinterpret_cast<void**>(&pDeviceEnumerator));
        if (FAILED(hr)) throw std::runtime_error("CoCreateInstance error");

        hr = pDeviceEnumerator->GetDefaultAudioEndpoint(EDataFlow::eRender, ERole::eConsole, &pDevice);
        if (FAILED(hr)) throw std::runtime_error("IMMDeviceEnumerator.GetDefaultAudioEndpoint error");
        std::cout << "IMMDeviceEnumerator.GetDefaultAudioEndpoint()->OK" << std::endl;

        hr = pDevice->Activate(__uuidof(IAudioClient3), CLSCTX_ALL, nullptr, reinterpret_cast<void**>(&pAudioClient));
        if (FAILED(hr)) throw std::runtime_error("IMMDevice.Activate error");
        std::cout << "IMMDevice.Activate()->OK" << std::endl;

        WAVEFORMATEX wave_format = {};
        wave_format.wFormatTag = WAVE_FORMAT_PCM;
        wave_format.nChannels = 2;
        wave_format.nSamplesPerSec = 44100;
        //nSamplesPerSec * nBlockAlign
        wave_format.nAvgBytesPerSec = 44100 * 2 * 16 / 8;
        wave_format.nBlockAlign = 2 * 16 / 8;
        wave_format.wBitsPerSample = 16;
        //pAudioClient->GetMixFormat(reinterpret_cast<WAVEFORMATEX**>(&wave_format));

        hr = pAudioClient->GetSharedModeEnginePeriod(&wave_format, &defaultPeriodInFrames,
                                                     &fundamentalPeriodInFrames, &minPeriodInFrames, &maxPeriodInFrames);
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.GetDevicePeriod error");
        std::cout << "default device period=" << defaultPeriodInFrames << "[frames]" << std::endl;
        std::cout << "minimum device period=" << minPeriodInFrames << "[frames]" << std::endl;

        hr = pAudioClient->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK, minPeriodInFrames, &wave_format, nullptr);
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.Initialize error");
        std::cout << "IAudioClient.Initialize()->OK" << std::endl;

        // event
        audioSamplesReadyEvent = CreateEvent(nullptr, false, false, nullptr);
        if (FAILED(hr)) throw std::runtime_error("CreateEvent error");
        hr = pAudioClient->SetEventHandle(audioSamplesReadyEvent);
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.SetEventHandle error");

        UINT32 numBufferFrames = 0;
        hr = pAudioClient->GetBufferSize(&numBufferFrames);
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.GetBufferSize error");
        std::cout << "buffer frame size=" << numBufferFrames << "[frames]" << std::endl;

        hr = pAudioClient->GetService(__uuidof(IAudioRenderClient), reinterpret_cast<void**>(&pAudioRenderClient));
        std::cout << std::hex << hr << std::endl;
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.GetService error");

        BYTE *pData = nullptr;
        hr = pAudioRenderClient->GetBuffer(numBufferFrames, &pData);
        if (FAILED(hr)) throw std::runtime_error("IAudioRenderClient.GetBuffer error");

        const char* flac_filename = "audio/07 DaMonz - Choose Your Destiny (Super Smash Bros. Melee).flac";
        std::ifstream stream(flac_filename, std::ifstream::binary);
        FLACStreamDecoder streamer(stream);
        auto initStatus = streamer.init();
        if (FLAC__STREAM_DECODER_INIT_STATUS_OK != initStatus)
        {
            std::cerr << "ERROR INITIALIZING" << std::endl;
        }
        else
        {
            streamer.process_until_end_of_metadata();
            if (!streamer.process_single())
                std::cerr << "FAILED PROCESSING" << std::endl;
            else
                std::cerr << "SUCCEEDED PROCESSING" << std::endl;
        }

        hr = pAudioRenderClient->ReleaseBuffer(numBufferFrames, 0);
        if (FAILED(hr)) throw std::runtime_error("IAudioRenderClient.ReleaseBuffer error");

        AudioClientProperties audioClientProp = {};
        audioClientProp.cbSize = sizeof(AudioClientProperties);
        audioClientProp.bIsOffload = true;
        audioClientProp.eCategory = AUDIO_STREAM_CATEGORY::AudioCategory_GameMedia;
        audioClientProp.Options = AUDCLNT_STREAMOPTIONS::AUDCLNT_STREAMOPTIONS_MATCH_FORMAT;
        pAudioClient->SetClientProperties(&audioClientProp);

        hr = pAudioClient->Start();
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.Start error");
        std::cout << "IAudioClient.Start()->OK" << std::endl;

        //bool playing = (streamer.get_total_samples() > numBufferFrames);
        while (/*playing*/ true)
        {
            WaitForSingleObject(audioSamplesReadyEvent, INFINITE);

            uint32_t numPaddingFrames = 0;
            hr = pAudioClient->GetCurrentPadding(&numPaddingFrames);
            if (FAILED(hr)) throw std::runtime_error("IAudioClient.GetCurrentPadding error");

            uint32_t numAvailableFrames = numBufferFrames - numPaddingFrames;
            if (numAvailableFrames == 0) continue;

            hr = pAudioRenderClient->GetBuffer(numAvailableFrames, &pData);
            if (FAILED(hr)) throw std::runtime_error("IAudioRenderClient.GetBuffer error");

            for (size_t i = 0; i < numAvailableFrames; ++i)
            {
                streamer.process_single();
                memcpy(&pData[i], &m_audioFrame, streamer.get_blocksize() * streamer.getChannels());
            }

            hr = pAudioRenderClient->ReleaseBuffer((uint32_t)numAvailableFrames, 0);
            if (FAILED(hr)) throw std::runtime_error("IAudioRenderClient.ReleaseBuffer error");

            //playing = (streamer.get_total_samples() < numAvailableFrames);
        }

        do
        {
            // wait for buffer to be empty
            WaitForSingleObject(audioSamplesReadyEvent, INFINITE);
            uint32_t numPaddingFrames = 0;
            hr = pAudioClient->GetCurrentPadding(&numPaddingFrames);
            if (FAILED(hr)) throw std::runtime_error("IAudioClient.GetCurrentPadding error");
            if (numPaddingFrames == 0)
            {
                std::cout << "current buffer padding=0[frames]" << std::endl;
                break;
            }
        } while (true);

        hr = pAudioClient->Stop();
        if (FAILED(hr)) throw std::runtime_error("IAudioClient.Stop error");
        std::cout << "IAudioClient.Stop()->OK" << std::endl;
    }
    catch (std::exception& ex)
    {
        std::cout << "error:" << ex.what() << std::endl;
    }
}

WASAPIBackend::~WASAPIBackend()
{
    //CoTaskMemFree(pMixFormat);
    if (audioSamplesReadyEvent) CloseHandle(audioSamplesReadyEvent);
    SafeRelease(reinterpret_cast<IUnknown**>(&pAudioRenderClient));
    SafeRelease(reinterpret_cast<IUnknown**>(&pAudioClient));
    SafeRelease(reinterpret_cast<IUnknown**>(&pDevice));
    SafeRelease(reinterpret_cast<IUnknown**>(&pDeviceEnumerator));
    CoUninitialize();
}

The playback loop is most definitely broken, but I can't fix it if the decoder isn't working. Please note that I'm doing this for learning purposes, to get a better understanding of how libraries like SDL_Mixer and libsndfile work at a more fundamental level.
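
For the frame-format question itself: FLAC hands write_callback planar data (one FLAC__int32 array per channel), while a 16-bit PCM WASAPI stream wants interleaved frames (left sample, right sample, left, right, ...). A minimal sketch of the conversion, assuming bits_per_sample == 16 so each decoded int32 already fits in an int16; interleaveFrame is a hypothetical helper:

#include <cstdint>
#include <vector>
#include <FLAC++/decoder.h>

// Convert one decoded FLAC frame into interleaved 16-bit PCM. The result holds
// blocksize frames, each frame being `channels` consecutive samples -- exactly
// the numChannels * bitsPerSample / 8 bytes-per-frame layout WASAPI expects.
std::vector<int16_t> interleaveFrame(const FLAC__Frame* frame,
                                     const FLAC__int32* const* buffer)
{
    const uint32_t channels  = frame->header.channels;
    const uint32_t blocksize = frame->header.blocksize; // samples per channel
    std::vector<int16_t> out(static_cast<size_t>(channels) * blocksize);
    for (uint32_t s = 0; s < blocksize; ++s)
        for (uint32_t c = 0; c < channels; ++c)
            out[s * channels + c] = static_cast<int16_t>(buffer[c][s]);
    return out;
}

A typical pattern is for write_callback to append this into a FIFO owned by the decoder, and for the WASAPI loop to copy up to numAvailableFrames * nBlockAlign bytes out of that FIFO into pData on each wakeup, since a FLAC blocksize will rarely match the render buffer size.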
  8. Looking for example code... I can find lots of ways to find the points of intersection, but not the rectangle solution I'm looking for (see picture). The black rect is a tilemap, the red rect is the camera... I'm looking to supply two rectangles (with four members - x, y, w, h), and receive back essentially the green rectangle in the examples. Also, what's this problem known as?
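
This problem is usually called rectangle intersection (or, for axis-aligned rectangles, AABB intersection/clipping). A minimal sketch using the x/y/w/h layout from the question; Rect is a hypothetical stand-in for whatever struct the tilemap and camera actually use:

#include <algorithm>

struct Rect { float x, y, w, h; };

// Returns the overlapping region of a and b (the "green rectangle"),
// or a zero-sized rect when they don't overlap.
Rect intersect(const Rect& a, const Rect& b)
{
    float x1 = std::max(a.x, b.x);
    float y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    if (x2 <= x1 || y2 <= y1)
        return { 0.0f, 0.0f, 0.0f, 0.0f };
    return { x1, y1, x2 - x1, y2 - y1 };
}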
  9. Hello there. Some time ago I created an EventBus implementation with friends. I have used both the previous and the recent implementation in 3-4 games that I was part of. During this time I only received feedback from developers who were working with me, so now I would like to get more feedback: what do you think, and maybe you have some requests/improvements? It would be nice to have some opinions from "fresh" developers as well as more advanced ones. My main questions (you don't have to answer if you don't want to):
- How does it generally look? Is it OK? (I mean your first impression, the main README)
- Is it easy to use?
- Does the documentation explain what it needs to (in a clear way), especially if you haven't previously used the event bus pattern?
- What is missing for you?
- Why wouldn't you want to use it? (e.g. license, minimum C++11, etc.)
- Are you using another implementation? If yes, please mention it :)
https://github.com/gelldur/EventBus
  10. Gnollrunner

    Bumpy World

    After a LOT of bug fixes, and some algorithm changes, our octree marching prisms algorithm is now in a much better state. We added a completely new way of determining chunk level transitions, but before discussing it we will first talk a bit more about our voxel octree. Our octree is very explicit. By that we mean it is built up of actual geometric components. First we have voxel vertexes (not to be confused with mesh vertexes) for the corners of voxels. Then we have voxel edges that connect them. Then we have voxel faces, which are made up of a set of voxel edges (either three or four), and finally we have the voxels themselves, which reference our faces. Currently we support prismatic voxels, since they make the best looking world; however, the lower level constructs are designed to also support the more common cubic voxels. In addition to our octree of voxels, voxel faces are kept in quadtrees, while voxel edges are organized into binary trees. Everything is pretty much interconnected, and there is a reference counting system that handles deallocation of unused objects. So why go through all this trouble? The answer: by doing things this way we can avoid traversing the octree when building meshes using our marching prisms algorithm. For instance, if there is a mesh edge on a voxel face, since that face is referenced by the voxels on either side of it, we can easily connect together mesh triangles generated in both voxels. The same goes for voxel edges. A mesh vertex on a voxel edge is shared by all voxels that reference it. So in short, seamless meshes are built in place with little effort. This is important since meshes will be constantly recalculated for LOD as a player moves around. This brings us to chunking. As we talked about in our first entry, a chunk is nothing more than a sub-section of the octree. Most importantly, we need to know where there are up and down chunk transitions. Here our face quadtrees and edge binary trees help us out. From the top of any chunk we can quickly traverse the quadtrees and binary trees and tag faces and edges as transition objects. The algorithm is quite simple, since we know there will only be one level difference between chunks, and therefore if there is a level difference, one level will be odd and the other even. So we can tag our edges and faces with up to two chunk levels in a 2-element array indexed by the last bit of the chunk level (see the sketch below). After going down the borders of each chunk, border objects will now have one of two states: they will be tagged with a single level, or with two levels, one being one higher than the other. From this we can now generate transition voxels with no more need to look at a voxel's neighboring voxels. One more note about our explicit voxels: since they are in fact explicit, there is no requirement that they form a regular grid. As we said before, our world grid is basically wrapped around a sphere, which gives us fairly uniform terrain no matter where you are on the globe. Hopefully in the future we can also use this versatility to build trees. Ok, so it's picture time......... We added some 3D simplex noise to get something that isn't a simple sphere. Hopefully in our next entry we will try a multi-fractal.
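A minimal sketch of the two-slot tagging trick described above (names are illustrative, not the engine's actual types): because adjacent chunks differ by at most one level, one of any two meeting levels is even and the other odd, so the low bit of the level indexes the array without collisions.

#include <cstdint>

struct BorderTag
{
    int32_t level[2] = { -1, -1 };          // -1 means "no chunk has tagged this yet"

    void tag(int32_t chunkLevel)
    {
        level[chunkLevel & 1] = chunkLevel; // even levels land in slot 0, odd in slot 1
    }

    // True when two neighboring chunks of different levels share this face/edge.
    bool isTransition() const
    {
        return level[0] >= 0 && level[1] >= 0 && level[0] != level[1];
    }
};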
  11. Hi, I have an object handle to each actual object block in memory, because the object blocks move position in the heap for various reasons. This leads to indirect access to the object block through the object handle, as it is the only pointer to it. This becomes a problem when I want to integrate with STL containers, because a custom STL allocator seems to require direct access to the object block, and I'm not sure how to provide that when the object handle is the only reliable means of access. The best solutions I have so far are either modifying the containers or writing my own. Can anyone help me?
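For reference, a minimal sketch of the indirection being described (illustrative, not the poster's code): every access goes handle -> table -> current address, and the compactor patches the table whenever it moves a block.

#include <cstdint>
#include <vector>

// Maps a stable handle value to a block's current address.
struct HandleTable
{
    std::vector<void*> slots;               // slot index == handle value

    void* resolve(uint32_t handle) const { return slots[handle]; }

    // Called by the heap compactor whenever it relocates a block.
    void relocate(uint32_t handle, void* newAddress) { slots[handle] = newAddress; }
};

The friction with the STL is exactly what the poster suspects: allocator-based containers hand out raw pointers and iterators that must stay valid between calls, so a moving heap generally forces either pinning blocks while a container uses them, or custom containers that re-resolve handles on every access.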
  12. Hello everyone, please, I want to be sure whether this is the best and most optimized way, or even the worst way 🙁. What I want is to be able to export a struct from my DLL.

struct UserIdentity
{
    std::string name;
    int id;
};

std::map<std::string, int> m_UserIdentities =
{
    { "Bob", 100 }, { "Jone", 101 }, { "Alice", 102 }, { "Doe", 103 }
};

/*
 * Will be used in a DLL that will export the UserIdentity struct.
 * OUT _UserIdentity
 */
void Ui_export(UserIdentity *_UserIdentity)
{
    for (auto& t : m_UserIdentities)
    {
        _UserIdentity->name = t.first;
        _UserIdentity->id = t.second;
    }
}

Regards.
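Two things stand out, for what it's worth: the loop writes every map entry into the same single struct, so the caller only ever sees the last one, and passing std::string (or any STL type) across a DLL boundary ties both sides to the same compiler and runtime. A minimal sketch of a safer shape (illustrative names; it reuses the m_UserIdentities map from the post):

#include <cstring>

// Plain-old-data layout: safe to hand across a DLL boundary.
struct UserIdentityC
{
    char name[64];
    int  id;
};

// Fills up to 'capacity' entries and returns how many were written.
int Ui_export_all(UserIdentityC* out, int capacity)
{
    int count = 0;
    for (auto& t : m_UserIdentities)
    {
        if (count >= capacity) break;
        std::strncpy(out[count].name, t.first.c_str(), sizeof(out[count].name) - 1);
        out[count].name[sizeof(out[count].name) - 1] = '\0';
        out[count].id = t.second;
        ++count;
    }
    return count;
}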
  13. komires

    Matali Physics 4.3 Released

    We are pleased to announce the release of Matali Physics 4.3. The latest version introduces significant changes in support for DirectX 12 and Vulkan. The introduced changes put the use of DirectX 12 and Vulkan on a par with DirectX 11 and OpenGL respectively, significantly reducing the costs associated with adopting low-level graphics APIs. From version 4.3, we recommend using DirectX 12 and Vulkan in projects that are developed in the Matali Physics environment. What is Matali Physics? Matali Physics is an advanced, multi-platform, high-performance 3D physics engine intended for games, virtual reality and physics-based simulations. Matali Physics and its add-ons form a physics environment which provides complex physical simulation and physics-based modeling of objects both real and imagined. Main benefits of using Matali Physics: a stable, high-performance solution supplied together with a rich set of add-ons for all major mobile and desktop platforms (both 32 and 64 bit); advanced samples ready to use in your own games; new features on request; dedicated technical support; regular updates and fixes. You can find out more information at www.mataliphysics.com
  15. Hi all, I'm targeting OpenGL 2.1 (so no VAOs for me 😢) and I have multiple shaders rendering quite different things each frame; for the sake of argument, say that I have just two shaders - one shader draws the background (so it uses vertex attributes for texture coordinates, 2D positions, etc.) while the other draws some meshes (so it uses vertex attributes for colors and 3D positions, and no textures). The data provided to each of the shaders changes every frame. Now, I've realized I can choose between two different strategies when it comes to binding vertex attribute locations. With the first strategy I reuse vertex attribute indices, whereby the first shader would bind to, say, attribute locations 0, 1, 2, 3 and the second shader would bind to, say, 0, 1, 2. With this approach I have to constantly call glVertexAttribPointer for these indices, as each frame one shader requires one set of VBOs to feed 0, 1, 2, 3 and the other shader requires another set of VBOs to feed 0, 1, 2. With the second strategy, instead, I use "dedicated" vertex attribute indices: the first shader binds to 0, 1, 2, 3 and the second shader binds to 4, 5, 6. In other words, each vertex attribute index has its own dedicated VBO feeding data to it. The advantage of this approach is that I need to call glVertexAttribPointer only once per index, say at program initialization, but at the same time it limits my capacity to "grow" shaders in the future (my GPU only supports 16 vertex attributes, hence it will be hard to add new shaders or to add new attributes to existing shaders). Now, in my implementation I can't see any performance benefit of one versus the other but, truth be told, I'm developing on an ancient Dell laptop with an Intel mobile card... 😩 Nonetheless, I would like this code to run as fast as possible on modern GPUs. Is there any performance benefit to choosing one of the two strategies over the other? And in case there's no performance benefit, is one strategy preferable over the other for reasons that I can't think of at the moment? Thanks so much in advance for any tips!
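For concreteness, a minimal sketch of the first strategy under GL 2.1 (hypothetical VBO and program names; assumes a GL header/loader is already included): re-specifying the pointers is a handful of cheap CPU-side calls per draw, which is generally why the reused-indices approach is preferred over burning dedicated attribute slots.

// Draw the background with attributes 0 (position) and 1 (texcoord).
void drawBackground(GLuint program, GLuint vboPositions, GLuint vboTexCoords)
{
    glUseProgram(program);

    glBindBuffer(GL_ARRAY_BUFFER, vboPositions);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, vboTexCoords);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);

    glDrawArrays(GL_TRIANGLES, 0, 6);   // e.g. a fullscreen quad as two triangles

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
}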
  16. Hello, I was just wondering whether it is common to use a component-based architecture to handle models in OpenGL, much like in an entity component system. This structure has especially helped me in cases where I have models that need different resources. By that I mean: some models need a texture and texture coordinate data to go with it, others just need some color data (vec3). Some have a normals buffer, whereas others get their normals calculated on the fly (geometry shader). Some have an index buffer (rendered with glDrawElements), whereas others don't and get rendered using glDrawArrays, etc. Instead of branching off into complicated hierarchies to create models that have only certain resources, or coming up with strange ways to resolve problems around checking which resources a model has, I just attach components to each model, such as a vertex buffer, texture coordinate buffer, index buffer, etc. So I was just wondering whether I was using the other style of model handling wrong, or whether this style of programming is a viable option and whether there are flaws that I am unable to foresee?
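A minimal sketch of the idea as described (illustrative types; std::optional is C++17, but any nullable member works): the model simply owns whichever GPU resources it actually has, and the renderer branches on their presence instead of on a class hierarchy.

#include <optional>

struct VertexBuffer { unsigned id; };
struct IndexBuffer  { unsigned id; };
struct Texture2D    { unsigned id; };

struct Model
{
    VertexBuffer               positions;  // every model has positions
    std::optional<IndexBuffer> indices;    // only some are indexed
    std::optional<Texture2D>   texture;    // only some are textured

    bool isIndexed()  const { return indices.has_value(); }  // glDrawElements vs glDrawArrays
    bool isTextured() const { return texture.has_value(); }
};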
  17. Hello again. Recently I was trying to apply 6 different textures to a cube and I noticed that some textures would not apply correctly, but if I change the texture image to another one it works just fine. I can't really understand what's going on. I will also attach the image files. So, does this have anything to do with the code, or is it just the image's fault? This is a high quality 2048x2048 texture, brick1.jpg, which does the following: And this is another 512x512 texture, container.jpg, which gets applied correctly with the exact same texture coordinates as the previous one:

Vertex Shader

#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec2 TexCoord;

void main()
{
    gl_Position = proj * view * model * vec4(aPos, 1.0);
    TexCoord = aTexCoord;
}

Fragment Shader

#version 330 core
out vec4 Color;

in vec2 TexCoord;

uniform sampler2D diffuse;

void main()
{
    Color = texture(diffuse, TexCoord);
}

Texture Loader

Texture::Texture(std::string path, bool trans, int unit)
{
    //Reverse the pixels.
    stbi_set_flip_vertically_on_load(1);

    //Try to load the image.
    unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

    //Image loaded successfully.
    if (data)
    {
        //Generate the texture and bind it.
        GLCall(glGenTextures(1, &m_id));
        GLCall(glActiveTexture(GL_TEXTURE0 + unit));
        GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

        //Not Transparent texture.
        if (!trans)
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
        }
        //Transparent texture.
        else
        {
            GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
        }

        //Generate mipmaps.
        GLCall(glGenerateMipmap(GL_TEXTURE_2D));

        //Texture Filters.
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
        GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
    }
    //Loading Failed.
    else
        throw EngineError("There was an error loading image: " + path);

    //Free the image data.
    stbi_image_free(data);
}

Texture::~Texture()
{
}

void Texture::Bind(int unit)
{
    GLCall(glActiveTexture(GL_TEXTURE0 + unit));
    GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
}

Rendering Code:

Renderer::Renderer()
{
    float vertices[] = {
        // positions          // normals           // texture coords
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
         0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
        -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

        -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,
         0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
        -0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 1.0f,
        -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,

        -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
        -0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
        -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
        -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
        -0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
        -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

         0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
         0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

        -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
         0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
         0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
         0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
        -0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
        -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

        -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
         0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
        -0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
        -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
    };

    //Create the Vertex Array.
    m_vao = new Vao();

    //Create the Vertex Buffer.
    m_vbo = new Vbo(vertices, sizeof(vertices));

    //Create the attributes.
    m_attributes = new VertexAttributes();
    m_attributes->Push(3);
    m_attributes->Push(3);
    m_attributes->Push(2);
    m_attributes->Commit(m_vbo);
}

Renderer::~Renderer()
{
    delete m_vao;
    delete m_vbo;
    delete m_attributes;
}

void Renderer::DrawArrays(Cube *cube)
{
    //Render the cube.
    cube->Render();

    unsigned int tex = 0;
    for (unsigned int i = 0; i < 36; i += 6)
    {
        if (tex < cube->m_textures.size())
            cube->m_textures[tex]->Bind();
        GLCall(glDrawArrays(GL_TRIANGLES, i, 6));
        tex++;
    }
}
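One thing worth ruling out in the loader above: stb_image returns whatever channel count the file actually has, while the trans flag hard-codes GL_RGB/GL_RGBA, and OpenGL's default GL_UNPACK_ALIGNMENT of 4 silently skews 3-channel images whose row byte size isn't a multiple of 4. That particular image is 2048 pixels wide, so alignment may not be the culprit here, but deriving the format from m_channels and relaxing the alignment makes the loader robust either way. A sketch, meant to replace the trans branch:

// Choose the upload format from what stb_image actually returned.
GLenum format = (m_channels == 4) ? GL_RGBA : GL_RGB;

// Row uploads default to 4-byte alignment; 3-channel images with odd row
// sizes need this relaxed, or the texture shears diagonally.
GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 1));
GLCall(glTexImage2D(GL_TEXTURE_2D, 0, format, m_width, m_height, 0, format, GL_UNSIGNED_BYTE, data));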
  18. How can I change the skeletal mesh position in a C++ class? Because I got this. I'm working in UE4 4.20. I know that the pivot of the mesh is at the bottom, but there must be a way to change the position in code. By the way, my object is a class derived from Character.
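A minimal sketch, assuming a class derived from ACharacter (the class name AMyCharacter is illustrative): the skeletal mesh component returned by GetMesh() can be offset relative to the capsule with SetRelativeLocation, typically in the constructor.

AMyCharacter::AMyCharacter()
{
    // Offset the bottom-pivoted mesh downward so it lines up with the capsule.
    GetMesh()->SetRelativeLocation(FVector(0.f, 0.f, -90.f));
}

The -90.f is just a placeholder; the right value depends on the capsule half-height.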
  19. phil67rpg

    plane game

    Well, I have drawn two plane sprites and have them move around the screen using keys. One of them can also shoot bullets, and I have drawn an animated collision sprite.
  20. Hello everyone, I have a strange problem. I have been coding C++ with Code::Blocks 17.12, using SDL 2.0.8 and SDL_image 2.0.3, with MinGW and the GCC compiler. When I run my app in Code::Blocks under either Debug or Release it runs fine, but when I go to my Release output folder and double-click my EXE, it merely opens and immediately closes. I have the SDL and SDL_image DLLs in the folder with my release build, and I'm not getting errors that any DLLs are missing. This is strange! Help would be much appreciated. Thank you, -Omerta
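Two common culprits, for what it's worth (either may or may not apply here): relative asset paths - the working directory is the project folder when launched from the IDE but the EXE's folder when double-clicked - and missing compiler runtime DLLs. A minimal sketch for diagnosing it, writing failures to a log file since the console vanishes when double-clicked (the file name is illustrative):

#include <SDL.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        // Write the failure somewhere visible, since the window closes
        // immediately when the EXE is double-clicked.
        FILE* log = std::fopen("error.log", "w");
        if (log)
        {
            std::fprintf(log, "SDL_Init failed: %s\n", SDL_GetError());
            std::fclose(log);
        }
        return 1;
    }
    // ... load assets with paths relative to the EXE, not the IDE project ...
    SDL_Quit();
    return 0;
}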
  21. Hello everyone. This is my first post on GameDev.net. I'm not in the industry in any way; I'm just a hobbyist who likes to program. I was a CS major in college but went into another field entirely. I have been helping an emulator project for a now-shut-down MMORPG and wanted to get some opinions from people who know more than I do about setting up systems to handle things like spell casting and the timers that go with it. I set up a functional spell casting system, but since it's the first time I have done anything like this, I wanted to check whether you have suggestions on how it *should* be set up. This is what I did. Each map zone is run by its own zone server. I gave each zone server a SpellProcess class, whose constructor is called when the zone server constructor is called. As the zone server updates, it calls the SpellProcess::Process() function, which checks whether any spells are waiting to be cast. It runs these checks every 50ms. When a PC or NPC casts a spell, if that spell has a cast time (some are instant use), it adds the spell to the casting queue and sets the cast_time as a time_t value. This info is placed in an unordered_map with the PC/NPC as the key and the spell struct as the value. When SpellProcess::Process() runs, it checks whether the current time is greater than the cast time and, if so, moves that struct to a separate vector for processing. The SpellProcess::Process() function will also check TickTimers for pulse effects, SpellTimers for other functions (like the fading of a detrimental/beneficial spell, firing a secondary effect, etc.), and Maintained Effect Timers (which cover things like bard songs and auras). Now, as far as the system for beneficial and detrimental effects goes, I could really use some advice. Initially I had a global list that stored the active effects for every PC/NPC in the game, but I moved that data to the object level, so every PC or NPC maintains its own list of effects. So in one example, Player A casts a buff on Player B. If that buff has a cast time, it's added to the active_spells map in SpellProcess until the cast time is met and the spell triggers. We have our spells scripted in Lua, so it pulls the data it needs about the spell from the master lists built from the database at runtime and makes the call to Lua. The Lua system will apply a beneficial effect (since it's a buff) to Player B and add that effect to the effects list maintained by Player B. Say that spell has a duration of 30 minutes: a SpellTimer is created in the zone's SpellProcess class, with an end time 30 minutes from cast, that will call the "Remove Beneficial" function from the Lua script for that spell. The same goes for debuffs, except they call the "Remove Detrimental" function. When a timer is created, it is given an index number of type uint32_t, and that index is passed back to the player's effects list and stored. This makes it easy and fast to update that timer if an effect is refreshed or removed by naturally expiring. I know that's probably all as clear as mud, since it's tough to really describe code, but here are my issues. When a player moves from one zone to another, all of that player's timers have to be moved as well. The new zone may have those index numbers already in use by another player, so I would have to verify every index and, if one is used, get the next available index and update the appropriate player effects struct so it could find the timer when it needed to.

These are surmountable issues, but it seems overly complicated, and a better design should be possible. There are 200+ zones in the game and many thousands of NPCs. The emulator will probably never be hugely popular, so let's say 100 concurrent players. How would you design a responsive system that could do all the things I mentioned above in as simple a fashion as possible? Should each PC/NPC object have its own spell casting and timer system? That seems very straightforward, and no timers ever have to be shuffled around, but polling every PC/NPC in the game seems inefficient unless there is a good way to check only the ones that are casting. I could use a character flag that is set to false if all the timer lists are empty for that character, but it would still have to be checking the character flag. Another thought I had was creating a new thread within the server to check a global SpellProcess object that only stored information for PCs. Since NPCs don't change zones, their timers never need to be moved, and the total PC load will be low enough to make that global list not a real burden on processing power. I'm a newbie at this, so any help is appreciated. Let me know if you need more information and I'll happily provide it.
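On the index-collision problem specifically, a minimal sketch of one common alternative (illustrative names, not the emulator's actual types): keep a per-zone priority queue of pending spell events ordered by absolute fire time, so each 50ms tick only inspects the front of the queue, and key entries by a globally unique character ID so nothing has to be re-indexed when a player changes zones - the whole queue entry can simply travel with the player.

#include <cstdint>
#include <queue>
#include <vector>

struct SpellEvent
{
    int64_t  endTimeMs;    // absolute time the timer fires
    uint64_t characterId;  // globally unique, so it stays valid across zones
    uint32_t spellId;

    bool operator>(const SpellEvent& o) const { return endTimeMs > o.endTimeMs; }
};

// Min-heap: the soonest-firing event is always at the top.
using SpellQueue = std::priority_queue<SpellEvent,
                                       std::vector<SpellEvent>,
                                       std::greater<SpellEvent>>;

void process(SpellQueue& queue, int64_t nowMs)
{
    // Only events that are actually due are ever touched.
    while (!queue.empty() && queue.top().endTimeMs <= nowMs)
    {
        SpellEvent ev = queue.top();
        queue.pop();
        // ... look up the character by ID and fire the Lua callback ...
        (void)ev;
    }
}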
  22. I'm currently learning how to import models. I created some code that works well following this tutorial, which uses only one sampler2D in the fragment shader, and the model loads just fine with all the textures. The thing is, what happens when a mesh has more than one texture? The tutorial says to define inside the fragment shader N diffuse and specular samplers with the format texture_diffuseN and texture_specularN and set them via code, where N is 1, 2, 3, ..., max_samplers. I understand that, but how do you use them inside the shader? In the tutorial the shader is:

#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D texture_diffuse1;

void main()
{
    FragColor = texture(texture_diffuse1, TexCoords);
}

which works perfectly for the test model that the tutorial is giving us. Now let's say you have the general shader:

#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D texture_diffuse1;
uniform sampler2D texture_diffuse2;
uniform sampler2D texture_diffuse3;
uniform sampler2D texture_diffuse4;
uniform sampler2D texture_diffuse5;
uniform sampler2D texture_diffuse6;
uniform sampler2D texture_diffuse7;
uniform sampler2D texture_specular1;
uniform sampler2D texture_specular2;
uniform sampler2D texture_specular3;
uniform sampler2D texture_specular4;
uniform sampler2D texture_specular5;
uniform sampler2D texture_specular6;
uniform sampler2D texture_specular7;

void main()
{
    //How am I going to decide here which diffuse texture to output?
    FragColor = texture(texture_diffuse1, TexCoords);
}

Can you explain this to me with a cube example? Let's say I have a cube which is one mesh, and I want to apply a different texture to each face (6 total):

#version 330 core
out vec4 FragColor;

in vec2 TexCoords;

uniform sampler2D texture_diffuse1;
uniform sampler2D texture_diffuse2;
uniform sampler2D texture_diffuse3;
uniform sampler2D texture_diffuse4;
uniform sampler2D texture_diffuse5;
uniform sampler2D texture_diffuse6;

void main()
{
    //How am I going to output the correct texture for each face?
    FragColor = texture(texture_diffuse1, TexCoords);
}

I know that the texture coordinates will apply the texture to the correct face, but how do I know which sampler to use each time the fragment shader is called? I hope you understand why I'm frustrated. Thank you.
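For context, the way most renderers sidestep the "which sampler do I pick" question entirely: a mesh that uses several textures is split into sub-meshes, one per material, and each sub-mesh is drawn with its texture bound to the same single sampler. A minimal sketch (assumes a GL header/loader is included; the SubMesh type is illustrative) - for the cube, each face would simply be one six-index sub-range:

#include <vector>

struct SubMesh
{
    unsigned firstIndex;   // first index of this material's range
    unsigned indexCount;   // number of indices in the range
    GLuint   texture;      // diffuse texture for this range
};

void drawMesh(const std::vector<SubMesh>& parts)
{
    glActiveTexture(GL_TEXTURE0);   // the fragment shader samples unit 0 only
    for (const SubMesh& part : parts)
    {
        glBindTexture(GL_TEXTURE_2D, part.texture);
        glDrawElements(GL_TRIANGLES, part.indexCount, GL_UNSIGNED_INT,
                       (const void*)(part.firstIndex * sizeof(unsigned)));
    }
}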
  23. Hey, I'm currently starting the next iteration of my engine project. There are some parts I'm completely fine with, and other parts and/or code that need refactoring, so this is a refactoring step before I start adding new features. Since I want my code to be modular, with features optionally installed for certain projects while others stay out of sight, I designed a framework that, starting from a core component or module, spreads features over several project files that are merged together into a single project solution (in Visual Studio) by our tooling. This works great for some parts of the code, the Crypto or Input modules for example, but other parts seem to be in the wrong place and need to be moved. Some features currently live in the core component but may belong in their own module, and I feel uncomfortable splitting those parts and deciding what stays in core and what gets its own module. An example is the math code. When using the framework to write a game (engine), I need access to algebra objects like Vector, Quaternion and Matrix, but when writing some kind of matchmaking server I wouldn't need them. So: put them into their own module with their own directory, build script and package description, or keep them in core and accept the size and number of files in that case? What about naming? When cleaning up the folder structure I want to collect together some files that are currently separated. These files are, for example, basic type definitions, utility macros, and parts of my Reflection/RTTI/Meta system (which is intended to partially get its own module as well, because I currently only need it for editor code, but it also supports conditional building of some kind of C#-like attributes). I have already looked at several projects and they don't seem to care that much about this, but growing the code also means growing breaking changes when refactoring in the future. So what are your suggestions/opinions on this topic? Do I overcomplicate things and overengineer modularity, or could it even be more modular? Where is the line between useful and chaotic? Thanks in advance!
  24. babaliaris

    OpenGL Fragment Position

    Take a look at my shaders. Vertex Shader:

#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

out vec3 FragPos;
out vec3 Normal;

void main()
{
    gl_Position = proj * view * model * vec4(aPos, 1.0);
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = normalize(mat3(transpose(inverse(model))) * aNormal);
}

Fragment Shader:

#version 330 core
out vec4 Color;

uniform vec3 viewPos;

struct Material
{
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float shininess;
};
uniform Material material;

struct Light
{
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};
uniform Light light;

in vec3 FragPos;
in vec3 Normal;

void main()
{
    //Calculate the light direction from the fragment towards the light source.
    vec3 lightDirection = normalize(light.position - FragPos);

    //Calculate the camera direction.
    vec3 camDirection = normalize(viewPos - FragPos);

    //Calculate the reflection of the light.
    vec3 reflection = reflect(-lightDirection, Normal);

    //Calculate the diffuse factor.
    float diff = max( dot(Normal, lightDirection), 0.0f );

    //Calculate specular.
    float spec = pow( max( dot(reflection, camDirection), 0.0f ), material.shininess);

    //Create the 3 components.
    vec3 ambient  = material.ambient * light.ambient;
    vec3 diffuse  = (material.diffuse * diff) * light.diffuse;
    vec3 specular = (material.specular * spec) * light.specular;

    //Calculate the final fragment color.
    Color = vec4(ambient + diffuse + specular, 1.0f);
}

I can't understand how this: FragPos = vec3(model * vec4(aPos, 1.0)); is actually the fragment position. This is just the vertex position transformed into world coordinates. The vertex shader is going to get called n times, where n is the number of vertices, so the above code is also going to get called n times. The actual fragments are far more numerous, so how can the above code generate all the fragment positions? Also, what is a fragment's position? Is it the (x, y) you need to move to on the screen in order to find that pixel? I don't think so, because I read that while you are in the fragment shader the actual pixels on the screen have not been determined yet, because the viewport transformation happens at the end of the fragment shader.
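The piece that resolves this question is the rasterizer, which runs between the two shader stages: it determines which pixels a triangle covers, generates one fragment per covered sample (the viewport transform happens at this stage, before the fragment shader, not at the end of it), and interpolates every out variable across the triangle. So FragPos is the world-space position of the surface point underneath that fragment, produced by weighting the three vertices' world positions. With barycentric weights b0, b1, b2 (where b0 + b1 + b2 = 1) for a fragment inside a triangle whose world-space corners are p0, p1, p2:

    FragPos = b0*p0 + b1*p1 + b2*p2

(in practice the weights are perspective-corrected). The same weighting is applied to Normal and every other interpolated output, which is why the vertex shader only needs to run three times per triangle.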
  25. isu diss

    3D Rigidbody Simulation

    I started to build a physics engine using "An Introduction to Physically Based Modeling" http://www.cs.cmu.edu/afs/cs/user/baraff/www/pbm/pbm.html by the authors (Andrew Witkin, David Baraff and Michael Kass). This physics engine is used in a cricket simulation. So far I'm trying to simulate the cricket ball. The problem is, the ball should spin around its body-space x-axis, given the angular momentum around the body-space x-axis, but it only spins around the world-space x-axis. How do I change the coordinate axis? Rigid body dynamics code:

RigidBody::RigidBody(float M, XMMATRIX IBody, XMVECTOR x0, XMVECTOR v0, XMVECTOR L0, XMMATRIX R0)
{
    Mass = M;
    IBodyInverse = XMMatrixInverse(NULL, IBody);
    x = x0;
    v = v0;
    L = L0;
    I0_Inverse = XMMatrixMultiply(XMMatrixTranspose(R0), IBodyInverse);
    I0_Inverse = XMMatrixMultiply(I0_Inverse, R0);
    Omega = XMVector4Transform(L0, I0_Inverse);
    q = MatrixToQuaternion(R0);
    F_Total = XMVectorSet(0, 0, 0, 0);
    Tau_Total = XMVectorSet(0, 0, 0, 0);
}

XMMATRIX RigidBody::QuaternionToMatrix(XMVECTOR q)
{
    float vx = q.m128_f32[0];
    float vy = q.m128_f32[1];
    float vz = q.m128_f32[2];
    float s  = q.m128_f32[3];
    return XMMatrixSet(
        (1 - ((2*vy*vy) + (2*vz*vz))), ((2*vx*vy) + (2*s*vz)),        ((2*vx*vz) - (2*s*vy)),        0,
        ((2*vx*vy) - (2*s*vz)),        (1 - ((2*vx*vx) + (2*vz*vz))), ((2*vy*vz) + (2*s*vx)),        0,
        ((2*vx*vz) + (2*s*vy)),        ((2*vy*vz) - (2*s*vx)),        (1 - ((2*vx*vx) + (2*vy*vy))), 0,
        0, 0, 0, 1);
}

XMVECTOR RigidBody::MatrixToQuaternion(XMMATRIX m)
{
    XMVECTOR q_tmp;
    float tr, s;
    tr = m.r[0].m128_f32[0] + m.r[1].m128_f32[1] + m.r[2].m128_f32[2];
    if (tr >= 0)
    {
        s = sqrt(tr + 1);
        q_tmp.m128_f32[3] = 0.5f*s;
        s = 0.5f/s;
        q_tmp.m128_f32[0] = (m.r[2].m128_f32[1] - m.r[1].m128_f32[2])*s;
        q_tmp.m128_f32[1] = (m.r[0].m128_f32[2] - m.r[2].m128_f32[0])*s;
        q_tmp.m128_f32[2] = (m.r[1].m128_f32[0] - m.r[0].m128_f32[1])*s;
    }
    else
    {
        int i = 0;
        if (m.r[1].m128_f32[1] > m.r[0].m128_f32[0]) i = 1;
        if (m.r[2].m128_f32[2] > m.r[i].m128_f32[i]) i = 2;
        switch (i)
        {
        case 0:
        {
            s = sqrt((m.r[0].m128_f32[0] - (m.r[1].m128_f32[1] + m.r[2].m128_f32[2])) + 1);
            q_tmp.m128_f32[0] = 0.5f*s;
            s = 0.5f/s;
            q_tmp.m128_f32[1] = (m.r[0].m128_f32[1] + m.r[1].m128_f32[0])*s;
            q_tmp.m128_f32[2] = (m.r[2].m128_f32[0] + m.r[0].m128_f32[2])*s;
            q_tmp.m128_f32[3] = (m.r[2].m128_f32[1] - m.r[1].m128_f32[2])*s;
            break;
        }
        case 1:
        {
            s = sqrt((m.r[1].m128_f32[1] - (m.r[2].m128_f32[2] + m.r[0].m128_f32[0])) + 1);
            q_tmp.m128_f32[1] = 0.5f*s;
            s = 0.5f/s;
            q_tmp.m128_f32[2] = (m.r[1].m128_f32[2] + m.r[2].m128_f32[1])*s;
            q_tmp.m128_f32[0] = (m.r[0].m128_f32[1] + m.r[1].m128_f32[0])*s;
            q_tmp.m128_f32[3] = (m.r[0].m128_f32[2] - m.r[2].m128_f32[0])*s;
            break;
        }
        case 2:
        {
            s = sqrt((m.r[2].m128_f32[2] - (m.r[0].m128_f32[0] + m.r[1].m128_f32[1])) + 1);
            q_tmp.m128_f32[2] = 0.5f*s;
            s = 0.5f/s;
            q_tmp.m128_f32[0] = (m.r[2].m128_f32[0] + m.r[0].m128_f32[2])*s;
            q_tmp.m128_f32[1] = (m.r[1].m128_f32[2] + m.r[2].m128_f32[1])*s;
            q_tmp.m128_f32[3] = (m.r[1].m128_f32[0] - m.r[0].m128_f32[1])*s;
            break;
        }
        }
    }
    return q_tmp;
}

XMVECTOR RigidBody::GetPosition() { return x; }
XMMATRIX RigidBody::GetOrientation() { return R; }
void RigidBody::AddForce(XMVECTOR Force) { F_Total += Force; }
void RigidBody::AddTorque(XMVECTOR Torque) { Tau_Total += Torque; }

void RigidBody::Update(float h)
{
    x += h*v;
    v += h*(F_Total/Mass);
    XMVECTOR Omegaq = XMQuaternionMultiply(q, XMVectorSet(Omega.m128_f32[0], Omega.m128_f32[1], Omega.m128_f32[2], 0));
    q += 0.5f*h*Omegaq;
    L += h*Tau_Total;
    R = QuaternionToMatrix(XMQuaternionNormalize(q));
    IInverse = XMMatrixMultiply(XMMatrixTranspose(R), IBodyInverse);
    IInverse = XMMatrixMultiply(IInverse, R);
    Omega = XMVector4Transform(L, IInverse);
}

Rendering code:

mgWorld = XMMatrixMultiply(rbBall->GetOrientation(),
                           XMMatrixTranslation(rbBall->GetPosition().m128_f32[0],
                                               rbBall->GetPosition().m128_f32[1],
                                               rbBall->GetPosition().m128_f32[2]));
cuvscb.mWorld = XMMatrixTranspose( mgWorld );
cuvscb.mView = XMMatrixTranspose( mgView );
cuvscb.mProjection = XMMatrixTranspose( mgProjection );
cuvscb.mLightView = XMMatrixTranspose(mgLightView);
cuvscb.mLightProjection = XMMatrixTranspose(mgLightProjection);
cuvscb.vCamPos = XMFLOAT3(MyCAM->GetEye().m128_f32[0], MyCAM->GetEye().m128_f32[1], MyCAM->GetEye().m128_f32[2]);
cuvscb.vSun = XMFLOAT3(cos(Theta)*cos(Phi), sin(Theta), cos(Theta)*sin(Phi));
cuvscb.MieCoeffs = MC;
cuvscb.RayleighCoeffs = RC;
pImmediateContext->UpdateSubresource( pVSCUConstBuffer, 0, NULL, &cuvscb, 0, 0 );
pImmediateContext->VSSetConstantBuffers(0, 1, &pVSCUConstBuffer);

Draw code...
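A minimal sketch of one likely fix, assuming the constructor is currently being handed a body-space angular momentum: in the Witkin/Baraff formulation, L, like omega, lives in world space, so a spin about the body x-axis has to be rotated into world space by the initial orientation first. Names like spinRate, mass, IBody, x0, v0 and R0 are illustrative, and depending on your row/column-vector convention the transform may need XMMatrixTranspose(R0) instead:

// Angular momentum for a spin about the ball's body-space x-axis.
XMVECTOR LBody  = XMVectorSet(spinRate, 0.0f, 0.0f, 0.0f);
// Rotate it into world space with the initial orientation R0 (w=0, so only
// the rotation part of R0 applies).
XMVECTOR LWorld = XMVector4Transform(LBody, R0);
RigidBody ball(mass, IBody, x0, v0, LWorld, R0);

From there, recomputing Omega from the world-space L and the updated inverse inertia tensor each step (as Update() already does) keeps the spin aligned with the axis that started out as body x.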