dmatter

Member
  • Content Count

    2762
  • Joined

  • Last visited

  • Days Won

    1

dmatter last won the day on August 30

dmatter had the most liked content!

Community Reputation

4869 Excellent

1 Follower

About dmatter

  • Rank
    Contributor

Personal Information

  • Website
  • Role
    DevOps
    Programmer
    Technical Director
  • Interests
    Business
    DevOps
    Programming

Social

  • Twitter
    daveagill
  • Github
    daveagill

Recent Profile Visitors

20559 profile views
  1. Yeah, C++ doesn't have those in the 'core' language, mainly because they're the kind of functionality that can be provided through libraries. For example, instead of events you might use Boost Signals2. For delegates, the C++ standard library has std::function that you can use. For properties, I would just say it's something that is typically achieved by writing member functions. https://en.cppreference.com and http://www.cplusplus.com/reference are pretty good for learning about what's in the core language and what's in the standard library, while https://www.boost.org/doc/libs/ is a large set of high-quality community libraries and one of the most influential projects in the C++ world. Several libraries from Boost have since been standardised into the C++ standard library (e.g. boost::shared_ptr became std::shared_ptr).
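As a rough sketch of the delegate/event point: std::function can hold any callable with a matching signature, and a C#-style multicast event is then just a list of them. The names here (Event, Subscribe, Raise) are illustrative, not from any library.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// A minimal C#-style "event": a list of std::function callbacks.
// Event, Subscribe and Raise are hypothetical names for illustration.
template <typename... Args>
class Event {
public:
    void Subscribe(std::function<void(Args...)> handler) {
        handlers_.push_back(std::move(handler));
    }
    void Raise(Args... args) const {
        // Invoke every subscribed handler in order.
        for (const auto& h : handlers_) h(args...);
    }
private:
    std::vector<std::function<void(Args...)>> handlers_;
};
```

Lambdas, free functions and bound member functions can all subscribe, which is most of what delegates buy you in C#.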
  2. dmatter

    My main complaint with OOP

    The problem with this example is that it doesn't really follow the basic tenets of object-oriented design; no wonder, then, that it contains many issues. I think the real criticism here is that it is possible to write "object-oriented" code without following good first principles and arrive at a mess.

    This example has beneficial characteristics that are not wholly present in your other examples, which I feel almost makes it an apples-to-oranges comparison. For example, this design enables multiple implementations of a 'circle renderer', from FancyCircleRenderer to GruesomeCircleRenderer and everything in between, all adhering to a common interface. You have also achieved a partial application of the render function. You can pass renderers around in the form of the Renderer interface and invoke DrawToCanvas without ever knowing that you'll be drawing a circle. Imagine a function that takes a List<Renderable> and draws all the shapes in a batch. Of course, there are languages out there that support partial application more succinctly than this.

    I am not sure that the number of classes is a particularly important metric, so I don't really see it as a downside per se. Whether those classes are scattered throughout the codebase is really up to you. Many languages (e.g. C++) allow multiple classes to exist in a single code file, so you could very well implement all of those in one file if you wished, and then it would be no more scattered than your first example.

    This is completely fine, although it obviously lacks 'features' that some of your other examples sported; as the author, you get to decide whether those features were useful to you in the first place. It is all too common to want to draw a 'shape' without knowing specifically what shape it is (circle, square, 3D model, ...), and this solution provides no form of abstraction as-is. That would require further code not shown here.

So whether this example is a fair comparison really depends on the overall requirements. I guess you are referring to Golang's ability to attach functions to a type after the type has already been defined? It's a good approach for Golang, but it may not be a good approach for, say, C++, where it would probably complicate the compilation model even more than it already is. Other languages can also support that through various mechanisms. Golang's approach is to define functions that 'listen' on an existing struct type; this allows types to satisfy an unlimited number of interfaces 'after the fact' and they can subsequently be consumed by functions via a form of static duck-typing. C#'s approach is called 'extension methods', which simply augment some existing type with extra methods. Python's approach allows for methods defined at the time of a class definition, but they can also be attached to objects after construction; as a dynamically typed language, functions rely on duck-typing to consume objects with a compatible interface. Rust's approach is to explicitly implement traits for an existing type, e.g.

```rust
struct Circle {
    center: Vec2,
    radius: f32,
}

impl Renderer for Circle {
    fn DrawToCanvas(&self, c: &Canvas) { }
}
```

This approach is known as 'typeclasses' and is not unique to Rust (e.g. Haskell and Scala support it too). All in all, we live in a multi-paradigm world and nobody is really forcing us to use one approach for everything. Most languages let us pick and choose from at least a small set of approaches, and one of those usually gets the job done. Programming languages are just tools, after all.
  3. ^ There's a lot of discussion here about Rust being managed and using a garbage collector - but Rust is not managed, it does not run atop a runtime VM and it does not use a garbage collector. Rust does have a reference-counted smart-pointer, but so does C++, and that's not usually what is meant by the term "garbage collection". This is why Rust performs comparably to C++ and could be used as a substitute for C++ if it met your needs (sufficient libraries, etc).
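To illustrate why reference counting isn't "garbage collection" in the usual sense: with std::shared_ptr (the C++ analogue of Rust's Rc) destruction is deterministic; the object dies the instant the last owner releases it, with no runtime collector involved. The Resource type below is a made-up example for demonstration.

```cpp
#include <cassert>
#include <memory>

// A hypothetical resource that reports whether it is currently alive,
// so we can observe exactly when reference counting destroys it.
struct Resource {
    explicit Resource(bool* alive) : alive_(alive) { *alive_ = true; }
    ~Resource() { *alive_ = false; }
    bool* alive_;
};
```

Copying the shared_ptr bumps the count; leaving the scope of the last copy runs the destructor immediately, not at some future collection pass.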
  4. dmatter

    jump algorithm

    Looks right to me. I am not sure how you intend to apply this to your game though (or how the video suggests you do that)? What you have is the closed-form equation that describes the motion of your entity jumping under gravity from its initial state (initial position and initial velocity). Depending on your game you may be able to use that approach, but often games will use numerical integration for this instead. I.e. track the current physical properties of the entity (e.g. current position & current velocity) and integrate them with respect to the frame's time delta to obtain the new position for that frame. A full-blown physics library would do all that for you (along with collisions, joints, etc), but lots of games don't need or want a full-blown physics engine, so they just code the basic motion equations using (for example) Euler integration, as it is dead simple to code and understand. Although I typically use velocity Verlet integration as it is more accurate (and still pretty simple). This looks like a decent source for more info: http://lolengine.net/blog/2011/12/14/understanding-motion-in-games
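A minimal sketch of the numerical-integration approach (semi-implicit Euler, one axis, made-up names): track position and velocity, and advance both by the frame's dt each update.

```cpp
// Semi-implicit Euler integration for a jumping entity (1D sketch).
// Body, Integrate and the constants are illustrative, not from the thread.
struct Body {
    float y  = 0.0f;   // current vertical position
    float vy = 0.0f;   // current vertical velocity
};

void Integrate(Body& b, float gravity, float dt) {
    b.vy += gravity * dt;  // update velocity first (semi-implicit Euler)
    b.y  += b.vy * dt;     // then position, using the new velocity
}
```

Calling Integrate once per frame with the frame's dt replaces the closed-form formula; collisions and variable forces then slot in naturally.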
  5. dmatter

    jump algorithm

    If X is simply linear: x = velocityX * t + initialX. Obviously, if your x velocity is 1 and you start at the origin (0), then that simplifies to just x = t.
  6. dmatter

    jump algorithm

    Speaking specifically about that code then Vo is the initial Y velocity and Po is the initial Y position. No part of that code attempts to calculate anything for X. However since the horizontal is not affected by gravity (and not parabolic) then X is likely just a simple linear function of 't'.
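Putting the two posts above together as code, using the thread's Vo/Po naming: horizontal position is linear in t, and vertical position is the parabola under gravity. The function names are my own for illustration.

```cpp
#include <cassert>
#include <cmath>

// Closed-form motion: x is a simple linear function of t,
// y is the parabola from initial velocity Vo and initial position Po.
float PositionX(float t, float velocityX, float initialX) {
    return velocityX * t + initialX;
}

float PositionY(float t, float Vo, float Po, float gravity) {
    return 0.5f * gravity * t * t + Vo * t + Po;
}
```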
  7. I believe you are referring to mesh "simplification", the process of reducing the level-of-detail in a mesh topology.
  8. Since the classic GPU voxelization approach has been mentioned already, I will throw in some other ideas which are also suitable for the CPU.

As you mentioned a "high poly model": one idea would be to treat your model like a point cloud and just flag voxels at those points. That only really works if your voxel density is low compared to your poly density, i.e. you want dense, tiny little polygons compared to the size of your voxels so you can be sure that you won't have any 'holes' in the centre of polygons from the lack of any true rasterization. One trick is to tessellate up your polys to get denser points if necessary, or simply generate extra points over the face of each poly (e.g. emit a point at each vertex plus one at the centre of each triangle).

Properly rasterizing your polys into your voxel grid is the other way to go. At the moment it sounds like you are doing cube-vs-triangle intersection tests between *every* voxel and *every* polygon, which is probably why it's so slow. Whereas if you iterate your polys and rasterize them directly into the voxels that they occupy, it is algorithmically a lot cheaper (roughly linear, if all polys are about the same size as each other) and scales better for large numbers of polys and/or dense voxel grids.

Thirdly, even with your current approach of box-vs-triangle tests you can likely speed things up a lot from where you are now. Hopefully you have some early-out tests, e.g. if all the vertices of the triangle are beyond one side of the box then it's a cheap fail. After that, you could think about doing a coarse-grained spatial partitioning of polys first. The simple example is to split your model in half down the middle and voxelize each half-model into the corresponding half of the voxel grid; now you only have to test 50% of the polys per voxel, so you have doubled your performance. And you don't have to stop there: split it into four and you quadruple your speed (minus some upfront cost to partition your model).

If you're coding this as a CPU algorithm, then for another speed boost you could throw the voxelization of each partition onto a different thread and process them concurrently. Let's say you're on a quad-core machine with 2 hardware threads per core and at least 8 partitions; then you might expect to achieve somewhere between a 4x and 8x speed increase (just from concurrency).
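The point-cloud idea can be sketched quite compactly: map each vertex into grid coordinates and flag that cell. The grid layout and names below are assumptions for illustration, not the poster's actual code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// A hypothetical dense voxel grid: nx*ny*nz boolean occupancy flags.
struct VoxelGrid {
    int nx, ny, nz;
    float cellSize;           // world-space size of one voxel
    Vec3 origin;              // world-space position of voxel (0,0,0)
    std::vector<bool> solid;

    VoxelGrid(int nx, int ny, int nz, float cell, Vec3 o)
        : nx(nx), ny(ny), nz(nz), cellSize(cell), origin(o),
          solid(static_cast<size_t>(nx) * ny * nz, false) {}

    // Flag the voxel containing a world-space point (ignores out-of-range).
    void Mark(const Vec3& p) {
        int i = static_cast<int>(std::floor((p.x - origin.x) / cellSize));
        int j = static_cast<int>(std::floor((p.y - origin.y) / cellSize));
        int k = static_cast<int>(std::floor((p.z - origin.z) / cellSize));
        if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz) return;
        solid[(static_cast<size_t>(k) * ny + j) * nx + i] = true;
    }

    bool IsSolid(int i, int j, int k) const {
        return solid[(static_cast<size_t>(k) * ny + j) * nx + i];
    }
};

// Linear in the number of points - no per-voxel triangle tests at all.
void VoxelizePoints(VoxelGrid& grid, const std::vector<Vec3>& points) {
    for (const Vec3& p : points) grid.Mark(p);
}
```

The holes caveat from the post applies: this only fills voxels that actually contain a sample point, hence the suggestion to tessellate or emit extra points per face first.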
  9. dmatter

    Hey

    Welcome back Aardvajk - Always enjoy reading your posts. Is this project available online anywhere out of interest? Do you have overarching goals in mind for what the language is intended for? If I had to guess I would say it's a "simplified C++" so sort of similar to the goal of Java or C#.
  10. dmatter

    DOOM: Multi Level BVHs

    Works for me on an antiquated Radeon HD 7790 with the window maximised on a 1440p screen at ~4.5 FPS and 15.7 MRays/s. On my 1080p screen that's 7 FPS and 13.3 MRays/s.
  11. I wouldn't be surprised if there are fast-paths in place for small and/or common exponents. That could be as part of the library implementation or applied by the compiler or the hardware.
  12. dmatter

    How to make unit tests for a game engine?

    It sounds like you're confusing unit tests with end-to-end or integration tests. A unit test should test a single unit, not an integration, so you do not, by definition, initialise a window, GLEW, GLFW or any of that for a unit test. So, if you are finding that your unit tests look a lot like integration or e2e tests, then the best advice I can give is to break your code down into more-independent units to decouple those dependencies and allow them to be mocked/stubbed/faked as appropriate. On the other hand, if you really are asking about e2e and integration tests, then they don't need to be 'fast'. Sure, faster is better, but you can always run them overnight and see the results the next morning.
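A tiny sketch of what "decouple and fake" can look like: the logic under test depends on an abstract interface, so the test substitutes a fake instead of creating a window or GL context. Every name here (Renderer, FakeRenderer, DrawHud) is hypothetical.

```cpp
#include <cassert>
#include <string>
#include <vector>

// The game logic depends on this abstraction, not on GLFW/GLEW directly.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void DrawText(const std::string& text) = 0;
};

// A fake for unit tests: records calls instead of touching the GPU.
class FakeRenderer : public Renderer {
public:
    void DrawText(const std::string& text) override { calls.push_back(text); }
    std::vector<std::string> calls;
};

// The unit under test: pure logic, no window or GL context required.
void DrawHud(Renderer& r, int health) {
    r.DrawText("HP: " + std::to_string(health));
}
```

The real build would provide an OpenGL-backed Renderer; the unit test never needs it.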
  13. dmatter

    Has C# replaced C++?

    Eh, I don't see it the same way. C++ seems to be turning into something that nobody ever asked for. The focus seems to be on adding features that aren't relevant to how the language is actually used today. For sure, things like std::thread existing as part of the standard library is a good thing, and so are smart pointers (for a language without a GC) and a few syntactic niceties like lambdas/closures. On the whole, I think all anybody ever really wants from C++ is C-with-classes with a few extra bits of syntactic sugar, plus maybe generics (not even full-blown templates). Within C++ is a much smaller language trying to get out. Yup, I agree. It's kind of amazing to me that nobody has yet replaced C++ with a C#- or Java-like language that is compiled and compatible with C and/or C++ ABIs, or perhaps transpiled to a C++ subset -- something to free people from actually writing C++ without giving up the decades of libraries and compiler optimizations.
  14. You could try Gephi if you don't mind an 'offline' visualisation outside of Unity. A "force-directed layout" is actually pretty easy to implement yourself though. A simple solver could be achieved with just a couple of simple rules: graph nodes repel each other (like charged particles) according to the inverse-square law, and graph edges act like springs (Hooke's law). Implement those physical equations, iterate and integrate using either Euler or velocity-Verlet integration, et voilà. Plus you don't have to figure this out yourself - there are literally tons of codified examples online that you can "be inspired by" 😉
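The two rules above can be sketched as a single solver step in 2D: pairwise inverse-square repulsion, Hooke's-law springs along edges, then Euler integration with damping. All constants and names here are arbitrary choices for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Node { float x, y, vx = 0, vy = 0; };
struct Edge { int a, b; };

// One iteration of a minimal force-directed layout solver.
void Step(std::vector<Node>& nodes, const std::vector<Edge>& edges,
          float repulsion, float stiffness, float restLength,
          float damping, float dt) {
    // Rule 1: nodes repel each other with an inverse-square force.
    for (size_t i = 0; i < nodes.size(); ++i) {
        for (size_t j = i + 1; j < nodes.size(); ++j) {
            float dx = nodes[j].x - nodes[i].x;
            float dy = nodes[j].y - nodes[i].y;
            float d2 = dx * dx + dy * dy + 0.01f;  // avoid divide-by-zero
            float d  = std::sqrt(d2);
            float f  = repulsion / d2;
            nodes[i].vx -= f * dx / d * dt;  nodes[i].vy -= f * dy / d * dt;
            nodes[j].vx += f * dx / d * dt;  nodes[j].vy += f * dy / d * dt;
        }
    }
    // Rule 2: edges pull/push like springs (Hooke's law).
    for (const Edge& e : edges) {
        float dx = nodes[e.b].x - nodes[e.a].x;
        float dy = nodes[e.b].y - nodes[e.a].y;
        float d  = std::sqrt(dx * dx + dy * dy) + 1e-6f;
        float f  = stiffness * (d - restLength);
        nodes[e.a].vx += f * dx / d * dt;  nodes[e.a].vy += f * dy / d * dt;
        nodes[e.b].vx -= f * dx / d * dt;  nodes[e.b].vy -= f * dy / d * dt;
    }
    // Euler integration with damping so the layout settles.
    for (Node& n : nodes) {
        n.vx *= damping;  n.vy *= damping;
        n.x  += n.vx * dt;  n.y += n.vy * dt;
    }
}
```

Run Step in a loop until movement dies down; velocity-Verlet would slot in as a drop-in replacement for the integration part.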
  15. I have never built an ECS before (that's the disclaimer 😆) but I would probably aim to avoid that scenario altogether and keep components aligned so they can be indexed together. Your example was of a component being deleted, so perhaps that's just something to avoid: don't remove components from entities. If components need to be 'dynamic' for an entity (in that they come and go) then perhaps just treat that like a state on the component and add a flag to activate/deactivate it.

If you can't do that, then you could give all components a pointer to a 'junction object' (which is the "entity", if you like) that contains pointers to all the components for that entity. That way, looking up a healthComponent from a collisionComponent could be done like this:

collisionComp->entity->healthComp;

Where entity is a pointer to a struct that has a pointer to every possible component (most would be null, perhaps):

```cpp
struct Entity {
    CollisionComponent* collisionComp;
    HealthComponent* healthComp;
    // etc
};
```

This gives you constant-time access to related components and avoids scanning a pool of components for one with a matching entity ID (which is a linear-time operation).