Shaarigan

Member
  • Content Count

    781
  • Joined

  • Last visited

  • Days Won

    4

Shaarigan last won the day on October 7

Shaarigan had the most liked content!

Community Reputation

1226 Excellent

7 Followers

About Shaarigan

  • Rank
    Advanced Member

Personal Information

  • Role
    Artificial Intelligence
    Programmer
  • Interests
    Art
    Design
    Programming

Social

  • Github
    Shaarigan

  1. In C/C++ this involves the platform-dependent GDI library. On Windows this is GDI+, which comes along with the Windows header. I have seriously never used it! In C# this would be much easier. However, using OpenGL is the simpler way to do it. You can upload your sprite as a texture, which is the preferred way nowadays, but you could also write into a pixel buffer. The latter is possible in legacy GL only, while the texture+vertices way works on whatever platform and driver version you are on. You could take a look at the good old GL tutorials here to learn the basics and go for more advanced stuff once you feel ready to learn matrices and shaders. Moving the sprite is simple: translate your key press into a +1 or -1 on whatever axis you want to use and then translate your rendering via the matrix; see the sketch below. The tutorial shows how this works.
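
    As a minimal sketch in legacy GL (assuming a context already exists and the sprite was uploaded elsewhere as spriteTexture, an illustrative name), drawing the quad and moving it via the matrix could look like this:

        // Draw a textured quad translated by (x, y); x and y change by
        // +1/-1 on key presses. Assumes a valid GL context and texture.
        void DrawSprite(GLuint spriteTexture, float x, float y)
        {
            glBindTexture(GL_TEXTURE_2D, spriteTexture);
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glTranslatef(x, y, 0.0f);  // the matrix translation moves the sprite
            glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(0, 0);
            glTexCoord2f(1, 0); glVertex2f(1, 0);
            glTexCoord2f(1, 1); glVertex2f(1, 1);
            glTexCoord2f(0, 1); glVertex2f(0, 1);
            glEnd();
            glPopMatrix();
        }
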
  2. You can of course do that. You can receive the window messages in every OS that supports them and listen for a Key-Up/Key-Down, maybe Key-Press message. However, you need to use interop in C# or have a window that provides these events. In C for example you utilize the WinMain entry point on Windows and use PeekMessage and TranslateMessage; see the sketch below. By the way, your question isn't very clear: you asked for C but tagged it C#?
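
    A bare-bones Win32 message pump along those lines, as a sketch only (it assumes the window and its WndProc were already created elsewhere):

        #include <windows.h>

        // Drain all pending messages; key input arrives as WM_KEYDOWN/WM_KEYUP.
        void PumpMessages(bool& running)
        {
            MSG msg;
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    running = false;
                if (msg.message == WM_KEYDOWN)
                {
                    // msg.wParam holds the virtual key code, e.g. VK_LEFT
                }
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
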
  3. This is what my statement is about: you can put anything into a unity file, but that also means building your whole architecture every time you change just a single line of code. If you can modularize your project, setting up the Log System, Resource Manager or whatever once and never touching it again, why would you want to put those into your unity file too? Instead, compile them once and use a clever tool like Visual Studio that knows when to recompile. There is no need to process things that were never touched. @lawnjelly's suggestion of using development-time DLLs/SOs and release-time static libraries is a good addition.
  4. Finally, if you still suffer from compile times, you can modularize your project into standalone modules. The Log System for example is a good candidate to be excluded from the main project. Separate such modules into their own static library projects, then just use their public interface and have the linker do the rest of the work; a sketch follows below. Static libraries don't produce additional overhead because they are statically linked together with the rest of your code, and Visual Studio is good at deciding what you really used, so it only links the code used in your final assembly. If you turn incremental builds on, this has the advantage that you only recompile the projects with changes and keep everything else from the previous compiler run. So if you work on your game, for example, only your game gets recompiled while the Log module remains the same and saves the compile time for that code. I expect the linker to be faster than the compiler because it doesn't have to struggle with macros, templates and so on.
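
    As a minimal sketch of such a module boundary (Log.h and Log::Write are illustrative names, not taken from an actual project):

        // Log.h -- public interface of a "Log" static library project.
        // Consumers only include this header and link the library; the
        // implementation is compiled once and reused across incremental builds.
        #pragma once

        namespace Log
        {
            void Write(const char* message);  // defined in the library's Log.cpp
        }
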
  5. To avoid any misapprehensions: a unity file doesn't involve anything from Unity3D or Unity Technologies Inc, it is just named like that because it has all dependencies in one file. The only resource I was able to find about it, without pointing you to the source code, is the UnrealBuildTool Build Configuration chapter. Our compile time was reduced to around 3 minutes after turning Unity Build mode on.
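
    To illustrate, a unity file boils down to nothing more than this (a hypothetical example, not actual UnrealBuildTool output):

        // Unity.cpp -- the build tool concatenates all translation units of a
        // module so the compiler and its preprocessor run only once.
        #include "Renderer.cpp"
        #include "ResourceManager.cpp"
        #include "Log.cpp"
        // ... every other .cpp of the module ...
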
  6. Compilation depends on many circumstances. First the compiler has to handle your preprocessor directives; these are includes but also anything conditional you define with a # in front. The preprocessing unit has to resolve all of these statements from top to bottom before anything else can take place. Every include will be processed: the file is loaded and added to the process too. After the conditionals are resolved and the preprocessor knows what code to push to the compiler, macro replacement takes place. Any macro definition (so a define with additional arguments required) is resolved recursively wherever you used the definition in code. It is truly recursive, until either the preprocessor doesn't find any more defined identifiers in the macro code or the macro calls itself; then the preprocessor breaks. These steps may happen at the same time; this is preprocessor-dependent. The preprocessor I wrote in C# to detect dependencies between C++ files does all of this on the fly, for example. Templates are resolved (specific code is generated for each template for each different set of arguments passed to it, which is why templates may cause code bloat) and then the code is finally pushed to the compilation unit.

    So to answer your question, it depends: how many include files do you have, how many and how complex are the macros you use, and how many templates do you use with different arguments? Did you set the include guards correctly (to not include a file twice that was already processed in this compilation unit), did you include more files than necessary, and could a simple forward declaration be used instead of the whole header? See the sketch below.

    By the way, 40 seconds is nothing. Huge projects like game engines (Unreal for example) use so-called "unity files" where everything is included at once in one file. This is an attempt to reduce the huge build times that may otherwise occur; our Unreal project for example took more than 10 minutes to compile before we toggled it to generate a unity file. What I like to do in such cases is to have our custom build tool generate a dependency graph of each include and where it has been used. This not only helps avoid circular dependencies and modularize the project, but also shows unnecessary include directives that can cause much higher compile times.
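
    A small sketch of the include-guard and forward-declaration point (Widget and Renderer are illustrative names):

        // Widget.h
        #ifndef WIDGET_H            // include guard: a second include of this
        #define WIDGET_H            // file in the same compilation unit is skipped

        class Renderer;             // forward declaration: a reference parameter
                                    // doesn't need the whole Renderer.h header
        class Widget
        {
        public:
            void Draw(Renderer& renderer);  // Renderer.h is included in Widget.cpp only
        };

        #endif // WIDGET_H
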
  7. I think the standard library is outdated too, but in my opinion for reasons like code style, conventions and the heavy nesting of templates. I know that all of this was/is useful and can't be changed easily without breaking backwards compatibility with old code, but nowadays I would prefer more modern naming. I know that such legacy systems often have these problems, especially on the web: HTML 4 is still supported in all modern browsers so as not to break the internet, and JavaScript likewise has old features that developers I talked to would like to get rid of. But on the other side, maybe it would be wise to make a clean break instead of forcing new features into the language, and so into the standard library, like lambdas (they are prohibited where I work because they are a potential performance issue). What's wrong with writing Vector instead of vector, or SharedPtr, or whatever, or having a C#-style generic list instead? I know everybody has his/her preferred style, but as languages like C# are widely used these days, even those styles have changed since C++ was first introduced, and most coding guidelines prefer title case.

    I also use iterators very rarely, because I got to the point that in our case a good old for loop did the trick too. It is like IEnumerator/IEnumerable in C#: there are use cases for it, but you won't use them all the time. But I'm not strictly against iterators, nor do I think they are outdated. I even implemented one of my own for the reflection system, to iterate over the list of member fields/functions. As I mentioned, they are like their C# siblings and can be used whenever a simple index-based iteration is not possible.

    I don't know how you used vector in your code, but if you used push_back all the time, then without deeper knowledge of how you replaced the old code in your project with a c-style array, I think it was slow because you allocated memory too often, on every add of a new element!? The benefit of vector is that it is a growing array (something between the plain c-style array and C#'s generic list), so on a regrow much more memory is allocated than you actually use. The capacity grows every time the vector runs out of memory, so allocations are pretty frequent in the first few pushes and become less frequent later. But I think you already know that, don't you? If used correctly, as a static block of memory, c-style arrays should beat vector in performance.
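
    For illustration, the usual way to avoid the repeated regrow allocations is to reserve up front (a sketch; FillScores is an illustrative name):

        #include <vector>

        // Without reserve(), push_back reallocates and copies whenever the
        // capacity runs out; reserving leaves a single allocation, which is
        // the scenario where vector gets close to a c-style array.
        void FillScores(std::vector<int>& scores, int count)
        {
            scores.reserve(count);
            for (int i = 0; i < count; ++i)
                scores.push_back(i * 2);
        }
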
  8. Have you taken a look at your Preferences to check whether VSC is still set as the default editor?
  9. Shaarigan

    C# Path relation lookup

    This is correct; however, we are working in a very small subset of the file system, in our own SDK or the user's project directory. So if you create a tool that is called mytool and one with the same spelling but in a different case in the same folder, then that is a kind of issue we don't address here. Not really, it is just a maximum of 3 levels of hierarchy here. We start at the top level, so the SDK or project root, go down to the first level of sub-folders and collect all sub-folders in there. Then we look for the sub-folders in there and collect all files found. The files aren't processed, only the paths, because every Processor Unit attached to the global data pipeline itself has a directory it was compiled from. The data is then pushed only to those units that are standard units, or that are located in a directory in the SDK or project whose order the data's current order is greater than or equal to. For example, I have the SDK path C:\\Users\...\SDK\ and the Forge project inside the SDK located at C:\\Users\...\SDK\Tools\Forge, and a Processor Unit is compiled from the Forge directory. Data that is above the Forge directory will be passed to the default unit with an order of zero; data that is inside or below the Forge directory is passed to the unit defined there. So in this case, first the orders are compared, to early-out anything not matching at that point. After that, the absolute path is matched against the remaining units, because I could potentially have a directory in my project folder that has the same order as the Forge directory. Oh, and I forgot to mention the priority between the SDK folder and the project folder: local (project) units beat the global (SDK) ones, but this is determined at startup.
  10. Hey community, I'm a little bit drowned in thoughts about the best/fastest way to perform a lookup of the relation between two paths. My paths have an order (a number that determines the number of sub-sequences/directories) and are already unified, so there is no need to take this into account. What I want to achieve is to determine if a given path is one of the following: the parent of the path tested (given A/B/C, A is the parent of B and C, and A/B is the parent of C); the same path as the path tested; or a totally different path than the path tested. I know there are some tricks in C# to speed string comparison up. I'm currently using

        public bool Contains(PathDescriptor subFolder)
        {
            return subFolder.GetAbsolutePath().StartsWith(GetAbsolutePath(), StringComparison.OrdinalIgnoreCase);
        }

    but I was wondering if there is a better/faster solution to perform the same task. Again, I need to know if the path tested is any child directory, regardless of the hierarchy. This is important because for our tool we decided to allow local overrides for our Processor Units. This means the deeper the tool walks into the project directory, the more priority the Processor Units found there get. Thanks in advance
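
    For illustration, a sketch of the same parent/same/unrelated test with an explicit boundary check (shown in C++ rather than C#; a plain StartsWith would also treat A/BC as a child of A/B, which the separator check below rules out; names are illustrative):

        #include <string>

        enum class PathRelation { Same, Child, Unrelated };

        // Assumes both paths are already unified, as stated above, so a
        // plain ordinal comparison is enough here.
        PathRelation Relate(const std::string& parent, const std::string& tested)
        {
            if (tested.size() < parent.size() ||
                tested.compare(0, parent.size(), parent) != 0)
                return PathRelation::Unrelated;
            if (tested.size() == parent.size())
                return PathRelation::Same;
            return tested[parent.size()] == '/' ? PathRelation::Child
                                                : PathRelation::Unrelated;
        }
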
  11. If you are interested and willing to learn, you might join us and our project https://www.gamedev.net/forums/topic/703055-game-framework-contributor-partner/
  12. Shaarigan

    How should a DLL be designed?

    C# is a language with a rich feature set and without the need to manage memory on one's own. This is a big plus, because such tools often have to be robust and their development time should be very low (if you are motivated enough). UBT is written in C# because it supports on-demand compilation of the module settings, which are written in C# as well. UBT creates an assembly from them, grabs the classes via reflection and integrates them into the build process. Forge is written in C# for the same reasons. We also have .Build.cs files to configure the project; unlike Unreal, they are built into Mixins (extensions that are connected to well-defined points in the tool), but we are also using a quick setup. The script locates .NET Framework 4 on your PC and calls CSC.exe, the old C# compiler, to compile the initial version of Forge before you can start coding. The upcoming version of Forge will be even more flexible due to a built-in compiler that translates Blueprint-like visually developed pipelines into C# code loaded into it as a plugin.
  13. Shaarigan

    How should a DLL be designed?

    I have been working with DLLs, or rather static libs, since I first redesigned my engine, because the modular approach felt better to me than the monolithic one-project solution. This way I can design and compile my modules so that they stay apart from each other; especially if you work in a team this is very useful. My approach to deciding what belongs where is simple: DLLs/libs provide utility classes, so they have a public interface. Each class stands on its own except for needed dependencies on other classes (like when defining a member of a certain type). Put as much code as you can in header files and inline it, so the compiler can optimize calls; see the sketch below. Put code that is specific to certain circumstances in code files (like the implementation of an API function for different platforms). And last but not least, I maintain a hierarchy system: if a class or function is used in multiple modules, then its hierarchy level is increased so both modules have a dependency on the new module. This way I can build a dependency pyramid. My custom build tool also outputs a dependency graph containing all classes in my projects, to help decouple modules and interfaces from each other.
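
    A minimal sketch of the header-inline rule (Math.h and Lerp are illustrative names):

        // Math.h -- header-only utility: the definition lives in the header
        // and is marked inline, so any module can include it and the
        // compiler sees the body at every call site for optimization.
        #pragma once

        inline float Lerp(float from, float to, float t)
        {
            return from + (to - from) * t;
        }
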
  14. Shaarigan

    Inhouse vs Public

    In-house solutions often have more specific tools than the general-purpose ones of the public engines, because they are mostly made at short notice when one department needs a specific tool for a project. On the other hand, the public engines have asset stores, so their tools are larger in count but most of the time not that specific, because everyone submits his/her tools to the store in the expectation of earning some money, or does it for free just to be the first, the best, whatever. In-house tools increase in count and quality the longer the in-house engine exists, because new tools are created frequently to target specific problems, employees often improve their workflow and share these tools with the rest of the company, and even completely new solutions are designed, for example using AI for game content. This is rarely possible with public asset stores.
  15. No, C++ doesn't have these features, because they are bound to the runtime and language model. It starts with having delegates (events are in fact just delegates). In C++ you don't have delegates, just function pointers. The reason is simple: a function pointer points to an address in the assembly binary. The compiler will perform just a jmp instruction, the same as if you called the function directly. Those functions can have, and depending on the kind of instruction the compiler uses need, different calling conventions. A static function is just cdecl/stdcall; this determines how the stack is created and cleaned up after the call returns. Then you have member functions, which have a different calling convention: thiscall. It is again a special kind of stack creation; member functions are also statically called, except that there is an additional parameter on the stack, the pointer to the object you call it for. It isn't possible to interchange calling conventions and store them in the same variable, because the compiler has to ensure correct stack management. This is different in C#, because there the CLR is able to deduce the function calls at runtime, so you can store whatever you like, static or member function, in a delegate.

    However, there are tricks in C++ using templates to create such a kind of delegate too. If you are interested in how this works, I provide delegates in my common C++ source on GitHub; a condensed sketch follows below. It works because you store two pointers in the dynamic delegate instance: an optional object that will be used for the thiscall convention, and a calling proxy that is a templated function. If you assign a static function, the cdecl version of the call guard is used; if you use a thiscall member function, the thiscall version of the call guard is used. This works because those template functions are static functions and just route the arguments passed on to the target function you add in there. So in fact the delegate always calls the call guard, passing the function's arguments and the object pointer, and then the call guard decides how exactly to execute the function pointer given in the template argument.

    Properties are another topic; they are compiler convenience for function calls. If you declare a property in C#, the compiler will add hidden functions to your class that are named get_<PropertyName> and/or set_<PropertyName>. You can investigate this if you use reflection to inspect the class's functions. So you can have this in C++ too: just add functions to your class the same way the C# compiler does, or more simply

        inline void PropertyName(PropertyType const& type) { myVariable = type; }
        inline PropertyType PropertyName() const { return myVariable; }

    I use this practice in my code base all the time; the only difference is in calling it:

        //C#
        MyType m = myClassInstance.PropertyName;
        //C++
        MyType m = myClassInstance.PropertyName();

    There are of course more topics that aren't implemented in C++ right now, for example reflection, the potentially most useful thing in C#. It is tricky to do in C++ because of the CLR C# runs on: the CLR knows at runtime the type and layout of each class and thus of each object. This isn't possible in C++ because objects are just memory; they don't carry any information about their name, fields and methods. Yes, there is the function table in an object that inherits from another class, but this only contains function pointers for derived functions and virtual overrides.

    There are some patterns for implementing reflection in C++, but keep in mind that reflection means storing meta information for every type in your assembly, so your code will increase in size. Some solutions generate code for more complex reflection including fields, functions and their arguments; others are just a lookup of a typeID or type code like in C#. It is up to you how complex your reflection support should be. I decided on a mixed version in my code base: a typeID for every type I use in my code, regardless of whether it is built-in or mine, plus more detailed generated information for some chosen types I declared on my own.
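
    To make the call-guard trick concrete, here is a condensed sketch (simplified to a single argument and without explicit calling-convention annotations; all names are illustrative, the real implementation lives in the GitHub repository mentioned above):

        #include <cstdio>

        // The delegate stores an object pointer plus a static proxy (the
        // "call guard"); template instantiation bakes the real target into
        // the proxy, which then performs the plain or the member call.
        template <typename Arg>
        class Delegate
        {
            using Proxy = void (*)(void* object, Arg arg);
            void* object = nullptr;
            Proxy proxy = nullptr;

            template <void (*Function)(Arg)>
            static void StaticGuard(void*, Arg arg) { Function(arg); }

            template <typename T, void (T::*Method)(Arg)>
            static void MemberGuard(void* object, Arg arg)
            {
                (static_cast<T*>(object)->*Method)(arg);
            }

        public:
            template <void (*Function)(Arg)>
            void Bind() { object = nullptr; proxy = &StaticGuard<Function>; }

            template <typename T, void (T::*Method)(Arg)>
            void Bind(T* instance) { object = instance; proxy = &MemberGuard<T, Method>; }

            void operator()(Arg arg) { proxy(object, arg); }
        };

        void FreeListener(int value) { std::printf("free: %d\n", value); }
        struct Listener { void OnEvent(int value) { std::printf("member: %d\n", value); } };

        int main()
        {
            Delegate<int> onEvent;
            onEvent.Bind<&FreeListener>();                          // StaticGuard route
            onEvent(1);

            Listener listener;
            onEvent.Bind<Listener, &Listener::OnEvent>(&listener);  // MemberGuard route
            onEvent(2);
            return 0;
        }
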