
Popular Content

Showing content with the highest reputation on 06/27/19 in all areas

  1. 3 points
    I started a little more than two years ago. At first I tackled C++ and Vulkan simultaneously, but failed because of the complexity and because I was overwhelmed by too many subjects at once. OpenGL, however, is quite graspable, and imo so is a game engine, because however you turn it, you won't get around the math basics and programming/scripting, which were the biggest steps for me (former archaeologist). OpenGL together with C++ is well manageable imo, but allow at least half a year before you see real progress.
    The linked site learnopengl.com is a very good start imo, but as soon as you have grasped the principles of linear algebra, buffers, textures, and the game loop, you should carry on. The blue book (OpenGL Superbible, 7th ed.) explains the concepts of OpenGL 4.5; I know of no more modern introduction than that, together with the Red Book (OpenGL Programming Guide, 9th ed., which teaches pre-4.5 despite the subtitle). Both explain the principles very well, with examples. Another good read is the latest (4.5) version of the GLSL Cookbook from packtpub.org. You will have to get into shader programming in order to accomplish something.
    learncpp.com can be used to start over with C++. But consider a good book covering a C++17 introduction, not just "modern", because after "modern" came "postmodern" and several other stages of art history until today 😉 See https://en.cppreference.com/w/ or http://www.cplusplus.com/ for language details.
    The above OpenGL sources start with setting up an environment and go on step by step. Start with C++; as a programmer already, it should be manageable, and you can get to the "secrets" step by step. Concepts like object orientation or templates aren't needed in the beginning; they come when you start to brew your own stuff, and by then you will already have the overview you need. imo
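    Since the game loop is named above as one of the core principles to grasp early, here is a minimal fixed-timestep loop sketched in standalone C++. This is an illustrative toy, not from the post: no OpenGL or windowing, the update/render lambdas are placeholders, and the 100-frame cap exists only so the demo terminates.

    ```cpp
    #include <chrono>
    #include <iostream>

    // How many fixed simulation steps fit into the accumulated frame time.
    // The remainder stays in the accumulator for the next frame.
    int ConsumeSteps(double& accumulator, double dt) {
        int steps = 0;
        while (accumulator >= dt) {
            accumulator -= dt;
            ++steps;
        }
        return steps;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const double dt = 1.0 / 60.0; // simulation advances in 60 Hz slices

        int updates = 0;
        auto update = [&](double step) { ++updates; (void)step; };
        auto render = [](double /*alpha*/) { /* draw interpolated state */ };

        auto previous = clock::now();
        double accumulator = 0.0;

        for (int frames = 0; frames < 100; ++frames) { // demo: 100 frames then stop
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;

            // Simulation consumes time in constant steps; rendering runs
            // once per loop iteration, as fast as the loop spins.
            int steps = ConsumeSteps(accumulator, dt);
            for (int i = 0; i < steps; ++i) update(dt);

            // alpha in [0,1): fraction of a step left over, useful for
            // interpolating rendered positions between simulation states.
            render(accumulator / dt);
        }
        std::cout << "ran " << updates << " fixed updates\n";
        return 0;
    }
    ```

    The point of the accumulator is that rendering speed and simulation rate are decoupled: physics stays deterministic at a fixed dt no matter how fast or slow frames come in.
    
    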
  2. 1 point
    OK. It just helps us give you better answers if we have an idea of what research you've already done and know what you've tried doing yourself.
  3. 1 point
    Hi, I am currently writing my own custom decoder for GIF and have run into an issue: when the file has only a global color table, the output image has incorrect colors. Here is my LZW decompression algorithm:

    private Image DecodeInternalAlternative(GifImage gif)
    {
        var frame = gif.Frames[70];
        var data = frame.CompressedData;
        var minCodeSize = frame.LzwMinimumCodeSize;

        uint mask = 0x01;
        int inputLength = data.Count;
        var pos = 0;

        int readCode(int size)
        {
            int code = 0x0;
            for (var i = 0; i < size; i++)
            {
                var val = data[pos];
                int bit = (val & mask) != 0 ? 1 : 0;
                mask <<= 1;
                if (mask == 0x100)
                {
                    mask = 0x01;
                    pos++;
                    inputLength--;
                }
                code |= (bit << i);
            }
            return code;
        };

        var indexStream = new List<int>();
        var clearCode = 1 << minCodeSize;
        var eoiCode = clearCode + 1;
        var codeSize = minCodeSize + 1;
        var dict = new List<List<int>>();

        void Clear()
        {
            dict.Clear();
            codeSize = frame.LzwMinimumCodeSize + 1;
            for (int i = 0; i < clearCode; i++)
            {
                dict.Add(new List<int>() { i });
            }
            dict.Add(new List<int>());
            dict.Add(null);
        }

        int code = 0x0;
        int last = 0;
        while (inputLength > 0)
        {
            last = code;
            code = readCode(codeSize);

            if (code == clearCode)
            {
                Clear();
                continue;
            }
            if (code == eoiCode)
            {
                break;
            }

            if (code < dict.Count)
            {
                if (last != clearCode)
                {
                    var lst = new List<int>(dict[last]);
                    lst.Add(dict[code][0]);
                    dict.Add(lst);
                }
            }
            else
            {
                if (last != clearCode)
                {
                    var lst = new List<int>(dict[last]);
                    lst.Add(dict[last][0]);
                    dict.Add(lst);
                }
            }
            indexStream.AddRange(dict[code]);

            if (dict.Count == (1 << codeSize) && codeSize < 12)
            {
                // If we're at the last code and codeSize is 12, the next
                // code will be a clearCode, and it'll be 12 bits long.
                codeSize++;
            }
        }

        var width = frame.Descriptor.width;
        var height = frame.Descriptor.height;
        var colorTable = frame.ColorTable;
        var pixels = new byte[width * height * 3];
        int offset = 0;
        for (int i = 0; i < width * height; i++)
        {
            var colors = colorTable[indexStream[i]];
            pixels[offset] = colors.R;
            pixels[offset + 1] = colors.G;
            pixels[offset + 2] = colors.B;
            offset += 3;
        }
        ...
    }

    For GIFs which have local color tables, everything seems to decode fine, but if a GIF has only a global color table or has X and Y offsets, then the first frame is good and the other frames are not. Here are some examples: first, GIFs with a local color table and NO offset (all frames are the same size), and then a GIF file which has only a global color table and different frame sizes. Since the first frame is built correctly, I think this issue might be happening because of the horizontal and vertical offsets (which can be different for each frame), but the one thing I cannot understand in that case is why it is actually happening. After decoding I should have valid color table indices in the decoded array, and I should not need to care about offsets. I must also note that the LZW decoding algorithm seems to be working fine; at least it always produces an array of the expected size. I have attached my full test GIF files here. So, could someone point me in the right direction and explain why I see this behavior in one case and not in the other?
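    A guess worth checking, based on the symptoms described: the decoded index stream covers only the frame's own sub-rectangle (frame.Descriptor.width × height), so each frame has to be composited into the full logical-screen canvas at the left/top offsets from its image descriptor (and, depending on the disposal method, on top of the previous frame's pixels). Writing every frame at (0, 0) of a fresh buffer reproduces exactly this symptom: the first frame looks fine, later (smaller, offset) frames do not. A minimal compositing sketch in C++; all names here are illustrative, not from the code above:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Hypothetical per-frame info from a GIF image descriptor.
    struct FrameInfo {
        int left, top;      // offsets into the logical screen
        int width, height;  // size of this frame's sub-rectangle
    };

    // Composite one decoded frame (RGB, frame.width * frame.height pixels)
    // into the persistent canvas (RGB, canvasW * canvasH pixels) at the
    // offsets given by the image descriptor. The canvas keeps the previous
    // frame's pixels everywhere the new frame does not cover.
    void CompositeFrame(std::vector<uint8_t>& canvas, int canvasW,
                        const FrameInfo& frame,
                        const std::vector<uint8_t>& framePixels)
    {
        for (int y = 0; y < frame.height; ++y) {
            for (int x = 0; x < frame.width; ++x) {
                int src = (y * frame.width + x) * 3;
                int dst = ((frame.top + y) * canvasW + (frame.left + x)) * 3;
                canvas[dst + 0] = framePixels[src + 0];
                canvas[dst + 1] = framePixels[src + 1];
                canvas[dst + 2] = framePixels[src + 2];
            }
        }
    }

    int main() {
        // 4x4 canvas, 2x2 frame placed at offset (1, 2).
        std::vector<uint8_t> canvas(4 * 4 * 3, 0);
        FrameInfo frame{1, 2, 2, 2};
        std::vector<uint8_t> pixels(2 * 2 * 3, 255);
        CompositeFrame(canvas, 4, frame, pixels);
        return 0;
    }
    ```

    Transparency adds one more wrinkle: if a graphic control extension marks a transparent index, those pixels must be skipped during compositing instead of copied.
    
    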
  4. 1 point
    I would suggest giving this book a read for C++: https://www.amazon.ca/Primer-5th-Stanley-B-Lippman/dp/0321714113 Then you'll want to update your knowledge with the current standard. There also might be something on Udemy worth a look to help with your language of choice, plus engine. I personally started a long time ago due to engine and tool-kit development, but it comes down to what you really want to do: are you wanting to build games, or the foundation on which games are built? Maybe both? If you just want to make games, then I wouldn't devote time to creating the foundation, as those wheels are built and spinning just fine in 2019.
  5. 1 point
    An Entity Component System is something that is open to interpretation, and everyone implements it in a different way. What I like to use looks like this:
    Entity: just a number.
    Component: a simple struct holding data. It doesn't know about the entity.
    ComponentManager: holds an array of entities and an array of components of one type. It also holds a lookup table (hash map) that maps an entity to an array index (the index addresses both a component and the corresponding entity). Using this, you can find out whether an entity has a component of the specific type by performing a map lookup. You can also iterate all components or all entities as linear arrays with a very fast data access pattern. It also has some smart features, like removing elements by swapping in the last element, ordering elements, etc.
    System: a simple function that operates on arrays of components/entities (by using the component manager, for example).
    With this, I basically only needed to implement the ComponentManager and I have an entity component system. My implementation is simple, but I have used it everywhere in my engine and am confident that it can be used as a fully featured entity component system: https://github.com/turanszkij/WickedEngine/blob/master/WickedEngine/wiECS.h
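    The ComponentManager described above can be sketched in a few dozen lines of C++. This is an illustrative toy, much simpler than the linked wiECS.h, but it shows the three ingredients: dense parallel arrays, the entity-to-index hash map, and swap-remove:

    ```cpp
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Entity = uint32_t; // an entity is just a number

    template <typename Component>
    class ComponentManager {
    public:
        // Attach a component to an entity; components and entities live in
        // parallel, densely packed arrays.
        Component& Create(Entity entity) {
            lookup[entity] = components.size();
            components.push_back(Component{});
            entities.push_back(entity);
            return components.back();
        }

        // The map lookup answers "does this entity have this component type?"
        bool Contains(Entity entity) const {
            return lookup.count(entity) != 0;
        }

        Component* Get(Entity entity) {
            auto it = lookup.find(entity);
            return it == lookup.end() ? nullptr : &components[it->second];
        }

        // Remove by swapping in the last element, keeping the arrays dense.
        void Remove(Entity entity) {
            auto it = lookup.find(entity);
            if (it == lookup.end()) return;
            size_t index = it->second;
            size_t lastIndex = components.size() - 1;
            if (index != lastIndex) {
                components[index] = components[lastIndex];
                entities[index] = entities[lastIndex];
                lookup[entities[index]] = index; // re-point the moved entity
            }
            components.pop_back();
            entities.pop_back();
            lookup.erase(entity);
        }

        size_t Size() const { return components.size(); }

        // Linear arrays that "systems" iterate over directly.
        std::vector<Component> components;
        std::vector<Entity> entities;

    private:
        std::unordered_map<Entity, size_t> lookup;
    };

    struct Position { float x = 0, y = 0; }; // a component is plain data

    int main() {
        ComponentManager<Position> positions;
        positions.Create(7).x = 1.0f;
        positions.Create(9).x = 2.0f;
        positions.Remove(7); // entity 9's component is swapped into slot 0
        return 0;
    }
    ```

    A "system" then is nothing more than a loop over positions.components, which is why the dense-array layout is so cache-friendly.
    
    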
  6. 1 point
    Ideas are a cent a dozen. Learn to program.
  7. 1 point
    You can't add these to Visual Studio that easily, because this is not built into VS itself but into the preprocessor that comes along with MSVC. But what you are trying to achieve (if it is really just what your example shows) is possible without touching the preprocessor in MSVC. What you are looking for is similar to RTTI using the __PRETTY_FUNCTION__ macro. Let me show ya a small example for it:

    #if defined(__clang__)
        #define SIGNATURE __PRETTY_FUNCTION__
    #elif defined(__GNUC__)
        #define SIGNATURE __PRETTY_FUNCTION__
    #elif defined(_MSC_VER)
        #define SIGNATURE __FUNCSIG__
    #endif

    #if defined(__clang__)
        #define SIGNATURE_PREFIX "char *tui() [T = "
        #define SIGNATURE_SUFFIX "]"
    #elif defined(__GNUC__)
        #define SIGNATURE_PREFIX "char* tui() [with T = "
        #define SIGNATURE_SUFFIX "]"
    #elif defined(_MSC_VER)
        #define SIGNATURE_PREFIX "char *__cdecl tui<"
        #define SIGNATURE_SUFFIX ">(void)"
    #endif

    #define SIGNATURE_LEFT (sizeof(SIGNATURE_PREFIX) - 1)
    #define SIGNATURE_RIGHT (sizeof(SIGNATURE_SUFFIX) - 1)

    template<int Size>
    struct TypeIdentifier
    {
    public:
        char value[Size];

        inline TypeIdentifier(const char* identifier)
        {
            Runtime::memcpy(value, identifier, Size);
            value[Size - 1] = 0;
        }
    };

    template<typename T>
    inline char* tui()
    {
        static TypeIdentifier<sizeof(SIGNATURE) - SIGNATURE_LEFT - SIGNATURE_RIGHT> identifier =
            TypeIdentifier<sizeof(SIGNATURE) - SIGNATURE_LEFT - SIGNATURE_RIGHT>(SIGNATURE + SIGNATURE_LEFT);
        return identifier.value;
    }

    I use this as the basis for my type system. Calling tui (Type Unique Identifier) creates a statically initialized struct of a certain size that has just one task: to store a C-style string. That string is filled during construction from the SIGNATURE macro, which the compiler itself fills in at build time with the full name of the function being called.
    The trick is the template type; it is placed into the function signature at compile time and can then be cut out of the rest, so you end up with something like std::cout << tui<uint64_t>(); printing unsigned long long int. I then wrapped an amount of convenience around that in my type system, but the base of any type-id-related function is still my tui<>() call.
  8. 1 point
    I use smart pointers in many places. If you don't use them, you still need to manage object lifetime somehow, so there will be performance considerations either way. That being said, you shouldn't use them everywhere. Smart pointers designate ownership. There is nothing wrong with using smart pointers in your main data structures and passing around raw pointers for use on a temporary basis. Another strategy is to pass around your smart pointers by reference. What you are trying to do here is avoid the reference count increments, decrements, and checks.
    Keep in mind unique_ptr is basically a raw pointer, so there is no performance hit. Use them freely. However, shared_ptr is a different story. There is a separate memory control block. If you use shared_ptr, use make_shared where possible; this combines the memory allocation of the pointer and the control block. Better yet, don't use shared_ptr at all. While it is a versatile implementation, it has some major drawbacks. First, the control block contains an extra reference count for any weak_ptr references, whether you need it or not. This is not so bad. Much worse is the fact that the pointer itself is typically TWICE! the size of a raw pointer: there is a raw pointer for the control block and one for the object itself. You can write a small test program and see if this is true in your standard library implementation. In any case, if you are using a lot of them, it's a huge memory hog, which of course has memory caching implications.
    If your design is such that you can have a standard base class, it's easy to put the reference count there and do your own smart pointer implementation. This is not really that difficult; you can do it in a few minutes if you know how. If you want to get fancy, you can work smart pointers into a custom heap implementation. The pointers are aligned by some standard byte alignment, so you can then reduce their size. For instance, you can have 8-byte-aligned 32-bit heap pointers into a 32 GiB max heap. So again you have cut your pointer size in half on a 64-bit machine. The two drawbacks are that you have to have the heap available to reference your objects, and you are limited by the heap size. The latter problem is partially offset by the fact that you can have multiple heaps. The implementation is also a bit more tricky, but the memory saving can be drastic depending on what you are doing. While you're at it, you can do slab allocation and make a super fast heap.
    I don't, but then I'm doing procedural generation. I build stuff directly into an octree and do collision calculations right there. This is mainly because I'm doing JIT (Just in Time) streaming terrain. However, for a more typical use case, I would fathom this is a reasonable option. I guess I don't really have a strong opinion on this one. It's kind of a design decision.
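    The "reference count in a standard base class" idea above can be sketched in C++ like this (all names are illustrative, not from the post). Note the static_assert: the intrusive pointer is one raw pointer wide, versus shared_ptr's typical two. This sketch is single-threaded; a real one would use an atomic count.

    ```cpp
    #include <cassert>
    #include <cstddef>

    // Base class carrying the reference count inside the object itself.
    class RefCounted {
    public:
        void AddRef() { ++refs; }
        void Release() { if (--refs == 0) delete this; }
        virtual ~RefCounted() = default;
    private:
        int refs = 0;
    };

    // Minimal intrusive smart pointer: no separate control block, so it is
    // exactly the size of a raw pointer.
    template <typename T>
    class IntrusivePtr {
    public:
        IntrusivePtr(T* p = nullptr) : ptr(p) { if (ptr) ptr->AddRef(); }
        IntrusivePtr(const IntrusivePtr& o) : ptr(o.ptr) { if (ptr) ptr->AddRef(); }
        IntrusivePtr& operator=(IntrusivePtr o) { // copy-and-swap
            T* tmp = ptr; ptr = o.ptr; o.ptr = tmp;
            return *this;
        }
        ~IntrusivePtr() { if (ptr) ptr->Release(); }
        T* operator->() const { return ptr; }
        T* get() const { return ptr; }
    private:
        T* ptr;
    };

    // One raw pointer wide -- half the size of a typical shared_ptr.
    static_assert(sizeof(IntrusivePtr<RefCounted>) == sizeof(void*),
                  "intrusive pointer is the size of a raw pointer");

    struct Widget : RefCounted {
        static inline int alive = 0; // counts live Widgets, for the demo
        Widget() { ++alive; }
        ~Widget() override { --alive; }
    };

    int main() {
        {
            IntrusivePtr<Widget> a(new Widget());
            IntrusivePtr<Widget> b = a; // copy just bumps the embedded count
            assert(Widget::alive == 1);
        } // last pointer out deletes the object
        assert(Widget::alive == 0);
        return 0;
    }
    ```

    Comparing sizeof(IntrusivePtr<Widget>) with sizeof(std::shared_ptr<Widget>) on your own standard library is exactly the small test program suggested above.
    
    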
  9. 1 point
    Yes, I've been trying to do the same, but I had to realize that it's not that simple sometimes, and that's why I separated the update into various steps. The physics only calculates the forces and torques and nothing more; then I calculate the movements/rotations and CHEAT. Not exactly out of nothing: it comes from the locking mechanism (clutch, fluid, etc.), but yes, it comes from the diff and only affects the wheels; it never goes back to the engine, AFAIK. In reality they are never locked; only torques play out as you would expect. The only problem is that we don't have infinitely small update steps, and we have floating-point precision issues in return. And this is why we have to cheat sometimes. The viscous diff never "locks"; that's a simple case. The LSD is a bit trickier, but only because you have to mimic the locking state when the timestep is not small enough to calculate it properly. Regarding diffs, you can go into the "deep forest"; the question is whether you want to. You can get away with spool, open, viscous, and LSD, and leave the rest. For a long time I didn't even have an LSD, because the viscous was so convincing.
  10. -1 points
    Well, why did you guys downvote me again? I am new to DirectX and am doing my best to understand it.
