Search the Community

Showing results for tags 'C++'.

Found 582 results

  1. enum class Thing { FOO = 0x01, BAR = 0x02 }; Thing thing = Thing::FOO | Thing::BAR; // doesn't compile This is an age-old problem. What is the proper way to achieve what I'm trying to do here? As far as I can tell, using enum class is more of a hindrance than a useful tool for flags. I end up typing less with a plain enum: enum Thing { FOO = 0x01, BAR = 0x02 }; unsigned char thing = FOO | BAR;
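For what it's worth, a common way to keep enum class and still get flag semantics is to overload the bitwise operators for that enum once. A minimal sketch (the any() helper and the explicit underlying type are my additions, not from the post):

    #include <type_traits>

    enum class Thing : unsigned char { FOO = 0x01, BAR = 0x02 };

    // Define the operators once per flag enum (or generate them with a template/macro).
    constexpr Thing operator|(Thing a, Thing b) {
        using U = std::underlying_type_t<Thing>;
        return static_cast<Thing>(static_cast<U>(a) | static_cast<U>(b));
    }
    constexpr Thing operator&(Thing a, Thing b) {
        using U = std::underlying_type_t<Thing>;
        return static_cast<Thing>(static_cast<U>(a) & static_cast<U>(b));
    }
    constexpr bool any(Thing t) {
        return static_cast<std::underlying_type_t<Thing>>(t) != 0;
    }

    int main() {
        Thing thing = Thing::FOO | Thing::BAR;      // now compiles
        return any(thing & Thing::FOO) ? 0 : 1;     // test a flag without leaving the enum type
    }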
  2. Hi, I'm a Multimedia Engineering student. I am about to finish my degree and I'm already thinking about what topic to cover in my final college project. I'm interested in procedural animation of creatures with C++ and OpenGL, something like a spider, for example. Can someone tell me which issues I should investigate to carry it out? I understand it has some dependence on artificial intelligence, but I do not know to what extent. Can someone help me find information about it? Thank you very much. Examples: - Procedural multi-legged walking animation - Procedural Locomotion of Multi-Legged Characters in Dynamic Environments
  3. This is the code I have: //Create Window DWORD windowStyle = WS_VISIBLE; DWORD windowExStyle = WS_EX_OVERLAPPEDWINDOW; SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_SYSTEM_AWARE); RECT client{ 0, 0, 100, 40 }; UINT dpi = GetDpiForSystem(); AdjustWindowRectExForDpi(&client, windowStyle, false, windowExStyle, dpi); UINT adjustedWidth = client.right - client.left; UINT adjustedHeight = client.bottom - client.top; m_hwnd = CreateWindowEx(windowExStyle, className.c_str(), windowName.c_str(), windowStyle, CW_USEDEFAULT, CW_USEDEFAULT, adjustedWidth, adjustedHeight, nullptr, nullptr, m_hInstance, m_emu ); The generated window has a client area of 1 pixel in height, even though I'm asking for 40, so I'm always getting 39 pixels less than what I need... can someone help me with this? x_x
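A hedged debugging aid rather than a diagnosis, placed right after the CreateWindowEx call above: measure the client area that was actually created and force it back to the requested size. If this fixes the window, the mismatch is between the styles passed to AdjustWindowRectExForDpi and the styles the window really ended up with (the 100/40 values below just mirror the post's request):

    RECT actual{};
    GetClientRect(m_hwnd, &actual);
    int dx = 100 - (actual.right - actual.left);
    int dy = 40 - (actual.bottom - actual.top);
    if (dx != 0 || dy != 0) {
        RECT win{};
        GetWindowRect(m_hwnd, &win);
        // Grow/shrink the outer window by the client-area error.
        SetWindowPos(m_hwnd, nullptr, 0, 0,
                     (win.right - win.left) + dx, (win.bottom - win.top) + dy,
                     SWP_NOMOVE | SWP_NOZORDER | SWP_NOACTIVATE);
    }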
  4. I've spent quite a while (and probably far longer than I actually should) trying to design an allocator system. I've bounced ideas around to various people in the past, but never really gotten something satisfactory. Basically, the requirements I'm trying to target are: Composability -- allocators that seamlessly allocate from memory allocated by other allocators. This helps me to do things like, for example, write an allocator that pads allocations from its parent allocator with bit patterns to detect heap corruption. It also allows me to easily create spillovers, or optionally assert on overflow with specialized fallbacks. Handling the fact that some allocators have different interfaces than others in an elegant way. For example, a regular allocator might have Allocate/Deallocate, but a linear allocator can't do itemized deallocation (but can deallocate everything at once). I want to be able to tell how much I've allocated, and how much of that is actually being used. I also want to be able to bucket that on subsystem, but as far as I can tell, that doesn't really impact the design outside of adding a new parameter to allocate calls. Note: I'm avoiding implementation of allocation buckets and alignment from this, since it's largely orthogonal to what I'm asking and can be done with any of the designs. To meet those three requirements, I've come up with the following solutions, all of which have significant drawbacks. Static Policy-Based Allocators I originally built this off of this talk. Examples; struct AllocBlock { std::byte* ptr; size_t size; }; class Mallocator { size_t allocatedMemory; public: Mallocator(); AllocBlock Allocate(size_t size); void Deallocate(AllocBlock blk); }; template <typename BackingAllocator, size_t allocSize> class LinearAllocator : BackingAllocator { AllocBlock baseMemory; char* ptr; char* end; public: LinearAllocator() : baseMemory(BackingAllocator::Allocate(allocSize)) { /* stuff */ } AllocBlock Allocate(size_t size); }; template <typename BackingAllocator, size_t allocSize> class PoolAllocator : BackingAllocator { AllocBlock baseMemory; char* currentHead; public: PoolAllocator() : baseMemory(BackingAllocator::Allocate(allocSize)) { /* stuff */ } void* Allocate(); // note the different signature. void Deallocate(void*); }; // ex: auto allocator = PoolAllocator<Mallocator, size>; Advantages: SFINAE gives me a pseudo-duck-typing thing. I don't need any kind of common interfaces, and I'll get a compile-time error if I try to do something like create a LinearAllocator backed by a PoolAllocator. It's composable. Disadvantages: Composability is type composability, meaning every allocator I create has an independent chain of compositions. This makes tracking memory usage pretty hard, and presumably can cause me external fragmentation issues. I might able to get around this with some kind of singleton kung-fu, but I'm unsure as I don't really have any experience with them. Owing to the above, all of my customization points have to be template parameters because the concept relies on empty constructors. This isn't a huge issue, but it makes defining allocators cumbersome. Dynamic Allocator Dependency This is probably just the strategy pattern, but then again everything involving polymorphic type composition looks like the strategy pattern to me. 
😃 Examples: struct AllocBlock { std::byte* ptr; size_t size; }; class Allocator { virtual AllocBlock Allocate(size_t) = 0; virtual void Deallocate(AllocBlock) = 0; }; class Mallocator : Allocator { size_t allocatedMemory; public: Mallocator(); AllocBlock Allocate(size_t size); void Deallocate(AllocBlock blk); }; class LinearAllocator { Allocator* backingAllocator; AllocBlock baseMemory; char* ptr; char* end; public: LinearAllocator(Allocator* backingAllocator, size_t allocSize) : backingAllocator(backingAllocator) { baseMemory = backingAllocator->Allocate(allocSize); /* stuff */ } AllocBlock Allocate(size_t size); }; class PoolAllocator { Allocator* backingAllocator; AllocBlock baseMemory; char* currentHead; public: PoolAllocator(Allocator* backingAllocator, size_t allocSize) : backingAllocator(backingAllocator) { baseMemory = backingAllocator->Allocate(allocSize); /* stuff */ } void* Allocate(); // note the different signature. void Deallocate(void*); }; // ex: auto allocator = PoolAllocator(someGlobalMallocator, size); There's an obvious problem with the above: Namely that PoolAllocator and LinearAllocator don't inherit from the generic Allocator interface. They can't, because their interfaces provide different semantics. There's to ways I can solve this: Inherit from Allocator anyway and assert on unsupported operations (delegates composition failure to runtime errors, which I'd rather avoid). As above: Don't inherit and just deal with the fact that some composability is lost (not ideal, because it means you can't do things like back a pool allocator with a linear allocator) As for the overall structure, I think it looks something like this: Advantages: Memory usage tracking is easy, since I can use the top-level mallocator(s) to keep track of total memory allocated, and all of the leaf allocators to track of used memory. How to do that in particular is outside the scope of what I'm asking about, but I've got some ideas. I still have composability Disadvantages: The interface issues above. There's no duck-typing-like mechanism to help here, and I'm strongly of the opinion that programmer errors in construction like that should fail at compile-time, not runtime. Composition on Allocated Memory instead of Allocators This is probably going to be somewhat buggy and poorly thought, since it's just an idea rather than something I've actually tried. Examples: struct AllocBlock { void* ptr; size_t size; std::function<void()> dealloc; } class Mallocator { size_t allocatedMemory; public: Mallocator(); AllocBlock Allocate(size_t size) { void* ptr = malloc(size); return {ptr, size, [ptr](){ free(ptr); }}; } }; class LinearAllocator { AllocBlock baseMemory; char* ptr; char* end; public: LinearAllocator(AllocBlock baseMemory) : baseMemory(baseMemory) {end = ptr = baseMemory.ptr;} AllocBlock Allocate(size_t); }; class PoolAllocator { AllocBlock baseMemory; char* head; public: PoolAllocator(AllocBlock baseMemory) : baseMemory(baseMemory) { /* stuff */ } void* Allocate(); }; // ex: auto allocator = PoolAllocator(someGlobalMallocator.Allocate(size)); I don't really like this design at first blush, but I haven't really tried it. Advantages: "Composable", since we've delegated most of what composition entails into the memory block rather than the allocator. Tracking memory is a bit more complex, but I *think* it's still doable. Disadvantages: Makes the interface more complex, since we have to allocate first and then pass that block into our "child" allocator. Can't do specialized deallocation (i.e. 
stack deallocation) since the memory blocks don't know anything about their parent allocation pool. I might be able to get around this though. I've done a lot of research against all of the source-available engines I can find, and it seems like most of them either have very small allocator systems or simply don't try to make them composable at all (CryEngine does this, for example). That said, it seems like something that should have a lot of good examples, but I can't find a whole lot. Does anyone have any good feedback/suggestions on this, or is composability in general just a pipe dream?
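Since the post names spillovers as one motivation for composability, here is a minimal sketch of one built on the virtual-interface variant above. The Owns() query is my own addition (it is not part of the post's Allocator interface) and is what lets the fallback route deallocations without runtime asserts:

    #include <cstddef>

    struct AllocBlock { std::byte* ptr = nullptr; std::size_t size = 0; };

    class Allocator {
    public:
        virtual ~Allocator() = default;
        virtual AllocBlock Allocate(std::size_t size) = 0;
        virtual void Deallocate(AllocBlock blk) = 0;
        virtual bool Owns(AllocBlock blk) const = 0;   // added so composites can route frees
    };

    // Spillover: satisfy requests from the primary allocator, fall back on failure.
    class FallbackAllocator final : public Allocator {
        Allocator* primary;
        Allocator* secondary;
    public:
        FallbackAllocator(Allocator* p, Allocator* s) : primary(p), secondary(s) {}

        AllocBlock Allocate(std::size_t size) override {
            AllocBlock blk = primary->Allocate(size);
            if (!blk.ptr)
                blk = secondary->Allocate(size);
            return blk;
        }
        void Deallocate(AllocBlock blk) override {
            if (primary->Owns(blk)) primary->Deallocate(blk);
            else                    secondary->Deallocate(blk);
        }
        bool Owns(AllocBlock blk) const override {
            return primary->Owns(blk) || secondary->Owns(blk);
        }
    };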
  5. Hello, and welcome, dear reader, to 'Spark', a long-term project I have been planning for a while now. My name is Bastian; I have been working professionally in the games business for a couple of years as a Senior Software Developer and hobby engine architect, and I am going to start a project soon that targets the following thoughts, based on this forum discussion.
Problem: Many hobby projects and small studios out there don't have the financial/man-power capacity or the experience to make good-looking game maps, as in the discussion above; and in the years I have read this forum, I have seen far more requests for artists than for programmers. The reason might be systems like Unreal or Unity, which themselves provide a lot of non-coding game-making support through blueprints, asset store content, and whatever else designers can use to reach their goal. While this is great for designers and small game-creation teams, there isn't such a thing for pure coders (like me) who aren't capable of level design or modeling: the so-called "programmer's graphics". This is sadly a factor in many projects' failure, because people tend to support projects that look beautiful but may lack gameplay functionality over projects that have fully featured gameplay but can't get an artist to do the environmental work. And not every game developer has artists among their connections. Because of this, and because I personally love multiplayer RPGs (online and MMO games are just multiplayer), I decided to go for this project after several years of developing game engines from scratch.
Spark is a product of several ideas combined into one software package: a basic, modular game engine to extend for whatever multiplayer/RPG game one decides to develop, with a range of functions already implemented to support the necessary systems such as character management, inventory management, and UI, along with an SDK providing utility functions for use cases like player authentication and content packaging, written in C++ and C# (for the utility tools). I am currently deciding whether the package should also contain a dedicated game-server implementation that supports the general gameplay interface. But wait: what makes the project unique compared to what is already out there? The plan is to provide more game-creation features and less need for content editing at the first level of the game. Content like world maps, quests, and even assets can be generated procedurally, using a set of defined rules that give a generator unit the context it needs to decide what should be generated. This can be map/landscape data, fully playable (maybe limitless) maps, and even the assets to place on those maps, including environment, plants, trees, and a lot more. Gameplay content can be created too: a generator can run in the background or on a server to create new quests while the game runs, including dialog and log text for the supported languages, and potentially voice-overs. I am currently researching the translation unit, which should be able to generate text in English and Japanese for now (other languages may be added later), not as polished as a native speaker would write, but acceptable and simple enough to fit the plan. You, the reader, may still have some doubts about this, so let me tell you about a talk I had a while back with a good friend at a large publishing and development studio.
That friend told me that they are developing an AI for one of their games that should create content for that game after learning from their best level designers and from the community. You see, this isn't a crazy backyard idea; it is something even an AAA studio is planning. I am also playing around with the idea of making this VR-compatible for current and upcoming technology, so this won't be a one-shot project but something we can build on and continuously enhance over the months and years.
Motivation: I am entirely self-motivated to run this project, simply out of passion for playing games that have quality and mesmerize me, with or without a story, regardless of fantasy, sci-fi or steampunk, RPG or shooter. Games worth playing get played, but the stagnating quality of the past 20 years makes me sad. I now want to tell my own stories, show my own worlds to others, and play as if I lived in the game. I have my day job, so I don't need to do this for profit (yet). So this is not only an SDK or game engine project but also a multiplayer RPG project whose core system can be used by other people for free to set up their own ideas, tell their own stories, and let others explore their own worlds, with a certain degree of compatibility between those games for player data and mods. While I develop every day with a team at work, I do this on my own in my spare time, so the point of this request post is simply to get some people together who share my passion for good games and are motivated and reliable enough to bring this to a happy ending over the next year or two, up to a playable demo version that could maybe be pitched for more support.
Closure: I thank you, reader, and congratulations on reaching the end of my post. What I want to do might sound like a huge, never-ending project that will fail, but my motivation is still there after years and prototypes of game engine development. So if you think you could hold on to that kind of motivation even a little, are reliable, are experienced with C++ and/or C#, or are an artist (any other matching role could also be useful for Spark), and could work at least a day a week, then I would be happy if you would leave a message, either in this topic, as a PM, or via Skype/Discord chat. I will meanwhile finish my framework work (ha, double spending) and do some technical and game design for this. Don't let me wait too long, and thank you for reading! Greetings!!
  6. Hi, I saw in Dependency Walker that my app still needs msvcp140d.dll even after disabling debug. What did I forget in the VS2017 release settings? After setting it to Multi-threaded DLL I get linker errors. Thanks
  7. I am a talented 2D/3D artist with 3 years of animation working experience and a degree in Illustration and Animation. I have won a worldwide art competition hosted by SFX magazine and am looking to develop a survival game. I have notes for a survival-based game with a flexible storyline and PvP, and I am looking for developers to team up with. I can create models, animations, and artwork, and I have beginner knowledge of C# with Unity. The idea is inventory-menu-based gameplay, inspired by games like DayZ. Here is some early sci-fi concept art to give you an idea of the level of work. I hope to work with like-minded people and create something special. Email me at andrewparkesanim@gmail.com. Developers who share the same passion, please contact me, or if you have a similar project and want me to join your team, email me. Many thanks, Andrew.
  8. I already asked this question on Stack Overflow, and they got pissed at me, down-voted me and so forth, LOL... so I'm pretty sure the answer is NO, but I'll try again here anyway just in case. Is there any way to get the size of a polymorphic object at run-time? I know you can create a virtual function that returns the size and override it in each child class, but I'm trying to avoid that since (a) it takes a virtual function call and I want it to be fast, and (b) it's a pain to have to include the size function in every subclass. I figure that since each object has a v-table there should be some way, since the information is there, but perhaps there is no portable way to do it.
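There is no portable way to pull the size out of the v-table, but for what it's worth, point (b) can be automated with a small CRTP helper so each subclass gets its override for free (a sketch with made-up example classes; it is still one virtual call, so it does not address point (a)):

    #include <cstddef>

    struct Base {
        virtual ~Base() = default;
        virtual std::size_t ObjectSize() const = 0;
    };

    // Each concrete class derives from SizedAs<Itself>; the override is generated
    // automatically, so nobody has to hand-write a size function per subclass.
    template <typename Derived, typename BaseT = Base>
    struct SizedAs : BaseT {
        std::size_t ObjectSize() const override { return sizeof(Derived); }
    };

    struct Sprite : SizedAs<Sprite> { int x = 0, y = 0; };
    struct Enemy  : SizedAs<Enemy>  { float hp = 0; int x = 0, y = 0; };

    int main() {
        Base* p = new Enemy();
        std::size_t n = p->ObjectSize();   // sizeof(Enemy), via one virtual call
        delete p;
        return n == sizeof(Enemy) ? 0 : 1;
    }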
  9. So I have been playing around with yaml-cpp as I want to use YAML for most of my game data files; however, I am running into some pretty big performance issues and am not sure if it is something I am doing or the library itself. I created this code in order to test a moderately sized file: Player newPlayer = Player(); newPlayer.name = "new player"; newPlayer.maximumHealth = 1000; newPlayer.currentHealth = 1; Inventory newInventory; newInventory.maximumWeight = 10.9f; for (int z = 0; z < 10000; z++) { InventoryItem* newItem = new InventoryItem(); newItem->name = "Stone"; newItem->baseValue = 1; newItem->weight = 0.1f; newInventory.items.push_back(newItem); } YAML::Node newSavedGame; newSavedGame["player"] = newPlayer; newSavedGame["inventory"] = newInventory; This is where I ran into my first issue: memory consumption. Before I added this code, the memory usage of my game was about 22MB. After I added everything except the YAML::Node stuff, it went up to 23MB, so far nothing unexpected. Then when I added the YAML::Node and added data to it, the memory went up to 108MB. I am not sure why adding the class instances only adds about 1MB of memory, but copying that data into a YAML::Node instance takes another 85MB of memory. Putting that issue aside, I wanted to test the performance of writing out the files. The initial attempt looked like this: void YamlUtility::saveAsFile(YAML::Node node, std::string filePath) { std::ofstream myfile; myfile.open(filePath); myfile << node << std::endl; myfile.close(); } To write out the file (which ends up being about 570KB), it took about 8 seconds. That seems really slow to me. After reading the documentation a little more I decided to try a different route using the YAML::Emitter; the implementation looked like this: static void buildYamlManually(std::ofstream& file, YAML::Node node) { YAML::Emitter out; out << YAML::BeginMap << YAML::Key << "player" << YAML::Value << YAML::BeginMap << YAML::Key << "name" << YAML::Value << node["player"]["name"].as<std::string>() << YAML::Key << "maximumHealth" << YAML::Value << node["player"]["maximumHealth"].as<int>() << YAML::Key << "currentHealth" << YAML::Value << node["player"]["currentHealth"].as<int>() << YAML::EndMap; out << YAML::BeginSeq; std::vector<InventoryItem*> items = node["inventory"]["items"].as<std::vector<InventoryItem*>>(); for (InventoryItem* const value : items) { out << YAML::BeginMap << YAML::Key << "name" << YAML::Value << value->name << YAML::Key << "baseValue" << YAML::Value << value->baseValue << YAML::Key << "weight" << YAML::Value << value->weight << YAML::EndMap; } out << YAML::EndSeq; out << YAML::EndMap; file << out.c_str() << std::endl; } While this did seem to improve the speed, it still took about 7 seconds instead of 8.
Since it has been a while since I used C++ and I was not sure if this was normal, I decided, just for testing, to write a simple method that manually generates the YAML for this use case, which looked something like this: static void buildYamlManually(std::ofstream& file, SavedGame savedGame) { file << "player: \n" << " name: " << savedGame.player.name << "\n maximumHealth: " << savedGame.player.maximumHealth << "\n currentHealth: " << savedGame.player.currentHealth << "\ninventory:" << "\n maximumWeight: " << savedGame.inventory.maximumWeight << "\n items:"; for (InventoryItem* const value : savedGame.inventory.items) { file << "\n - name: " << value->name << "\n baseValue: " << value->baseValue << "\n weight: " << value->weight; } } This wrote the same file and took about 0.15 seconds, which is a lot more like what I was expecting. While I would expect some overhead in using yaml-cpp to manage and write out YAML files, consuming 70x+ the memory and being 40x+ slower at writing files seems really bad. I am not sure if I am doing something wrong with how I am using yaml-cpp that would be causing this issue, or maybe it was never designed to handle large files, but I was wondering if anyone has any insight on what might be happening here (or an alternative for dealing with YAML in C++)?
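One thing that might be worth measuring separately (a sketch built from the post's own structs and the same YAML::Emitter calls, so treat it as an assumption about where the time goes, not a confirmed fix): emit straight from the C++ objects instead of first copying everything into a YAML::Node tree and then reading it back out with .as<>(). The intermediate Node tree allocates a node per field, which could explain both the extra memory and much of the time.

    #include <fstream>
    #include "yaml-cpp/yaml.h"

    static void emitSavedGame(std::ofstream& file, const Player& player, const Inventory& inventory) {
        YAML::Emitter out;
        out << YAML::BeginMap;
        out << YAML::Key << "player" << YAML::Value << YAML::BeginMap
            << YAML::Key << "name" << YAML::Value << player.name
            << YAML::Key << "maximumHealth" << YAML::Value << player.maximumHealth
            << YAML::Key << "currentHealth" << YAML::Value << player.currentHealth
            << YAML::EndMap;
        out << YAML::Key << "inventory" << YAML::Value << YAML::BeginMap
            << YAML::Key << "maximumWeight" << YAML::Value << inventory.maximumWeight
            << YAML::Key << "items" << YAML::Value << YAML::BeginSeq;
        for (const InventoryItem* item : inventory.items) {
            out << YAML::BeginMap
                << YAML::Key << "name" << YAML::Value << item->name
                << YAML::Key << "baseValue" << YAML::Value << item->baseValue
                << YAML::Key << "weight" << YAML::Value << item->weight
                << YAML::EndMap;
        }
        out << YAML::EndSeq << YAML::EndMap;   // close items / inventory
        out << YAML::EndMap;                   // close the document map
        file << out.c_str() << "\n";
    }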
  10. So I am trying to using Yaml as my game data files (mainly because it support comments, is a bit easier to read than JSON, and I am going to be working in these files a lot) with C++ and yaml-cpp (https://github.com/jbeder/yaml-cpp) seems like the most popular library for dealing with it however I am running into an issue when using pointers. Here is my code: struct InventoryItem { std::string name; int baseValue; float weight; }; struct Inventory { float maximumWeight; std::vector<InventoryItem*> items; }; namespace YAML { template <> struct convert<InventoryItem*> { static Node encode(const InventoryItem* inventoryItem) { Node node; node["name"] = inventoryItem->name; node["baseValue"] = inventoryItem->baseValue; node["weight"] = inventoryItem->weight; return node; } static bool decode(const Node& node, InventoryItem* inventoryItem) { // @todo validation inventoryItem->name = node["name"].as<std::string>(); inventoryItem->baseValue = node["baseValue"].as<int>(); inventoryItem->weight = node["weight"].as<float>(); return true; } }; template <> struct convert<Inventory> { static Node encode(const Inventory& inventory) { Node node; node["maximumWeight"] = inventory.maximumWeight; node["items"] = inventory.items; return node; } static bool decode(const Node& node, Inventory& inventory) { // @todo validation inventory.maximumWeight = node["maximumWeight"].as<float>(); inventory.items = node["items"].as<std::vector<InventoryItem*>>(); return true; } }; } if I just did `std::vector<InventoryItem> items` and had the encode / decode use `InventoryItem& inventoryItem` everything works fine however when I use the code above that has it as a pointer, I get the following error from code that is part of the yaml-cpp library: impl.h(123): error C4700: uninitialized local variable 't' used The code with the error is: template <typename T> struct as_if<T, void> { explicit as_if(const Node& node_) : node(node_) {} const Node& node; T operator()() const { if (!node.m_pNode) throw TypedBadConversion<T>(node.Mark()); T t; if (convert<T>::decode(node, t)) // NOTE: THIS IS THE LINE THE COMPILER ERROR IS REFERENCING return t; throw TypedBadConversion<T>(node.Mark()); } }; With my relative lack of experience in C++ and not being able to find any documentation for yaml-cpp using pointers, I am not exactly sure what is wrong with my code. Anyone have any ideas what I need to change with my code?
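The C4700 comes from yaml-cpp's as_if helper declaring "T t;" where T is InventoryItem*, i.e. an uninitialized pointer that decode then writes through. One way around it (a sketch based on the post's structs; whether raw owning pointers are the right model here is a separate question, and std::unique_ptr or by-value items would avoid the leak risk) is to take the pointer by reference and allocate the object inside decode:

    namespace YAML {
    template <>
    struct convert<InventoryItem*> {
        static Node encode(const InventoryItem* inventoryItem) {
            Node node;
            node["name"] = inventoryItem->name;
            node["baseValue"] = inventoryItem->baseValue;
            node["weight"] = inventoryItem->weight;
            return node;
        }

        // Take InventoryItem*& (not InventoryItem*) so the pointer created by
        // as_if is actually assigned, and allocate the object here.
        static bool decode(const Node& node, InventoryItem*& inventoryItem) {
            if (!node["name"] || !node["baseValue"] || !node["weight"])
                return false;
            inventoryItem = new InventoryItem();
            inventoryItem->name = node["name"].as<std::string>();
            inventoryItem->baseValue = node["baseValue"].as<int>();
            inventoryItem->weight = node["weight"].as<float>();
            return true;
        }
    };
    }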
  11. Hi, I've been working on a game engine for years and I've recently come back to it after a couple of years' break. Because my engine uses DirectX 9.0c I thought maybe it would be a good idea to upgrade it to DX11. I then installed Windows 10 and started tinkering around with the engine, trying to refamiliarise myself with all the code. It all seems to work OK in the new OS, but there's something I've noticed that has caused a massive slowdown in frame rate. My engine has a relatively sophisticated terrain system which includes the ability to paint roads onto it, a la CryEngine. The roads are spline curves, built up from polygons matching the terrain surface. It used to work perfectly, but I've noticed that when I'm dynamically adding the roads, which involves moving the spline curve control points around the surface of the terrain, the frame rate comes to a grinding halt. There's some relatively complex processing going on each time the mouse moves: the road on either side of the control point(s) being moved is reconstructed in real time so you can position and bend the road precisely. On my previous OS, which was Win2k Pro, this worked really smoothly, and in release mode there was barely any slowdown in frame rate, but now it's unusable. As part of the road reconstruction, I lock the vertex and index buffers and refill them with the new values, so my question is: on Windows 10 using DX9, is anyone aware of any locking issues? I'm aware that there can be contention when locking buffers dynamically, but I'm locking with D3DLOCK_DISCARD and this has never been an issue before. Any help would be greatly appreciated.
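For reference, the usual discard/no-overwrite idiom for frequently updated dynamic D3D9 buffers looks roughly like the sketch below (Vertex, kMaxVerts and the function names are placeholders, not from the post). If the road buffers are in D3DPOOL_MANAGED or were created without D3DUSAGE_DYNAMIC, locking them on every mouse move will stall far harder than this pattern, and that is worth ruling out before blaming the OS:

    #include <d3d9.h>
    #include <cstring>

    struct Vertex { float x, y, z; };        // placeholder layout
    static const UINT kMaxVerts = 65536;     // placeholder capacity

    IDirect3DVertexBuffer9* CreateRoadBuffer(IDirect3DDevice9* device)
    {
        IDirect3DVertexBuffer9* vb = nullptr;
        device->CreateVertexBuffer(kMaxVerts * sizeof(Vertex),
                                   D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                                   0, D3DPOOL_DEFAULT, &vb, nullptr);
        return vb;
    }

    // Append new road vertices: NOOVERWRITE while there is room, DISCARD to orphan
    // the buffer when it is full, so the CPU never waits on the GPU.
    void AppendRoadVertices(IDirect3DVertexBuffer9* vb, const Vertex* src, UINT count, UINT& writeOffset)
    {
        DWORD flags = D3DLOCK_NOOVERWRITE;
        if (writeOffset + count > kMaxVerts) {
            writeOffset = 0;
            flags = D3DLOCK_DISCARD;
        }
        void* dst = nullptr;
        if (SUCCEEDED(vb->Lock(writeOffset * sizeof(Vertex), count * sizeof(Vertex), &dst, flags))) {
            std::memcpy(dst, src, count * sizeof(Vertex));
            vb->Unlock();
        }
        writeOffset += count;
    }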
  12. I'm writing a small 3D Vulkan game engine using C++. I'm working in a team, and the other members know almost nothing about C++. About three years ago I found the programming language D, which seems very interesting, as it's very similar to C++. My idea was to implement core systems like rendering, math, serialization and so on in C++ and then wrap it all with a D framework that is easier to use and less complicated. Is it worth it, or should I stick to C++ only? Does it have lower performance compared to a pure C++ application?
  13. phil67rpg

    wait loop

    void collision(int v) { collision_bug_one(0.0f, 10.0f); glutPostRedisplay(); glutTimerFunc(1000, collision, 0); } void coll_sprite() { if (board[0][0] == 1) { collision(0); flag[0][0] = 1; } } void erase_sprite() { if (flag[0][0] == 1) { glColor3f(0.0f, 0.0f, 0.0f); glBegin(GL_POLYGON); glVertex3f(0.0f, 10.0f, 0.0f); glVertex3f(0.0f, 9.0f, 0.0f); glVertex3f(1.0f, 9.0f, 0.0f); glVertex3f(1.0f, 10.0f, 0.0f); glEnd(); } } I am using glutTimerFunc to wait a small amount of time and display a collision sprite before I black the sprite out. Unfortunately, my code only blacks out said sprite without ever drawing the collision sprite. I have done a great deal of research on glutTimerFunc and animation.
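    A sketch of one way to sequence this, reusing the post's board and flag arrays (show_collision is my own addition): draw the collision sprite while show_collision is set, and only set flag[0][0] (which triggers erase_sprite) when the timer fires, instead of at collision time.

    static int show_collision = 0;    // new state flag (my addition): collision sprite is visible

    void end_collision(int v)         // runs ~1000 ms after the hit
    {
        show_collision = 0;
        flag[0][0] = 1;               // only now let erase_sprite() black the cell out
        glutPostRedisplay();
    }

    void coll_sprite()
    {
        if (board[0][0] == 1 && !show_collision && flag[0][0] == 0) {
            show_collision = 1;       // the display callback should draw the collision sprite while this is set
            glutTimerFunc(1000, end_collision, 0);
            glutPostRedisplay();
        }
    }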
  14. Hi guys, I'm trying to learn this stuff but running into some problems 😕 I've compiled my .hlsl into a header file which contains the global variable with the precompiled shader data: //... // Approximately 83 instruction slots used #endif const BYTE g_vs[] = { 68, 88, 66, 67, 143, 82, 13, 236, 152, 133, 219, 113, 173, 135, 18, 87, 122, 208, 124, 76, 1, 0, 0, 0, 16, 76, 0, 0, 6, 0, //.... And now following the "Compiling at build time to header files" example at this msdn link , I've included the header files in my main.cpp and I'm trying to create the vertex shader like this: hr = g_d3dDevice->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &g_d3dVertexShader); if (FAILED(hr)) { return -1; } and this is failing, entering the if and returing -1. Can someone point out what I'm doing wrong? 😕
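One thing that usually makes this kind of failure self-explanatory (a sketch, not taken from the post; the helper name and the context pointer are assumptions): create the device with the debug layer in debug builds, so CreateVertexShader prints the actual reason for the failed HRESULT to the Output window, for example a blob compiled for a different shader stage or for a higher shader model than the device's feature level.

    #include <d3d11.h>

    HRESULT CreateDeviceWithDebugLayer(ID3D11Device** device, ID3D11DeviceContext** context)
    {
        UINT flags = 0;
    #if defined(_DEBUG)
        flags |= D3D11_CREATE_DEVICE_DEBUG;   // human-readable errors in the Output window
    #endif
        D3D_FEATURE_LEVEL featureLevel = {};
        return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 device, &featureLevel, context);
    }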
  15. Hi, I'm trying to mix two textures using my own shader system, but I have a problem (I think) with uniforms. Code: https://github.com/HawkDeath/shader/tree/test To debug I use RenderDoc, but I have not gotten good results. The first attachment is my result; the second attachment is what it should be. PS. I am basing this on this tutorial: https://learnopengl.com/Getting-started/Textures.
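For comparison, the two-texture setup from the cited LearnOpenGL chapter boils down to the calls below (a sketch; shaderProgram, texture1 and texture2 are placeholder names, and "texture1"/"texture2" must match the sampler2D names in the fragment shader). Forgetting the glUniform1i calls that map each sampler to a texture unit is a common reason the second texture never shows up:

    // One-time setup after linking the program (texture1/texture2 are the GL texture ids).
    void bindTwoTextures(GLuint shaderProgram, GLuint texture1, GLuint texture2)
    {
        glUseProgram(shaderProgram);
        glUniform1i(glGetUniformLocation(shaderProgram, "texture1"), 0);  // sampler -> unit 0
        glUniform1i(glGetUniformLocation(shaderProgram, "texture2"), 1);  // sampler -> unit 1

        // Before each draw call:
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture1);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture2);
    }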
  16. Hello everyone, this is a very important question, so I hope I get a response, please. I created a program using the Windows API and it is nice. However, in a separate thread we discussed scaling up the graphics of a different game I was working on. What I'm working on now is a windowed utility that is 1024 x 768 for every monitor below 2x. My question is: do I need to scale up this simple window with buttons too? I can do this if I need to, but the program is small, so I was wondering if this is the way to go (increasing the size of the buttons and the font size). So, do I need to make a different version for 2x, 4x, 8x? I hope I get a response or I'm doomed. Thanks, Josheir
  17. I have some important questions on the back burner, but first I have a question of interest. When using Visual Studio 2015 with C++ there are predefined environment variables, such as $(SolutionDir), that are used for the build. I have searched long and hard to no avail; I am trying to modify these in a simple way. I moved my file structure, and although it mostly worked (which is great), the build still references the old folder names from before the move. Thank you, Josheir Edit: If it helps, they are also called macros.
  18. On Stack Overflow, I asked the same question I'm about to ask now; there are comments on the question that give some information. What are the differences between OpenMP, OpenACC, OpenCL, SIMD, and MIMD? Also, which cases is each more suited for? What I currently know: OpenCL and CUDA are for GPU programming; they take advantage of the fact that GPUs have a lot of cores. CUDA is proprietary to NVIDIA and only works on its GPUs, whilst OpenCL is multiplatform. OpenMP is for CPU-side parallelism. OpenACC also seems to be for CPU task parallelism. SIMD is for executing an operation on multiple cores (CPU only?). MIMD seems to be for executing multiple operations on multiple cores (CPU only?). What I aim to learn here is which libraries are best suited for optimizing an algorithm on the CPU and GPU. Hopefully, I would like to use only one library to do both.
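As a concrete point of reference for the CPU side (a small illustration, not from the post): OpenMP spreads loop iterations across CPU threads, and its simd clause additionally asks the compiler to vectorize each thread's chunk, so thread-level and SIMD-level parallelism are separate axes that one directive can combine.

    #include <vector>

    // Compile with an OpenMP-enabled compiler, e.g. g++ -fopenmp.
    void scale(std::vector<float>& data, float k)
    {
        #pragma omp parallel for simd
        for (long i = 0; i < static_cast<long>(data.size()); ++i)
            data[i] *= k;
    }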
  19. Hi, I have a terrain engine where the terrain and water are on different grids. So I'm trying to render planar reflections of the terrain into the water grid. After reading some web pages and docs and also trying to learn from the RasterTek reflections demo and the small water bodies demo as well. What I do is as follows: 1. Create a Reflection view matrix - Technically I ONLY flip the camera position in the Y direction (Positive Y is up) and add to it 2 * waterLevel. Then I update the View matrix and I save that matrix for later. The code: void Camera::UpdateReflectionViewMatrix( float waterLevel ) { mBackupPosition = mPosition; mBackupLook = mLook; mPosition.y = -mPosition.y + 2.0f * waterLevel; //mLook.y = -mLook.y + 2.0f * waterLevel; UpdateViewMatrix(); mReflectionView = View(); } 2. I render the Terrain geometry to a 512x512 sized Render target by using the Reflection view matrix and an opposite culling (My Terrain is using front culling by nature so I'm using back culling for the Reflction render pass). Let me say that I checked with the Graphics debugger and the Reflection Render target looks "OK" at this stage (Picture attached). I don't know if the fact that the terrain is shown only at the top are of the texture is expected or not, but it seems OK. 3. Render the Reflection texture into the water using projective texturing - I hope this step is OK code wise. Basically I'm sending to the shader the WorldReflectionViewProj matrix that was created at step 1 in order to use it for the projective texture coordinates, I then convert the position in the DS (Water and terrain are drawn with Tessellation) to the projective tex coords using that WorldReflectionViewProj matrix, then I sample the reflection texture after setting up the coordinates in the PS. Here is the code: //Send the ReflectionWorldViewProj matrix to the shader: XMStoreFloat4x4(&mPerFrameCB.Data.ReflectionWorldViewProj, XMMatrixTranspose( ( mWorld * pCam->GetReflectedView() ) * mProj )); //Setting up the Projective tex coords in the DS: Output.projTexPosition = mul(float4(worldPos.xyz, 1), g_ReflectionWorldViewProj); //Setting up the coords in the PS and sampling the reflection texture: float2 projTexCoords; projTexCoords.x = input.projTexPosition.x / input.projTexPosition.w / 2.0 + 0.5; projTexCoords.y = -input.projTexPosition.y / input.projTexPosition.w / 2.0 + 0.5; projTexCoords += normal.xz * 0.025; float4 reflectionColor = gReflectionMap.SampleLevel(SamplerClampLinear, projTexCoords, 0); texColor += reflectionColor * 0.25; I'll add that when compiling the PS I'm getting a warning on those dividing by input.projTexPosition.w for a possible float division by 0, I tried to add some offset or some minimum to the dividing term but that still not solved my issue. Here is the problem itself. At relatively flat view angles I'm seeing correct reflections (Or at least so it seems), but as I pitch the camera down, I'm seeing those artifacts which I have no idea where are coming from. I'm culling the terrain in the reflection render pass when it's lower than water height (I have heightmaps for that). Any help will be appreciated because I don't know what is wrong or where else to look.
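Not a diagnosis of the artifacts, but one way to remove a possible source of error in step 1 (a sketch using DirectXMath, which the post's XMStoreFloat4x4/XMMatrixTranspose calls suggest is already in use; the function name is a placeholder): instead of hand-flipping only the camera position, build a reflection matrix about the water plane and fold it into the view, which reflects the look direction as well.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Plane y = waterLevel in (nx, ny, nz, d) form, with dot(n, p) + d = 0.
    XMMATRIX ComputeReflectionView(const XMMATRIX& view, float waterLevel)
    {
        XMVECTOR waterPlane = XMVectorSet(0.0f, 1.0f, 0.0f, -waterLevel);
        XMMATRIX reflect = XMMatrixReflect(waterPlane);
        return reflect * view;   // reflect the world about the plane, then apply the usual view
    }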
  20. Just as the title implies, when I make a small change such as "x=4;" now to read "x=5;" the compiler doesn't build that part of the code and leaves me with the old value. It correctly compiles when I select "rebuild solution".... How do I fix this?
  21. I'm trying to implement my own tree container. I have written a "TreeContainer.h" template class and I am posting a piece of its content below. I have defined a type named "ContainerType", because I am planning to test different other core STL container types later. I want the container type to be define by a "using" or "typedef" macro. I am getting some compiler error messages which I am having a hard time to understand. Every error message is give in the code as comments. Can you please help me understand what I am doing wrong in my code. IDE: Visual Studio 15.7.3 (Language Standard: C++17. Other than that the default compiler options are used.) #include <memory> #include <vector> template <class T> class TreeContainer { public: using pT = T *; using ContainerType = std::vector<pT>; // ... bool DeleteChild(T * const pChild); ContainerType & GetChildren1(); ContainerType & GetChildren2(); // ... private: ContainerType Children; }; // ... template <class T> bool TreeContainer<T>::DeleteChild(T * const pChild) { for (std::vector<T*>::iterator it=Children.begin(); it!=Children.end(); ++it) // Error C2760 syntax error: unexpected token 'identifier', expected ';' { if (pChild == (*it)[i]) { delete *it; Children.erase(cit); return true; } } return false; } template <class T> TreeContainer<T>::ContainerType & TreeContainer<T>::GetChildren1() // ERROR: Error C2061 syntax error: identifier 'ContainerType' { // ERROR1: Error C2447 '{': missing function header (old-style formal list?), ERROR2: Error C2143 syntax error: missing ';' before '{' return Children; } // THIS WORKS template <class T> std::vector<T*> & TreeContainer<T>::GetChildren2() { return Children; } // ... TreeContainer.h
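Most of those errors come from dependent names: inside a template, the compiler cannot tell that ContainerType::iterator or TreeContainer<T>::ContainerType names a type, so it needs the typename keyword. A sketch of the two members with that fixed (the stray [i] and cit in the loop are assumed to be typos for *it and it):

    template <class T>
    bool TreeContainer<T>::DeleteChild(T * const pChild)
    {
        for (typename ContainerType::iterator it = Children.begin(); it != Children.end(); ++it)
        {
            if (pChild == *it)
            {
                delete *it;
                Children.erase(it);
                return true;
            }
        }
        return false;
    }

    // Out-of-class definitions returning a nested dependent type need typename too.
    template <class T>
    typename TreeContainer<T>::ContainerType & TreeContainer<T>::GetChildren1()
    {
        return Children;
    }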
  22. Hi, I am trying to program shadow volumes and I stumbled upon an artifact whose cause I cannot find. I generate the shadow volumes using a geometry shader with reversed extrusion (projecting the light-facing triangles to infinity) and write the stencil buffer according to z-fail. The base of my code is the "lighting" chapter from learnopengl.com, where I extended the shader class to include a geometry shader. I also modified the "lightingshader" to draw the ambient pass when "pass" is set to true and the diffuse/specular pass when it is set to false. For easier testing I added a few controls to switch the shadow volumes' color rendering on/off and to change the cubes' positions, made the light number controllable, and changed the diffuse pass to render green for easier visualization of my problem. The first picture shows the rendered scene for one point light and all cubes, with the front cube's shadow volume the only one created (intentional). Here, everything is rendered as it should be, with all lit areas green and all areas inside the shadow volume black (with the volume's sides blended over). If I now turn on the shadow volumes for all the other cubes, we get a bit of a mess, but it is also obvious that some areas that were in shadow before are now erroneously lit (for example, the first cube to the right of the originally shadow-volumed cube). From my testing, the erroneously lit areas are the ones where more than one shadow volume marks the area as shadowed. To check whether a wrong stencil buffer value caused this problem, I changed the stencil function for the diffuse pass to only render if the stencil is equal to 2. As I repeated this approach with different values for the stencil function, I found that if I set the value to 1 or any other odd value, the lit and shadowed areas are inverted, and if I set it to 0 or any other even value, I get the results shown above. This led me to believe that the value, and thus the stencil buffer values, may be clamped to [0,1], which would also explain the artifact, because twice in shadow would equal no shadow at all; but from what I found on the internet and from what I tested with GLint stencilSize = 0; glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_STENCIL, GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE, &stencilSize); my stencil size is 8 bits, which should allow values within [0,255]. Does anyone know what might be the cause of this artifact or of the confusing results with other stencil functions?
// [the following code includes all used gl* functions, other parts are due to readability partialy excluded] // glfw: initialize and configure // ------------------------------ glfwInit(); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // glfw window creation // -------------------- GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL); if (window == NULL) { cout << "Failed to create GLFW window" << endl; glfwTerminate(); return -1; } glfwMakeContextCurrent(window); glfwSetFramebufferSizeCallback(window, framebuffer_size_callback); glfwSetCursorPosCallback(window, mouse_callback); glfwSetScrollCallback(window, scroll_callback); // tell GLFW to capture our mouse glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED); // glad: load all OpenGL function pointers // --------------------------------------- if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) { cout << "Failed to initialize GLAD" << endl; return -1; } // ==================================================================================================== // window and functions are set up // ==================================================================================================== // configure global opengl state // ----------------------------- glEnable(GL_DEPTH_TEST); glEnable(GL_CULL_FACE); // build and compile our shader program [...] // set up vertex data (and buffer(s)) and configure vertex attributes [...] // shader configuration [...] // render loop // =========== while (!glfwWindowShouldClose(window)) { // input processing and fps calculation[...] // render // ------ glClearColor(0.1f, 0.1f, 0.1f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glDepthMask(GL_TRUE); //enable depth writing glDepthFunc(GL_LEQUAL); //avoid z-fighting //draw ambient component into color and depth buffer view = camera.GetViewMatrix(); projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f); // setting up lighting shader for ambient pass [...] // render the cubes glBindVertexArray(cubeVAO); for (unsigned int i = 0; i < 10; i++) { //position cube [...] glDrawArrays(GL_TRIANGLES, 0, 36); } //------------------------------------------------------------------------------------------------------------------------ glDepthMask(GL_FALSE); //disable depth writing glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE); //additive blending glEnable(GL_STENCIL_TEST); //setting up shadowShader and lightingShader [...] for (int light = 0; light < lightsused; light++) { glDepthFunc(GL_LESS); glClear(GL_STENCIL_BUFFER_BIT); //configure stencil ops for front- and backface to write according to z-fail glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); //-1 for front-facing glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP); //+1 for back-facing glStencilFunc(GL_ALWAYS, 0, GL_TRUE); //stencil test always passes if(hidevolumes) glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); //disable writing to the color buffer glDisable(GL_CULL_FACE); glEnable(GL_DEPTH_CLAMP); //necessary to render SVs into infinity //draw SV------------------- shadowShader.use(); shadowShader.setInt("lightnr", light); int nr; if (onecaster) nr = 1; else nr = 10; for (int i = 0; i < nr; i++) { //position cube[...] 
glDrawArrays(GL_TRIANGLES, 0, 36); } //-------------------------- glDisable(GL_DEPTH_CLAMP); glEnable(GL_CULL_FACE); glStencilFunc(GL_EQUAL, 0, GL_TRUE); //stencil test passes for ==0 so only for non shadowed areas glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); //keep stencil values for illumination glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); //enable writing to the color buffer glDepthFunc(GL_LEQUAL); //avoid z-fighting //draw diffuse and specular pass lightingShader.use(); lightingShader.setInt("lightnr", light); // render the cubes for (unsigned int i = 0; i < 10; i++) { //position cube[...] glDrawArrays(GL_TRIANGLES, 0, 36); } } glDisable(GL_BLEND); glDepthMask(GL_TRUE); //enable depth writing glDisable(GL_STENCIL_TEST); //------------------------------------------------------------------------------------------------------------------------ // also draw the lamp object(s) [...] // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.) // ------------------------------------------------------------------------------- glfwSwapBuffers(window); glfwP } // optional: de-allocate all resources once they've outlived their purpose: // ------------------------------------------------------------------------ glDeleteVertexArrays(1, &cubeVAO); glDeleteVertexArrays(1, &lightVAO); glDeleteBuffers(1, &VBO); // glfw: terminate, clearing all previously allocated GLFW resources. // ------------------------------------------------------------------ glfwTerminate(); return 0;
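One detail in the snippet above that would produce exactly the even/odd behaviour described (offered as a likely culprit, not a confirmed diagnosis): the third argument of glStencilFunc is a bit mask, not a boolean. GL_TRUE has the value 1, so glStencilFunc(..., GL_TRUE) lets only the lowest stencil bit participate in the comparison, and a pixel covered by two shadow volumes (stencil value 2) then compares equal to 0 and gets lit. Using a full 8-bit mask keeps the whole counter:

    // Shadow-volume pass: count front/back faces using the full stencil range.
    glStencilFunc(GL_ALWAYS, 0, 0xFF);

    // Lighting pass: lit only where the signed count is exactly zero.
    glStencilFunc(GL_EQUAL, 0, 0xFF);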
  23. irreversible

    Visual Studio 2017 usability issues

    I recently upgraded from 2013 Pro to 2017 Community and I've been putting the IDE through its paces. After a couple of weeks, here are the top things that still confuse me; I'm wondering if there might be something I'm missing.
      • Like everyone else, I don't understand the sudden need for the green squigglies. I've seen MS suggest adding problematic macro declarations to the hint file over and over again, but that 1) didn't use to be necessary, 2) is a blatant hack, and 3) doesn't always work.
      • Occasionally, when writing a new variable declaration or a function definition, I get a sporadic "Object reference not set" error in the form of a modal popup. Usually this happens when I type a type and a variable name and then press Tab (which I do often due to the way I format my code). The error persists until I press OK/Cancel (e.g. Enter or Escape), then briefly multitask out of the IDE and back again. I used to get this error in 2013, but only occasionally, when code peek failed to display something. I can't really find information on this particular flavor of the error on the web.
      • Despite being eligible for the free Community edition, I bought myself into the new ecosystem by renewing Visual Assist X. Initially this caused massive slowdown issues in the editor, to the point where the screen would be redrawn 2-3 times per second while scrolling quickly. I alleviated this by disabling VAX's own IntelliSense (which is apparently really slow and feels incomplete) and disabling most extensions I was using in 2013. I mention this because it leads to my next issue:
      • Auto-complete (i.e. the built-in parser) flat out fails in many cases. The most notable case is listing enum members, which simply does not work. Now, I don't use vanilla enum syntax, but rather have my own self-unrolling macros that recursively expand enum declarations using FOR_EACH. This, however, was not an issue in 2013.
      • What baffles me even more is that auto-complete no longer suggests function names and/or signatures when overriding a function in a derived class. It simply doesn't suggest anything.
      • Build times seem comparable to 2013, but link times can be uneven and abysmal, even when changes are small and incremental.
    Given that the IDE has been out for a while and my whole system is as up to date as it gets, I'm left to wonder if at least some of these issues can be mitigated in some way. Considering that it took me 10+ hours just to find the right config to get the IDE to be responsive without sacrificing too many things I'm accustomed to, it's entirely possible I've missed a checkbox or somehow managed to misconfigure either VS or VAX. Are you having a similar user experience? Any ideas or suggestions?
  24. I've been wondering how to choose from what seems like a huge assortment of external C++ libraries. How do people choose what's best for them? I know I need a 2D graphical library for my game, but I'm unsure what else could be useful. Is there an easy way to see what's available?
  25. Hi, i am self teaching me graphics and oo programming and came upon this: My Window class creates an input handler instance, the glfw user pointer is redirected to that object and methods there do the input handling for keyboard and mouse. That works. Now as part of the input handling i have an orbiting camera that is controlled by mouse movement. GLFW_CURSOR_DISABLED is set as proposed in the glfw manual. The manual says that in this case the cursor is automagically reset to the window's center. But if i don't reset it manually with glfwSetCursorPos( center ) mouse values seem to add up until the scene is locked up. Here are some code snippets, mostly standard from tutorials: // EventHandler m_eventHandler = new EventHandler( this, glm::vec3( 0.0f, 5.0f, 0.0f ), glm::vec3( 0.0f, 1.0f, 0.0f ) ); glfwSetWindowUserPointer( m_window, m_eventHandler ); m_eventHandler->setCallbacks(); Creation of the input handler during window creation. For now, the camera is part of the input handler, hence the two vectors (position, up-vector). In future i'll take that functionally out into an own class that inherits from the event handler. void EventHandler::setCallbacks() { glfwSetCursorPosCallback( m_window->getWindow(), cursorPosCallback ); glfwSetKeyCallback( m_window->getWindow(), keyCallback ); glfwSetScrollCallback( m_window->getWindow(), scrollCallback ); glfwSetMouseButtonCallback( m_window->getWindow(), mouseButtonCallback ); } Set callbacks in the input handler. // static void EventHandler::cursorPosCallback( GLFWwindow *w, double x, double y ) { EventHandler *c = reinterpret_cast<EventHandler *>( glfwGetWindowUserPointer( w ) ); c->onMouseMove( (float)x, (float)y ); } Example for the cursor pos callback redirection to a class method. // virtual void EventHandler::onMouseMove( float x, float y ) { if( x != 0 || y != 0 ) { // @todo cursor should be set automatically, according to doc if( m_window->isCursorDisabled() ) glfwSetCursorPos( m_window->getWindow(), m_center.x, m_center.y ); // switch up/down because its more intuitive m_yaw += m_mouseSensitivity * ( m_center.x - x ); m_pitch += m_mouseSensitivity * ( m_center.y - y ); // to avoid locking if( m_pitch > 89.0f ) m_pitch = 89.0f; if( m_pitch < -89.0f ) m_pitch = -89.0f; // Update Front, Right and Up Vectors updateCameraVectors(); } } // onMouseMove() Mouse movement processor method. The interesting part is the manual reset of the mouse position that made the thing work ... // straight line distance between the camera and look at point, here (0,0,0) float distance = glm::length( m_target - m_position ); // Calculate the camera position using the distance and angles float camX = distance * -std::sin( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch) ); float camY = distance * -std::sin( glm::radians( m_pitch) ); float camZ = -distance * std::cos( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch) ); // Set the camera position and perspective vectors m_position = glm::vec3( camX, camY, camZ ); m_front = glm::vec3( 0.0, 0.0, 0.0 ) - m_position; m_up = m_worldUp; m_right = glm::normalize( glm::cross( m_front, m_worldUp ) ); glm::lookAt( m_position, m_front, m_up ); Orbiting camera vectors calculation in updateCameraVectors(). Now, for my understanding, as the glfw manual explicitly states that if cursor is disabled then it is reset to the center, but my code only works if it is reset manually, i fear i am doing something wrong. 
It is not world-moving (only if there is a world to render :-)), but I am curious what I am missing. I am not a professional programmer, just a hobbyist, so it may well be that I got something fundamentally wrong :-) Thanks for any hints.
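For what it's worth, with GLFW_CURSOR_DISABLED the cursor position becomes virtual and unbounded (it keeps accumulating rather than snapping back to the window center), which is why most GLFW camera tutorials track the previous position and work with per-event deltas instead of comparing against the center. A sketch of that variant of onMouseMove (m_lastX, m_lastY and m_firstMouse are assumed members, not from the post):

    void EventHandler::onMouseMove( float x, float y )
    {
        if( m_firstMouse ) {            // avoid a huge jump on the very first event
            m_lastX = x;
            m_lastY = y;
            m_firstMouse = false;
        }

        // Same sign convention as the original code (previous - current).
        float offsetX = m_lastX - x;
        float offsetY = m_lastY - y;
        m_lastX = x;
        m_lastY = y;

        m_yaw   += m_mouseSensitivity * offsetX;
        m_pitch += m_mouseSensitivity * offsetY;

        if( m_pitch >  89.0f ) m_pitch =  89.0f;
        if( m_pitch < -89.0f ) m_pitch = -89.0f;

        updateCameraVectors();
    }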