Community Reputation: 1054 Excellent

About Shaarigan

  1. I like the idea, and I did something similar in the past (in pure C# and WinForms) as an inspired response to the Alice OS hacker game, but as a homebrew only. This kind of project has been on my TODO list for a couple of years: bring the old idea back to life, but with a different background. My idea was to also make the game a mobile social interaction thing, where you could join at the train station, on the train, or while sitting in a mall, find other local players, and try to hack their devices (in game, not for real), let players provide challenges, or take part in group events. I also wrote some kind of "OS emulator" a very long time ago (let it be 8 years now), also in C#, that looked similar to this one. So I can absolutely imagine this being fun for the right audience. Maybe the gamer community is ready for this now, but similar projects such as Notch's 0x10c have already failed, and titles like Watch Dogs put more emphasis on the action part than on "real hacking". So I, and maybe some others, are curious: how do you plan to make the game fun long term and beginner friendly for people with no or minimal coding/OS skills? Have you made a game design document already? Don't get me wrong, I could imagine spending some time working on this
  2. You need either a complete input system or to utilize touch input for mobile devices. Input.GetTouch contains all the information you need about touch input. You could write an interface that captures touch input and maps it to Unity UI, but you won't get any hover events for touch, as there is no mouse that could "hover" over anything. I wrote such a touch UI for Unity about 2 years ago, when there was the need for a car control stick
  3. C# How to reduce data sizes?

    We had a huge amount of AI in one game I worked on this year, where we needed to simulate a 40*40 km highway updating a huge number of vehicles. Updating a million AI agents per frame is not solvable without a huge server farm, I think, but why do you want to do this? Let's assume you have a world where each player is able to see all zombies all the time. Even then, a zombie will typically not do anything every frame; it will stand around most of the time. Zombies, like any other AI, need a reaction time to seem realistic, which may be a few hundred milliseconds, so they don't need to update every frame either. Given that, you could separate those million AI agents so that each agent has its own reaction latency and update only a few of them every frame. At 60 frames per second you have a frame time of 16.667 milliseconds; assuming a standard zombie reaction time of 200 milliseconds, you only need to update N / 12 zombies each frame. That is roughly 83,000 zombies per frame. Now, I do not know anything about your gameplay, but I do not think you need to update 83,000 zombies each frame. Assuming a standard zombie footprint of 60*60 cm, even a million zombies packed into a pure wall block without any level geometry between them would only cover a region of about 600*600 m; spread over a realistic map, most of them will be far outside the player's view distance at any moment, so you won't need to simulate more than a fraction of them per frame. If I were making a game with this many AI agents, I would cheat wherever possible. Separate those zombies into different update loops that run every 10th frame for any AI that is visible (or faster, like I did with every 3rd frame for the vehicle AI, depending on the driving speed), run your update every 20th frame for each AI that is near the player but not visible, and every 100th frame for the rest.
I would do this as a sub-task so every update has enough time to run, and collect updates for your tasks, like a packet of 20 or 50 agents' data, to continuously send over the network. As I do not think you need to rotate in all three directions, you could pass one rotation value as 3 bits, one ID value as 20 bits, and 35 bits each for the X/Z axes (the Y axis needs much less, maybe 16 bits), which would mean 14 bytes (112 bits) per agent. A 512-byte packet can then send updates for 35 agents, including some overhead bytes. You should reduce this further by only sending updates for agents that have changed their state. I think in a real game this would reduce network traffic to a few kilobytes per second
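    The staggered update loops described above can be sketched roughly like this (a minimal sketch; the bucket intervals follow the post, but the agent structure and names are my own illustration, not the actual code):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Update-frequency buckets as suggested above: visible AI ticks
    // every 10th frame, nearby-but-hidden every 20th, the rest every 100th.
    enum class Bucket { Visible = 10, Near = 20, Far = 100 };

    struct Agent {
        Bucket bucket;
        int updateCount = 0;
    };

    // Spread the work: agent i in a bucket with interval K only runs on
    // frames where (frame % K) == (i % K), so each frame touches ~N/K agents.
    void updateAgents(std::vector<Agent>& agents, int frame) {
        for (std::size_t i = 0; i < agents.size(); ++i) {
            const int interval = static_cast<int>(agents[i].bucket);
            if (frame % interval == static_cast<int>(i) % interval) {
                agents[i].updateCount++;  // the real AI tick would go here
            }
        }
    }
    ```

    Over 100 frames a visible agent ticks 10 times, a near one 5 times, and a far one once, so the per-frame cost drops accordingly.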
  4. Open Source Direct3D 12 Game Engine?

    I asked this question a little while ago too, but more or less about the core-feature code base, as I was (and, due to low spare time, still am) in the process of reassembling my source base to be more flexible. The initial intention I had with my engine (which has become more of a framework now) was to be totally module based, because I thought, and still think, as per what @kburkhart84 wrote, that the big players have too much stuff in their engines that you might need but mostly won't, depending on each distinct game you make. Why? Simple: they want to be general. That means engines such as Unreal or Unity 3D offer the features they think the average of their users wants. Unity lacked a good UI framework for a long time, and people used NGUI and whatever else was on the market until Unity implemented its own improved UI framework. What? I asked myself a simple question: when making a 2D game, why am I still forced to use 3D and 3D math? A good question, as it turned out, because a single matrix multiplication in 3D takes (even when optimized) a good amount more time to compute than a 2D one. It may or may not make a difference depending on the number of computations per frame, but in AAA games, with sometimes a million recalculations per frame, it does. Just food for thought. So in the end I chose the core features I need in every module and decided the same for the features that do not seem to be core. As you wrote that your engine is intended to be DX12, I will skip the graphics API discussion, although I think one should not pin to a single (or to such a platform-dependent) API these days, but that is your choice. What I think should be part of an engine today (besides the game-necessary features: input, audio, physics) is a good UI framework, in the first place, before thinking of any voxel terrain or whatever else people think they need. Working in the backend, I see different UI approaches, but the current trend goes toward web-based UIs.
So I decided to implement my UI framework based on HTML 5 and CSS 3, for several advantages. First, they are standards by the W3C, and everyone has access to those standards, so what works in a browser should also work in the engine. It is simple to write UIs, since they are HTML code, which allows rapid prototyping in a web browser. They are responsive (which was a tricky thing in NGUI and Unity 3D) and support SVG by default. The second thing I would implement is a good terrain and asset system. I have seen and experienced very often that small and indie games have problems getting artists and level designers. What about a system that procedurally generates assets and levels by certain rules and, at least for small and indie games, reduces the dependency on artists a little? Generated meshes will not look high end, but good enough to use. I talked some time ago to someone who works on a terrain generator AI in a AAA studio that is supposed to learn from level designers and users to make new maps based on certain criteria for one of their games. Sounds interesting and could help in development. I'm curious to read what other people think
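    The 2D-vs-3D math point can be made concrete with a quick operation count (my illustration, not from the post): a naive n×n matrix multiply costs n³ scalar multiplications, so the 3×3 matrices sufficient for 2D homogeneous transforms cost 27 per multiply versus 64 for the 4×4 matrices used in 3D.

    ```cpp
    #include <cassert>

    // Counts the scalar multiplications in a naive n x n matrix multiply:
    // one multiply per (row, col, k) triple, i.e. n^3 in total.
    int countMatMulOps(int n) {
        int mults = 0;
        for (int row = 0; row < n; ++row)
            for (int col = 0; col < n; ++col)
                for (int k = 0; k < n; ++k)
                    ++mults;  // c[row][col] += a[row][k] * b[k][col]
        return mults;
    }
    ```

    So purely on operation count, a 2D pipeline does roughly 27/64 of the work per transform concatenation; whether that matters in practice depends on how many of them you do per frame, as the post says.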
  5. Open World Terrain Generation process question

    Wrong place to start! You will need to spend time here, at least for managing your open world. An open world is just a huge terrain, like any other terrain in games, that is chunked and automatically loaded/unloaded when a player moves out of its current chunk, so it feels as if the world is endless, in contrast to level-based games. The secret is good loading/unloading management and a terrain that can be chunked. Maybe you need some chunking algorithm here that, for example, cuts through hills/mountains. Using Unreal is absolutely fine, but I do not know if it natively supports terrain streaming or if you need to do that work on your own. For making such a terrain, there are very few alternatives to handcrafting your map. You could use a terrain generator like World Machine or Terragen, but you will still need to do the level design work on your own. That is what drives development time: graphics and level design
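    The load/unload management around the player can be sketched as a simple grid: keep the chunks within some radius of the player's current chunk resident and drop the rest (a minimal sketch; the chunk size, radius, and function names are made-up parameters for illustration):

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <set>
    #include <utility>

    using ChunkCoord = std::pair<int, int>;

    // Divides the terrain into a square grid and returns which chunks
    // should be resident for a player at world position (x, z).
    std::set<ChunkCoord> chunksToLoad(float x, float z,
                                      float chunkSize, int loadRadius) {
        const int cx = static_cast<int>(std::floor(x / chunkSize));
        const int cz = static_cast<int>(std::floor(z / chunkSize));
        std::set<ChunkCoord> result;
        for (int dx = -loadRadius; dx <= loadRadius; ++dx)
            for (int dz = -loadRadius; dz <= loadRadius; ++dz)
                result.insert({cx + dx, cz + dz});
        return result;
    }
    ```

    Each frame (or whenever the player crosses a chunk border) you diff this set against the currently loaded set: chunks that appear get streamed in, chunks that disappear get unloaded, which is what gives the "endless world" feeling.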
  6. I personally use include statements in header files as little as possible and move anything that does not need to be visible from the outside into the implementation file, as already mentioned. This also includes platform-dependent headers like windows.h, as well as types/utilities that are only used internally in functions. But I have also had implementation-specific circular dependencies to solve, caused by object/manager relations where I want the manager class to have friend access to individual properties of the object, for setting a class-private API handle with external read-only access. Those were a few simple ones, like GraphicsContext <> Graphics and InputDevice <> Input relations (as already written, to set a class-private handle used by the API). Otherwise, except for those rare situations like giving some functions friend access to a class, I would always prevent circular dependencies. Another case for preferring forward declarations over header includes is template-heavy classes such as delegates. Declare a delegate

delegate<returnType (functionArgs)> myDelegate;

as a forward declaration only, and later specialize it so that the above signature works:

template<typename signature>
struct delegate;

// in a loop for any number of arguments
template<typename returnType, typename Argument>
struct delegate<returnType (Argument)>
{
    ...
};
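    A compilable version of that forward-declaration trick might look like this (a minimal sketch using a plain function pointer; the real delegate class surely stores more than this):

    ```cpp
    #include <cassert>

    // Primary template: declared but never defined, so only the
    // specializations below can actually be instantiated. A header can
    // get by with just this one line.
    template<typename Signature>
    struct delegate;

    // Partial specialization that makes delegate<R (Arg)> work.
    template<typename ReturnType, typename Argument>
    struct delegate<ReturnType (Argument)> {
        ReturnType (*fn)(Argument);
        ReturnType operator()(Argument arg) const { return fn(arg); }
    };

    int addOne(int x) { return x + 1; }
    ```

    With that in place, `delegate<int (int)> d{&addOne};` selects the partial specialization, and headers that merely mention `delegate<...>` by pointer or reference never need the full definition.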
  7. Deterministic gameplay also depends on the implementation of your engine. Even if your engine runs multithreaded, you might have systems like Unity 3D that do not allow your gameplay code to be executed on different threads (without some management-heavy synchronization tricks), while a well-tested and maintained task/job system will help your gameplay code speed up (depending on its complexity and dependencies) by splitting different sections into different tasks and spreading them across your system so they can run in parallel. This gains performance and behaves deterministically, but needs to be tuned for each scenario. Multithreading, as @Hodgman mentioned, has been part of game development since the day Unreal Engine 3 first released (and I think much earlier too). Any AAA game engine these days supports multiple threads to run those massive titles. Sure, a desktop CPU runs at a much faster clock, but on a console platform you do not have scheduling against other processes, so you can use every core but one exclusively (on PS4; I do not know about XB1) for your game, without being scheduled into a wait queue. That is what makes consoles "seem" fast. Game development these days is more or less planning what runs when and where, and how wait locks can be reduced depending on the dependencies between different tasks/jobs and systems, rather than a choice between being deterministic or being performant. Sure, single versus double floating-point precision is another topic to discuss, and I have had many bugs in my career (mostly related to AI) where floating-point precision led to minimal differences, but in the end this can even be utilized to make an AI look more natural in its behavior
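    The floating-point point is easy to demonstrate: addition is not associative in IEEE floating point, so the same values summed in a different order (exactly what happens when thread scheduling changes the order of accumulation) can give slightly different results. A tiny illustration (my example, not from the post):

    ```cpp
    #include <cassert>

    // Sums the same three doubles in two different orders. Rounding after
    // each intermediate step makes the results differ, which is what
    // breaks determinism when work is split across threads.
    bool sumsDiffer() {
        const double a = 0.1, b = 0.2, c = 0.3;
        const double leftToRight = (a + b) + c;
        const double rightToLeft = a + (b + c);
        return leftToRight != rightToLeft;
    }
    ```

    Deterministic lockstep engines therefore fix the reduction order (or use fixed-point math) regardless of which thread produced each partial result.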
  8. Recruiting team for this awesome game

    You should remove this meaningless screenshot and replace it with some more telling shots, or a short video, if you want to show your work to us, so one does not need to join your Discord to decide about joining the team. As it stands, your shot is more of a con than a pro: it shows a figure standing in a Unity scene, which is about as meaningful as a shot of the demo project included with Unity. You could also provide a little more information about your team: who is on it, and what experience do they have? What is your role in the project besides being the "idea guy"? You could provide more information about your game idea: what is its concept, and do you already have design documents, concept art, models, anything to show us that illustrates your intention and, for people considering joining, a possible road map?
  9. Android development is not as hard as it sounds and needs no (or at least very little) Java, because the NDK can handle calls into C++ code compiled for Unix platforms (as Android is built on the Linux kernel), and the same goes for input. I would not go for the APIs @swiftcoder mentioned, but rather utilize the HID capabilities of Windows/Unix. They are a lot faster, and you can handle device/platform-specific things like multitouch on your own. Even if that sounds like a lot more work, it isn't: there already exist wrapper libraries around this, or you can go for your own implementation in fewer than 7 header files plus one platform-specific cpp file per platform for a fully functional input system. I used this in my game engine/framework as the base input system (setting up event-driven manager classes on top to handle certain callback functions as 'Actions' to be executed). Note that you will need some platform-specific headers for this, and the Windows Driver SDK on Windows, since HID is a driver-level implementation there. There is a tutorial I consider the best I have found so far (even though it is not in English) for Windows, focusing on gamepad/controller input, but mouse/keyboard and touch displays work the same way very well. The extra work is worth it, trust me
  10. Data access layer design for games

    This depends on what you are trying to do. Going into UI development, there are several ways to do things "right", from coupling everything, to MVC, MVVM, or something that works like a server/client environment (your server does not know anything about the client, while your client knows the server), as opposed to gameplay data access or asset management. Game data access can be solved by a bunch of possibilities: using events, using global static or singleton static management classes, closed or semi-open systems, or using an ECS (entity component system) like Unity 3D does. In general, your game should be structured into systems, each with one single task to take care of. This may be rendering, AI, or game logic. Then you need those systems to expose as few interaction points as possible to other code, such as querying needed information about entities, physics events, or material states, always depending on your needs. The render system does not need to know current AI states, while the AI may tell the render system to play some animation. But your problem seems like a resource loading issue. These loading operations should be done by the resource manager and/or in cooperation with a scene graph, and you should only tell your resource manager to create an instance of your "Asset", the archer or whatever. The resource manager should then handle loading itself and potentially add the archer entity into a list of entities such as a scene graph (or let the entity do this itself in its constructor). This is the only "general" advice I can give you without an exact scenario of what you want to achieve and where/when you intend to access it
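    As a rough sketch of that split, the caller only asks for an asset by name, and the manager both loads it and registers the entity with the scene; all the names here (ResourceManager, Scene, instantiate) are mine, invented for illustration:

    ```cpp
    #include <cassert>
    #include <memory>
    #include <string>
    #include <vector>

    // A bare-bones entity; a real one would carry meshes, materials, etc.
    struct Entity {
        std::string assetName;
    };

    // Scene-graph stand-in: just a flat entity list here.
    struct Scene {
        std::vector<std::shared_ptr<Entity>> entities;
    };

    // The manager owns loading AND registration, so gameplay code never
    // touches file paths or the scene list directly.
    class ResourceManager {
    public:
        explicit ResourceManager(Scene& scene) : scene_(scene) {}

        std::shared_ptr<Entity> instantiate(const std::string& assetName) {
            // A real implementation would stream the asset from disk here.
            auto entity = std::make_shared<Entity>(Entity{assetName});
            scene_.entities.push_back(entity);  // register with the scene graph
            return entity;
        }

    private:
        Scene& scene_;
    };
    ```

    Gameplay code then just calls `mgr.instantiate("archer")` and works with the returned handle; where the data came from and who tracks it stays inside the manager.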
  11. Not to forget the rising cost of living for a single person. Rent increases from year to year, and living expenses, from food up to insurance, rise too, while people have less and less money left for their pensions. This is not a problem of game development only but of the whole industry: as costs rise in the private sector, costs will also rise on the human resources side. People are always anxious to keep their standard of living; giving each employee a salary increase of, say, $50 a year means, in a company like EA (with, let's assume, roughly 9,000 employees), a cost increase of $450,000 a year in this hypothetical example. These are not reliable numbers, but an aspect of the social component of game development costs
  12. In Visual Studio you could also use the "autoexp.dat" file at %VSINSTALLDIR%\Common7\Packages\Debugger\autoexp.dat for VS2010 and earlier (I do not know if this still works from 2012 upwards) and write something like this into the file:

;------------------------------------------------------------------------------
; Drough::Vector from <Vector.h>
;------------------------------------------------------------------------------
; vector is previewed with "[<size>](<elements>)".
; It has [size] and [capacity] children, followed by its elements.
; The other containers follow its example.
Drough::Vector<*>{
    preview (
        #(
            "[", $e.elementCount, "](",
            #array( expr: $[$i], size: $e.elementCount ),
            ")"
        )
    )
    children (
        #(
            #([size] : $e.elementCount),
            #([capacity] : $e.arraySize),
            #array( expr: $[$i], size: $e.elementCount )
        )
    )
}
  13. Optimization Asset storage

    I do not agree with this, as I built an RTTI system from scratch on my own a few weeks ago, and it took more time to read about how other people had done it than to roll my own. Yes, you will rely mostly on templates and some compiler tricks, depending on the complexity, but mostly it is dumb code-production work. I'm currently automating the process (given that I already have my own build tool written in C#), so I added some special comment parsing to it, behaving like C# attributes, because it is part of a framework and should be flexible to extend for new types.

//$[[Attribute][(Parameter)]] <-- will be parsed to a certain attribute with or without fields set
...
//$[Serializable]
class A
{
public:
    //$[Constructor]
    inline A() : myField()
    { }

    //$[Property(Set)]
    inline void MyField(MyType const& value) { myField = value; }
    //$[Property(Get)]
    inline MyType MyField() const { return myField; }

    //$[Function]
    inline void DoSomething(int N) { ... }

protected:
    //$[Field]
    //$[Protected]
    MyType myField;
};

will create a C++ header file like this:

template<>
class TypeInfo<A> : public Type
{
public:
    inline TypeInfo()
    {
        properties[0] = Property(&ReflectionContext<void (MyType const&)>::ConstClassContext<A, &A::MyField>,
                                 &ReflectionContext<MyType ()>::ConstClassContext<A, &A::MyField>,
                                 Meta::GetType<MyType>(), "MyField");

        //DoSomething
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
        doSomethingFunctionArgs[0] = Parameter("N", 0, Meta::GetType<int>());
        functions[0] = Function(&ReflectionContext<void (int)>::ClassContext<A, &A::DoSomething>,
                                Meta::GetType<void>(), 1, doSomethingFunctionArgs, "DoSomething");
#else
        doSomethingFunctionArgs[0] = Meta::GetType<void>();
        doSomethingFunctionArgs[1] = Meta::GetType<int>();
        functions[0] = Function(&ReflectionContext<void (int)>::ClassContext<A, &A::DoSomething>,
                                Meta::GetType<void>(), 2, doSomethingFunctionArgs, "DoSomething");
#endif

        //ctor~1
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
        constructors[0] = Function(&ReflectionContext<A* ()>::StaticContext<&Create1>,
                                   Meta::GetType<void*>(), 0, 0, 0);
#endif
        ...
    }
    inline ~TypeInfo()
    { }

    inline virtual uint32 TypeId() const { return Meta::GetTypeId<A>(); }
    inline virtual const char* Name() const { return Meta::GetTypeName<A>::Name(); }
    inline virtual const char* FullName() const { return Meta::GetTypeName<A>::FullName(); }
    inline virtual ArrayBuffer<Type*> Composites() { return ArrayBuffer<Type*>(0, 0); }
    inline virtual ArrayBuffer<Type*> Generics() { return ArrayBuffer<Type*>(0, 0); }
    inline virtual ArrayBuffer<Field> Fields() { return ArrayBuffer<Field>(1, fields); }
    inline virtual ArrayBuffer<Property> Properties() { return ArrayBuffer<Property>(1, properties); }
    inline virtual ArrayBuffer<Function> Functions() { return ArrayBuffer<Function>(1, functions); }
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    inline virtual ArrayBuffer<Function> Constructors() { return ArrayBuffer<Function>(1, constructors); }
#endif
    inline virtual uint32 Size() const { return sizeof(A); }
    inline virtual uint32 Flags() const { return (Type::SerializableFlag); }

    inline virtual void* Instantiate(IAllocator& allocator, ArrayBuffer<void*>& args) const { return Meta::Creator<A>::Instantiate(allocator, args); }
    inline virtual void* Instantiate(byte* buffer, ArrayBuffer<void*>& args) const { return Meta::Creator<A>::Instantiate(buffer, args); }

private:
    Field fields[1];
    Property properties[1];
    Function functions[1];
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    Parameter doSomethingFunctionArgs[2];
#else
    Type* doSomethingFunctionArgs[3];
#endif
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    Function constructors[1];

    inline static A* Create1() //proxy function because you cannot take a pointer to a constructor
    { return MainAllocator::Allocator().Allocate<A>(); }
#endif
};

I then include this as A.Meta.h at the bottom of my A.h header file. This is something for convenience rather than really a compiler trick. Everything else is a set of simple template functions for the RTTI information I want to have, including a type's name, its id, and a struct like the one above that contains its composition:

template <typename T>
struct GetTypeName
{
public:
    static inline const char* Name() { ... }
    static inline const char* FullName() { ... }
};

template <typename T>
inline uint32 GetTypeId() { return GetHash(GetTypeName<T>::FullName()); }

template<> inline const char* GetTypeName<void>::FullName() { return "void"; }
template<> inline const char* GetTypeName<void*>::FullName() { return "void"; }
template<> inline uint32 GetTypeId<void>() { return 0; }
template<> inline uint32 GetTypeId<void*>() { return 0; }

#define REFLECTABLE_TYPE_NAME(T, NAME) template<> inline const char* GetTypeName<T>::FullName() { return NAME; }
REFLECTABLE_TYPE_NAME(byte, "byte");
REFLECTABLE_TYPE_NAME(bool, "bool");
REFLECTABLE_TYPE_NAME(char, "char");
REFLECTABLE_TYPE_NAME(int8, "sbyte");
REFLECTABLE_TYPE_NAME(int16, "int16");
REFLECTABLE_TYPE_NAME(uint16, "uint16");
REFLECTABLE_TYPE_NAME(int32, "int32");
REFLECTABLE_TYPE_NAME(uint32, "uint32");
REFLECTABLE_TYPE_NAME(long, "long");
REFLECTABLE_TYPE_NAME(unsigned long, "ulong");
REFLECTABLE_TYPE_NAME(int64, "int64");
REFLECTABLE_TYPE_NAME(uint64, "uint64");
REFLECTABLE_TYPE_NAME(float, "float");
REFLECTABLE_TYPE_NAME(double, "double");
REFLECTABLE_TYPE_NAME(const char*, "cstring");

class Type
{
public:
    enum TypeFlags
    {
        SerializableFlag = 0x1,
    };

    virtual uint32 TypeId() const = 0;
    virtual const char* Name() const = 0;
    virtual const char* FullName() const = 0;
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    virtual ArrayBuffer<Type*> Composites() = 0;
    inline Type* Parent() const
    {
        if(Composites().Size() == 0) return 0;
        else return Composites()[0];
    };
#else
    inline virtual Type* Parent() const { return 0; }
#endif
    virtual ArrayBuffer<Type*> Generics() = 0;
    virtual ArrayBuffer<Field> Fields() = 0;
    virtual ArrayBuffer<Property> Properties() = 0;
    virtual ArrayBuffer<Function> Functions() = 0;
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    virtual ArrayBuffer<Function> Constructors() = 0;
#endif
    virtual uint32 Size() const = 0;
    virtual uint32 Flags() const = 0;

    inline bool IsSerializable() const { return HasFlag(Flags(), SerializableFlag); }

    inline bool operator ==(Type const& other) const { return (TypeId() == other.TypeId()); }
    inline bool operator !=(Type const& other) const { return (TypeId() != other.TypeId()); }

    virtual void* Instantiate(IAllocator& allocator, ArrayBuffer<void*>& args) const = 0;
    virtual void* Instantiate(byte* buffer, ArrayBuffer<void*>& args) const = 0;
};

template<typename T>
class TypeInfo : public Type
{
public:
    inline virtual uint32 TypeId() const { return Meta::GetTypeId<T>(); }
    inline virtual const char* Name() const { return Meta::GetTypeName<T>::Name(); }
    inline virtual const char* FullName() const { return Meta::GetTypeName<T>::FullName(); }
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    inline virtual ArrayBuffer<Type*> Composites() { return ArrayBuffer<Type*>(0, 0); }
#endif
    inline virtual ArrayBuffer<Type*> Generics() { return ArrayBuffer<Type*>(0, 0); }
    inline virtual ArrayBuffer<Field> Fields() { return ArrayBuffer<Field>(0, 0); }
    inline virtual ArrayBuffer<Property> Properties() { return ArrayBuffer<Property>(0, 0); }
    inline virtual ArrayBuffer<Function> Functions() { return ArrayBuffer<Function>(0, 0); }
#if defined(USE_EXTENDED_RTTI_INFOS) && USE_EXTENDED_RTTI_INFOS == 1
    inline virtual ArrayBuffer<Function> Constructors() { return ArrayBuffer<Function>(0, 0); }
#endif
    inline virtual uint32 Size() const { return sizeof(T); }
    inline virtual uint32 Flags() const { return 0; }

    inline virtual void* Instantiate(IAllocator& allocator, ArrayBuffer<void*>& args) const { return Meta::Creator<T>::Instantiate(allocator, args); }
    inline virtual void* Instantiate(byte* buffer, ArrayBuffer<void*>& args) const { return Meta::Creator<T>::Instantiate(buffer, args); }
};

template<> inline uint32 TypeInfo<void>::Size() const { return 0; }

template <typename T>
inline Type* GetType()
{
    static TypeInfo<T> type;
    return &type;
}

As you can see, A.Meta.h contains a specialization TypeInfo<A> of the TypeInfo template that makes the compiler choose it instead of the default one. I did this trick because I wanted to have RTTI information for EVERY possible type, without compiler errors on template parameters that do not yet have their own TypeInfo struct. The same is true for Name and FullName of the GetTypeName struct template. This utilizes the __FUNCSIG__ macro trick, which benefits from compile-time, compiler-dependent constant replacement with the complete function signature (a template function const char* GetName<T>() is then replaced with "char* const GetName< Namespace::TypeName >(void)"), so you can get the full name from your compiler without specializing the template. I nevertheless specialized it for often-used types, and will specialize it for types with a TypeInfo struct, in order to get a cross-platform-compatible type id (as I wrote above, the __FUNCSIG__ macro trick is compiler dependent and results in different signatures on different compilers) to be used in serialization. That was the compile-time constant stuff; now a small but important piece of runtime code. To support full deserialization, you will need a creator function capable of mapping a type id to the associated creator. My TypeConverter needs a little runtime initialization to fill a hash table with type id -> function pointer pairs.
An automated Serialize function could then look as follows:

void Serialize(IDataWriter& stream, Type const& type, void* instance)
{
    stream.Write((byte*)bigEndian_Cast(type.TypeId()), 4);

    const uint8 fieldCount = (uint8)type.Fields().Size();
    stream.Put(fieldCount);
    for(uint8 i = 0; i < fieldCount; i++)
    {
        const Type& fieldType = type.Fields()[i].GetType();
        if(fieldType.IsPrimitive())
        {
            //simply write a single-byte type code and the field value
        }
        else
        {
            const void* fieldValue = type.Fields()[i].Get(instance);
            if(fieldValue)
            {
                stream.Write((byte*)bigEndian_Cast(fieldType.TypeId()), 4);
                Serialize(stream, fieldType, type.Fields()[i].Get(instance));
            }
            else stream.Put(TypeCodes::Null);
        }
    }
}

Or you do the same as for the TypeInfo struct and specialize your Serialize function for each type you want to be serializable. Keep in mind to prepare your serializer for versioning; you might need to add some extra information (like a chunk id) to group fields, to prevent backwards incompatibility:

class A { int X; float Y; };           --> ATypeCode[4]X[4]Y[4]
class A { int X; float Y; double Z; }; --> ATypeCode[4]X[4]Y[4]Z[8]

Deserialize v1 data with v2 code --> Oops! 8 bytes missing
Deserialize v2 data with v1 code --> Oops! Found 8 bytes of a type that should not be there
  14. <Place something here about wood and trees>
  15. wglChoosePixelFormatARB should not be the problem, because I do it the same way and it works pretty well. Two options: check GetLastError() and glGetError() to be sure nothing happened in the background. Maybe your initialisation of the attributes is going a little bit wrong, which I could not verify yet. I do it this way for simplicity:

int32 attributes[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, profile.Version.Major,
    WGL_CONTEXT_MINOR_VERSION_ARB, profile.Version.Minor,
    WGL_CONTEXT_FLAGS_ARB, flags,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
HGLRC rc = CreateContextAttribsPtr((HDC)context.deviceHandle, 0, attributes);

But as I write this and look at my own code as a reference, I see a major mistake in yours:

if (!wglMakeCurrent(deviceContext, newContext))
{
    PushError_Internal("[Win32] Warning: Failed activating Modern OpenGL Rendering Context for version (%llu.%llu) and DC '%x') -> Fallback to legacy context.\n", videoSettings.majorVersion, videoSettings.minorVersion, deviceContext);
    wglDeleteContext(newContext);
    newContext = nullptr;
}
else
{
    // Destroy legacy rendering context
    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(legacyRenderingContext);
    activeRenderingContext = newContext;
}

Looking at this, unless I am overlooking something, you test whether making your newly created context current succeeds. That is fine, but on success you then clear every context by calling wglMakeCurrent(nullptr, nullptr) in order to delete the legacy one, and never set the newly created core context active again anywhere. If I am not wrong here, this results in no active context, and GLEW fails because every GL operation proceeds on a null context. You should call wglMakeCurrent again after you have received a valid context, above the line win32State.opengl.renderingContext in your code, and it should work