  1. /*
      * GPU Details structure
      * NOTE: Subject to change
      */
     [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Size = 136), Serializable]
     public struct GPUDETAILS
     {
         [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]
         public string DeviceDesc;
         public UInt32 DeviceID;
         public UInt32 VendorID;
     }

     Fixed it, works perfectly. Thanks. Shogun
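     As a cross-check on the Size = 136 above, the unmanaged side can assert the same layout at compile time. A small sketch (the mirror struct name is mine; the field types come from the C++ declaration quoted later in this list):

```cpp
#include <cassert>
#include <cstdint>

// Mirror of the driver-side GPUDETAILS (CHAR[128] + two DWORDs); the
// static_assert confirms the 136-byte size the managed struct declares.
struct GPUDETAILS_mirror
{
    char     DeviceDesc[128];
    uint32_t DeviceID;   // DWORD
    uint32_t VendorID;   // DWORD
};

static_assert(sizeof(GPUDETAILS_mirror) == 136,
              "managed Size = 136 no longer matches the unmanaged layout");
```

     If the driver struct ever grows a field, the assert fires at build time instead of the P/Invoke call corrupting memory at runtime.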
  2. I'm writing a GUI-based GPU tool, and I'm using C# and WPF since that will make my life easier bringing this app to the Win10 store. IMHO, it's surprisingly hard to find a good series of tutorials that teach you more than how to add some buttons to a window, input text, etc. My app won't need much of that; what I really need is to create a GUI similar to that of the Task Manager. What I mean is tabs to switch between graphs and tables, etc. I also can't find any code on how to add a graph (histogram) to my app, and so on. I've googled quite a bit and found out that some tutorials are behind a friggin' pay wall?! Ugh. Not to complain, just hoping to find something so I can get this thing working acceptably before Wednesday. Thanks, Shogun
  3. I'm writing a GPU tool with C# for the UI and command-line .exes that the user interacts with, and writing the necessary driver code to make it work in C++ (via .dll). So far, loading an unmanaged .dll written in C++ is trivial and easy, but there's one part that confuses me (mostly because I am not a C# expert, yet). How do you handle structures as parameters? My code crashes when I try to use a structure as a parameter. Should I use an IntPtr instead and just cast it? I'll show you a bit of code to show you what I mean:

     C++:

     typedef struct _GPUDETAILS
     {
         CHAR  DeviceDesc[128];
         DWORD DeviceID;
         DWORD VendorID;
     } GPUDETAILS;

     ...

     GPUMONEXDRIVERNVAPI_API int Drv_GetGpuDetails( int AdapterNumber, GPUDETAILS* pGpuDetails )
     {
         _LOG( __FUNCTION__ << "(): TODO: Implement...\n" );

         if( !pGpuDetails )
         {
             _ERROR( "Invalid parameter!" << std::endl );
             return 0;
         }

         _LOG( __FUNCTION__ << "(): Gathering GPU details...\n" );

         strcpy( pGpuDetails->DeviceDesc, "NVIDIA Something..." );
         pGpuDetails->DeviceID = 0xFFFF;  /* TODO */
         pGpuDetails->VendorID = 0x10DE;  /* This is always a given */

         return 1;
     }

     Something simple for now. Let's move on to the C# part...

     namespace GPUMonEx
     {
         /*
          * GPU Details structure
          * NOTE: Subject to change
          */
         public struct GPUDETAILS
         {
             public string DeviceDesc;
             public UInt32 DeviceID;
             public UInt32 VendorID;
         }

         /*
          * Driver importer classes for the following APIs under Windows
          * TODO: Get ahold of Intel's SDK as well as implement AMD's equivalent for their hardware.
          *
          * NVAPI  - NVIDIA driver-specific functionality
          * D3DKMT - Direct3D internal driver functions. Should work for all GPUs, but currently needed for Intel.
          */
         static class DrvD3DKMT
         {
             [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
             public static extern int Drv_Initialize();
             [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
             public static extern void Drv_Uninitialize();
             [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
             public static extern unsafe int Drv_GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails);
             [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
             public static extern int Drv_GetOverallGpuLoad();
             [DllImport("GPUMonEx.Driver.D3DKMT.dll")]
             public static extern int Drv_GetGpuTemperature();
         }

         static class DrvNVAPI
         {
             [DllImport("GPUMonEx.Driver.NVAPI.dll")]
             public static extern int Drv_Initialize();
             [DllImport("GPUMonEx.Driver.NVAPI.dll")]
             public static extern void Drv_Uninitialize();
             [DllImport("GPUMonEx.Driver.NVAPI.dll")]
             public static extern unsafe int Drv_GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails);
             [DllImport("GPUMonEx.Driver.NVAPI.dll")]
             public static extern int Drv_GetOverallGpuLoad();
             [DllImport("GPUMonEx.Driver.NVAPI.dll")]
             public static extern int Drv_GetGpuTemperature();
         }

         /*
          * GPU driver interfacing classes (the ones you actually call in user mode)
          */
         public abstract class GPUDriverBase
         {
             public abstract int Initialize();
             public abstract void Uninitialize();
             public abstract int GetGpuDetails( int Adapter, ref GPUDETAILS pGpuDetails );
             public abstract int GetOverallGpuLoad();
             public abstract int GetGpuTemperature();
         }

         public class GPUDriverD3DKMT : GPUDriverBase
         {
             public override int Initialize() { return DrvD3DKMT.Drv_Initialize(); }
             public override void Uninitialize() { DrvD3DKMT.Drv_Uninitialize(); }
             public override int GetGpuDetails( int Adapter, ref GPUDETAILS pGpuDetails ) { return DrvD3DKMT.Drv_GetGpuDetails( Adapter, ref pGpuDetails ); }
             public override int GetOverallGpuLoad() { return DrvD3DKMT.Drv_GetOverallGpuLoad(); }
             public override int GetGpuTemperature() { return DrvD3DKMT.Drv_GetGpuTemperature(); }
         }

         public class GPUDriverNVAPI : GPUDriverBase
         {
             public override int Initialize() { return DrvNVAPI.Drv_Initialize(); }
             public override void Uninitialize() { DrvNVAPI.Drv_Uninitialize(); }
             public override int GetGpuDetails(int Adapter, ref GPUDETAILS pGpuDetails) { return DrvNVAPI.Drv_GetGpuDetails(Adapter, ref pGpuDetails); }
             public override int GetOverallGpuLoad() { return DrvNVAPI.Drv_GetOverallGpuLoad(); }
             public override int GetGpuTemperature() { return DrvNVAPI.Drv_GetGpuTemperature(); }
         }
     }

     So, focusing on Drv_GetGpuDetails(), how do I actually get a valid structure filled in here? Calling that function just crashes. I'm sure it's a stupid easy fix, but once again, I'm far too C++ oriented and have yet to get used to C# in the same manner. Any advice is welcome (on the question at hand or anything else). Shogun
  4. Understanding OpenGL 3+ profiles

    Quite simple. When you create a core OpenGL profile (3.0 and beyond), you are essentially requesting the OpenGL equivalent of D3D10+ functionality (depending on which profile you use). To be more specific, certain core OpenGL extensions were introduced with certain core OpenGL updates. Usually, the higher the profile, the more functionality you have at your disposal. Of course, some things are vendor specific, just as with legacy OpenGL, and you still have to query extension support. For example, say you want to use GL_NV_command_list: you will need core OpenGL 4.5, as NVIDIA added support for this extension with OpenGL 4.5 and later. If you need a better explanation, take a look at the history chart of each OpenGL update, as well as which extensions each profile is supposed to support. Usually, the ARB or Khronos will approve an extension before making it an official part of the spec; vendor-specific ones are added at each vendor's own discretion. Now, as for your question on creating a core OpenGL context, I don't remember off hand how to do it. I personally use SDL 2.0 and its API to select the core OpenGL profile I want, because my engine has to be cross-platform (plus I use GLEW for simplicity with extensions, which you can still use with core OpenGL). A lot of tutorials use SDL or GLFW for simplicity, but that doesn't explain how a core context is created.
    But if you want a Windows-specific example that uses wglCreateContextAttribsARB, take a look at the Khronos example here: Specifically, let's focus on this part:

    bool CGLRenderer::CreateGLContext(CDC* pDC)
    {
        PIXELFORMATDESCRIPTOR pfd;
        memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
        pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
        pfd.nVersion = 1;
        pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 32;
        pfd.iLayerType = PFD_MAIN_PLANE;

        int nPixelFormat = ChoosePixelFormat(pDC->m_hDC, &pfd);
        if (nPixelFormat == 0)
            return false;

        BOOL bResult = SetPixelFormat(pDC->m_hDC, nPixelFormat, &pfd);
        if (!bResult)
            return false;

        HGLRC tempContext = wglCreateContext(pDC->m_hDC);
        wglMakeCurrent(pDC->m_hDC, tempContext);

        GLenum err = glewInit();
        if (GLEW_OK != err)
        {
            AfxMessageBox(_T("GLEW is not initialized!"));
        }

        int attribs[] =
        {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 1,
            WGL_CONTEXT_FLAGS_ARB, 0,
            0
        };

        if (wglewIsSupported("WGL_ARB_create_context") == 1)
        {
            m_hrc = wglCreateContextAttribsARB(pDC->m_hDC, 0, attribs);
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(tempContext);
            wglMakeCurrent(pDC->m_hDC, m_hrc);
        }
        else
        {
            // It's not possible to make a GL 3.x context. Use the old-style context (GL 2.1 and before).
            m_hrc = tempContext;
        }

        // Checking the GL version
        const GLubyte* GLVersionString = glGetString(GL_VERSION);

        // Or better yet, use the GL3 way to get the version number
        int OpenGLVersion[2];
        glGetIntegerv(GL_MAJOR_VERSION, &OpenGLVersion[0]);
        glGetIntegerv(GL_MINOR_VERSION, &OpenGLVersion[1]);

        if (!m_hrc)
            return false;

        return true;
    }

    You'll still want to create your legacy rendering context first, but then you fill out an attribute list that tells the driver which core context you want, pass it into wglCreateContextAttribsARB(), and go from there. That article explains the initialization part well enough, I guess, as well as how to render primitives using the proper methods.
Hope that helps. Shogun
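    The attribute list is the part worth staring at. A minimal sketch of one that requests a 3.3 core profile (the constants normally come from <wglext.h>; they are reproduced inline here so the snippet stands alone, so verify them against your own headers). Note the Khronos sample above only requests 3.1 and sets no profile mask; a core profile additionally needs WGL_CONTEXT_PROFILE_MASK_ARB from WGL_ARB_create_context_profile:

```cpp
#include <cassert>

// Values normally supplied by <wglext.h>; reproduced here so the sketch is
// self-contained.
#define WGL_CONTEXT_MAJOR_VERSION_ARB    0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB    0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB     0x9126
#define WGL_CONTEXT_CORE_PROFILE_BIT_ARB 0x00000001

// Zero-terminated key/value list requesting an OpenGL 3.3 core profile;
// this is what you would pass as the third argument to
// wglCreateContextAttribsARB().
static const int kCoreAttribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0 // terminator
};
```

    Everything else in the Khronos listing (dummy legacy context, glewInit, swap to the new context) stays the same; only the list changes.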
  5. Grown out of playing games

    Yeah, the first Panzer Dragoon was great and had really high replay value. Sadly, I can't say the same about Crimson Dragon. I was quite disappointed, as the controls were quite frustrating and it doesn't live up to the hype, but of course that's my opinion. Panzer Dragoon Orta definitely played better, if you ask me. But instead of taking my word for it, I'd rather you play it for yourself, because you might actually enjoy it. Shogun
  6. Grown out of playing games

    If you're still in the dev space, then I still recommend playing a few modern games here and there. That way you can keep up with the standards, get a few ideas of your own, and stay up to date on your competition. As much as I hated doing video game testing, I learned a lot about raising my own standards for writing a good/better game. Even though this era of gaming doesn't interest me nearly as much as the timeline marked by the launch of the Atari 2600 and the end of the original Xbox, I still find it necessary to see and experience what other companies are up to. Shogun EDIT: Also, like Mr. Hodgman, my tastes have changed since 10+ years ago. My steady decline in FPS games, and my rising interest in story-based games like Syberia and Broken Sword as well as bullet-hell and rail shooters such as Panzer Dragoon, account for my recent lack of interest in this era of gaming, as those types of games are made by a select few. I also want my 2D beat 'em up games like Streets of Rage back!
  7. OpenGL API and structures

    Why OpenGL 1.x does not contain structures is not a question I have a direct answer for, but my assumption is that it was designed that way so driver developers could keep the most complex stuff internal and private to their driver code. In case you don't know this already, let's take a brief look at why OpenGL came about in the first place. Before OpenGL was even thought of, there was an API called PHIGS back in the 1980s. From what I've read, the main issue with PHIGS was that it ultimately didn't give developers what they needed in many instances. So SGI created IRIS GL, which eventually became the basis for OpenGL in January 1992. Unlike PHIGS, OpenGL had a simplified state machine and supported an "immediate mode" rendering component. AFAIK, simplicity was the overall goal: a standard that graphics hardware could support, in software or hardware, all across the board and with as few setbacks as possible. Prior to what we have today, programming graphics hardware was quite a task, and all sorts of structs were everywhere. I mean, if you take a look at how NVIDIA's graphics registers were laid out and accessed back in the 1990s (hello NV1 and Riva 128), you'll see that each channel is just a series of structs. OpenGL was meant to simplify graphics programming greatly. Keep in mind that it was not originally designed for games, but for CAD, 3D simulations, and so forth. Not that it really matters though... This is just my two cents. If I'm wrong about any of this, someone feel free to correct me. Shogun
  8. How to stay motivated?

    Wow, I didn't realize that this thread was still active! In case you are all wondering: have things gotten better since then? Actually no, things have gotten MUCH worse. Do I feel the same? Honestly, no. In fact, I feel more motivated now. I've quit binge drinking too! Right now I am living with mom and pops for a while, since I can't afford my place anymore and I'm still broke as a joke. But at least I have a part-time job doing game testing for Win10/Xbox game certification. I'll likely be doing this until I can get something better. Since I'm far away from the job now, I have to commute about 4-5 hours a day by bus, but I scraped up enough money to get a used Surface Pro to work on my game's UWP port for Dream.Build.Play. I'm going to submit my game to this contest before the end of this year and hopefully win some money or exposure. You just have to:
    1. Stop complaining. There's always someone that has it worse and deals with a greater set of challenges than I do. I mean really, I have Microsoft contacts, id@xbox access, the business card of a Sony publisher, and more. Plus, Josh said to stop.
    2. Keep on keeping on. Leverage your advantages; build smart solutions to overcome your disadvantages.
    3. Stop drinking! Killing your brain cells and trashing your liver isn't going to help.
    So even though I've had no breakthroughs and things have gotten far worse, I feel more motivated. Ever hear the saying "I'm sick and tired of being sick and tired"? Well, I'm sick and tired of saying that I'm sick and tired of being sick and tired. That's enough, let's just move forward! Shogun
  9. Not sure how many of you would care about this, but today is Windows Developer Day, and Microsoft has been running a live stream. So far, it's been pretty interesting, for both games and non-game apps. It's mostly about UWP (which everyone seems to hate), but I'm taking advantage of UWP for my game. One bit of good news is that (IIRC) Microsoft will allow UWP games to access the full GPU and other resources. Curious what you all think of this, as well as whether you share the opinion that Microsoft is practically *begging* developers to support UWP at this point. Shogun
  10. Wow, didn't realize I had more responses to the thread... Anyway, I fixed it "forrealzees" this time. Using the roxlu portable nanosecond timer in place of my millisecond one, then converting the numerator from 1000 milliseconds to the appropriate number (1000*1000*1000), it appears to work fine this time. Even without vsync, it ran nicely at 120+ fps. It was a combination of a low-resolution timer and my own spawning code that was causing some entities to spawn yet rapidly disappear! Since it happens in the blink of an eye, it was a rather hard bug to catch until today. So far, no more spawning issues! Now to try it on my desktop Mac and PC, as well as mobile devices. If only I had one. All of my monitors are 60Hz only. Shogun
  11. Yes, now I am finding the flaws as they surface. Sometimes, after coming out of the background or a suspended state, the FPS calculation will spew a really high number and cause the game to move rapidly for one second, then go back to normal. This will result in death many times for the user. So yes, I dun f@#%ed up even more. The entire game loop is too large and is a complete mess (I'll never code a game this way ever again). The delta_speed variable is a percentage that is multiplied against the entity's speed value so that it moves at an adjusted speed based on frame rates. I am not accumulating time, as I did not plan this thing ahead or even consider the need for time-based movement when I originally wrote it. Then, when primitive counts started reaching the millions, frame rates dropped and I realized "I dun screwed up". The loop is updated further down; I forgot to add that. If millisecond timing is a bad design choice, then I will do away with it pronto. I wasn't aware of the poor accuracy, and if the margin of error is that great, then I'll most definitely stop using it. I wrote that half-arsed timing function out of laziness. Speaking of high-resolution timers, I'll need one that's portable to all three major OSes. Which I did find here:

     /* ----------------------------------------------------------------------- */
     /*
        Easy embeddable cross-platform high resolution timer function. For each
        platform we select the high resolution timer. You can call the 'ns()'
        function in your file after embedding this.
     */
     #include <stdint.h>

     #if defined(__linux)
     #  define HAVE_POSIX_TIMER
     #  include <time.h>
     #  ifdef CLOCK_MONOTONIC
     #    define CLOCKID CLOCK_MONOTONIC
     #  else
     #    define CLOCKID CLOCK_REALTIME
     #  endif
     #elif defined(__APPLE__)
     #  define HAVE_MACH_TIMER
     #  include <mach/mach_time.h>
     #elif defined(_WIN32)
     #  define WIN32_LEAN_AND_MEAN
     #  include <windows.h>
     #endif

     static uint64_t ns()
     {
         static uint64_t is_init = 0;

     #if defined(__APPLE__)
         static mach_timebase_info_data_t info;
         if (0 == is_init) {
             mach_timebase_info(&info);
             is_init = 1;
         }
         uint64_t now;
         now = mach_absolute_time();
         now *= info.numer;
         now /= info.denom;
         return now;
     #elif defined(__linux)
         static struct timespec linux_rate;
         if (0 == is_init) {
             clock_getres(CLOCKID, &linux_rate);
             is_init = 1;
         }
         uint64_t now;
         struct timespec spec;
         clock_gettime(CLOCKID, &spec);
         /* Integer math here; the original went through a double
            (spec.tv_sec * 1.0e9), which loses nanosecond precision once the
            tick count gets large. */
         now = (uint64_t)spec.tv_sec * 1000000000ULL + (uint64_t)spec.tv_nsec;
         return now;
     #elif defined(_WIN32)
         static LARGE_INTEGER win_frequency;
         if (0 == is_init) {
             QueryPerformanceFrequency(&win_frequency);
             is_init = 1;
         }
         LARGE_INTEGER now;
         QueryPerformanceCounter(&now);
         return (uint64_t)((1e9 * now.QuadPart) / win_frequency.QuadPart);
     #endif
     }
     /* ----------------------------------------------------------------------- */

     Since this game is cross-platform, it has to work on everything. If nanoseconds are the way to go, then I'll use that instead. And yes, using the frame rate isn't really a reliable way to do this (it blew up in my face). I found that using a fixed value gives me consistent results; a fixed delta doesn't generate any issues for me. Shogun
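     To tie the nanosecond timer back to the suspend/resume spike described above: the usual fix is to clamp the per-frame delta, so one giant gap can never advance the simulation by more than a chosen step. A small sketch (the function name and clamp value are my own, not from the game's code):

```cpp
#include <cassert>
#include <cstdint>

// Convert two nanosecond timestamps (e.g. successive ns() samples) into a
// step in seconds, clamped to max_step so a huge gap -- such as the app
// coming back from a suspended state -- cannot make entities jump.
static double delta_seconds(uint64_t prev_ns, uint64_t now_ns, double max_step)
{
    if (now_ns < prev_ns)              // timer anomaly: treat as no time passed
        return 0.0;
    double dt = (double)(now_ns - prev_ns) / 1.0e9;  // ns -> seconds
    return (dt > max_step) ? max_step : dt;          // clamp suspend gaps
}
```

     With a 0.1 s clamp, a normal 16 ms frame passes through unchanged, while a 5-second suspend gap comes back as just 0.1 s of simulated time.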
  12. my personal future in game programing

    Curious what country you are in. Spain? Eastern Europe? I guess I shouldn't have assumed you live in the same country I do, 'Murica. Sorry, it's a bad habit. I can't say I recommend you come here if you are living elsewhere (I sometimes wish I could be elsewhere). I was told that Sweden has more jobs and that it would be easier to find work there. If you have friends in Sweden, try reaching out to them and see if you can at least use them as a reference. Every leg up helps. Shogun
  13. my personal future in game programing

    First of all, why do you want to contract/freelance? If you plan to do game dev for a living, then I strongly recommend against this route. I have been a contractor for years, and I've grown sick and tired of it (I can't get a full-time position anywhere here to save my life). You would be better off getting a full-time position at a game dev company (maybe a well-managed startup) and establishing yourself that way. If this is going to be something part-time and not your main source of income, then that's fine. Second, where do you live? Location is often key to your success, not only in gamedev but in software engineering as a whole. If you are in Silicon Valley or another area with good tech jobs, then you should try going to development events and meeting people. In this industry, it's not all about what you know, but WHO you know. Knowing someone behind the scenes can be the difference between landing your dream position in the long term and being the contractor who is unemployed for months on end between contracts. If necessary, you may even have to take some game testing position(s) to get your foot in the door. Testing sucks, but that's how I got started. If you aren't in a tech-oriented area, then you might want to move; this is what I did too. Lastly, if you want to get started right away, try one of the freelancing sites. A co-worker of mine recommends one, and he gets work from time to time there. If you do, work on building your profile, take some time to make a good one, and you may get some really neat projects to work on. So by all means, give it a go. There's an occasional game project there too, but make sure that you are working with competent people. Shogun EDIT: If you do come to a tech-oriented city for a job, I highly recommend avoiding Seattle. People are fleeing California and other places for the jobs here, and you will likely have an extremely hard time getting anything. I have 7 years of experience and have been out of a proper dev job for over 8 months, and there's no end to this BS in sight. At this point I want to leave, and maybe go to Portland instead.
  14. Windows Mouse Handling Sucks

    Wow, I did not know that. Heh, learn something new every day! Shogun
  15. Windows Mouse Handling Sucks

    In my opinion, of course... My game not only needs to know if the mouse is currently outside the game's window, but also the position of the cursor relative to the window as it moves outside. MacOS and Linux give me this by default, but on Windows I had quite a bit of trouble finding a reliable way to do it. This isn't the first time I've had issues dealing with Windows' mouse handling; it's been a pain in my arse across multiple projects.

    #if defined(_WIN32) && !defined(_WINRT)
        /* Because Windows mouse handling sucks in general compared to Mac/Linux, a little
           extra work is required to find out if the cursor is outside of the game's window,
           to prevent cheating. In my experience, the reliability of GetCursorPos and
           ScreenToClient varies from one comp/OS to another, so a better idea is to check
           every frame and use the resulting coordinates only if they tell us we are off
           screen. No disrespect, Microsoft, but why do you insist on making such trivial
           things a pain in my behind sometimes? */
        vec2_t<float> bounds;
        This->get_bounds( bounds );

        POINT pt;
        GetCursorPos( &pt );
        ScreenToClient( GetActiveWindow(), &pt );

        if( pt.x < 0 || pt.x > bounds.v[0] || pt.y < 0 || pt.y > bounds.v[1] )
        {
            This->mouse_move_func( pt.x, pt.y );
        }
    #endif

    So yeah, I can't believe it took me that long to get this stupid thing figured out. Anyone else had the same problem before? Anyone come up with a better solution? I'm just glad it works... Shogun
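    For what it's worth, the off-window test itself is pure arithmetic, so it can be pulled out and sanity-checked in isolation, away from GetCursorPos and window handles. A sketch with hypothetical names (not the engine's actual types):

```cpp
#include <cassert>

// Hypothetical stand-in for the engine's bounds type: client-area size.
struct bounds2 { float w, h; };

// Returns true when a client-space cursor position lies outside the window's
// client area -- the same comparison as the check in the post above.
static bool cursor_outside(float px, float py, bounds2 b)
{
    return px < 0.0f || px > b.w || py < 0.0f || py > b.h;
}
```

    Keeping the comparison in one small function makes it trivial to unit-test the boundary cases (negative coordinates, exactly on the edge, past the far edge) without a live window.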