About _necrophilissimo_

  1. Greetings again,

     Thanks to the helpful pointers from this wonderful community, I've now successfully ported all my DirectInput calls to Raw Input. However, one minor headache remains: the CapsLock and NumLock keys.

     Ideally, when a person hits CapsLock or NumLock while in the game (i.e. as keys bound to player actions), this wouldn't actually toggle the CapsLock/NumLock/ScrollLock state. This is simply because, after a session in the game, if the key press isn't intercepted, the user will have absolutely no idea whether CapsLock is on or not when he/she/it returns to Windows.

     In DirectInput this was easy: just set the keyboard to DISCL_NONEXCLUSIVE|DISCL_FOREGROUND and, there, no more CapsLock presses were passed on to the system. Whatever the user's preference was before starting the game remained in effect, regardless of what keys were hammered during the play session.

     So, in other words: is there a way to block these keys from toggling the Caps/Num/ScrollLock states whilst in-game, without disabling the keys altogether?

     As always, thank you in advance!
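One approach that fits the Raw Input setup described above, sketched here as an assumption rather than a definitive answer: a low-level keyboard hook can swallow the lock keys while the game has focus, and Raw Input still receives the key data through WM_INPUT because it is delivered independently of the hook chain. `g_gameHasFocus` is a hypothetical flag assumed to be maintained elsewhere (e.g. from WM_SETFOCUS/WM_KILLFOCUS).

```cpp
#include <windows.h>

// Hypothetical flag, set elsewhere: true while the game window has focus.
extern bool g_gameHasFocus;

static HHOOK g_lockKeyHook = NULL;

// Low-level keyboard hook: eat the lock keys so their toggle state never
// changes while the game is in the foreground. Raw Input (WM_INPUT) still
// sees the keys, so in-game bindings keep working.
static LRESULT CALLBACK LockKeyHook(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && g_gameHasFocus)
    {
        const KBDLLHOOKSTRUCT* kb = (const KBDLLHOOKSTRUCT*)lParam;
        if (kb->vkCode == VK_CAPITAL ||   // CapsLock
            kb->vkCode == VK_NUMLOCK ||   // NumLock
            kb->vkCode == VK_SCROLL)      // ScrollLock
            return 1;                     // swallow: no system-wide toggle
    }
    return CallNextHookEx(g_lockKeyHook, code, wParam, lParam);
}

void InstallLockKeyFilter(HINSTANCE instance)
{
    g_lockKeyHook = SetWindowsHookEx(WH_KEYBOARD_LL, LockKeyHook, instance, 0);
}

void RemoveLockKeyFilter(void)
{
    if (g_lockKeyHook) { UnhookWindowsHookEx(g_lockKeyHook); g_lockKeyHook = NULL; }
}
```

Because the hook only filters while `g_gameHasFocus` is true, the keys behave normally the moment the user alt-tabs away.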
  2. _necrophilissimo_

    DirectInput vs Windows API input

    Awesomes, thank you so much for the replies - I knew there had to be a more sensible way of doing things :D
  3. Cheers,

     While waiting for a new computer to arrive, I used the break from development duties to browse MSDN for development updates (it's been YEARS since I last did so). It seems Microsoft all but abandoned DirectInput a long while ago. So, a question:

     If DirectInput is kicked to the curb in future iterations, is there a way to use the Windows API to A) track relative mouse movement and B) restrict the mouse to the app window without resorting to a global ClipCursor trap?

     I know I could do a horrible cluster-frakk of a work-around using global hooks, forced cursor re-positioning and the like, but that seems like an awful lot of effort for something this simple - especially since the main motive for MS to move stuff over to the Windows API has been, according to them, streamlining the codebase.

     Sorry if this question seems silly, but I just refuse to believe this task has gone from a simple few lines of code into a colossal API-hacking nightmare - there just has to be something I'm missing.

     Thank you for your patience with silly questions! :D
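For reference, a minimal sketch of how the two requests above are usually handled with Raw Input (function names are my own; error handling trimmed): relative deltas arrive via WM_INPUT, and ClipCursor, applied on focus gain and released on focus loss, stays well-behaved rather than becoming a global trap.

```cpp
#include <windows.h>

// Register this window for raw mouse input (relative deltas via WM_INPUT).
bool RegisterRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid;
    rid.usUsagePage = 0x01;  // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;  // HID_USAGE_GENERIC_MOUSE
    rid.dwFlags     = 0;     // deliver only while the app is in the foreground
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// Hypothetical slice of the window procedure handling both concerns.
LRESULT HandleInput(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_INPUT:
    {
        RAWINPUT raw;
        UINT size = sizeof(raw);
        if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                            sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
            raw.header.dwType == RIM_TYPEMOUSE)
        {
            LONG dx = raw.data.mouse.lLastX;  // relative delta, DirectInput-style
            LONG dy = raw.data.mouse.lLastY;
            // ...feed dx/dy to the game here...
        }
        break;
    }
    case WM_SETFOCUS:
    {
        RECT rc;
        GetClientRect(hwnd, &rc);
        MapWindowPoints(hwnd, NULL, (POINT*)&rc, 2);  // client -> screen
        ClipCursor(&rc);   // confine only while we hold focus
        break;
    }
    case WM_KILLFOCUS:
        ClipCursor(NULL);  // release immediately on focus loss
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```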
  4. _necrophilissimo_

    Windows program priority

    Thank you so much, just what I needed; I was under the misconception that this worked only for threads, not for the whole process.
  5. _necrophilissimo_

    Windows program priority

    Sometimes a program that used to have high CPU usage, but is now idling, prevents my current game build from getting the CPU processing power it needs. Ironically, this doesn't bother the run-time so much, but the loading times become atrocious. While I feel this is not an issue with my program but with the other program in question (specifically, the Opera internet browser) (*), I'd love to know whether there is a way to tell Windows that my program needs all the juice it can get for the time being - you know, something like what the thread priorities do within the program itself, only for the program as a whole among the Windows processes. Any help is greatly appreciated.

    (*) I deduced this from the fact that the instant I force Opera shut from Task Manager and restart my game, the issue is gone.
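The process-wide analogue of thread priorities asked about above is SetPriorityClass; a minimal sketch (function names my own):

```cpp
#include <windows.h>

// Ask Windows to favour this whole process, e.g. while loading assets.
void BoostForLoading(void)
{
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
}

// Drop back to normal once the heavy lifting is done, so the rest of
// the system (browser included) gets its fair share again.
void RestoreNormalPriority(void)
{
    SetPriorityClass(GetCurrentProcess(), NORMAL_PRIORITY_CLASS);
}
```

Leaving a game at HIGH_PRIORITY_CLASS permanently can starve other processes, so restoring the class after the loading burst is usually the polite design.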
  6. _necrophilissimo_

    The final word on stencil buffer and FBOs

    Thank you, just what I wanted to know
  7. I've been reading all sorts of Internet articles about getting a stencil buffer to work within Frame Buffer Objects, and what I've encountered is a massive amount of conflicting information. Some articles say it doesn't work, others say it does but requires the use of a depth buffer as well - and a couple mention potential hacks using multiple FBOs. So, what exactly is the case with stencil buffers and FBOs? What I really need is an FBO with just color and stencil buffers - adding a depth buffer I do not use results in a serious framerate dip. Is there a special multi-FBO work-around for this, or is this a lost cause? Any help is appreciated.
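For context, the conflicting articles mentioned above generally converge on this: a stencil-only attachment is widely unsupported by drivers, but a combined depth+stencil format via GL_EXT_packed_depth_stencil works broadly. A sketch, assuming GLEW (or similar) has loaded GL_EXT_framebuffer_object and GL_EXT_packed_depth_stencil:

```cpp
#include <GL/glew.h>

// Build an FBO with a color texture and a stencil buffer. The stencil comes
// packed together with depth (GL_DEPTH24_STENCIL8_EXT), attached to BOTH
// attachment points - a stencil-only renderbuffer is typically rejected.
GLuint CreateColorStencilFBO(GLsizei w, GLsizei h, GLuint colorTex)
{
    GLuint fbo, ds;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, colorTex, 0);

    glGenRenderbuffersEXT(1, &ds);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, ds);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, w, h);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, ds);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, ds);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT)
    { /* handle incomplete FBO */ }

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    return fbo;
}
```

Disabling depth writes and tests (glDepthMask(GL_FALSE), glDisable(GL_DEPTH_TEST)) while rendering into this FBO may recover some of the framerate lost to the unwanted depth buffer.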
  8. Cheers, I've run into a small issue on one machine with two video cards. The second card is used mainly to run web solutions simultaneously with the high-end stuff running on the main NVidia card. For some reason, no matter how I initialize the OpenGL window, it uses the low-end video card and its drivers. I assume this is because that card is OpenGL-compatible as well and was actually installed prior to the NVidia card, thus having its drivers show up in the registry first. I know this is mainly a user/OS-side driver/config issue, but I'd love to create a user-friendly way around it: simply allow the user to choose the display driver from the ones available if the default doesn't suit his/her/its needs. So, is there a way to force an OpenGL Windows application to choose a different video card / driver from the one Windows defaults it to? I didn't manage to find any articles / topics about this, but feel free to point me to one if this has already been discussed elsewhere.
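One way to build the user-facing picker described above, offered as a sketch rather than a guaranteed fix: enumerate the display adapters with EnumDisplayDevices and let the user choose. My understanding is that on multi-adapter setups of this era, the OpenGL ICD is selected by the adapter driving the monitor the window is created on, so placing the window accordingly steers the choice.

```cpp
#include <windows.h>
#include <cstdio>

// List the display adapters Windows knows about so the user can pick one.
// The ANSI variants are used explicitly so printf("%s") is safe.
void ListDisplayAdapters(void)
{
    DISPLAY_DEVICEA dd;
    dd.cb = sizeof(dd);
    for (DWORD i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); ++i)
    {
        printf("%lu: %s (%s)%s\n",
               i, dd.DeviceString, dd.DeviceName,
               (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE)
                   ? " [primary]" : "");
        dd.cb = sizeof(dd);  // must be reset before each call
    }
}
```

The chosen adapter's DeviceName (e.g. "\\.\DISPLAY2") identifies which monitor rectangle to create the game window in.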
  9. _necrophilissimo_

    Resizing the texture in memory

    Thanks, I'll look into all the options :)

    Quote: Original post by swiftcoder
    "To be honest, that is some truly ancient hardware you are targeting - the GeForce 2 MX had support for GL_ARB_texture_rectangle, as does my old Radeon 7000 mobility. Anything a couple of revisions newer than those two should support GL_ARB_texture_non_power_of_two as well..."

    Well, yeah, I'm aiming at the very low end. Judging by the feedback I've been getting, surprisingly many people have faced issues regarding the matter. The biggest offenders seem to be the integrated Intel Graphics chips. Apparently those things can run Vista Premium but can't handle rectangle OpenGL textures :-/
  10. _necrophilissimo_

    Resizing the texture in memory

    Quote: Original post by swiftcoder
    "There isn't one. You can emulate this by allocating a new texture with the new size, and then using some form of render-to-texture (preferably FBO) to render the old texture into the new one with appropriate scaling."

    Thanks for the tip - I already considered that, but I can't go down that path. The whole purpose of this endeavour of mine is to find a work-around for some video cards not accepting the GL_TEXTURE_RECTANGLE extension. The idea is to allow people using such cards to downscale the textures to power-of-two dimensions. Render-to-texture doesn't work if the video card can't render the texture in the first place, not to mention that using an FBO would force me to rely on even more extensions those users' video cards can't handle. So, is there any other option (other than re-scaling the texture files or re-writing the actual image file reading routines)?
  11. I'm sitting here wondering if it is possible to resize a texture based on image data already loaded. Code-wise, the resizing would ideally take place after glGenTextures and glBindTexture but before glTexImage2D. I've been scanning through the OpenGL references but I can't seem to find anything - either there isn't such a function or my eyes are getting too sore :-/
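One candidate for the slot described above, sketched under the assumption the loaded data is RGBA bytes: gluScaleImage rescales pixels entirely on the CPU, so no extra extensions are involved - the result can then go straight into an ordinary glTexImage2D call.

```cpp
#include <GL/glu.h>
#include <cstdlib>

// Largest power of two <= n (we downscale, per the original plan).
static int PrevPow2(int n)
{
    int p = 1;
    while (p * 2 <= n) p *= 2;
    return p;
}

// Rescale already-loaded RGBA pixels to power-of-two dimensions in system
// memory. Caller frees the returned buffer after glTexImage2D.
unsigned char* ScaleToPow2(const unsigned char* src, int w, int h,
                           int* outW, int* outH)
{
    *outW = PrevPow2(w);
    *outH = PrevPow2(h);
    unsigned char* dst =
        (unsigned char*)std::malloc((size_t)(*outW) * (*outH) * 4);
    gluScaleImage(GL_RGBA, w, h, GL_UNSIGNED_BYTE, src,
                  *outW, *outH, GL_UNSIGNED_BYTE, dst);
    return dst;
}
```

The filtering quality is box/linear-ish rather than fancy, but for a compatibility fallback on low-end chips that is usually acceptable.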
  12. _necrophilissimo_

    Keeping Windows 'busy'

    Found it, thanks for pointers :D
  13. _necrophilissimo_

    Keeping Windows 'busy'

    What is the proper way of keeping Windows 'busy' during the game loop? You know, in a way that prevents Windows from going idle and thus activating screensavers, search indexers, scheduled scans and so on. I currently just prevent the screensaver from kicking in, but that doesn't affect the idle status of the OS. So, any ideas? Any help is greatly appreciated.

    PS: Sorry if this topic has already been covered; I didn't find it while searching.
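The API the "found it" reply above most likely refers to is SetThreadExecutionState, which tells Windows the system and display are actively in use; a minimal sketch (function names my own):

```cpp
#include <windows.h>

// Declare the system and display "required": holds off the screensaver
// and idle-triggered power management while the game runs.
void EnterBusyMode(void)
{
    SetThreadExecutionState(ES_CONTINUOUS |
                            ES_SYSTEM_REQUIRED |
                            ES_DISPLAY_REQUIRED);
}

// Clear the continuous requirements on shutdown so normal idle
// behaviour (screensaver, sleep) resumes.
void LeaveBusyMode(void)
{
    SetThreadExecutionState(ES_CONTINUOUS);
}
```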
  14. A quickie question: is it possible to put the message pump into a separate thread? The usual method seems to be putting the message pump in the main program and the game in threads, but in my prototype's case it would be convenient to do things the other way around. However, just moving the message pump into a thread didn't seem to work, so is there some detail I should be aware of? Thanks for the help :D
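The detail that usually bites here: a window's messages are queued to the thread that created it, so moving only the GetMessage loop to a new thread leaves that thread's queue empty. The pump thread must also create the window. A sketch (class name "MyGameClass" is a placeholder; registration and error handling omitted):

```cpp
#include <windows.h>

// Runs on a worker thread: create the window HERE so this thread owns
// its message queue, then pump until WM_QUIT.
DWORD WINAPI PumpThread(LPVOID /*param*/)
{
    HWND hwnd = CreateWindowEx(0, TEXT("MyGameClass"), TEXT("Game"),
                               WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, CW_USEDEFAULT, 800, 600,
                               NULL, NULL, GetModuleHandle(NULL), NULL);
    ShowWindow(hwnd, SW_SHOW);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}

// Main program: spawn the pump thread, then run the game loop on the
// main thread - the "other way around" arrangement asked about above.
// CreateThread(NULL, 0, PumpThread, NULL, 0, NULL);
```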
  15. Cheers. Recently I've been banging my head against the wall in search of an alternative to GL_TEXTURE_RECTANGLE_ARB. Is there any OTHER way to render textures with non-power-of-two dimensions? The problem is that GL_TEXTURE_RECTANGLE_ARB isn't supported by most integrated graphics chips (i.e. Intel Graphics), resulting in my pet project being incompatible with those systems. Converting the texture files to power-of-two dimensions isn't an option in my case, as the additional graphics data would make the memory requirements of my game go through the roof. So, any ideas? Thanks for giving it a thought :)
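One classic fallback worth sketching here: pad each image into a power-of-two canvas at load time and render only the used sub-rectangle by scaling the texture coordinates. Only the padding border costs extra memory (at worst just under 4x, and packing several images into one canvas claws most of it back), which may or may not fit the memory budget above; the arithmetic is the whole trick:

```cpp
#include <cstdio>

// Smallest power of two >= n.
static int NextPow2(int n)
{
    int p = 1;
    while (p < n) p *= 2;
    return p;
}

// For a w*h image, compute the padded canvas size and the texture
// coordinates of the used sub-rectangle. A quad textured with
// (0,0)-(maxU,maxV) samples only the valid texels, so plain
// GL_TEXTURE_2D works even without GL_TEXTURE_RECTANGLE support.
static void PaddedTexcoords(int w, int h, int* potW, int* potH,
                            float* maxU, float* maxV)
{
    *potW = NextPow2(w);
    *potH = NextPow2(h);
    *maxU = (float)w / (float)*potW;
    *maxV = (float)h / (float)*potH;
}
```

For example, a 640x480 image lands in a 1024x512 canvas and is drawn with texture coordinates up to (0.625, 0.9375).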