About Lifepower
  1. I have separate code paths for ARB_direct_state_access (either under OpenGL 4.5 or when exposed as an extension), EXT_direct_state_access and the "non-DSA" way, which simulates DSA by preserving the state. However, I mostly focus on maintaining the DSA versions, as I haven't found any OpenGL 3.3 hardware (the minimum for my framework) that doesn't support it. The issue I'm having on the Intel driver occurs with both the EXT_direct_state_access (exposed by Intel's drivers on Windows) and non-DSA approaches. Regarding the "glGetIntegerv" performance impact: I have macros to disable those calls along with "glGetError". However, in my own benchmarks, performed on 7 different AMD and Nvidia graphics cards each, and on Intel graphics cards ranging from OpenGL 3.3 to 4.5 support on Windows and Linux, I've found a negligible performance difference when massively updating buffers and issuing draw calls: less than 1/10th of 1%. If you know a specific combination of graphics card, OS and driver version where this does make a significant difference, I would definitely be interested in testing it. I've read somewhere that in OpenGL ES, especially on older devices with OpenGL ES 2, "glGet" calls may indeed hurt performance, but at least on desktop this doesn't seem to be an issue; and even if it were, the design of my framework requires not modifying OpenGL state unless that is part of the method's purpose (e.g. activate shader, bind texture, etc.), which is why I'm focusing on the DSA way almost exclusively.
  2. Thank you for the replies. I've tried installing an updated driver from Intel on a machine with Intel HD Graphics 4600, but the result was the same. I'll try to contact Intel dev support with a simple example reproducing the issue.
  3. As the title says, it seems that updating the contents of a UBO while it is currently bound to a slot does not work on Windows 10 + Intel HD Graphics. I can reproduce this issue on several machines with a clean, fully updated Windows 10 install (including the 1607 update) and the graphics driver that the OS installed by itself, each having either an Intel HD Graphics 4600 or an Intel HD Graphics 5300. My rendering schedule is basically the following:

     1. Activate the appropriate shader program.
     2. Attach UBOs to the appropriate slots of the shader program.
     3. Activate VAO for model 1.
     4. Update UBOs with the appropriate parameters (matrices, light parameters).
     5. Draw call for model 1.
     6. Activate VAO for model 2.
     7. Update UBOs with new parameters.
     8. Draw call for model 2.
     9. Repeat steps #6-8 for models 3, 4, ..., N-1, N (it is the same mesh, just using different data in the UBO).

     The above scheme works just fine on the set of Nvidia and AMD graphics cards that I've tried, on both Windows and Linux; it also works on Intel HD Graphics cards under Linux. However, it doesn't work on Intel HD Graphics under Windows 10 with the driver installed by the OS: the contents of the UBO don't seem to update in step #7 and keep the old data uploaded in step #4. I have different code paths for "EXT_direct_state_access", "ARB_direct_state_access" and a non-DSA approach, but the issue is exactly the same ("ARB_direct_state_access" is actually not exposed on Intel HD Graphics cards with Windows 10, so I'm not using it there). Basically, the non-DSA code that exhibits the issue on that configuration is:

     // VAO is activated before this.
     // (a) Create the UBO (this chunk of code is called at startup, it is not part of the rendering loop).
     glGenBuffers(1, &bufferHandle);
     glGetIntegerv(bufferTargetToBinding(bufferTarget), reinterpret_cast<GLint*>(&previousBinding)); // simulate DSA way
     glBindBuffer(bufferTarget, bufferHandle); // bufferTarget is GL_UNIFORM_BUFFER
     glBufferStorage(bufferTarget, bufferSize, nullptr, GL_MAP_WRITE_BIT);
     glBindBuffer(bufferTarget, previousBinding);

     // (b) Bind the UBO.
     glBindBufferRange(bufferTarget, bufferChannel, bufferHandle, bufferOffset, bufferSize); // bufferOffset is 0

     // (c) Update the UBO.
     glGetIntegerv(bufferTargetToBinding(bufferTarget), reinterpret_cast<GLint*>(&previousBinding)); // simulate DSA way
     glBindBuffer(bufferTarget, bufferHandle);
     mappedBits = glMapBufferRange(bufferTarget, mapOffset, mapSize, GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT); // mapOffset == 0, mapSize == bufferSize
     std::memcpy(mappedBits, data, mapSize);
     glUnmapBuffer(bufferTarget);
     glBindBuffer(bufferTarget, previousBinding);

     // (d) Draw call.
     glDrawArrays(topology, baseVertex, vertexCount);

     // (e) Repeat (c) and (d) for the other models.

     If I use "glBufferSubData" instead of "glMapBufferRange" to update the contents, the issue is less pronounced, but still present (some models seem to jump back and forth between the old and new positions specified in the UBO). Note that the issue occurs only on Windows 10 / Intel HD Graphics cards, not anywhere else. I have found two workarounds: one is to call "glFinish" right after "glDrawArrays", which seems to fix the problem; the other is to call "glBindBufferBase" to unbind the UBO before updating its contents, then bind it again:

     glBindBufferBase(bufferTarget, bufferChannel, 0); // unbind UBO
     // Update UBO contents here as in step (c) above.
     glBindBufferRange(bufferTarget, bufferChannel, bufferHandle, bufferOffset, bufferSize); // bind buffer back

     However, both of these workarounds impact the performance. I couldn't find anything in the GL spec mentioning that buffer objects need to be unbound, or "glFinish" called, before updating their contents. So my question is: is the issue I'm experiencing just a driver bug, or should buffer objects really be unbound before updating their contents? P.S. I'm using very similar code to update VBOs as well, and they exhibit the same issue on Intel HD Graphics cards with Windows 10, albeit to a lesser degree, simply because I don't update them as often.
  4. Lifepower

    Wicked Defense 2 re-released as freeware

    Similarly, we have released an earlier version of Wicked Defense 1 and re-released Aztlan Dreams. The latter is a turn-based puzzle strategy game with role-playing elements, which first appeared back in 2009. Below are some screenshots of Aztlan Dreams:
  5. Wicked Defense 2 was originally released under commercial terms back in 2009, as a sequel to Wicked Defense 1, which first appeared in 2007. Since it has been around 7 years since the release, and especially now that the Ixchel Studios web site has gone offline, we have decided to re-release the game as freeware. It is basically a tower-defense, real-time strategy 3D game, where you have to build and upgrade towers and cast instant spells to prevent large hordes of monsters from reaching their destination. It uses abstract, conceptual graphics and gameplay. The game's engine uses shaders, the CPU's vector instructions and geometry instancing to produce procedural 3D models in real time. It mainly does so using DX9/SM2 and a certain number of instances per batch, although we did have an experimental back-end running on DX8 with v1.1 shaders. In some scenarios (e.g. "Rebellion"), there could at times be over a million triangles per frame, e.g. when the whole group of "Weaver" monsters gets cloned and then sub-divides. The engine also features volumetric visual effects such as lightning, rays, beams and particle showers. Some screenshots are shown below. For more screenshots and the actual game installer, you can visit the game's official web site at: http://asphyre.net/games/wd2
  6. If you consider your units as having their origin at the top-left corner, you can precalculate, for each cell on the map, a value indicating the biggest unit that can be placed at that location (with its top-left corner there), e.g. (1,1), (2,2), etc., taking any nearby walls into account. This needs to be calculated only once per map. After that, you run normal A* or any similar path-finding algorithm, checking your current unit's size against the size stored in each cell to determine whether that cell is an obstacle (so a 2x2 unit can't go on a cell with a 1x1 value in it). The final path will then be an array of top-left corner positions that lead your unit from its current location to the desired one. If you have a dynamic world where units also occupy locations, you can extend the approach by storing in each cell a reference to the unit occupying it, so you can apply this check in your path-finding algorithm as well. Does this make sense?
  7. Lifepower

    Tools for iOS/Android Apps

    You can also develop native iOS applications using FreePascal. The upcoming Mobile Studio from Embarcadero will also allow you to create native iOS and Android applications; it is essentially Delphi for iOS/Android.
  8. Lifepower

    Linear color space

    You are wrong. sRGB is a standardization applied to the RGB color space, and it is defined by three primaries in the CIE XYZ color space. The transformation between linear and non-linear color spaces is an entirely different topic. I've already said this before. [quote name='Hodgman' timestamp='1347898336' post='4980926']What errors or misleading statements are there in the Microsoft and nVidia links that you've accused?[/quote] I've already said this in my earlier posts. The error is to mix gamma correction concepts together with the RGB and sRGB color spaces, implying that at one point or another, when you "convert" or "transform" (or a similar term) from one to the other, you need to do gamma correction, or that at some point gamma correction is applied. My suggested correction is that these are separate topics, that the introduction of the sRGB texture format is poorly founded, and that the sRGB color space name is misused. Just because you think/decide/believe I'm wrong, it doesn't make you right. It just makes you superficial. I was not referring to actual thread moderation, but rather to the impression that you feel you are right because you are a moderator. Perhaps I'm wrong and maybe there are other reasons why you think you are automatically right. You have said that I'm wrong and failed to give any reasonable evidence to support your points, other than referring to popular belief, your own belief, and mixing my phrases with new words, among other things. I wouldn't mind if you only posted your own points, but copying my text and then adding your own additions with the purpose of misguiding the discussion is just uncool. I think you just don't like being seen as wrong on the forums you moderate. Why don't you follow your own advice? On that note, I might suggest that you don't limit your reading to [s]Facebook[/s]Wikipedia only. P.S. You might want to read some earlier versions of the Wikipedia sRGB entry.
The end result is that when sRGB is viewed on a CRT, the viewed gamma appears as 2.2, but again, this is a CRT/display issue, not a property of the space itself. Coincidence and consequence are two different things. Just because gamma is mentioned, it doesn't mean (non-S)RGB has a different gamma. In fact, I think mentioning gamma in an sRGB discussion is not relevant.
  9. Lifepower

    Linear color space

    This is argumentum ad populum. The articles I've mentioned in my post are verified and have passed scientific review (by several council members), while you provide your opinions backed up by your own words, some material on the Internet and popular belief. So you are saying that the companies you have randomly selected and mentioned have something to do with the decision-making regarding the misleading usage of the sRGB term? With this, you automatically decide that I'm wrong and you are right? Yes, it's an appeal to authority. You are a moderator, so you are always right, and if there is something you don't like, you would rather attack the person (the ad hominem fallacy) than provide sound arguments in a discussion. While we're here: I don't consider myself an authority, and there are many things in the world that I don't know or understand, and I'm humble about it. Yes, my master's and doctoral theses were on practical applications of color theory in mobile systems, and I have published 12 council-reviewed scientific works on different color spaces and their applications, which is why I have something to say about it. I could always be mistaken, as could the people who review and judge my work, but while I try to base my points on proven facts, you try to prove something by use of Wikipedia, popular folklore and your moderator badge. Please, I know your intentions in answering the OP's question were good; I just tried to clarify things, as something that made its way into an SDK is not necessarily correct. You don't have to defend something blindly just because I've pointed out a misconception in a Microsoft manual.
  10. Lifepower

    Linear color space

    No, my accusations against sRGB in DirectX/OpenGL are based on the fact that the conversion between RGB and sRGB is framed in terms of gamma correction, while in reality RGB and sRGB may actually be the same thing. In any case, you cannot convert between the two using gamma correction, so this SDK article, for instance, is misleading. Have you read the article yourself? The article you provided can be used as an exercise in finding logical fallacies; begging the question and the fallacy of composition are among the first ones visible. So, you've repeated what I've said, then added the phrase "as an intermediate step" (to what, by the way?), and now you are saying that: Nonsense. sRGB is just a color space, nothing more. It's not "gamma correction of 2.2", let alone "piecewise transform with linear toe at [gibberish]". Proof by verbosity is a logical fallacy (but you already know that), so please don't do that. Those are two separate arguments. Yes, sRGB is a standard and popular color space. But starting from "display performs sRGB gamma correction" is just a senseless manipulation of words.
  11. Lifepower

    Linear color space

    Just wanted to clarify this one. For some reason, they have used "sRGB" to denote "linear color space" in DirectX and OpenGL, which are two separate things. Indeed, you can convert from linear to non-linear color spaces and vice versa using gamma correction. The RGB color space by itself lacks any standard or definition, so sRGB was proposed as a standard, defined by specifying a white point and three chromaticities. There is also, for instance, Wide Gamut RGB, Adobe RGB and so on. Now, to convert from one color space to another where the color gamut is different, you would need to convert your initial color space to CIE XYZ using a linear transformation, and then to the desired color space. This is why it is simply wrong to call sRGB "linear" and non-sRGB "non-linear" and do the conversion between them using gamma correction. In reality, both typical RGB and sRGB may or may not be linear. In fact, you can typically assume that your RGB color space is actually linear. You don't need to voluntarily apply any gamma correction there. Since it lacks a standard definition, you can simply assume that when you work with RGB, you are working in sRGB, or in Adobe RGB, whatever your choice is. To properly standardize your color space, you would need to convert it to one of the (supposedly) perceptually uniform color spaces such as CIELAB, CIELUV, DIN99, ATD95 or CIECAM, or at least CIE XYZ, which can represent all colors visible to the human eye, unlike RGB, which is limited to a triangle in the CIE diagram.

    Now, the problem is that most LCD displays apply a huge gamma correction to the input image. Not only that, they may also pre-process images and oversaturate them too. Why? To sell better, since higher contrast and crisper images appear prettier, but in the end you receive a very distorted image. This is not your problem; it is a problem for the displays' manufacturers and vendors! You simply can't make an application that will predict all of the monitors out there, so it's their responsibility to generate the final image as accurately as possible. I don't know why they introduced "sRGB" into DirectX and OpenGL; after all, supposedly, you are already working in sRGB, and it's the display's job to properly represent the input sRGB data so that the output strictly conforms to sRGB, or any other standard. If you do gamma correction in your application, you still don't know how the display is going to re-transform your image data, so in the end you may actually get less accurate results. My guess is that they introduced so-called "sRGB" in the APIs just for the hype of it, e.g. "We can now store textures and the front buffer in gamma-adjusted format! WOW!" (as if we couldn't do that back in 1969). You may check some of the following bibliography to learn more about different color spaces (you can see by the dates that this is a well-studied topic, yet it seems that the people making changes to the DirectX/OpenGL standards regarding sRGB have never read them):

     1. Poynton, Charles. Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann, 2003.
     2. Poynton, Charles. "Frequently-Asked Questions about Color." http://www.poynton.com/ColorFAQ.html
     3. Hill, Francis S. Computer Graphics Using OpenGL. Prentice Hall, 2000.
     4. Hearn, Donald, and Pauline M. Baker. Computer Graphics, C Version. Prentice Hall, 1996.
     5. Luo, Ronnier M., Guihua Cui, and Changjun Li. "Uniform Colour Spaces Based on CIECAM02 Colour Appearance Model." Color Research & Application (Wiley InterScience) 31, no. 4 (June 2006): 320-330.
     6. Lindbloom, Bruce J. "Accurate Color Reproduction for Computer Graphics Applications." Computer Graphics 23, no. 3 (July 1989): 117-126.
     7. Brewer, C. A. "Color Use Guidelines for Data Representation." Proceedings of the Section on Statistical Graphics. Alexandria, VA: American Statistical Association, 1999. 55-60.
     8. MacAdam, David L. "Visual Sensitivities to Color Differences in Daylight." Journal of the Optical Society of America 32, no. 5 (May 1942): 247-273.
     9. Schanda, Janos. Colorimetry: Understanding the CIE System. Wiley Interscience, 2007.
     10. Pratt, William K. Digital Image Processing. 3rd Edition. Wiley-Interscience, 2001.
     11. Jack, Keith. Video Demystified: A Handbook for the Digital Engineer. 5th Edition. Fremont, CA: Newnes, 2007.
  12. Lifepower

    Num lock

    I always have Num Lock off by default and configure the BIOS to have it off at startup. I have never used the numpad to type actual numbers; since the DOS days I have been using the PgUp/PgDn/Home/End/Insert keys while working with text documents and code. For some reason, I find it uncomfortable to move my hand vertically to reach the alternative six keys (Insert/Delete/etc.) that sit above the arrow keys. As for numbers, I can type them quickly using the main keys below F1-F12. If I try to type numbers on the numpad, I find it difficult to do so without looking at the actual keys (although I have no problem doing so using one finger on a cellphone). It reminds me of very old keyboard layouts where the F1-F12 keys were on the left, all grouped together in a 2x6 matrix, which also felt quite awkward.
  13. Lifepower

    Best Fast Food

    In Mexico, it's definitely Tacos al Pastor, or locally here, the chicken with the special sauce at Don Pollo (though technically that place is a restaurant, not fast food). We also have McDonald's, KFC, Burger King and a few others, but the tiny burgers they sell, at 10% of the pictured size, are unappealing. I've also gotten food poisoning on multiple occasions: from burgers at Burger King, salads at KFC, and once from McDonald's food (probably due to poorly decontaminated vegetables). P.S. Chipotle is a fast food chain? I thought it was a spicy sauce made with dried peppers...
  14. Lifepower

    Laptop vs Desktop

    You should do a battery calibration, as your power gauge could be off. Recharge your battery to the maximum level, then disable any mechanisms that would shut down your laptop when the battery is low (in Windows Vista/7 you need to use some console commands or change the registry). Then leave the system on until it shuts down with the battery fully drained. This is called a deep discharge, and it should recalibrate your power gauge (although it may take several of these cycles). Note that although this will recalibrate your power gauge, it will also reduce your battery's life span. This is why you should discharge your battery from time to time (but not below 10%) to make sure the power gauge functions properly. Also note that some laptops will not allow you to do a deep discharge: the BIOS in my Dell Precision M4500 will shut down the system at approximately 3% of charge. I agree. As I've said before, I have several laptops with batteries aged over 5 years that still hold more than 90% of their charge. However, I think it also depends on the manufacturer.
  15. Lifepower

    Laptop vs Desktop

    Never had this problem. My old Dell Latitude D830 came with a 9-cell battery, and I purchased a second 6-cell battery for the CD bay; both have been with me since 2007. I've used this laptop throughout the years at 100% of charge, always leaving the batteries inside. Every month or so I do a deep discharge (but only down to 10%). The 9-cell battery has 9% wear, and the 6-cell battery has 1% wear (maybe because it's lithium-polymer). Both batteries are now 5 years old. By the way, this laptop also has the supposedly defective Nvidia chip (Quadro NVS 140), which after 5 years has not failed yet. Similar story with my Eee PC 1000HE: the battery is a couple of years old and only has about 9% wear. Neither laptop shows anomalies in its battery gauge, since I've been doing regular discharges (but not below 10%). My guess is that there are other factors involved, like quality, charging scheme, usage patterns and so on. My argument for keeping the battery at 100% is that whenever you use the laptop, it will always have maximum charge (e.g. during power outages and while traveling). On the other hand, the battery of my older Dell Latitude C810 died within a week, and the battery of my new Samsung 14'' laptop had 15% of wear right after purchase.