Matias Goldberg

Members
  • Content count: 1666
  • Joined
  • Last visited

Community Reputation

  9572 Excellent

About Matias Goldberg

  • Rank: Contributor

  1. Oh, btw, on loops: if your loop condition relies on floating-point equality, there can be issues. For example:

         for( float x = 0; x != 0.3f; x += 0.1f ) { }

     may spin forever due to precision issues (0.1 can't be represented exactly, so x never compares equal to 0.3). A safer integer-counter version is sketched below.
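     A minimal sketch of the usual workaround (my illustration, not from the original post): drive the loop with an integer counter and derive the float from it, so the loop condition never compares floats.

         // Count iterations with an integer; the float is derived per step,
         // so accumulated rounding error can never affect loop termination.
         const int numSteps = 3;               // covers x = 0.0, 0.1, 0.2
         for( int i = 0; i < numSteps; ++i )
         {
             float x = i * 0.1f;
             // ... use x ...
         }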
  2. I'm not sure what you mean by "bad values to an intrinsic function". As for your loops, if they look like these:

         for( int i = 0; i < 4; ++i ) { }

         // Or this:
         #define LOOP_COUNT 4
         for( int i = 0; i < LOOP_COUNT; ++i ) { }

     it's fine. But if they look like this:

         uniform int myConstValue;
         for( int i = 0; i < myConstValue; ++i ) { }

     then the value you pass to myConstValue is potentially dangerous (you'd better never send an insanely huge value).
     Normally yes, but lots of things can happen that make it report the wrong enum (driver bugs, or the GPU actually hung while switching); see the device-removed-reason sketch below.
     Both. Multithreading is hard to get right. You said you properly put a mutex around the immediate context... but do you really put the mutex around every single usage of the immediate context? Is it also possible the mutex is malfunctioning (i.e. being unlocked from a thread that never locked it)? Additionally, a driver may assume the immediate context is used from a single thread and begin reading its data from a worker thread while you're actually still writing to it from a secondary thread. Technically, this would be a driver bug. It may even be fixed by now, but your user could be running an old driver.
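     For context, a minimal sketch (my illustration; the helper name is hypothetical, though ID3D11Device::GetDeviceRemovedReason and the DXGI error codes are the real API) of querying why the device was lost. As noted above, driver bugs or a hung GPU can still make the reported enum misleading.

         #include <d3d11.h>

         // Hypothetical helper: ask D3D11 why the device was removed.
         void reportDeviceRemovedReason( ID3D11Device *device )
         {
             const HRESULT reason = device->GetDeviceRemovedReason();
             switch( reason )
             {
             case DXGI_ERROR_DEVICE_HUNG:           break; // bad commands, e.g. an infinite shader loop
             case DXGI_ERROR_DEVICE_RESET:          break; // TDR kicked in; try recreating the device
             case DXGI_ERROR_DRIVER_INTERNAL_ERROR: break; // driver bug; suggest a driver update
             case DXGI_ERROR_DEVICE_REMOVED:        break; // GPU switched away or physically removed
             default:                               break;
             }
         }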
  3. Usually this problem happens because you have:
       1. Corrupted memory or a similar memory error, e.g. setting a dangling pointer as a texture SRV.
       2. A shader being used with uninitialized data (const buffer, tex buffer, vertex buffer, etc.).
       3. An infinite loop inside a shader, be it vertex, compute, or pixel. This is often caused by uninitialized variables, or by variables with very large values or NaNs.
       4. Some very obscure API usage the debug layer didn't catch.
     However, you ruled out most of these (except points #1 & #4). #1 is debugged the same way you debug any kind of memory corruption (either use a third-party tool or override malloc and hook in your own sanitizer).
     Other causes for these issues are:
       • Out-of-date drivers. Seriously, this happens very often. Ask for the driver version; if it's very old, ask them to update their drivers. GPU problems that magically go away after updating drivers happen more often than you think.
       • Overclocked / overheating systems. Simple games will often let everything in the GPU run at 100%, something AAA games often don't achieve (because there's usually a huge bottleneck somewhere). It would explain why using VSync helps with the problem.
       • Switchable graphics. Some notebooks may have an Intel + NVIDIA GPU combination (or Intel + AMD, but that's less common), and for some reason the system may have decided to switch GPUs (e.g. battery, thermal throttling).
       • Monitor issues. The user literally detached / unplugged the monitor the active GPU was rendering to. More common on laptops and Win 10 tablets.
       • Third-party applications. Apps like MSI Afterburner, Plays.tv, and Mumble hook themselves into D3D11 to intercept the game's calls and either capture video or render overlays on top of it. They can also cause problems. Having a dump of all active processes when the game hung the GPU is a good way to rule this out: if you find a common third-party app among a large percentage of these users, ask them to turn it off.
     Errr, unless you're really, really good at multithreading, you shouldn't be making D3D API calls from other threads. It's asking for a lot of problems. This also explains why VSync diminishes the problem, since you're likely in a race condition and now the access patterns have changed. IIRC accessing the immediate context from two threads is not allowed, even if protected by a mutex. Update: it's allowed, but you're still asking for trouble, and your synchronization around the context had better be absolutely perfect (a sketch of what that means follows below).
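     A minimal sketch (my illustration; the function and mutex names are hypothetical) of what "a mutex around every single usage of the immediate context" means in practice: one mutex, shared by every thread, held for the full duration of every call into the context.

         #include <cstring>
         #include <mutex>
         #include <d3d11.h>

         std::mutex g_immediateContextMutex;   // ONE mutex shared by all users of the context

         // Every access to the immediate context, on every thread, goes through a lock like this.
         void updateDynamicBuffer( ID3D11DeviceContext *immediateContext, ID3D11Buffer *buffer,
                                   const void *data, size_t sizeBytes )
         {
             std::lock_guard<std::mutex> lock( g_immediateContextMutex );

             D3D11_MAPPED_SUBRESOURCE mapped;
             if( SUCCEEDED( immediateContext->Map( buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped ) ) )
             {
                 memcpy( mapped.pData, data, sizeBytes );
                 immediateContext->Unmap( buffer, 0 );
             }
         }

     Even with this in place, the driver-side assumptions described above can still bite, which is why keeping all immediate-context work on a single thread is the safer design.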
  4. When you push to your repo, one of three things should happen (assuming you rebooted your computer):
       1. The tool you're using to push to GitHub asks for your password.
       2. The tool you're using to push to GitHub asks for the passphrase that decrypts your SSH key.
       3. You're not asked anything. That means the tool either has your GitHub password stored in plain text somewhere on your hard drive, or the SSH key is stored unencrypted somewhere on your drive; which I dislike, because anyone with access to your computer (i.e. someone steals your PC, breaks into your home, or infects your system with a virus/trojan) could steal your password and/or SSH key.
     If you're in option 3, I'd advise looking up (online) how to configure your tool so that it doesn't save the password, or so that your SSH key is encrypted. If security is what concerns you, your tool should ask you for the password every time you reboot your PC.
  5. Depends on what you consider "password protected" when it comes to public repos. GitHub uses passwords to prevent unauthorized users from modifying your repository (e.g. writing to it, making changes, pushing to it). However, anyone can see everything you pushed to your public repos, download it (via clone or pull), and fork your repository, all without needing a password. They can also make their own changes and upload those changes to their own forks.
  6. You're still confusing things.
     In a lockstep environment, the server receives client inputs and must apply them in the order of their frames. Anything else will cause a desync. This means the server can't advance the simulation until it has everyone's input for a frame, and that is why lockstep doesn't scale to many users.
     In a prediction-based, server-authoritative network model (aka Quake-style multiplayer), client inputs can be applied in any order. But typically, for responsiveness, you'll want to apply them in the order they're received (inputs aren't frame-numbered, but packets are still sequenced) and discard inputs belonging to past packets. For example, if you receive packet 0, packet 2 and packet 1, in that order, then packet 1 should be ignored (unless you receive all those packets at the same time, in which case you sort them first and apply them in order). A sketch of this sequencing rule follows below. This potentially means that if the user hit a button for one frame and its packet gets lost or reordered, the server will never see that button press. But that's rarely an issue, because:
       • In a UDP model, most packets actually arrive just fine most of the time.
       • The user isn't fast enough to press a button for just 16.66 ms.
       • Button presses that need to be held down (like firing a weapon in a shooter, or moving forward) aren't a problem.
       • Worst case, you can repeat the "button pressed" message in several packets, and the server applies a small cooldown to avoid acting on the button push twice; or, instead of a cooldown, the message is sent as "I hit this important button 2 frames ago" and the server checks its record to see whether it already handled it. If it didn't, it does so now. Alternatively, worst case, the user will simply push the button again.
     To put it bluntly, a client-server Quake-style model is like a mother and her child. The child has a toy gun, but the toy only makes a sound when the mother pushes a button on a remote control in her hand. The kid fires his toy gun and nothing happens; then suddenly, 5 seconds later, the toy gun begins making sound. The child says "Why, mom!?!? I pressed this button 5 seconds ago! Why is it only reacting now!?" And the mother replies: BECAUSE I SAY SO. Client/server models are the same: the client says what it wants, but the server ends up doing what it wants. (Have you ever played a shooter where you're clearly shooting at an enemy but he doesn't die? And suddenly you're dead???)
     Now, the internet is unreliable, but it isn't that unreliable. It's not chaos. Normally most packets arrive, and they arrive in order; when they don't, it's hard to notice (either because nothing relevant was happening, or because the differences between what the client said it wanted and what the server ended up doing are hard to spot), and this is further masked via client-side prediction (i.e. the weapon-firing animation begins when the client pushes the button, so it looks immediate, but enemies won't be hit until the server says so). Errors only become really obvious when the ping is very high (> 400 ms) or your connection goes really bad for a noticeable amount of time (e.g. lots of noise on the DSL line, an overheated router/modem, an overloaded ISP, an overloaded server, WiFi connectivity issues, etc.) and thus lots of packets start getting dropped or reordered until the connection quality improves again.
     For more information read Gaffer on Games' networking series, and read it several times (start from the bottom, then work up to the top articles).
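     A minimal sketch of the "discard past packets" rule (my illustration; the struct and function names are hypothetical): keep the highest sequence number processed so far and ignore anything older, using a wrap-around-aware comparison.

         #include <cstdint>

         struct PacketHeader
         {
             uint32_t sequence;     // incremented by the sender for every packet
         };

         uint32_t g_lastProcessedSequence = 0;

         // Returns true if the packet is newer than anything processed so far;
         // false means it belongs to the past (e.g. packet 1 arriving after packet 2) and is ignored.
         bool shouldProcessPacket( const PacketHeader &header )
         {
             const bool isNewer =
                 static_cast<int32_t>( header.sequence - g_lastProcessedSequence ) > 0;
             if( !isNewer )
                 return false;
             g_lastProcessedSequence = header.sequence;
             return true;
         }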
  7. I agree with frob, plus one more thing: nowadays it's really easy to record pictures, video, and audio with just a few swipes of your hand. Most of these teachers have been busted with hard evidence (mostly because they were foolish enough to save pictures of the encounters).
  8. There's an issue with that: you don't have the guarantee that all warps will be working on the same primitive. Half of warp A could be working on triangle X, and the other half of warp A could be working on triangle Y. GPUs make some effort to keep everything convergent, but if they were to restrict triangle X to one set of warps and triangle Y to another set of warps, it would get very inefficient quickly.
     I am curious: why are you asking these extremely low-level questions? Knowing the insides of your GPU is important, especially if you want to squeeze the last drop out of it, both in the techniques you can attempt and the performance you can achieve. However, without specifying a particular set of HW, GPUs are very heterogeneous. They're not like x86 CPUs, which all work relatively similarly because they have to produce perfectly identical results. Although there is some common ground, more than half of these answers will change in 2 years. Specializing in a particular HW is more useful (e.g. GCN is present in PC, Xbox One & PS4; PowerVR is present in Metal-capable iOS devices). For example, you ask about TMUs, yet TMUs no longer exist as such a concept; it's much more complex and very GPU-specific. For instance, Mali GPUs do not have threadgroup/LDS memory at all: they emulate it via RAM fetches. Therefore any optimization that relies on the use and reuse of threadgroup data on GCN (and other GPUs) hurts a lot on Mali.
     It's like learning how to drive a car and asking how the atoms in the car's battery move from one end to the other to power the car's instruments. Yes, if you want to be the best driver, perhaps this knowledge could help you reach the top 3 drivers in the world; however, you need to sit in the car and feel the wheel first.
     Btw, this is a nice resource on latency hiding on GCN. If you want to learn the deep internals of each HW, I recommend you start by reading their manuals:
       • https://01.org/linuxgraphics/documentation/hardware-specification-prms
       • https://www.x.org/wiki/RadeonFeature/ (go to "Documentation")
       • https://static.docs.arm.com/100019/0100/arm_mali_application_developer_best_practices_developer_guide_100019_0100_00_en2.pdf
       • http://malideveloper.arm.com/downloads/OpenGLES3.x/arm_mali_gpu_opengl_es_3-x_developer_guide_en.pdf
       • GPUOpen
     The presentations from SIGGRAPH and GDC are also very useful.
  9. First, like Hodgman said, you don't need 3 of everything: only of the resources you would consider "dynamic". "Static" resources you want to be GPU-only accessible, so that they always get allocated in the fastest memory (GPU device memory), while dynamic resources obviously need CPU access.
     Second, you don't need 3x the number of resources and handles. Most of the things you'll be dealing with are going to be just buffers in memory. This means all you need to do is reserve 3x the memory size and then keep a starting offset:

         currentOffset = baseOffset + (currentFrame % 3) * bufferSize;

     That's it. The "grand design of things" is having an extra variable to store the current offset (a sketch follows below). There is one design issue you need to be careful about: you can only write to that buffer once per frame. However, you can violate that rule if you know what you're doing, by "regressing" currentOffset to a range you know is not in use (in GL terms this is the equivalent of mapping with GL_MAP_UNSYNCHRONIZED_BIT | GL_MAP_INVALIDATE_RANGE_BIT, and in D3D11 of doing a map with D3D11_MAP_WRITE_NO_OVERWRITE). In design terms this means you need to delay writing to the buffers as much as possible, until you have everything you need, because "writing as you go" is a terrible approach: you may end up advancing currentOffset too early (i.e. thinking that you're done when you're not), and now you don't know how to regress currentOffset to where it was before, so you need to grab a new buffer (which is also 3x size, so you end up wasting memory).
     If you're familiar with the concept of render queues, this should feel natural: all you need is for the render queues to collect everything, and once you're done, start rendering what's in those queues.
     Last but not least, there are cases where you want to do something as an exception, in which case you may want to implement a fullStall() that waits for everything to finish. It's slow and it's not pretty, but it's great for debugging problems and for saving you in a pinch.
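     A minimal sketch of that bookkeeping (my illustration; the struct and function names are hypothetical):

         #include <cstddef>

         // One allocation, 3x the per-frame size, with a rotating write window.
         struct DynamicBufferRange
         {
             size_t baseOffset    = 0;   // start of the whole 3x allocation
             size_t bufferSize    = 0;   // bytes needed per frame
             size_t currentOffset = 0;   // where this frame's writes go
         };

         void beginFrame( DynamicBufferRange &range, unsigned currentFrame )
         {
             // Region (currentFrame % 3) is safe to write to because the GPU
             // finished reading it 3 frames ago.
             range.currentOffset = range.baseOffset + ( currentFrame % 3 ) * range.bufferSize;
         }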
  10. What you're trying to do is known as dynamic control flow in a shader, and it is forbidden in the pixel shader in GLES 2.0 (most ES2 hardware is incapable of doing it), and IIRC also in ES 3.0 (not sure about that last one).
  11. Taken from here.
      That is NOT what you described in your original post. You're talking about converting your in-game currency into something that has value in real life, outside of your game. What Fyber, Tapjoy, Supersonic, inMobi, and Google's gift card API do works in the opposite direction (turning real-life money into in-game currency).
  12. Whoa, it ain't supported on any GL3-level hardware.
      Yeah, NV's notion of a "widely supported ARB_clip_control" must be different from ours.
  13. It does if GL supports the GL_ARB_clip_control extension (core since OpenGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default depth range from [-1; 1] to [0; 1]. A sketch of the call is below.
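      A minimal sketch (my illustration; the helper name is hypothetical, and it assumes an active context with function pointers already loaded, e.g. via GLAD) of switching to a [0; 1] depth range when GL 4.5 is available:

          // glClipControl is core since GL 4.5 (and also exposed by ARB_clip_control).
          bool enableZeroToOneDepth()
          {
              GLint major = 0, minor = 0;
              glGetIntegerv( GL_MAJOR_VERSION, &major );
              glGetIntegerv( GL_MINOR_VERSION, &minor );

              const bool hasGL45 = ( major > 4 ) || ( major == 4 && minor >= 5 );
              if( !hasGL45 )   // could also check the extension string for ARB_clip_control
                  return false;

              // Keep the default lower-left window origin; only change the depth convention.
              glClipControl( GL_LOWER_LEFT, GL_ZERO_TO_ONE );
              return true;
          }

      Remember that the projection matrix must then produce clip-space Z in [0; 1] (D3D-style) instead of the GL default of [-1; 1].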
  14. (cough) Heartbleed (cough)
      Proprietary has nothing to do with it. The difference is that Heartbleed was a bug, while an OS like Windows simply has weak security by default, for "friendliness". Replacing dynamic libraries on Windows with malicious versions is pretty easy; files and folders have a weak permission system. The protocol that this current virus exploits is for network transfer, while there's nothing special about accessing files or folders and then modifying them. Not to mention that even if all that were fine, it's still a stupid thing for the government of anywhere to rely on closed-source software.
      You do realize that for every Windows exploit that got leaked from the NSA, there are like 5 leaks for *nix OSes, right? Linux has had extremely bad exploits:
        • Heartbleed
        • Shellshock
        • The Debian fiasco
        • X11, on which it is impossible to implement a secure lockscreen or screensaver. This is not fixed as of today. Unless you use Wayland... and when is Wayland adoption going to become widespread? I'm tired of waiting...
        • OpenGL drivers (including Mesa) returning GPU memory without zero-initializing it first (which is a MASSIVE security hole). This is not fixed as of today.
        • Just today a patch was released for a lightdm bug that allows guest users to access any file.
      I agree that basic infrastructure should run on FOSS software and not proprietary software. But asserting that FOSS software is more secure than proprietary just because it's open source is blatantly wrong. Stop trolling.
  15. You're correct on all accounts, however you forget the physics is still updated very fast with no lag. To put it in another way, play a game blindfolded, with only sound cues or playing by memory; and you'll still be able to react and the physics engine will process your input immediately. Because the visuals are only 1 frame behind at 60fps, it's not that big of a deal (it is, but it's not the end of the world. Now if the framerate is lower...). Another issue you're forgetting is that the distance between physics & graphics may not be an exact frame (because it depends on graphics' framerate). The visual may be up to 1 frame behind. But they may be less (i.e. 0.5 frame behind, 0.2, 0.1). If both graphics & physics are updating at exact multiples then you may end up being 1 frame behind. You can also try to disable triple buffer to compensate.