

About Noctumus

  1. Thanks for your explanation. I think the "up to" part explains a lot, and that 20 milliseconds of uninterrupted CPU time for a thread is definitely more the exception than the rule.
  2. In this article on MSDN about multitasking the author mentions that each thread on a Windows system runs for the duration of a "time slice" and that one of these time slices is around 20 milliseconds. In another article I read the author refers to time slices as "quantums" and writes that they are usually around 15 milliseconds on a Windows system.   Now, what I don't understand is that when I run a program and use a timer (like QPC/QPF or timeGetTime) to continuously spit out timestamps, my program does not actually seem to be suspended every 15-20 milliseconds. In fact, it looks like it isn't suspended at all.   Also, when I ran the program there was a total of 618 threads running on my system, which means that if each of them were granted a 20 millisecond time slice, each thread would have to wait no less than 12.36 seconds after every 20 milliseconds of work, which obviously isn't the case. I know a lot of these threads are sleeping most of the time, but even when I had Left 4 Dead 2 running, which has 45 threads (during gameplay) and uses quite a lot of CPU resources, it still didn't seem to affect my program ... ?
  3. Nice game :)   Next challenge: Add an AI player!
  4. Thanks a lot for all your really, really useful feedback, guys! Thanks to you I'm now pretty sure I have all the knowledge I need to make a good and effective solution (which will most likely involve loading resource data into main memory from a worker thread, adding the address of each loaded resource to a (linked) list, monitoring the list from the main thread, and transferring the data from its nodes to VRAM whenever something is pending).
  5. Thanks for your feedback, and just to be clear: by the above you mean creating the device with the D3DCREATE_MULTITHREADED flag, right? If so, how much should I worry about the following from the MSDN documentation about this flag? Would you personally recommend using it? "This makes a Direct3D thread take ownership of its global critical section more frequently, which can degrade performance"
  6. Hello Forum   Suppose you wanted to make a loading module for your game displaying some object rotating on the screen while your game's resources were being loaded. My intuitive approach for this task would be to split it into two threads: one for rendering the animation (the main thread), and one for loading the resources in the background.   Unfortunately I learned that D3D9 doesn't allow you to share devices and resources (vertex buffers, textures, ...) between threads, which obviously killed that idea. However, it did give me another idea: splitting the data into many small blocks and uploading* them between frames, like   upload block, render frame, upload block, render frame ...   in order to avoid the risk of stalling the rendering process due to the time it would take to upload a large portion of data at once.   The problem is that even if this worked it would somehow feel like a "hack solution" to me. On the other hand I don't really see any other way of doing it, since you have no choice but to keep everything in a single thread when using DirectX.   How would you do it?   * By uploading I mean transferring data from local memory into a D3D object, e.g. obtaining a pointer to the contents of a vertex buffer using IDirect3DVertexBuffer9::Lock and then copying the local data to that address.
  7. Lol, in that case I should probably use a BinaryChristmasTree for managing scene objects internally
  8.   “Render” can be used as an adjective, however, unlike “walking”, and in the case of “render farm” it describes the type of farm, and in “render system” it describes the type of system. Both are grammatically correct, but “rendering system” should be preferred.     L. Spiro   Thanks! Good explanation :)
  9. Intuitively I think "Rendering System" sounds more correct, but both give plenty of hits on Google (OGRE, for example, has a class called RenderSystem) so I was wondering what you guys think is the more grammatically correct term? Or do they actually have different meanings?
  10. I agree with greyfox. I hadn't looked at GameMaker for quite some time, but I just visited their homepage, and you can actually make some really cool games with it, like this one: https://www.yoyogames.com/showcase/38   If I, with all respect, had as little experience with game development as you apparently do, I would totally start with this program! And who knows, if you become good at it you may not even need to learn anything else (in which case you should consider yourself lucky, considering all the headaches you will be spared by not having to learn a more "serious" language like C++)
  11. Cool you found a solution using memory barriers! However, if it's of any interest, I once had a similar problem, and in my case what happened was that the OS would sometimes switch from one thread to the other immediately after reading a shared (volatile) variable from memory, before the first thread had a chance to examine its value. As a result, when the first thread was scheduled again the variable had actually been altered by the second one, although the value it was about to test hadn't been updated. For example, if you do something like this: [source]volatile bool busy = false; if(!busy) {     // do something here }[/source] the line "if(!busy)" is actually translated into two instructions: one for fetching the boolean value from memory, and one for testing whether it's zero. This means that if the OS switches to the other thread immediately after the value has been read (with the first instruction), and the other thread in the meanwhile accesses some shared resources, trouble can arise when the first thread resumes, because it will examine the value as it was before the switch and perhaps determine that it's OK to proceed, even though that actually isn't the case. It may sound unlikely that the OS would switch threads at that particular point in the code, but trust me: if the code is executed perhaps thousands of times per second it will happen, sooner or later. I'm talking from bitter experience here ;)
  12. After reading this article by Microsoft about timing in games, in which they encourage always clamping the delta values (and even imply they could potentially be negative due to bugs) when using QueryPerformanceCounter, I decided I might have ruled out this function returning erroneous values a bit too early. And indeed I had. This function was exactly what was causing the glitches, and although I had tried capping the delta values earlier I can only assume I must have done something wrong, because now it's running flawlessly! I even made a festive screensaver to celebrate.   Thanks for all your feedback and suggestions   PS: Jason, I did run the program in release mode, and if it's still of any interest here are the specs for my computer:
  Operating System: Windows 7 Home Premium, 64-bit (Service Pack 1)
  Processor: Intel Core i5-3570K CPU @ 3.4 GHz
  Motherboard: ASUS Maximus V GENE
  Memory: Kingston HyperX Predator DDR3 4x4 GB
  DirectX version: 11.0
  Direct3D API version: 11
  Direct3D feature level: 11_0
  GPU processor: GeForce GTX 560 Ti
  Driver version: 331.65 (November 7 2013)
  CUDA Cores: 384
  Core clock: 822 MHz
  Shader clock: 1645 MHz
  Memory data rate: 4008 MHz
  Memory interface: 256-bit
  Memory bandwidth: 128.26 GB/s
  Total available graphics memory: 4095 MB
  Dedicated video memory: 2048 MB GDDR5
  System video memory: 0 MB
  Shared system memory: 2047 MB
  Video BIOS version:
  IRQ: 16
  Bus: PCI Express x16 Gen2
  13. I appreciate your feedback, but I just updated my driver less than a week ago and all tests were executed outside the IDE.   Thanks for enlightening me about the DirectX Control Panel. I tried creating an IDE project and did what you instructed, which actually resulted in a couple of warnings from the DX environment. However, after having adjusted the code accordingly the glitches still occurred, and the screen didn't turn pink/green at any point (unfortunately, since what you described about seeing the previous frame repeated could definitely have been what was happening). The good news is I think I may have found a way to figure out what's causing the trouble, since the program is heavily based on one of the tutorials that ship with the DirectX SDK which, despite the similarity, does not have the same problem as my own program. What I'll do is simply use the tutorial as a starting point and try to "morph" it into my own program gradually and see when the glitches start appearing. I'll let you know if I find it.
  14. I made a version that deliberately generated glitches in order to determine how big the delta time would have to be in order to cause these visual artifacts. Then I added functionality to keep track of the minimum, maximum, and average values and tried capping the dt to ensure it would always lie close to the average (+/- 20%). This, however, didn't solve the problem. Even when the dt couldn't possibly be anywhere near the value required to generate these huge gaps the glitches still occurred. This is also one of the things that made me think it's probably something related to D3D that's causing it (like some buffer not being flushed or whatever). I haven't tried logging the times though, but even if I did I'm not sure it would even help me figure out what's causing the problem.   I don't know if this ping-pong pattern is what's actually happening; it's just how I perceive it when the program is running and since a single frame only lasts around 16 ms at 60 Hz I could very well be wrong. In fact I actually think your own theory is much more plausible, but even so I don't know what to do about it. I mean it's a pretty simple program and without more knowledge about D3D it feels like there's not a whole lot of things left to try :/ Thanks for your feedback.
  15. Greetings, Forum!   I've recently begun studying Direct3D 9 but have unfortunately encountered a little bug in my latest experiment: a simple program that rotates a textured six-pointed star about the three axes with a single directional light (source code available here). To describe the bug, imagine I was rendering a ball instead of a star, moving from the left side of the screen to the right. Then, this is what it looks like when the program runs normally (the top number being the frame in which the ball is rendered):     And this is what appears to be happening when the glitch occurs:     I tried running the program without clearing the screen in each frame and managed to capture what it looks like when the glitch occurs with vsync enabled at 60 Hz (all three arrows point to visual artifacts caused by the same glitch):     It appears what I thought was happening could actually be the case (especially if you look closely (click image to magnify) at what's going on near the left arrow). The problem is I don't have a clue what the heck is causing it. Considering I have less than a week's experience with Direct3D, there's a good chance it's something very basic I'm missing that one of you can teach me about. 
At least I hope so, because right now it's driving me nuts   Here are some of the things I have gathered so far from testing and experimenting:

The glitch is apparently not caused by:
  • A floating point rounding error
  • The performance counter returning a wrong value

The glitch is apparently not affected by whether:
  • I use hardware or software vertex processing (D3DCREATE flags)
  • I use performance counters (QPF/QPC) or multimedia timers (timeGetTime) for calculating the delta time
  • I restrict the program to run on a single CPU core or not

Some other observations:
  • The glitch appears, on average, about once every 10 seconds on my computer when vsync is enabled and about once every 3 seconds when it's not (using D3DPRESENT_INTERVAL_IMMEDIATE in the present parameters to disable it)
  • The impact/magnitude of the glitch doesn't seem to be affected by whether vsync is enabled or not
  • The glitch appears very randomly; not at any specific angle or time, and sometimes (much) less frequently than at others (every now and then the program can run smoothly without any problems for several minutes)

I tried creating the device using D3DDEVTYPE_REF, but this causes the framerate to drop so dramatically it's impossible for me to determine whether the glitch still occurs. I also examined a couple of my Direct3D accelerated games to see if something similar could occur in one of them, but that didn't seem to be the case.   PS: If you want to compile and run the program you can download the texture here