
DEVLiN

Members
  • Content count: 121
  • Joined

  • Last visited

Community Reputation

203 Neutral

About DEVLiN

  • Rank
    Member
  1. Quote: Original post by Dragonsoulj: "Just a thought on all the stuff that doesn't matter as much, like health, or position: why not store it client side?" Because then I could teleport wherever I wish and always be at full health. This is how a lot of the early WoW exploits were created (zipping around zones, gathering and opening chests from beneath the world so no monsters could harm the player, etc.).
  2. Quote: Original post by cache_hit: "Projection and view are both identity." Wouldn't that explain it? The coordinates would range from (-1, -1) to (1, 1), no? You're drawing from (0, 0) (the center) to (1, 1), which equals one quarter of the screen.
  3. To add to the "lack of comments" comment: FPS isn't a good measuring tool because it isn't linear. The drop from 1500 fps to 250 fps is ~3.33 milliseconds per frame. That is the same as the drop from 60 fps to 50 fps. Thus, the initial drop of 1250 fps costs exactly as much frame time as the later drop of 10 fps. I suggest you use ms/frame (divide 1000 by your FPS, as long as your FPS isn't 0) as a measuring tool instead of (or in addition to) FPS, as it's much more readable. To hit 60 fps you have 16.67 ms/frame to play with; to hit 30 fps you have 33.33 ms/frame to play with. It's your call whether it's worth losing 2.66-3.33 of those milliseconds to the flexibility and feature set that Direct2D/DirectWrite offer.
  4. Must be a driver issue. It seems it was a problem with "dirty" areas, only clearing the areas where something happened. It went away if I drew a fullscreen filled rectangle (even with alpha 0). Might not prove to be a problem in the long run though - have to give it a try in a real application. Thanks for your help yet again. :)
  5. Tack så mycket för hjälpen! ;) (Thank you so much for the help!) Odd thing though: that last sample you posted, while working well, doesn't actually clear properly. I might have a case of bad drivers? Edit: Same result with the shared resource example. [Edited by - DEVLiN on November 9, 2009 12:26:43 PM]
  6. Thanks a lot, guys - I'll give it a try. :) Just curious though: is there any way to overlay an HwndRenderTarget, or is that bound to fail?
  7. Quote: Original post by ET3D: "Have you used the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag when creating the device? This is required for Direct2D interoperability. See D3D11_CREATE_DEVICE_FLAG in the SDK docs." I'm using D3D10_CREATE_DEVICE_BGRA_SUPPORT at the moment. I can change this to D3D11_CREATE_DEVICE_BGRA_SUPPORT tonight, but I have a feeling it will not matter, sadly. Thanks though! Erik: I've read it through (thus the two-devices comment in my original post), but I couldn't get it to work: OpenSharedResource fails with E_INVALIDARG. Definitely a coding error on my behalf, as I'm relatively new to D3D (coming from OpenGL). And unless I missed it, there were no code samples other than the pseudocode ones. Sadly, neither the DXSDK (August 2009) nor the 7.0A WinSDK seem to provide samples for D2D at all, unless I missed a setting somewhere (they do both provide headers and libs for it, though *sigh*). I would rather get the HWND method working, if at all possible, than the two-device method. If they do provide proper interop soon enough, the HWND method would only require one #define and a two-line change. It would be nice to get some feedback, whether "we're working on this" or "not going to happen at all", because if it's soon enough I could work on other parts of the code in the meantime. If not at all, biting the ugly bullet would be the only way.
  8. Greetings! (Before we begin: yes, I've searched these forums for answers; they haven't fully helped, though.) Having recently acquired a brand new computer with Windows 7 and a DX11 graphics card, I wanted to finally get started implementing the DX11-only features I've been waiting to use for quite some time (tessellation, compute shaders, etc. Worth mentioning is that I've recently converted from OpenGL to D3D, so a lot of this is new to me). However, getting Direct2D and D3D11 to work together seems to be quite a hassle. I planned on using D2D and DWrite for text and UI, as they seem to provide some really neat features. However, while all the samples show neatly how to acquire the backbuffer as a DXGI surface for use with Direct2D, for some reason this doesn't seem to work with a D3D11 device at all (not even using feature level 10_1).
     I understand that there exists what I would call a "hack" to get this working (using two devices, shared full-screen surfaces, and sync mutexes, resulting in ugly, error-prone code, frame delays, and unnecessary video memory going to waste *yuck*). However, I haven't gotten this to work either (OpenSharedResource seems to return E_INVALIDARG no matter what args I pass in), not that I think it's the way forward at all. I don't have access to the code right now, but if someone has the energy to help me through it I'll post it tonight.
     Also, it doesn't seem to matter in what order I include headers and libraries: CreateDxgiSurfaceRenderTarget always returns E_NOINTERFACE when trying to use the D3D11 device backbuffer instead of the double-device method. This should (?) mean that I'm using the wrong or mixed headers/libs, but I've tried getting them all from DXSDK_DIR (August 2009) or from the WinSDK (7.0A) to no avail. Is there any way to check which headers/libs are actually being used? I'm using Visual Studio 2010 beta 2. I installed the DXSDK after the WinSDK, if that matters at all.
     Another possible(?) solution: I have also attempted to use an HwndRenderTarget until proper D3D11 support is added, as it doesn't require additional devices or mutexes, and it renders, but not the way I would want (no E_NOINTERFACE result when using this method). The screen flickers between the two presentations irregularly. Isn't D2D1_PRESENT_OPTIONS_NONE supposed to wait until display refresh? Do I have to turn vsync on for this to happen? Is it the double-buffering of the D3D device that's causing problems? And my final question: is there any way to let the HwndRenderTarget only draw where I draw and not clear the background? (I've tried handling WM_ERASEBKGND, but that doesn't seem to affect it at all.) Is this method of combining D3D11 and D2D bound to fail?
     Sorry if some of this is hard to read. English isn't my primary language, but I've tried my hardest to make it passable. [Edited by - DEVLiN on November 10, 2009 5:32:25 AM]
  9. Try this (as root or via sudo, or you'll get a "permission denied" on /dev/mem):
     dmidecode --type baseboard
     It gives output similar to:
     Base Board Information
         Manufacturer: Gigabyte Technology Co., Ltd.
         Product Name: 8IPE1000-G
         Version: x.x
         Serial Number:
  10. Thank you abdulla - that was exactly the kind of response I was looking for. :)
  11. If I somehow offended anyone, I apologize, but I can't find anything in my original post that sparked such hostility in the replies. I'm also not sure why you brought sockets into this. In quite a few posts on these very forums (a simple search for "shared memory sockets", for instance), people recommend shared memory over sockets for inter-process communication for precisely performance reasons. The lists I've seen imply that shared memory with semaphores is the fastest and sockets the slowest form of inter-process communication, quite different from what your reply suggested. I'll rephrase my question as something more general instead: is there any inherent, large performance difference between using multiple threads and multiple processes, or is it solely dependent on how they're used?
  12. Hi all! First of all, please excuse any language errors, as English is not my native language. Since our current game is in an art-heavy stall, I'm able to really take the time to design our next game engine from scratch, with a (at the moment) very narrow planned user base (Windows Vista+, DirectX 10+, and most likely multi-core processors under the hood) and no planned support for downscaling to lesser cards or sidestepping into other operating systems apart from later incarnations of Windows (these facts might influence the issues presented below). In order to fully utilize the potential of the system at hand, I will have to use the available cores as well as possible.
     Now, the initial idea was to start threading the stuff that can run in parallel, as usual, but I was wondering whether another approach would potentially lead to better parallelism and scale better with potential future many-core processors. This is where I feel I'm not fully aware of the implications, and thus I seek your knowledge. The other approach consists basically of a host "kernel process" and separate worker processes (not threads but actual processes) that each have predefined tasks. The worker processes are to be buffered, and they use shared memory (through memory-mapped files) and some sort of compare-and-swap to handle messaging between the processes. This implies a slight latency between what is rendered and what the world state is (i.e. what's rendered isn't the whole truth), but hopefully I can arrange that latency to hit certain tasks harder than others (keeping input and the local actor highly up to date while allowing a bigger latency for less important actors).
     Now, it might seem such an approach is overly complicating things, but it would have a few nice perks to go along with it. First of all, it would force us to think about parallelism at all times, since we can never be sure in what order things will occur without a sync lock from the kernel process. It feels like we'd automatically have fewer locks on data due to the separate application pool and the buffered nature of the system, and it gives a rather nice way of handling the update rate of specific tasks. We'd also potentially be able to detect crashes of separate subsystems from the kernel and try to handle those gracefully.
     Now, before I get too involved in the design of the second approach, I have to ask: am I shooting myself in the foot here? Would performance plunge by using separate processes with shared memory to handle the interchange of data? (I'm only interested in the performance side of things, not the manageability or debug-friendliness of the solution.) I'm fully aware that buffering the data will imply a higher memory cost, but regardless of that, and assuming I could pull it off, would (could) it perform at least near a standard multi-threading solution?
  13. Someone please correct me if I made a mistake (or misunderstood your question), as it's too early in the morning for me to type this, but here goes: the "LP" part of LPDIRECT3DTEXTURE9 stands for "long (far) pointer" in Hungarian notation. LPDIRECT3DTEXTURE9 is thus typedef'ed simply as IDirect3DTexture9* (notice the pointer asterisk). Pointers (*), like references (&), carry any changes made inside the function back to the caller, whereas you are entirely correct in saying that if you use neither, only the local copy receives the changes.
  14. Problem solved. Seems like the MP3 decoder played some tricks on me while the debugger was attached to the process.