Myiasis

Members
  • Content count: 15
  • Joined

  • Last visited

Community Reputation

306 Neutral

About Myiasis

  • Rank: Member
  1. Yes, that's one way to do it. Once you get a UI system in place you'll probably find there are a bunch of ways to tackle the same problem. It will be influenced by your UI's visual design too.

     Your player info screen sounds like a collection of related controls. I would probably make them all children of some parent. The parent could be an element that does nothing and is purely organizational, or it could draw a background or borders so the placement of the player info controls looks better.

     Does the player info cover the entire screen, and is it opaque? If so, it might make sense to do what you suggested: turn off everything and only show the player info. Or would it be better to draw the player info window on top of what is already there, but make it the only window you can interact with? The UI manager can control this sort of functionality since the elements get fed their events via the manager. It is for things like this that I like to keep the UI manager in charge rather than putting the code that recognizes clicks and movements directly in the controls.
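     A rough, untested sketch of the grouping idea (all names here are illustrative, not from any particular library): a parent "panel" element whose only job is to own the player info controls, so hiding the panel hides everything in it.

        #include <vector>

        class UIElement
        {
        public:
            virtual ~UIElement() = default;
            virtual void Draw()
            {
                if (!visible) return;
                for (UIElement* c : children) c->Draw();
            }
            void AddChild(UIElement* child) { children.push_back(child); }

            bool visible = true;
            std::vector<UIElement*> children;
        };

        // A parent that exists mostly to group the player info controls; it could also
        // draw a background or border before drawing its children.
        class PlayerInfoPanel : public UIElement
        {
        public:
            void Draw() override
            {
                if (!visible) return;
                // DrawBackground();  // optional: backdrop/border behind the grouped controls
                for (UIElement* c : children) c->Draw();
            }
        };

     Usage would be along the lines of adding the name label, health bar, etc. as children of the panel, then toggling panel.visible to show or hide the whole player info window at once.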
  2. I use a UI manager approach. There is a lot to making a UI beyond visibility, and I put most of that responsibility in the UI manager.

     Basics of my approach:
     • The game loop gathers up all the inputs and translates them into a common form for my engine. These inputs are used by both the UI and the game.
     • The UI manager gets first chance to process the inputs. If the UI decides it wants an input, such as a keypress or button press, it marks the input consumed so that the game will not also respond to it.
     • UI elements are based off a common class that has the functionality the UI manager needs to interact with them.
     • UI elements form a hierarchy of elements.

     I gave my base element functions that I found handy for implementing UI behaviors. The manager figures out things like whether the cursor is over an element and notifies the elements of events like OnCursorEnter, OnCursorLeave, OnCursorMove. If buttons are clicked there are functions for that too. Typical stuff that lets you build up functionality for specific types of controls. A button control might change colors on OnCursorEnter and go back to its regular state on OnCursorLeave, stuff like that.

     The manager maintains a list of top-level items. These I consider attached to the "screen". The manager decides what will get drawn based on a visibility flag on the items. You don't have to worry about calling specific functions like you mentioned above; you just let the manager do its job. If an item is marked visible, it will be drawn. I let the elements decide whether they want to draw their children or not, though.

     In the cases you mentioned, a system like this lets you control things in various ways:
     • Don't want to draw any UI? You could choose to not call the Render function on the manager.
     • Don't want to draw specific buttons? Set their visibility flag, or an inactive flag, or whatever makes sense for your UI.
     • You could have your UI grouped: make a parent element with some child buttons; hiding the parent will prevent any of the children from drawing.
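     A stripped-down sketch of the manager-first input pass (class and member names are illustrative, not my actual code). The manager works out which element the cursor is over, fires the OnCursorEnter/OnCursorLeave/OnCursorMove hooks, and marks a click consumed so the game loop skips it.

        #include <vector>

        struct Input { int cursorX = 0, cursorY = 0; bool clicked = false; bool consumed = false; };

        class UIElement
        {
        public:
            virtual ~UIElement() = default;
            virtual bool Contains(int x, int y) const { return false; }
            virtual void OnCursorEnter() {}
            virtual void OnCursorLeave() {}
            virtual void OnCursorMove(int x, int y) {}
            virtual bool OnClick() { return false; }     // return true to consume the click
            bool visible = true;
        };

        class UIManager
        {
        public:
            void ProcessInput(Input& in)
            {
                UIElement* hit = nullptr;
                for (UIElement* el : m_topLevel)
                    if (el->visible && el->Contains(in.cursorX, in.cursorY)) { hit = el; break; }

                if (hit != m_hover)                       // cursor moved between elements
                {
                    if (m_hover) m_hover->OnCursorLeave();
                    if (hit)     hit->OnCursorEnter();
                    m_hover = hit;
                }
                if (hit)
                {
                    hit->OnCursorMove(in.cursorX, in.cursorY);
                    if (in.clicked && hit->OnClick())
                        in.consumed = true;               // the game ignores consumed inputs
                }
            }

            std::vector<UIElement*> m_topLevel;
        private:
            UIElement* m_hover = nullptr;
        };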
  3. I have used #3 for a GUI and did not find it slow.

     Since you are working on a GUI, you might want to consider what you specifically need for that use case. For example, specifying transforms for each quad sounds a bit like overkill to me. You can certainly do it, but do you need to for a GUI?

     I think in many cases the quads of a GUI are relatively static. You might want some animation for hovering the cursor or sliding elements into view or whatever, but most things are just going to sit where you stick them. You could transform the verts of the quad on the CPU and then keep them around until there is a change.

     Consider something like a text prompt/label: you are probably going to plop it on the screen somewhere, compute the quads for the letters, and from that point on it probably isn't going to move, rotate, or scale. If you create the array of verts and hold onto it, then all you have to do is copy the verts into the dynamic buffer every frame.

     Assuming your GUI doesn't have an enormous number of constantly changing elements, computing new quads when the data changes is probably not going to be a big deal -- FPS counter, score, stats... Even then, those things are probably not changing values every frame.

     You might want to take a look at the DirectX Tool Kit, which has a sprite batcher similar to what I'm describing.
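     A small sketch of the "compute once, copy every frame" idea (names are illustrative):

        #include <cstddef>
        #include <cstring>
        #include <vector>

        struct GuiVertex { float x, y, u, v; };

        struct Label
        {
            std::vector<GuiVertex> cachedVerts;   // one quad (4 verts) per glyph
            bool dirty = true;                    // set when the text or position changes

            void RebuildIfNeeded()
            {
                if (!dirty) return;
                cachedVerts.clear();
                // ... compute a quad per glyph at its final screen position (CPU-side transform) ...
                dirty = false;
            }

            // Each frame: no per-quad transforms, just copy the cached verts into the dynamic buffer.
            void AppendTo(GuiVertex* mappedBuffer, std::size_t& writeOffset)
            {
                RebuildIfNeeded();
                std::memcpy(mappedBuffer + writeOffset, cachedVerts.data(),
                            cachedVerts.size() * sizeof(GuiVertex));
                writeOffset += cachedVerts.size();
            }
        };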
  4. I did something similar to this a while back. What I did was separate all of the generation logic into separate threads, but I left all of the integration of the new data on the main thread only. This separation makes it easy to use; you don't have to worry about data races.

     I used Microsoft's Concurrency Runtime (C++). Basically I set up two queues: one to push requests for data into, and one that holds the results. Both of the queues are concurrency safe.

     When I need some data that takes a while to produce (such as a voxel landscape), I push a request with all the data it needs into the first queue. The other thread wakes up when it has items in the queue, pops off the request, and processes it using as many threads as it needs; it could be sequential or use more parallel processing. When the data is created, it pushes a message onto the result queue with info about the result, including a pointer to the data.

     Each frame, on the main thread, I check the result queue, pop off the results, stick the finished data wherever it needs to go (such as a VBO), and move on.

     In this fashion the data generation can take as long as it needs without stuttering the game, and you don't have to worry about clobbering your data by reading and writing the same buffer at the same time, or having to take locks. In my case, I was creating the voxel data a little beyond what the player could see and then caching the results, so when it came time to display it, it was generally ready to go.
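     A condensed sketch of the two-queue pattern. This version uses standard C++ threading instead of the Concurrency Runtime I actually used, and the request/result types and names are just placeholders.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <vector>

        struct Request { int chunkX, chunkZ; };                 // what to generate
        struct Result  { int chunkX, chunkZ; std::vector<float> voxels; };

        template <typename T>
        class SafeQueue
        {
        public:
            void Push(T item)
            {
                { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(std::move(item)); }
                m_cv.notify_one();
            }
            bool TryPop(T& out)                                  // non-blocking: main thread, once per frame
            {
                std::lock_guard<std::mutex> lock(m_mutex);
                if (m_queue.empty()) return false;
                out = std::move(m_queue.front()); m_queue.pop();
                return true;
            }
            T WaitPop()                                          // blocking: worker thread sleeps until work arrives
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cv.wait(lock, [this] { return !m_queue.empty(); });
                T out = std::move(m_queue.front()); m_queue.pop();
                return out;
            }
        private:
            std::mutex m_mutex;
            std::condition_variable m_cv;
            std::queue<T> m_queue;
        };

        SafeQueue<Request> g_requests;   // main thread pushes, worker pops
        SafeQueue<Result>  g_results;    // worker pushes, main thread pops each frame

        void WorkerLoop()
        {
            for (;;)
            {
                Request req = g_requests.WaitPop();
                Result res{ req.chunkX, req.chunkZ, /* generate the voxel data here */ {} };
                g_results.Push(std::move(res));
            }
        }

        // Main thread, once per frame:
        //   Result res;
        //   while (g_results.TryPop(res)) { /* upload to a VBO, etc. */ }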
  5. The way I have done my resources can cover at least some of your concerns. The answers already given show, I think, that this is primarily up to personal preference. Phil's suggestion is closest to what I did.

     I created a few templates for caching my resources:

     CacheItem<Type>
     The CacheItem wraps a pointer to the allocated resource and a reference count. I started with a shared_ptr but had some issues and found it simpler to make my own object. I also wanted to be able to track data other than just a reference count, so if I used a shared_ptr I would still end up having to wrap it in an object with my extra data. This sort of matches your desire for a handle. It could point at a dummy object until the real data was loaded, if you wanted to. Other things you might want to track, depending on how you intend to flush your cache: last time requested, maybe a sector id (from your original post).

     CacheRef<Type>
     The CacheRef is what my game uses external to the cache. This is what the resource manager returns. It takes a reference to a CacheItem. This object is responsible for incrementing/decrementing the reference counter in the CacheItem and providing the pointer to the data. When this object goes out of scope it decrements the reference count, similar to using a shared_ptr. This prevents me from having to worry about letting the resource manager know when I am done with a resource.

     Cache<TKey, Type>
     Implemented with a std::map. The stored type for the map is a "CacheItem<TData>*". Relatively simple; it has functions for adding to, finding in, and flushing the cache. It could be tailored to flush by whatever you want to put into the CacheItem: sectors, reference counts, last request time, etc.

     I have a single resource manager that uses instances of Cache<Key, Type> with custom load functions for the various types. The declarations get a little cumbersome, so some typedefs are used to make life a little easier. With the templates there is very little duplicated code, just the specifics needed for the various resource loading/creation.

     My resource requests are something like:
     1) auto* item = m_whicheverCache.Find(name);
     2) if (!item) -> load the data, create the resource, add it to the cache.
     3) return item

     What gets returned is a CacheRef, which can be queried to see if it is empty/null, which indicates an error somewhere. Hang onto the CacheRef for as long as you need it, which keeps the reference count up.

     This also provides another safety check for me in regards to whether I cleaned up what I thought I did. When my cache goes out of scope it checks the reference counts on all of the items in the cache; they should be zero if the cache is about to be destroyed. I have had times where I thought all my resources were cleaned up, but they were not.

     The general pattern of usage is pretty simple:

         auto nameRef = resourceManager.GetString("login_name");
         if (nameRef.IsEmpty())
         {
             DoSomethingAboutIt();
             return;
         }
         Print(nameRef.Data());

     I have a .Data() and a .DataPtr(), which are just a reference and pointer to the underlying data. With the templates, everything is nice and type safe; no casting required.
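     A condensed sketch of the three templates, heavily simplified (the real versions track more bookkeeping data, and copy semantics are kept minimal here):

        #include <map>

        template <typename T>
        struct CacheItem
        {
            explicit CacheItem(T* d) : data(d), refCount(0) {}
            T* data;
            int refCount;
            // could also track: last request time, sector id, etc.
        };

        template <typename T>
        class CacheRef
        {
        public:
            CacheRef() = default;
            explicit CacheRef(CacheItem<T>* item) : m_item(item) { if (m_item) ++m_item->refCount; }
            CacheRef(const CacheRef& other) : m_item(other.m_item) { if (m_item) ++m_item->refCount; }
            ~CacheRef() { if (m_item) --m_item->refCount; }      // scope-based release, like shared_ptr
            CacheRef& operator=(const CacheRef&) = delete;       // omitted to keep the sketch short

            bool IsEmpty() const { return m_item == nullptr || m_item->data == nullptr; }
            T& Data() const { return *m_item->data; }            // only call when !IsEmpty()

        private:
            CacheItem<T>* m_item = nullptr;
        };

        template <typename TKey, typename T>
        class Cache
        {
        public:
            CacheRef<T> Find(const TKey& key)
            {
                auto it = m_items.find(key);
                return it != m_items.end() ? CacheRef<T>(it->second) : CacheRef<T>();
            }
            CacheRef<T> Add(const TKey& key, T* data)
            {
                auto* item = new CacheItem<T>(data);
                m_items[key] = item;
                return CacheRef<T>(item);
            }
            // Flush/destructor would walk m_items and check refCount == 0 before deleting.
        private:
            std::map<TKey, CacheItem<T>*> m_items;
        };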
  6. DX11

      No, you only want to reset your counts when you DISCARD on the buffer. Drawing doesn't do anything to the buffer at all.

      The DrawIndexedPrimitive function takes an offset into the buffer where it should start drawing. If you put in 12 vertices the first time, then when you take the lock with NOOVERWRITE for a second texture you will start putting vertices into the buffer at offset 12. When you call DrawIndexedPrimitive, the second parameter is which vertex to start drawing from (12 in this case).

      When you stack the draw calls like that, using NOOVERWRITE, the GPU is probably still working on the buffer even as you are filling it with more data. You have to make sure you don't overwrite the data you already put in there, because you don't know whether the GPU is in the middle of working on it. If you were to reset your counts when using NOOVERWRITE, you would in fact be overwriting.
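      Roughly what that looks like against the D3D9 API (sketch only, error checks omitted; the vertex layout is a placeholder, and it assumes the index buffer is pre-filled with the repeating quad pattern 0,1,2, 2,1,3, 4,5,6, ...):

        #include <d3d9.h>
        #include <cstring>

        struct Vertex { float x, y, z, rhw, u, v; };   // placeholder; match your declaration

        // Writes quadCount quads after what is already in the buffer and draws just that range.
        void FlushQuads(IDirect3DDevice9* device,
                        IDirect3DVertexBuffer9* vb,
                        const Vertex* quadVerts, UINT quadCount,
                        UINT& vertexOffset)            // verts written since the last DISCARD
        {
            const UINT vertCount = quadCount * 4;

            void* dest = nullptr;
            vb->Lock(vertexOffset * sizeof(Vertex),    // lock only the region we are about to write
                     vertCount * sizeof(Vertex),
                     &dest,
                     D3DLOCK_NOOVERWRITE);             // promise: earlier data stays untouched
            std::memcpy(dest, quadVerts, vertCount * sizeof(Vertex));
            vb->Unlock();

            // Second parameter (BaseVertexIndex) is where this batch's vertices start,
            // e.g. 12 if 12 vertices were written by earlier batches.
            device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                         vertexOffset,
                                         0, vertCount,
                                         0, quadCount * 2);   // 2 triangles per quad

            vertexOffset += vertCount;                 // only reset this when you lock with DISCARD
        }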
  7. DX11

      Something like that, yes. If you are going to store the actual vertex data in your draw list, you can just memcpy a block of it, which is a little easier. Also remember that if you are locking the buffer with the no-overwrite option, the pointer you get back will point to the same buffer you filled last time, so you need to write data into it past the point where you put data last time.

        vertex* vertices;
        vBuffer->Lock(0, 0, (void**)&vertices, NULL);
        for (auto& item : m_drawList)
        {
            memcpy(vertices + m_countWritten, item.verts, sizeof(vertex) * 4);
            m_countWritten += 4;
        }

      When you Discard, set your count variable back to 0 to start at the top. Use your count to determine when you are going to overflow the buffer; that's when you know you need to Discard again.

      True, the sorting by texture like that is only going to work if you don't care about the order in which they overlap. This is where the SpriteBatch needs to be tailored to however you find it handy to use.

      For example, you could add an "int m_level" to the DrawData and add a "level" parameter to the Draw calls. This lets the caller say "all these are on level 0, these are on level 1 (need to be drawn on top of level 0)." Then when you sort, sort by both level and texture so you end up with the levels sorted together, then by texture within the level.

      Or you could drive that externally: the code that uses your sprite batch could Begin -> draw all the stuff that doesn't overlap -> End, then start up a new batch: Begin -> draw the overlapping stuff -> End... Or your sprite batch could determine levels automatically based on overlapping rectangles.
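      The level-plus-texture sort could look something like this (a fragment, assuming DrawData from my other reply has gained an int m_level):

        #include <algorithm>

        std::sort(m_drawList.begin(), m_drawList.end(),
            [](const DrawData& a, const DrawData& b)
            {
                if (a.m_level != b.m_level)
                    return a.m_level < b.m_level;   // lower levels draw first, underneath
                return a.m_texture < b.m_texture;   // minimize texture changes within a level
            });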
  8. DX11

    Yup, your vertex declaration looks better with the dynamic flag. I think I saw that you can't create a dynamic buffer with the managed pool, though? Maybe I'm wrong, I haven't used that API -- something to look into.

    That actually brings up another thought: do you have to use the DX9 API? You can use the DX11 API with a DX9 feature set. Same concepts, but the functions are a little different to do this.

    Create your own structure/class to stuff into the vector like you mentioned. Something like this:

        struct DrawData
        {
            DrawData(const Rectangle& rect, Texture* texture)
                : m_rect(rect), m_texture(texture)
            {
            }

            Texture* m_texture;
            Rectangle m_rect;
        };

        std::vector<DrawData> m_drawList;

        void SpriteBatch::BeginBatch()
        {
            m_drawList.clear();
        }

        void SpriteBatch::Draw(const Rectangle& rect, Texture* texture)
        {
            m_drawList.push_back(DrawData(rect, texture));
        }

        void SpriteBatch::EndBatch()
        {
            std::sort(m_drawList.begin(), m_drawList.end(),
                [](DrawData& first, DrawData& second)
                {
                    return first.m_texture < second.m_texture;
                });

            for (auto& item : m_drawList)
            {
                if (item.m_texture != lastTexture || bufferIsFull)
                {
                    // Unlock dynamic buffers and draw anything you put in there already.
                    // Take the lock again with whichever lock flag you need.
                    // Reset lastTexture to item.m_texture.
                    // Reset your offset into the dynamic buffer if you discarded.
                }

                // Fill in the vertex data for the 'item'. You could do this by casting the pointer you got
                // back from the lock into a vertex structure, or fill in a local vertex structure and then
                // copy to the dynamic buffer, whatever you find most friendly to your style.
                // Each time you put more data in the dynamic buffer just keep track of where you were at so
                // next time through the loop you can figure out if the buffer is full or not.
            }
        }

    You can add whatever data you need to that object. You don't have to figure out all the final vertex data in the Draw call; you could if you wanted to, though, it's really up to you. Either you track enough data that you can expand it all into the dynamic buffer during EndBatch, or you put all that data in the DrawData structure and fill it in during the Draw call.

    Your steps sound about right to me. What doesn't feel right to you about the workflow? This is just my take on a SpriteBatch; others would probably do things differently, although I would expect most of them to be based around dynamic buffers in a similar conceptual way.
  9. DX11

    I drew a picture; maybe this will help visualize it.

    Scenario:
    • Your SpriteBatch has an internal list that keeps track of the draw calls. This is of unlimited size.
    • You have a dynamic vertex buffer that can hold up to 3 draw calls' worth of data.

    The picture starts in the upper left, flows down the left side, then over to the top of the next column and flows down again.

    Flow:
    • 6 Draw calls are made: Blue, Green, Red, Green, Blue, Green.
    • The internal list stores the draw calls in the order they came in and looks like "Internal buffer".
    • EndBatch is called. You want to be efficient, so you sort the internal buffer by texture, which then looks like "Sort by Texture". Now that it is sorted, you can walk the internal buffer watching for texture changes.
    • The dynamic buffer has not yet had any data written to it; it is empty. You take a lock with the NoOverwrite flag because the buffer is empty. You are keeping track of where you last inserted into the dynamic buffer. Empty, so you are at the top of the buffer. The orange arrow is the next insertion point.
    • You insert the blue quad data, followed by another blue quad, then you encounter a green and that will require a texture change. So you unlock the buffer and DrawIndexed for the 2 blue quads in the buffer.
    • Now you want to draw the green quads. So you take another lock, NoOverwrite again because there is still room in the dynamic buffer. You have kept track of the insertion point (orange arrow again). You insert the data for the green quad, but only the one fits. You unlock the buffer and DrawIndexed for the 1 green quad.
    • You filled up the buffer, but have more green quads to draw. So this time you take the lock with the Discard flag, which gives you a fresh buffer to write data to. The buffer is empty because you called Discard, so the insertion point (orange arrow) is back at the top. You insert two more green quads, then realize a red quad is next. So you unlock the buffer and DrawIndexed on the 2 green quads in the dynamic buffer.
    • Only the red quad is left in your internal list. You need to take a lock again, using NoOverwrite because there is still room for it in the dynamic buffer. The orange arrow is again your insertion point. You put the red quad data in, unlock the buffer, and DrawIndexed on the red quad data.

    The internal buffer doesn't have to be in the vertex format. You need to keep track of enough data to tell when a state change is needed (a texture change in this case), and enough data to be able to create the vertex data for the quads.
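    The same flow in code form, as a sketch only: the helper functions, the externs, and the 3-quad capacity are stand-ins matching the picture, not real library calls.

        #include <algorithm>
        #include <d3d9.h>
        #include <vector>

        // Illustrative declarations; the real types and members come from your SpriteBatch.
        struct Texture;
        struct Vertex { float x, y, z, u, v; };
        struct DrawData { Texture* m_texture; /* rect, color, etc. */ };

        extern std::vector<DrawData> m_drawList;      // the internal list ("Internal buffer")
        extern UINT m_quadsWritten;                   // quads written since the last DISCARD (orange arrow)
        const UINT kMaxQuadsInBuffer = 3;             // buffer capacity from the picture

        Vertex* LockVertexBuffer(DWORD flag);         // wraps IDirect3DVertexBuffer9::Lock
        void    UnlockVertexBuffer();
        void    WriteQuad(Vertex* dest, UINT quadSlot, const DrawData& item);
        void    SetTexture(Texture* texture);
        void    DrawQuads(UINT firstQuad, UINT quadCount);   // wraps DrawIndexedPrimitive over that range

        void EndBatchWalk()
        {
            std::sort(m_drawList.begin(), m_drawList.end(),
                [](const DrawData& a, const DrawData& b) { return a.m_texture < b.m_texture; });

            size_t i = 0;
            while (i < m_drawList.size())
            {
                // Full buffer? Take the lock with DISCARD and start at the top; otherwise append.
                DWORD flag = (m_quadsWritten >= kMaxQuadsInBuffer) ? D3DLOCK_DISCARD
                                                                   : D3DLOCK_NOOVERWRITE;
                if (flag == D3DLOCK_DISCARD)
                    m_quadsWritten = 0;

                Texture* texture = m_drawList[i].m_texture;
                UINT firstQuad = m_quadsWritten;

                Vertex* dest = LockVertexBuffer(flag);
                // Copy quads until the texture changes or the buffer fills up.
                while (i < m_drawList.size()
                       && m_drawList[i].m_texture == texture
                       && m_quadsWritten < kMaxQuadsInBuffer)
                {
                    WriteQuad(dest, m_quadsWritten, m_drawList[i]);
                    ++m_quadsWritten;
                    ++i;
                }
                UnlockVertexBuffer();

                SetTexture(texture);
                DrawQuads(firstQuad, m_quadsWritten - firstQuad);   // 2 blue, then 1 green, 2 green, 1 red
            }
        }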
  10. DX11

      Yes, the goal being to find some balance. Too small and you have to call DISCARD too often; too big and you are just wasting a bunch of memory.

      Your example of creating them in the constructor still didn't create them as dynamic buffers, though. Look at the docs for CreateVertexBuffer and at the usage flags:

      http://msdn.microsoft.com/en-us/library/windows/desktop/bb147263(v=vs.85).aspx#Using_Dynamic_Vertex_and_Index_Buffers
      http://msdn.microsoft.com/en-us/library/windows/desktop/bb174364(v=vs.85).aspx
      http://msdn.microsoft.com/en-us/library/windows/desktop/bb172625(v=vs.85).aspx

      Since you know you'll have data changing frequently (per frame), you want to do it in the most efficient way you can. Creating and destroying static buffers each frame is going to be spendy. Dynamic buffers are the solution to frequently changing data; they are designed to support frequent updates. That's the biggest part to wrap your mind around. Most of the rest of what I was saying is just implementation flavor -- do whatever you need to make your life easier. Understand the dynamic buffers and their pattern of usage, then wrap whatever code you need around that in your SpriteBatch.

      You are on the right track with your sample code. However, I would stay away from using a fixed-size array for queuing up the data. What you want to be able to do is take an unknown number of SpriteBatch::Draw calls, queue them up, and then when you call SpriteBatch::EndBatch, fill up the dynamic buffer as many times as it takes to draw everything in your internal queue. If your internal size is fixed, what happens if Draw is called more times than you have storage for?

      Have you looked at DirectXTK? http://directxtk.codeplex.com/ It actually has a SpriteBatch in it, which might save you a lot of time trying to create your own. The source is there for you to study, or adapt into your own creations. I haven't looked at the source, but I'm pretty sure it follows a similar pattern with dynamic buffers -- based on Shawn's writeup of DISCARD/NO_OVERWRITE dynamic buffers in XNA: http://blogs.msdn.com/b/shawnhar/archive/2010/07/07/setdataoptions-nooverwrite-versus-discard.aspx
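      For reference, "create them as dynamic buffers" with the D3D9 API looks roughly like this (a sketch: the vertex layout, size, and function name are placeholders, and the HRESULTs should be checked in real code):

        #include <d3d9.h>

        struct SpriteVertex { float x, y, z; float u, v; };   // placeholder layout

        void CreateSpriteBuffers(IDirect3DDevice9* device,
                                 IDirect3DVertexBuffer9** vbOut,
                                 IDirect3DIndexBuffer9** ibOut)
        {
            const UINT kMaxQuads = 2048;                       // pick a size with some headroom

            device->CreateVertexBuffer(
                kMaxQuads * 4 * sizeof(SpriteVertex),
                D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,         // dynamic: meant to be locked and refilled often
                0,                                             // FVF of 0 when using a vertex declaration
                D3DPOOL_DEFAULT,                               // dynamic buffers don't go in the managed pool
                vbOut, nullptr);

            device->CreateIndexBuffer(
                kMaxQuads * 6 * sizeof(WORD),                  // 6 indices per quad
                D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
                D3DFMT_INDEX16,
                D3DPOOL_DEFAULT,
                ibOut, nullptr);
        }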
  11. Are you looking at all of the HRESULT codes returned from the DX functions? It could be that something hasn't been created to the satisfaction of DX.

      Also, create the device with the debug flag and DX will spew warnings for a lot of things. This can be really helpful.

      You also might want to look at one of the graphics debuggers that are available. You can pause your app and analyze how a frame is being constructed. If you're using VS2012, there is one built in now. Use PIX if you're on an older API (I don't think it works with DX11). Nsight, if you have an NVIDIA card, is really nice. I forget what AMD's version is called, but they have one too.

      Personally, I would start with the debug output if you don't have it on.
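      If you are on the DX11 API, the debug layer is turned on with a device-creation flag; roughly like this (sketch, feature-level parameters left at their defaults):

        #include <d3d11.h>

        UINT flags = 0;
        #if defined(_DEBUG)
        flags |= D3D11_CREATE_DEVICE_DEBUG;            // debug layer prints warnings/errors to the output window
        #endif

        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
            flags, nullptr, 0,                         // default feature levels
            D3D11_SDK_VERSION, &device, nullptr, &context);

        if (FAILED(hr))
        {
            // Don't ignore this (or any other) HRESULT; log it and bail out.
        }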
  12. I'm not really clear on what you are asking...?

      Are you happy with the dark textured cube? It sounds kind of like you are saying the cube is correct like that? I can barely see that there is some texture on it, so I would say you are missing some ambient adjustment (because those faces are on the sides opposite the light), or the cube's normals are messed up.

      The cube mesh is a little overly complicated for a cube, although that shouldn't really matter if all you're rendering is that cube (assuming the normals and lighting are correct).

      As to why you got an in-your-face texture, I would guess that you rendered the quad with the cube's material still bound, not the render target you are meaning to display.

      If you can see the cube when you don't render the quad, then you rendered the cube to the back buffer, not to a separate render target/texture. The screen would be blank (clear color) if you rendered it to a separate texture and then didn't paste the image back via the full-screen quad.
  13. DX11

    Instead of creating your vertex and index buffers in endBatch, you would create them once: create them in the constructor (as dynamic buffers) and free them in the destructor. I am not really sure if there is a "golden" size to pick, but the goal would be that you lock the buffers more often with NOOVERWRITE than DISCARD (discard being more expensive if the driver needs to allocate a new buffer for you).

    You need another buffer of some sort to keep track of your Draw calls. You don't want to immediately send all your Draw calls to the GPU; you want to batch them up instead (SpriteBATCH). This buffer has nothing to do with the GPU; it is just a list you maintain in some way that is convenient for you. If you call Draw 200 times, this internal list would keep track of the data from those 200 calls. When you call your endBatch, that is when you want to worry about getting the data to the GPU.

    When you call endBatch, if those 200 Draw calls didn't need to change any state, you might be able to get away with sending them to the GPU with a single draw call -- they would all have to use the same state and same image in that case (like a sprite font).

    There are a couple of ways you could look at making the SpriteBatch. You could set up all the state external to the SpriteBatch, call BeginBatch, do all your draw calls, then call endBatch for each state change (you would be externally driving it this way). Or you could make your SpriteBatch more complicated, such as giving your draw call different textures, and let your SpriteBatch worry about sorting out the details -- 5 different images to render, that's 5 state changes. Then when you EndBatch you can sort your list by texture, set up the GPU states, put all the vertices/indices in the dynamic buffer, call draw, then start processing the next image's quads, until you have sent all your Draw calls to the GPU.

    For the dynamic buffers, you copy the data in like you are currently doing, but the buffer is always there and you need to make sure you insert new data for drawing after the last data you inserted (NOOVERWRITE). In your lock calls, you didn't use any flags. For a dynamic buffer the flag needs to be the NOOVERWRITE or DISCARD flag.

    Scenario:
    • You queue up 150 quads to draw via your SpriteBatch::Draw calls.
    • You call EndBatch.
    • Your dynamic buffer is big enough to hold 100 quads at a time.
    • You have not yet done any sprite batch processing, so your insert point into the dynamic buffers is at the start, 0.

    Process:
    • Get locks on your vertex and index buffers. When you take the lock, you need to figure out whether you want to lock it with the NOOVERWRITE or DISCARD flag. Since we are looking at an empty buffer (you haven't inserted anything yet), you want to use NOOVERWRITE. You only want to lock with DISCARD when the buffer is full or almost full.
    • When you lock with NOOVERWRITE, you immediately get a pointer to the buffer, even if the GPU is currently pulling data from it to draw things. That's why you say NOOVERWRITE (don't overwrite anything you have previously put in there).
    • Fresh buffer, you are at the start of it, 100 quads will fit. Fill up the 100 quads' worth of data.
    • Unlock the buffers.
    • DrawIndexed on all the data in the dynamic buffer.
    • You still have 50 quads to draw. Take another lock on the dynamic buffer, but this time use the DISCARD flag because you already filled it up. If the GPU is still drawing with the data you gave it, you will get a pointer to different memory.
    • Start back at the top (fresh buffer) and add your 50 quads' worth of data to it.
    • Unlock the buffers.
    • DrawIndexed on the dynamic buffer.
    • Call Present and see all your quads via the 2 batches you sent.

    For the next frame, I am not sure what the best practice is. In my implementation, I remember that I already put 50 quads in the dynamic buffer, and any quads I add next time I take the lock with NOOVERWRITE again and continue filling it up. However, since the quads have been drawn, you could probably make your life easier and always use DISCARD at the start of a new frame; I'm unsure on that one. You would want to be careful about doing that in BeginBatch, since you could technically call BeginBatch/EndBatch multiple times per frame if you wanted to. In that case NOOVERWRITE would be the better choice, since it is in the same frame and the GPU is probably still drawing your previous quads.
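    That 150-quads-into-a-100-quad-buffer scenario as a loop, sketch only: the helper functions stand in for the Lock/memcpy/Unlock/DrawIndexedPrimitive steps above, and the names are placeholders.

        #include <d3d9.h>

        // Placeholder helpers; implement these around your own buffers.
        void LockBuffers(DWORD flag);
        void CopyQuads(UINT firstQuad, UINT count, UINT bufferSlot);
        void UnlockBuffers();
        void DrawQuadRange(UINT firstQuadInBuffer, UINT count);

        void DrawQueuedQuads(UINT queued)               // 150 in the scenario above
        {
            const UINT kBufferCapacity = 100;           // quads the dynamic buffer holds
            static UINT quadsInBuffer = 0;              // quads written since the last DISCARD
            UINT nextQuad = 0;                          // read position in the internal draw list

            while (queued > 0)
            {
                UINT roomLeft = kBufferCapacity - quadsInBuffer;
                DWORD flag = (roomLeft == 0) ? D3DLOCK_DISCARD : D3DLOCK_NOOVERWRITE;
                if (flag == D3DLOCK_DISCARD)
                {
                    quadsInBuffer = 0;                  // fresh buffer, insertion point back at the top
                    roomLeft = kBufferCapacity;
                }

                UINT batch = (queued < roomLeft) ? queued : roomLeft;   // 100 on the first pass, 50 on the second

                LockBuffers(flag);
                CopyQuads(nextQuad, batch, quadsInBuffer);   // write after whatever is already in the buffer
                UnlockBuffers();
                DrawQuadRange(quadsInBuffer, batch);         // draw only the range just written

                quadsInBuffer += batch;
                nextQuad += batch;
                queued -= batch;
            }
        }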
  14. DX11

    You are kinda on the right track with the sprite batch, but there are some concepts that have been missed. Although I haven't done this in DX9, I have in DX10 and DX11, and conceptually I would imagine it is the same in DX9.

    You don't want your sprite batch calling Present, which it currently is doing. This will definitely cause some flickering since you will only partially draw your sprites in each frame. It should be more like:
    • Clear the back buffer.
    • Draw sprites, as many as you want.
    • Present (from somewhere outside the spritebatch, when you know nothing else will be rendered).

    In your SpriteBatch you should create dynamic vertex and index buffers. The key here is to make them dynamic. You are currently creating new buffers each time endBatch is called, which I would imagine is slow if you do it a lot. I didn't see them cleaned up either, but I didn't look through all of the code.

    Your sprite batch needs a buffer to track your queued-up quads and also needs to keep track of the last position you wrote to in the dynamic buffers. When your local buffer is full, you need to get a lock on the dynamic buffers, write your new vertex and index data wherever you left off, release the lock, then DrawIndexed (whatever the DX9 equivalent is), giving the offset into your dynamic buffer that you just wrote new data to.

    When you lock the dynamic buffers you want to use lock flags to let the API know what your intentions are. If you are still filling up the dynamic buffer, use D3DLOCK_NOOVERWRITE; you are promising that you are not going to overwrite any data that has already been submitted and might currently be drawing. Technically, I think you CAN overwrite if you want, but you'll mess things up and probably see it. I'm pretty sure that if you don't lock with this flag, you wait until the GPU is done drawing the contents before you get the lock (bad/slow). If you are towards the end of your dynamic buffer, you want to specify the DISCARD flag. This returns a pointer that you can start filling from the top again. If the GPU is still processing data you already submitted, you get a new pointer here. The point is, you don't have to care whether the GPU is done with it or not; if you say DISCARD you get a pointer that is valid to write to from the top.

    The process is something like this:
    • Take a lock on the buffers: full or almost full, use DISCARD; plenty of room left, use NOOVERWRITE.
    • Add vertex/index data to the buffers.
    • Release the lock.
    • DrawIndexed.

    You probably want your own list of quads in your sprite batch rather than directly writing them to the dynamic buffers. This lets you sort later on down the road if you need to. Fill up the internal list of quads via your draw calls, call EndBatch, sort your list by image (or whatever causes a state change), and fill the dynamic buffers (possibly with multiple lock/fill/unlock/DrawIndexed cycles).

    You will need to consider how you sort quads, or whether you sort at all, if you batch them up like I mentioned. If your quads are drawn in an order-dependent manner (some have to be on top of others), then sorting by image alone isn't enough. You'll have to decide what forces your sprite batch to submit new quads.