reaperrar

Members
  • Content count

    100
  • Joined

  • Last visited

Community Reputation

136 Neutral

About reaperrar

  • Rank
    Member
  1. Can you enlighten as to the difference between 2D voxels and pixels?   My interpretation would be that a voxel has volume and a pixel is more of a point. A 2D voxel is one that exists in 2D space, so a 2D volume.   Two-dimensional space does not have a concept of volume. Do you mean area?   Yes, area would be more accurate.
  2. Can you enlighten as to the difference between 2D voxels and pixels?   My interpretation would be that a voxel has volume and a pixel is more of a point. A 2D voxel is one that exists in 2D space, so a 2D volume.   Thank you for your answer.
  3. For a 2D voxel game, is it viable to store the procedurally generated mesh to a file?   I know disk operations can be a bottleneck, but so can generating meshes on the fly. I'm wondering which generally has the lower performance impact.   Minecraft saves a lot of data with its chunks, but there is no mention of the actual meshes being saved, afaik.
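  A rough sketch (not from the post) of what caching a chunk's generated mesh as raw binary might look like, so it can be re-loaded instead of re-generated; the Vertex layout and function names are assumptions:

    #include <fstream>
    #include <string>
    #include <vector>

    struct Vertex { float x, y, u, v; };   // assumed 2D vertex layout

    bool SaveChunkMesh(const std::string& sPath, const std::vector<Vertex>& vVerts)
    {
        std::ofstream oFile(sPath.c_str(), std::ios::binary);
        if (!oFile) return false;
        const unsigned int uiCount = static_cast<unsigned int>(vVerts.size());
        oFile.write(reinterpret_cast<const char*>(&uiCount), sizeof(uiCount));
        if (uiCount != 0)
            oFile.write(reinterpret_cast<const char*>(&vVerts[0]), uiCount * sizeof(Vertex));
        return oFile.good();
    }

    bool LoadChunkMesh(const std::string& sPath, std::vector<Vertex>& vVerts)
    {
        std::ifstream oFile(sPath.c_str(), std::ios::binary);
        if (!oFile) return false;
        unsigned int uiCount = 0;
        oFile.read(reinterpret_cast<char*>(&uiCount), sizeof(uiCount));
        vVerts.resize(uiCount);
        if (uiCount != 0)
            oFile.read(reinterpret_cast<char*>(&vVerts[0]), uiCount * sizeof(Vertex));
        return oFile.good();
    }

  Whether reading this back is actually faster than regenerating the mesh depends on chunk size and disk speed, so profiling both paths is probably the only real answer.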
  4. In a texture I have a square sprite (though it does not take up the whole texture, just a small portion).   I set up my UV coordinates to target just the sprite.   Sampler code:

    oSamplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
    oSamplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    oSamplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    oSamplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    oSamplerDesc.MipLODBias = 0.0f;
    oSamplerDesc.MaxAnisotropy = 1;
    oSamplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
    oSamplerDesc.BorderColor[0] = 0;
    oSamplerDesc.BorderColor[1] = 0;
    oSamplerDesc.BorderColor[2] = 0;
    oSamplerDesc.BorderColor[3] = 0;
    oSamplerDesc.MinLOD = 0;
    oSamplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

  When rendering the sprite to the screen using an orthographic projection with no rotation, it renders perfectly.   However, if I scale the sprite non-uniformly (as it is being used as a frame that stretches), the parts of the texture surrounding the sprite seem to be sampled.   I drew a coloured border around my square sprite and can see the red bleeding into the edges of the texture in the above scenario.   A more detailed explanation... Here are some frame pieces for a frame: a bar and a corner. The bar is to be stretched to accommodate the frame dimensions, whereas the corner is not. Here is the frame rendering in-game. Notice the coloured borders are ignored; they were not included in the UV coordinates I set.   Now here is the exact same example as above, except I've changed the overall scale of the frame to reduce its size. (I've re-shaped the frame, as that is part of its functionality.)   I get the positions for my texture sampling like this... The squares represent pixels, the blue squares are pixels in the sprite, and the red circles are the positions I use.   Why is there sampling around the sprite when the frame is scaled smaller?
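  One common cause (offered as a guess, not confirmed in the thread) is that the UVs are built from texel corners, so once the frame is minified the screen pixels' sample points land on texels just outside the sprite's rectangle. A typical workaround is to inset the UV rectangle by half a texel; a minimal sketch, where the sprite rectangle (in pixels) and texture size are assumed inputs:

    // Hedged sketch: inset a sprite's UV rect by half a texel to reduce bleeding
    // from neighbouring texels when the sprite is scaled down.
    struct UVRect { float u0, v0, u1, v1; };

    UVRect ComputeInsetUVs(float fSpriteLeft, float fSpriteTop,
                           float fSpriteRight, float fSpriteBottom,
                           float fTexWidth, float fTexHeight)
    {
        const float fHalfTexelU = 0.5f / fTexWidth;
        const float fHalfTexelV = 0.5f / fTexHeight;
        UVRect oUV;
        oUV.u0 = fSpriteLeft   / fTexWidth  + fHalfTexelU;   // pull edges inward
        oUV.v0 = fSpriteTop    / fTexHeight + fHalfTexelV;
        oUV.u1 = fSpriteRight  / fTexWidth  - fHalfTexelU;
        oUV.v1 = fSpriteBottom / fTexHeight - fHalfTexelV;
        return oUV;
    }

  Padding the atlas with duplicated edge texels, or switching the sampler to CLAMP addressing, are other frequently suggested mitigations for the same symptom.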
  5. What's HZBLENDVALUE_ONE supposed to be? Typo in the post, not in the code... my bad. Edited
  6. SOLVED   Compiling with Visual Studio 2010...   If compiling in DEBUG mode with D3D11_CREATE_DEVICE_DEBUG set for device creation, alpha blending works fine. If compiling in DEBUG mode with D3D11_CREATE_DEVICE_DEBUG NOT set for device creation, alpha blending works fine. If compiling in RELEASE mode with D3D11_CREATE_DEVICE_DEBUG set for device creation, alpha blending works fine. If compiling in RELEASE mode with D3D11_CREATE_DEVICE_DEBUG NOT set for device creation, alpha blending does not work... it looks like alpha testing.

    D3D11_BLEND_DESC oBlendStateDesc;
    oBlendStateDesc.AlphaToCoverageEnable = 0;
    oBlendStateDesc.IndependentBlendEnable = 0;
    for (unsigned int a = 0; a < D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT; ++a)
    {
        oBlendStateDesc.RenderTarget[a].BlendEnable = 1;
        oBlendStateDesc.RenderTarget[a].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        oBlendStateDesc.RenderTarget[a].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        oBlendStateDesc.RenderTarget[a].BlendOp = D3D11_BLEND_OP_ADD;
        oBlendStateDesc.RenderTarget[a].SrcBlendAlpha = D3D11_BLEND_ONE;
        oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ONE;
        oBlendStateDesc.RenderTarget[a].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        oBlendStateDesc.RenderTarget[a].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    }
    HRESULT HResult = m_poDevice->CreateBlendState(&oBlendStateDesc, &m_poBlendState); // HResult returns S_OK

  Anyone have any idea what may be causing this?   UPDATE: Tested on two different PCs... both Windows 7 64-bit; one has a GTX 460 and the other a GTX 580. The one with the GTX 580 does not have any problems with alpha blending.   SOLVED: There was an uninitialized variable in there somewhere, and I can only assume creating the device in debug mode caught the error. Another difference between the two tested machines is the Visual Studio version, so perhaps the machine with the later version (GTX 580) caught the error as well.
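  A minimal sketch (an addition, not from the thread) of one way to rule out this whole class of debug/release difference: value-initialize the description so every field starts at zero in either build, then set only the fields you care about.

    // Zero-initialize the blend description so no member is left holding garbage
    // in release builds (ZeroMemory(&oBlendStateDesc, sizeof(oBlendStateDesc))
    // achieves the same thing).
    D3D11_BLEND_DESC oBlendStateDesc = {};
    oBlendStateDesc.RenderTarget[0].BlendEnable           = TRUE;
    oBlendStateDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    oBlendStateDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    oBlendStateDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    oBlendStateDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    oBlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;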
  7. I'm interested in using a low-level C++ framework (engine?) for creating 2D games, one that doesn't cost much and provides the following cross-platform (across PC) functionality: graphics, input, networking, physics, and sound.   I haven't had much luck finding anything that matches these criteria... Torque2D seems the closest, but I couldn't find much documentation or many tutorials about working with the engine at a core level (the examples seem to be mostly scripting). In my spare time I created a DirectX11 framework that suits most of my needs listed above (C++, 2D, graphics, input, physics via Box2D). Since I couldn't find much else, I'm thinking it's best to continue developing that framework and make it cross-platform (across PC). At present I'm trying to do this with the graphics and input, so I'm looking into OpenGL 3.3... I'm thinking it will match all of the functionality I'm using in DX11 (currently using the DX10 feature level), allowing games I create to run on Windows XP/Vista, Mac, and Linux as well, instead of just Windows 7+. Currently I'm focusing on making these components cross-platform: input (keyboard and mouse), window (will need to work with both DX11 and OpenGL 3.3), and timing... followed by implementing OpenGL 3.3. Research I've done so far: going cross-platform in this way means I can target roughly another 20% of PC users (according to Steam surveys); FreeGLUT - stay away from, you don't even have control over the render loop; SDL - input, window (usable with DX11 and OpenGL 3.3?), timing. I'm unsure if SDL is usable with OpenGL 3.3, especially since I'd be using an "unstable" version. I'd like to control initialization of OpenGL directly if possible/reasonable, sending it the handle of the window (if that is how it is done on other OSes besides Windows). I'm thinking a good source of information would be the fifth edition of the OpenGL SuperBible. Given all this... can anyone give advice or mention any other resources?
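  For what it's worth, SDL 2 does let you request a core-profile context of a specific version before the window is created; a minimal sketch (not from the post), with window size and title as placeholders:

    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        (void)argc; (void)argv;
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        // These attributes must be set before the window and context are created.
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

        SDL_Window* poWindow = SDL_CreateWindow("GL 3.3 test",
                                                SDL_WINDOWPOS_CENTERED,
                                                SDL_WINDOWPOS_CENTERED,
                                                1280, 720, SDL_WINDOW_OPENGL);
        SDL_GLContext oContext = SDL_GL_CreateContext(poWindow);

        // ... load GL function pointers and run the render loop here ...

        SDL_GL_DeleteContext(oContext);
        SDL_DestroyWindow(poWindow);
        SDL_Quit();
        return 0;
    }

  On Windows, SDL can also hand back the native window handle (via SDL_GetWindowWMInfo) if the same window needs to be used for a D3D11 swap chain, so one windowing path can serve both back ends.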
  8. Ty for the help. The problem was in fact somewhere unrelated, though fortunately not too far away. I stopped looking at the list itself thanks to your replies.
  9. What are some common causes of unhandled exceptions from a std::list's clear() call? Given...   * The list does not contain pointers, and no resource is ever released manually, e.g. std::list<int>   * It breaks here (inside <list>):

    #if _ITERATOR_DEBUG_LEVEL == 2
    void _Orphan_ptr(_Myt& _Cont, _Nodeptr _Ptr) const
    {   // orphan iterators with specified node pointers
        _Lockit _Lock(_LOCK_DEBUG);
        const_iterator **_Pnext = (const_iterator **)_Cont._Getpfirst();
        if (_Pnext != 0)
            while (*_Pnext != 0)
                if ((*_Pnext)->_Ptr == this->_Myhead || _Ptr != 0 && (*_Pnext)->_Ptr != _Ptr)   // !!BREAKS HERE - on the second pass through the loop (while (*_Pnext != 0))
                    _Pnext = (const_iterator **)(*_Pnext)->_Getpnext();
                else
                {   // orphan the iterator
                    (*_Pnext)->_Clrcont();
                    *_Pnext = *(const_iterator **)(*_Pnext)->_Getpnext();
                }
    }
    #endif /* _ITERATOR_DEBUG_LEVEL == 2 */
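  A hedged illustration (not the poster's code) of one common cause: an out-of-bounds write on a neighbouring member silently corrupts the list's internal bookkeeping, and the crash only surfaces later inside clear(), far from the actual bug. Mixing translation units or libraries built with different _ITERATOR_DEBUG_LEVEL settings is another usual suspect.

    #include <list>

    struct Widget
    {
        int m_aiBuffer[4];
        std::list<int> m_lValues;
    };

    int main()
    {
        Widget oWidget;
        oWidget.m_lValues.push_back(1);

        for (int i = 0; i <= 4; ++i)        // off-by-one: i == 4 writes past the array,
            oWidget.m_aiBuffer[i] = -1;     // stomping the memory that follows it

        oWidget.m_lValues.clear();          // the corruption is detected (or crashes) here
        return 0;
    }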
  10. Trying to set up a class which renders text to a texture (specifically in 2D). I have text rendering working at the moment, but obviously if the string never changes I could save a lot of performance by rendering the static text into a single texture.   I got it working to some degree, though I have another post about that issue which I think is unrelated to this post... http://www.gamedev.net/topic/637666-render-target-alpha-blending/   I'm posting this to ask if these are the normal steps you would take to achieve this: disable depth & stencil testing (2D); enable alpha blending; set up the projection to accommodate the text block dimensions; set up a render target Texture2D to accommodate the text block dimensions; set up the viewport to accommodate the text block dimensions; EDIT: clear the render target (forgot to mention that, ty L.Spiro for reminding me); render the text into the texture; then set back to the default projection, viewport and render target. One thing I forgot when following these steps was that texture dimensions are supposed to be a power of two, though it appeared to render undistorted (the text block dimensions were not a power of two or even close).
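  A rough sketch of the render-target setup those steps describe (a condensation, not the poster's code); pDevice/pContext, the text drawing itself, and the restore of the default state are assumed to exist elsewhere, and the dimensions are placeholders:

    const UINT uiTextBlockWidth  = 256;
    const UINT uiTextBlockHeight = 64;

    D3D11_TEXTURE2D_DESC oTexDesc = {};
    oTexDesc.Width            = uiTextBlockWidth;
    oTexDesc.Height           = uiTextBlockHeight;
    oTexDesc.MipLevels        = 1;
    oTexDesc.ArraySize        = 1;
    oTexDesc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    oTexDesc.SampleDesc.Count = 1;
    oTexDesc.Usage            = D3D11_USAGE_DEFAULT;
    oTexDesc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D*          poTexture = nullptr;
    ID3D11RenderTargetView*   poRTV     = nullptr;
    ID3D11ShaderResourceView* poSRV     = nullptr;
    pDevice->CreateTexture2D(&oTexDesc, nullptr, &poTexture);
    pDevice->CreateRenderTargetView(poTexture, nullptr, &poRTV);
    pDevice->CreateShaderResourceView(poTexture, nullptr, &poSRV);

    // Bind the texture as the render target, clear it, and match the viewport.
    const float afClear[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    D3D11_VIEWPORT oViewport = {};
    oViewport.Width    = static_cast<float>(uiTextBlockWidth);
    oViewport.Height   = static_cast<float>(uiTextBlockHeight);
    oViewport.MaxDepth = 1.0f;

    pContext->OMSetRenderTargets(1, &poRTV, nullptr);   // no depth/stencil (2D)
    pContext->ClearRenderTargetView(poRTV, afClear);
    pContext->RSSetViewports(1, &oViewport);
    // ... draw the text here, then restore the backbuffer RTV, viewport and projection ...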
  11. FINAL EDIT: Resolved... I just needed to learn how alpha blending works in depth. I should have had

    oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;

  ...set to D3D11_BLEND_ONE to preserve the alpha. When rendering to the backbuffer only once, the problem would not be noticed, as the colours blend normally and that is the final output. When rendering to the texture the same thing applies; it is only when that texture is then rendered to the backbuffer that the incorrect alpha plays a role and incorrectly blends the texture into the backbuffer. I then ran into another issue where the alpha seemed to be decreasing. This is because the colour is blended twice, for example:

    Source.RGBA = 1.0f, 0.0f, 0.0f, 0.5f
    Dest.RGBA   = 0.0f, 0.0f, 0.0f, 0.0f

    Render into texture...
    Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.5f, 0.0f, 0.0f
    Result.A   = Source.A * 1 + Dest.A * 1 = 0.5f

    Now...
    Source.RGBA = 0.5f, 0.0f, 0.0f, 0.5f
    Dest.RGBA   = 0.0f, 0.0f, 0.0f, 0.0f

    Render into backbuffer...
    Result.RGB = Source.RGB * Source.A + Dest.RGB * (1 - Source.A) = 0.25f, 0.0f, 0.0f
    Result.A   = Source.A * 1 + Dest.A * 1 = 0.5f

  To resolve this, when rendering the texture into the backbuffer I use the same blend state but change SrcBlend to D3D11_BLEND_ONE so the colour is not blended twice. Hopefully this helps anyone else having a similar problem (a condensed sketch of the two blend configurations follows this post). EDITEND

  When rendering to a render target there doesn't seem to be any alpha blending happening between what is being rendered in and what is already in the render target.   I've disabled depth and stencil testing (EDIT: rendering in 2D).   I've enabled alpha blending with the following:

    D3D11_BLEND_DESC oBlendStateDesc;
    oBlendStateDesc.AlphaToCoverageEnable = 0;
    oBlendStateDesc.IndependentBlendEnable = 0; // set to false, so the loop below isn't needed... but just in case
    for (unsigned int a = 0; a < 8; ++a)
    {
        oBlendStateDesc.RenderTarget[a].BlendEnable = 1;
        oBlendStateDesc.RenderTarget[a].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        oBlendStateDesc.RenderTarget[a].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        oBlendStateDesc.RenderTarget[a].BlendOp = D3D11_BLEND_OP_ADD;
        oBlendStateDesc.RenderTarget[a].SrcBlendAlpha = D3D11_BLEND_ONE;
        oBlendStateDesc.RenderTarget[a].DestBlendAlpha = D3D11_BLEND_ZERO;
        oBlendStateDesc.RenderTarget[a].BlendOpAlpha = D3D11_BLEND_OP_ADD;
        oBlendStateDesc.RenderTarget[a].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    }
    // Create the blend state from the description
    HResult = m_poDevice->CreateBlendState(&oBlendStateDesc, &m_poBlendState_Default);
    m_poDeviceContext->OMSetBlendState(m_poBlendState_Default, nullptr, 0xffffff);

  If rendering to the backbuffer, alpha blending works and I am not changing the blend state. When rendering the render target texture to the backbuffer, the alpha blending works fine as well.   Anyone know how I can fix this?   EDIT: It seems it just ignores whatever pixel is already inside the render target texture and overwrites it with what is being rendered in. The alpha is preserved though, so when I render the texture to the screen I see my sprites on the texture blending with the backbuffer (they just don't appear blended with each other within the texture).   EDIT: I'm under the impression that I'm missing an important step to get blending working that you wouldn't normally take unless not rendering to the backbuffer.   EDIT: Since I'm getting no replies I'm thinking that there are no extra steps, that the alpha blending should work and I should keep going over what I have... can anyone confirm this?
EDIT: If I set AlphaToCoverageEnable to true it blends, but looks terrible. That at least confirms it is using the same blend state... it just works differently depending on whether I'm rendering to the backbuffer or to a texture :/   Here's some visualization...   1. Rendering to backbuffer - alpha blending enabled. 2. Rendering to texture - alpha blending enabled. 3. Rendering to backbuffer - alpha blending disabled. 4. The letter T taken from the font file.   * When rendering with AB disabled, the letters match exactly (compare 4 & 3). * When rendering to the backbuffer with AB enabled, the letters render slightly (hardly noticeably) washed out but still blend (compare 4 & 1). * When rendering to a texture with AB enabled, the letters render even more noticeably washed out while not blending at all (compare 4 & 2). Not sure why the colours are washed out with alpha blending enabled... but maybe it's a clue?   EDIT: If I clear the render target texture to, say, 0.0f, 0.0f, 1.0f, 1.0f (RGBA, blue)... this is the result:   Only the pixels with alpha > 0.0f and < 1.0f blend with the colour. Another clue, but I have no idea how to resolve this issue...
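  A condensed sketch of the fix described in the FINAL EDIT above (a summary of the post, not new behaviour): one blend configuration for drawing into the offscreen texture that accumulates alpha, and one for compositing that texture onto the backbuffer that treats the source colour as already multiplied by alpha.

    // Pass 1: drawing sprites/text INTO the offscreen texture.
    // Colour blends normally; alpha is kept (DestBlendAlpha = ONE) instead of
    // being zeroed, so the texture ends up with usable coverage.
    D3D11_RENDER_TARGET_BLEND_DESC oToTexture = {};
    oToTexture.BlendEnable           = TRUE;
    oToTexture.SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    oToTexture.DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    oToTexture.BlendOp               = D3D11_BLEND_OP_ADD;
    oToTexture.SrcBlendAlpha         = D3D11_BLEND_ONE;
    oToTexture.DestBlendAlpha        = D3D11_BLEND_ONE;
    oToTexture.BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    oToTexture.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    // Pass 2: compositing the texture ONTO the backbuffer.
    // The texture's colour was already multiplied by alpha in pass 1, so use
    // SrcBlend = ONE to avoid multiplying by alpha a second time.
    D3D11_RENDER_TARGET_BLEND_DESC oToBackbuffer = oToTexture;
    oToBackbuffer.SrcBlend = D3D11_BLEND_ONE;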
  12. When you create a 2D texture with ID3D11Device::CreateTexture2D you specify a width and height inside the description argument. Can you change the width and height later if it was created with D3D11_USAGE_DEFAULT, or does the texture need to be recreated?
  13. The largest project I've ever worked on took 1 min to compile, lol. It's hard to grasp the reasons for avoiding two-phase construction because of this, I believe. I haven't got much real C++ experience, though I realised my design was wrong because of the research I did into it. I came here to the beginners area for convincing, and your post was most helpful, ty.   I have used placement new before, though I didn't make the connection that the allocation/construction happened in an identical way to my two-phase approach. I was using a pool, allocating space for the object and calling initialise when a new object was requested.   I'll avoid using two-phase construction flippantly in my design.
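  For reference, a minimal sketch (not the poster's pool) of the placement-new pattern being described, where the object is constructed directly in pre-allocated storage instead of being allocated and then handed a separate Initialise() call:

    #include <new>

    class Particle
    {
    public:
        explicit Particle(float fSpeed) : m_fSpeed(fSpeed) {}
    private:
        float m_fSpeed;
    };

    int main()
    {
        // Raw storage, standing in for a slot handed out by a pool.
        void* pvSlot = ::operator new(sizeof(Particle));

        // Construct in place: the object only ever exists fully constructed,
        // so there is no half-initialised state between "new" and "Init()".
        Particle* poParticle = new (pvSlot) Particle(2.5f);

        poParticle->~Particle();        // destroy explicitly
        ::operator delete(pvSlot);      // a real pool would recycle the slot instead
        return 0;
    }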
  14.   If possible, I'd like the memory required by the object to be local to the object rather than off in some random place. Plus I'm under the impression the vector will copy the object into each instance (as opposed to calling each instance's constructor with the value individually), so if the object is not meant to be copied that could be a problem down the track, I'm thinking.   I guess I'm just seeing if there are alternatives to the vector, though it appears there aren't any without two-phase construction.
  15. I'm a little confused about how I could go about doing this...

    class MyClass
    {
    public:
        MyClass(int i) : m_i(i) {}
        ~MyClass() {}
    private:
        int m_i;
    };

    int main()
    {
        const unsigned int uiArraySize = 1000;
        int iInitializeValue = 5;
        MyClass oClass[uiArraySize] = { iInitializeValue, iInitializeValue, iInitializeValue /* ...and so on, 1000 times? */ };
        return 0;
    }

  If I want all instances of the class to initialize with the value iInitializeValue, is there some automatic way this can be done instead of copy-pasting it 1000 times xD?   EDIT: ...without using anything other than syntax? I don't want to be forced to use something in std::.   EDIT: Also, I assume achieving the same thing is out of the question with the new operator when creating an array, though it would then seem more appropriate to use std::vector... unless the object provided to be copied from is non-copyable. /sigh
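  For comparison with the vector approach discussed in the previous post, a minimal sketch (an addition, not from the thread) of the fill constructor, which copy-constructs every element from one prototype rather than requiring a default constructor or a thousand initializers:

    #include <vector>

    class MyClass
    {
    public:
        MyClass(int i) : m_i(i) {}
    private:
        int m_i;
    };

    int main()
    {
        const unsigned int uiArraySize = 1000;
        const int iInitializeValue = 5;

        // Every element is copy-constructed from the single prototype object,
        // so MyClass needs a copy constructor but no default constructor.
        std::vector<MyClass> oClasses(uiArraySize, MyClass(iInitializeValue));
        return 0;
    }

  The trade-off raised above still applies: this relies on the type being copyable, and the elements live in the vector's heap allocation rather than inline in the owning object.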