thewayout_is_through
  1. OK, this situation shows that Windows 10 installs targeting DX11 development are a lot more cumbersome than you'd think.   According to an MSDN blog post, on Windows 10 the D3D SDK Layers are an 'optional graphics tool' for Visual Studio, so you have to enable them either from the optional Windows Features or via Visual Studio's optional graphics tools: http://blogs.msdn.com/b/vcblog/archive/2015/03/31/visual-studio-2015-and-graphics-tools-for-windows-10.aspx   It just seems odd: if I'm a developer doing a full install of Visual Studio and/or the Windows SDKs, shouldn't it at least ask whether I want the Graphics Tools for debugging enabled?
  2. MSDN is your friend: https://msdn.microsoft.com/en-us/library/windows/desktop/jj863687%28v=vs.85%29.aspx   For developers currently working on applications in Microsoft Visual Studio 2010 or earlier using the D3D11_CREATE_DEVICE_DEBUG flag, be aware that calls to D3D11CreateDevice will fail. This is because the D3D11.1 runtime now requires D3D11_1SDKLayers.dll instead of D3D11SDKLayers.dll. To get this new DLL (D3D11_1SDKLayers.dll), install the Windows 8 SDK, or Visual Studio 2012, or the Visual Studio 2012 remote debugging tools.
  3. Propagate properties using a scene graph

    This link gives a pretty detailed introduction to the history and evolution of scene graphs: http://www.realityprime.com/blog/2007/06/scenegraphs-past-present-and-future/   As people iterated on the concept they tried to leverage it for too many things it wasn't optimal for, so implementations often became bloated and inefficient messes. All of that gave the word 'scenegraph' a somewhat negative connotation.   Even if they don't call it a scene graph, most engines still have something that fits the basic description of the concept. Good implementations keep it directed at a task: they reserve it for what it does well and avoid the temptation of a one-size-fits-all approach.   Today most engines wouldn't store object data in the graph itself, since a graph's strength is optimizing traversal of a hierarchy, not efficient storage for varied object types. Instead you can use specialized graphs, each organized in a way that's ideal for the task it needs to solve. Those graphs reference objects/entities that are built for composition, and the entities in turn reference data stored in whatever layout is most efficient for processing, transfer and storage, rather than forcing everything to couple to a representation that's only good for graph searches or object composition.
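As a concrete illustration of keeping a graph directed at one task, here's a minimal sketch of transform propagation: nodes hold only hierarchy plus a local offset, and reference heavier per-object data elsewhere by id. All names and the flat 2D offsets (instead of full matrices) are illustrative assumptions, not from the linked article.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal scene-graph node: hierarchy plus a local offset only.
// Heavyweight data would live in external stores, referenced via `entityId`.
struct Node {
    float localX = 0, localY = 0;   // transform relative to the parent
    float worldX = 0, worldY = 0;   // derived by propagation
    int entityId = -1;              // index into external data stores
    std::vector<size_t> children;   // indices into the node pool
};

// Walk the hierarchy, combining each node's local offset with its
// parent's world position -- the one job this graph is kept for.
void propagate(std::vector<Node>& nodes, size_t index,
               float parentX, float parentY) {
    Node& n = nodes[index];
    n.worldX = parentX + n.localX;
    n.worldY = parentY + n.localY;
    for (size_t child : n.children)
        propagate(nodes, child, n.worldX, n.worldY);
}
```

A root at (10, 0) with a child at local (5, 5) yields a child world position of (15, 5); nothing else about the entities needs to live in the graph.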
  4. You installed the Windows 10 SDK, but I was seeing this note on https://msdn.microsoft.com/en-us/library/windows/desktop/ff476107%28v=vs.85%29.aspx:   D3D11_CREATE_DEVICE_DEBUG Creates a device that supports the debug layer. To use this flag, you must have D3D11*SDKLayers.dll installed; otherwise, device creation fails. To get D3D11_1SDKLayers.dll, install the SDK for Windows 8.   So naturally that raises the question: do you have the SDK for Windows 8 installed, and if you search for D3D11_1SDKLayers.dll on the system, do you find it?  Windows 10 has DX12 built in, so the Windows 10 SDK may not necessarily include the DX11 libraries.
  5. Dynamic Difficulty Adjustment

    There are a few different ways I'm aware of that have been used to adjust perceived difficulty:
    • Scale Quantities - Increase/decrease the number of elements; e.g. enemies, items, etc.
    • Scale Frequencies - Increase/decrease the frequency of elements; e.g. enemies, items, events, etc.
    • Scale Stats - Increase/decrease the effects of player actions or of actions on the player; RPG-like leveling, anyone?
    • Scale Information - Increase/decrease the context clues/hints provided to the player; e.g. a character says "If I only had/knew 'x' I could [blank]", light glints off an object to draw attention, exaggerated movements, hint systems, etc.
    • Scale Solutions - Provide multiple avenues to achieve objectives, since not everyone is as good at the same tasks; e.g. stealth, brute force, coercion, etc.
    • Hybrid Scaling - Mix and match the above approaches to achieve a flexible balance of options.
    A lot of people have tried dynamic difficulty systems with varying degrees of success; generally it's done by measuring and statistically analyzing effectiveness at various tasks and setting upper/lower limits that trigger parameter changes. Some examples would be varying parameters based on:
    • Ratio of damage done vs. received.
    • Mean time to complete objectives.
    • Number of attempts.
    However it's really difficult and time consuming (lots of tweaking) to get it 'right', and it's very easy to get it wrong. Get it wrong and suddenly it's obvious, obnoxious, intrusive and unnatural to the player; hence the people who loathe the concept itself. Get it truly 'right' and probably no one ever knows it's there, because it feels transparent and natural. That makes it difficult to market as a feature; instead it becomes part of the intrinsic quality of the game experience rather than something you can demonstrate to people.
Odds are the games that felt 'right' to players of multiple skill levels are already doing this, even if the developers and designers never realized that's what they were doing as they tweaked game systems, added different paths and included RPG mechanics. It's one of those things that's part science and part art; most people can't explain it so much as know when it begins to feel 'right'.
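For illustration, the ratio-with-limits idea could look something like this minimal sketch. The thresholds, step size and bounds here are made-up placeholder values, exactly the kind of parameters the post says need lots of tweaking.

```cpp
// Hypothetical dynamic-difficulty controller: watch the ratio of damage
// dealt vs. received and nudge an enemy-strength multiplier between fixed
// bounds. All constants are illustrative, not tuned values.
struct DifficultyController {
    double multiplier = 1.0;   // applied to enemy stats
    static constexpr double kMin = 0.5, kMax = 1.5, kStep = 0.05;

    void update(double damageDealt, double damageReceived) {
        if (damageReceived <= 0) damageReceived = 1;   // avoid divide-by-zero
        double ratio = damageDealt / damageReceived;
        if (ratio > 2.0 && multiplier < kMax)          // player dominating
            multiplier += kStep;
        else if (ratio < 0.5 && multiplier > kMin)     // player struggling
            multiplier -= kStep;
    }
};
```

The small step size and hard clamps are what keep the adjustment gradual enough to stay invisible; large jumps are exactly how these systems become "obvious, obnoxious, intrusive".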
  6. A Critique of the Entity Component Model

    I think you are massively overstating this problem. Are your components really so light on data that you can fit multiple different components in a single cache line? A single 4x4 float matrix takes up an entire cache line on most systems, so I strongly doubt it. Realistically, you want to go even further with the 'outboard' nature of the ECS design if you want to solve cache coherency issues. Establish a single, stand-alone array of position data, which is traversed by both the physics and render systems in turn.     Even if you can't keep the data in a single cache line, there's still a fairly significant performance hit for non-contiguous memory reads. Thanks to eager prefetching on most modern hardware, failure to align or fully utilize those cache lines adds refetching and wastes cycles.   Taking into account row-major ordering in most languages (C/C++, Pascal, Python, etc.), simple code can run four times slower traversing columns of an array than traversing rows. When it comes to memory management, always keep these facts in mind:
    • Bandwidth - The memory bus is limited; abusing it limits scalability.
    • Latency - Irregular access patterns cause cache misses, which cause stalling.
    • Locality - Unused data in a cache line wastes not only cache but also memory bandwidth.
    • Contention - Multiple cores and threads contending over the same cached data cause stalls.
    Generally you'll get a noticeable boost in performance out of well-aligned contiguous memory, e.g. plain old arrays of objects. Different needs and limitations faced by different sub-systems require different solutions; you blend in what works best for each challenge to get the best results. In other words, the model we develop the game against is just a facade that's comfortable to work with, abstracted from the engine's underlying data and memory management details, which are all about getting the best performance out of the hardware.
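The "single, stand-alone array of position data" idea can be sketched as a structure-of-arrays that each system walks in order. Names here are illustrative, not any particular engine's API.

```cpp
#include <cstddef>
#include <vector>

// 'Outboard' layout sketch: positions live in one contiguous store,
// separate from the entities that reference them.
struct Positions {
    std::vector<float> x, y;   // structure-of-arrays: each axis contiguous
};

// A "system" walks the arrays linearly, which the hardware prefetcher can
// stream; every cache line fetched is fully used.
void integrate(Positions& p, const std::vector<float>& vx,
               const std::vector<float>& vy, float dt) {
    for (size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += vx[i] * dt;
        p.y[i] += vy[i] * dt;
    }
}
```

The render system would traverse the same arrays in turn, so both hot loops get the bandwidth, latency and locality benefits listed above without duplicating the data.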
  7. A Critique of the Entity Component Model

    First, I have to say that when people claim they don't like/support OOP yet use an Entity-Component model, I have to roll my eyes. The Entity-Component model is an application of GoF software architecture patterns rather than something used in place of OOP. OOP gets a bad rep from the reckless and lazy design practices we've all seen: mixing core behavior or interfaces with concrete implementations, confusing options with operands, failure to understand the problem domain, etc. You could say the Entity-Component model is built from the Iterator, Composite, Decorator, Command and Strategy patterns from the classic GoF catalog. The terms 'Entity' and 'Component' themselves are still very broad in terms of implementation: different approaches share some basic features while varying around a general assemblage of patterns.   There are probably as many variations of the model as there are development teams applying it. None of them perfect, of that we can probably be sure. Also, an MVC pattern is not necessarily mutually exclusive with such an approach; Model (data/domain model), View (representation) and Controller (behavior) are really about separation of concerns into bounded logical contexts, not a concrete implementation. In the end we really just create abstractions that help make the whole thing manageable and deliver on our needs. So ideally we mix best-of-breed approaches to suit the individual challenges those needs create, in a way we can cope with.
  8. Question For Writers and game developers in general

    When I say larger project I mean something along the lines of:
    • Dev budget > $6,000,000
    • Team members > 25
    • Assets > 10,000
    When that's the type of resources you're intending to commit to the project, the cash outlay and learning curve of Articy Draft could definitely make sense.   If you're a young studio that needs to stay 'lean and mean' about where you spend time and money, then the money is probably better spent elsewhere (like content production tools, engine technology and talent).   Should you find the main driver is having a central repository to do things like:
    • Make files accessible to multiple people.
    • Allow collaboration on those files.
    • Track who made which changes to what, and revert to earlier versions when necessary.
    Then it sounds like what you really need is some form of version-controlled repository to manage your assets, and you'd need to license one to use Articy Draft as its makers intend anyway.    Just keep in mind that it has to be administered by someone, so a team member will need to become your expert on that system and devote some time to its 'care and feeding'.  There are ways to accomplish the goal without a version control system, so consider whether something simpler (e.g. DropBox, SharePoint, etc.) will accomplish what you need.   If you choose version control there are a lot of systems available, but here's a short list:
    • Perforce - The premiere commercial offering in the space. Easy setup/administration. Fast operations on large repositories. Integrations (UDK/UnrealEngine, Eclipse, Visual Studio, Max, Maya, MS Office, etc.). Good professional support. Most industry professionals are used to working with it.
    • Subversion - The premiere open-source offering in the space. Multiple clients available (choose your preferred UI). Integrations (Eclipse, Visual Studio, Windows Shell, etc.). On-premise or a choice of hosting providers. Open source, with the option to purchase a support contract. Huge community around it.
    • Team Foundation Server - Really nice if your tool-chain and target platform are well supported. Automated build management. Project management features. Automated testing management. Automated lab management. Integrations (Eclipse and just about anything Microsoft related). Hosted and on-premise available.
  9. Question For Writers and game developers in general

    In all truth, what Articy Draft amounts to is an integration of common tools into a workflow. Realistically you can accomplish the same tasks with a combination of graphing (Visio, yEd, OpenOffice Draw, etc.), document publishing (Word, WordPerfect, OpenOffice Writer, etc.) and content management (Subversion, Perforce, TFS, etc.) software. The main advantage is that rather than using systems of citation/reference to establish conceptual links between related content, the application links it all together in a way that makes it easier to manage.  So you can easily bounce back and forth between related content, and keep changes to related assets in sync without extra steps.  If you pay the extra fees for API use, you can have a tools developer code exporters that take content directly into your engine pipeline. As for whether it's worth the money, that really depends on the size and scope of the project:
    • Large/complex projects that are heavily character or plot driven - Potentially huge time savings, a more coherent point of view for managing the project, and potentially higher quality.
    • Small to medium projects that are heavily character or plot driven - The time savings wouldn't be as dramatic, but it will likely be easier to manage the project and it could contribute to a higher quality product.
    • Other large/complex projects - Some potential time savings and ease of project management.
    • Other small to medium projects - Less likely to have much effect on time savings, ease of managing the project or quality of the output.
    On the alternatives:
    • Chat Mapper - Does many of the same things, is used by game developers that have produced well-respected titles, and isn't really expensive.  Complex branching dialog requires Lua scripting, but its conversation simulator, and the fact that it exports standard screenplay document format for voice actors, may outweigh the learning curve.
    • Stand-alone 'off the shelf' products - You can simply use separate products in a manual or semi-automated workflow, as most already do. Some applications you may be using anyway for the back office (e.g. the Microsoft Office suite) can fit some of the requirements of your development needs.  A recent survey indicated that a considerable amount of dialog-tree development for games has been done in Excel, oddly enough, but I wouldn't really recommend it. If you've got a good tools developer on staff, you might see if they can integrate the pieces of your tool-chain into a more optimal workflow, such as plug-ins for your text editor, graphing or other applications that let you check documents in/out of your CMS directly, and browse and link to other files in the CMS from within the documents.
  10. Collision detection

    Everyone's already hit on similar ideas, but here's what I've done before. First test whether a collision with the whole tile takes place. Once you know some part of the tile collides, you can check against a per-tile bitmap/array that represents positions within the tile. With a 32x32-pixel tile and a 32x32 two-color bitmap for that tile (black = non-walkable, white = walkable), checking the detailed collision against the bitmap gives you pixel-perfect collision detection.
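A minimal sketch of that second, detailed phase, assuming the per-tile bitmap is packed one bit per pixel (the packing and names are my own, not from the post):

```cpp
#include <array>
#include <cstdint>

// One 32x32 tile mask: one uint32_t per row, bit set = non-walkable pixel.
// This 1-bit packing stands in for the black/white bitmap described above.
using TileMask = std::array<uint32_t, 32>;

// After the broad whole-tile check passes, test the exact pixel (px, py)
// within the tile against its mask.
bool pixelSolid(const TileMask& mask, int px, int py) {
    if (px < 0 || px >= 32 || py < 0 || py >= 32) return false;
    return (mask[py] >> px) & 1u;   // test bit px of row py
}
```

Packing the rows as bits keeps each mask at 128 bytes, so the detailed test stays cheap even with many tile types.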
  11. At a quick glance I'd suggest: using powers of 8 for tile/screen sizes for storage/speed/precision reasons, and sticking to fixed-point math unless you have a compelling reason not to. Avoid mixing integers with floats in an equation, as the casts introduce conversions between the data types, which can add inaccuracy to the result.
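As an illustration of sticking to fixed-point math, here's a minimal 16.16 fixed-point sketch; the format choice and names are mine, not from the post. Sub-pixel positions stay in integers, so no int/float conversions sneak into the equations.

```cpp
#include <cstdint>

// 16.16 fixed point: the high 16 bits are the integer part, the low 16
// bits the fraction. All arithmetic stays in integer registers.
using fixed = int32_t;
constexpr int FRAC_BITS = 16;

constexpr fixed toFixed(int v) { return v << FRAC_BITS; }
constexpr int   toInt(fixed v) { return v >> FRAC_BITS; }

// Multiply via a 64-bit intermediate so the fractional product can't
// overflow before the renormalizing shift.
constexpr fixed fmul(fixed a, fixed b) {
    return static_cast<fixed>((static_cast<int64_t>(a) * b) >> FRAC_BITS);
}
```

For example, `fmul(toFixed(10), toFixed(1) / 2)` scales 10 by one half entirely in integer math, yielding 5 with no float conversion anywhere.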
  12. How to handle LARGE objects

    I might be misunderstanding, since I've not spent much time with RTS games to know the example you're trying to make, but... One way to get movable characters/objects that span multiple tiles is to implement sprite handling separately from background tiling. The background tiles lay down in a mostly fixed way, whereas the sprites you want to rotate and move about the screen; that suggests two different approaches will be more efficient than trying to extend one to do the other. Even if you still compose your sprite of tiles, the sprite engine can treat the tiles of an object as a single object: a 4x4 block of tiles is still one sprite, and operations are performed on the sprite object, which the engine applies to its tiles as a group for coherent rotation/scaling/depth testing. An easy way to make closer objects occlude more distant ones (partially occlude, really, if you have transparency in the tiles), like would occur in reality, is to add simple depth testing to your sprites. If we take 0,0 to be the upper-left coordinate, then objects with a larger 'y' value on their bottom edge occlude those with a smaller 'y' value that occupy the same range of 'x' values. So a sprite whose lower-left corner is at 48,48 would be drawn in front of another sprite at 48,12.
Example in an ASCII representation where 'A' is a tile of object A, 'B' is a tile of object B, and '0' is background:

-------------...-------------...-------------
|0|0|0|0|0|0|...|0|0|0|0|0|0|...|0|0|0|0|0|0|
-------------...-------------...-------------
|0|A|A|A|0|0|...|0|0|0|0|0|0|...|0|A|A|A|0|0|
-------------.+.-------------.=.-------------
|0|A|A|A|0|0|...|0|0|B|B|0|0|...|0|A|B|B|0|0|
-------------...-------------...-------------
|0|0|0|0|0|0|...|0|0|B|B|0|0|...|0|0|B|B|0|0|
-------------...-------------...-------------

As you can see, the boundaries of A and B cross, so we determine which has the larger 'y' value and draw it over the other, since it's 'closer to the screen' in an isometric view.
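The bottom-edge depth rule could be sketched as a simple sort before drawing. The fields and names here are illustrative, not any particular engine's API.

```cpp
#include <algorithm>
#include <vector>

// Screen-space sprite: 0,0 is the upper-left corner, so a larger bottom
// edge 'y' means closer to the viewer in this convention. A sprite may
// cover several tiles; it is still ordered as one object.
struct Sprite {
    int x = 0, y = 0;   // top-left corner in pixels
    int w = 0, h = 0;   // size in pixels
    int bottom() const { return y + h; }
};

// Painter's ordering: sprites with a larger bottom 'y' are drawn last,
// so they land on top of (occlude) the ones behind them.
void sortForDrawing(std::vector<Sprite>& sprites) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) {
                  return a.bottom() < b.bottom();
              });
}
```

Drawing the sorted list front to back of the array gives exactly the A-over-B result in the ASCII example, with transparent tiles producing the partial occlusion.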
  13. Color Palette

    What you are looking to do is termed color matching/management, and it's used in pre-press and higher-end imaging. Unfortunately, I don't know of any mobile phone companies that have felt it necessary to support color matching or to create color matching profiles you could use to simplify the situation. Color matching tends not to be common in consumer-level devices or operating systems, with the exception of Mac OS, which is why Macs are often used in pre-press environments. You can make sure the color encoding format matches the target's, and that's about it without an investment in a densitometer and a lot of mobile phones to profile. If you went that route, you'd really want software that performs accurate color space conversion and natively supports color matching, so I'd hope you know someone with a Mac and Photoshop.