Phil123

Members
  • Content count: 82
  • Joined
  • Last visited

Community Reputation: 1276 Excellent

About Phil123

  • Rank: Member
  1.   The main purpose (aside from the obvious enjoyment) is to see the entire development cycle of a non-trivial project through while I'm the only programmer on the team, both for learning purposes and because it's been a major goal of mine since I graduated from school.  I probably should have rephrased the last part as "...what the hell should I work on, taking into account points 1 and 2?"  I think I know what I'd want to work on if I stopped worrying about #2 so much, though I'll admit that's been hard to get over, so I'm looking for some advice to make sure I take a step in the right direction.
  2. What are some methods you've used to determine what game you're going to make (and eventually release)?

     My main concerns are the following:

     1. Overscoping the project and its asset requirements (specifically the things I cannot do myself: art, music, and sound effects).
     2. The project ending up too similar to a game that's already been released.

     There are quite a few games I've played where I would love to take the core gameplay concepts and greatly improve upon them, but this comes back to #2 as a potential problem: I don't want to simply clone a game, or have it labelled as such, but I'm also not confident enough in my design ability to take something and make it truly unique (yeah, I know, "unique" is being used loosely here).

     I do have some self-imposed constraints to keep the scope reasonable (2D, no online features, no major story), but this hasn't helped me narrow down what I want to make.  I'm stuck in a limbo of "I'd love to work on a game, but what the hell should I work on?"

     Thoughts?
  3.   Hmm, yeah, this is one of the major problems.  The code is tightly coupled and I won't have the time to refactor it.  There is, indeed, an enormous number of string literals as well.  In a perfect world I'd be able to fix both of these issues, but right now I'm going to explore less invasive options first: writing something to tell me what should be added to the precompiled header, writing something that compiles each .cpp minus one include and reports which includes can be removed, combining .cpps or splitting header files as required, and potentially implementing unity builds.

     Yeah, a portion of it is data driven, which helps, though it isn't possible to avoid the problem completely in this scenario.

     Agreed...I'm really starting to favour having barely anything in header files these days.  If I can get away with passing dependencies to standalone static functions (which live 100% in the .cpp file), then I do so.

     Premake is used, and I'm strongly considering taking some spare time to set up a different premake script that includes some unity builds (see the sketch below).

     Yep, that would be quite amazing; however, for reasons I won't get into, it won't be possible.

     Thanks for the suggestions, guys.  It looks like DumpBin won't help me after all, but I should be able to survive without it.
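     For reference, a unity-build translation unit can be as simple as the sketch below (the file names are hypothetical): each such file #includes a handful of related .cpps so their shared headers are parsed once and the linker sees one .obj instead of several.

         // unity_audio.cpp -- hypothetical unity-build translation unit.
         // Compiling these related .cpp files as a single unit means their
         // common headers are parsed once, and the linker processes one
         // .obj instead of three.
         #include "AudioDevice.cpp"
         #include "AudioMixer.cpp"
         #include "AudioStream.cpp"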
  4.   Thanks for the reply, and yeah, I wouldn't find it acceptable to just say to myself, "it's a large project, there's nothing I can do about it."  I also don't plan on guessing anything; rather, I need to develop a process that identifies the problems so I can solve them.

     Indeed.  I think it's preferable to avoid this as much as possible.

     Agreed.  I really like this idea...I'm going to add it to the list of solutions to explore.

     I'll read all of these.  Thanks.

     I'll try this, thank you.

     Thanks for the replies, though I'd still like to be able to analyze each .obj (or any other generated file, as necessary) to find out what's costing me so much in link time.
  5. I'm currently optimizing a moderately sized code base with maybe 40 or so external libraries.  I've done quite a few things that have helped, but the time it takes to link is still a serious productivity problem.

     I understand what contributes to slow compile and link times in general, but I need to be able to quickly and efficiently locate the problems in code.  I don't have time to manually go through and "guess," so I've been trying to develop a process for locating the most significant problems in the code base (so I can fix them).  To that end, I've looked into using Microsoft's DumpBin tool (with VS2015).  I've tried to make sense of the data it generates, though I've had little immediate success, and on top of that, there doesn't seem to be a wealth of information on this topic.

     So, as per usual, here I am, looking for suggestions.
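     For anyone searching later, a minimal DumpBin invocation from a Visual Studio developer command prompt looks like this (the file name is just an example):

         dumpbin /SYMBOLS MyFile.obj > MyFile_symbols.txt
         dumpbin /HEADERS MyFile.obj > MyFile_headers.txt

     /SYMBOLS dumps the COFF symbol table (object files with disproportionately many symbols are often heavy template/inline offenders), and /HEADERS lists the file and section headers.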
  6.    /pushthreadhijack   https://soundcloud.com/l-spiro/zeal-island What software do you use for this?    /popthreadhijack
  7.   That makes sense.  I read a portion of the course notes, and damn, now I have even more questions (though they aren't completely relevant to this thread's title, so I'll post them in a new thread).
  8.   Sounds easy enough; I could definitely implement that, thank you.  I'd appreciate your thoughts on how ambient lighting should be handled with PBR, as I'm quite unsure about this myself.
  9. Here's the problem: I'd prefer to use deferred rendering; however, I'd also like to support multiple shading models (Strauss, Ward, Ashikhmin-Shirley, etc.).  The necessary data (position, normal, diffuse, material, etc.) is output to various textures which, as is standard, are read by a shader that produces the final picture.

     Right now, I would have to switch on the material type in this shader and use the appropriate formulas to get the result, though I've read that heavy branching like this should be avoided (and for legitimate reasons).  What's the standard solution here?

     Also, how should ambient lighting be calculated in a scene for non-Blinn-Phong shading models?  Normally you could just leverage the mesh's diffuse texture to trivially set a base "lit" value, but this won't work for shading models that are drastically different (as parts of the mesh that are lit dynamically will look very different from those lit only by ambient light).
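     One approach I've read about, sketched below with entirely hypothetical renderer types (Renderer, Shader, and the shader array are stand-ins, not a real API): write a shading-model ID into the stencil buffer during the G-buffer pass, then run one specialized full-screen lighting pass per model with a stencil-equal test, so no lighting shader ever branches on material type.

         // All types here are hypothetical stand-ins, not a real API.
         enum class StencilFunc { Equal };
         struct Shader {};
         struct Renderer {
             void SetStencilTest(StencilFunc, int /*ref*/) {}
             void BindShader(const Shader&) {}
             void DrawFullScreenQuad() {}
         };
         enum ShadingModel { SM_Strauss = 1, SM_Ward, SM_AshikhminShirley };
         Shader g_lightingShaders[4]; // one specialized, branch-free shader per model

         // The G-buffer pass is assumed to tag each pixel's stencil value
         // with its material's shading-model ID; each full-screen pass is
         // then masked (stencil EQUAL) to the pixels that use that model.
         void RunLightingPasses(Renderer& renderer)
         {
             for (int model = SM_Strauss; model <= SM_AshikhminShirley; ++model)
             {
                 renderer.SetStencilTest(StencilFunc::Equal, model);
                 renderer.BindShader(g_lightingShaders[model]);
                 renderer.DrawFullScreenQuad();
             }
         }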
  10.   Agreed.  I try to do this as much as I can in my personal projects.  If I can design code in a manner such that it doesn't require obscene amounts of error checking, that's a good thing.

      Also agreed.  Unless you mean ignoring an error is only using an assert and not an if check, in which case, that's the part I'm not sure about yet, heh.

      This seems more reasonable than using if checks everywhere in some functions (in this case, internal functions).

      Yeah.  I guess my next questions would be: how do you personally decide what to do when an error has occurred?  Where do you draw the line between performance and error handling?  As an extreme example...are you really going to handle NaNs with if checks in a math/physics library?

      And this is why I'm on the fence about how I should approach error handling.  If you use even a single assert (without a matching if check to exit the function or otherwise handle the problem) to eliminate one possible code path in a Debug build, your code is technically broken (or has a bug, at least) in Release builds.

      Maybe what I should also ask is: in what cases should you only use asserts, use asserts plus some appropriate if checks*, or use asserts and if check* absolutely everything?  Or is the slight performance hit and the development-time cost of gracefully handling every single error worth it?

      *By if check, I mean either setting the value to something appropriate so the function can continue running, exiting the function, and/or deciding to crash the program (if the error is severe enough).
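      To make the distinction concrete, here's a minimal sketch (the names are illustrative, not from any real code base): asserts document contracts whose violation is a programmer error and compile out of release builds, while conditions the outside world can legitimately produce get a real if check in every build.

          #include <cassert>
          #include <cstdio>
          #include <string>

          struct Mesh { int vertexCount; };

          // Programmer error: passing a null mesh is a bug in the calling
          // code, so an assert documents the contract; it compiles out of
          // release builds and adds no permanent code path to maintain.
          void UploadMesh(const Mesh* mesh)
          {
              assert(mesh != nullptr && "UploadMesh: null mesh");
              // ... upload vertex data ...
          }

          // Expected runtime error: a missing file can happen on any
          // user's machine, so it gets a real check in every build
          // configuration, and the caller decides how severe it is.
          bool LoadTextFile(const char* path, std::string& out)
          {
              std::FILE* file = std::fopen(path, "rb");
              if (!file)
                  return false;
              char buffer[4096];
              std::size_t read = 0;
              while ((read = std::fread(buffer, 1, sizeof(buffer), file)) > 0)
                  out.append(buffer, read);
              std::fclose(file);
              return true;
          }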
  11. I believe that handling the input and output errors of each and every function is absolutely vital, and that doing so helps a code base's long-term debuggability because errors are caught immediately.  However, I'm not exactly sure what the industry standard is for doing this (from both an efficiency standpoint and a maintainability standpoint, as introducing additional code paths increases code complexity).  I believe the long-term cost of assert statements is minimal, as they don't introduce additional code paths that you have to maintain, and they're generally compiled out of release builds anyway for efficiency.  So my questions are as follows:

      1 - What errors do you if check?  (Or more specifically...)
      2 - Do you if check every single potentially null pointer?
      3 - Do you if check programmer errors, or do you just leave it up to the assert to catch them?
      4 - If you believe you should if check absolutely everything, do you think this has an impact on the maintainability and readability of the code base?
      5 - How would your answers change in a team of 5, 20, or 100 people?

      Thoughts?
  12. I'm not convinced this is beautiful code, and here's why:

      int a; for (a=0

      Declaration and initialization on two separate lines is unnecessary in this case.  In fact, declaring "int a" there is completely unnecessary, as that value will always be MAXTGTS after the loop ends unless one of those methods secretly modifies it.

      a=0; a<MAXTGTS

      Writing code like this becomes unreadable quickly.

      sstgt[a].active == 0

      I'm not sure what sstgt means.  Also, you're iterating through an entire array of objects despite some of them (likely) being inactive, and thus ignored.

      if (sstgt[a].active == 0) { continue; } if (camera_can_see_tgt(a) == 0) { continue; }

      There's some duplicate code here.

      I'm hoping this method is nested inside a class; otherwise it means you're throwing around globals.  Though perhaps I'm missing something here.  Can you elaborate on your methods?
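      For what it's worth, here's roughly how I'd restructure that fragment (MAXTGTS, sstgt, and camera_can_see_tgt come from the quoted code; the stubs just make the sketch self-contained):

          constexpr int MAXTGTS = 64;                 // stand-in for the quoted constant
          struct Target { int active; };
          Target sstgt[MAXTGTS];                      // zero-initialized (static storage)
          int camera_can_see_tgt(int) { return 1; }   // stub for the quoted function

          void process_visible_targets()
          {
              for (int i = 0; i < MAXTGTS; ++i)
              {
                  // One combined skip condition replaces the two separate
                  // early-outs; short-circuiting still avoids the
                  // visibility test for inactive targets.
                  if (sstgt[i].active == 0 || camera_can_see_tgt(i) == 0)
                      continue;
                  // ... process a target that is both active and on camera ...
              }
          }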
  13. I'd love to try this.  Though I don't know how much time I can actually sink into it :(
  14.   Yeah, you're right.  I think I'll interpolate between the two previous states instead of estimating where objects will be.

      Good to know.  Thank you.

      Whoops...that'll definitely be removed in my actual loop.

      Yeah, I see what you mean.

      Thanks for the input, everyone.  I have an even clearer idea of how I'll be implementing this.
  15.   Yeah, I could see a networked game being interpolated in this fashion, though I feel it might become problematic without additional prediction methods.

      t is for time, and I chose the values 1 and 2 for a simple example.

      Interpolating physics-based objects is quite simple, as you will always have that data in memory and in use.  The question is what to do when some objects don't use physics-based movement and you need to interpolate them as well, which again is simple until you consider that some objects won't even be rendered, and therefore you shouldn't waste time saving their previous-frame data.  I'm looking for a clean solution for that (a minimal sketch of the interpolation itself is below).
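      For anyone landing here later, a minimal sketch of the blend itself (names are illustrative): each fixed update shifts the current state into the previous slot, and the render step blends the two by how far the render time sits between fixed steps.

          struct Vec2 { float x, y; };

          // Linear blend; alpha in [0, 1] is how far the render time has
          // advanced between the last two fixed updates.
          static Vec2 Lerp(const Vec2& a, const Vec2& b, float alpha)
          {
              return { a.x + (b.x - a.x) * alpha,
                       a.y + (b.y - a.y) * alpha };
          }

          struct Object
          {
              Vec2 previousPos; // state after the second-to-last fixed update
              Vec2 currentPos;  // state after the most recent fixed update

              void FixedUpdate(const Vec2& newPos)
              {
                  previousPos = currentPos; // shift history by one step
                  currentPos  = newPos;
              }

              // Called only for objects that will actually be drawn, so
              // culled objects never pay for the blend.
              Vec2 RenderPos(float alpha) const
              {
                  return Lerp(previousPos, currentPos, alpha);
              }
          };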