Cosmic314

Members
  • Content count

    127
  • Joined

  • Last visited

Community Reputation

2002 Excellent

About Cosmic314

  • Rank
    Member

Personal Information

  • Location
    Raleigh, NC
  1. One obvious problem is that the centers of mass might come into contact with each other.  If an object collides with Earth, it is still separated from Earth's center of mass by the Earth's radius at the moment of impact.  This prevents the "infinite acceleration" artifacts you might be seeing.  You might consider modeling a collision distance around each body, within which the object is considered to have collided with it, to eliminate the effect.
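A minimal sketch of that suggestion (the struct and function names here are my own, not from any particular engine): stop applying gravity between two bodies once their separation drops below the sum of their radii.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: treat two bodies as collided once their separation
// drops below the sum of their radii, instead of letting 1/r^2 blow up.
struct Body {
    double x, y;     // position of the center of mass
    double radius;   // physical extent around the center of mass
};

// Returns true when the bodies should be considered collided; in that case
// the caller stops applying gravitational force between them.
bool hasCollided(const Body& a, const Body& b) {
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    double dist = std::sqrt(dx * dx + dy * dy);
    return dist <= a.radius + b.radius;
}
```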
  2.   You got it.  Someone else mentioned the intuition behind these problems.  Imagine if they had 100 kids, all with brown eyes.  It would be highly unlikely that the next one would be blue.  So with each new kid, the chance of brown increases while the chance of blue decreases.
  3. As the point masses approach each other, the squared distance in the denominator starts to dominate the fixed values of your point masses, so at very small distances F tends toward infinity, even with a small time step.  I imagine, for true accuracy, you'd need to break each body into a collection of point masses and sum the force over every point-mass pair between the bodies.  That sounds like O(n^2) complexity.
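One standard trick for the blow-up described above (not from the post itself; this is Plummer softening, commonly used in N-body codes) is to add a small epsilon term to the squared distance, which keeps the force finite as r approaches zero while matching Newtonian gravity at large r:

```cpp
#include <cassert>
#include <cmath>

// Sketch of "softened" gravity: F = G*m1*m2*r / (r^2 + eps^2)^(3/2).
// The eps^2 term keeps the force finite as r -> 0 (it actually goes to 0),
// while for r much larger than eps it reduces to G*m1*m2 / r^2.
double softenedForce(double G, double m1, double m2, double r, double eps) {
    return G * m1 * m2 * r / std::pow(r * r + eps * eps, 1.5);
}
```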
  4. For Monty Hall, simple enumeration of all equally likely possibilities gives a straightforward explanation.  Let P mark the door you initially pick, G = goat, C = car.  Here's the full table, with the outcome in the right column for the strategy of always switching:

     Door 1   Door 2   Door 3   Outcome
     PG       G        C        Win
     G        PG       C        Win
     G        G        PC       Lose
     PG       C        G        Win
     G        PC       G        Lose
     G        C        PG       Win
     PC       G        G        Lose
     C        PG       G        Win
     C        G        PG       Win

     Nine possibilities, of which six are winners, hence 2/3.
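The enumeration above can also be checked by simulation (the code layout here is mine, a quick Monte Carlo sketch): always switching should win roughly 2/3 of the time.

```cpp
#include <random>

// Monte Carlo check of the Monty Hall table: returns the fraction of games
// won by the always-switch strategy. Seeded for reproducibility.
double switchWinRate(int trials, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> door(0, 2);
    int wins = 0;
    for (int i = 0; i < trials; ++i) {
        int car  = door(rng);
        int pick = door(rng);
        // The host always opens a goat door, so switching wins exactly when
        // the first pick was a goat; no need to model the reveal explicitly.
        if (pick != car) ++wins;
    }
    return static_cast<double>(wins) / trials;
}
```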
  5. I'm currently taking some graduate courses, one on computer performance modeling.  In the review questions for probability, we did this one, which reminded me very much of the Monty Hall problem. In genetics, some traits are recessive and others dominant.  For example, for eye color, brown is dominant over blue.  Two genes determine eye color, so as long as one of them is brown, the person will have brown eyes; only if both are blue will they have blue eyes.  A child inherits one gene from each parent with equal likelihood. Suppose John has brown eyes and his parents have brown eyes, but his sister has blue eyes.  Further, suppose John has a child with his blue-eyed wife, and that child has brown eyes.  What is the probability the next child will have brown eyes?
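For what it's worth, the Bayesian update can be written out in a few lines (a sketch; the code structure is mine).  Since John's parents both have brown eyes but have a blue-eyed daughter, both parents must be Bb, so given that John has brown eyes his genotype is BB with prior 1/3 and Bb with prior 2/3; his wife is bb.

```cpp
// Bayesian update for the eye-color puzzle: first observe one brown-eyed
// child, then ask for the probability that the next child is also brown-eyed.
double probNextChildBrown() {
    double priorBB = 1.0 / 3.0, priorBb = 2.0 / 3.0;
    // Likelihood that a child with a bb mother has brown eyes:
    double brownIfBB = 1.0;   // a BB father always passes B
    double brownIfBb = 0.5;   // a Bb father passes B half the time
    // Update the genotype probabilities on the first brown-eyed child.
    double evidence = priorBB * brownIfBB + priorBb * brownIfBb;
    double postBB = priorBB * brownIfBB / evidence;
    double postBb = priorBb * brownIfBb / evidence;
    // Probability the next child also has brown eyes: 3/4.
    return postBB * brownIfBB + postBb * brownIfBb;
}
```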
  6. There's a donut shop near where I live that makes exactly one type of donut.  Despite there being no choice, people flock from afar to enjoy them. Sometimes having just one option is the easiest!
  7. Thanks Ravyne.  The little hat operators, and their ilk, do rub me the wrong way.  I will definitely grab that library and give it a spin.
  8. Thanks for the reply, SmkViper.  The examples I'm exploring with the WinRT flow are clearer than the venerable Win32 model.  To boot, the new Visual Studio 2015 Community hearkens back to the day when Studio releases actually included all the nifty built-in tools.  I was kinda bummed that in VS 2010-2013, for things like IDE extensions, you needed to purchase the $200+ version.  I just kick around Windows programming as an amateur / hobbyist, and that price tag was certainly a barrier to entry.  Now I can march along and learn a little about XAML and the integrated GUI designer (which AFAIK only existed in .NET in recent releases of VS). I know lots of people tend to have strong opinions about OSes, particularly Windows, but I am impressed with where they are headed and what they offer.  I'm tempted to get an Xbox One merely to try streaming to the PC.
  9. I'm crafting some small games and decided to make a full leap into Windows 10.  I've been watching some of Microsoft's Virtual Academy presentations.  They've created a unified development environment called the Universal Windows Platform (UWP).  The platform's objective is one code base that supports all Windows devices:  Xbox One, PC, phone, tablet, their version of smart glasses, and anything else under the sun that will support Windows.  The new development model is an 'App' which calls into the Windows Runtime (WinRT) and supports managed code.  (This is different from the CLR, which still only supported PC development.) When I hear 'managed' code I automatically associate it with a 'performance hit' via garbage collection, etc.  Is this concern justified?   Under the hood there are now three models (I paint with broad strokes): Win32 -- the classic API, which uses structs to query and update the OS/App; COM -- MS's object-oriented methodology for OS/App communication; WinRT -- managed code which is largely supported across platforms. For you Windows developers, will the UWP model change how you do things?  Are there concerns over performance?  How many will still develop for Win32 / COM?   I'm trying to get an understanding of the overall picture.
  10.   Sorry to be a little off topic, but you do yourself a disservice.  Experience is a huge asset, and your accumulated knowledge has benefited this site quite often. Consider this scenario.  You need surgery and you have a choice:  (1) a doctor who did nothing but read books and study theory but never performed a medical procedure, or (2) one who learned no formal theory, yet learned by watching other doctors and performing surgeries under their watchful eye.  Obviously, to get the best doctor you want both, and that's why there is such a strong requirement for both medical school and residency.  Doctors need strong competency in theory and in practice, because these areas stimulate different parts of the brain.  (Also, consider that doctors can't just practice surgery whenever they want; they can only operate on maladies that people actually present to them.  A computer programmer can simply learn whatever they want, whenever they want.)   To dovetail back to the OP, and to echo an expression that GDNet continuously expounds: you learn programming by programming.  Sure, take some suggestions, read books and articles, and maybe even get that PhD for a deep, specialized knowledge of computers.  But make an effort to dig in and do it. Style seems like such a simple question to ask, yet its answer touches upon many root aspects of programming.
  11. That a computer language even exists is testament to the benefits of clarity.  We could all be doing punch cards or pure assembler programming if this aspect made no difference. Some standouts:
    • Once you establish your style, stick with it.
    • Keep consistent indentation for each new lexical scope.
    • Organize headers / routines so public, protected, private, etc. functions appear in the same order.
    • When possible, keep functions relatively small -- no more than a page at a time.  If I start to see spaghetti code I build hierarchy to replace strands of code.  It's a good way to establish concepts and start to see patterns.  Spaghetti code is hard to understand, especially when you're constantly scrolling up and down within the same function.
    • Give descriptive names to your functions, variables, files, etc.
    You'll eventually write some code that you may return to later.  With a clean, consistent style you'll get to the source of your bug faster, because you find things where you expect to see them.
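As a small illustration of the last two points (example and names are mine, not from the thread): pulling a strand of inline loop logic into a small, descriptively named helper lets the calling code read as a statement of intent.

```cpp
#include <vector>

// A small, single-purpose function with a descriptive name, extracted so the
// caller never has to scroll through the loop to know what it does.
double averageScore(const std::vector<double>& scores) {
    if (scores.empty()) return 0.0;   // guard against divide-by-zero
    double total = 0.0;
    for (double s : scores) total += s;
    return total / scores.size();
}
```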
  12. For sure, I plan to make habitual use of them.  I've just read a good deal about RAII, which has been touted as 'sufficient' in the face of resource leaks.  Perhaps that's all it is: sufficient, but not exemplary.   That's a good point: unique_ptr confers an expected meaning of how the pointer is to be used.  Barring a comment, a naked pointer doesn't say anything about its expected purpose.   While I haven't been on a large project, one that I'm currently working on is coming close.  Relationships are established such that it's important to keep track of ownership.  I am vaguely aware that the system is complex enough that, despite my best intentions, I'm going to miss some critical step that will cause future headaches.   A (newbie) usage question:  Suppose I have a base class and several derived classes.  I maintain a vector of Base (abstract) objects and one for each Derived type.  The Base vector is a superset of all the Derived vectors.  A factory creates each derived object, and I add it to the Base vector and the appropriate derived vector.  Where should unique_ptr/shared_ptr/etc. live in this scheme?  My inexperienced guess is that it lives between the factory (allocation) and storage in the vectors, after which point it is moved onto the vector.  Or should it be something different?   BTW, thanks Servant and Ravyne.
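One common answer to the question above (a sketch only; the class names are hypothetical) is to let the Base vector hold unique_ptr, so it is the single owner of every object, while the per-type vectors hold non-owning raw pointers into it:

```cpp
#include <memory>
#include <vector>

// Sketch: the Base vector owns everything via unique_ptr; the per-derived-type
// vectors are non-owning "views" holding raw pointers, so ownership is never
// duplicated and nothing is deleted twice.
struct Base { virtual ~Base() = default; };
struct DerivedA : Base { int value = 1; };

int ownershipSketch() {
    std::vector<std::unique_ptr<Base>> owners;   // single owner of all objects
    std::vector<DerivedA*> derivedAView;         // non-owning typed view

    auto obj = std::make_unique<DerivedA>();     // factory step
    derivedAView.push_back(obj.get());           // record the typed view first
    owners.push_back(std::move(obj));            // then move ownership in

    return derivedAView.front()->value;          // owners outlive the views
}
```

The key constraint is lifetime: the raw-pointer vectors must never outlive (or be used after clearing) the owning vector.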
  13. That would completely kill C++ as being a pay-for-what-you-use / opt-in language. Most embedded systems that I've worked on still use raw pointers, or a smart pointer that acts like a raw one but only performs leak detection during development.   Is there a valid use of raw pointers for things within a class?  For example, if I construct an object (and correctly release resources if the object fails to construct) and then provide the correct release in the destructor, would this be a valid use?  I guess that's the RAII paradigm in a nutshell.  Or would you advocate smart pointers even in this scenario?   I suppose if I need to share a dynamic resource with things outside of the raw-pointer-containing class, then smart pointers become essential.  But even general-purpose libraries intended for mass consumption avoid smart pointers, because there's no hard guarantee that the same smart pointer convention is followed by programs which use different implementations.  How, then, is the problem of detecting resource leaks handled?  Is there a method to the madness, or are designs/tools up to the task?   If there's any doubt, these are genuine questions and not quibbles with the concept of smart pointers.  I just haven't been on a large enough project to see what disasters might befall the unwary.
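The pattern asked about above is indeed RAII in a nutshell; a minimal sketch (class name hypothetical) looks like this: the raw pointer never escapes the class, the constructor acquires, the destructor releases, and copying is disabled so the resource cannot be freed twice.

```cpp
// Minimal RAII sketch: an internally held raw pointer whose lifetime exactly
// matches the owning object's lifetime.
class Buffer {
public:
    explicit Buffer(int size) : data_(new int[size]), size_(size) {}
    ~Buffer() { delete[] data_; }

    // Copying is disabled so two Buffers can never own the same allocation.
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;

    int size() const { return size_; }

private:
    int* data_;   // owned internally; never handed out as an owning pointer
    int size_;
};
```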
  14. Probably not, but it's also not main.  I'm guessing that in a different execution module, which happens to define main(), it will do some setup of the graphics library and then eventually call ccc_win_main().  You may need to link against this module or you may even need to compile it and then link.