Shinkage

Members
  • Content count

    567
  • Joined

  • Last visited

Community Reputation

595 Good

About Shinkage

  • Rank
    Advanced Member

Personal Information

  • Location
    San Francisco
  1. Just thought it should be pointed out that this isn't at all what computers are really doing. They do know how to multiply, and they generally do long multiplication (or partial-product multiplication, or something even more clever) and division pretty much the way a person would, just in binary. Please forgive me if you correct this later in the article and I missed it, but it seems a pretty glaring oversight given the subject matter. It would have made a good example of the classic teaching device of "What I told you earlier isn't entirely true..." to reveal that computers actually use much more clever algorithms, and that binary long multiplication is in fact surprisingly trivial (see the sketch below).
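Roughly, the shift-and-add idea looks like this. A quick sketch, not taken from the article or the post above, and not how any particular CPU's multiplier is actually wired (real hardware uses fancier partial-product schemes), but it shows why binary long multiplication is so simple:
[source lang="cpp"]
#include <cstdint>

// Shift-and-add ("binary long multiplication"): for every set bit in b,
// add a copy of a shifted left by that bit's position.
uint32_t shift_add_multiply(uint32_t a, uint32_t b)
{
    uint32_t product = 0;
    while (b != 0)
    {
        if (b & 1)          // lowest remaining bit of b is set
            product += a;   // add the correspondingly shifted copy of a
        a <<= 1;            // next partial product is a, shifted one place left
        b >>= 1;            // consume that bit of b
    }
    return product;
}
[/source]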
  2. I'll try to address your points individually:
[quote name='ATC' timestamp='1351407231' post='4994646'] For starters, while your idea is great and makes me want to try it myself, which of the two is cheaper: 1) free download of Sun's Virtualbox and/or free download of Microsoft's Virtual PC 2) going out and buying physical hardware like you describe? [/quote]
ARM processors cannot be virtualized on a PC, but they can be emulated with [url="http://wiki.qemu.org/Main_Page"]QEMU[/url], which is actually a good way to start now that you mention it. It'll let you get off the ground with things like figuring out how to get a bootloader working, without the trouble of constantly flashing new code onto an embedded board. It's a good way to get your feet wet before diving in and buying a $150 board, figuring out a good process for flashing new builds to it, and then buying the even more expensive debug and analysis equipment. In fact, I'd say QEMU is probably a better way to start than the setup I suggested; I just hadn't thought of it!
[quote name='ATC' timestamp='1351407231' post='4994646'] Secondly, most of us programmers are writing software for x86/64 systems. And when we want to learn about OS development it's usually not to try to become a professional OS developer (very tough market to get into, dominated by Microsoft) but to go on an "academic" venture; to become more knowledgeable and skillful programmers on our primary development platform. So for those two reasons alone I would not go as far as saying that the x86 platform is a "horrible" choice... I think that if you're in OS development for academic reasons you should work with the platform you do most of your userland/everyday programming on... and if you're in it to make a profession out of OS development you should work with the platform you intend to support. [/quote]
This is actually precisely why I'd say x86 is a bad choice. If you're in it for academic reasons, what you should really be interested in is the general theory behind everything you're doing, rather than the minute implementation details. Believe it or not, the vast majority of what you learn working on [i]any[/i] even vaguely comparable platform (i.e. a 32-bit von Neumann architecture) will be equally applicable wherever you go when it comes to userland. Things like the gritty details of how to manage your system's MMU are largely irrelevant outside of OS design, but the general principles behind how a paged MMU works are very useful and pretty consistent across most modern platforms. Assuming you buy the story that what you learn will be equally applicable, I can almost guarantee you'll have an easier time actually getting off the ground on a simple SoC. You really can't overstate how much having the entire system on one chip, with one detailed reference manual, simplifies figuring everything out.
[quote name='Ohforf sake' timestamp='1351413483' post='4994655'] Also in our chip, we could disable certain functionalities (cache, MMU, ...) in order to get everything right one step at a time. I don't know, if intel lets you globally disable the caches "just like that". [/quote]
This is a very important point that I thought I'd quote for emphasis. I also don't know whether that's the case on Intel, but it can make quite a difference.
  3. (responding for people who genuinely are interested in OS development) I think x86 is an [b]absolutely horrible[/b] platform to start learning this kind of low level stuff on. Honestly, if you want to learn how to make an OS, for the love of god don't do it on an x86 machine. If you're really serious about it, start with an embedded SoC (system on a chip) evaluation board (like the Beagleboard). There are a few things that make these kinds of systems a [i]whole lot[/i] easier to do low-level work on, especially if you're just learning:
[list]
[*]They're designed with that kind of work in mind. In the embedded world it's pretty much [i]assumed[/i] that you're going to want to talk directly to peripherals (MMU, GPIO, etc...), so all of that is thoroughly documented, right down to the register bit layouts and usage procedures.
[*]Everything is in one place. With an SoC, basically the entire computer system is stuffed into a single chip, with a single reference manual to figure it all out.
[*]JTAG. There's really no substitute for even a cheap USB JTAG debugger when it comes to doing the kind of low level stuff where standard debugging isn't an option. With a JTAG debugger you can literally step through every instruction the processor executes, one by one, anywhere, any time. Bootloader, kernel code, you can debug it all with JTAG. No three beeps and wondering what went wrong like Ryan_001 had to deal with.
[*]GPIO + logic analyzer. Toggle a GPIO pin (basically just a metal pin on the board that you can [i]very quickly[/i] switch between high and low voltage) and watch exactly when it happens on a logic analyzer (see the sketch after this list). With enough pins hooked up to an analyzer, you can get a [i]really[/i] good idea of what's going on. It's like printf's, but way, way better, because the timing is precise down to the nanosecond range. You can even encode more sophisticated messages using I2C or SPI and have the logic analyzer decode them (again, like printf's but even better). Also, unlike printf's, it'll work in any execution context, for example a hardware interrupt handler (I don't know how I'd debug those without my trusty logic analyzer).
[*]Simple instruction set. You're going to have to do some work in assembly, and I personally find ARM an absolute joy to use compared to x86 when it comes to assembly.
[/list]
For anybody who actually is interested in learning this kind of stuff, it's (in my humble opinion) by far the most enjoyable kind of software engineering out there. That said, it's not cheap. The above equipment, for example:
[b]SoC Board:[/b] [url="http://beagleboard.org/hardware-xm"]http://beagleboard.org/hardware-xm[/url] [i]OMAP processor, and TI documentation tends to be some of the better.[/i]
[b]JTAG Debugger:[/b] [url="http://www.tincantools.com/product.php?productid=16153&cat=251&page=1"]http://www.tincantools.com/product.php?productid=16153&cat=251&page=1[/url] [i]Every JTAG debugger on earth is a pain in the ass to get working. This one is no different.[/i]
[b]Logic Analyzer:[/b] [url="http://www.pctestinstruments.com/"]http://www.pctestinstruments.com/[/url] [i]Fantastic piece of equipment. Really just works perfectly.[/i]
It'll set you back close to a grand, and it's about the cheapest setup you could put together, but there's really no better setup for learning the real nitty-gritty of low level software engineering. Plus you get to feel like a mad scientist with wires and pins going everywhere.
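To make the GPIO-as-printf trick concrete, here's a rough bare-metal-style sketch (not from the post above, and not runnable on a PC). The base address, register offsets, and function names are placeholders; the real values come out of your SoC's reference manual:
[source lang="cpp"]
#include <cstdint>

// Hypothetical memory-mapped GPIO block -- the addresses below are made up.
static const uintptr_t GPIO_BASE    = 0x48000000;
static const uintptr_t GPIO_SET_OFF = 0x94;  // write 1-bits here to drive pins high
static const uintptr_t GPIO_CLR_OFF = 0x90;  // write 1-bits here to drive pins low

static inline void gpio_high(unsigned pin)
{
    volatile uint32_t *set = reinterpret_cast<volatile uint32_t *>(GPIO_BASE + GPIO_SET_OFF);
    *set = (1u << pin);
}

static inline void gpio_low(unsigned pin)
{
    volatile uint32_t *clr = reinterpret_cast<volatile uint32_t *>(GPIO_BASE + GPIO_CLR_OFF);
    *clr = (1u << pin);
}

// Bracket the code you want to time; the pulse shows up on the logic analyzer,
// even from contexts where printf is impossible (e.g. an interrupt handler).
void some_interrupt_handler()
{
    gpio_high(5);
    // ... work being measured ...
    gpio_low(5);
}
[/source]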
  4. Getting to the documentation on the project is very counterintuitive, but see here: [url="http://ffmpeg.org/doxygen/trunk/modules.html"]http://ffmpeg.org/doxygen/trunk/modules.html[/url] Particularly the following two pages: [url="http://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html"]http://ffmpeg.org/doxygen/trunk/group__lavf__decoding.html[/url] [url="http://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html"]http://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html[/url] FFmpeg/libav may have many strengths, but a clean, well-specified interface certainly isn't one of them.
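As a rough orientation to those docs, a minimal libavformat demuxing skeleton looks something like the sketch below. This is my own illustration, written against the API of that era (newer releases rename a few calls, e.g. av_free_packet became av_packet_unref), so treat it as a starting point rather than a reference:
[source lang="cpp"]
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    av_register_all();  // required on older versions; harmless otherwise

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)   // open container
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)             // probe streams
        return 1;

    av_dump_format(fmt, 0, argv[1], 0);  // print stream/codec info to stderr

    AVPacket pkt;
    int packets = 0;
    while (av_read_frame(fmt, &pkt) >= 0)  // demux one packet at a time
    {
        ++packets;              // a real player would hand pkt to the decoder here
        av_free_packet(&pkt);   // av_packet_unref() on newer versions
    }
    std::printf("read %d packets\n", packets);

    avformat_close_input(&fmt);
    return 0;
}
[/source]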
  5. I only speak from my experience having evaluated them at work. There's no doubt that, like pretty much every fork of this sort, there's bad blood between the developers and a bit of slapfighting and name calling. When I settled on libav it was because of a (very slightly) cleaner public ABI, which gave me the (possibly incorrect) perception that libav is more invested in standardizing the interface so it doesn't change wildly all the time the way ffmpeg seems to, which is important to us, and because libav is much more cautious about merging new features (note, the links you provided are ffmpeg developers complaining after they went and merged a bunch of incompatible libav stuff and, gasp, hit regressions). I can almost guarantee, though, that as far as the OP is concerned the difference between them is totally academic. EDIT: Also, I should note that since the evaluation was for use on an embedded system running a Debian-based Linux, the fact that Debian seems to have settled on libav was influential.
  6. FFmpeg emphasizes features while libav emphasizes stability. For the most part, you'll find FFmpeg can do more than libav, but the extra features tend to be buggier. Basically it's "bleeding edge vs. stable." Honestly, unless you're concerned with the gritty details of their media handling (most likely not) or need one of the features FFmpeg provides but libav doesn't (most likely not), it really doesn't matter. They're nearly identical as far as the API goes.
  7. Completely wild guess here, but have you tried wrapping the #includes for SDL in an extern "C" { ... }? It may have been compiled with C linkage.
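For illustration, this is the wrapping being suggested above (my sketch, not from the original post; the header path and the SDL calls are just the usual SDL 1.x ones and may differ in your setup):
[source lang="cpp"]
// Force C linkage for declarations in a C library's header so the C++
// compiler doesn't name-mangle the symbols it expects to link against.
extern "C" {
#include <SDL/SDL.h>   // example include path; use whatever your project uses
}

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;
    SDL_Quit();
    return 0;
}
[/source]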
  8. Stating the obvious, but have you tried just searching for the file in explorer? If Windows is anything like Linux, then it won't be in any of the directories you're adding to the include file search path and will probably be somewhere like C:\gtk+\lib\glib-2.0\include\. On Linux that file lives at /usr/lib/glib-2.0/include/.
  9. If you find yourself needing to pass smart pointers to your object from within its constructor, you should be using [b]intrusive_ptr[/b] rather than [b]shared_ptr[/b]. The same probably goes for cases where [b]enable_shared_from_this[/b] becomes the norm rather than the exception: intrusive pointers are likely the better choice. The reason is that a raw pointer can be implicitly converted to an intrusive pointer, because the reference counting information is embedded in the pointee itself. Of course, the same caveats that apply to passing a raw pointer out of a constructor also apply to passing an intrusive pointer, plus some additional caveats to boot.
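A minimal sketch of what this looks like with Boost's intrusive_ptr; the Widget class, register_widget function, and the non-thread-safe counter are all made up for illustration. The only real requirement is that intrusive_ptr_add_ref and intrusive_ptr_release be findable for your type:
[source lang="cpp"]
#include <boost/intrusive_ptr.hpp>

// The reference count lives inside the object, so any raw pointer to it can be
// wrapped in an intrusive_ptr at any time -- there is no external control block.
class Widget
{
public:
    Widget() : refs_(0) {}
    virtual ~Widget() {}

private:
    long refs_;  // plain counter for brevity; not thread-safe

    friend void intrusive_ptr_add_ref(Widget *w) { ++w->refs_; }
    friend void intrusive_ptr_release(Widget *w)
    {
        if (--w->refs_ == 0)
            delete w;
    }
};

// Hypothetical consumer; taking the pointer by value bumps the count for the call.
void register_widget(boost::intrusive_ptr<Widget> w) { /* stash or use w */ }

int main()
{
    boost::intrusive_ptr<Widget> w(new Widget);  // refcount goes 0 -> 1
    register_widget(w);                          // copy bumps it to 2, then back to 1
    return 0;                                    // last intrusive_ptr deletes the Widget
}
[/source]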
  10. Not really sure what you're suggesting here. Type information is already tracked perfectly; the problem is that the conversion is initiated from the virtual machine, which doesn't partake of the C++ type system. The only option (as far as I can figure) for doing the conversion in the host would be to register a conversion function for every possible conversion.
  11. [source lang="cpp"]
#include <cstdlib> // strtol, strtod

const char *data;
// ...
char *endptr;
strtol(data, &endptr, 10);
if(data != endptr)
{
    // String is an integer.
}
else
{
    strtod(data, &endptr);
    if(data != endptr)
    {
        // String is a real.
    }
    else
    {
        // String is a string.
    }
}
[/source] Keep in mind, this only tells you if the string [i]begins[/i] with the specified data type. If you want to test whether the entire string is consumed by the conversion, you'll have to test endptr against the end of the string as well.
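To spell out that last point, here's a small self-contained variant (the classify name and the test strings are just for illustration) that accepts a value only if the [i]whole[/i] string is consumed by the conversion:
[source lang="cpp"]
#include <cstdlib>
#include <cstring>
#include <cstdio>

// Returns "int", "real", or "string" depending on what the entire input parses as.
const char *classify(const char *s)
{
    const char *end_of_string = s + std::strlen(s);
    char *endptr;

    std::strtol(s, &endptr, 10);
    if (endptr == end_of_string && endptr != s)  // consumed everything, and something
        return "int";

    std::strtod(s, &endptr);
    if (endptr == end_of_string && endptr != s)
        return "real";

    return "string";
}

int main()
{
    std::printf("%s %s %s\n", classify("42"), classify("3.5"), classify("42nd street"));
    // prints: int real string
    return 0;
}
[/source]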
  12. The replies have convinced me to reduce the scope of the interface and only allow registration of class hierarchies which derive from the library's shared pointer base. It may not be quite as general purpose, but it's going to be a hell of a lot simpler [b]and[/b] totally standard-compliant as well. Thanks.
  13. [quote name='Prefect' timestamp='1306785491' post='4817635'] This doesn't explain why you believe your offset voodoo to be necessary. The proper way to deal with such things is to remember which type the void* was cast from, and then first cast back to that exact same type. Then, afterwards, you use C++ style casts to get the right behaviour.[/quote]
The problem with this is that the logic behind the casting takes place inside the virtual machine when it calls back into the host environment and, as I said, the virtual machine is ignorant of the C++ type system. I suppose one option would be to register a conversion function for every possible conversion, but I'm working with target platforms that have limited system memory, and that would increase the executable size a fair bit in addition to increasing the memory footprint of the virtual machine.
  14. [quote name='Krohm' timestamp='1306774068' post='4817568'] You're playing with [b][color="#8b0000"]fire [/color][/b]here. [/quote]
Sometimes you have to play with fire. I can say with absolute certainty that there is [b]no[/b] way to avoid a void* intermediate stage; the problem comes about with respect to interfacing with a virtual machine that is totally ignorant of the C++ type system. The interface requires passing arbitrary types, which may or may not be derived from any arbitrary number of bases, and those types must be manipulable by the virtual machine as well as the host program. Consider the following code with comments to maybe better explain my thought process:
[source lang="cpp"]
struct Base1 { ... };
struct Base2 { ... };
struct Intermediate1 { ... };
struct Intermediate2 : public Base1, public Base2 { ... };
struct Derived : public Intermediate1, public Intermediate2 { ... };

Derived *d = new Derived;
Intermediate2 *i2 = d; // i2 now points to a location in memory at d+offset
Base2 *b2 = i2;        // b2 now points to a location in memory at i2+offset'
// Since i2=d+offset and b2=i2+offset', then logically it would seem as if it must
// be the case that b2=d+offset+offset'. And further:
b2 = d;
// This must result in the same memory address as the combination of the two casts above,
// or at least I would assume it would have to.
// In other words, if a cast introduces a memory offset, then that offset must be uniform
// across all such casts, because the compiler can't know what type has actually been instantiated.
[/source]
[i]Caveat: Virtual [b]inheritance[/b] is not being considered--casts to/from virtually inherited (public virtual Base) base classes are not supported in this interface.[/i] [i]Caveat 2: My logic may be [b]completely[/b] off base here, and that's why I'm asking the internet![/i]
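Not from the original thread, but a quick way to sanity-check the additivity claim on a given compiler is to measure each upcast's byte offset and compare the chained result against the direct cast. The class bodies below are stand-ins (the post elides them) just so the example compiles:
[source lang="cpp"]
#include <cstddef>
#include <cstdio>

// Stand-in class bodies; the hierarchy mirrors the one in the post above.
struct Base1         { int a; };
struct Base2         { int b; };
struct Intermediate1 { int c; };
struct Intermediate2 : public Base1, public Base2 { int d; };
struct Derived       : public Intermediate1, public Intermediate2 { int e; };

// Byte distance between an object pointer and one of its base subobjects.
template <typename To, typename From>
std::ptrdiff_t upcast_offset(From *p)
{
    To *q = p;  // implicit derived-to-base conversion; this is where the adjustment happens
    return reinterpret_cast<char *>(q) - reinterpret_cast<char *>(p);
}

int main()
{
    Derived obj;
    Derived *d = &obj;
    Intermediate2 *i2 = d;

    Base2 *direct  = d;   // one-step upcast
    Base2 *chained = i2;  // two-step upcast via Intermediate2

    std::printf("Derived -> Intermediate2: %td bytes\n", upcast_offset<Intermediate2>(d));
    std::printf("Intermediate2 -> Base2:   %td bytes\n", upcast_offset<Base2>(i2));
    std::printf("Derived -> Base2:         %td bytes\n", upcast_offset<Base2>(d));
    std::printf("direct == chained: %s\n", direct == chained ? "yes" : "no");  // expect "yes"
    return 0;
}
[/source]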
  15. A project I'm currently working on requires casting from classes in a hierarchy to void* and subsequently casting from that void* to a [i]different[/i] class in the same hierarchy. Now, I'm fully aware that, naively speaking, this isn't a valid thing to do, because casting among classes in a hierarchy may introduce offsets into the actual physical pointer. For example:
[source lang="cpp"]
class Base1 { ... };
class Base2 { ... };
class Derived : public Base1, public Base2 { ... };
// ---------------------
Derived *d = new Derived;
Base2 *b2 = d; // b2 will probably point to a different place in memory from d
[/source]
That being said, my current approach involves storing the offset for valid casts along with the void*, and adding it to the pointer value when I retrieve it. Now that I've gotten the preliminaries out of the way, my question is this: will the offset of casting any derived class to any base class [i]always[/i] be the sum of the offsets of every intermediate cast? My initial thought is that it would have to be, because of the associativity (? .. not sure if this is precisely the right term to express what I'm thinking) of pointer casting, but can anybody see any problems with my assumptions here? It's been a while since I've had to deal with this particular detail of the way pointers and inheritance work.
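For what it's worth, a minimal sketch of the "store the offset alongside the void*" idea described above, assuming non-virtual inheritance. The TaggedPtr name, the stubbed class bodies, and computing the offset at store time are all illustrative choices, not anything from the original post:
[source lang="cpp"]
#include <cstddef>
#include <cstdio>

struct Base1   { int a; };
struct Base2   { int b; };
struct Derived : public Base1, public Base2 { int c; };

// A void* plus the byte adjustment needed to get back to a particular base subobject.
struct TaggedPtr
{
    void          *raw;
    std::ptrdiff_t offset;  // add this to raw to recover the target type
};

// Record a Derived as "a Base2 lives at raw+offset" (non-virtual inheritance only).
TaggedPtr store_as_base2(Derived *d)
{
    Base2 *b = d;  // the compiler applies the pointer adjustment here
    TaggedPtr t;
    t.raw    = d;
    t.offset = reinterpret_cast<char *>(b) - reinterpret_cast<char *>(d);
    return t;
}

Base2 *load_base2(const TaggedPtr &t)
{
    return reinterpret_cast<Base2 *>(static_cast<char *>(t.raw) + t.offset);
}

int main()
{
    Derived d;
    TaggedPtr t = store_as_base2(&d);
    Base2 *direct = &d;  // ordinary upcast for comparison
    std::printf("%s\n", load_base2(t) == direct ? "match" : "mismatch");  // expect "match"
    return 0;
}
[/source]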