
Community Reputation

595 Good

About Shinkage

  • Rank
    Advanced Member


  1. Just thought it should be pointed out that this isn't at all what computers are really doing. They do know how to multiply, and they generally do long multiplication (or partial-product, or perhaps something even more clever) and division pretty much like any person would. Please forgive me if you correct this later in the article and I missed it, but it seems a pretty glaring oversight given the subject matter. It would have made a good example of the classic teaching mechanism of "What I told you earlier isn't entirely true..." to reveal that computers actually use much more clever algorithms, and that binary long multiplication is in fact surprisingly trivial.
  2. Shinkage

    Create an operating system

    I'll try to address your points individually:

    ARM processors cannot be virtualized on a PC, but they can be emulated with QEMU, which is actually a good way to start now that you mention it. It'll let you get off the ground with things like getting a bootloader working, without the trouble of constantly flashing new code onto an embedded board. A good way to get your feet wet before buying a $150 board, figuring out a process for flashing new builds to it, and then buying the even more expensive debug and analysis equipment. In fact, I'd say QEMU is probably a better way to start than the setup I suggested; I just hadn't thought of it!

    This is actually precisely why I'd say x86 is a bad choice. If you're in it for academic reasons, what you should really be interested in is the general theory behind everything you're doing, rather than the minute implementation details. Believe it or not, the vast majority of what you learn working on any even vaguely comparable platform (i.e. a 32-bit von Neumann architecture) will be equally applicable wherever you go when it comes to userland. Things like the gritty details of managing your system's MMU are largely irrelevant outside of OS design, but the general principles behind how a paged MMU works are very useful and pretty consistent across most modern platforms.

    Assuming you buy the story that what you learn will be equally applicable, I can almost guarantee you'll have an easier time actually getting off the ground on a simple SoC. I can't really overstate how much having the entire system on one chip, with one detailed reference manual, simplifies figuring everything out.

    This is actually a very important point that I thought I'd quote for emphasis here. I don't know if that's the case on Intel, but it can make quite the difference.
  3. Shinkage

    Create an operating system

    (Responding for people who genuinely are interested in OS development.) I think x86 is an absolutely horrible platform to start learning this kind of low-level stuff on. Honestly, if you want to learn how to make an OS, for the love of god don't do it on an x86 machine. If you're really serious about it, start with an embedded SoC (system on a chip) evaluation board, like the Beagleboard. There are a few things that make these kinds of systems a whole lot easier to do low-level work on, especially if you're just learning:

    They're designed with that kind of work in mind. In the embedded world it's pretty much assumed that you're going to want to talk directly to peripherals (MMU, GPIO, etc.), so all of that is thoroughly documented, right down to the register bit layouts and usage procedures.

    Everything in one place. With an SoC, basically the entire computer system is stuffed into a single chip, with a single reference manual to figure it all out.

    JTAG. There's really no substitute for even a cheap USB JTAG debugger when it comes to doing the kind of low-level work where standard debugging isn't an option. With a JTAG debugger you can literally step through every instruction the processor executes, one by one, anywhere, any time. Bootloader, kernel code, you can debug it all with JTAG. No 3 beeps and wondering what went wrong like Ryan_001 had to deal with.

    GPIO + logic analyzer. Toggle a GPIO pin (basically just a metal pin on the board that you can very quickly switch between high and low voltage) and watch exactly when it happens on a logic analyzer. With enough pins hooked up to an analyzer, you can get a really good idea of what's going on. It's like printf's, but way better, because the timing is precise down to the nanosecond range. You can even encode more sophisticated messages using I2C or SPI and have the logic analyzer decode them (again, like printf's but better). Also, unlike printf's, it'll work in any execution context, for example a hardware interrupt handler (I don't know how I'd debug those without my trusty logic analyzer).

    Simple instruction set. You're going to have to do some work in assembly, and I personally find ARM an absolute joy to use compared to x86.

    For anybody who actually is interested in learning this kind of stuff, it's (in my humble opinion) by far the most enjoyable kind of software engineering out there. That said, it's not cheap. The above equipment, for example: SoC board: an OMAP processor, and TI documentation tends to be some of the best. JTAG debugger: every JTAG debugger on earth is a pain in the ass to get working, and this one is no different. Logic analyzer: a fantastic piece of equipment that really just works. It'll set you back close to a grand, and that's about the cheapest setup you could put together, but there's really no better setup for learning the real nitty-gritty of low-level software engineering. Plus you get to feel like a mad scientist with wires and pins going everywhere.
  4. Shinkage

    How to read an audio file with ffmpeg in c++?

    Getting to the documentation on the project is very counterintuitive, but see here: Particularly the following two pages: FFmpeg/libav may have many strengths, but a clean, well-specified interface certainly isn't one of them.
  5. Shinkage

    libav vs ffmpeg?

    I can only speak from my experience of having evaluated them at work. There's no doubt that, like with pretty much every fork of this sort, there's bad blood between the developers and a bit of slapfighting and name-calling. When I settled on libav, it was because of a (very slightly) cleaner public ABI (which gave me the possibly incorrect perception that libav is more invested in standardizing the interface so that it doesn't change wildly all the time, as ffmpeg seems to, which is important to us) and because they are much more cautious about merging new features (note: the links you provided are ffmpeg developers complaining after they went and merged a bunch of incompatible libav stuff and, gasp, got regressions). I can almost guarantee, though, that as far as the OP is concerned the difference between them is totally academic. EDIT: I should also note that since the evaluation was for use on an embedded system running a Debian-based Linux, the fact that Debian seems to have settled on libav was influential.
  6. Shinkage

    libav vs ffmpeg?

    FFmpeg emphasizes features while libav emphasizes stability. For the most part you'll find that ffmpeg can do more than libav, but that extra tends to be buggy. Basically it's "bleeding edge vs. stable." Honestly, unless you're concerned with the gritty details of their media handling (most likely not) or need one of the features ffmpeg provides but libav doesn't (most likely not), it really doesn't matter. They're nearly identical as far as the API goes.
  7. Shinkage

    linking problem with mingw

    Completely wild guess here, but have you tried wrapping the #includes for SDL in an extern "C" { ... }? It may have been compiled with C linkage.
  8. Stating the obvious, but have you tried just searching for the file in explorer? If Windows is anything like Linux, then it won't be in any of the directories you're adding to the include file search path and will probably be somewhere like C:\gtk+\lib\glib-2.0\include\. On Linux that file lives at /usr/lib/glib-2.0/include/.
  9. If you find yourself needing to pass smart pointers to your object from within its constructor, you should be using intrusive_ptr rather than shared_ptr. The same can probably be said for the case where you find enable_shared_from_this to be the norm rather than the exception--intrusive pointers are probably a better choice. The reason being that a raw pointer can be implicitly converted to an intrusive pointer, because the reference counting information is embedded in the pointee itself. Of course, the same caveats pertaining to passing a raw pointer from a constructor will apply to passing an intrusive pointer, as well as some additional caveats to boot.
  10. Not really sure what you're suggesting here. Type information is already tracked perfectly, the problem is that the conversion is initiated from the virtual machine which doesn't partake of the C++ type system. The only option (as far as I can figure) for doing the conversion in the host would be to register a conversion function for every possible conversion.
  11. Shinkage

    get type of string in C++

    [source lang="cpp"]
    const char *data;
    // ...
    char *endptr;
    strtol(data, &endptr, 10);
    if(data != endptr) {
        // String is an integer.
    } else {
        strtod(data, &endptr);
        if(data != endptr) {
            // String is a real.
        } else {
            // String is a string.
        }
    }
    [/source]
    Keep in mind, this only tells you if the string begins with the specified data type. If you want to test whether the entire string is consumed by the conversion, you'll have to test endptr against the end of the string as well.
  12. The replies have convinced me to reduce the scope of the interface and only allow registration of class hierarchies which derive from the library's shared pointer base. It may not be quite as general purpose, but it's going to be a hell of a lot simpler and totally standard-compliant as well. Thanks.
  13. The problem with this is the logic behind the casting takes place inside the virtual machine when it calls back into the host environment and, as I said, the virtual machine is ignorant of the C++ type system. I suppose one option would be to register a conversion function for every possible conversion, but I'm working with some target platforms with limited system memory and that would definitely increase the executable size a fair bit in addition to increasing the memory footprint of the virtual machine.
  14. Sometimes you have to play with fire. I can say with absolute certainty that there is no way to avoid a void* intermediate stage; the problem comes about with respect to interfacing with a virtual machine that is totally ignorant of the C++ type system. The interface requires passing arbitrary types, which may or may not be derived from any number of bases, with those types being manipulable by the virtual machine as well as the host program. Consider the following code, with comments to maybe better explain my thought process:
[source lang="cpp"]
struct Base1 { ... };
struct Base2 { ... };
struct Intermediate1 { ... };
struct Intermediate2 : public Base1, public Base2 { ... };
struct Derived : public Intermediate1, public Intermediate2 { ... };

Derived *d = new Derived;
Intermediate2 *i2 = d; // i2 now points to a location in memory at d+offset
Base2 *b2 = i2;        // b2 now points to a location in memory at i2+offset'

// Since i2=d+offset and b2=i2+offset', then logically it would seem as if it
// must be the case that b2=d+offset+offset'. And further:
b2 = d;
// This must result in the same memory address as the combination of the two
// casts above, or at least I would assume it would have to. In other words,
// if a cast introduces a memory offset, then that offset must be uniform
// across all such casts, because the compiler can't know what type has
// actually been instantiated.
[/source]
Caveat: Virtual inheritance is not being considered--casts to/from virtually inherited (public virtual Base) base classes are not supported in this interface. Caveat 2: My logic may be completely off base here, and that's why I'm asking the internet!
  15. A project I'm currently working on requires casting from classes in a hierarchy to void* and subsequently casting from that void* to a different class in the same hierarchy. Now, I'm fully aware that, naively speaking, this isn't a valid thing to do, because casting among classes in a hierarchy may introduce offsets into the actual physical pointer. For example:
[source lang="cpp"]
class Base1 { ... };
class Base2 { ... };
class Derived : public Base1, public Base2 { ... };
// ---------------------
Derived *d = new Derived;
Base2 *b2 = d; // b2 will probably point to a different place in memory from d
[/source]
That being said, my current approach involves storing the offset for valid casts along with the void*, and adding it to the pointer value when I retrieve it. Now that I've gotten the preliminaries out of the way, my question is this: will the offset of casting any derived class to any base class always be the sum of the offsets of every intermediate cast? My initial thought is that it would have to be, because of the associativity (? ... not sure if that's precisely the right term for what I'm thinking) of pointer casting, but can anybody see any problems with my assumptions here? It's been a while since I've had to deal with this particular detail of how pointers and inheritance work.