About Bregma


  1. The answer really depends on the operating system you're using, if any. If you're not using an operating system, it depends on the hardware. At a basic level, input is handled by reading from a memory location, a device file, or message events from an input server; you might have to poll for input, or you might receive it asynchronously. Output is, as you say, rendering 2-D colours to a memory region and taking the appropriate action to have that memory region displayed on an output device (you might be writing directly to screen memory, or you might have to submit the particulars of your memory area, such as its address and size, to another memory area, device file, or buffer stream going to a display server). You may have to instruct the output to swap buffers. You will need to handle timesteps to synchronize input, output, and in-game processing. If you want to know how SDL does these jobs, read the SDL code; it's all open source. If you want to do your own input and output, read the SDL code to learn how they do it and get an understanding of how input and output work on a number of operating systems. It turns out there isn't a small, simple answer to your question. No one can point to a function in the C standard library for getting input and rendering in the DOM. C is not JavaScript or a full-stack web development kit.
  2. Without seeing code, my guess is that you include unnecessary headers, or else you're using a lot of 'header only' libraries. You can reduce the number of unnecessary headers by including only those you need, by using forward declarations (which, for compile safety, can be placed in smaller, simpler headers), and by making sure you use include guards properly. Header-only libraries make distribution of a library easier, but the cost is increased compilation time. My next guess is that you're using template-heavy code, which also increases compilation times. Header-only libraries and template-heavy code go hand in hand, and both are compile-time performance killers. Sometimes you just need to go back to the olden days. When I was young, compilation could take hours between when you submitted the job at the card reader and when you got the results back at the line printer. The best time-saving technique was to make sure your code was correct in the first place, a practice called 'desk checking'. It seems modern technology is expanding to consume all of the time- and labour-saving convenience it has introduced.
  3. If used correctly, a single-allocated vector (capacity known at creation time, same as an array) will always be faster than an array, because optimizing the data flow of memory pointed to by degenerate pointers is hard. We had to disable the part of GCC that optimizes that, because it would lose track of the fact that memory was being written to and treat the pointer (array) as read-only as it travelled through function calls, under certain edge conditions (eg. a lambda reference capture in which it was the second capture and the first capture was larger than 12 bytes -- the life of a compiler support engineer can be interesting). The compiler always has full knowledge of a vector as it gets passed around, so that problem cannot happen and amazing optimization opportunities can obtain. The C++ standard library was developed a generation before Microsoft's internal coding guidelines were even thought of. Microsoft's style is based on the style of the Apple Toolbox they copied, which in turn was based on the Pascal language. The Pascal conventions developed in Europe differed from the Unix and C conventions developed at Bell Labs in the USA in many ways, including the use of upper-case letters in identifiers and the use of non-alphanumeric characters that do not appear on localized keyboards in the respective countries (trigraphs, anyone?). One is not objectively better than the other, and it's not really reasonable to dismiss something just because it's not what you're used to. The answer to the question of what's wrong with "Vector" instead of "vector" is this: nothing, except that's not how it is. A decision was made between arbitrary choices in which one had the greater weight of tradition and consistency at the time, and trying to retrofit later social trends has no tangible benefit and great cost. As for me, I can't stand capital case. I didn't cut my teeth on Microsoft; I learned using C, Fortran, Algol, and the Bourne shell on Unix and VMS, all of which used all-lower-case. Capital case is too unlike writing plain English, in which I capitalize only the first word of a sentence and some proper nouns. I do not Capitalize Common Nouns (identifiers) or any Verbs (functions). Programs are literature intended for other readers, and forcing them to read Chaucer is undesirable.
  4. Bregma

    My main complaint with OOP

    The problem with OOP is that you can always write bad code in any language, using any paradigm. Coming up with examples of bad code written in a given language or paradigm does not in fact demonstrate that the language or paradigm is bad; it's just supporting evidence that bad coders can create bad code. Languages and paradigms are forever being touted as a great way to get less-expensive labour to create product at greater profit (although not always in those words -- but do a close reading on claims of "reduced time to release" and "less error-prone"). It turns out software development is like those squishy things where if you squeeze one part it bulges out in another part. OOP makes reasoning about many things easier (and reasoning about things is the biggest cost in software development and maintenance) at the cost of more typing (always cheap) and either better planning (large up-front cost but very low cost amortized over the lifetime of the software) or constant refactoring (lower initial cost but larger cost amortized over the lifetime of the software). If your goal is a write-once-and-throw-it-over-the-wall app, OOP is a poor choice and Agile is your friend. If you need your stuff to run on a HA server for years processing trillion-dollar financial transactions, double down on OOP with a lot of design up front. tl;dr OOP is no more problematic than any other design paradigm, but it can be abused and can certainly add to development and maintenance cost when used incorrectly.
  5. You'll love this proposal in front of the C++ standards committee. Why use a DSL when you can just customize the language itself?
  6. You can have const reference members, but you have to follow the rules. Be aware that having a const reference member means you no longer have value semantics in your object, and that means things might not work the way you intuitively expect them to. If you use pointers, you still have to follow the rules. It takes the same amount of space in memory, but what it tells the reader is subtly different.
  7. Bregma

    Has C# replaced C++?

    Hardly. DSOs (dynamic shared objects -- DLLs, .so files, .dylibs) are relative newcomers. I remember when AIX on the RS/6000 didn't support them at all, and certainly RSX on the PDP-11 didn't support them -- how could it, when it didn't even support virtual memory, although it supported shared pages through the FORTRAN /COMMON/ construct. Nope, DSOs are maybe a few decades old at best, while programming has been around for centuries. @Gnollrunner is completely correct when he distinguishes between native binaries and those that need a native interpreter. They are different solutions to the problem of portability, with different trade-offs. They all, however, require non-trivial runtimes.
  8. What do your LOD copy constructor and assignment operators look like?
  9. Bregma

    Has C# replaced C++?

    Not sure what you're trying to say with that sentence fragment, but std::thread and friends are a completely portable, standard C++ API for threading, supplied by the C++ runtime and built on top of the OS layer. If you were to skip the runtime, you would need to write some assembly to load registers with the appropriate values and trap into the kernel. In the case of Linux on x86_64, for example, you would load the value 56 (the clone system call) into the RAX register, among other values and registers, and issue the `syscall` instruction. Naturally you would need to set up tables, track values, and adjust things both before and after the OS call, keeping in mind you call it once but it returns twice. Other OSes have a completely different set of registers and values involved. Or a one-liner of std::async(myfunc) gets you going using the C++ runtime. Fact is, if you're not loading hardware registers manually, you're using a language runtime. That goes for the CPU and GPU, and for peripherals like your hard drive, network card, keyboard, and mouse. It includes OS concepts like threads and processes. For almost everything else you might want to do in a program (including high-quality graphics-intensive games), you never need to go directly to the OS. The C standard library provides everything you need to do basic text-based programming (what kiddies these days call 'console programming', not to be confused with programming for consoles), and libraries like OpenGL provide an additional facility for hiding the nasty business of talking to a GPU. Sure, the C (or even C++) runtime is not the same as the JRE or .NET in terms of how the programs get linked together. It's the same as Java or C# in that the runtime has to be available on the system and the program has to be loaded into it in order to run. PS: some might be surprised that the "loader" that takes a .EXE file and turns it into an executing program is actually part of the C runtime, not part of the OS. The whole question of "will C# replace C++" is like asking if Uber will replace Honda.
  10. Bregma

    Has C# replaced C++?

    When you throw an exception in C++ and it gets caught, how does the machine language instruction know to execute the destructors in all the intervening contexts to implement the miracle of RAII? When I'm writing assembly, which mnemonic do I use to do that? You might be surprised by just how much C runtime there is. You might also be surprised that not every OS bundles the C runtime, since it's not in fact a part of the base OS support. It just happens to always be installed on machines you have experience with. On the flip side, I've dealt with plenty of embedded systems developed without an OS but written in C.
  11. Bregma

    Has C# replaced C++?

    Well hey, I'm a toolchain developer by profession, so I feel qualified to answer that. Yes, C has a runtime. It's called "libc" (or MSVCRT/MSVCRTD on one special platform). C programs are built using a memory model that rarely coincides with the OS kernel's idea of memory, and very rarely talk to drivers. It's all done through the abstraction layer provided by the libc runtime. In addition, the toolchain will provide some embedded runtime code for application startup/teardown and a few other things related to the C memory model (like thread-local storage, atomics support, asynchronous signal handling, atexit handling, setjmp/longjmp, and some interactions with floating-point coprocessors). Your toolchain driver will invisibly handle all the grody bits for you behind the scenes; few people are fully aware of what goes on below the abstract C machine. Things like "the stack" in C are just implicit and part of the C abstract machine; the OS is unaware of how the application is using its memory. The C "heap" is part of the C runtime: it usually uses a threadsafe caching slab allocator, but again the OS is unaware of how the application is using its memory. Sometimes applications talk directly to drivers through a slim shim in the C runtime called "ioctl()", but more and more of that is done through the /proc filesystem on modern POSIX systems (I don't know how it's done on Windows) using just the regular open/read/write calls provided by libc. C++ has an even larger runtime to handle exceptions, RTTI, and of course the C++ standard library (no, it's not all templates, and some common templates like std::string usually have concrete implementations in a shared object too). The C++ standard library links to the C standard library, and C++ has a different startup/teardown than C (consider constructors and destructors for namespace-level objects with static or thread-local storage duration).
Interestingly, you'll find things like the .NET runtime and the Java runtime are written in C (or maybe C++) and are usually built on top of the C runtime. That makes those runtimes very (compile-time) portable. The C runtime has a Unix heritage (just like DOS/Windows, Mac OS, and Linux/Android do). It's not the only way to do things: there are development environments that do not build on libc. For example, the Go language has its own runtime that does not use libc. Go is extremely portable, as long as you're targeting an OS that Google supports. Some of that portability is because Go requires static linking, so the entire runtime is directly embedded in each and every application. Ada is another such language: it's Ada turtles all the way down to the kernel context switches. It's possible to build a spycam or missile guidance system with no C in it at all. Didn't mean to write an essay here, but it's not often I get to talk about what I do for a living to someone who may be remotely interested.
  12. You're talking about a non-uniform probability density function (PDF). Typically you'd want a Gaussian, sometimes called normal, distribution (the classic single-humped camel graph), which can be generated using something like the Box-Muller algorithm. If you use the right search terms you can find several libraries that will give you what you're looking for (the C++ standard library, for one, provides std::normal_distribution).
  13. If you're on Linux, use Wireshark to watch what is going over the wire. If the response is coming in to your machine but you're not receiving it, the problem is in how you read the socket. If you're not getting a response back at all, the problem is elsewhere. Use telnet to connect to the remote end and emulate what you think your program is doing, while watching the Wireshark output. See if they're different.
  14. Bregma

    Has C# replaced C++?

    At least one popular commercial game development product uses C# as its primary development language. Because of its popularity among independent and hobbyist developers, you will encounter a lot of C# questions and code in game developer social media. It's sort of a selection bias. You might find most big commercial game development shops don't use these third-party tools (and don't use C#), but then again they don't hang around on social media asking about how to use their tools, either. It's interesting to note that the products that provide a C# interface for customers are themselves written in C++. If you want to go deeper, it's also interesting to note that the C++ runtime is itself written in C, although most modern C toolchains are written in C++.
  15. Some suggestions: (1) Don't use 'Connection: close' in the header (it shouldn't hurt, but why complicate things when troubleshooting?). (2) You're using some kind of mysterious third-party library. Start by consulting the documentation for the library. Since the problem appears to be either your use of the library or the library itself, and you have posted nothing about the library, including its name and where someone else can find information on it, it's not possible to offer any kind of help with your problem here.
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
