#5256043 Drawing graphics in C++ w/out APIs?

Posted by Ravyne on 07 October 2015 - 11:27 AM

Without an OS getting in the way, talking to hardware still requires poking interrupt handlers and hardware registers and DMA transfers and so on, none of which can be done with pure C++.


Well, that's not quite true. You certainly *could* do those things if you had access to (and understood) the bare metal. It's true, of course, that going through at least a BIOS is common and more than enough "to the metal" for most everyone's tastes, but there's nothing inherently special about a BIOS; it's just machine code implementing an abstraction layer over the barest of hardware. All those interrupt handlers and hardware registers, including the ones that control DMA, can be reached from the CPU, so I don't see how that would prohibit C++.
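
As a rough sketch of what "reaching a hardware register from C++" looks like in practice -- the addresses and the command value here are made up for illustration; real ones come from the device's datasheet:

    #include <cstdint>

    // Purely illustrative addresses and command value -- on real hardware
    // these come from the device's datasheet, not from this sketch.
    constexpr std::uintptr_t STATUS_REG  = 0x40000000;
    constexpr std::uintptr_t COMMAND_REG = 0x40000004;

    // volatile tells the compiler every access really touches the device,
    // so reads and writes are not cached, reordered, or optimized away.
    inline volatile std::uint32_t& hw_reg(std::uintptr_t addr) {
        return *reinterpret_cast<volatile std::uint32_t*>(addr);
    }

    void kick_device() {
        while (hw_reg(STATUS_REG) & 0x1) { }  // spin until the device is idle
        hw_reg(COMMAND_REG) = 0x42;           // issue a (made-up) command
    }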


Agreed, though, that it's entirely impractical to try talking to modern hardware and getting modern features out of it. If that's your hobby, take up driver development, but even that's done at a substantially higher level of abstraction (with the OS providing IOCTL interfaces and other necessary or useful primitives).

#5255928 Drawing graphics in C++ w/out APIs?

Posted by Ravyne on 06 October 2015 - 07:28 PM

You can write a software rasterizer -- basically, you create a region in memory that's an array of pixels, and then you write your own individual pixels into it. When you're finished, you can use your host's windowing system to display it, or you can shuffle it off to OpenGL/Direct3D through the usual layers, but without using any of their drawing routines. In the old days of DOS that's how graphics were done, except DOS was single-user, so you could take direct ownership of the graphics adapter's memory and write straight into it.
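
A minimal sketch of that idea -- the struct and names here are mine, just to illustrate the "array of pixels" part:

    #include <cstdint>
    #include <vector>

    // A bare-bones software framebuffer: a flat array of 32-bit pixels.
    struct Framebuffer {
        int width, height;
        std::vector<std::uint32_t> pixels;   // 0xAARRGGBB, row-major

        Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

        void put_pixel(int x, int y, std::uint32_t color) {
            if (x >= 0 && x < width && y >= 0 && y < height)
                pixels[y * width + x] = color;   // row y, column x
        }
    };

When a frame is complete, you hand pixels.data() to whatever display path you have -- a window blit through the OS, or a texture upload through OpenGL/Direct3D.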


Drawing pixels, lines, circles, ellipses, and bitmaps this way is pretty typical of a 100-level course in computer graphics. A 150- or 200-level course often extends this to 3D, where you first transform the 3D geometry in a software rendering pipeline, and then you rasterize shaded and texture-mapped triangles into the bitmap.
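
For a taste of what those primitive routines look like, here's the classic integer-only Bresenham line, drawing into the Framebuffer sketched above:

    #include <cstdlib>

    // All-octant Bresenham: steps one pixel at a time using only
    // integer adds and compares -- no floating point, no division.
    void draw_line(Framebuffer& fb, int x0, int y0, int x1, int y1,
                   std::uint32_t color) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        for (;;) {
            fb.put_pixel(x0, y0, color);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }   // step in y
        }
    }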


On most platforms you can't easily talk directly to the GPU, and GPU vendors aren't terribly open about their drivers' API surfaces or hardware command registers (though they've lately started being more open, with the push for open-source drivers), and even if they were, you're talking about 2000 pages or more of datasheet to get your head around.


If you want to talk directly to a GPU, your best bet would be something like the Dreamcast or Raspberry Pi, but understand that even those "simpler" systems are vastly complex, and talking to them at a low level doesn't look much like graphics programming, if that's the part that interests you.

#5255925 Using a physics engine on the server

Posted by Ravyne on 06 October 2015 - 06:57 PM

This reminds me of a statement I once read in an internet RFC (request for comments) document:
Be permissive in what you accept, but strict in what you send.

This statement holds true for any protocol, including games.


Sure -- actually, a good application of that mantra would be something like "clients and servers should be able to deal with bad data (corrupted, maliciously crafted), and send only good data". For example, when I said earlier that the client shouldn't ask the server to move through a wall, the server should still never trust a client not to try -- lots of hacks for different games involve sending deliberately misleading messages to the server, so the server needs to validate whatever the client attempts to do. On the flip side, non-compromised clients -- and especially the server -- should be very strict about what they send and how it's sent (for example, don't let unused bits/bytes in a message go out uninitialized).
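
A toy sketch of that server-side mistrust -- the types, names, and thresholds here (Player, MoveRequest, TICK_SECONDS) are illustrative, not from any particular engine:

    #include <cmath>

    struct Vec2 { float x, y; };
    struct Player { Vec2 pos; float max_speed; };
    struct MoveRequest { Vec2 target; };   // what the client asked for

    constexpr float TICK_SECONDS = 1.0f / 20.0f;  // 20 updates per second

    float distance(Vec2 a, Vec2 b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    // Placeholder for a real collision query against world geometry.
    bool segment_hits_wall(Vec2 /*from*/, Vec2 /*to*/) { return false; }

    bool validate_move(const Player& p, const MoveRequest& req) {
        // Reject anything faster than the player could legally move this tick.
        if (distance(p.pos, req.target) > p.max_speed * TICK_SECONDS)
            return false;
        // Reject moves that pass through solid geometry.
        if (segment_hits_wall(p.pos, req.target))
            return false;
        return true;
    }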


[Edited to add] Defending against malicious packets is super important. If you recall the Heartbleed security vulnerability from a couple of years ago, that was a crafted packet in which the client claimed its payload was longer than it actually was, and the server simply trusted that claim and echoed back adjacent memory -- though I don't believe the trusting was intentional; more of a logic bug. Bad idea in any event: it compromised tons of services of all sizes -- webmail, banking, Facebook even, IIRC... I remember changing basically all of my passwords because of it.
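
The defense is essentially a one-line bounds check: never let a sender-supplied length exceed what actually arrived. A minimal sketch, with a made-up two-byte length header:

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // The check Heartbleed lacked: the sender claims a payload length,
    // and the receiver must verify that claim against the bytes received.
    std::optional<std::vector<std::uint8_t>>
    read_payload(const std::uint8_t* packet, std::size_t packet_len) {
        if (packet_len < 2) return std::nullopt;           // too short for header
        std::uint16_t claimed = std::uint16_t(packet[0] << 8 | packet[1]);
        if (claimed > packet_len - 2) return std::nullopt; // claim exceeds reality
        return std::vector<std::uint8_t>(packet + 2, packet + 2 + claimed);
    }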

#5255894 Using a physics engine on the server

Posted by Ravyne on 06 October 2015 - 02:51 PM

I'm not really equipped to give low-level advice on the topic, but from a higher level, I find it useful to think of the client as a kind of "smart terminal" into the game as the server sees it -- the client is neither the game, nor is it a "dumb terminal" which only sends keystrokes and renders what the server last told it to.


The client is a "smart terminal" because it handles on its own the things that either don't matter for simulation consistency between player clients, or which can be initialized once by the server and then left to run in a way that will remain consistent -- for example, debris/decals/particle effects can fall into either of these categories depending on whether they are purely cosmetic or have an affect on gameplay. Another reason is that the client must make it's best-guess about the current state of the simulation between messages from the server, as a basic example, you might render at 60 frames per second, but you might only get network updates at 10, 15, or 20 times per second (and each of which might have between one-half and several frames of latency by the time you receive it) -- So when a projectile or player is last known to be moving in a given direction, the client has to assume it keeps doing so, but also smoothly interpolate back inline with what the server says to be true when it receives the next relevant message. Similarly, for things like collision detection, the client can know that its impossible to walk through a wall and so should not permit the player to do so while waiting for the server to reprimand it, nor should it even ask the server whether its possible. In short, the client should be smart enough to give its best estimate of what the server will say to be true, based on its imperfect information, and be able to get back in line by smoothly integrating the latest authoritative information from the server.

#5255893 What's the best system on which to learn ASM?

Posted by Ravyne on 06 October 2015 - 02:34 PM

It really depends on what you value.


For accessibility, x86/x64 is simple because you can do it right on your PC. This means fewer headaches -- debugging your assembly code is a lot easier when it's running on your host, and it's also easier to write the high-level parts of your program in, say, C or C++, without the slight pain of dealing with cross-compilers, deployment to a device, etc. ARM, in the form of a Raspberry Pi, is also a good choice for the same reasons (with the Pi being a complete, if modest, Linux-based PC).


What you don't get on either of those platforms is unfettered access to the machine. If you're interested in that, you probably want something like an Arduino or another microcontroller kit -- something where you can run bare-metal without an OS. Depending on your platform, this experience may or may not be very transferable to larger devices, if that's a goal; for example, many small microcontrollers have no caches (or very simple caches), and CPU clock speeds low enough that RAM access is essentially single-cycle -- which is not at all true of PCs, or even of many higher-end microcontrollers. The Pi makes another showing here -- it's reasonably well documented now, and I've seen material on programming it bare-metal-style; the only downside is that it's a lot to try to understand, and in many ways it's more accurate to say that the Pi's SoC is a GPU that happens to have a CPU, rather than the other way around. For my money, the Game Boy Advance is a good platform in this space -- it's very well documented by the homebrew community, tools are readily available (emulators are a great resource), and the hardware capabilities are interesting but not overwhelming in complexity or number. Bonus: you can distribute what you create as a ROM if you're interested in creating complete games, or even play on a real GBA.
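
To give a flavor of how approachable the GBA is, here's the classic homebrew "first pixel" in video mode 3 (240x160, 15-bit color), as documented by community references like the Tonc tutorial:

    #include <cstdint>

    // Display control register and video RAM, at their documented addresses.
    #define REG_DISPCNT (*(volatile std::uint16_t*)0x04000000)
    #define VRAM        ((volatile std::uint16_t*)0x06000000)

    constexpr std::uint16_t rgb15(int r, int g, int b) {  // 0..31 per channel
        return std::uint16_t(r | (g << 5) | (b << 10));
    }

    int main() {
        REG_DISPCNT = 0x0403;                     // mode 3, enable background 2
        VRAM[120 + 80 * 240] = rgb15(31, 0, 0);   // one red pixel, mid-screen
        for (;;) { }                              // spin; no OS to return to
    }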


Going full retro, it's hard to recommend against the C64 (6502) or the Amiga (if you want to go 68k) -- both are very interesting, very capable, and very well-documented machines, with communities that are still going strong. For its time, each was a paragon of hardware design, not just for how much it was capable of, but for how elegantly it achieved it.

#5255029 Game Development as a Career

Posted by Ravyne on 01 October 2015 - 01:18 PM

I agree with ~70% of Katie's post, disagree with the other 30%, but so over-agree with the following bit that I think Katie deserves +500 points:

Have you written any games? Why not?

Your degree will be borderline useless whether you graduate or not; what's going to make all the difference is whether you can actually demonstrate any level of competence at game development. Make games.

I am currently trying to develop a fps


Horrible idea. 10 small completed fleshed-out projects with whistles and bells are 100× more impressive to a potential employer than a half-finished unpolished FPS game. How do I know you will end up making a half-assed FPS game? Because you are not a major studio with a huge budget and you are a beginner.
Why do you think it is a good idea to use beginner-level skills to pursue such a demanding project?

A polished pack of games, including Tetris, Pac-Man, a space shooter, etc., even if all in 2D, will get you much farther as portfolios go.

A good portfolio can make up for a poor education (I dropped out of high school, even), so no matter what you choose to do regarding your school (I'm much more middle-ground than the above posters -- there are downsides to dropping out, but also to sticking with it), it is extremely important that you don't screw up your portfolio.


And this too.


I'll also add that, in addition to full, polished, small-scale games, it's also a good idea to pick individual systems that you can make a non-game demo for, and try to build that out to the state of the art. The stable of simple, well-polished games will show your breadth, while a focused demo of a state-of-the-art system will show that you can do depth too.


An example of a good system to pursue in depth would be something like particles -- it's manageable in scope, but non-trivial, and a state-of-the-art implementation would give you a good survey of modern design concerns both high-level (interfaces, composability) and low-level (data-oriented design, SIMD/GPGPU, rendering lots of particles with correct transparency). Mind you, state-of-the-art is a goal to work towards, not an end -- you don't have to get all the way there, but the closer you end up, the better. You could easily spend as much time on this one state-of-the-art system as you do on any of the other polished, small-scale games.

#5254844 How can I make a text parser?

Posted by Ravyne on 30 September 2015 - 12:01 PM

C++ has regex now, but regex alone isn't enough to implement a parser for a programming language -- the basic shortcoming is that regex can't count things, and counting tokens in one form or another (e.g. keeping track of matching brackets is a form of counting) is necessary in any programming language you'd want to use. Regex is sufficient for a more declarative syntax, though -- say, key-value pairs in an initialization file.


Regex is a reasonable tool for tokenizing symbols, though -- you just need to build the parsing/semantic analysis around that.
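
A minimal sketch of that division of labor using std::regex -- the three token classes here are just for illustration:

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        // One alternation per token class: number, identifier, operator.
        const std::regex token(R"((\d+)|([A-Za-z_]\w*)|([+\-*/=()]))");
        std::string src = "x = 3 + foo(42)";

        // The regex only slices the input into tokens; deciding what the
        // token stream *means* is the parser's job, layered on top.
        auto begin = std::sregex_iterator(src.begin(), src.end(), token);
        for (auto it = begin; it != std::sregex_iterator(); ++it) {
            const std::smatch& m = *it;
            const char* kind = m[1].matched ? "number"
                             : m[2].matched ? "identifier" : "operator";
            std::cout << kind << ": " << m.str() << '\n';
        }
    }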

#5254840 Why did COD: AW move every file into the same folder where .exe is?

Posted by Ravyne on 30 September 2015 - 11:51 AM

I don't imagine it's any sort of advantage. It's certainly not a performance advantage. Don't take it as something you should do. Or shouldn't do, for that matter. It's entirely incidental.

#5254552 What exactly is API-First?

Posted by Ravyne on 29 September 2015 - 01:24 AM

Having a solid, well-designed, strongly-versioned API surface certainly is important, especially on the "web" (whereby a web service provides the business logic for a diverse array of web-based and app-based interfaces, across a diverse set of devices with different connectivity and differing abilities to cache content, etc.). It's certainly non-trivial to do, and IMO the diversity of ways we access the same fundamental service is what's novel this time 'round the wheel. Add to that that the implementers of these ways and means are often third parties, working to optimize the experience for their particular device or what-have-you (e.g. I'm pretty sure Netflix didn't write the Netflix app on my Samsung TV), and it's clear why a good API is super important. UNIX did pretty well, and there are strong (though not universal) conventions, but it's not a paradise either -- versioning of interfaces used in scripts can be pretty janky, and text isn't always an optimal medium for programs to talk over (though it does have a strong benefit in being a pretty good lowest common denominator).

#5254509 What exactly is API-First?

Posted by Ravyne on 28 September 2015 - 05:33 PM

All this really advocates is the separation of the user interface from the programmatic interface (to wit: API), specifically here in the case of the web. It basically espouses building your web service as an API and letting an independent team of designers (first- or third-party) worry about how to expose it in a way that has end-user appeal; it also seems to take in API concerns such as versioning.


None of that is really novel, but it's certainly very true -- give me a great command-line-first tool (or, better yet, have that command-line interface load the logic from a .DLL and let me do the same), one that I or someone else can put a great UI on top of, and I'm a very happy guy. GUI-first applications get in my way when I want to script something the tool does -- I can't do it from a command line, over SSH, or on a headless server. Likewise, you don't want to hide your great web service or web app behind its user-facing UI, or muddle the two together -- it puts you into a walled garden, and sooner or later someone will make the same service available in an open, interoperable way that will make you obsolete.


The Unix/Linux philosophy is based on highly interoperable command-line tools, and the things you can do with them are extremely powerful. This is just that, but for web services, and with a few things we've learned over the years thrown in.

#5254417 pass an array? (stock c++)

Posted by Ravyne on 28 September 2015 - 11:34 AM

Unless you have a really good reason to avoid heap allocation, I would always recommend writing a wrapper around a single dimensional std::vector that provides a 2D interface.
Multidimensional array syntax is a nightmare in C and C++.

I don't think that's necessarily what the OP is doing. They're looking for a way to pass an array of unknown size to a function. A pointer along with a length or stride is perfectly workable and efficient (though it makes the interface prone to mistakes).
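
A minimal sketch of that pointer-plus-dimensions approach (the names are mine):

    #include <cstddef>
    #include <vector>

    // One workable shape for "pass a 2D array of unknown size": a pointer
    // to the first element plus explicit dimensions. Efficient, but nothing
    // stops a caller from passing the wrong width -- the mistake-prone part.
    void fill(float* data, std::size_t width, std::size_t height, float value) {
        for (std::size_t y = 0; y < height; ++y)
            for (std::size_t x = 0; x < width; ++x)
                data[y * width + x] = value;   // row-major: row y, column x
    }

    int main() {
        std::vector<float> grid(8 * 4);        // any contiguous storage works
        fill(grid.data(), 8, 4, 0.0f);

        float stack_grid[4][8];                // ...including a plain C array
        fill(&stack_grid[0][0], 8, 4, 0.0f);
    }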

BTW, those mistakes are exactly the kind of thing SmkViper noted above that array_view is meant to fix; however, I believe array_view only works on a single dimension -- I could be mistaken, though. It's probably a good idea to adopt it as a means of passing (segments of) arrays around -- even for a sequential container like vector, array_view is likely better than an iterator pair.


[EDIT] Now that I've had a chance to check, I was indeed mistaken, array_view does indeed support multiple dimensions as SmkViper points out below. 

#5254045 What is more charged?

Posted by Ravyne on 25 September 2015 - 02:15 PM

As Frob says, financially speaking, game development is about the worst thing you can do with the requisite skillset -- that is, someone qualified to do game development, whatever their level, can certainly earn a higher income using those same skills (e.g. programming and all the things that go into it -- not "Unity 5") elsewhere. Unless you're lucky enough to be a founding member of a studio, or occupy a real position of power in the company (e.g. someone who sets the technology direction for the entire studio or manages multiple teams of people), you're unlikely to even achieve a salary that is common in other industries. The disparity can be pretty marked -- I've known games-industry jobs to offer half to two-thirds of the starting salary of other places; it tends to get closer to the 80% range once you've shipped a title or two, and then doesn't really change unless you're really a stand-out. Not only are "high" salaries in the games industry lower, they're also less common, in my estimation. Likewise, bonuses tend to be lower and/or less frequent.


I don't think anyone would advise becoming a game developer for financial reward alone. You need to really enjoy the work itself (not just the end product, either) -- loving games does not equal loving making games. And if you don't love making games, it's going to be really hard to stomach watching people with similar or lesser skillsets make more money than you, work fewer than 50 or 60 hours per week, have time for their families, and be able to afford nicer things for them. I don't want to make it all sound too stark, because there's a lot of reward there for a certain kind of person, but you shouldn't go in with rose-tinted glasses either.

#5254026 Game Development Laptop

Posted by Ravyne on 25 September 2015 - 12:17 PM

Another bit worth mentioning: it seems like external GPUs are finally reaching maturity this year and next. With USB Type-C or Thunderbolt, you can drive an external mobile, or even desktop-class, GPU over that one cable. Streaming content and back-and-forth compute loads will suffer some due to the narrower interface, but framerates should be mostly good. D3D12/Vulkan should help too, since they allow devs to leave more stuff on the GPU and to re-use buffers more flexibly.


If you can deal with a "good enough" integrated/mobile GPU on the go, but want more graphics oomph at home, that's possible. There are even some people making "mobile" GPU docks that have higher-end mobile GPUs inside (the kind you'd find in an 8 lb, 17" gaming laptop -- still a big step up compared to the integrated or lower-power discrete GPUs found in thinner and lighter laptops), so if you don't need a full desktop GPU, the dock is a bit easier to pack along if you want to take it anywhere.

#5253915 [Java] Most Efficient Way To Iterate Through A File ?

Posted by Ravyne on 24 September 2015 - 06:24 PM

You're right that it's not purely due to visible string allocation, but I'd wager it's still a significant portion -- the invisible kind. You talk about string comparisons, character sequences, splitting strings -- all of those produce allocations of their own, behind the scenes. Their invisibility makes them worth being concerned about, and it's not inconceivable that millions of invisible allocations could arise from a largish config file, if it were processed in a pathologically bad (but all too common) way.


Furthermore, I'll point out that, by the looks of it, your benchmark almost certainly isn't capturing GC time, since A) Java garbage collection (like most) is non-deterministic, and B) your container of strings outlives your ending timestamp anyway. Heck, without printing out a random sampling of the stored strings, you can't even be sure the compiler hasn't elided them away entirely. Nor do your pure allocations reflect the constant copying of data that would be incurred by common parse-by-one patterns (as I said earlier -- read an additional character and decide whether you have enough to do something with it). I don't mean to pick on you, and I don't even mean to pick on Java, but without seeing the OP's code, your isolated micro-benchmark doesn't really tell us anything about how allocations might be affecting it.

#5253901 Game Development Laptop

Posted by Ravyne on 24 September 2015 - 04:49 PM

I moved to a (higher-end) laptop for serious work a few years ago. You pay a premium for portability, but it's nice to be able to get out of your home office (maybe to class, maybe to a coffee shop, whatever), and important when you need to show off what you're working on. That said, I'm not fond of working at a small screen (or just one screen), so I have a matching dock and several large monitors, along with a nice keyboard. My lappy is a Lenovo W530 -- 15.6" 1080p screen, quad-core i7 at 2.6GHz (boost; 2.0GHz nominal), nVidia Quadro K1000M (with Optimus), plenty of RAM (32GB), and a quality SSD (Samsung 840 Pro).


I have a higher-end desktop for gaming, which I can always call into service if the laptop comes up short, but so far there's been no need. If you don't need the mobility, a desktop is indeed a bigger bang for the buck; that said, for my money, I'm becoming increasingly convinced that a laptop/server combo is the best solution. I like having a solid home server anyway, so selecting a beefier CPU and additional RAM to handle the occasional load adds only a marginal cost (say $200) -- it's fairly easy these days to farm out software compilation and asset builds to those additional cores (and your laptop is probably just fine for incremental builds anyway, unless your code has structural problems) -- you can even add a GPU if your tasks make use of it. If you're so inclined, you can do most of that server stuff using cloud services, or on-demand (hourly) VPS/Docker instances.