
Ravyne

Member Since 26 Feb 2007

#5313102 Coding-Style Poll

Posted by on Today, 02:35 PM

I have my preferences, of course, and when I'm in control of the code those are what I use.

 

When I'm not in control, I have my pet peeves, but honestly what I care about is that whatever style is chosen gets enforced by tooling (and in general, there should be an escape hatch for the situations where a particular bit of code is better formatted otherwise). I want the chosen format not to be overbearing with details, I want hard rules (e.g. breaks and spacing) to be enforced before or at check-in, and I want soft rules (e.g. naming conventions like numWidget vs. widgetCount) to be flagged as part of code review.

 

Without tooling to enforce it, any convention is just entropy and impending technical debt. Without tooling, you'll eventually reach a point where you decide to pay the debt down (and had better apply tooling as a first step), or you'll acquiesce to the code never conforming and future conformance being best-effort. In general, without enforcement, coding standards are the first thing to suffer under a deadline.




#5312741 Why C# all of a sudden?

Posted by on 26 September 2016 - 05:06 PM

Slightly off-topic, but apropos of C++ overhead compared to C, what a modern C++ compiler can do with well-written C++ (that is, code that takes care to give the compiler the full and correct context) is just amazing. Correct use of const (and volatile), constexpr, inlining, and templates, together with idiomatic code that's simple rather than clever, gives the compiler tons of information it can use to make the best possible decisions. Armed with that knowledge, a compiler can entirely optimize away many or most of its "expensive" features, deeply-recursive functions, and more.
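
As a trivial sketch of the kind of thing I mean (my own toy example, nothing fancy):

#include <array>

// With constexpr, the compiler can evaluate this entire function at
// compile time -- the recursion never exists in the emitted binary.
constexpr int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// The array size is computed during compilation; zero runtime cost.
std::array<int, factorial(5)> buffer{};

static_assert(factorial(5) == 120, "evaluated entirely at compile time");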

 

A case in point: Rich Code for Tiny Computers: A Simple Commodore 64 game in C++17

 

It may have been more true in the past, before C++ compilers got really good, but these days the argument that C is somehow fundamentally faster than C++ no longer holds in the general case. C had a reputation for being faster because its compilers simply weren't doing a lot of work for you in secret, while C++ compilers do; when C++ compilers were immature, they sometimes did this secret work more poorly than a C programmer could roll their own equivalent. Now that C++ compilers have matured a great deal and become very good at that secret work, the C++ programmer can unlock that potential by writing good-quality C++ code at a much lower cost than what the C programmer would have to endure.




#5312185 Alternative to DirectX and XNA?

Posted by on 23 September 2016 - 06:05 PM

I'm primarily a C# guy anyway so that would be my first choice, but I think it's probably fair to say the game industry is still C++

I live around 60 seconds away from Epic and they're still using C++. I'm not going to walk in there and get a C# job anytime soon LOL

 

You might be surprised. Yes, C++ is, for now and the foreseeable future, the go-to language for developing the guts of the game, but at the same time it's becoming less and less the go-to language for writing gameplay code, tools, network services, and other things. You can be certain that Epic has members of its staff writing code in C# right now -- not the people responsible for the guts of their engine, no, but others for sure. Probably a significant number of them. IIRC, Unreal 4 doesn't use C# scripting like Unity does, but C# is not an uncommon choice of scripting language in other engines (probably second only to Lua).

 

It really depends on what you want to do. If your interest is tools, online services, or gameplay then C# skills can absolutely land you a job. If your interest is rendering or other core engine systems then C++ is probably the right bet, especially if you want to do bleeding-edge technology. If you're content making games that aren't bleeding edge, a good number of people and studios are making those in C# now, and have been for years -- and I'm not talking ultra-casual, simple games. You can quite handily create a game in C# today that's to the standard of contemporary AAA titles.

 

If advancing your C++ skills aligns with your goals, then by all means use it. If it's your preference, use it. If you're choosing it for whatever your own reasons are, use it. But don't use it just because you think it's a hard requirement now or in the future, because that's becoming less and less true outside the niche of developing the game-engine systems themselves. I love C++, and I'm a low-level kind of guy myself, but it's really not the only gig in town anymore.




#5312151 Alternative to DirectX and XNA?

Posted by on 23 September 2016 - 12:14 PM

XNA was a really unfortunate and premature casualty. Killing it counts among Microsoft's significant blunders, IMO. Thankfully MonoGame has picked up that torch, and is well-supported on all platforms -- the Xbox One is getting its first MonoGame title, Axiom Verge, very soon.

 

DirectX isn't going anywhere. You no doubt have heard of Direct3D12 and Vulkan, which are great and are the way forward long-term, but Direct3D11 and classic OpenGL aren't going anywhere. They're not dead, they're not dying. We're just getting more options.

 

Then you have other frameworks like Cocos2D, SFML, SDL 2, and more, as well as cheap and readily available commercial game engines like Unity, Unreal Engine 4, Lumberyard, GameMaker, and even specialized ones like RPG Maker. All of which can be and have been used commercially.

 

 

For C++ I'd probably recommend you look into SFML, SDL 2, and Cocos2D; go through some tutorials and then choose the one you like best. For C#, MonoGame is pretty great, and put aside any silly notion that "real game programmers" only use C++; unless learning C++ is an explicit goal or you already have *serious* C++ skills, there's little reason to choose it over C#, and you give up a great many of C#'s productivity advantages by doing so.
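
To give a sense of how little ceremony these frameworks involve, here's a minimal window-and-event loop against the SFML 2.x API (a sketch, not a full game skeleton):

#include <SFML/Graphics.hpp>

int main()
{
    // One call creates the window; SFML abstracts the platform details.
    sf::RenderWindow window(sf::VideoMode(800, 600), "My Game");

    while (window.isOpen())
    {
        // Drain pending OS events each frame.
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        // ... draw your sprites here ...
        window.display();
    }

    return 0;
}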




#5312144 Why C# all of a sudden?

Posted by on 23 September 2016 - 11:58 AM

C# is a rather ergonomic language coupled with expansive, interoperable libraries and generally great tooling. It performs well enough, and even though highly-tuned C++ will never lose to even highly-tuned C#, a great deal of typical code will never receive that kind of attention -- in many non-gaming applications, perhaps no code at all gets that kind of attention. I would argue that the non-performance-critical code created by a typical C# developer is cleaner, more maintainable, and makes better use of libraries than what the typical C++ developer creates for the same purpose. I would further argue that the C# developer can do it in less time and stay within a reasonable margin of performance, even winning in some cases. I would note that idiomatic use of the latest C++ language/library features can mitigate, or perhaps even overturn, that relationship, but almost no one is really embracing that yet, and there's a ton of legacy C++ out there that's difficult to move off the old idioms. CPU cycles aren't infinite, but they're abundant and cheap -- if I've got more cycles available than I need, it's perfectly reasonable to spend some of them on my own improved productivity as a developer.

 

Neither language is going anywhere though -- C++ stalled and even retreated a bit during the period between about 1998 and 2011, but the recent standards work has put its prospects on a steeper upward trajectory than ever before. That's not hyperbole, that's what the stats and trends are showing. Recent and upcoming standard advancements have really made it a better language.

 

C# is doing similarly well -- .NET Core and the open-sourcing of pretty much all of the C#/CLR/.NET Framework ecosystem are going to make for a better future too. Its effects are already feeding into Mono, Unity, and elsewhere. All the disparate .NET implementations are moving towards parity, if not unity.

 

And it's not zero-sum -- a rising C# isn't going to cannibalize C++ entirely, nor is a renewed C++ going to cannibalize C# entirely. They'll each be given as much work as they're fit for, and there's plenty of work to go around. It's a symbiotic relationship when you think about it -- a competent C# allows C++ experts to focus their attention where it's most beneficial, expands the hiring pool, and has allowed people and teams to create a wealth and variety of games that never would have been created in C++, because the expertise or effort required is too high a hurdle.




#5311146 Would you try a language if all you had to do was install a Visual Studio ext...

Posted by on 16 September 2016 - 06:38 PM

I'd honestly prefer a Visual Studio Code extension over a full VS one.

I don't install VS extensions unless I really need them. I'll happily install a VS Code extension to play around with for an afternoon, though.

 

I second this.

 

Bias aside (I'm a Microsoft employee and formerly wrote a few of the VS Code docs), it's actually a pretty excellent feature set to launch a language testbed on. Compared to VS extensions, Code extensions are easier to get to grips with, IMO, and quite a few languages already provide plugins that use many of the features Code offers, which are really focused on the core edit-build-debug loop.

 

Plus, if/when you're ready to make the cross-platform leap, you'll already have a well-tested development environment available. Believe it or not, VS Code has actually gained quite a lot of traction among Linux and macOS developers, in addition to Windows developers looking for something a little lighter than full-blown Visual Studio.




#5311008 Would you try a language if all you had to do was install a Visual Studio ext...

Posted by on 15 September 2016 - 06:27 PM

Honestly, as I look across the field of languages that have gained traction and notoriety in recent years, most of their successes ride on the back of a killer app -- Ruby on Rails being a prime example; R is another good one. Even when the language itself is more general in nature, it's often a niche application -- either one for which the language itself is especially well-suited, or one for which someone happened to write a killer library in that language -- that imbues it with momentum. Sometimes the choice of language is little more than accidental in such cases.

 

That tells me it's hard going for languages that are explicitly general-purpose, or at least lack a clear target, and harder still if you have system-level aspirations. I'm admittedly a bit of a fan-boy, but Rust seems to be the only such language carrying any real momentum currently, and its major promise is that you can achieve safety without giving up performance to a heavy run-time sandbox. It's kind of the holy grail, if it works.

 

I don't think the nice install experience is the draw, but it's nevertheless important. Attention is a fickle thing, and friction is its Kryptonite. It's your language pitch that makes your acquisition funnel big -- you still need to work that out -- but a lack of friction is what makes it wide. The bonus creature comforts are what make people linger.




#5310182 (2D) Handling large number of game objects, no quadtree

Posted by on 09 September 2016 - 07:25 PM

Simple methods are perfectly fine -- yes, maybe a quad-tree or some such is the optimal algorithm, but it's also a rather generalized one; oftentimes a simpler solution will do when it fits your own needs better, and sometimes it's just as fast -- or faster -- than the optimal general solution. What Sonic did is a neat trick for exactly that kind of game; it's an excellent example of looking at your needs and constraints and coming up with a solution that does what it needs and no more.

 

That said, nothing on the Genesis or its contemporary systems would have approached "large numbers" of enemies by computational-complexity standards. In 3D space, or with a great number of objects to consider (especially if you have to test whether they interact with each other, as in an n-body simulation), you probably do need a way to cheaply partition the objects that might interact. In 2D, with relatively few objects to consider, it's often sufficient to simply check whether an object is potentially on-screen before blitting it, or within a slightly larger rectangle to "activate" autonomous entities ahead of time, just so that it doesn't seem like life suddenly springs into being right at the screen's edge.
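
In code that's little more than an inflated-rectangle overlap test -- a sketch, with names made up for illustration:

// Sketch of the "activation rectangle" idea.
struct Rect   { float x, y, w, h; };
struct Entity { Rect bounds; bool active; };

// True if the two rectangles overlap at all.
bool Overlaps(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Inflate the camera view by half a screen on every side; entities
// inside the inflated rect get to "live", everyone else is suspended.
void UpdateActivation(Entity& e, const Rect& view)
{
    const Rect activeArea{ view.x - view.w * 0.5f,
                           view.y - view.h * 0.5f,
                           view.w * 2.0f,
                           view.h * 2.0f };
    e.active = Overlaps(e.bounds, activeArea);
}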

 

In a 2D overhead RPG I wrote in my early days, that's exactly what I did -- NPCs activated half a screen outside the visible area in every direction, so even if the player stood still, nearby NPCs would come on-screen, leave, and come back at will. If they wandered too far, they'd suspend and remain near where the player would have expected them to be. It all lent a more immersive feel, without spending resources calculating all the NPCs in the entire town.




#5309908 Rendering large tile-based ships

Posted by on 07 September 2016 - 09:35 PM

A good search term is PTEX (Per-face TEXture mapping).

 

However, I spoke without my coffee this morning; if you're pretty green with 3D, it's more straightforward to just duplicate the vertices. As long as the vertices are identical going in, and as long as they always get transformed by the exact same matrix (if you regenerate it, you have to build it by composing the component matrices in the same order), the vertices won't diverge, and the fill rules ought to prevent pixels from fighting in a steady scene. You might perceive some fighting as the ship rotates, though -- MSAA/FSAA should help smooth that out.
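
Something in this spirit -- a sketch only, where Matrix, Vec3, and the factory functions are hypothetical stand-ins for whatever math library you're using:

// The point is the fixed composition order: rebuild the transform the
// same way every frame, and identical input vertices transform identically.
Matrix MakeShipWorldMatrix(const Vec3& position, float yaw, float scale)
{
    // Always scale, then rotate, then translate. Reordering these between
    // frames (or between meshes sharing an edge) is what lets the
    // duplicated vertices drift apart.
    return MatrixScale(scale) * MatrixRotationY(yaw) * MatrixTranslation(position);
}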

 

On a separate note, if your very large ships become very small on-screen (say, where a tile occupies maybe < 4x4 pixels), you might then want to switch to rendering the ship as a single textured quad generated by means similar to the 2D render-target method; once you can't see the detail in the tiles anyway, it's a lot of work to go through for no real benefit, and rendering very small triangles is a huge performance drain. I would put the transition point where the animation of the tiles gets lost in their smallness on-screen; then you don't have to worry about animating the small ship texture.
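
The transition test itself is trivial -- a sketch, with illustrative names and a threshold you'd tune by eye:

enum class ShipDetail { FullTiles, SingleQuad };

ShipDetail ChooseShipDetail(float tileWorldSize, float pixelsPerWorldUnit)
{
    const float tilePixels = tileWorldSize * pixelsPerWorldUnit;

    // Below ~4x4 pixels per tile, per-tile detail and animation are
    // effectively invisible -- draw the pre-rendered quad instead.
    return (tilePixels < 4.0f) ? ShipDetail::SingleQuad
                               : ShipDetail::FullTiles;
}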

 

I've assumed this whole time that you're rendering moving things like crew members separately, so you'd do the same for them when the ship takes up only a small portion of the screen (assuming the positions of crew members still mean anything to the player when zoomed out that far) -- you could transition them at some point to being rendered as single pixels. Likewise for any important indicators that would normally be shown with a tile animation.




#5308780 Should i use "caveman-speak" for dialogs?

Posted by on 30 August 2016 - 11:18 PM

I think it could work too, but you're going to have to be wary of it wearing thin, as others have suggested. I second the idea of accomplishing this with a restricted set of grammar productions and a word list -- don't just start with full, arbitrary modern dialog and then caveman-ify it.

 

I'd maybe look at how much vocabulary Koko (the gorilla who was taught to use sign language) had, and how well she formed sentences. It's hardly scientific, but at least it's some kind of modern analog.

 

I'd half-jokingly suggest considering a pictographic language based on cave paintings (there actually are real-life recurring cave symbols that some scientists believe are a pictographic language of sorts) -- it's probably too high a barrier to entry to work for real, but thinking about dialog in those terms might help. If you did actually do a pictographic language in the game, it might be an interesting mechanic in itself, but I think you'd have to ramp it up pretty slowly -- very short clauses made of the most obvious symbols -- so that the player could build up to more complex thoughts and abstract symbols. Learning to decipher things could be part of the fun.




#5308537 Visual Studio Hardware Requirements Seem Lower

Posted by on 29 August 2016 - 01:40 PM

Also, depending on where you live and whether you can wait for a good deal, you can get a lot of bang for your buck. At least in the US, late fall seems to be a great time to find a deal, because computer sellers are blowing out old stock before the late-fall hardware refresh cycle. If you can wait, consider it, but don't hold yourself back if you can't. Sales typically start around a month before school starts back up, and again after Thanksgiving/Black Friday.

 

Lenovo seems to always have pretty great deals around that time, and makes excellent machines.




#5308534 Visual Studio Hardware Requirements Seem Lower

Posted by on 29 August 2016 - 01:31 PM

I want a really nice experience, no lag, etc.

 

 

I would use an external SSD with either.

 

It was said to get the most RAM and processing power for my money, so that's what I tried to do. However, now I'm looking at the i3s and the money is less and the display is 17" and beyond.

 

What do you all recommend -- is the larger screen still much better, or is the smaller laptop screen (15.6") just what's used now, and just as good for the amount viewed?

 

Honestly, you're going to have to make some compromises at the price point you seem to be targeting. You can get a pretty decent computing experience out of a $350-$400 laptop these days, but it's not going to be the nicest experience.

 

The machines you linked to have low-resolution displays -- 1366x768 -- and that's really too low to be very productive in Visual Studio or other productivity tools, and the pixels will be quite chunky on a 15.6-inch screen. Plus, it's not screen size alone that gives you usable real estate; it's the balance of screen size and resolution. I find that you really want 1920x1080 or better, and that it's a good fit for most screen sizes (though it's too many pixels for anything smaller than 13.3 inches, and even that is stretching it). 1600x900 is a good resolution for a 13.3" screen too, though I'd say it's the bare minimum for on-the-go productivity. Screen quality and viewing angles are also important considerations for your comfort and ergonomics.

 

You also want an internal SSD. You can get some pretty speedy external SSDs, but they're not inexpensive, and you could have an internal SSD for the same price. Try to find a laptop with a 128GB or larger SSD inside, or get one with a mechanical drive that you can easily change yourself without voiding your warranty -- then buy an SSD and install it yourself. Afterwards, you can put the mechanical drive in an external USB enclosure and use it for extra space and backups. Be aware that if you install your own SSD this way, you'll need to jump through a couple of hoops to get your OS onto it, but it's doable.

 

In a laptop, screen size also directly affects portability. A 17-inch screen sounds good at first, but you might not feel that way after lugging the thing around for a day. If it's so heavy or bulky that you never want to move it, you might as well have built or bought a desktop instead, since you'll usually get more computer for the same money and have more options to expand and upgrade. If you're going to pay the laptop premium, it needs to be portable in practice, not just portable in theory.




#5308528 2D huge tile maps handling?

Posted by on 29 August 2016 - 01:01 PM

Tangletail explains the concept well. To implement it, you'll want a map data structure and file format that let you handle 'chunks' of the map as a unit, rather than trying to pull only what you want from a monolithic map structure (e.g. a big array of tiles in memory or on disk).

 

So, rather than a giant array of tiles, you want to break your map into regular pieces -- you can choose different sizes to suit your needs, but for now let's just say that chunks are 16x16 tiles (power-of-two squares have some nice properties that can be used to optimize some of the math you'll need to do, as in the sketch below). The chunks themselves are just smaller versions of the big array.
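
Here's roughly what that math looks like for 16x16 chunks (a sketch; the names are my own, and it assumes non-negative tile coordinates):

// Power-of-two chunk sizes turn divide/modulo into shift/mask.
constexpr int kChunkShift = 4;                    // log2(16)
constexpr int kChunkMask  = (1 << kChunkShift) - 1;

struct TileAddress
{
    int chunkX, chunkY;   // which chunk holds the tile
    int localX, localY;   // the tile's position within that chunk
};

TileAddress Locate(int tileX, int tileY)
{
    return { tileX >> kChunkShift, tileY >> kChunkShift,
             tileX &  kChunkMask,  tileY &  kChunkMask };
}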

 

On disk, your map file won't be a big array of tiles any more; it will be a smaller array of chunk references, plus a list of chunks. From disk, you load a whole chunk whenever any part of it is within the radius you care about.

 

In memory, your map structure is similar to the disk contents -- one option is to load the entire array of chunk references into memory, and use it to determine which chunks are nearby and need to be loaded. If your maps are really big, you might increase the chunk size, or even chunk up the array of chunk references itself (giving you two layers of indirection instead of one). When a chunk is inside your area of interest, it needs to be in memory -- and when a previously-loaded chunk leaves the area of interest, you can reclaim that memory for another chunk. This works really well with memory pools -- you set aside enough memory for the maximum number of chunks that fit inside your area of interest, maybe a bit more, and then you just reuse those chunk structures over and over to hold different chunks as the player moves around. The chunks in memory don't have to be stored in map order; the array of chunk references takes care of spatial ordering.
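
A minimal sketch of such a pool (my own names; the streaming, keying, and eviction policy are elided):

#include <cstddef>
#include <vector>

struct Chunk { /* 16x16 tiles, plus bookkeeping */ };

// Fixed pool of chunk slots -- memory is allocated once up front,
// and slots are recycled as the area of interest moves.
class ChunkPool
{
public:
    explicit ChunkPool(std::size_t maxResident) : storage(maxResident)
    {
        for (Chunk& c : storage)
            freeSlots.push_back(&c);
    }

    Chunk* Acquire()                 // claim a slot for an incoming chunk
    {
        if (freeSlots.empty()) return nullptr;
        Chunk* c = freeSlots.back();
        freeSlots.pop_back();
        return c;
    }

    void Release(Chunk* c)           // the chunk left the area of interest
    {
        freeSlots.push_back(c);
    }

private:
    std::vector<Chunk>  storage;     // never resized after construction
    std::vector<Chunk*> freeSlots;
};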

 

This is all pretty straightforward, but you're dealing with a few different coordinate systems here (the camera view, the area of interest, the coordinate system of the array of chunk references -- one of those for each layer of chunking -- and the coordinate system of the chunks themselves), so there are a lot of little details to get right.

 

 

Now, all of that assumes the world is mostly static -- if not, you'll additionally need to track which chunks the player has changed and write those changes back to disk somehow, and you might need to do that per-player if players have independent worlds.




#5307987 Style preferences in for loop structure

Posted by on 26 August 2016 - 01:25 AM

I'm personally a fan of decrementing my for-loops rather than incrementing. So instead of this:

for(int i = 0; i < MAX; i++) {
    // ...
}

There's this:

for(int i = MAX; i --> 0;) {
    // ...
}

It comes in handy when you're iterating over a container and are deleting elements along the way. This way you don't have to add as much logic for moving to the next element after deleting the one prior.

 

Methinks that's one of those "too clever by half" moves. It's a neat syntactic side-show, but the result is mostly that anyone who reads your code scratches their head for a minute, has a minor eureka, then hits Google to confirm and winds up at that StackOverflow thread. It wouldn't pass any code review I was part of, just on the grounds of it not being idiomatic.

 

Theoretically, comparisons against 0 can be faster, but this form also gives up the pre-decrement operator, which is similarly theoretically faster; and, IMO, if you're doing anything interesting with those indices, you've then got to jump through hoops to adjust your thinking to the 'unnatural' relationships between indexes.

 

Looping backwards does make good sense when you want to modify elements of a container in-place based on preceding elements, though. As far as removing elements from the container goes, something like the std::remove_if/std::erase idiom (sketched below) is probably better, especially if you leave those "removed" elements around, as in an object pool.
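
For the curious, the idiom looks like this (a sketch; Entity here is just a stand-in):

#include <algorithm>
#include <vector>

struct Entity { bool dead = false; /* ... */ };

// The erase-remove idiom: remove_if packs the survivors to the front and
// returns the new logical end; erase() then trims the dead tail away.
void PruneDead(std::vector<Entity>& entities)
{
    entities.erase(
        std::remove_if(entities.begin(), entities.end(),
                       [](const Entity& e) { return e.dead; }),
        entities.end());
}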




#5307734 Visual Studio Hardware Requirements Seem Lower

Posted by on 24 August 2016 - 05:57 PM

Most any new laptop these days should be enough -- you probably want an i3 at minimum, 8GB of RAM at minimum, and an SSD, and it sounds like you want at least one external video port (easy) if not two (many -- if not most -- laptops have this). Even with one external port, you should still be able to use the laptop's display for debugging while your program runs on an external monitor.

 

On the topic of GPUs, even integrated graphics are quite capable these days -- you can play a game like BioShock Infinite on modest settings at lower resolutions at 30 FPS or more on a mid-tier integrated Intel GPU, and the Skylake ones, at least, support Direct3D 12. They're more than enough for 2D, and more than enough to experiment and grow into modest 3D graphics. They'll just never be a 3D powerhouse.





