
#5316131 Moving from plain "ascii" to XML?

Posted by on 21 October 2016 - 03:46 PM

Note that parsers which respect XSDs are about 10x larger libraries and much slower, too.


And on this point, note that you probably don't need the functionality that XSD provides at runtime in any case. XSD is useful for validation, for authoring content in aware text editors, and for enabling XSLT transformations, but you don't need any of that after release. Your XML files will already have been authored and transformed, and ought already to have been validated, by the time you ship them to customers.

#5316129 Moving from plain "ascii" to XML?

Posted by on 21 October 2016 - 03:39 PM

Honestly, XML is pretty verbose and doesn't carry its weight in simple scenarios where flat files, ASCII, JSON, or YAML would do just as well. XML is an awesome thing (with its share of flaws as well), but it really only shines when you need to represent complex, arbitrarily-nested information in a structured hierarchy, and it only comes into its own when that structure is formally specified, verified, and transformed through the XML ecosystem.


If what you really need is just a list of variables, flat ASCII text files are just fine. If you need arbitrarily-nested information -- but don't need formal structure -- JSON or YAML are perfect; in fact, they're great options when you explicitly *don't want* a formal structure, since all JSON and YAML structure is ad hoc, and many times that lack of strictness is a benefit. Only when you need a formally specified, hierarchical structure that can be verified and transformed is XML necessary. XML might sometimes be beneficial for interop scenarios as well, or certain encoding scenarios, but JSON is usually just as good unless you're tied to some (rare) party that understands XML but not JSON.
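To make the verbosity gap concrete, here's the same hypothetical settings record in both formats (the names are invented for illustration):

```xml
<settings>
  <resolution width="1920" height="1080"/>
  <fullscreen>true</fullscreen>
</settings>
```

```json
{ "resolution": { "width": 1920, "height": 1080 }, "fullscreen": true }
```

For a flat record like this, JSON says the same thing with less ceremony; XML's open/close tags and attribute machinery only start paying off once you add schemas and transformations on top.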

#5314601 "Self-taught" 18yo programmer asking for carrier advice. Seriously.

Posted by on 10 October 2016 - 08:30 PM

What you typically get out of being self-taught -- even if you are accomplished as a result of that self-teaching -- is a fairly broad but not very deep understanding. Speaking as someone who was similarly self-taught (with a few small- and medium-sized accomplishments to show by the time I graduated high school), the odds are good that your understanding is neither as deep nor as broad as you assume -- mine certainly wasn't.


My advice would be either to engage fully in academics (and if you already have marketable skills, you can engage in freelancing or entrepreneurship on the side to pay your way -- you'll be earning a lot more than your classmates doing deliveries or waiting tables), or engage fully in making your own way as an entrepreneur. In all likelihood, being self-taught is not a path toward a successful, typical industry career -- it happens for some, but they are by far the exception.


If you choose the academic path, I suspect you'll find a lot to benefit from in the standard course progression -- you should be seeking out a school that's challenging and has a good reputation anyway, but especially so if you already have the level of experience that you do. Don't choose a program that you know you can glide through just to get the paper at the end; that's not giving you real value. If, even in choosing a challenging program, you find it to be less than you'd like, that's an excellent opportunity to specialize -- take additional credit hours and perform independent, deep research into AI or another area that interests you; get a minor in mathematics, or some other topic that's adjacent to computer science; take courses in management or business that will prime you for leadership roles, or to run your own business more effectively. Heck -- be a mentor or teaching assistant after you advance some in your degree: the experience of teaching others can be a great catalyst to cement and deepen your own understanding of things, and it develops a great leadership skill as well.


If you choose the path of self-study and entrepreneurship, throw yourself at it fully, and accept that it's not a substitute for a degree -- it's a challenge and an achievement valuable in its own right, but not a replacement for a degree as far as most organizations are concerned. Realize that this path puts you fully in charge of succeeding or failing, and demands that you're accountable for your growth, for recognizing and correcting your own blind spots, and for knowing when you're in over your head. If it's possible, try to find a mentor who can help you recognize your shortfalls and help you grow; you'll be better off than going entirely alone. If that's not an option, it's important to find other venues to make connections and be stimulated by other people doing smart things -- join, attend, and participate in local entrepreneur or developers' groups, and if you don't have them locally, find them online. Watch presentations online too -- many top-tier conferences, top schools, and local user groups put tons of great presentations, lectures, and courses on YouTube and elsewhere for free, as do Khan Academy, Udacity, and others. I do believe there's enough information out there to give you a good education, and that eventually those with jobs to offer will realize that universities don't hold a monopoly on knowledge like they once did. However, universities are still very good at knowing what you need to know, and that's not something the average individual -- or the à la carte internet education ecosystem -- has figured out yet.

#5313962 Your Opinion of Software Dev Resume assertion

Posted by on 05 October 2016 - 12:22 AM

It can be harder sometimes for the people in the trenches to directly tie their work to business impacts, but you should try to tie your contributions to some kind of measurable impact as much as you reasonably can, and try to use more-dynamic, less-passive language even where you list tasks.

Why, in the grand scheme of things, was the work important or helpful? Did it increase performance? Strengthen network security? Improve your team's productivity? Did you replace old, buggy code with something simpler and more nimble?

Try to think less in terms of what you did for your manager, and more in terms of how what you did improved your team's workflow, your product, or the bottom line.

#5313424 Alternative to DirectX and XNA?

Posted by on 30 September 2016 - 04:07 PM

Unity and XNA were always meant to be quite different -- Unity was cross-platform, higher-level, and GUI-driven pretty much from the start, and it de-emphasizes coding to a great degree. XNA is a framework and content pipeline -- it's not really even an engine. It was meant to be cross-platform among a subset of Windows devices, but never had any grander ambitions; it's lower-level and code-focused rather than GUI-focused. Monogame still follows that basic approach, but it has expanded the platform support and moved to extend and go beyond XNA 4.0 (on which it was originally based), because it has to keep pace with new platforms and hardware.


Microsoft certainly dropped the ball with XNA, but it never was a threat to Unity (contemporary to XNA, Unity's stronghold was iOS, moving to Android -- neither of which XNA could touch) and Unity was never a threat to XNA. What killed XNA was that Microsoft didn't have a clear image of what they wanted out of it. Had they gone all-in on XNA, it could have morphed into something that usurped the stake Unity holds in Windows today, or further afield if they had been even more ambitious, but the former never happened, much less the latter.

#5313102 Coding-Style Poll

Posted by on 28 September 2016 - 02:35 PM

I have my preferences, of course, and when I'm in control of the code those are what I use.


When I'm not in control, I have my pet peeves, but honestly what I care about is that whatever style is chosen is enforced by tooling (and in general, there should be an escape hatch for when a particular bit of code is better formatted differently). I want the chosen format not to be overbearing with details, I want hard rules (e.g. line breaks and spacing) to be enforced before or at check-in, and I want soft rules (e.g. naming conventions like numWidget vs. widgetCount) to be flagged as part of code review.


Without tooling to help enforce it, any convention is just entropy and impending technical debt. Without tooling, you'll eventually reach a point where you decide to pay the debt down (and had better apply tooling as a first step), or you'll acquiesce to the code never conforming and future conformance being best-effort. In general, without enforcement, coding standards will be the first thing to suffer under a deadline.
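As a sketch of what tooling-enforced "hard rules" can look like, here's a minimal `.clang-format` file (the specific option values are just an example, not a recommendation):

```yaml
# .clang-format -- checked into the repo root so every developer
# and the CI check-in gate format code identically.
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100
PointerAlignment: Left
```

The escape hatch mentioned above also exists in this tool: wrapping a region in `// clang-format off` and `// clang-format on` comments exempts code that's genuinely better formatted by hand.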

#5312741 Why C# all of a sudden?

Posted by on 26 September 2016 - 05:06 PM

Slightly off-topic, but apropos of C++ overhead compared to C, the things that a modern C++ compiler can do with well-written C++ (that is, code that takes care to give the compiler the full and correct context of the code) are just amazing. Correct use of const (and volatile), constexpr, inlining, and templates, together with idiomatic code that's simple rather than clever, gives the compiler tons of information it can use to make the best possible decisions. Armed with that knowledge, a compiler can entirely optimize away many or most of its "expensive" features, deeply-recursive functions, and more.


A case in point: Rich Code for Tiny Computers: A Simple Commodore 64 game in C++17


It may have been more true in the past, before C++ compilers got really good, but these days the argument that C is somehow fundamentally faster than C++ no longer holds in the general case. C had a reputation for being faster because its compilers simply weren't doing a lot of work for you in secret, while C++ compilers do; when C++ compilers were immature, they sometimes did this secret work more poorly than a C programmer could roll their own equivalent. But now that C++ compilers have matured a great deal and become very good at this secret work, a C++ programmer can unlock that potential by writing good-quality C++ at a much lower cost than what the C programmer would have to endure.

#5312185 Alternative to DirectX and XNA?

Posted by on 23 September 2016 - 06:05 PM

I'm primarily a C# guy anyway so that would be my first choice, but I think it's probably fair to say the game industry is still C++.

I live around 60 seconds away from Epic and they're still using C++. I'm not going to walk in there and get a C# job anytime soon LOL


You might be surprised. Yes, C++ is for now and in the future the go-to language for developing the guts of the game, but at the same time it's becoming less and less the go-to language for writing gameplay code, tools, network services, and other things. You can be certain that Epic has members of its staff writing code in C# right now -- not the guys responsible for the guts of their engine, no, but others for sure. Probably a significant number of them. IIRC, Unreal 4 doesn't use C# scripting like Unity does, but C# is not an uncommon choice for a scripting language in other engines (probably second only to Lua).


It really depends on what you want to do. If your interest is tools, online services, or gameplay then C# skills can absolutely land you a job. If your interest is rendering or other core engine systems then C++ is probably the right bet, especially if you want to do bleeding-edge technology. If you're content making games that aren't bleeding edge, a good number of people and studios are making those in C# now, and have been for years -- and I'm not talking ultra-casual, simple games. You can quite handily create a game in C# today that's to the standard of contemporary AAA titles.


If advancing your C++ skills aligns with your goals, then by all means use it. If it's your preference, use it. If you're choosing it for whatever your own reasons are, use it. But don't use it just because you think it's a hard requirement now or in the future, because that's becoming less and less true outside the niche of developing the game engine systems themselves. I love C++, and I'm a low-level kind of guy myself, but it's really not the only gig in town anymore.

#5312151 Alternative to DirectX and XNA?

Posted by on 23 September 2016 - 12:14 PM

XNA was a really unfortunate and premature casualty. Killing it counts among Microsoft's significant blunders, IMO. Thankfully Monogame has picked up that torch, and is well-supported on all platforms -- the Xbox One is getting its first Monogame title, Axiom Verge, very soon.


DirectX isn't going anywhere. You no doubt have heard of Direct3D12 and Vulkan, which are great and are the way forward long-term, but Direct3D11 and classic OpenGL aren't going anywhere. They're not dead, they're not dying. We're just getting more options.


Then you have other frameworks like Cocos2D, SFML, SDL 2, and more, as well as cheap and readily available commercial game engines like Unity, Unreal Engine 4, Lumberyard, Gamemaker, and even specialized ones like RPG Maker. All of which can be and have been used commercially.



For C++ I'd probably recommend you look into SFML, SDL 2, and Cocos2D, go through some tutorials and then choose the one you like best. For C#, Monogame is pretty great, and put aside any silly notion that "real game programmers" only use C++; unless learning C++ is an explicit goal or you already have *serious* C++ skills, there's little reason to choose it over C#, and you're giving up a great many productivity advantages of C# in doing so.

#5312144 Why C# all of a sudden?

Posted by on 23 September 2016 - 11:58 AM

C# is a rather ergonomic language coupled with expansive, interoperable libraries and generally great tooling. It performs well enough, and even though highly-tuned C++ will never lose to even highly-tuned C#, a great deal of typical code will never receive that kind of attention -- in many non-gaming applications, perhaps no code at all gets it. I would argue that the non-performance-critical code created by a typical C# developer is cleaner, more maintainable, and makes better use of libraries than what the typical C++ developer creates for the same purpose. I would further argue that the C# developer can do it in less time and stay within a reasonable margin of performance, even winning in some cases. I would note that idiomatic use of the latest C++ language/library features can mitigate, or perhaps even overturn, that relationship, but almost no one is really embracing that yet, and there's a ton of legacy C++ out there that's difficult to move off the old idioms. CPU cycles aren't infinite, but they're abundant and cheap -- if I've got more cycles available than I need, it's perfectly reasonable to spend some of them on my own improved productivity as a developer.


Neither language is going anywhere, though -- C++ stalled and even retreated a bit during the period between about 1998 and 2011, but the recent standards work has put its prospects on a steeper upward trajectory than ever before. That's not hyperbole; that's what the stats and trends are showing. Recent and upcoming standard advancements have really made it a better language.


C# is doing similarly well -- .NET Core and the open-sourcing of pretty much all of the C#/CLR/.NET Framework ecosystem are going to make for a better future too. Its effects are already feeding into Mono, Unity, and elsewhere. All the disparate .NET implementations are moving towards parity, if not unity.


And it's not zero-sum -- a rising C# isn't going to cannibalize C++ entirely, nor is a renewed C++ going to cannibalize C# entirely. They'll each be given as much work as they're fit for, and there's plenty of work to go around. It's a symbiotic relationship when you think about it -- a competent C# allows C++ experts to focus their attention where it's most beneficial, expands the hiring pool, and has allowed people and teams to create a wealth and variety of games that never would have been created with C++, because the expertise or effort required is too high a hurdle.

#5311146 Would you try a language if all you had to do was install a Visual Studio ext...

Posted by on 16 September 2016 - 06:38 PM

I'd honestly prefer a Visual Studio Code extension over a full VS one.

I don't install VS extensions unless I really need them. I'll happily install a VS Code extension to play around with for an afternoon, though.


I second this.


Bias aside, as a Microsoft employee who formerly wrote a few of the VS Code docs, it's actually a pretty excellent feature set to launch a language testbed on. Compared to VS extensions, Code extensions are easier to get to grips with, IMO, and quite a few languages already provide plugins that use many of the features Code provides, which are really focused on the core edit-build-debug loop.


Plus, if/when you're ready to make the cross-platform leap, you'll already have a well-tested development environment available. Believe it or not, VS Code has actually gained quite a lot of traction among Linux and macOS developers, in addition to Windows developers looking for something a little lighter than full-blown Visual Studio.

#5311008 Would you try a language if all you had to do was install a Visual Studio ext...

Posted by on 15 September 2016 - 06:27 PM

Honestly, as I look across the field of languages that have gained traction and notoriety in recent years, most of their successes are on the backs of having a killer app -- Ruby on Rails being a prime example; R is another good one. Even when the language itself is more general in nature, it's often a niche application -- either one for which the language itself is especially well-suited, or one for which someone happened to write a killer library in that language -- that imbued it with momentum. Sometimes the choice of language in these cases is not much more than accidental.


That tells me it's hard for languages that are explicitly general-purpose, or at least without a clear target, and harder still if they have system-level aspirations. I'm admittedly a bit of a fanboy, but Rust seems to be the only such language that carries any real momentum currently, and its major promise is that you can achieve safety without giving up performance to a heavy runtime sandbox. It's kind of the holy grail if it works.


I don't think a nice install experience is a draw in itself, but it's nevertheless important. Attention is a fickle thing, and friction is its Kryptonite. It's your language pitch that makes your acquisition funnel big -- you still need to work that out -- but a lack of friction is what makes it wide. The bonus creature comforts are what make people linger.

#5310182 (2D) Handling large number of game objects, no quadtree

Posted by on 09 September 2016 - 07:25 PM

Simple methods are perfectly fine -- yes, maybe a quadtree or some such is the optimal algorithm, but it's also a rather generalized one; oftentimes a simpler solution will do when it fits your own needs better, and sometimes it's just as fast -- or faster -- than the optimal general solution. What Sonic did is a neat trick for exactly that kind of game; it's an excellent example of looking at your needs and constraints and coming up with a solution that does what it needs and no more.


That said, nothing on the Genesis or contemporary systems would have approached "large numbers" of enemies by computational-complexity standards. In 3D space, or with a great number of objects to consider (especially if you have to test whether they interact, as in an n-body simulation), you probably do need a way to easily partition the objects that might interact. In 2D, with relatively few objects to consider, it's often sufficient to simply check whether an object is potentially on-screen before blitting it, or within a slightly larger rectangle to "activate" autonomous entities ahead of time, just so that it doesn't seem like life suddenly springs into being right at the screen's edge.


In a 2D overhead RPG I wrote in my early days, that's exactly what I did -- NPCs activated half a screen outside the visible area in every direction, so even if the player stood still, nearby NPCs would come on-screen, leave, and come back at will. If they wandered too far they'd suspend and remain near where the player would have expected them to be. This all lent a more immersive kind of feel, without spending resources to calculate all the NPCs in the entire town.

#5309908 Rendering large tile-based ships

Posted by on 07 September 2016 - 09:35 PM

A good search term is PTEX (Per-face TEXture mapping).


However, I spoke before my coffee this morning; if you're pretty green with 3D it's more straightforward to just duplicate the vertices -- as long as the vertices are identical going in, and as long as they always get transformed by the exact same matrix (if you regenerate it, you have to build it by composing the component matrices in the same order), the vertices won't diverge, and the fill rules ought to prevent pixels from fighting in a steady scene. You might perceive some fighting as the ship rotates, though -- MSAA/FSAA should help smooth that out.


On a separate note, if your very large ships become very small on-screen (say, where a tile occupies maybe < 4x4 pixels), you might want to switch to rendering the ship as a single textured quad generated via means similar to the 2D render-target method; once you can't see the detail in the tiles anyway, it's a lot of work to go through for no real benefit, and rendering very small triangles is a huge performance drain. I would choose the transition point to be where the animation of the tiles gets lost in their smallness on-screen; then you don't have to worry about animating this small ship texture.


I assumed this whole time that you're rendering moving things like crew members separately, so you'd do the same for them when the ship takes up only a small portion of the screen (assuming the position of crew members has meaning to the player when zoomed out this far) -- you could transition them at some point to being rendered as single pixels. Likewise, you could do the same for any important indicators that would normally be shown with a tile animation.

#5308780 Should i use "caveman-speak" for dialogs?

Posted by on 30 August 2016 - 11:18 PM

I think it could work too, but you're going to have to be wary of it wearing thin, as others have suggested. I second the idea of accomplishing this by developing a restricted set of grammar productions and a word list -- don't just start with full, arbitrary modern dialog and then caveman-ify it.


I'd maybe look at how much vocabulary Koko (the gorilla who was taught to use sign language) had, and how well she formed sentences. It's hardly scientific, but at least it's some kind of modern analog.


I'd half-jokingly suggest considering a pictographic language based on cave paintings (there actually are real-life recurring cave symbols that some scientists believe are a pictographic language of sorts) -- it's probably too high a barrier to entry to work for real, but thinking about dialog in those terms might help. If you did actually do a pictographic language in the game, it might be an interesting mechanic in itself, but I think you'd have to ramp it up pretty slowly -- very short clauses made of the most obvious symbols -- so that the player could build up to more complex thoughts and abstract symbols; learning to decipher things could be part of the fun.