I've used Xerces in the past -- I didn't spend any significant amount of time researching or weighing its pros and cons versus competitors. The impression I get is that Xerces is probably the most conformant (it's an Apache project and a validating parser) and likely the most heavyweight library, for good and for ill. The other options mentioned here will likely get the job done with conformant input, even if they don't accept all conformant XML -- that is, my impression is that they accept a pure subset of valid XML that differs in little-used ways, or ways not relevant to their use cases, and that they do not accept invalid XML.
Ravyne
- Member Since 26 Feb 2007
- Last Active Jul 28 2016 07:45 PM (Offline)
Digipen grad and independent game developer in North Seattle, otherwise employed at Microsoft.
- Group GDNet+
- Active Posts 4,797
- Profile Views 23,404
- Submitted Links 0
- Member Title Member
- Age 33 years old
- Birthday June 10, 1983
Aside from game development: Computer Languages, Old school gaming (Particularly RPGs), Embedded Systems, Electronics.
Outstanding Forum Member
Posted by Ravyne on 27 July 2016 - 04:10 PM
Its "ternary", not "tertiary" -- though its an easy mistake I was once guitly of myself. "Ternary" means "composed of three parts", "tertiary" means "of the third order, as in primary->secondary->tertiary."
I've never seen this form of ternary operator before though -- what language is this? In most (all?) languages, ternary operators are expressions (meaning that they have a value which can be assigned, passed to a function or otherwise consumed directly), whereas if-else constructs in most* procedural languages (C, C++, and cousins) are not expressions, they're statements. If you don't take advantage of of their status as an expression, code using ternary operators usually is more ugly than if-else-style code, and its often the case that when if-else-style code changed into a ternary expression is still ugly, then it was probably better off as if-else anyways.
Using C-and-friends-style ternary operators, cleaner code might look like this:
auto move_speed = (is_running) ? normal_speed * run_speed : normal_speed;
Or even this:
auto move_speed = normal_speed * ((is_running) ? run_speed : 1);
* Many functional or functional-inspired languages have if-else expressions rather than statements.
Posted by Ravyne on 25 July 2016 - 07:18 PM
Whatever you do, don't send mixed messages -- apply as a programmer and showcase your programming, or apply as an artist and showcase your art/modelling. These will be distinct portfolios for sure, and should be distinct resumes as well -- even if they draw from the same education and experience, each resume should have a different angle, focusing on either your programming or your artistic prowess.
Now, an important question to ask yourself is whether or not you actually have the requisite background and skill level as an artist to compete for jobs against those with an arts background. It's entirely possible you have the talent, if not the paperwork, but be aware that there's a whole different background expected of an artist -- even if their title is "modeller", they're still usually expected to have some level of competency with traditional art (sketching, painting, etc.), digital 2D art, and concept development, plus a general familiarity with art history, terminology, and all the foundational techniques a pure artist would have gained.
If your modeling skills are strong and your general art background/capability not lacking, your ability to program might make you uniquely attractive to a small team looking for a first/only artist because you'll need a strong ability to interface with programmers on a technical level. In the industry, there's a title something like "Technical Artist" who performs this same kind of function for a larger team -- being the liaison between the art and programming sides of the house, working with programmers to specify what the art team needs of the engine, and helping the artists work towards what the programmers enable. On a larger team this person doesn't usually do art themselves, at least not primarily, but for a team with only 1 or 2 artists, a similar capacity to collaborate on a technical level can be useful -- it could be a programmer who understands art, or an artist who understands programming.
Posted by Ravyne on 25 July 2016 - 03:32 PM
A good design pattern for this is the Decorator pattern. Applied to this problem, you can think of it as an object-based formalization of Hodgeman's stack suggestion, or some of the other suggestions here.
How I use it is this --
First, in my case, characters' stats are fixed per level and stored in a read-only array-of-structs -- there's one struct for each character at each level. The character data structure points to this for its base stats. Weapon, armor, and item stats are similar, but separate -- you can think of all of this as a big dictionary, or a set of spreadsheets or database tables. In fact, that's usually how I author this data, which gets fed into the build (or could be read from a file at load-time).
*side note* If you have non-static leveling (e.g. where the user allocates their own points when they level up) you would manage this base data differently, but the basic pattern and application stays the same. You have a couple obvious choices: One is that you do away with the dictionary and just have a base-stat data structure for each character/item that you modify directly; The second way is that you still have the dictionary of read-only base stats, and you implement the user-allocated points themselves using this pattern (just apply them before any other buffs/debuffs/effects).
Now you apply the decorator pattern to that base stat structure -- basically, you have a decorator that presents the same interface as the structure itself, and when a decorator is applied, the character points to the decorator and the decorator points to the base stats. This means the character reads from the decorator, which is where it gets its opportunity to modify the base values. What about multiple effects/buffs/debuffs? Simple! Since the decorator has the same interface as the base stats, a decorator can point to either the base stats or another decorator, as long as they derive from the same interface and you use a pointer to the base class -- you can stack them arbitrarily deep, and each decorator will modify the sum of the base stat and all the decorators that came before it.
This might sound like an awful lot of engineering for something so simple, but it's very robust and the pattern isn't terribly complicated. One of the things that makes it nice is that it really decouples the implementations of effects from one another and from the base stats structure. They don't need to know anything about what the other effects look like, how many there are, or what they can do. Everything communicates only in the language of the character stats (though, in my implementation, I have a side-channel that allows an effect to know if a certain effect has already been applied -- for items, think of a set of armor that's stronger when all its pieces are equipped).
Also worth noting: the reason I like to pass in the entire character stat-set, rather than having individual decorators for each stat, is that it makes it much easier to have multi-dimensional effects that modify several stats, or that take several stats into account in determining their value (e.g. a character's defense against spells might be modified by both their magic resistance and their intelligence) -- it's also a whole lot less to manage and a whole lot less code to write.
To talk about implementation of the interface for a bit, it'll depend on whether you're programming in a language with properties (like C#) or without (like C++) -- if you have properties, then you can use them to implement getters for each base stat (and setters, if you don't go the dictionary of read-only structs route). Otherwise, you need to implement member functions to get the values out.
Another good shortcut to know about is that in C++ a virtual function can have a default implementation -- so what I do when I define the base class for the decorators is give each virtual getter a default implementation that just reads the value from whatever it points to and passes it along unmodified. That way, when an implementation of a particular decorator modifies only one stat, I only have to implement that one stat's getter without any other boilerplate. Not only does it save me typing, it removes the opportunity to make mistakes.
Posted by Ravyne on 21 July 2016 - 06:06 PM
Agreed -- Lighting.
Tim Sweeney of Epic Games -- a graphics whiz every bit as good as Carmack, if not better -- roughly lumps generational lines of rendering advancement by how many bounces of light they simulate. In raycasters like Doom, light bounced just once, off a surface of the world directly to a pixel on your screen, and nothing intervened in it -- light didn't come from anywhere in the world, it was just an ambient value: ever-present, constant, radiating in all directions evenly. In the first generation of polygon games like Quake or Unreal, light "bounced" twice -- light affecting a particular surface had a single origin in the world, and each origin could have different properties; an ambient lighting factor continued to stand in for all the indirect bounces, and pre-baked light-maps helped color in the illusion of localized lighting and occlusion. For a long time, lighting advances came by increasing the number of lights in a scene, not the number of bounces -- even Doom 3 was more lights, not more bounces. Modern games simulate ~3 bounces -- IIRC, the ambient light factor bounces off one surface, picking up its properties, propagates to a nearby surface, and finally the sum of this and the localized, two-bounce lighting reaches your eye. The coming generation -- maybe today's bleeding edge -- should make a good run at subsurface scattering.
Adaptive animation is another current frontier that builds on realism. After that, probably AI/behavior that results in more than a simple choice between pre-canned responses, blended at best -- something more natural than that will be needed to cross the uncanny valley once we reach realistic-appearing humans in real-time.
A parallel advancement has been Physically-based lighting, which gives materials a logical consistency like you see in the real world. In the past, materials were often bespoke and could have their knobs tuned to wildly different values to achieve an appearance consistent with the scene as a whole.
Posted by Ravyne on 20 July 2016 - 07:44 PM
It's not that it's impossible given the right research, team, and discipline -- it's that only rarely do those big dreamers have any of those things. And even when they do, it's still not a sure bet. If you manage to have all of those things, you still need to develop the right kind of connections, gain the right kind of attention, and bring it all to bear at the right time in the market. Success doesn't follow a formula; you can do everything right and still fail for no apparent reason, or you can mess up some things and be wildly successful for reasons mostly beyond your doing or understanding.
The best you can really hope for is to do your best to stack the deck in favor of your success. Doing that means being on your game all the time, always being in position to take advantage of favorable winds. And your team needs to maintain that despite what life throws at them, and -- without funding -- for an uncertain and uncompensated future, all while doing whatever it is they do to feed themselves or their families. It's a lot to ask for, with only a very small chance of a worthwhile payday in return.
By all means reach for success and do everything you can to tip the field in favor of achieving it, but don't place real or emotional stakes that you can't afford on success alone, especially a big success. Small successes in service of an eventual bigger success are often more achievable and even failures in service of that goal are often more palatable than betting all your hopes and dreams on one singular push.
Posted by Ravyne on 18 July 2016 - 02:21 PM
So, assume there is a game. Also assume that the legal stuff would allow you to copy it, share it, use it, etc. Now assume you take this, port it to another platform, maybe tweak it a bit, and then release it as a commercial product.
Being granted a license to use/share/distribute a software product freely (as in freeware or shareware) doesn't grant you any license to make derivative works of said product. Some kinds of freely-available software, such as "open source" software, give you all of these rights, but usually on a conditional basis. For example, the GNU GPL, among other things, says that you can modify the software and distribute your modifications or modified versions, but you must make the corresponding source code available, you must not remove or alter the license, and your modified version must be distributed under the same license (the original project may also welcome your changes back upstream, though it isn't obliged to take them). Other licenses give you different rights and require different things of you in return. Also, be aware that the license governing the source code may not be the same license governing other program assets like graphics or sound -- even if the source code may be modified and distributed, it does not follow that the same is true of other program assets.
As for where the legal and ethical lines lie, it's less concrete. Legally speaking, copyright law does not protect ideas, but it does protect a particular expression of an idea -- which is necessarily a subjective matter in ways that other kinds of torts are not. Recent court rulings have awarded damages in cases where clone products were effectively identical in all but aesthetic differences. Read up on the Triple Town / Yeti Town lawsuit as an example. Basically, the judge ruled for damages because Yeti Town was a direct clone, down to the rules, item function, and general progression of play. Still, this is not cut-and-dried, because there's no clear consensus on how much sharing of rules, item function, or progression constitutes a clone.
Also, copyright law is absolutely clear that you cannot create derivative works which steal characters, settings, specific premises or other established elements of a creative work for your own; so don't do that in any case.
Ethically, I'd say you're best off steering clear of clones and near-clones, even if you change up the artistic elements, characters, etc. Take inspiration, pay homage -- hell, even satirize -- but don't copy. What I've always advised is this: if your work doesn't stand on its own merits, then it's not really your own; furthermore, don't do anything you'd be upset by if the shoe were on the other foot (and when imagining that other foot, remember the product being cloned might be feeding someone's family). If you avoid those things, you don't have an ethical duty even to ask -- asking can be a nice gesture, but be aware that if you get a polite "Thanks for asking, but no," and you then clone or heavily borrow from them anyway, you've given them great ammunition to go after you with in court, if they decide to -- and they could tell you no and later go after you even without solid legal grounds to stand on. If you ask, you need to abide by the terms you're given.
Posted by Ravyne on 18 July 2016 - 01:49 PM
Advantages: - better C++ language standard compliance
None of these are advantages of non-MSVS toolchains, nor are they disadvantages of MSVS, as they are easily achievable with the Microsoft compiler.
Microsoft's compiler still lags a bit behind on conformance issues -- not much now, but some. There are some language features that can't be fully supported (or supported at all) until their compiler rejuvenation project is complete. They've written about it on their blog, but the basic problem is that, for historical reasons, the compiler took things from source to final representation as quickly as possible, so unlike most other compilers there's no intermediate representation of the entire translation unit, and this makes a few of the new language features very difficult to achieve. Good progress is being made on the rejuvenation, though; they announced their SSA optimizer recently, which is a first step in resolving the legacy problem. There is an interim sort of workaround, though, that I'll speak to in a bit.
First, we need to settle on some terms -- a compiler is the program that transforms your source code to machine code in an object file; notable examples are GCC, Clang, and CL (Microsoft's compiler). An IDE is the tool many people use to write their source code, manage their projects, debug, and do other things; notable examples (for C++) are Dev-C++, Xcode, and Visual Studio. You don't need an IDE to write code -- you can use any text editor you like, or other IDE-lite offerings -- as long as you're comfortable doing your compiling and linking from the command line, and with debugging with stand-alone tools like WinDbg or GDB. For running your builds, things like Make and MSBuild exist -- those allow you to build your program for various targets easily, from the command line.
For an IDE, Visual Studio Community is probably your best bet right now -- As long as you select the C++ tools during install, everything should be set up. Other IDE options, IDE-lites, and text-editor-based workflows usually require a bit of extra setup.
For the compiler, there are a few things to address -- I'd wager that Dev-C++ in your case is pointing to a GCC compiler, or possibly to Clang. Either of these has different command-line options and language support than Microsoft's compiler. If it's the case that you'll need a compatible compiler for your coursework, you'll need to set one up. One option is Microsoft's Clang/C2 compiler -- basically this is the Clang front-end strapped to Microsoft's code generator; it gives you a GNU-style compiler (command-line options) with Clang's level of language support, and it creates object files, libraries, and DLLs that are compatible with programs built with Microsoft's normal toolchain. It might be enough for your coursework. I've used it myself to write code using Boost::Spirit::V3 (a template library that doesn't compile with Microsoft's toolchain).
If your school is a Linux house, another good option around the corner is Bash on Ubuntu on Windows -- this is basically the entire Ubuntu user-land running on top of Windows, giving you a full Ubuntu command-line environment right out of the box. It's not "like" Ubuntu, it is Ubuntu -- you can run any Ubuntu binaries you get right from apt, or you can compile and build from source, the same as on any Ubuntu machine. I believe that's being released to the public with the Windows Anniversary Update on August 2nd (based on Ubuntu 16.04), but I'm not 100% certain; you can get it on the fast ring now, though (based on Ubuntu 14.04).
In any event, you'll always want to ensure that your program compiles and runs correctly using whatever environment your work is graded against. Even if you think your code should work and be portable, you don't want to start racking up zeros because you didn't test it.
Posted by Ravyne on 13 July 2016 - 04:49 PM
Copyright grants its holder two broad categories of rights. The first is essentially a government-granted monopoly to the creator of an original work for the exclusive right to distribute it -- the holder is the only one who can publish, reproduce, perform, display or export the work, and is the only one who can license or sell those rights to a third party. The second right is the exclusive right to create derivative works -- only they can produce sequels, prequels, side-stories, etc. using the characters, settings, and the "universe" they've developed. Again, they can license or sell any of these rights individually, or altogether, but they are not free for the taking.
Keep in mind that these are matters of civil law, and the standard for proof of infringement is something like "would the similarities lead the average consumer to believe that these films are drawn from the same pot as the games" -- if the answer is yes, you would be found to be infringing. That means you must avoid even the appearance of continuity between your story and theirs (by which I mean logical continuity, not literal continuity) -- you certainly cannot imply continuity, much less claim it outright.
So, you can create your film epic using the framework of what you've been inspired by, but you'd need to strip off any features readily identifiable as being taken from that creative universe. There's also the matter of context -- orcs are a general enough concept, and war between races of men and monsters is general enough, but put together in culmination toward something that looks an awful lot like the Warhammer universe, it's hard to argue that it's mere coincidence, even if you don't use specific characters or places.
A good rule of thumb is to avoid riding on the property's coat-tails. If the thing you're doing doesn't make sense without the other property's existence, or wouldn't be viable without an implied relationship between them, then you're riding their coat-tails and there is only the barest of chances that you're not infringing.
Another good rule of thumb is that you should always consult a lawyer if it's not 100% obvious that what you want to do is safe -- and even if it is, you should still consult a lawyer to make sure you're doing it right, for your own protection.
Standard disclaimer -- I am not a lawyer, neither are most people here. Even if they are, they are not your lawyer. None of what's here constitutes legal advice.
Posted by Ravyne on 23 June 2016 - 08:57 PM
Maybe it's your resume?
Certainly you're experienced, but if you've had a long stretch without having to job-seek, it's possible your resume isn't in a contemporary style or doesn't use the right buzzwords, etc. Especially if you're looking at medium- and large-sized studios, the very first thing a resume has to do is get past the HR drones. Last month, after 5 years in my current position, I had to update my resume for a different role I was interested in inside the company -- it was a lot more work than I would have thought to bring my old resume 5 years forward all at once.
How are these companies that aren't interviewing you getting wind of your age, anyway? If it's in your resume or explicitly in any professional profiles, consider removing it or making that information less front-and-center. Hopefully I'm not off base with the resume angle; it's just the fact that they're apparently getting your age from it that gives me concern it could be part of the issue.
Finally, some standard but seemingly-uncommon advice on job-seeking:
- Use your contacts -- I've read that referred resumes are 20x as likely to land a follow-up (phone screen or interview).
- Cover letters should always be customized to the position -- show interest in the company and position, and share why you think you'd be a great fit.
- Remember that the purpose of a resume is not to get a job, the purpose of a resume is to get an interview (or whatever next steps are).
- Stay positive -- If they're looking at your resume, they want to give you an interview; if they give you an interview, they want to hire you. It's just a process of whittling down.
- Be aware -- The biggest reason job-seekers are cut short of the position they want is not lack of knowledge, it's risk. Be specific about what you know and have done, and don't do anything that puts doubt in the hiring manager's mind about what you're really capable of. I know of enough people who've tried to appear more knowledgeable than they were, and it wasn't the lack of knowledge that killed them, it was the grandstanding that painted them as a risk.
Posted by Ravyne on 21 June 2016 - 04:58 PM
One problem is that the style of OOP taught in many colleges, universities, and books borrows very heavily from Java's over-orthodox view of what OOP is.
Eh no. What you find in universities are a lot of people who are not exposed to codebases beyond the really basics they teach in the courses. They read software engineering books (that are for the most part, language agnostic, and UML centered), and try to apply that to their "hello world" examples. Results are obviously disastrous since software engineering books are targeted at big organizations, with big codebases, and with complex projects, often much more complex than whatever the professor has ever done.
Worst of all is that people get out of those courses thinking "This is how Java is done!" "This is how C is done!"
I don't want to derail too far from the original question, but yes -- this is also true. The trouble is that nearly all bread-and-butter courses are taught in Java because it's been adopted as the language that testing is administered in. Other languages get some road-time too, but Java is by far the most prevalent. Why that becomes something of a danger doesn't have much to do with Java being a poor language (it's a perfectly fine language for its stated design goals; I just disagree with the merit of those goals) as much as it has to do with its dogmatic orthodoxy -- you cannot do procedural programming in Java; the best you can do is fake it by wrapping it in a superfluous veneer of Java-isms, because the Java clergy sayeth so. Writing a simple 'Hello World' program in Java isn't contrived OOP as an exercise; it's contrived OOP because that's what the language demands.
We can agree though, that most early programming habits anyone picks up in any language are best to hit the dustbin sooner than later. Rarely are first habits good habits.
Having learned Java-style OOP has a lasting effect on a programmer. It's like an embarrassing accent
Oh, C++ has its warts alright. It's far from perfect, but Bjarne is explicitly not a prophet -- C++ simply doesn't enforce dogma on you the way Java does. As I said, Java is a suitable language for its design goals; it just so happens that among those goals are enforcing a particular and rigid view of OOP best practices, protecting fully-functioning programmers from themselves, and a misguided attempt to make Java programmers interchangeable by creating a language that forces them to the lowest common denominator. If you're a big enterprise, those things are features. It just makes for an obstinately-opinionated language driven by business decisions rather than technical ones.
IMO, C# did a much better job achieving Java's technical goals, and was better for throwing off as much dogma as it could.
I wouldn't want to derail the conversation any further but I'm happy to discuss further elsewhere. The relevance to OP's question has to do with the prevalence of Java combined with its limiting orthodoxy, and there's not much more to say on that topic.
Posted by Ravyne on 20 June 2016 - 09:13 AM
One problem is that the style of OOP taught in many colleges, universities, and books borrows very heavily from Java's over-orthodox view of what OOP is. Java doesn't allow things like free functions or operator overloading, so it forces you into expressing solutions composed of classes, and in this very verbose, over-engineered way.
Graduates come out of school mostly having only experienced this way of OOP programming, and so they carry it forward. They usually have no earthly idea how to organize a procedural program in C, and when asked to write C++ will mostly give you a Java program that makes the C++ compiler happy enough to compile it.
Having learned Java-style OOP has a lasting effect on a programmer. It's like an embarrassing accent -- at first something you have to consciously work at hiding, and hopefully something that fades away over time.
An expanded, less-orthodox view of OOP (as in C++) embraces mixing other styles. Good C++ programs usually mix OOP, procedural, and functional styles together.
Posted by Ravyne on 17 June 2016 - 09:09 PM
What do you guys reckon? Would you throw away a bit of portability between C compilers to use Checked C?
Honestly, I'd just use C++. I would welcome this work being looked at for inclusion in the next C standard, but I suspect that's unlikely. On most platforms today C++ is just as available as C -- even on something as tiny as certain varieties of 8-bit microcontrollers you can do C++ with free and open toolchains. Only relatively few platforms that support C don't support C++ (in general, esoteric, legacy, or both), usually those with proprietary toolchains.
Between the family of C++ smart pointers and the work going on around the C++ Core Guidelines -- with Microsoft providing a proof-of-concept implementation and working with Bjarne Stroustrup -- much of this is already meant to be solved.
Or, there's the Rust language, which has commercial support from Mozilla and a great community, has a great C-linkage story, and works even on bare-metal/embedded platforms.
It's a decent enough idea, but I don't see Checked C gaining any level of real support.
Posted by Ravyne on 17 June 2016 - 05:13 PM
I found that video -- it turns out it was a GDC talk about cameras in side-scrollers. It's a really excellent video.