DXTK (DirectX Tool Kit) has WIC-based texture loading for a variety of file formats.
Ravyne
- Member Since 26 Feb 2007
- Offline, Last Active Mar 07 2014 04:47 PM
- Group Crossbones+
- Active Posts 3,794
- Profile Views 11,692
- Submitted Links 0
- Member Title Member
- Age 30 years old
- Birthday June 10, 1983
Aside from game development: Computer Languages, Old school gaming (Particularly RPGs), Embedded Systems, Electronics.
Outstanding Forum Member
Posted by Ravyne on 27 February 2014 - 03:31 PM
I'm not super knowledgeable in this area, so perhaps someone can enlighten me -- aren't the code and data sections mostly throwbacks to the 16-bit segmented memory days?
I recall that when I learned to program in a compilable version of QuickBASIC (v4.5, for those who remember), it actually had statements for setting the active code and data segments. One of the major optimizations I once discovered in one of my programs was that the structure of my map rendering was causing me to jump all over memory and even across data segments -- which was obviously horrible. When I fixed it, I went from 5fps to 50fps in a tight rendering loop where blitting should have been the bottleneck.
Posted by Ravyne on 21 February 2014 - 03:52 PM
so does that mean they don't understand what runs their system?
As far as my experience goes, the code is not called legacy because it is old, it is because no one supports it (and, therefore, no one understands properly how it works).
If this is true, what is the benefit of having what you don't understand and isn't supported?
There's no benefit, other than that the system works and will likely continue to work. The implication of this definition of "legacy" is that no one is left who understands the old code precisely enough to replace it with newer, better, cleaner code. Happens all the time -- some medium-sized utility at a large warehouse operation becomes critical to the business, developed by one lone programmer and hacked to add new features over the years, and suddenly that programmer leaves the company, retires, or dies. The business relies on that software, but can't risk changing it and can't afford to hire a consultant to come in and analyze it to the necessary degree. You'd like to think that just any programmer can come in, read the code, and know what's going on, but it's not the case in reality. In reality there are all sorts of little assumptions and gotchas in most code bases (particularly of the type I describe here), and documentation, if it even exists, is often outdated or just plain wrong.
Yes, the business is precariously positioned to rely on such software, but what can they do?
There's somewhat less of this in the games industry, but I recall reading in an article that Madden Football -- all the way up until its Xbox 360 and PS3 incarnations -- still had some C code in it that had originated in the Sega Genesis version. The first few Halo games, originally developed for the Mac, had a codebase that hacked some C++-like features into their C code (the Mac platform didn't have great support for C++ back in the day, and its frameworks to this day are written in, or at least exposed as, Objective-C rather than C++ or even vanilla C). I'd wager that the latest Unreal Engine has code stretching back to the original Unreal.
Legacy code is just older code that's either difficult or unfeasible to replace, but which is nonetheless necessary to continue using. It's most often stated in relation to C++ just because C and C++ were among the first very popular, widely used languages. But there's a non-trivial amount of legacy COBOL and FORTRAN code running all kinds of finance and business institutions. In fact, if you understand either one along with the hardware and related software systems of the day as well as the old-timers did, and also a modern language well enough to re-implement it in a more-modern language, you can probably earn yourself a half-million-dollar salary or more through consulting work.
Posted by Ravyne on 20 February 2014 - 06:58 PM
To sum it up in a single phrase: C++ allows you to take tighter control of how the underlying hardware executes. To expand a bit further: when you, as the programmer, determine that the compiler, library, or runtime just isn't doing something at the level you need, you can do a great deal to push those things out of the way and take the wheel yourself.
There's a number of benchmarks floating around that pit various languages against one another to see who does what fastest, or to prove claimed performance parities. Often you see fairly literal translations from one language to another and the better ones at least do idiomatic programming in each language. Rarely, though, do you see comparisons of complex problems where the only requirement is to come up with the correct solution. If that were more common, you would see a pretty clear trend of C and C++ winning, and very often by significant factors. That's not at all to say that other languages cannot beat C or C++ at certain kinds of problems -- languages with lazy evaluation, for example, have a significant advantage at certain kinds of problems.
But, across the broad range of software and problems, C and C++ let you optimize your code by influencing how the hardware executes it -- for example, by directly controlling data packing and alignment, or through relatively unfettered access to assembly instructions via intrinsic functions, including advanced vector instructions like SSE and AVX. There are ways to get at those from languages like C# or Java, but you're always a step or two more removed from exercising direct control.
The ability to manage memory and object lifetimes to an incredible degree is not to be overlooked either. It's one of the significant optimization opportunities that other languages mostly don't offer. C++'s relative lightness compared to languages that require a heavier runtime environment also helps it scale down to smaller devices -- the .NET runtime might not seem too onerous a requirement on a PC or Xbox game, but it's a different story if you're talking about the Nintendo DS, for example.
Of course, you pay dearly to exercise that level of control over your software, and the currency of trade is your productivity. When you need to achieve the highest levels of performance, and are willing to pay in lessened productivity, C and C++ are no-brainers. But often, time or skill are the limiting factors, and if performance is not critical, C#, Java, or even slower languages might offer the best balance of all factors.
On top of all that is simple momentum -- there's a legacy of C and C++ code in the games industry that's venerable, tested, and works on every relevant platform. All the popular and necessary middleware in use is C or C++, as are the kinds of high-performance libraries used in making games -- from math, to audio, to OpenGL and DirectX. Again, there are usually ways to access these things from other languages, but there's always an extra layer or two between you and them. It's not just the relative performance of C++ versus, say, C# for running game code internally that can be a burden, but also the translation and penalties associated with talking between the two.
Posted by Ravyne on 20 February 2014 - 03:54 PM
The general lay of the mobile landscape today seems to be that no one makes money on pay-to-own games -- that is, ones where you pay once to purchase the game and that's it, you own all there is to own. There are many reasons, but chief among them is that you *have* to get onto a top-apps list to gain visibility, so that you can build a user base large enough to support you. Even a $0.99 price tag makes people think twice about buying your app, so you never get that early spike in downloads that's necessary to crack those lists. A secondary reason is that, on Android, side-loading apps is so easy and popular that pay-to-own games are simply pirated to a massive degree -- and because Android is so large (even more so than Apple, worldwide, in install base) and necessitates various non-pay-to-own schemes, basically the entire market follows.
Even many of the larger 'AAA quality' mobile titles from the likes of Epic don't often turn much (if any) profit from the initial sale of a title alone -- it's often just one piece of the pie, and DLC or micro-transactions make up the rest, often the majority. The thing is, once additional content or monetization schemes start to account for 40% or more of your profits, it's often more profitable to just give away your game, because the additional DLC revenue will more than make up for it.
All this is kind of tangential to your question, but it frames any response. The question is the same, but different -- How many people are engaged in the Windows Phone marketplace, how many of their eyeballs can you capture, and how many of those can you convert to paying customers? You always want to get more return out of your port than you put in, but in general having access to more eyeballs is always good, especially when you support yourself by selling additional content.
Some pluses to the Windows Phone market:
- Once you support Phone, it's relatively easy to also support the Windows Store (Surface and Windows 8+ apps), and presumably Xbox One when the SDK is released.
- There's currently less competition on Windows Phone, making it easier to get noticed. Getting recognition for your Windows Phone version could help build your presence on other platforms (e.g. a popular reviewer notices you on Windows Phone, you get some press, and the story also mentions the other platforms you support).
- Like iOS, Windows Phone is much less prone to piracy than Android.
- Although they're third in market share now, Windows Phone seems to be generally well-liked and well-reviewed. Devices are getting better, market share is growing and at a rate that outpaces the other two platforms. It might not be something you decide to do based on today, but on your expectations for 6 months or a year from now.
- If you pass $50,000 in Windows Phone revenue, you get to keep 80% of revenue thereafter, instead of the perpetual 70% that other platforms offer.
- Obviously, to start, Windows Phone market share is relatively low right now, even if it's gaining on the other contenders.
- If you wrote the majority of your game in C or C++ and used good abstractions, a port should be straightforward. But you will have to implement a new DirectX-based renderer, audio playback, UI, and other miscellaneous platform pieces.
- There's less communal experience with that ecosystem -- no one really knows what's different about the Windows Phone market. Mostly people treat it no differently than iOS or Android, and often as an afterthought; that may or may not be close to optimal.
- Analytics platforms across all ecosystems are less than optimal, but this is especially true of Windows Phone / Store. On other platforms, serious devs tend to use third-party solutions, and not all of those support Windows Phone and Store yet.
Posted by Ravyne on 18 February 2014 - 02:54 PM
You can't really design a fast language -- certain language decisions influence the potential of a program to run quickly, but you're talking about taking a plaintext program description and ending up with something that runs quickly -- there's a metric crapton of magic that goes on between one and the other.
Ultimately, it's my belief that for a new language to be successful today, it has to be compelling in a way that's not already cornered by existing languages. It's not enough to say "My language will be as fast as C++, just as general-purpose, but with a smaller, more regular, and more expressive syntax." Even if you achieved all those goals in spades, you will have only an "interesting" language -- one that can be admired for its achievements, but which almost no one will be willing to retrain their programmers for, or use in production. All existing languages have momentum no matter how bad; your new language has effectively zero momentum no matter how good. It's exceedingly difficult to overcome that disparity without offering something more than a subjectively better new-old-thing.
Any new language today that cannot find its pitch, and a voice strong enough to shout above the cacophony of other boisterous languages is doomed.
Posted by Ravyne on 18 February 2014 - 01:24 PM
A decent-enough option if you can spare some money for your hobby would just be to get a cheapish Windows 8 laptop and take it wherever you can better concentrate than home. Bringing it to work has all the same legal issues, and possibly more if you put it on their corporate network, but go to a quiet coffee shop or, heck, go sit under a tree if it suits your mood.
Setting that option aside, what you *could* do is structure your game so that the UI and platform code are abstracted away from the game itself. Not only will this allow you to continue with productive work in both locations, it's a good idea in general. What you end up with is one game with two different "faces" -- one for the Windows 7 desktop and one for the Windows 8 Store model. The 'face' is not just UI, though -- it could extend to other things like how you access the disk or network, or the hard requirements Windows 8 Store apps place on how your app behaves (e.g. your game must not take longer than 2 seconds to load and become responsive, so you can't block the main thread to load resources). But most of a game can be well-insulated from that, and if you care only about Windows 7 and the Windows Store, you can use D3D11 for rendering and most other common gaming APIs just fine (if you wanted to reach back to XP, you'd have to include a separate D3D9 or OpenGL-based renderer).
Posted by Ravyne on 18 February 2014 - 01:13 PM
It depends somewhat on what kinds of operations you're using, but a fair number of HLSL instructions map fairly directly to different flavors of SSE -- If you can use SSE, that will be your highest-performing option. Otherwise, you need a scalar fall-back using normal x86 instructions. You can use intrinsic functions for the former.
Now, that said -- you don't need to port the HLSL to assembly language. What you need to do is understand what the HLSL is accomplishing and then write a version of the same algorithm in whatever language you choose. It doesn't have to be x86 or SSE assembly/intrinsic functions.
How much HLSL is it? Can you post the code?
Posted by Ravyne on 17 February 2014 - 09:06 PM
The idea is that you'll be able to access and use a Manager as an Employee when you want to treat them as just another employee. For example, if you just wanted to do payroll, you don't care who's a manager or what their title is.
So you can create a function, for example, that takes an employee by pointer and prints out the payroll check, and you can pass it pointers to regular employees or pointers to managers by casting them to pointers to employees. The one function will handle both -- if you continue to access a Manager's Employee attributes through the Super member variable as you're doing in your code, then you'd need two separate functions.
You'd be doing functionally the same thing by passing the address of a manager's 'super' member variable -- but because it's the first struct element, simply casting the Manager pointer to an Employee pointer achieves the same thing. All it says is "Treat this manager as a regular employee."
Posted by Ravyne on 17 February 2014 - 06:48 PM
At the very least you'll need some kind of lobby server to connect you to your friend, if only to make connecting even moderately convenient.
Posted by Ravyne on 17 February 2014 - 11:57 AM
For me, any sort of self-consistent alternative timeline/universe is fun because it sets up a world that's slightly askew from the one we know. In general, the physical world is much the same (for instance, gravity), but they've gone down a different path to augment their own reality for their benefit -- Where we have cell phones, which are terribly mundane for us, they have a crystal or some such that performs similar functions (perhaps with different properties or limitations). If the steampunk universe were real, it'd be more than likely they're reading about something akin to our world, and cell phones would seem utterly fantastic to them as their devices seem to us.
When you think about it, the modern world we live in is incredibly fantastic. Imagine the progress seen first-hand by someone who was born around the turn of the last century and lived to be 100 or more. I mean, probably millions of people who were alive for the news that the Wright brothers had achieved the first flight also lived long enough to see commercial air travel become common and affordable, and to see a man set foot on the friggen moon. To paraphrase Louis CK on air travel: "I don't see how anyone can complain! You're sitting in a chair IN THE SKY!" Any kind of alternative universe resets our expectations and removes the solutions we take for granted from the equation, and I think that sometimes seeing an alternative-universe solution, in some way, reawakens our respect for the problem. I think that's satisfying too.
Posted by Ravyne on 14 February 2014 - 02:39 PM
It's sad but true, I find Gamedev to be a rather hostile forum.
I don't think it's overly hostile unless you ask overtly silly questions, or take a wrong-headed opinion and really stick to your guns. I do think people are unfairly down-voted on occasion; when I see someone in the negative for it, I take a look through the thread and at the post, and if they don't seem to be in the wrong I'm happy to upvote, even if they're incorrect about something.
People can make honest mistakes or be genuinely confused and shouldn't fear being punished for it. And there does seem to be a piling-on effect that's really not helpful or productive.
But I think the system mostly works -- it's pretty rare that I downvote anyone, and when I do, it's because they're objectively wrong and unwilling to admit it, or rude. I also tend to follow an axiom that I don't downvote unless I or someone else has corrected, or can correct, the information. It seems unfair to downvote without letting the target know why, or how they're wrong.
Posted by Ravyne on 11 February 2014 - 02:02 AM
Things were simpler back in the day simply as a function of system complexity -- the simple microcontroller you might find in a modern game controller, or running the front blinkenlights/disc-eject panel on the console itself, is orders of magnitude more capable and complex than, say, an Atari 2600. What was once the whole system is now relegated to a tiny fraction of the overall complexity. Back in the day, you read a 200-page paperback book that described literally every last detail of a system -- today, that's the chapter on GPU register usage.
No one human being today can master an entire modern system, let alone also wearing the artist, designer, producer, audio engineer, and marketer hats. Back in the day, the entire company was not uncommonly one dude in his bedroom, part-time.
On one hand, the limitations they faced were incredibly constraining, but on the other, there's something freeing about them. You knew with certainty what the hard limits of the system were, and that with skilled programming you could achieve them exactly. Today, we have a good idea of the upper bound, but other considerations bottleneck peak theoretical performance, and we spin round and round figuring out how to make the bottleneck just a little bit wider, moving our own goalposts. The sky is the limit these days, and the sheer number of options can be paralyzing -- back in the day, with relatively limited options, you just figured out a creative solution and got on with it.
But the thing is, the details we have today are more-or-less the same details we had to deal with back then -- how can I shed a few more cycles? How can I best allocate my registers? How can I squeeze more information into the same amount of memory? These are questions modern and old-school engineers both recognize. The only thing that's changed, really, is the sheer number of details that have to be considered in concert.
Posted by Ravyne on 06 February 2014 - 02:43 PM
So, here's really the one thing that game designers forget and which is especially true of single-player games: Try as you might to corral and funnel players into the experience you've imagined, you don't actually get to decide how any player plays your game, or how they have fun doing it.
You can put all the fun you want into a game--probably you'll end up putting some extra in that you didn't even know about--but ultimately you don't control how people take that fun back out. Who are you to tell anyone else how they should have fun in their own hard-earned free time? Especially when their fun doesn't unfairly infringe on other players' fun. Just stop worrying about it and let them eat cake. Whatever smorgasbord you've laid out through considerable effort just doesn't interest them right now, perhaps it will later. In the meantime, focus on making the best smorgasbord you can for those that want to experience it in a way closer to what you expect.
I mean, imagine if the creators of Final Fantasy 7 had said "Sure, we've got this Chocobo Race thing, but this is an RPG, and if people sit around racing birds all day they'll never enjoy our grand RPG vision, so let's limit the number of Chocobo races they can do." What harm is there in letting them have fun in the way they want to?
I'm not saying that players should be able to do absolutely whatever they can imagine, or that you should enable every conceivable desire. I'm saying to make the best game you know how, and not worry if someone has fun taking it outside the box you so firmly want to place it in. The box is unnecessary and brittle, and all the time you spend trying to build a stronger box will only leave you with a significantly weaker game inside a slightly less brittle box.
Spend your time to best effect, and let your players do the same.
Posted by Ravyne on 06 February 2014 - 01:02 PM
So, like I said, the Mac Mini is actually overdue for an update based on past trends. The rumor mill has been swirling strongly for the past 3 months that an update was forthcoming, and just last week a UK retailer's site leaked a page for the updated Mini, with a sales date towards the end of the month, but without updated images or specs. This still very much amounts to a rumor, but it's the kind of thing that precedes an actual product update more often than not. The iMac has had its scheduled update, so you're right to say that its specs are quite a bit ahead right now. The Mini's update is likely to bring a Haswell CPU, Iris graphics, flash-based storage, and 8GB of RAM as standard, so it will be much more in line with the iMac, which has already received those upgrades.
Personally, I don't much like all-in-one designs like the iMac, but that's a personal preference, and to be honest, no current Mac is all that much more upgrade-friendly than the others anyway. I mostly just dislike that I may end up with a broken screen tied to a working computer, or vice-versa. YMMV.
Anyhow, if you don't need to get one today (and it sounds like you don't), keep an ear to the ground about the Mac Mini refresh, and then compare it with the iMac before you settle on one or the other, and get the best value-package you can.
And keep in mind that for audio work, what you mostly need is a good amount of RAM (16GB+), fast storage (local flash), and more speed/cores (audio work can be computed in parallel pretty well). Bulk storage isn't so much an issue, and you probably want to farm it out to an external RAID array anyhow.