

Member Since 26 Feb 2007
Offline Last Active Today, 04:16 PM

#5225325 I'm trying to make a GBA game in C

Posted by on 24 April 2015 - 02:51 PM

The best way to develop GBA ROMs is with an emulator, both for the faster turn-around time and for the easy debugging it offers.


No one ever really produced "blank carts" for the GBA -- not in the sense of "I can load my program onto this and sell it". There were flash-carts, which were developer tools for people like yourself to be able to test, carry, and show off your work (and which could also be abused for piracy). Today, no one I'm aware of is still manufacturing or retailing GBA flash-carts. Your best bet is probably eBay -- see what turns up when you search for "GBA flashcart", and do your research on what you find, because plenty of flash-carts have limitations and bugs that you might not be happy with. If you can find an EZFlash 4, though, that one seems to have the best reputation -- I picked one up on eBay a couple months back for around $70.

#5224896 GPU Ternary Operator

Posted by on 22 April 2015 - 12:51 PM

I don't know for certain, but detecting a "complex" (containing an inline expression) versus "simple" (a register-sized value) ternary expression is trivial -- CPU compilers have detected this (and its equivalents expressed in if-else form) for ages and transformed between the two instruction sequences as needed, regardless of whether it was written as a ternary or an if-else.


I also would presume that if even one of the operands is "complex", then something more interesting will happen -- either it devolves into a branch, or possibly it would pre-compute the result of the complex expression and then do a CMOV-type operation substituting that result for the expression. For simple computations, the latter might be faster as it avoids a branch, but it might not be so for a non-trivial expression.


Now, be aware that as soon as there's a possibility of a branch there's also a possibility of a divergence if even one thread in the warp/wavefront goes the other way, which is what will really kill your GPU perf. If you have a mixed case -- that is, one simple and one complex operand to the ternary expression -- then you might want to consider doing that pre-computation of the complex operand yourself to help ensure you avoid branches.
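To make the mixed case concrete, here's a hedged sketch in C++ (the function names are invented for illustration) of the manual pre-computation described above:

```cpp
#include <cmath>

// Hypothetical "complex" operand: anything more than a register-sized value.
float expensive(float x) {
    return std::sqrt(x * x + 1.0f);
}

// Branchy form: the complex side is only evaluated when cond is true, which
// on a GPU can diverge if threads in a warp/wavefront disagree on cond.
float pick_branchy(bool cond, float x) {
    return cond ? expensive(x) : 0.0f;
}

// Pre-computed form: the complex operand is evaluated unconditionally, so the
// remaining ternary is a simple select that can map to a CMOV-type operation.
float pick_precomputed(bool cond, float x) {
    float result = expensive(x); // both operands are now "simple"
    return cond ? result : 0.0f;
}
```

Both forms compute the same value; the difference is only in whether the complex evaluation can sit on one side of a branch.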

#5224594 software contract (good or bad)

Posted by on 20 April 2015 - 07:32 PM

Yes, it sounds like a bad deal -- particularly if that "you cover the costs of what you can't develop yourself" clause is worded or interpreted broadly, as in: "You're on the hook to deliver the software you promised us, and if that means farming out the whole thing, that's on you, even if it puts you in the hole." I wouldn't assume risk that isn't my own on behalf of another entity -- not for a 15% stake. If I'm assuming equal risk, I would want a commensurate portion of the company/project.


Accepting a percentage of earnings (by the way, net or gross? There are horrible loopholes here that can screw you -- just look at how the movie industry uses shell companies to make it appear that a film turned no profit at all) can be a savvy move, but I would not take it as sole payment. Your costs and your dev time don't work on a percentage basis; they have real, nominal costs. I would seek up-front, milestone-based payment for at least my costs, plus either a fixed profit or a percentage.

#5224592 do float operations give different results in different GPUs?

Posted by on 20 April 2015 - 07:21 PM

It's probably just a bad idea to rely on exact, bitwise equality of floating-point numbers on any real (digital) floating-point machine, unless you are absolutely sure you can gain and maintain complete control of that environment. I would imagine that even a super-scalar floating-point unit has the potential to make things go haywire, given non-deterministic scheduling and the way floating-point error naturally and inevitably accrues. Such super-scalar machinery is invisible to the programmer (I suppose there might be ways -- settings, fences -- to explicitly control execution order, but I'm not sure what real systems provide, if any), and what it does can be affected by whatever other external processes are in flight (or have just ended).


If you have a fixed platform with fixed input and a known starting state, you can probably rely on bitwise identical results, but it all goes out the window if any of those variables change.
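One reason bitwise equality is fragile even before hardware enters the picture: floating-point addition isn't associative, so merely re-ordering an accumulation changes the bits. A minimal C++ illustration:

```cpp
// Re-associating a float sum changes the result: adding 1.0f to -1e20f is
// completely absorbed, because the magnitudes differ by too many orders.
float sum_left(float a, float b, float c)  { return (a + b) + c; }
float sum_right(float a, float b, float c) { return a + (b + c); }

// sum_left(1e20f, -1e20f, 1.0f)  -> 1.0f  ((a+b) cancels to 0, then +1)
// sum_right(1e20f, -1e20f, 1.0f) -> 0.0f  (the 1.0f vanishes into -1e20f)
```

Any scheduler (or compiler flag) that reorders such sums -- which is exactly what parallel reductions on a GPU do -- can therefore legitimately produce different bits.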

#5224572 Remaking Mario 64 from scratch, how long?

Posted by on 20 April 2015 - 04:40 PM

The amount of assets comprising the game is pretty gigantic, though.


Quantity is an issue, but the low fidelity of the old assets and the better tools of today can certainly save some time -- plus, if we're talking about a literal clone, the design and tweaking are already done, so there are no iterations. It's way easier to get where you're going when you know exactly where that is.



All of that said, if we're not talking about the kind of clone that Nintendo's gonna shut down, and instead a new project of similar design, scope, and execution, you're back at square one, where low-fi assets save you some time, but not bucket-loads of it.

#5223308 Template or Macro

Posted by on 14 April 2015 - 08:06 PM

All snark aside -- in general you should prefer a template if a template can reasonably do the thing you need; however, temper that with other constraints -- there are things macros can do that templates can't (and vice versa), and sometimes a macro just fits into an existing codebase more cleanly (assuming you've audited that codebase and are sure your macro won't cause any conflicts in the first place).


Macros are dumb; don't do dumb things unnecessarily. But once in a rare while, something dumb is the smart choice.
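As a quick illustration of why macros are "dumb" (they're textual substitution, not functions), here's the classic double-evaluation pitfall, sketched in C++:

```cpp
// The macro is pure text substitution: each argument appears -- and is
// evaluated -- twice in the expansion.
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

// The template is a real function: each argument is evaluated exactly once,
// and the types are checked.
template <typename T>
T max_tpl(T a, T b) { return a > b ? a : b; }
```

Calling `MAX_MACRO(++i, 0)` increments `i` twice, while `max_tpl(++i, 0)` increments it once -- the kind of surprise a template never springs on you.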

#5223295 inline virtual is a nonsense ?

Posted by on 14 April 2015 - 07:38 PM

I would have to assume so -- a call can't be both virtually dispatched and inlined -- virtual dispatch is run-time polymorphism, not compile-time.


To achieve compile-time polymorphism you can employ different techniques -- simple inheritance and overriding is one form, and templates open up more (even simple templates do this, and this kind of compile-time dispatch is much of what template meta-programming is about). But to use any of it, you need to know exactly the types involved at compile time. There's no way I know of to reconcile run-time polymorphism with inlining, and I would assume there is none.



EDIT -- Ah, yes. Nyperen brings up a good point that it would be possible to de-virtualize such a call. However, I would guess that the way the compiler spots this opportunity is fairly brittle -- that is, it probably relies on a concrete type being known in the surrounding context, and if a well-meaning refactoring were to come along and change that type to one of the base types in the hierarchy, then your inlining vanishes and takes the performance with it. If you view that performance as a pure bonus, this is probably OK -- but if you're relying on inlining to achieve suitable performance, then it's no good.


In that respect, it's sort of like auto-vectorization -- it might seem like a good idea at first to get SIMD instructions by writing your C++ such that the optimizer can auto-vectorize, but you end up with the same problem -- an innocuous code change can invalidate the assumptions the compiler relies upon to enable that optimization, and it can't auto-vectorize any more. You make a tiny change and suddenly performance goes through the floor. If that performance was just a bonus, no biggie; if it was necessary, this approach is just plain broken. So if you're going to rely on the compiler doing a certain thing, it's really best to say what you need in the language that most explicitly expresses it to the compiler -- whether that's using SIMD intrinsics, or templates over virtual functions you hope will get de-virtualized.
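A hedged sketch of the brittleness described above -- marking the derived class `final` is one way to make the de-virtualization opportunity explicit rather than accidental:

```cpp
struct Base {
    virtual int cost() const { return 1; }
    virtual ~Base() = default;
};

// 'final' guarantees no further overrides exist, so a call through a
// Derived reference can always be statically bound (and thus inlined).
struct Derived final : Base {
    int cost() const override { return 2; }
};

// Concrete type known at the call site: the compiler can de-virtualize.
int through_concrete(const Derived& d) { return d.cost(); }

// Refactored to the base type: dispatch is dynamic again, unless the
// optimizer can prove the dynamic type at this particular call site.
int through_base(const Base& b) { return b.cost(); }
```

Both functions return the same values; the difference is whether the compiler is *guaranteed* the static binding, or merely might discover it.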

#5223272 How to protect the idea?

Posted by on 14 April 2015 - 06:08 PM

The harsh truth is that an idea on its own isn't worth anything. At best, its value is latent -- like a lump of coal that will turn to diamond if enough heat and pressure are applied, or the grain of sand in an oyster that will one day become a pearl. As a thought exercise, take any of the popular game franchises today -- Mario, Final Fantasy, Tetris, Halo -- and imagine what their germ of an idea might have been. Even working backwards from an existing success story, it's hard to argue that that germ of an idea had any value of its own, if we're being honest with ourselves. Those ideas didn't have any intrinsic value, and sadly neither do mine or yours.


An idea is just too simple to even communicate what latent value it might hold -- an idea of not so many words leaves lots of room for the party hearing it to make different interpretations and additions of their own. An idea at this level isn't even a singular thing; once you've shared it, it cannot be. You need to elevate a mere idea into a design before it's even capable of communicating itself effectively, if imperfectly -- but at least then two people can agree on what its value might be, which is key to establishing collaborators. Even then, we're talking about the intrinsic value of the thing on its own terms (that is, how it's experienced by whoever experiences it) -- not its monetary value. To raise your once-idea to that level, you now have to rally and transform your design into a tangible thing that can be experienced, and which will entice consumers to pay for the privilege. Ideas are important because they're the first step in this value chain, but, perhaps counter-intuitively, they are the one link that lacks any value of its own whatsoever.


I cannot think of even a single idea that's ever been sold. Designs, sometimes, but never an idea. Anyone telling you different is reciting fairy tales. So you've got an idea -- that's great -- now comes the hard part. Only you can decide whether that first link will become something valuable to the world, or become another single link piled deep in a Scrooge McDuckian vault of 'valuable' ideas.



TL;DR: Ideas are just the thing that gets you to the thing that's valuable.

#5222474 C++ cant find a match for 16 bit float and how to convert 32 bit float to 16...

Posted by on 10 April 2015 - 12:36 PM

C++ itself doesn't define a 16-bit float type as far as I know, though I can imagine there are platform-specific tool-chains that support one (lots of DSPs use 24-bit floats, and their compilers presumably support them).


You see 16-bit floats mostly in things like GLSL/HLSL shaders, or in an API like OpenGL ES, which might provide a library type and conversions.


You can also do the conversion yourself by banging bits, if your intent is simply to upload the values to a device that handles them natively. I don't think you can just truncate the mantissa/exponent, but the math is straightforward, if a bit exacting.
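A minimal sketch of that bit-banging, assuming IEEE 754 binary16 as the target, and deliberately ignoring NaN/Inf propagation, rounding, and subnormals for brevity:

```cpp
#include <cstdint>
#include <cstring>

// float32 -> IEEE 754 half (binary16): re-bias the exponent (127 -> 15) and
// keep the top 10 mantissa bits. Truncates rather than rounds; flushes
// too-small values to zero and clamps too-large values to infinity.
uint16_t float_to_half(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);          // type-pun safely
    uint16_t sign = (bits >> 16) & 0x8000;        // sign moves to bit 15
    int32_t  exp  = ((bits >> 23) & 0xFF) - 127 + 15; // re-bias exponent
    uint16_t mant = (bits >> 13) & 0x03FF;        // top 10 mantissa bits
    if (exp <= 0)  return sign;                   // underflow: signed zero
    if (exp >= 31) return sign | 0x7C00;          // overflow: signed infinity
    return sign | (uint16_t)(exp << 10) | mant;
}
```

For example, `float_to_half(1.0f)` yields `0x3C00` and `float_to_half(-2.0f)` yields `0xC000`; a production version would round-to-nearest-even and handle the special values properly.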

#5222120 CMake or Custom ?

Posted by on 08 April 2015 - 03:06 PM

My two cents: I honestly can't imagine writing a better bespoke build system than what already exists. In my experience, a build system is to be leveraged and extended, not replaced -- that is, when you feel limited by it, it's usually because you don't fully understand how to use it or extend it with custom steps.


But it's absolutely the norm that a complex project *does* extend its build system; it won't do every last thing you need out of the box. And these systems are built to be extended -- there are hooks all over to let you interject custom steps into the build process.



But fully custom builds, or even heavily modified (beyond easy recognition) or just plain complicated ones, are to be avoided. I had a gig once where my job was to perform static analysis of Xbox 360 titles and report back on dangerous practices or performance anti-patterns, so we'd get projects from various AAA studios. One of them had a very large and complicated build system. The normal process was that there'd be a Visual Studio project and it was just good to go, but this one with the complicated build system was crazy -- in the end, their build engineers created a Hyper-V image on a physical hard drive and literally shipped it to us overnight. Even then, we had one of their build engineers on call who would remote into the virtual machine to fix things when our tools changed how things worked (which was usually not an issue) -- they were also doing things that broke our tools, like issuing *single compiler commands* longer than 4096 characters (which exposed an incorrectly handled buffer copy in our tool, and took us a week to figure out).


Long story short, bespoke systems are usually brittle and often complicated, almost by definition. Existing build systems do a good job at the job they're meant to handle; otherwise they would not be so widespread. Attempting a bespoke system on your own means you have to do all that mundane stuff just as well as the others before you can even begin to gain ground with your unique vision -- unless it's a hard requirement, you're probably better off extending one of the existing systems with what you need.

#5221928 Using Vector Graphics for games

Posted by on 07 April 2015 - 02:28 PM

Does it make sense to build a game that has SVG images, allows the user to select a resolution, converts the images to the proper sizes for that resolution, and saves them as PNG files? I'm thinking this could save a lot of HD/memory space on the lower-end systems running my game.


Rather than doing this all client-side and at runtime, another approach would be to do it at install time -- possibly with each level of detail occupying a compressed archive on a server somewhere. You can have the installer check the resolution and only download the necessary assets. You'll want to allow the user to get another package later (say, if they upgrade their display), and you'll probably also want to allow them to download multiple LOD packages (a user might use low-res textures on his laptop on the go, but want the high-res textures when he's docked to a larger, high-res display at home.)


For physical distribution (e.g. on a DVD) you can put all the LODs on disk but only install the ones the user wants.


This whole system can be extended to support other variations, such as localization, where text glyphs might be translated within the textures, or assets might have different design cues based on different cultural norms.

#5221912 Lossless compression usage

Posted by on 07 April 2015 - 01:10 PM

The fact that LZW is used for that is interesting—thanks for that.  I've been generally uninterested in adding LZW support to Squash, I'll have to rethink that.  Any chance you could share the names of some of the platforms which can decode LZW like that?


Yes -- I had to check whether it's been revealed on a non-NDA basis first. It's the Xbox One. A public PowerPoint from Microsoft/AMD reveals that the asynchronous 'move engines' (a fancier DMA unit, basically) can both compress and decompress LZ data -- I'm not sure how inclusive of all the different LZ compression formats that is, though. They can also decompress JPEG, swizzle textures, and perform memset-type operations. I believe the PS4 does at least JPEG, and might do LZ as well, but I'm not certain.


I don't know if you want to ignore memory-mapped files -- they are platform-specific, but in practice the two platforms (Xbox One and PS4) are similar enough that there might be common ground. But I'm speculating; I don't know, and probably couldn't say if I did.

#5221720 Lossless compression usage

Posted by on 06 April 2015 - 05:04 PM

Just a quick note -- when you're talking about what kinds of files are used in games, you're really talking about two sets of files: the ones used during development to iterate on, and the ones delivered in the published product, which are derived from the development files. You wouldn't normally have cause to compress the development files, but they might be interesting nonetheless -- game assets are huge and getting huger, which ought to be clear from the fact that some large games weigh in at 70GB or more as they sit on store shelves. But even if development files aren't interesting, at the very least you'll want people to keep straight which type of files (development or production) they're speaking about in this thread.


You'll also find it common that many games rely heavily on bespoke file formats, which might even vary from platform to platform for a single game. Game developers like to be able to stream on-disk content straight into memory, so that the file contents don't need to be parsed. This means that platform-specific compatibility considerations such as padding and alignment (and other, performance-impacting considerations) get built into the on-disk formats for each given platform. Sometimes this 'raw' form of the platform-specific data is compressed -- though I think this will almost universally be LZW, because at least one of the current platforms has hardware that can essentially DMA a file into memory and do the decode in transit -- I believe both current consoles also do this for JPEG.
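To illustrate the "stream straight into memory, no parsing" idea, here's a hedged sketch (all names and field layouts are invented for illustration) of a fixed-layout header read directly from a file image:

```cpp
#include <cstdint>
#include <cstring>

// Illustrative fixed-layout header: every field has an explicit size and the
// struct is ordered to avoid padding, so the on-disk bytes ARE the struct.
struct AssetHeader {
    uint32_t magic;        // format identifier
    uint32_t version;
    uint32_t asset_count;
    uint32_t table_offset; // byte offset of the asset table within the file
};
static_assert(sizeof(AssetHeader) == 16, "no padding expected");

// "Parsing" is just a copy out of the loaded file image -- no tokenizing,
// no field-by-field decoding. (A real format would also pin down endianness.)
AssetHeader read_header(const unsigned char* file_bytes) {
    AssetHeader h;
    std::memcpy(&h, file_bytes, sizeof h);
    return h;
}
```

This is exactly why such formats end up platform-specific: the padding, alignment, and endianness baked into the bytes are those of the target machine.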

#5221488 Is it realistic to expect to make money in Unity Asset Store/UE4 Marketplace?

Posted by on 05 April 2015 - 11:11 AM

Okay, thank you again, frob. It seems that as a beginner, I have months, or even years of work ahead of me, before I even get the chance to earn some small amounts of money. Not very encouraging sad.png


If you can do something really well, right now, then there's nothing holding you back. Asset makers with a professional pedigree can make money because they deliver professional results -- not because they went to school or worked at a well-known studio. They probably did do those things, and those things probably play a big role in the level of results they produce, but they're not a requirement. No one's going to dig up your school transcripts and resume before buying assets from you -- if the assets are good, that is.


Keep in mind that we're talking about non-exclusive assets if we're talking about the Unity Asset Store -- which means the assets that make real money need to be universally appealing as well as high-quality. You could deliver a very well-executed but niche texture and see only a handful of lifetime sales, and it still wouldn't be worthwhile unless you were able to price it 10 or 100 times higher than a similar-quality texture with more universal appeal. That's another place where experience comes into play -- you need a sense of what will sell; you can't just throw things against the wall to see what sticks.


I would suggest, though, that most people do not start out delivering a level of quality that's fit for retail. Almost everyone started at the bottom, where they literally could not pay someone to take their work; then they got better, and still couldn't give their work away; then they got better still -- good enough to give it away; then they got much better and could sell their work at discount rates; and then they got better again -- more than they ever had before -- and finally could hope to make a living at it. Everyone starts at the bottom unless they're some kind of virtuoso.


It's not meant to be discouraging, but it's reality, and you'll need to pay your dues like everyone else did before you.

#5221199 University Degree - CS vs. CSGM

Posted by on 03 April 2015 - 03:22 PM

Switching to a standard CS degree and simple taking the games classes as technical electives may be something to look into. From what I know, it's not difficult to switch majors, especially if they're in the same department.


Am I missing something? You said the core CS curriculum is identical except for taking 10 free CS electives versus the 10 pre-chosen games electives -- ergo, if you enroll in one program but also take the 10 different courses from the other program, you will in effect have both degrees. I would presume the school will literally grant you both degrees if you complete it all satisfactorily. There is no 'switching' -- you just start with one and then do more.



Since most entry-level jobs are fairly siloed, could a dual degree balancing both sides potentially work against me? Or would some studios see it as a possible benefit in the future?


When it comes to skilled work, I've never really heard of more education being a problem. You might be perceived as 'overqualified', I suppose, but if that's so clear-cut then surely you'd be able to get a better position anyway. Now, if you go into an interview for a technical role, and when they ask why you took cinematography your face lights up and it's all you want to talk about, that could cost you the job -- they might become concerned you're not in it for the long haul and are really just waiting for a film job to come along. But there'd be no problem simply saying it's something you were always interested in and took mostly for your own enjoyment; usually that kind of initiative is seen as a positive.


I think in general most employers are looking either to fill an immediate need (they need brains in chairs to do a finite amount of defined work), in which case they only care whether you'll be able to jump in and be immediately productive, or to make a longer-term hire (someone with a well-rounded base who can be shaped into more specific roles), in which case they certainly do want you to be well-rounded. Basically, your less-related educational experience is not going to be harmful to you; it's generally a positive, if they care about it at all.