

Ravyne

Member Since 26 Feb 2007
Online Last Active Today, 01:40 AM

#5198406 Mobile Arcade

Posted by Ravyne on 15 December 2014 - 02:48 PM

Yes, your identification scheme won't work: IP addresses change and are easily spoofed, and the same goes for MAC addresses. What you're really talking about is identity and authentication -- that's what logins are for. I suggest you simply integrate with social networking sites to provide identity (lowering friction for those who use them) and run your own login service for those who don't. Alternatively, you could put some kind of user-specific certificate that expires monthly behind a paywall and use it as a token required to access your site's content or services -- though you'll still need a way for a user to recover it if it gets lost, and if you want to automate that process you'll need a login system anyhow.




#5197495 How2Ensure UDP-Packets reach target (high performance)?

Posted by Ravyne on 10 December 2014 - 08:37 PM

Also, your scheme fails to consider what happens when an ack packet is lost. Do you also keep sent ack packets in the AwaitingAck queue, themselves waiting for an ack-of-an-ack? And what then of those -- an ack-ack-ack scheme? Hopefully you can see where this is going: it's unreliable all the way down.

Instead, keep sending the message blindly until you receive an ack that's equal to or greater than the message's sequence number. I'm not aware of any way to make the packet-for-packet ping-pong of acks you describe reliable.


Also, some messages are latency-tolerant but can't really be acted upon until you have the complete stream in order -- and when you hear latency-tolerant and in-order, think TCP. User chat messages are a good example, and a candidate for sticking into a TCP stream, which frees up your UDP stream for unordered but latency-sensitive data.
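To make the resend-until-acked idea concrete, here's a minimal sketch (hypothetical type and function names; transmit() stands in for your actual UDP send). The unacked map plays the role of your AwaitingAck queue, but only for data packets, never for acks: if one ack is lost, any later (greater) cumulative ack carries the same information. Sequence wraparound is ignored for brevity.

#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

struct Message {
    uint32_t sequence;            // monotonically increasing per connection
    std::vector<uint8_t> payload;
};

struct ReliableSender {
    uint32_t nextSequence = 0;
    uint32_t highestAcked = 0;            // cumulative ack from the peer
    std::map<uint32_t, Message> unacked;  // messages awaiting acknowledgement

    void send(std::vector<uint8_t> payload) {
        Message m{ nextSequence++, std::move(payload) };
        unacked[m.sequence] = m;
        transmit(m);
    }

    // Called whenever any ack arrives; acks themselves are never acked.
    void onAck(uint32_t ack) {
        highestAcked = std::max(highestAcked, ack);
        unacked.erase(unacked.begin(), unacked.upper_bound(ack));
    }

    // Called on a timer: blindly retransmit everything not yet acked.
    void resendPending() {
        for (auto& [seq, m] : unacked)
            transmit(m);
    }

    void transmit(const Message&) { /* platform-specific UDP send */ }
};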


#5197086 Freemium and Whaling

Posted by Ravyne on 08 December 2014 - 09:36 PM

Most of the thinking today around freemium monetization is such that whales and "chum" (as I like to call freemium freeloaders who won't participate in the ecosystem) are a natural consequence of the way developers tune their games. In short, developers tend to take the simplistic view that their sole goal is to increase revenue, without considering what share of that revenue comes from how few people. By their models, more revenue means the average player is happier, even if all of the increase came from whales. It's not a very realistic picture of the health of the game, and I find it pretty cynical, but it's what they do. These games don't have a flatter revenue spread because no one is optimizing for one.

 

Given the glut of "free" entertainment available today, you will always have people who are unwilling to buy into the entertainment they consume -- they'll grind forever if it doesn't cost them anything, or they'll just move on to the next thing. You'll never generate any revenue from them except through advertising. However, because you need a large, immediate user base to penetrate the top-apps lists -- where you have to be to make real money -- they are necessary to the success of your app. Typical freemium titles use them to gain placement, which in turn draws whales to the game; that's why I call them "chum".

 

Likewise, you'll always have whales who will buy every transaction available, either because they really enjoy the game or because of their own compulsions. The trade-off of relying on whales is that most games court them by making high-end items exclusive, scarce, and/or powerful, so that a high premium can be charged to an exclusive clutch of players. This has obvious effects on game balance, and it also dictates whom the developer will prefer to keep happy as the game changes and evolves -- a move that would alienate whales will never be considered, even if it would make average players happier at a rate of 100 to 1.

 

I don't think eliminating the whales or the chum is the point, but I do think a healthy game would have a strong "middle class" of players who have each spent something, with a gentle slope from those who have spent the least up to those who have spent more without being whales. I also think you'd make the most revenue by establishing such a middle class and then focusing on raising the average revenue it generates; it's just that most games haven't figured that out. League of Legends approaches this, and so does TF2 -- both by deliberate design. In League of Legends you can't really buy your way to victory, because consumable buffs tend to affect the entire match rather than a specific player, which increases everyone's enjoyment and encourages all players to participate in that economy. TF2 achieves it by making its entire economy of in-game items tradable, and through its crate system, which randomizes loot so that it's not only the whales who have the best stuff. All of this runs counter to a whale-based economy, where exclusivity and scarcity drive revenues.




#5197084 Why is math transformation taxing to most CPUs?

Posted by Ravyne on 08 December 2014 - 08:55 PM

 

[...] and threads (2-8x as many operations per clock, if being unrealistically ideal) [...]

 

 

I nearly blew a fuse reading that. Please tell me that's a typo, and not how you think threading improves performance.

 

In general, no -- but if all we're talking about is vertex transforms, or another "embarrassingly parallel" problem, then yes. You could very well write a software T&L engine and simply replicate it across any and all cores not already consumed by other duties, achieving essentially linear speedup on vertex transformations, limited only by available memory bandwidth. The same properties that make this problem suitable for the massive parallelism of GPUs make it equally parallelizable on CPUs. This is more or less what GPUs do, except they're massively scaled up (and, of course, they have other optimizations appropriate for their problem domain).
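A minimal sketch of why this replicates so cleanly across cores (assuming simple Vec4/Mat4 types of my own invention): every vertex is independent, so the array can be split into chunks, one thread per core, with no shared mutable state and no locks.

#include <algorithm>
#include <thread>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

Vec4 transform(const Mat4& M, const Vec4& v) {
    return {
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w,
    };
}

// Split the vertex array into one contiguous chunk per hardware thread;
// each worker touches only its own chunk, so no synchronization is needed.
void transformAll(const Mat4& mvp, std::vector<Vec4>& verts) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (verts.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(verts.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                verts[i] = transform(mvp, verts[i]);
        });
    }
    for (auto& w : workers) w.join();
}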




#5196503 Optimizing 2D Tile Map

Posted by Ravyne on 05 December 2014 - 03:06 PM

Another thing that might be affecting you, if you're a Visual Studio user, is _ITERATOR_DEBUG_LEVEL. In release mode this defaults to level 0, which is no checks; in debug mode it defaults to level 2, which is full iterator debugging. Iterator performance is markedly different between these two settings -- I've seen differences of as much as an order of magnitude. You can also set it to level 1, which performs basic iterator checks that cost less than level 2. (Note that level 2 isn't available in release mode.)

 

To set it to another value, for each configuration of the solution:

  • In Property Pages / Configuration Properties / C/C++ / Preprocessor / Preprocessor Definitions, add "_ITERATOR_DEBUG_LEVEL=<LEVEL>", where <LEVEL> is the level of iterator debugging you want for that configuration.
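If you prefer to see the equivalent on the command line (a minimal example; the other switches are just for illustration), the same macro can be passed with cl's /D switch. The value must match across every translation unit and static library you link, or you'll get mismatch errors at link time:

cl /EHsc /O2 /D_ITERATOR_DEBUG_LEVEL=1 main.cpp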



#5196354 NULL vs nullptr

Posted by Ravyne on 04 December 2014 - 07:08 PM


Is it just me, or are ambiguous overloads like Write(int) and Write(int*) a giant code smell in the first place?

 

Arbitrarily encoding parameter information into the function name is a bad idea -- a holdover from languages and times before such overloading was possible, and one that has outlived its usefulness. For one, what do you do when a parameter's type changes? If you're being consistent, you then have to change the function's name and update it at every call site, even if the change was otherwise transparent to those call sites (e.g. whatever conversions were introduced by the change were correct and desirable). Plus, every distinct name is a new verb you have to remember in the language of your program's source code.

 

A function's name should say only what it does. The fact that "Write" is ambiguous is not the fault of its parameters -- you might want to Write an integer just as you might want to Write a float, or you might want to WriteWithTerminator an integer just as you might want to WriteWithTerminator a float. Conceptually, these functions perform the same action regardless of their parameters, even if the terminator for an integer is different from the terminator for a float. Indeed, this ability to change behavior based solely on the type of a parameter is very desirable -- it's an analog to polymorphism for individual functions, where the name of the function effectively acts as an abstract base and the various overloads act as specializations of it.
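To tie that back to the thread's topic, a minimal sketch: with overloads on int and int*, the literal 0 (and, depending on how the implementation defines it, NULL) either selects the integer overload or fails to compile as ambiguous, while nullptr always selects the pointer overload -- which is exactly why nullptr is preferred.

#include <cstddef>
#include <iostream>

void Write(int)  { std::cout << "Write(int)\n"; }
void Write(int*) { std::cout << "Write(int*)\n"; }

int main() {
    Write(0);        // Write(int): 0 is an int first and foremost
    // Write(NULL);  // Write(int) if NULL is 0; ambiguous if NULL is 0L
    Write(nullptr);  // Write(int*): nullptr converts only to pointer types
}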




#5196326 Blood in a top-down shooter

Posted by Ravyne on 04 December 2014 - 03:50 PM

At its most basic level, what you're talking about is a decal system, which it sounds like you've already figured out -- decals are used for all kinds of things, like graffiti and bullet holes. In many games these effects eventually fade, and they're typically limited to some maximum count for the same reason you're encountering: eventually, you just can't keep everything around if you render it every frame. Or rather, you can, provided you've budgeted sufficiently for it, but that might not be practical to maintain at your desired framerate, art quality, and object count.
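A minimal sketch of the usual budgeted approach (hypothetical types and numbers): a fixed-size ring buffer in which the oldest decal is recycled once the budget is exhausted, with each decal fading out over its lifetime.

#include <array>
#include <cstddef>

struct Decal {
    float x = 0, y = 0;       // world position
    float age = 0;            // seconds since spawned
    float lifetime = 10.0f;   // fully faded at this age
    bool  active = false;
};

class DecalSystem {
    static constexpr std::size_t kMaxDecals = 1024;  // the render budget
    std::array<Decal, kMaxDecals> decals_{};
    std::size_t next_ = 0;    // ring cursor: the oldest decal gets recycled

public:
    void spawn(float x, float y) {
        decals_[next_] = Decal{ x, y, 0.0f, 10.0f, true };
        next_ = (next_ + 1) % kMaxDecals;
    }

    void update(float dt) {
        for (auto& d : decals_) {
            d.age += dt;
            if (d.age >= d.lifetime) d.active = false;  // expired: slot is free
        }
    }

    // Fade linearly with age; inactive decals simply aren't drawn.
    static float alpha(const Decal& d) {
        return d.active ? 1.0f - d.age / d.lifetime : 0.0f;
    }
};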

 

If you really want to maintain it on a massive scale, what you probably want is something closer to a very large virtual texture system, as was used in Rage/idTech5. Basically, you have a huge logical texture made up of many smaller physical textures (as in, actual textures in GPU memory) that together form a very large unwrapping of the level geometry, and you splatter into that. The physical textures are used to overlay the splattering when (a) the area corresponding to a physical texture is visible, and (b) there is splatter or other texels drawn into it.
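A minimal sketch of the lookup at the heart of such a scheme (hypothetical layout and names): the huge logical texture is divided into fixed-size pages, and a page table maps each logical page to a slot in the physical atlas. Pages nobody has splattered into simply aren't in the table, so they cost no memory and no draw.

#include <cstddef>
#include <cstdint>
#include <unordered_map>

constexpr int kPageSize = 128;   // texels per page side

struct PageCoord {
    int x, y;
    bool operator==(const PageCoord& o) const { return x == o.x && y == o.y; }
};

struct PageHash {
    std::size_t operator()(const PageCoord& p) const {
        std::uint64_t key = (std::uint64_t(std::uint32_t(p.x)) << 32)
                          | std::uint32_t(p.y);
        return std::hash<std::uint64_t>()(key);
    }
};

// Logical page -> slot in the physical texture atlas.
std::unordered_map<PageCoord, int, PageHash> pageTable;

// Find (or allocate) the physical page backing a given logical texel.
// A real system would also evict cold pages; this sketch only allocates.
int physicalSlotFor(int texelX, int texelY) {
    PageCoord page{ texelX / kPageSize, texelY / kPageSize };
    auto it = pageTable.find(page);
    if (it != pageTable.end()) return it->second;
    int slot = static_cast<int>(pageTable.size());
    pageTable.emplace(page, slot);
    return slot;
}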

 

Of course, this consumes memory and requires some work to perform the splatter (mapping back to the unwrapped faces), but after that it's largely fire-and-forget, and your budgetary unit is now a whole section of geometry and all its splatters, instead of each individual splatter.

 

I don't know whether that's the most efficient way to do things, but I think it would be a reasonable solution. New GPUs and 3D APIs even have direct support for these kinds of virtual texturing schemes. You could, of course, adopt virtual texturing whole hog, using it for all your texture data and drawing your splatters directly into it too, which would save you the overhead of the additional textures.




#5195935 Browser game development: Flash or HTML5?

Posted by Ravyne on 02 December 2014 - 02:08 PM

The future of the web is increasingly plugin-free, with HTML5 subsuming all the multimedia capabilities that Flash once provided. Flash lives on as a standalone player and as a plugin in those places where it's still supported, and its tooling lives on as well, able to publish to Flash and, I think, to HTML5 in a limited fashion -- over time, the HTML5 publishing will become more feature-complete. That's important because, while Flash first made certain client-side multimedia practical, the real value for developers was always in the tooling.

 

That said, HTML5 still faces some hurdles at the moment. Audio -- not just simple latency-tolerant sounds and music, but actual low-latency, mixed audio -- remains one hole that needs to be filled across browsers. In fact, some HTML5 web games use a Flash shim just for audio, and/or fall back to a lower-quality audio experience when a better option isn't available. However, the Web Audio API is hopefully on its way to solving this problem: it recently gained broad support in all major modern browsers except IE, and Microsoft has announced plans to support it in a future version of IE, likely IE12. I think that will mostly have caught HTML5 up to Flash, and from there things can only get better.




#5195927 Unity or C++?

Posted by Ravyne on 02 December 2014 - 01:32 PM

If you're more interested in completing games than in designing engines, just stick with Unity for now. Maybe your priorities will change, but writing an engine to make the same game you could've made in Unity, just so no one can say you're "not a real developer", is a poor use of your time -- real developers ship and support products.

 

That said, if your needs for an engine are simple to start with and you're interested in getting your hands dirty -- or if you simply have unique needs that would be difficult to support in a third-party engine -- then rolling your own can be a valid and rewarding exercise. Just don't fall into the trap of writing an uber-engine because you feel it won't be complete until its features rival Unity and friends. Write only the engine you need right now to complete the project you're working on (being forward-looking is fine; just don't try to guess at what's over the horizon), and over the course of two or three projects you'll find that you've probably built a fairly robust engine.




#5195822 A Good Story Idea for an RPG?

Posted by Ravyne on 01 December 2014 - 09:23 PM

Old tropes, to be sure, but there's nothing wrong with that, really. Sometimes the classic tropes are just comfortable, though they may have worn too thin for some.

 

My constructive criticism would be that you have a premise and the beginnings of a top-level, linear story arc, but you do not yet have a story. In my opinion -- and I don't think I'm alone in it -- the storyline in an RPG is almost secondary to the characters: the story serves the characters rather than the other way around, because the story should unfold as if its events were driven by the characters' decisions, good, bad, or neither. If you start with a story and then try to slot characters into it to suit a predestined outcome, it's very hard to make the characters or their place in the story feel natural; they'll tend to fall into a kind of literary uncanny valley. I mention this because the characters' personalities and motivations are absent from your summary, though perhaps that's only because you're being succinct. I would just encourage you to really think them through, and to give them personalities and motivations that aren't one-dimensional. At any major event or fork in the story, you should be able to imagine what the characters think and feel about it. Maybe they have different opinions or priorities along the way; maybe they have other considerations or loyalties outside the party that influence them; maybe long-forgotten relationships are dragged up as events unfold.

 

Another criticism is that the witch, as described, seems like a weak antagonist. Conflict stories really need a strong, multifaceted antagonist to thrive. My favorite villains are always the ones I want to see completely destroyed and demoralized while never quite shaking the feeling that some past trauma set them on the path to villainy -- or, sometimes better yet, whose motivations started pure but became twisted. Said another way, villains who are evil for the sake of being evil, or whose only goal is seeking power so they can perpetrate more and greater evil, are pretty bland. A good villain has history, personality, and maybe even a conflict or two -- but most importantly, a vision; there's something more to their madness than madness itself.




#5194899 Needing Help For Coding a Sega Genesis Homebrew Game

Posted by Ravyne on 26 November 2014 - 09:50 PM

No such thing exists for the Genesis. The only small break the Genesis offers is that its CPU is at least capable of running compiled C relatively well (the 68000 has enough registers to support decent code generation, unlike the 65C816 in the SNES) -- still, the slow speed of the CPU will push you into 68000 assembly for any performance-critical loops. You also get a flat memory space for cart data (no paging), unlike the 8-bit systems and their gang of mapper chips.

 

Check out GCC's m68k target -- GCC still supports 68000 code generation, and if it were ever dropped from mainline, someone would likely still maintain it in a branch somewhere. (SDCC, the Small Device C Compiler, targets small 8-bit CPUs rather than the 68000, so it won't help here.) You'll need a 68000 assembler too, and tools to generate graphics and sounds in formats the Genesis can deal with. I'm sure there are Genesis homebrew sites that can help you get set up.

 

You can compile your game and then run/debug it on your PC using a Genesis emulator, rather than trying to run on the real deal using a flash cart. You'll still want to test on real hardware occasionally, and on a variety of emulators, but having one primary emulator with good debugging on your PC is far easier than debugging on the real thing, or debugging with more primitive methods like printed error messages.

 

If you're not especially attached to the Genesis for any particular reason, the GBA was far more thoroughly explored by the homebrew community, so there are more docs and tools available (still no game-maker, though), and it offers a bit more CPU power, so you can most likely avoid writing assembly at all.




#5194892 Quick C++ question on pointers

Posted by Ravyne on 26 November 2014 - 09:16 PM


In the class ChangeChar, the following needed to be a reference to the class rather than a pointer to a pointer.
 
// Pass by reference so the functions modify the caller's object directly.
void fChangeHealth(MyClass &myChar, int h)         { myChar.fAlterHealth(h); }
void fChangeName(MyClass &myChar, const string &n) { myChar.fAlterName(n); }

 

You can use either a pointer (there's no need for a pointer-to-pointer here; you've likely mixed up a level of dereferencing somewhere else, which makes you think you need one) or a reference.

 

 

 

This is obviously not great, as working with pointers is a better way round, but for anyone searching I thought it best to illustrate that both pointers and references are possible solutions.

 

Pointers are not "better" than references, nor are references "better" than pointers -- they do different things, with some overlap.

 

A pointer is simply an address in memory -- it tells you the house number and street of your data, just like an address in the real world. With a pointer you can add to or subtract from the address to reach the neighbors' houses, and you can keep doing so to walk down the street; or you can write down an entirely new address and end up in a completely different neighborhood. However, if you wrote down a random address and handed it to a friend, they might end up someplace dangerous, somewhere they're not allowed, or searching for a place that doesn't exist -- so you have to be sure of what you write down. A pointer can also be 'null', which is a way of telling everyone who sees the address that it's not a real address -- as if you wrote down the real-life address '123 Don't Go Here St.'

 

A reference is also an address in memory, exactly like a pointer except in two critical ways. First, you cannot reassign, add to, subtract from, or otherwise modify a reference -- in the street-address analogy, you can go to exactly one house; you can't then visit the neighbors or walk down the block, and you can't write down a new address either. Once an address is written into a reference, it is fixed and cannot be changed. Second, a reference must always hold a valid address and therefore can never be null -- if you have a reference to something, you're guaranteed to find what you expect; you won't be surprised by an empty lot or find yourself someplace you're not allowed. These two differences make references less flexible than pointers, but they also make them safer.

 

Therefore, it's actually correct to prefer references to pointers: whenever a job *can* be done with a reference, it almost certainly *should* be. The only exception to this guidance is that third-party code you're using might be written in a way that makes a pointer more convenient (or necessary); then it's sometimes okay to use a pointer even when a reference would do just as well. Otherwise, you should only use pointers when you might need to signal that the pointee is invalid (null), visit the neighbors, or move all around the city.
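A minimal sketch of those differences in code:

#include <iostream>

int main() {
    int a = 1, b = 2;

    int* p = &a;    // pointer: holds a's address
    p = &b;         // can be re-seated to a different object
    ++p;            // can be moved with arithmetic (only safe if you
                    // know you're still inside an array!)
    p = nullptr;    // can signal "no object at all"

    int& r = a;     // reference: bound to a once, forever
    r = b;          // does NOT re-seat r -- it assigns b's value to a
    std::cout << a << '\n';   // prints 2
}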

 

 

Finally, returning to the original question of when you might get a dangling pointer and/or memory leak and how to avoid them: you have the potential for one whenever you use new -- all it takes is failing to call delete. And there are lots of ways to fail at calling delete: you might simply forget to type 'delete thing;'; you might, elsewhere in the code, lose track of what it was you needed to delete; you might never reach the code that calls delete because of a bug or conditions you didn't anticipate; or, trickiest of all, you might have done all of that right and an exception causes your program to skip the line that deletes the thing. Deleting everything you allocate, every time, 100% of the time, no matter what goes wrong, is hard. People who have programmed most of their lives still make these kinds of mistakes.

 

But good news: there are tools to help you built right into C++! They're called unique_ptr and shared_ptr (and shared_ptr's friend weak_ptr, but that's a story for another time). When you allocate something and store the pointer in a unique_ptr, it always gets deleted the moment you can no longer use that address, no matter what -- even if an exception is thrown. This is super, super useful. But to do its work, a unique_ptr has to be the sole owner of the allocation: others can look at the thing, but everyone must agree that only the unique_ptr owns it and will take care of deallocating it. As a consequence, the unique_ptr must stick around longer than anyone else who might want to look at the thing, or the thing won't be there when they come looking (as if your friend arrived at the address to find an uprooted house, plumbing still spurting water into the air).

 

That's where shared_ptr comes in: shared_ptr is a way of saying that several pointers share the responsibility of deleting the thing. It works like this: every shared_ptr makes a mark on a chalkboard saying it's using the thing, and erases its mark when it's done. Eventually there's only one shared_ptr left, and when it's done there are no other marks on the chalkboard -- meaning it's the one that has to delete the thing, and that it's safe to do so because everyone else is finished with it. It's like a basketball team at practice: the last player off the court has to put the ball away.
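A minimal sketch of both in action (Thing is a made-up type to show when the deletion happens):

#include <iostream>
#include <memory>

struct Thing {
    ~Thing() { std::cout << "Thing deleted\n"; }
};

int main() {
    {
        auto owner = std::make_unique<Thing>();  // sole owner
        Thing* observer = owner.get();           // may look, doesn't own
        (void)observer;
    }   // owner goes out of scope here: Thing deleted, even on exceptions

    auto first  = std::make_shared<Thing>();  // chalkboard marks: 1
    auto second = first;                      // chalkboard marks: 2
    first.reset();                            // erase a mark: 1 remains
    std::cout << "still alive\n";             // second still owns the Thing
}   // second erases the last mark and deletes the Thing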




#5194838 C#: XNA and Xamarin

Posted by Ravyne on 26 November 2014 - 03:19 PM


So is .Net like a language translator that takes a program in one language and translates the entire program into a new language, making the program ready to be published instantly?

 

It's a translator of sorts, yes -- but don't think of it as "I speak C#, and .Net translates it to some other programming language". At a very detailed level that's true, but it's more helpful to think of it as "I speak .Net -> Mono understands .Net -> Mono speaks Windows, Mac, Linux, etc." Likewise, I speak MonoGame -> MonoGame exists on Windows, where it speaks DirectX; MonoGame exists on Linux, where it speaks OpenGL; MonoGame exists on Mac, where it speaks OpenGL. The libraries have to exist on each platform, because they have to tie into the platform itself to do the right thing, but as long as you speak to the MonoGame library through .Net/Mono, it'll all work out on each of those platforms.

 

Anyway, at your level you don't need to worry too much about how it works -- you just need to know that it does work, so long as you choose libraries that are available on all the platforms you want to support (like MonoGame, or any of the core framework libraries).




#5194835 Optimizing 2D Tile Map

Posted by Ravyne on 26 November 2014 - 03:03 PM


Ravyne's advice is sound, but it's an awful lot of work that might be unnecessary to get your game to an acceptable performance envelope. Figure out your actual problem then just fix that until you have the time to fully revisit your asset conditioning and content loading pipelines.

 

In this case, it's sounding more and more like compression would act as a sort of magic bullet. If the OP is okay taking a compression library as a dependency, it might really be the path of least resistance while also giving the biggest gains -- and as you said, if the data is semi-regular in either form, compressed ASCII ought to be only a bit larger than compressed binary (the number of dictionary entries should be about the same, but the entries themselves will likely be a bit larger). I generally assume people want to put off dependencies if they can, which is where the other advice comes in.

 

The thing I would do in either case is evaluate whether the OP's current format is wasteful -- are you writing 32-bit values to disk when the data would fit in 16 bits? Are you padding ASCII strings with extra "empty" characters to make parsing easier? Stop doing those kinds of things first. Also pre-allocate large data structures and profile for unexpected bottlenecks (like in your physics-system insert example). Then, if performance still leaves something to be desired, compression is the next step.
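As a minimal sketch of trimming a wasteful format (hypothetical layout: a flat array of tile indices that all fit in 16 bits), writing 16-bit values instead of 32-bit ones halves the file before any compression is even applied, and pre-allocating on load avoids reallocation churn:

#include <cstdint>
#include <fstream>
#include <vector>

void saveTiles(const std::vector<std::uint16_t>& tiles, const char* path) {
    std::ofstream out(path, std::ios::binary);
    const std::uint32_t count = static_cast<std::uint32_t>(tiles.size());
    out.write(reinterpret_cast<const char*>(&count), sizeof(count));
    out.write(reinterpret_cast<const char*>(tiles.data()),
              count * sizeof(std::uint16_t));
}

std::vector<std::uint16_t> loadTiles(const char* path) {
    std::ifstream in(path, std::ios::binary);
    std::uint32_t count = 0;
    in.read(reinterpret_cast<char*>(&count), sizeof(count));
    std::vector<std::uint16_t> tiles(count);   // pre-allocated in one shot
    in.read(reinterpret_cast<char*>(tiles.data()),
            count * sizeof(std::uint16_t));
    return tiles;
}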




#5194708 C#: XNA and Xamarin

Posted by Ravyne on 25 November 2014 - 07:30 PM

.Net is a runtime environment: it defines a sort of "virtual" computer platform that runs on top of a host operating system. The .Net Framework refers to this runtime plus the standard .Net core libraries. This is why the same .Net application can run on Windows, Linux, and Mac -- as long as the developer has chosen libraries that are available on all of those platforms. In the Java world, the Java Virtual Machine is the equivalent of the .Net runtime, and there's a standard set of Java libraries as well. This description simplifies things a bit, but it's sufficient for this discussion.

 

Mono is an open-source implementation of the .Net runtime and framework that runs on Windows, Linux, Mac, etc. MonoGame is an open-source implementation of the XNA framework that also runs on Windows, Linux, and Mac. Thus, if you program against Mono and MonoGame, your apps will run on Windows, Linux, Mac, etc., from the same code -- in some cases you might have to (or want to) write a little platform-specific code to work around issues or take advantage of platform-specific features, but otherwise your code should just work on any platform that Mono and MonoGame support.

 

Mono differs slightly from the .Net runtime/framework in that it lags a bit behind Microsoft's official releases; however, Microsoft has just open-sourced a great deal of .Net, including the runtime. I expect that within the next year, Mono will either switch over to that core as-is or start pulling in the newest parts it's still missing (C# 6.0 just hit preview, along with a version bump of the runtime). A year from now, Mono and Microsoft's runtime should have feature parity.

 

Now, MonoGame itself might target an older version of the framework (as XNA would have), but I don't use it, so I'm not sure. If so, you might not see any new-fangled goodness, but today's support should have already caught up to where XNA was. In theory, you should be able to copy a MonoGame project over into XNA Game Studio (which I think Microsoft still offers -- and you can still publish to Xbox 360?) and publish it with few code changes (maybe none). That's the power of the .Net runtime and framework in a nutshell: you can take an application (as a binary in some cases, source code in others) and run it on a completely different hardware architecture, a completely different OS, and a completely different runtime/framework implementation, and it should "just work" for the most part.





