

Member Since 26 Feb 2007

#5127794 AMD's Mantle API

Posted by Ravyne on 31 January 2014 - 12:49 PM

Really waiting to give it a shot -- I bought a 290x with the Battlefield deal before prices on Radeon cards went sky-high. It mostly sings when you're CPU-bound, but there are smaller improvements when GPU-bound. Really, though, I think the bigger boon to the graphics side is that it makes more console-like effects possible: because you're not bound by draw calls or other OS-related overhead -- all the bindless state, memory management, and thread-management stuff -- devs can make a better-looking scene even if the pixel throughput is only marginally improved.

#5127786 Why it is so hard?

Posted by Ravyne on 31 January 2014 - 12:33 PM

I don't think that profit-share is a non-starter out-of-hand, but you'll need to manage your expectations. Imagine that roles are reversed -- you work as a professional programmer in your day job, and an artist approaches you to work on a project for profit-share. Their idea sounds somewhat interesting, but it isn't quite your cup of tea, and you've got a hundred ideas of your own that would probably be more fulfilling. What do you do?


This is why people already making money on their talents are hard--though not impossible--to secure on a profit-share basis. And you may not be competing only with their day jobs, but also with other freelance or moonlighting work that might either pay up front or simply hold more appeal for them. A talented artist can pick nearly any project they like to become involved in.


Less experienced, qualified, or skilled artists are more available, but may not deliver the level of work you would prefer. If you are willing to settle, a less-skilled artist might work for less, and perhaps their skill will grow, or they will at least carry you to a point where you can recruit a more skilled artist to help polish things up. If you take this route, be clear and honest up front about what you're offering and what you expect, and stick to it -- everyone needs to be on the same page, or that artist is going to feel used when someone better comes along. Even if their art never makes it into the final product, they should still be compensated in a measure equal to the time they put in and their skill level, and they should get a game credit and whatever other fringe benefits they have coming. (The credit and fringe benefits are often what lesser-skilled artists are after, because they're trying to build up their portfolio and experience.)


Money does two things for a part-time developer trying to secure an artist -- firstly, it guarantees the artist some compensation for their work, and secondly, it says that you are serious about completing the project. A profit share of an unreleased project is exactly $0, regardless of how many hours of work everyone has put in. You need to trust that the other person is just as committed as you are to delivering a product that's complete and suitable for sale. It's hard to trust a stranger, or even someone known to you, when you're talking business, but being compensated or holding some kind of collateral can make that less of an issue.


Keep in mind, you don't have to do pure up-front payment or pure profit-share. You still need to offer enough up front to be taken seriously, but there are many more people willing to work for 25-50% up front than there are for 0%. If you don't know what kind of offer to make, you can propose something similar to what's done in many creative industries where a producer or publisher is involved: you define the pay rate as a royalty or profit share, but pay some figure $X up front to secure the work. When the project is complete and earning money, you begin to tally their share, but you don't pay out the first $X because it's already been paid up front. They only get the profit share they've earned above and beyond $X, and if their share of profit never grows beyond that, then $X is all they get.
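Since that advance-against-royalties arrangement trips people up, here's the arithmetic in a quick sketch (the function name and the dollar figures are my own illustration, not from any real contract):

```python
def artist_payout(advance, share_rate, cumulative_profit):
    """Artist's total take: the advance up front, plus any earned
    profit share above and beyond the advance (the advance is recouped first)."""
    earned_share = share_rate * cumulative_profit
    return advance + max(0.0, earned_share - advance)

# Hypothetical deal: $500 up front against a 20% profit share.
assert artist_payout(500, 0.20, 0) == 500       # never ships: the advance is all they get
assert artist_payout(500, 0.20, 2000) == 500    # 20% of $2000 is $400, still under the advance
assert artist_payout(500, 0.20, 10000) == 2000  # advance recouped; they earn their full share
```

Either way the artist never gets less than $X, which is exactly why the up-front figure is what makes the offer credible.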

#5127525 How is CSG done?

Posted by Ravyne on 30 January 2014 - 01:02 PM

The short version is that there's no magic here -- you remember all those algebra questions from high school where the instructor asked you to find the point where two lines intersect? It's basically just like that, except you have more complex, 3-dimensional shapes involved. If you can describe a shape mathematically, and you have another mathematically-defined shape in the same vector space, then in principle you can plug two inputs (say, X and Y) into both equations and see if the third (Z) comes out the same; if so, that point is an intersection of the two shapes.


Of course, simply sampling points like this is inefficient, so real implementations do things more cleverly, but the optimized forms are all based on this basic fact. For example, let's say you wanted to calculate the intersection of a cube and a sphere -- a sphere is about as simple as it gets, having just a center and a radius, and a cube is just 6 planes that are constrained. There are well-known algorithms for finding out if and how a sphere intersects a plane: take any plane in X-Y, and if the sphere intersects it, the center of the resulting circle is (Xs, Ys, Zp) (where sub-p means 'plane' and sub-s means 'sphere'). Now you can use the Pythagorean theorem to find the radius of the circle that the sphere strikes on the plane -- the hypotenuse of the right triangle is the radius of the sphere, and one of the sides is the distance between the center of the sphere and the center of the circle we just calculated. (By the way, if the distance between the centers is longer than the radius of the sphere, you know the sphere doesn't intersect that plane at all.) Now you have a description of exactly how the sphere interacts with that plane, and you can apply the cube's remaining constraints to clip the circle to the cube's face (a similar process, except now you're operating inside a plane, and the former 'plane' is now a line). If you still have some circle left, you have a description of how the sphere interacts with that side of the cube. Repeat for all 6 sides and you're done -- you know a lot about how the sphere and the cube interact, and you can use that information to modify the shapes involved.
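To make that sphere-vs-plane step concrete, here's a tiny sketch for an axis-aligned plane z = Zp (the function and variable names are my own, for illustration):

```python
import math

def sphere_plane_circle(cx, cy, cz, r, zp):
    """Intersect a sphere (center (cx, cy, cz), radius r) with the plane z = zp.
    Returns (circle_center_x, circle_center_y, circle_radius), or None on a miss."""
    d = abs(cz - zp)              # distance from the sphere's center to the plane
    if d > r:
        return None               # the plane is farther away than the radius: no hit
    # Pythagoras: r^2 = d^2 + circle_radius^2
    return (cx, cy, math.sqrt(r * r - d * d))

# Unit sphere at the origin, cut by the plane z = 0: a unit circle at (0, 0).
assert sphere_plane_circle(0, 0, 0, 1.0, 0.0) == (0, 0, 1.0)
# The plane z = 2 misses the unit sphere entirely.
assert sphere_plane_circle(0, 0, 0, 1.0, 2.0) is None
```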

#5127124 99 Bottles Of Beer Challenge With Least Amount Of Characters ?

Posted by Ravyne on 28 January 2014 - 10:07 PM

Codepad seems to be messing up your output. At the 1 bottle of beer line, it prints bottle? instead of bottle, although my console seems to get it right.


It must not support the escape sequence for backspace, then. I have to feed one of my macros an argument, and I do so with a \?\b (question mark, backspace) sequence to keep it happy. I suppose I could have just used an \a (alert/bell) sequence and saved 4 characters, but I felt like that was cheating.


Cleaning up, hoisting " on the wall", and using \a, now I've got:

#include <stdio.h>

int main()
{
#define P(M)printf(M,i,i,i) //27:27
#define W " on the wall" //24:51
#define B(A,S)#A" bottle"#S" of beer" //37:88
#define V(S)B(%d,S)W", "B(%d,S)".\nTake one down and pass it around, " //70:158

	int i = 99; //11:169
	P(V(s)); //8:177

#define R(S)B(%d,S)W".\n\n"V(S) //31:208

	while (--i>1) //13:221
		P(R(s)); //8:229

	P(R(\a)B(no more, s)W".\n\n"B(No more, s)W", "B(no more, s)".\nGo to the store and buy some more, "B(99, s)W"."); //109:338
} //+18 (#include) +12 (main) = 370 characters

Which is 338 characters for the body, and 370 in total (#include, main, body), including significant whitespace. And it's actually fairly reasonable to follow.

#5127089 99 Bottles Of Beer Challenge With Least Amount Of Characters ?

Posted by Ravyne on 28 January 2014 - 06:10 PM

#include <stdio.h>

int main (void)
{
#define P(A)printf(A,i,i,i)
#define B(A,P)#A" bottle"#P" of beer on the wall"
#define V(P)B(%d,P)", %d bottle"#P" of beer.\nTake one down and pass it around, "

    int i=99;
    P(V(s));

#define R B(%d,s)".\n\n"V(s)

    while (--i>1)
        P(R);

#undef R
#define R B(1,\?\b)".\n\n"V(\?\b)

    P(R B(no more,s)".\n\n"B(No more,s)", no more bottles of beer.\nGo to the store and buy some more, "B(99,s)".\n");
}

378 characters including significant whitespace, 396 with the #include. Output is perfect, including caps, punctuation, newlines, and the final verses. I could probably hoist out a few #defines, or #define the #defines, and save another 20-30 chars, but I'm weary of this exercise for the moment. :) Gotta love code that only reads top to bottom!

#5127031 Is HTML5 a legit language for developing game?

Posted by Ravyne on 28 January 2014 - 02:36 PM

On the topic of cross-browser code: it's a great ideal, and I don't think it's beyond workable, even if it's painful -- but, on the other hand, you can side-step most of it if you can simply accept not running on absolutely every browser. In that case, you are simply saying "Browser X is effectively my game client application." I think that's mostly fine: millions of people will already have your game client, it's probably already white-listed in most school and work environments (whether people should be playing in those places is for them to decide), and for someone who really wants to play your game, what difference does it really make whether they download a proprietary client or a specific browser? When you start looking at issues of security, trust, and potential liability, a browser is actually better in many respects for both the developer and the end user.


You still might have to deal with performance regressions or broken features between browser versions -- new versions aren't guaranteed to contain changes that affect the way your game works, but it's common enough that you'll probably bump against it from time to time. If you picked Chrome as your target browser, Google tends to be on or near the bleeding edge, and Chrome alone accounts for almost 60% of web traffic, so you can conservatively say that 50% of web users already have your client installed, which is huge. If you were able to add Mozilla to your list of supported browsers, that's another ~25%. IE, another 10%. All three are working towards merging their desktop and mobile versions over the long term: Google is phasing out its older mobile browser in favor of mobile Chrome, and IE, although behind the others on features, is already the same codebase across desktop, Windows RT, and Windows Phone (they're not at feature parity yet, though).


So, yes, it's a lot of work to maximize the promised potential of HTML5 today, but even just 50-60 percent of web users as a potential audience is not to be understated -- that's bigger than all of the big app stores and Steam combined. Chrome has over 750 million active users; if you could make a dollar from even 1 percent of those users, you'd be a rich man. Easier said than done, of course, but the potential is there. For transparency's sake, Adobe reports that Flash can reach 2 billion users, but that number rolls up essentially the entire PC and mobile install-base, because those platforms can be targeted by Adobe AIR (which compiles Flash to native apps) -- it's not 2 billion people with Flash Player itself installed.

#5126845 Compute Shader Slower

Posted by Ravyne on 27 January 2014 - 07:03 PM

The way to really understand when one is better than the other is to understand the kinds of problems that each was developed to solve.


CPUs have been developed to provide a single answer at a time with as low a latency as possible -- that is, do one single thing at a time with one set of input as quickly as possible, per available thread of execution.


GPUs have been developed to provide multiple answers at a time, giving away low-latency operation in favor of higher aggregate throughput -- that is, do one single thing to many sets of input at lower frequency, but with performance multiplied by having a high number of threads (one per set of input) running in parallel.


You also need to understand that the threads of execution on a GPU have been simplified in various ways so that they occupy less silicon real-estate compared to threads of execution on a CPU.

A thread on a GPU doesn't really have its own program counter, registers, or cache; it shares them, in lock-step, with the rest of its sibling threads. For example, if you run code on a GPU that has a single "if" statement, and even one thread chooses a different branch than all the rest of its siblings, then the code inside both branches has to be executed by the whole batch of siblings (then the right results are masked off and recombined, and execution continues). If, again, there's another "if" inside each branch and one of the siblings goes its own way, both new branches have to be executed and recombined again -- nested "if" statements on a GPU can cause execution time to grow exponentially for as long as the threads "diverge" (take different paths). Sometimes you can come up with a clever solution to make sure all the threads go the same way, and then a GPU will do great, but it's not always possible or predictable to do so.
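You can mimic that lock-step masking in a few lines of plain Python (a toy model of divergence, not any real GPU API): every "lane" in the batch executes both sides of the branch, and a mask picks each lane's result afterwards.

```python
# Toy SIMT model: one batch of lanes running  if (x % 2 == 0) y = x*2  else  y = x+1
lanes = [1, 2, 3, 4, 5, 6, 7, 8]

cond      = [x % 2 == 0 for x in lanes]  # per-lane branch condition
then_side = [x * 2 for x in lanes]       # ALL lanes execute the "then" body...
else_side = [x + 1 for x in lanes]       # ...and ALL lanes execute the "else" body
# Mask off the wrong results and recombine:
result = [t if c else e for c, t, e in zip(cond, then_side, else_side)]

assert result == [2, 4, 4, 8, 6, 12, 8, 16]
# Both branch bodies ran for every lane: the divergent cost is the sum of both paths.
```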

A thread on a CPU has its own program counter, registers, and caches -- hyper-threading competes for execution resources on a CPU, but generally only hops in when the other thread is stalled waiting for memory, so it's a win -- and it doesn't really share with anyone, so it can go wherever it wants, whenever it wants, for the most part. When a CPU takes a branch, it completely ignores the untaken path and wastes no time executing it. Because of this, there's no exponential growth in execution time for "branchy" code on a CPU the way there is on a GPU.


If I can indulge in a car analogy -- a 4-core CPU is like a pack of motorcycles, everyone goes exactly to their destination as quickly as possible. A GPU execution core (which groups 16-32 "passengers" inside each "vehicle") is like a schoolbus, everyone goes to the same destination, and more slowly. When you have a few people going to different places, the motorcycles will get everyone there more quickly, but when lots of people are going to the same place, a bus delivers more people sooner (remember, your CPU only has 4-8 threads, so you can only have 4-8 motorcycles on your roadway at once -- but a modern, high-end GPU is like a fleet of 32-44 buses, each carrying 16 people).


You also have vector instruction sets on the CPU cores, which are kind of a middle ground between CPUs and GPUs -- like CPUs they operate at higher speeds, but like GPUs their lanes share the program counter, registers, and caches with their siblings. However, because they combine fewer elements at once (4 for SSE, AltiVec, or NEON, and 8 for AVX), it's a bit more manageable to load up the vehicle so that everyone is going to the same place. Vector instructions are like SUVs in my car analogy.



The other factor that can make GPUs slower is that to get data onto a discrete GPU, you have to copy it across the relatively slow PCIe bus, perform the calculation, and then copy the result back again. The bus bandwidth is about 1/4 that of the bandwidth between the CPU and main memory, which already puts you at a disadvantage. But there's also a fixed cost per transfer that results from the driver telling the GPU's hardware to prepare and execute the transfer. As a result, the smaller the amount of data you move, the higher the amortized cost per unit of work accomplished. In practice this means that you need to have a lot of data to operate on, a lot of operations to perform, or both, in order to overcome this penalty and achieve a performance win. Lots of interesting problems fall into that category, and lots don't.
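As a back-of-the-envelope sketch of that amortization effect (the bandwidth and fixed-cost figures below are invented for illustration, not measurements of any real hardware):

```python
def transfer_time_ms(bytes_moved, bandwidth_gb_s, fixed_cost_ms):
    """Fixed driver/launch cost plus time spent on the bus, in milliseconds."""
    return fixed_cost_ms + (bytes_moved / (bandwidth_gb_s * 1e9)) * 1e3

# Hypothetical numbers: 8 GB/s effective bus bandwidth, 0.05 ms fixed cost per transfer.
small = transfer_time_ms(4_000, 8, 0.05)        # a 4 KB transfer
large = transfer_time_ms(400_000_000, 8, 0.05)  # a 400 MB transfer

per_byte_small = small / 4_000
per_byte_large = large / 400_000_000
# The tiny transfer pays vastly more per byte, because the fixed cost dominates it.
assert per_byte_small > 50 * per_byte_large
```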


Just last week AMD introduced their new APUs, which integrate a smallish GPU into the CPU and, for the first time, share the same memory space (Intel's latest Haswell i-series processors do this too), which can reduce or eliminate the transfer overhead of PCIe. These integrated GPUs are only about 1/6th the size of a high-end GPU, and they don't have dedicated GDDR5 to lend them super-high bandwidth, so some really big problems still win on a discrete GPU; they also still don't handle branches and divergent code well, so those problems are still best left to the CPU. But there are some problems -- facial recognition, for instance -- that do a moderate amount of computation on a moderately-sized data set, and those kinds of problems do better on these APUs than they do on either CPUs or discrete GPUs. My car analogy begins to break down here, because these APU execution units are still buses; they just have much more efficient protocols for loading and unloading their passengers.

#5126179 640x512, is it safe?

Posted by Ravyne on 24 January 2014 - 02:15 PM

The thing about LCDs is that they only have one "true" resolution -- the native resolution of the panel. Driving the display at anything other than the native resolution causes scaling, and any scaling that's not a simple fraction of the native resolution (1/2, 1/3, 1/4, ...) will show ugly artifacts where the edges of the intended pixels and the actual hardware pixels don't align. Choosing a simple fraction ensures that 1 intended pixel spans exactly 2, 3, or 4 hardware pixels. The CRTs of old effectively changed the pixel size to be proportional to the portion of the display being utilized (within the hardware's limits, of course).


To get pixel-perfect image quality from an LCD, you should choose a target resolution for your application and render off-screen to a buffer of that size. Then draw that buffer to the screen, scaling it up by the largest whole number that yields a size fitting entirely inside the native resolution of the panel. With luck, or by choosing your target resolution wisely, this can often fill the entire screen. If there are a few pixels left over on one or both axes, fill them with black or with a border image. As an example, when I do a retro-looking 2D game, I choose a target resolution of 640x360 because it's 1/2 of 720p and 1/3 of 1080p, so I can render cleanly at the two predominant television resolutions. On laptops and PC monitors, many screens are 1080p, but nearly none are exactly 720p (1280x800 is semi-common, and 1366x768 is near-ubiquitous) -- in those cases I just draw a border image, which gives me a pretty good story for all the common resolutions I'm likely to encounter. There are still semi-oddball resolutions with less-than-ideal results -- 1680x1050 and 1600x900, or any old 4:3 ratio (or 5:4, like your 1280x1024) come to mind -- where I end up drawing more border than I'd like.
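That scale-and-letterbox step boils down to a few lines (a sketch; the function and variable names are my own):

```python
def integer_fit(target_w, target_h, screen_w, screen_h):
    """Largest whole-number scale of the target buffer that fits the panel,
    plus the border (letterbox) size left over on each axis."""
    scale = min(screen_w // target_w, screen_h // target_h)
    out_w, out_h = target_w * scale, target_h * scale
    return scale, (screen_w - out_w) // 2, (screen_h - out_h) // 2

# A 640x360 buffer scales perfectly to 720p and 1080p...
assert integer_fit(640, 360, 1280, 720) == (2, 0, 0)
assert integer_fit(640, 360, 1920, 1080) == (3, 0, 0)
# ...and on a 1366x768 laptop panel you get 2x with a thin border on each edge.
assert integer_fit(640, 360, 1366, 768) == (2, 43, 24)
```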


As far as damaging monitors goes, I wouldn't worry about it (although driving an old monitor out of spec certainly can damage it); in most cases where you aren't exceeding what the monitor can do, it will either accept and adjust to the signal, actively deny it, or display harmless garbage. I'd advise you to make sure pressing escape exits your game or reverts to a safe resolution, though -- you don't want to bork the screen and leave the user with no way to clear it without resorting to a reboot or ctrl-alt-del.

#5126169 Keep getting rejected by interviewers

Posted by Ravyne on 24 January 2014 - 01:38 PM

Thanks Ravyne.


I got the following feedback from the last interview, at an antivirus company. I really do not know what to reply to him so that I keep new doors open:



I’d like to thank you for the application.

We actually favored another job seeker who fitted better into the team in terms of qualification and previous knowledge in some very specific domains.


I’ll have you in mind when we look for a new position in the near future.


Because of the way it's worded, I actually think there's an opportunity to continue building a favorable impression with a simple response. Something along the lines of: "Thanks again for having me in for an interview. I came away impressed with your company culture, and would be excited to be considered for future openings that match my skills."


That's all it needs to be. Being too long-winded, or appearing overly excited in your response, could very easily work against you and torpedo your chances of being asked back. I would only encourage you to respond in this case because the precise way it's worded suggests to me, at face value, that they actually do have a specific opening coming soon, and that the interviewer actually will consider you for it. But there's always the possibility that they're just being pleasant, or the more remote possibility that they've actively decided not to consider you for future positions and think telling you that you will be considered is the quickest way to make you go away. You should honestly know whether that outside chance is the reality or not, and if it is, you should seriously examine what it is about you that might be putting people off.


In the general case of a "Thanks, but no thanks" response, it's not necessary to respond, and doing so will usually only work against you. If you solicited feedback or followed up after the interview, you should already have thanked them for the interview, and there should be nothing more that needs to be said -- continuing on just distracts a busy professional and drags out a long process that no one enjoys (that is, filling an open position, which is time-consuming, boring, and tedious), and they aren't going to think any better of you for it.

#5125801 Database modeling

Posted by Ravyne on 23 January 2014 - 12:13 AM

It's a handy bit of knowledge for a couple of reasons: 1) designing database tables is not entirely unlike designing a set of classes to express a similar concept, although you're sometimes designing with different ends in mind, and 2) databases are useful in implementing games -- nearly all persistent (that is, kept for a long time) data in online games is stored in a database (although NoSQL databases are increasingly used), and databases can also be useful on the near side of your game's build system if you have a lot of data to manage.


Regarding 2 -- it's not just MMORPGs using big databases in games. There are huge databases backing the online experience for the Halo games, where you can go back and look through your past games and see where you were killed, where you made kills, and with what weapons. I think COD is similar, with possibly even more of a social aspect. So if you want to *build* games, it's definitely another good tool to have in your belt, but for a pure *designer* role it's probably less important (though it could still be leveraged as a tool).
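As a tiny illustration of that kind of persistent match data (the table and column names are my own invention, using Python's built-in sqlite3 module):

```python
import sqlite3

# In-memory database standing in for a real persistence backend.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE kills (
    match_id INTEGER, killer TEXT, victim TEXT, weapon TEXT)""")
db.executemany("INSERT INTO kills VALUES (?, ?, ?, ?)", [
    (1, "alice", "bob",   "sniper"),
    (1, "alice", "carol", "shotgun"),
    (1, "bob",   "alice", "sniper"),
])

# "Look back through your past games": kill totals per player in match 1.
rows = db.execute("""SELECT killer, COUNT(*) FROM kills
                     WHERE match_id = 1 GROUP BY killer ORDER BY killer""").fetchall()
assert rows == [("alice", 2), ("bob", 1)]
```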

#5125273 Struct versus class?

Posted by Ravyne on 21 January 2014 - 02:07 AM

I'm going to suggest that maybe you should do some more reading on class vs. struct, and on interface design. It sounds to me like you've got an unseasoned understanding of some things. That's not a dig at you; it's just something I'm picking up on, and it's very common for people to have misunderstandings that lead them here.

Just for example, you say "global functions" like it's a bad thing, when in fact such "free functions" should very often be preferred. This is a common misunderstanding stemming from "object-orientitis," where one believes that all of an interface belongs literally inside the class, when in reality the portion of an interface that has access to a class's internal state should be as small as possible, to maximize encapsulation and cohesion.

#5125145 Is HTML5 a legit language for developing game?

Posted by Ravyne on 20 January 2014 - 02:40 PM

Flash is available as a 100% open source solution (you can use FlashDevelop or Adobe/Apache Flex). I bet you'll find lots of commentary claiming the exact opposite, usually alongside claims that HTML5 is the best and only alternative. Lies lies lies lies lies...


Without turning this into a flame war, it seems a little unfair to put this forward as some kind of salvation while neglecting to mention that web browsers on the whole are moving away from supporting plugins like Flash *at all*, or that to get Flash content onto something like an iPad you've got to create a kind of separate distribution and (last I heard) pay Adobe for the tooling.


It's entirely fair to say that HTML5 is not fully mature, or that its platform-support issues are too numerous to whitewash over, but it has also been demonstrated that they are not insurmountable. There are complete HTML5 games that support all the features you'd expect of a Flash game, and do so across a reasonable number of browsers. Flash is absolutely the more mature platform, particularly in how uniform its platform support is -- it ought to be, given that it had a 10-year head start -- but it's misleading to pretend that Flash isn't dying out, that it doesn't have its own problems, or that it's some kind of panacea.


It's also a little misleading to present the fact that different browsers might require different media assets as a major problem. A little browser detection causes the right file to be pulled down with little effort on the part of the developer, and the build process ought to provide the necessary conversions from whatever format the assets are produced in. It might be annoying that it's necessary, impure, or failing to live up to the "promise" of HTML5, but it makes little difference from a practical standpoint.


To be absolutely clear, we're still in the period when you're going to pay some early-adopter tax for implementing software in HTML5, and a time when better and best practices are not widely understood or disseminated. HTML5 is the Western Frontier of web development; it's not ready for those who are more conservative in their approach, or who cannot operate independently. But for those who can deal with that, there's the promise of adventure and undiscovered rewards.

#5125098 Is HTML5 a legit language for developing game?

Posted by Ravyne on 20 January 2014 - 11:57 AM

It's still an up-and-comer, but it's absolutely legitimate. There are a number of things that aren't as mature as in other environments, but none are unworkable. Sound has been one of the difficulties, and the ecosystem there has not yet converged, but it's looking like the Web Audio API is gaining traction and will likely be the way things go.


And for those who don't want to work in JavaScript directly, tools like Emscripten make it possible to write code in more familiar languages. There are also languages designed to compile to JavaScript, like Google's Dart and Microsoft's TypeScript -- Dart, last I looked, produced quite verbose code, so of the two I tend to prefer TypeScript, since its output seems tighter.


On the server-side, node.js allows you to run javascript server processes (or javascript generated from emscripten, Dart, Typescript, etc), so you can share code between client and server when appropriate, whatever your choice is.

#5124484 Is there a market for old-fashioned RTS games?

Posted by Ravyne on 17 January 2014 - 02:41 PM

That's sort of my point. If you basically just had the gameplay from the games I mentioned, and created a decent campaign, or regular updates of new content, would people who love RTS buy them for their next RTS fix? Or would they only buy a game which does something new and exciting? 


That's really the heart of the question -- the classic games you mention already exist and already have mindshare, so why aren't they still being played? Is it because they're difficult to get running on a modern machine? Is it because their player base now prefers different kinds of platforms (e.g. tablets)? Is it because the community of players has dwindled? Is it because the content has gone stale? Is it because the gameplay hasn't aged well? Is it because player expectations have outgrown them?


You're not selling fruit, so simply being a fresh rehash of some old games isn't going to cut it -- not without addressing the reasons those games were left by the wayside. Likewise, can you determine what was good about those old games, so that you can retain their core while bringing those ideas forward? And exactly how much *does* need to change to get a game that's worth playing in this day and age, anyhow? Is it so much that you've essentially got another "modern" RTS?


Luckily you have 10+ years of hindsight to aid in your analysis, so finding things to improve shouldn't be impossible. The worst thing you could do is to assume the old games were infallible in their design, put them on a pedestal, and not add, remove, or evolve anything. If there were nothing to fix, people would still be playing those games en masse. They're not. 

#5123758 Classes and use of 'New'

Posted by Ravyne on 14 January 2014 - 09:38 PM

In the following code, ctest is created either in your program's data area (if declared outside a function) or on your program's stack (if declared inside a function). In either case, the compiler manages the lifetime of this SomeClass instance and destroys it when the program terminates or when the function's scope ends, respectively.


SomeClass ctest;


In the following code, ctest points to an object that's created on the heap -- that's what the default 'new' does. You can also overload operator new to re-route the allocation of specified types to your own allocator, whose storage might be managed in any number of ways. There's also "placement new," which constructs the specified object at an address you supply, in space you have already allocated -- but that's a fairly advanced C++ feature.


SomeClass* ctest = new SomeClass();


The former style is generally preferable when the object's lifetime is known at compile time -- that is, the useful lifetime of the object is predetermined, usually because it's needed for the life of the program (static allocation) or for the duration of a function's scope (automatic allocation). The latter style is used when the useful life of the object is not predetermined -- say, a file that the user could close at any time, or an enemy that might die at any moment. (Sometimes in cases like these, for performance reasons, you might mark a dead enemy as empty and re-use it later rather than calling new/delete frequently, because they're expensive.) Pointers and new/delete are also necessary when ownership of the pointed-to object can be transferred or shared, although such use is best hidden behind one of the smart pointer classes, unique_ptr and shared_ptr, respectively.