
Ravyne

Member Since 26 Feb 2007

#5297776 Have I been aged out of the industry? And where else can I go?

Posted by on 23 June 2016 - 08:57 PM

Maybe it's your resume?

 

Certainly you're experienced, but if you've had a long stretch without having to job-seek, it's possible your resume isn't in a contemporary style or doesn't use the right buzzwords. Especially if you're looking at medium and large studios, the very first thing a resume has to do is get past the HR drones. Last month, after 5 years in my current position, I had to update my resume for a different role I was interested in inside the company -- it was a lot more work than I would have thought to bring my old resume 5 years forward all at once.

 

How are these companies that aren't interviewing you getting wind of your age anyway? If it's in your resume or explicitly in any professional profiles, you might consider removing that information or making it less front-and-center. Hopefully I'm not out of step with the resume angle; it's just the fact that they're apparently getting your age from it that makes me suspect it could be part of the issue.

 

Finally, some standard but seemingly-uncommon advice on job-seeking:

  • Use your contacts -- I've read that referred resumes are 20x as likely to land a follow-up (phone screen or interview).
  • Cover letters should always be customized to the position -- show interest in the company and position, and share why you think you'd be a great fit.
  • Remember that the purpose of a resume is not to get a job, the purpose of a resume is to get an interview (or whatever next steps are).
  • Stay positive -- If they're looking at your resume, they want to give you an interview; if they give you an interview, they want to hire you. It's just the process of whittling down.
  • Be aware -- The biggest reason job-seekers are cut short of the position they want is not lack of knowledge, it's risk. Be specific about what you know and have done, and don't do anything that puts doubt in the hiring manager's mind about what you're really capable of. I know of enough people who've tried to appear more knowledgeable than they were, and it wasn't the lack of knowledge that killed them, it was the grandstanding that painted them as a risk.



#5297505 Is there any reason to prefer procedural programming over OOP

Posted by on 21 June 2016 - 04:58 PM

 

One problem is that the style of OOP taught in many colleges, universities, and books borrows very heavily from Java's over-orthodox view of what OOP is.

 

Eh no. What you find in universities is a lot of people who are not exposed to codebases beyond the very basics they teach in the courses. They read software engineering books (that are, for the most part, language-agnostic and UML-centered) and try to apply that to their "hello world" examples. The results are obviously disastrous, since software engineering books are targeted at big organizations, with big codebases and complex projects, often much more complex than whatever the professor has ever done.

 

...

 

Worst of all is that people get out of those courses thinking "This is how Java is done!" "This is how C is done!"

 

I don't want to derail too far from the original question, but yes -- this is also true. The trouble is that nearly all bread-and-butter courses are taught in Java because it's been adopted as the language that testing is administered in. Other languages get some road-time too, but Java is by far the most prevalent. Why that becomes something of a danger doesn't have much to do with Java being a poor language (it's a perfectly fine language for its stated design goals; I just disagree with the merit of those goals), as much as it has to do with its dogmatic orthodoxy -- you cannot do procedural programming in Java; the best you can do is fake it by wrapping it in a superfluous veneer of Java-isms, because the Java Clergy sayeth so. Writing a simple 'Hello World' program in Java isn't contrived OOP as an exercise, it's contrived OOP because that's what the language demands.

 

We can agree, though, that most early programming habits anyone picks up in any language are best sent to the dustbin sooner rather than later. Rarely are first habits good habits.

 

 

Having learned Java-style OOP has a lasting effect on a programmer. It's like an embarrassing accent

 

Oh, C++ has its warts alright. It's far from perfect, but Bjarne is explicitly not a prophet -- C++ simply doesn't enforce dogma on you the way Java does. As I said, Java is a suitable language for its design goals; it just so happens that among those goals are enforcing a particular and rigid view of OOP best practices, protecting fully-functioning programmers from themselves, and a misguided attempt to make Java programmers interchangeable by creating a language that forces them to the lowest common denominator. If you're a big enterprise, those things are features. It just makes for an obstinately opinionated language driven by business decisions rather than technical ones.

 

IMO, C# did a much better job achieving Java's technical goals, and was better for throwing off as much dogma as it could.

 

 

I wouldn't want to derail the conversation any further, but I'm happy to continue the discussion elsewhere. The relevance to the OP's question has to do with the prevalence of Java combined with its limiting orthodoxy, and there's not much more to say on that topic.




#5297317 Is there any reason to prefer procedural programming over OOP

Posted by on 20 June 2016 - 09:13 AM

Lots of good answers already.

One problem is that the style of OOP taught in many colleges, universities, and books borrows very heavily from Java's over-orthodox view of what OOP is. Java doesn't allow things like free functions or operator overloading, so it forces you into expressing solutions composed of classes, in a very verbose, over-engineered way.

Graduates come out of school mostly having only experienced this way of OOP programming, and so they carry it forward. They usually have no earthly idea how to organize a procedural program in C, and when asked to write C++ will mostly give you a Java program that makes the C++ compiler happy enough to compile it.

Having learned Java-style OOP has a lasting effect on a programmer. It's like an embarrassing accent -- at first something you have to consciously work at hiding, and hopefully something that fades away over time.

An expanded, less-orthodox view of OOP (as in C++) embraces mixing other styles. Good C++ programs usually mix OOP, procedural, and functional styles together.
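
To make that concrete, here's a minimal sketch of the kind of mixing I mean (the names are made up for illustration) -- a small value type, a free function, and a standard algorithm with a lambda, each used where it fits:

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // A small value type (OOP): data plus the couple of member functions that belong with it.
    struct Enemy {
        float health;
        bool is_alive() const { return health > 0.0f; }
    };

    // A free function (procedural): no class needed just to hold behavior.
    float total_health(const std::vector<Enemy>& enemies) {
        // A lambda plus a standard algorithm (functional style) does the actual work.
        return std::accumulate(enemies.begin(), enemies.end(), 0.0f,
                               [](float sum, const Enemy& e) { return sum + e.health; });
    }

    int main() {
        std::vector<Enemy> enemies{{10.0f}, {0.0f}, {25.5f}};
        std::cout << "Total health: " << total_health(enemies) << '\n';
        std::cout << "Alive: "
                  << std::count_if(enemies.begin(), enemies.end(),
                                   [](const Enemy& e) { return e.is_alive(); })
                  << '\n';
    }

None of this needs a manager class or an interface hierarchy; each piece uses the style that suits it.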


#5297068 Microsoft Checked C

Posted by on 17 June 2016 - 09:09 PM

What do you guys reckon? Would you throw away a bit of portability between C compilers to use Checked C?

 

Honestly, I'd just use C++. I would welcome this work being looked at for inclusion in the next C standard, but I suspect that's unlikely. On most platforms today C++ is just as available as C -- even on something as tiny as certain varieties of 8-bit microcontrollers you can do C++ with free and open toolchains. Only relatively few (and in general, esoteric, legacy, or both) platforms that support C don't support C++, usually those with proprietary toolchains.

 

Between the family of C++ smart pointers and the work going on around the C++ Core Guidelines -- with Microsoft providing a proof-of-concept implementation and working with Bjarne Stroustrup -- C++ is meant to solve many of these same things.
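
As a rough illustration of the standard-C++ tools I'm referring to (a sketch only -- plain standard library code, not the Core Guidelines checker or the Checked C machinery; the Texture type is made up):

    #include <iostream>
    #include <memory>
    #include <stdexcept>
    #include <vector>

    struct Texture {
        int width = 0;
        int height = 0;
    };

    int main() {
        // Ownership is explicit: the unique_ptr releases the Texture automatically,
        // so there's no leak or double-free to get wrong on this path.
        auto tex = std::make_unique<Texture>(Texture{256, 256});

        // Bounds are checked where you ask for them: at() throws instead of silently
        // reading out of range the way a raw C array would.
        std::vector<int> mip_sizes{256, 128, 64, 32};
        try {
            std::cout << mip_sizes.at(10) << '\n';   // deliberately out of range
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << '\n';
        }

        std::cout << tex->width << "x" << tex->height << '\n';
    }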

 

Or, there's the Rust language, which has commercial support from Mozilla and a great community, has a great C-linkage story, and works even on bare-metal/embedded platforms.

 

It's a decent enough idea, but I don't see Checked C gaining any real level of support.




#5297043 My game ends up being boring

Posted by on 17 June 2016 - 05:13 PM

I found that video -- it turns out it was a GDC talk about cameras in side-scrollers. It's a really excellent video.

 




#5296421 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 13 June 2016 - 09:04 PM

Yes, Boost is a very cool library; I hear some Boost features are being included in new C++ specifications. Thanks. But I'm interested: why does Microsoft use the C prefix in MFC, if it's not correct for C++?

 

Because it was the fashion of the day when MFC was introduced.

 

It was a different time then -- most significantly, IDEs were much more primitive and had no or poor support for modern conveniences like intellisense or auto-completion. It was believed by some people and organizations in those days that encoding that kind of information into type, function, and variable names would provide a similar benefit to what we get today from intellisense and other modern IDE features. Whether it was a great idea at the time or not is a matter for debate, but times have changed. Just because something was a good idea yesterday, doesn't make it a good idea today.

 

Even assuming that the supposed benefit was real, this approach is brittle, because what's encoded in the name often comes to disagree with reality as the code evolves. Take the C prefix on classes that you promote -- what if, after some reflection, you decide to refactor that class into a struct instead, because you thought you'd need some member functions but it turns out you didn't after all (and if you're pedantic enough that classes are prefixed with C, you're probably also pedantic enough that 'structs don't have member functions')? Normally all you'd have to do to refactor this is change the declaration/definition to use the struct keyword rather than class, and you'd be done. But because you duplicated this information in the name of the type, the name now disagrees with the reality of what the type is, and you need to go and change it everywhere you use a parameter or variable of that type.

 

Now, that's not a great example, because structs and classes are close enough in C++ that it may not actually matter semantically, even if the cosmetic difference bothers you -- but if you care enough to insist that "C belongs on every class", then you can't back out of cosmetic correctness now.

 

I would go one further to say that repeating type information by encoding it into the name of a code entity is a violation of the DRY (Don't Repeat Yourself) principle, and is bad for all the same reasons -- primarily that every time you repeat the same information, you introduce another opportunity for disagreement to seep in, which undermines the conceptual purity of what you have to reason with day in and day out. If you don't repeat yourself, there's only one source of information and you know that it is the one and only truth; thus, the purity of what you have to reason with is never compromised.

 

There's a lesser-known, but more correct, version of such prefixes (the true form of Hungarian notation), where the prefixes encode usage information rather than type information -- for example, you might have floating-point values where one usage represents radians, and another represents degrees of an arc. Programmers realized that those types, in a language like C (and still C++ to this day), were syntactically interchangeable (compiler don't care) but not semantically interchangeable (program do care). This is because typedefs in C and C++ aren't strong -- that is, a typedef is just another name for its underlying type, not a distinct type of its own. For complex types, structs and classes largely side-step the issue of weak typedefs, and in other languages strong typedefs are offered instead of (or in addition to) weak typedefs. In general, code today still uses this latter convention of specifying usage where it remains unclear, but the Hungarian-style prefix has fallen out of fashion -- today you'd more likely see robot.orientation_as_degrees, rather than robot.degreesOrientation.
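
A minimal C++ sketch of that idea (the names here are hypothetical), showing why a weak typedef doesn't catch the mix-up and how a tiny wrapper type does:

    #include <cmath>
    #include <iostream>

    // Weak typedefs: to the compiler these are the *same* type, so mixing them
    // up compiles without complaint.
    typedef float degrees_t;
    typedef float radians_t;

    // Strong types: trivial wrappers make the two semantically distinct, so the
    // compiler rejects accidental mixing.
    struct Degrees { float value; };
    struct Radians { float value; };

    Radians to_radians(Degrees d) {
        return Radians{d.value * 3.14159265f / 180.0f};
    }

    float sine_of(Radians r) { return std::sin(r.value); }

    int main() {
        degrees_t heading_weak = 90.0f;
        std::cout << std::sin(heading_weak) << '\n';    // compiles, but wrong: 90 is not radians

        Degrees heading{90.0f};
        std::cout << sine_of(to_radians(heading)) << '\n';  // correct
        // sine_of(heading);  // would not compile -- the mistake is caught at build time
    }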




#5296363 What are your thoughts on how to support 21:9 in 2D games?

Posted by on 13 June 2016 - 12:19 PM

There are still some 4:3 screens kicking around, I'm sure, but it's been nearly impossible to buy one off-the-shelf for ages, except for a handful of PC monitors marketed mainly to corporate environments. Back in the day, the question was "how do I design for 4:3 and 16:9?" and now it's "how do I design for 16:9 and 21:9?". I think the difference is that 16:9 was clearly where things were moving to, but it doesn't seem so clear for 21:9. Oh, unless you're also dealing with iPads or the myriad other devices as well (though then you get a pass on 21:9, so far).

 

But the approach is the same -- either you design such that the extra information isn't beneficial, or you hide it. When you take the design approach, what you're really doing is letter-boxing the design process itself -- you can't place essential information outside the safe area, and you have to think about whether even revealing information sooner to players with different aspect ratios gives them an advantage. Letter-boxing the player's screen is essentially a design-choice to simply ignore these same questions.

 

Now, something to keep in mind is that this matters most for competitive scenarios -- that's when one player gets a direct advantage over another and fairness is the primary objective. In single-player, non-competitive scenarios, the questions can still matter, but the effect is different; there, your objective is more concerned with ensuring that the pace of progress and level of exploration is not affected. Even in single-player games, though, there can be competitive elements like leaderboards, speed-runs, etc. What's important will depend on what kind of game you're making.

 

I tend to think, myself, that the best way to approach this in this day and age is to employ a dynamic camera that adapts to a design-safe area that's independent of any particular aspect ratio, and which itself may not match any popular aspect ratio one finds in the real world. That approach doesn't really work for pixel-perfect, sprite-based 2D games, but is a very good approach anywhere that the pixels aren't crucial to the design or aesthetics.
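
As a sketch of what I mean by a camera that adapts to a design-safe area (the names and numbers here are hypothetical), the idea is to expand the view outward from the safe area to fill whatever aspect ratio the player actually has:

    #include <iostream>

    // Expand a fixed "design-safe" area (in world units) to fill the player's
    // actual aspect ratio, so nothing essential ever falls outside the view.
    struct ViewExtent { float width; float height; };

    ViewExtent fit_view(float safe_width, float safe_height, float screen_aspect) {
        float safe_aspect = safe_width / safe_height;
        if (screen_aspect > safe_aspect) {
            // Wider screen than the safe area: keep the safe height, reveal extra width.
            return {safe_height * screen_aspect, safe_height};
        }
        // Narrower screen: keep the safe width, reveal extra height instead.
        return {safe_width, safe_width / screen_aspect};
    }

    int main() {
        // Assume a design-safe area of 160x100 world units (a made-up 16:10-ish box).
        for (float aspect : {4.0f / 3.0f, 16.0f / 9.0f, 21.0f / 9.0f}) {
            ViewExtent v = fit_view(160.0f, 100.0f, aspect);
            std::cout << aspect << " -> " << v.width << " x " << v.height << '\n';
        }
    }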

 

You also have to address on-screen UI in a flexible way -- not only might a shifting aspect ratio change what the UI obscures, but you might also be inclined to deliver different UI layouts to tablet users, PC users, and console users anyway. Most serious developers should already have moved to that way of thinking if it's important to where they deliver their games.




#5293857 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 27 May 2016 - 02:07 PM

I've simplified to the essentials over the years. I use the I prefix for interfaces, 'cause C# still rolls that way and I like it. (Non-pure abstract base classes are suffixed with Base, usually.) Single underscore to label _private or _protected class variables*. And g_ for globals, because those should look gross. That's pretty much the extent of it.

 

* The underscore thing also works great in languages that don't HAVE private scoping, or when I don't actually private scope them but they're considered implementation details.

 

I follow this kind of minimalist approach as well. Prefixing classes with C, or variables with their atomic type, or similar things doesn't really tell you anything useful with modern tooling. Even people who work with text-mode editors like emacs or Vi(m) will have CTags or some kind of IDE-like intellisense-thing going on.

 

Scope is still something helpful to know, so I do 'g_' prefixes for globals and 'm_' (or sometimes just '_') prefixes for private/protected members. I also like to know at a glance whether a variable is static or volatile, so I use 's_' and 'v_' prefixes there -- these are rare, though.

 

As far as general naming goes, I use plural word forms when dealing with collections of things, and boolean variables/functions are almost always prefixed with a word like 'is' or 'has' to reveal the question being answered -- like 'is_dead' or 'has_children()' -- among other things similar to what DonaldHays quotes above.
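
Put together, a tiny sketch of those conventions (all names hypothetical) looks something like this:

    #include <vector>

    int g_frame_count = 0;                      // global: deliberately ugly g_ prefix

    class Spawner {
    public:
        bool is_dead() const { return m_is_dead; }
        bool has_children() const { return !m_children.empty(); }

    private:
        static int s_spawner_count;             // s_ marks the (rare) static member
        std::vector<int> m_children;            // plural name for a collection
        bool m_is_dead = false;                 // m_ marks a private member
    };

    int Spawner::s_spawner_count = 0;

    int main() {
        Spawner spawner;
        return spawner.is_dead() ? 1 : 0;
    }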

 

In C++ (as are all of my examples above), I defer to my own personal axiom of "do as the standard library does" regarding naming and other externally-visible conventions. For example, this is the reason I now use lower-case-with-underscores style, rather than CamelCase or pascalCase as I did at different points in the past. When I'm doing C#, my naming conventions follow the style of its standard libraries. My rationalization for why this axiom of mine is a good approach is that 1) these standard libraries represent the only style that has any credible claim that it "should be" the universal style, and 2) they've seen every odd corner case and combination thereof, and have laid down an answer; I don't have to waste brain-cells thinking it through and then being self-consistent on each rare occasion it comes up, months or years between.

 

Of course, if the place that makes sure your paychecks don't bounce has a house style -- and most do -- then you follow along with that, because it's part of what you're paid to do.




#5293847 Are Third Party Game Engines the Future

Posted by on 27 May 2016 - 01:06 PM

@Ravyne

Though if you look at Unreal Engine 4, you will see a huge variety of different types of games being made with it. UE4 is very different to UE3. It supports large open worlds out of the box and small studios are making very big games with it. For example, Ark: Survival Evolved was created by a virtual team of indie developers. The engine is very flexible and although it may not be optimal for a specific genre, the source can be modified to make it optimal. An indie team starting out will not be making boundary-pushing games. If the team is successful and grows, they can hire more programmers and heavily modify the engine for more ambitious projects. UE4 is a lot more flexible. The Oculus team replaced UE4's renderer with their own for VR optimization and used it in the games they are making. They've also made this branch of the engine publicly available.

 

For sure, and UE4 is worlds better than 3 ever was -- I just meant to say that FPS is more or less the only "turn-key" option that Unreal gives you, or at least the one that's most turn-key. For all its flaws and over-engineered inefficiencies (keeping in mind that their "inefficiencies" are still faster than anything you or I would likely crank out without a ton of iteration), no one can deny Unreal Engine has powered one of the top-tier FPS series. I'd wager there are still a lot of code and design decisions in there that stem from its FPS roots, though obviously they're going to be a lot more subtle and a lot less limiting than having an FPS-centric API surface.

 

On the topic of derision, what's popular and what's good are often barely-related axes. When I last interviewed for games positions, before my current full-time gig of 5 years, it was a popular interview practice to show you some piece of shipping code and have you point out the bugs or suggest ways it could be improved. After a time, I realized that every single one of these exercises, at several different studios, used source code from UE3. That tells me two things: 1) that UE3 was not held up as any sort of paragon, and 2) that despite this, it was still popular enough that all these studios were familiar with the engine and cared that new hires could follow its source code.




#5293632 Are Third Party Game Engines the Future

Posted by on 26 May 2016 - 12:26 PM

I think these engines are here to stay, and will improve over time; Unity is especially dominant in the indie space now, and Unreal is making inroads there (slowly) but has more mindshare among pro studios. CryEngine doesn't seem to have much uptake at all. Also in the indie space are simpler engines and frameworks like Cocos2d, MonoGame, DXTK, and others.

Every single one of these has its own point of view, quirks, and warts. All of the engines have their own way of doing things that you need to learn to work with. Frameworks are sort of similar, except you're not so locked into doing things "their way", just by virtue of the fact that they do less and have fewer intertwined systems.

"Jack of all trades, master of none" as they say--most people take that to be an insult, but the full quote goes on to end with "--but better than a master of one." These engines are a great value proposition, but they don't fill that need for masterful execution (well, unless you're using UE4 to make a shooter). That's why in-house engines will always be a thing on the high-end.

On the low end, the mental lock-in and quirks of engines can be more hindrance than help for very simple or very unique small-scale games. Especially using those frameworks I mentioned, it can be less headache to roll your own purpose-built engine than to fight against an engine's natural currents to modify it to your needs; or simply to sidestep all those engine abstractions that are more complex than your game needs.

Where engines are worth their while is really in the middle ground -- your game is complex enough that rolling your own tech is more costly (money, time, risk, market opportunity), but not so unusual as to be a poor fit, and also not so complex or boundary-pushing that it risks outgrowing an off-the-shelf solution. Many games great and small fit into that box, and that's why Unity and Epic can staff hundreds of people behind these offerings and make healthy businesses of it.

Another side-effect, for good or ill, is that these engines instill a certain amount of liquidity among developers, particularly those who aren't ninja-level engine developers. Unity and Unreal are concrete skill sets that you can recruit and hire on -- before these engines became popular, every new hire had to spend time picking up the house engine, house scripting language, house toolchain, and house pipeline. Nowadays that's often still true -- but not at the rate it used to be. Part of the attraction of using Unity or Unreal among larger studios is that they gain a significant hiring pool (even including people who may not have a traditional CS background) and that those people can hit the ground running, more or less.


#5292361 Best gaming platform in the future with marketing perspective.

Posted by on 18 May 2016 - 04:43 PM

I hate to rain on the anti-Microsoft parade, but all this advice to avoid Microsoft or vendor lock-in is tangential at best, and at the least seems outdated. But to start from fair ground, I'll throw out the disclaimer that I'm a writer (docs and such) on the Visual Studio team.

 

If you haven't been following along lately, Microsoft as a whole is really leaving the our-way-or-no-way mentality behind. To be frank, today's devs have more good options than they did years ago, so there's a lot more mobility in dev tools, platforms, languages, etc. -- they don't accept our-way-or-no-way anymore. Microsoft's continued success and relevance actually requires them to get with that program, and so they have. Today, Visual Studio is already a damn fine IDE for iOS, Android, Linux, and IoT development, in addition to the usual Microsoft platforms -- even just a couple years ago, Eclipse would have been basically the only "serious" IDE for those scenarios (and it's still got inertia today). For example, you can do your programming in Visual Studio on Windows today, and the build/run/debug commands will talk to a Linux box where your code will be built (using your typical Linux development stack), launched, and hooked to GDB; GDB in turn talks back to Visual Studio, and it looks just like a local debugging session of your Windows apps. It's basically the same scenario for Linux-based IoT, Android, and iOS as I've described for Linux on the desktop and server. The Android stuff can target a local emulator running atop Windows virtualization, which is actually considered better than the stock emulators provided by other Android development environments, even if that sounds a bit unbelievable. Soon, you'll be able to run an entire Ubuntu Linux environment right inside Windows 10, so that developers will have all those familiar *nix tools right at hand.

 

Believe it or not, "old Microsoft" is basically dead and buried, especially in the server and developer tools division. They're pretty hellbent on making sure that Visual Studio is everyone's preferred IDE, regardless of what platform or scenario they're targeting -- and for those who like lighter-weight editors, there's Visual Studio Code. Stuff is being open-sourced left and right, all our open-source development happens on GitHub, and a bunch of our docs and samples are already on GitHub too.

 

By all means, people should find and use whatever tools and platforms they like; they should target whatever platforms they like, and as many as they like. Odds are, Microsoft and Visual Studio are relevant to where you are and where you're going, or will be soon. It's silly to dismiss them just because they're Microsoft. I use lots of tools every day in my work here that came from the *nix world -- Vim, Git, and Clang to name a few -- and they serve me well; partisanship between open/free and proprietary software isn't a very worthwhile thing IMO, unless you're talking about the very philosophy of it all.




#5292137 Difference Between 2D Images And Textures?

Posted by on 17 May 2016 - 02:07 PM

Of course, if you expect to release on console, and most people only have 1080p televisions, then you'd ship 1080p images for them, and that cuts the storage requirements by 75% immediately -- but even 25MB is still a lot of geometry and textures. Realistically, you probably want to release on PC too, and may only be able to release on PC, since getting access to the consoles is not currently wide-open. On PC, 4K is an increasingly common thing; you don't *have* to support it, but you ought to. And even if you choose not to now, you'll at least want to render and keep the files on hand in a very high resolution, because if you ever need to recover, retouch, or remaster them, you'll want to start from those. As a rule of thumb, you usually want to keep a copy of all game assets at a fidelity at least 2x greater than the greatest fidelity you can imagine shipping -- the basic reason is that you can always downsample without really losing information, while upsampling always requires a guess, even if it's a really well-informed one.

 

Also, there's no conflict between pre-rendered/real-time and static images; they're separate questions. You can have static backgrounds that are pre-rendered, or you can render them in real-time. In general, a static view of any scene, especially an indoor scene (or, more generally, any scene dominated by near occluders), is going to render very quickly -- even if you render it fresh every frame, you're not making any costly changes to it.




#5292125 Difference Between 2D Images And Textures?

Posted by on 17 May 2016 - 01:17 PM

There's a more-extensive answer in my previous post, but TL;DR -- 

 

Pre-rendered backgrounds will be very large (4K resolution, if not 8K), you'll have as many as a half-dozen of them per scene, and you probably won't be able to apply lossy compression techniques to get really good compression rates. Let's say you have five 4K buffers (color, depth, specular, normal, and occlusion) and get 60% compression on average -- if we assume that each buffer is 32 bits per element (some will be less, some might be more), that's going to be 5 x 32MB x 0.60 -- right around 100MB per scene. You can fit a *ton* of geometry and texture data into 100MB -- and there's a good chance you can re-use most textures and some geometry elsewhere, which lowers the effective cost of the run-time rendered solution even further.
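
For concreteness, a quick back-of-the-envelope check of those numbers (every input here is one of the assumptions stated above, not a measurement):

    #include <cstdio>

    int main() {
        const double width = 3840, height = 2160;    // one 4K buffer
        const double bytes_per_element = 4;           // 32 bits per element
        const int    buffer_count = 5;                // color, depth, specular, normal, occlusion
        const double compressed_fraction = 0.60;      // compressed size / original size

        double one_buffer_mb = width * height * bytes_per_element / (1024.0 * 1024.0);
        double per_scene_mb  = one_buffer_mb * buffer_count * compressed_fraction;

        std::printf("one buffer: %.1f MB, per scene: %.1f MB\n", one_buffer_mb, per_scene_mb);
        // Prints roughly 31.6 MB per buffer and ~95 MB per scene -- "right around 100MB".
    }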




#5292120 Difference Between 2D Images And Textures?

Posted by on 17 May 2016 - 12:59 PM

It really depends -- on the one hand, you can render very realistic scenes in realtime, and while this has a runtime cost associated with it, it also gives you freedom to move the camera around naturally if you like. From a production standpoint, that flexibility means that someone like a designer can move a virtual camera around and get immediate feedback, rather than having to get an artist or a content-pipeline tool in the mix -- being able to iterate that rapidly is really helpful.

 

On the other hand, pre-rendered backgrounds can look really great for what's basically a fixed cost, meaning that you can run on a lower spec or pour more power into high-end rendering of characters and other movable objects. If you go back to Resident Evil -- or to Alone in the Dark before that -- that's basically why they did it that way: they used pre-rendered backgrounds to give great scene detail combined with a relatively high number of relatively high-quality 3D models (the models in RE were as good as or better than comparable character models from 3D fighting games of the day, but potentially with many more onscreen).

 

If you were going to do pre-rendered backgrounds today, such that they mixed well with modern rendering techniques for non-prerendered elements, you would probably do something like a modern deferred renderer does -- you wouldn't have just a bitmap and a depth buffer (like RE probably did); you'd have your albedo, normal, depth, specular, etc. buffers, draw your 3D objects into each of them, and then combine them all for the final framebuffer. You could do the static parts offline in your build chain, or you could even do them in-engine at each scene change.
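
A tiny CPU-side sketch of the core idea (toy code, not real renderer code): the pre-rendered background supplies color and depth per pixel, and dynamic objects only land where they are nearer than the stored background depth.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Pixel { float depth; unsigned color; };

    void composite(std::vector<Pixel>& framebuffer, const std::vector<Pixel>& dynamic_layer) {
        for (std::size_t i = 0; i < framebuffer.size(); ++i) {
            if (dynamic_layer[i].depth < framebuffer[i].depth)   // the usual depth test
                framebuffer[i] = dynamic_layer[i];
        }
    }

    int main() {
        // A 4-pixel "screen": background at depth 10, a character covering two pixels at depth 5.
        std::vector<Pixel> framebuffer(4, {10.0f, 0x202020});
        std::vector<Pixel> character{{999.0f, 0}, {5.0f, 0xFF0000}, {5.0f, 0xFF0000}, {999.0f, 0}};

        composite(framebuffer, character);
        for (const Pixel& p : framebuffer)
            std::cout << std::hex << p.color << ' ';
        std::cout << '\n';   // prints: 202020 ff0000 ff0000 202020
    }

In a real engine the same test happens on the GPU against the pre-rendered depth buffer, with the lighting pass run over the combined G-buffers afterward.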

 

It's not cut-and-dried which approach (offline or runtime-at-scene-change) would occupy less disk space. Geometry isn't a big deal usually, and if you get a lot of re-use out of your textures and/or they compress well, you could come out ahead with the runtime approach -- especially so if you can utilize procedural textures. Offline (pre-rendered) images will have a fixed and small runtime cost, but will use a lot of disk space, because the buffers will be large (you'd want to do at least 4K, and probably even 8K) and you probably don't want to apply any lossy compression to them either.




#5291962 Best gaming platform in the future with marketing perspective.

Posted by on 16 May 2016 - 04:32 PM

Which do you think I should particularly focus on? Where is the most revenue? I am of the view that if I spend my time learning unnecessary things (those which I will later understand are of little or no use) then I will simply waste my time.

...

Please answer as descriptively and elaborately as possible, and if possible provide further references for statistical information, I'm serious :mellow:.

 

Stop. How much time will you waste choosing this ideal platform? How much time have you already wasted? How much time will you have wasted when the decision you make proves wrong? Will you throw it all away to pursue your new choice from a fresh start?

 

None of us are omniscient. Some of the best minds are tasked with making their best guesses at what things will be like just 5 years from now, and still most of them are wrong most of the time -- 5 years is about the outside limit for what anyone is actually willing to bet serious money or resources on. People will think about 10+ years sometimes, but very rarely are they making any bets -- usually they're just looking for things to keep an eye on.

 

Take VR -- it was in arcades in the 90s. People were doing it even back then; we've had the basic idea and technical footing to pull it off all this time, but it wasn't clear if or how it could be brought to the mass market. The guys at what's now Oculus bet early and bet big (in blood, sweat, and tears -- not so much money) and showed the way to bring it to the masses -- only after that was anyone with real money or resources willing to place stakes on the table; some of the best technical minds with access to the deepest pockets on the planet didn't see the way on their own. And it's all well and good to say you want to be the next Oculus or the next Mojang, but reality is littered with 1000 wrong guesses for every right one.

 

Learn, do, and adapt is usually a better strategy than betting it all on a predestined outcome.





