
Promit

Senior Moderators
  • Content count: 15705
  • Joined

  • Last visited

Community Reputation: 13246 Excellent

About Promit

Personal Information

  1. MessageBox(0, L"Whatever", 0, 0); That requires UNICODE defined in the preprocessor to work correctly. Which is irritating. I want my code to build correctly with the minimum amount of configuration. Using the actual function name also has the added perk that Intellisense gives you an actual function declaration to work with.
  2. I wish. No, you will have to prefix everything with L. And don't bother using the TEXT macro at all; it's pointless. Personally, I explicitly call the W versions of Windows API functions - always MessageBoxW, never MessageBox (see the sketch after this list). I am not on board with the macro horrors MS manufactured.
  3. I'm not sure there's an actual problem at all. And these days I'm very much against creating a solution before there's a specific problem being solved.
  4. This is true in the world where your game lives on a hard drive. Some of us still remember the world where the game lives on optical media, which practically mandates this type of packaging to have any semblance of control over read patterns. If you can block out a package to load a bunch of assets in one long sequential read to get through loading from optical, it makes a world of difference (see the pack-file sketch after this list). Probably not relevant to the question at hand, of course, but I don't want to lose the plot of why some of these things were developed in the first place.
  5. Going to take a slightly different tack here. I've had a couple of informal discussions, and it appears that executives increasingly consider custom engines and tech to be a liability. This is different from years past. It used to be that an engine was considered a very large but viable investment. In other words, a lot of studios used licensed tech mostly because they couldn't afford to build out custom systems, but there was generally a sense that there was indeed value in doing so. In 2017, however, it's considered a poor use of funds to even build custom tech at a studio level, regardless of whether the studio can afford it. It's only worthwhile, in management's eyes, at a publisher level where the engine can be shared across a lot of teams. That's the energy behind Frostbite, for example. There are some studios which have long-term investments in custom tech, although publishers are largely working to share those codebases across their member studios. Otherwise, execs don't want to have custom tech on hand. It's considered a problem that can be farmed out to someone else, with minimal gain to doing it yourself. Having your own engine means a permanent staff of X people, some of whom can't be replaced at any cost because of how well they know that system. Managers don't like irreplaceable staff. They also don't like long-term variable costs. Training and onboarding new hires is harder. Hell, hiring engine devs has itself become difficult. Homebrew mobile tech is so rare as to be effectively nonexistent in the marketplace. What that really leaves us with is a rather small group of studios building tech as a significant differentiating factor. Games in this category include No Man's Sky and Ashes of the Singularity. While there are arguments to be made both ways, I personally dislike the increased homogenization that is the inevitable result of centralizing on a handful of technology packages. There are consequences from architectural decisions that impact the final look and feel of games, more so than people often wish to admit.
  6. The game industry is relatively forgiving in substituting experience for a degree - but not just any random work experience. If I have an applicant who doesn't have a degree, I need to be assured of a few things:
     • There's a good reason for not having a degree. Money (in the US) is a valid one. "I didn't think it was useful" is not.
     • The work experience and independent projects are sufficient to produce a capable engineer in general.
     • The actual abilities and skillsets are directly relevant to the job at hand.
     • There is intellectual potential for continued growth and advancement.
     In that light, I have some comments.
     "It's two years of formation, VERY practical, it doesn't enter into abstract things like algebra." Red flag. Algebra (and linear algebra) are not abstract nonsense; they're absolutely foundational. If you can't demonstrate basic mathematical competence, then I have zero interest in hiring you for any programming position.
     "Right now, I'm doing a 1 month Unity course in my city, organized by a university, very solid (I was given a scholarship)." While this is a perfectly good way to achieve personal growth, it has no relevance to employment.
     "The design teacher (a really good professional) told us that he can help us to get a job as QA tester in a videogame company." While I have some friends who have followed this route, I don't advise it. In general, I'd rather see someone doing non-game programming work than spending time in the QA trenches just to say they were in games. QA is a better route for those who have minimal relevant skills of any sort.
     "Also, I've been given a job offer for Java Programmer in a consulting company. It's not videogames, but it's programming experience. That's a safer route." Unfortunately, Java consulting is very unlikely to get you into the game industry. It's a good way to make ends meet while doing independent projects that can demonstrate your abilities, though. To be candid, I get the distinct vibe that foundational Java programming is all you know. That is not even remotely sufficient to get a games job.
     While both the university and job experience routes are ultimately viable, neither one automatically gets you there. In either situation, you're going to need to do substantial independent work to be considered qualified for a game programmer job. Generally, the university path is much more likely to give you a smooth entry into the industry, as it raises fewer questions about your abilities and decisions. The main reason for not going that route in the US is the financial challenge of attending a university. If you're going to attempt to use job experience instead, it should either be because you need to make ends meet or because the job experience is strongly relevant to the game industry.
  7. There are occasional bugfixes in GitHub but we haven't done a fully packaged release basically since MS stopped doing "DirectX" releases. I'm happy to merge this change, but at this stage we tend to encourage people to do their own builds. I've been thinking about doing a modernized version of the library (DX11/12, XA2, XI against current languages and libs) but it hasn't materialized yet.
  8. Insert South Park reference here
  9. In general, the reason for different types of seemingly similar resources is that at least one major IHV has (potentially legacy) fast-path hardware that differentiates between them. There are a number of buffer types which perform differently on NV GPUs, while AMD's GCN GPUs simply don't care (see the buffer sketch after this list). You're seeing hardware design issues leaking through the software abstractions. Ideally, we would just have buffers read by shaders and nothing else, not even textures. (I mean, come on, texture buffers?) GPUs haven't reached that level of generalized functionality yet. MS originally pitched this design when they were sketching out D3D 10, and of course the IHVs explained it wasn't feasible.
  10. While people have covered the social side of things, I'm going to jump in and claim that C# is overall a better technical choice of language than Java. Yes, I went there. Java was designed first and foremost as safety scissors, a tool for people who couldn't be trusted not to hurt themselves. And honestly, that's true for most developers, particularly in the web and client/business app space. There was absolutely no desire to expose any "system level" functionality. It was meant to be a simple, sandboxy, safe environment for most of the boring everyday software development that makes the world tick. While C# partly shares this worldview, both the language and the underlying runtime were designed with the option to step outside that box, as long as you do it explicitly. (Notably, VB.NET was not designed this way.) There's a lot more capability in C# to manipulate memory, integrate with native libraries, control allocation, and do the direct manipulation of buffers that is inappropriate for most types of apps but crucial for graphics code in particular and, to some extent, game code in general. It's the relative ease of doing many common game and graphics programming tasks that has made C# preferable here and in the industry in general. It's not that you can't do things in Java, but it always feels like you're fighting the language, working through kludges like FloatBuffer to get things done.
  11. I exhibited at this conference in 2016, immediately adjacent to the prize-winning VR Spacewalk. At one point, two actual astronauts who had spacewalked previously came by and tried it out. I can very much vouch for the value and credibility of FoST.
  12. Oh ho, seems things are getting a bit more interesting still. Fearing Shadow Brokers leak, NSA reported critical flaw to Microsoft
  13. There's some evidence supporting this view, while the other two are purely speculation. There is no need to speculate on this point, as MS has a well-established source code access program which goes out to many different organizations. For that matter, I personally had full Windows source access. Of course, China is simply using hardware and software where they added the backdoors themselves, so it's not particularly helpful to those who would like neither China nor the US to have access to their systems. And no, before someone invariably brings it up, going open source doesn't even remotely address the problem. All backdoors are broken eventually... At the end of the day, the US government requires significant visibility into systems running all kinds of operating systems and software, whether the parties responsible for that software are cooperative or not. That includes a variety of foreign and non-consumer equipment. This means that they have to have a major program to penetrate all of those systems, and we know factually that they do exactly that. Once you invest in all of that infrastructure, there is essentially no need to coerce Microsoft into adding or protecting vulnerabilities (which weren't present in W10 in the WCry case, by the way). You already have everything and you have it on your own terms. The conspiracy theory adds a bunch of extra idiotic steps for no reason. Spooks are nothing if not ruthlessly efficient.
  14. What complete and utter nonsense. Take your conspiracy-mongering garbage somewhere else. This was a system bug like anything else, including Heartbleed. Entirely false. System components are now checksummed, admin privileges are not assigned by default, and there aren't significant configuration problems. While these things CAN be circumvented, the circumvention approaches are just as effective on other operating systems. We live in a world where it's now likely possible at any given time to attack a Linux server running on a VM, jump the Xen hypervisor, and take over the host. We have SEEN these bugs being sold in the wild. The registry does not work that way. I said it already, but the permission system is a perfectly robust ACL-based design shared by many other systems. I'm more concerned that you might think the old owner/group/user octal permission system is a good thing, which would be a shockingly lax security approach. In what way, exactly? You don't know, do you? Come back when you can explain why it's somehow less separated than Linux or OSX. Are you just making shit up now? No, it wasn't. It was assigning admin access to all users by default, which was bad. That's no longer the case, and the exploits we see in the wild are privilege escalations that exist in some form or another on all operating systems. Not only is this wrong, it's also not how exploits work today. Because it's wrong. That is not happening, save a few cases where program installers deliberately assign bad permissions to their own files. I've seen that all the time on Linux boxes. No, it's not. The kernel is one file, and it loads dynamic drivers pretty much the same way Linux loads drivers. Yeah, that's called a privilege escalation exploit. They happen to every OS. Yes, they're bad. No, they're not at all the same thing as users having admin permissions. You don't know the first fucking thing about how NHS databases are configured. You don't know the first thing about how medical systems are configured. Frankly, a lot of the companies that put these systems together don't understand security in the first place, and no OS could save them from their own idiocy. These are frequently people who would have a chmod -R 777 in their install script if it were a unix-style platform. Go back to Slashdot or whatever random hole you crawled out of to waste our time.
  15. If the position was advertised as "graphics programmer" and they couldn't answer that question, I'd fail them for that alone. Yes, even a fresh-out-of-college dev. Same for any "physics programmer" position. If the position isn't specifically for somebody who should know their graphics/physics programming chops, it gets murky. If they're still expected to be doing "graphicsy stuff" and they can't answer that, it might not be an instant disqualification but it'd put them on pretty thin ice. It also raises the question of how they fail, whether we're talking about a relatively minor failure to understand the subtlety of a 4x4 transform (not so bad) or matrices being just magic boxes to them (pretty bad); see the 4x4 transform sketch after this list. If the position doesn't call for graphics or physics knowledge at all, I'd most likely let it slide as 'unfortunate but fixable'. It's something that could tip me toward another candidate but wouldn't necessarily do so. From your description of the job listing - assuming it's accurate - it seems like this last box is the one that applies, but evidently that was not a shared understanding among the team. --- On a personal side note, I'm hearing more and more that the democratization of engines and core tech, led in no small part by Unity, has resulted in a large junior workforce who are flubbing the basics because "oh, it's all abstracted away". This is not a healthy way to try to enter game development, a world where tools are fleeting but core fundamentals are forever. Don't do this to yourselves, people. Don't assume you can always live on top of convenient abstractions that hide you from the scary mathematics and computer architecture.
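
A minimal sketch of the explicit wide-character call described in posts 1 and 2, assuming a Windows build environment with user32 linked; this illustrates the approach, it is not code from the posts:

    #include <windows.h>

    int main()
    {
        // Calling MessageBoxW by name removes any dependence on the UNICODE
        // preprocessor define, and IntelliSense resolves a real function
        // declaration instead of the MessageBox macro.
        MessageBoxW(nullptr, L"Whatever", L"Example", MB_OK);

        // The ANSI entry point also remains callable by name if ever needed:
        // MessageBoxA(nullptr, "Whatever", "Example", MB_OK);
        return 0;
    }

The same pattern applies to any other Windows API call: use the W-suffixed name with wide string literals, and skip the TEXT macro entirely.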
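
For post 4, a rough sketch of the kind of packaging described: a small table of contents followed by one contiguous data blob, so loading a package becomes a single long sequential read rather than a seek per asset. The PackEntry layout and LoadPackSequentially function are hypothetical names invented for illustration:

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical pack layout: [uint32 count][PackEntry x count][asset data blob]
    struct PackEntry
    {
        char     name[64];   // asset identifier
        uint64_t offset;     // offset into the data blob
        uint64_t size;       // size in bytes
    };

    // Read the table of contents, then pull the entire data blob with one
    // sequential read instead of seeking to each asset individually.
    std::vector<char> LoadPackSequentially(const char* path, std::vector<PackEntry>& toc)
    {
        std::vector<char> blob;
        FILE* f = std::fopen(path, "rb");
        if (!f) return blob;

        uint32_t count = 0;
        std::fread(&count, sizeof(count), 1, f);

        toc.resize(count);
        std::fread(toc.data(), sizeof(PackEntry), count, f);

        // One long read covering every asset in the package.
        uint64_t total = 0;
        for (const PackEntry& e : toc)
            total = std::max<uint64_t>(total, e.offset + e.size);

        blob.resize(static_cast<size_t>(total));
        std::fread(blob.data(), 1, blob.size(), f);

        std::fclose(f);
        return blob;
    }

Individual assets are then sliced out of the blob by offset and size, which is what makes the read pattern predictable on optical media.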
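
For post 9, a sketch of two seemingly similar D3D11 resources, a structured buffer and a typed buffer, both read by shaders through a shader resource view; the function name and element count are made up for illustration, and error handling is omitted:

    #include <d3d11.h>

    // Create a structured buffer and a typed buffer holding the same amount of
    // data, each exposed to shaders through an SRV.
    void CreateSimilarBuffers(ID3D11Device* device,
                              ID3D11Buffer** structuredBuf,
                              ID3D11ShaderResourceView** structuredSrv,
                              ID3D11Buffer** typedBuf,
                              ID3D11ShaderResourceView** typedSrv)
    {
        const UINT elementCount = 1024;

        // Structured buffer: StructuredBuffer<float4> in HLSL.
        D3D11_BUFFER_DESC sdesc = {};
        sdesc.ByteWidth           = elementCount * 16;
        sdesc.Usage               = D3D11_USAGE_DEFAULT;
        sdesc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
        sdesc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        sdesc.StructureByteStride = 16;
        device->CreateBuffer(&sdesc, nullptr, structuredBuf);

        D3D11_SHADER_RESOURCE_VIEW_DESC ssrv = {};
        ssrv.Format              = DXGI_FORMAT_UNKNOWN;
        ssrv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
        ssrv.Buffer.FirstElement = 0;
        ssrv.Buffer.NumElements  = elementCount;
        device->CreateShaderResourceView(*structuredBuf, &ssrv, structuredSrv);

        // Typed buffer: Buffer<float4> in HLSL. Same data, different resource type.
        D3D11_BUFFER_DESC tdesc = {};
        tdesc.ByteWidth = elementCount * 16;
        tdesc.Usage     = D3D11_USAGE_DEFAULT;
        tdesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
        device->CreateBuffer(&tdesc, nullptr, typedBuf);

        D3D11_SHADER_RESOURCE_VIEW_DESC tsrv = {};
        tsrv.Format              = DXGI_FORMAT_R32G32B32A32_FLOAT;
        tsrv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
        tsrv.Buffer.FirstElement = 0;
        tsrv.Buffer.NumElements  = elementCount;
        device->CreateShaderResourceView(*typedBuf, &tsrv, typedSrv);
    }

Both views express essentially the same access pattern, yet on some hardware the two resource types take different fast paths, which is the leakage through the abstraction that the post refers to.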
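
For post 15, a small, self-contained illustration of the kind of 4x4 transform understanding the interview question probes: the upper 3x3 carries rotation/scale, the last column carries translation (row-major, column-vector convention), and composition order matters. The Mat4 type and helpers are generic math written for this sketch, not code from the post:

    #include <cmath>
    #include <cstdio>

    struct Mat4 { float m[4][4]; };              // row-major 4x4 matrix

    Mat4 Identity()
    {
        Mat4 r = {};
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }

    Mat4 Translation(float x, float y, float z)
    {
        Mat4 r = Identity();
        r.m[0][3] = x; r.m[1][3] = y; r.m[2][3] = z; // translation in the last column
        return r;
    }

    Mat4 RotationZ(float radians)
    {
        Mat4 r = Identity();
        r.m[0][0] = std::cos(radians); r.m[0][1] = -std::sin(radians);
        r.m[1][0] = std::sin(radians); r.m[1][1] =  std::cos(radians);
        return r;
    }

    Mat4 Mul(const Mat4& a, const Mat4& b)       // standard matrix product
    {
        Mat4 r = {};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += a.m[i][k] * b.m[k][j];
        return r;
    }

    int main()
    {
        // Rotate-then-translate and translate-then-rotate are different
        // transforms; a 4x4 matrix is not a magic box.
        Mat4 a = Mul(Translation(1, 0, 0), RotationZ(1.5708f)); // rotation applied first
        Mat4 b = Mul(RotationZ(1.5708f), Translation(1, 0, 0)); // translation applied first
        std::printf("a moves the origin to (%f, %f)\n", a.m[0][3], a.m[1][3]); // (1, 0)
        std::printf("b moves the origin to (%f, %f)\n", b.m[0][3], b.m[1][3]); // (~0, 1)
        return 0;
    }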