Promit

Senior Moderators
  • Content count: 15702

Community Reputation: 13238 Excellent

About Promit

  • Rank: Moderator - Graphics and GPU Programming

  1. This is true in the world where your game lives on a hard drive. Some of us still remember the world where the game lives on optical media, which practically mandates this type of packaging to have any semblance of control over read patterns. If you can block out a package so a whole batch of assets comes in with one long sequential read, it makes a world of difference to loading from optical; there's a rough sketch of that kind of packed layout after this list. Probably not relevant to the question at hand, of course, but I don't want to lose the plot of why some of these things were developed in the first place.
  2. Going to take a slightly different tack here. I've had a couple of informal discussions, and it appears that executives increasingly consider custom engines/tech to be a liability. This is different from years past. It used to be that an engine was considered a very large but viable investment. In other words, a lot of studios used licensed tech mostly because they couldn't afford to build out custom systems, but there was generally a sense that there was indeed value in doing so.

     In 2017, however, it's considered a poor use of funds to even build custom tech at a studio level, regardless of whether the studio can afford it or not. It's only worthwhile, in management's eyes, at a publisher level where the engine can be shared across a lot of teams. That's the energy behind Frostbite, for example. There are some studios which have long-term investments in custom tech, although publishers are largely working to share those codebases across their member studios. Otherwise, execs don't want to have custom tech on hand. It's considered a problem that can be farmed out to someone else, with minimal gain to doing it yourself.

     Having your own engine means a permanent staff of X people, some of whom can't be replaced at any cost because of how well they know that system. Managers don't like irreplaceable staff. They also don't like long-term variable costs. Training/onboarding new hires is harder. Hell, hiring engine devs has itself become difficult. Homebrew mobile tech is so rare as to be effectively nonexistent in the marketplace.

     What that really leaves us with is a rather small group of studios who are building tech as a significant differentiating factor. Games in this category include No Man's Sky and Ashes of the Singularity. While there are arguments to be made both ways, I personally dislike the increased homogenization that is the inevitable result of centralizing on a handful of technology packages. There are consequences from architectural decisions that impact the final look and feel of games, more so than people often wish to admit.
  3. The game industry is relatively forgiving about substituting experience for a degree - but not just any random work experience. If I have an applicant who doesn't have a degree, I need to be assured of a few things:
     • There's a good reason for not having a degree. Money (in the US) is a valid one. "I didn't think it was useful" is not.
     • The work experience and independent projects are sufficient to produce a capable engineer in general.
     • The actual abilities and skillsets are directly relevant to the job at hand.
     • There is intellectual potential for continued growth and advancement.
     In that light, I have some comments.

     "It's two years of formation, VERY practical, it doesn't enter into abstract things like algebra." Red flag. Algebra (and linear algebra) are not abstract nonsense; they're absolutely foundational. If you can't demonstrate basic mathematical competence, then I have zero interest in hiring you for any programming position.

     "Right now, I'm doing a 1 month Unity course in my city, organized by a university, very solid (I was given a scholarship)." While this is a perfectly good way to achieve personal growth, it has no relevance to employment.

     "The design teacher (a really good professional) told us that he can help us to get a job as QA tester in a videogame company." While I have some friends who have followed this route, I don't advise it. In general, I'd rather see someone doing non-game programming work than spending time in the QA trenches just to say they were in games. QA is a better route for those who have minimal relevant skills of any sort.

     "Also, I've been given a job offer for Java Programmer in a consulting company. It's not videogames, but it's programming experience. That's a safer route." Unfortunately, Java consulting is very unlikely to get you into the game industry. It's a good way to make ends meet while doing independent projects that can demonstrate your abilities, though. To be candid, I get the distinct vibe that foundational Java programming is all you know. That is not even remotely sufficient to get a games job.

     While both the university and job experience routes are ultimately viable, neither one automatically gets you there. In either situation, you're going to need to do substantial independent work to be considered qualified for a game programmer job. Generally the university path is much more likely to give you a smooth entry into the industry, as it raises fewer questions about your abilities and decisions. The main reason for not going that route in the US is the financial challenge of attending a university. If you're going to attempt to use job experience instead, it should either be because you need to make ends meet or because the job experience is strongly relevant to the game industry.
  4. There are occasional bugfixes in GitHub but we haven't done a fully packaged release basically since MS stopped doing "DirectX" releases. I'm happy to merge this change, but at this stage we tend to encourage people to do their own builds. I've been thinking about doing a modernized version of the library (DX11/12, XA2, XI against current languages and libs) but it hasn't materialized yet.
  5. Insert South Park reference here
  6. In general, the reason for different types of seemingly similar resources is that at least one major IHV has (potentially legacy) fast-path hardware that differentiates between them. There are a number of buffer types which perform differently on NV GPUs, while AMD's GCN GPUs simply don't care. You're seeing hardware design issues leaking through the software abstractions. Ideally, we would just have buffers read by shaders and nothing else, not even textures. (I mean come on, texture buffers?) GPUs haven't reached that level of generalized functionality yet. MS originally pitched this design when they were sketching out D3D 10, and of course the IHVs explained it wasn't feasible. There's a short D3D11 sketch after this list showing a few of these buffer flavors side by side.
  7. While people have covered the social side of things, I'm going to jump in and claim that C# is overall a better technical choice of language than Java. Yes, I went there. Java was designed first and foremost as safety scissors, a tool for people who couldn't be trusted not to hurt themselves. And honestly, that's true for most developers, particularly in the web and client/business app space. There was absolutely no desire to expose any "system level" functionality. It was meant to be a simple, sandboxy, safe environment to do most of the boring every day software development that makes the world tick. While C# partly shares this worldview as well, both the language and the underlying runtime were designed with the option to step outside that box, as long as you do it explicitly. (Notably, VB.NET was not designed this way.) There's a lot more capability in C# to manipulate memory, integrate with native libraries, control allocation, and do a lot of the direct manipulation of buffers that is inappropriate for most types of apps but is crucial for graphics code in particular and to some extent game code in general. It's the relative ease of doing many common game and graphics programming tasks that has made C# preferable here and in the industry in general. It's not that you can't do things in Java, but it always feels like you're fighting the language, working through kludges like FloatBuffer to get things done.
  8. I exhibited at this conference in 2016, immediately adjacent to the prize winning VR Spacewalk. At one point, two actual astronauts who had spacewalked previously came by and tried it out. I can very much vouch for the value and credibility of FoST.
  9. Oh ho, seems things are getting a bit more interesting still. Fearing Shadow Brokers leak, NSA reported critical flaw to Microsoft
  10. There's some evidence supporting this view, while the other two are purely speculation. There is no need to speculate on this point, as MS has a well-established source code access program which goes out to many different organizations. For that matter, I personally had full Windows source access. Of course, China is simply using hardware and software where they added the backdoors themselves, so it's not particularly helpful to those who would like neither China nor the US to have access to their systems. And no, before someone invariably brings it up, going to open source doesn't even remotely address the problem. All backdoors are broken eventually...

      At the end of the day, the US government requires significant visibility into systems running all kinds of operating systems and software, whether the parties responsible for that software are cooperative or not. That includes a variety of foreign and non-consumer equipment. This means that they have to have a major program to penetrate all of those systems, and we know factually that they do exactly that. Once you invest in all of that infrastructure, there is essentially no need to coerce Microsoft into adding or protecting vulnerabilities (which weren't present in W10 in the WCry case, by the way). You already have everything, and you have it on your own terms. The conspiracy theory adds a bunch of extra idiotic steps for no reason. Spooks are nothing if not ruthlessly efficient.
  11. What complete and utter nonsense. Take your conspiracy mongering garbage somewhere else. This was a system bug like anything else, including Heartbleed.

      Entirely false. System components are now checksummed, admin privileges are not assigned by default, and there aren't significant configuration problems. While these things CAN be circumvented, the circumvention approaches are just as effective on other operating systems. We live in a world where it's now likely possible at any given time to attack a Linux server running on a VM, jump the Xen hypervisor, and take over the host. We have SEEN these bugs being sold in the wild.

      The registry does not work that way.

      I said it already, but the permission system is a perfectly robust ACL-based design shared by many other systems. I'm more concerned that you might think the old owner/group/user octal permission system is a good thing, which would be a shockingly lax security approach.

      In what way, exactly? You don't know, do you? Come back when you can explain why it's somehow less separated than Linux or OSX.

      Are you just making shit up now? No, it wasn't. It was assigning admin access to all users by default, which was bad. That's no longer the case, and the exploits we see in the wild are privilege escalations that exist in some form or another on all operating systems.

      Not only is this wrong, it's also not how the exploits today work.

      Because it's wrong. That is not happening, save a few cases where program installers are deliberately assigning bad permissions to their own files. I've seen that all the time on Linux boxes.

      No, it's not. The kernel is one file, and it loads dynamic drivers pretty much just like Linux loads drivers.

      Yeah, that's called a privilege escalation exploit. They happen to every OS. Yes, they're bad. No, they're not at all the same thing as users having admin permissions.

      You don't know the first fucking thing about how NHS databases are configured. You don't know the first thing about how medical systems are configured. Frankly, a lot of the companies that put these systems together don't understand security in the first place, and no OS could save them from their own idiocy. These are frequently people who would have a chmod -R 777 in their install script if it were a unix-style platform. Go back to Slashdot or whatever random hole you crawled out of to waste our time.
  12. If the position was advertised as "graphics programmer" and they couldn't answer that question, I'd fail them for that alone. Yes, even a fresh-out-of-college dev. Same for any "physics programmer" position. If the position is not specifically somebody who should know their graphics/physics programming chops, it gets murky. If they're still expected to be doing "graphicsy stuff" and they can't answer that, it might not be an instant disqualification, but it'd put them on pretty thin ice. It also raises the question of how they fail: whether we're talking about a relatively minor failure to understand the subtlety of a 4x4 transform (not so bad; see the short refresher after this list) or whether matrices are just magic boxes to them (pretty bad). If the position doesn't call for graphics or physics knowledge at all, I'd most likely let it slide as 'unfortunate but fixable'. It's something that could tip me to another candidate but wouldn't necessarily do so. From your description of the job listing - assuming it's accurate - it seems like this is more the box that applies, but evidently that was not a shared understanding among the team.

      On a personal sidenote, I'm hearing more and more that the democratization of engines and core tech, led in no small part by Unity, has resulted in a large junior workforce who are flubbing the basics because "oh, it's all abstracted away". This is not a healthy way to attempt to enter game development, a world where tools are fleeting but core fundamentals are forever. Don't do this to yourselves, people. Don't assume you can always live on top of convenient abstractions that hide you away from the scary mathematics and computer architecture.
  13. You can't get a KMD into a running Windows instance so trivially. Driver signing was introduced to shut off this very attack vector, and modern day 64 bit Windows will not load your unsigned driver unless you put it into dev mode at boot time. While you could do that, it does undermine the effectiveness of the demo. It is possible to crash most GPU kernel drivers with the right combination of bugs. Unfortunately it's pretty hard to predict what that combination is likely to be ahead of time if you know nothing about the target system.
  14. I've been discussing this recently with several industry contacts, and it seems that visible contributions to a project on GitHub - yours or someone else's - are worth quite a lot nowadays. Potentially more than having "polished demos" of your own, as people seem to care less about fit and finish these days. The nice thing about having your work visible on those sites is that it gives a glimpse of how your actual workflow looks, which is genuinely job-relevant. Even if it's your own portfolio demo, it should be open source with the development process visible. Mind you, this is my anecdotal experience. But when I'm asking people what they want to see in junior candidates, I'm not hearing "demos" anymore. I'm hearing "internships" and "open source work".
  15. In D3D 11 or OpenGL, there is little or no benefit to actual multithreaded rendering. There is a benefit from doing as much work as possible in multithreaded setup for rendering - you can map a bunch of buffers and write to them simultaneously from threads, you can load geometry and store draw calls, you can sort for the most efficient render order, etc. How much you gain from these things depends on setting up your system to do as much work as possible on independent threads working in their own memory, then combining the results efficiently. In the case of D3D 12 or Vulkan, there are massive gains to be had from fully multithreaded draw submission as the draw call counts get large; there's a skeleton of that pattern after this list. Note that a thousand isn't considered large anymore if you're setting up your rendering efficiently, not stalling, not doing lots of state changes, etc. Think tens of thousands.
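
To make the packaging point in item 1 concrete, here's a minimal sketch of that kind of packed layout in C++. The format, the PackEntry/PackFile types, and the LoadPack helper are invented for illustration - not any particular engine's shipping format - but the idea is the same: a small directory up front, every payload laid out back to back, and the whole package pulled in with sequential reads rather than per-file seeks.

    // Minimal "pack file" layout: [count][directory entries][payload blob].
    // Everything below is illustrative; a real pipeline would add alignment,
    // compression, versioning, and so on.
    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    struct PackEntry {
        uint64_t nameHash;   // hashed asset path, resolved at pack-build time
        uint64_t offset;     // byte offset of the payload within the blob
        uint64_t size;       // payload size in bytes
    };

    struct PackFile {
        std::vector<uint8_t> blob;                        // every asset, back to back
        std::unordered_map<uint64_t, PackEntry> entries;  // hash -> location in blob
    };

    // Pulls in the directory and every payload for this package with sequential
    // reads only - on optical media that's the whole point: no seeking around
    // between individual files.
    bool LoadPack(const char* path, PackFile& out)
    {
        FILE* f = std::fopen(path, "rb");
        if (!f) return false;

        uint32_t count = 0;
        if (std::fread(&count, sizeof(count), 1, f) != 1) { std::fclose(f); return false; }

        std::vector<PackEntry> dir(count);
        if (count && std::fread(dir.data(), sizeof(PackEntry), count, f) != count) {
            std::fclose(f); return false;
        }

        // Everything after the directory is one contiguous payload region.
        long payloadStart = std::ftell(f);
        std::fseek(f, 0, SEEK_END);
        long payloadEnd = std::ftell(f);
        std::fseek(f, payloadStart, SEEK_SET);

        out.blob.resize(static_cast<size_t>(payloadEnd - payloadStart));
        if (!out.blob.empty() &&
            std::fread(out.blob.data(), 1, out.blob.size(), f) != out.blob.size()) {
            std::fclose(f); return false;
        }
        std::fclose(f);

        for (const PackEntry& e : dir)
            out.entries[e.nameHash] = e;
        return true;
    }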
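
For the buffer flavors mentioned in item 6, here's a rough D3D11 sketch of three resources that are all just "memory a shader reads" yet go through distinct creation paths: a constant buffer, a structured buffer, and a raw (byte-address) buffer. The Light struct and the sizes are arbitrary example values, and error handling is omitted.

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    struct Light { float posRadius[4]; float color[4]; };   // example 32-byte element

    void CreateBufferFlavors(ID3D11Device* device)
    {
        // 1) Constant buffer: bound via *SSetConstantBuffers, size padded to 16 bytes.
        D3D11_BUFFER_DESC cbDesc = {};
        cbDesc.ByteWidth      = 256;
        cbDesc.Usage          = D3D11_USAGE_DYNAMIC;
        cbDesc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
        cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        ComPtr<ID3D11Buffer> constantBuffer;
        device->CreateBuffer(&cbDesc, nullptr, &constantBuffer);

        // 2) Structured buffer: StructuredBuffer<Light> in HLSL, read through an SRV.
        D3D11_BUFFER_DESC sbDesc = {};
        sbDesc.ByteWidth           = sizeof(Light) * 1024;
        sbDesc.Usage               = D3D11_USAGE_DEFAULT;
        sbDesc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
        sbDesc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        sbDesc.StructureByteStride = sizeof(Light);
        ComPtr<ID3D11Buffer> structuredBuffer;
        device->CreateBuffer(&sbDesc, nullptr, &structuredBuffer);

        D3D11_SHADER_RESOURCE_VIEW_DESC sbSrv = {};
        sbSrv.Format              = DXGI_FORMAT_UNKNOWN;        // stride comes from the buffer itself
        sbSrv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
        sbSrv.Buffer.FirstElement = 0;
        sbSrv.Buffer.NumElements  = 1024;
        ComPtr<ID3D11ShaderResourceView> structuredView;
        device->CreateShaderResourceView(structuredBuffer.Get(), &sbSrv, &structuredView);

        // 3) Raw buffer: ByteAddressBuffer in HLSL - same bytes, yet another flavor.
        D3D11_BUFFER_DESC rawDesc = {};
        rawDesc.ByteWidth = 64 * 1024;
        rawDesc.Usage     = D3D11_USAGE_DEFAULT;
        rawDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
        rawDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS;
        ComPtr<ID3D11Buffer> rawBuffer;
        device->CreateBuffer(&rawDesc, nullptr, &rawBuffer);

        D3D11_SHADER_RESOURCE_VIEW_DESC rawSrv = {};
        rawSrv.Format                = DXGI_FORMAT_R32_TYPELESS;
        rawSrv.ViewDimension         = D3D11_SRV_DIMENSION_BUFFEREX;
        rawSrv.BufferEx.FirstElement = 0;
        rawSrv.BufferEx.NumElements  = rawDesc.ByteWidth / 4;   // counted in 32-bit words
        rawSrv.BufferEx.Flags        = D3D11_BUFFEREX_SRV_FLAG_RAW;
        ComPtr<ID3D11ShaderResourceView> rawView;
        device->CreateShaderResourceView(rawBuffer.Get(), &rawSrv, &rawView);
    }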
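
To make the 4x4 transform point in item 12 concrete, this is the basic homogeneous-coordinate identity at stake, with R the upper-left 3x3 rotation/scale block and t the translation column:

    \[
    \begin{bmatrix} R & \mathbf{t} \\ \mathbf{0}^{\mathsf{T}} & 1 \end{bmatrix}
    \begin{bmatrix} \mathbf{p} \\ 1 \end{bmatrix}
    =
    \begin{bmatrix} R\mathbf{p} + \mathbf{t} \\ 1 \end{bmatrix},
    \qquad
    \begin{bmatrix} R & \mathbf{t} \\ \mathbf{0}^{\mathsf{T}} & 1 \end{bmatrix}
    \begin{bmatrix} \mathbf{v} \\ 0 \end{bmatrix}
    =
    \begin{bmatrix} R\mathbf{v} \\ 0 \end{bmatrix}
    \]

A point (w = 1) picks up the translation; a direction (w = 0) does not. A candidate for whom the matrix is a magic box can't tell you why, and that's exactly the distinction described above.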
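
And for item 15, a skeleton of the fully multithreaded submission pattern in D3D12: each worker thread records into its own command allocator and command list, and the main thread submits the whole batch with a single ExecuteCommandLists call. Device, queue, pipeline state, resource barriers, and fence synchronization are assumed to exist elsewhere; the RecordAndSubmit function and WorkerContext struct are just names for this sketch, not any particular engine's API.

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    struct WorkerContext {
        ComPtr<ID3D12CommandAllocator>    allocator;  // one per thread; allocators are not thread-safe
        ComPtr<ID3D12GraphicsCommandList> list;       // one per thread
    };

    void RecordAndSubmit(ID3D12Device* device,
                         ID3D12CommandQueue* queue,
                         ID3D12PipelineState* pso,
                         unsigned threadCount)
    {
        std::vector<WorkerContext> workers(threadCount);
        for (auto& w : workers) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&w.allocator));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      w.allocator.Get(), pso, IID_PPV_ARGS(&w.list));
        }

        // Each thread records its own slice of the frame's draws in parallel.
        std::vector<std::thread> threads;
        for (unsigned i = 0; i < threadCount; ++i) {
            threads.emplace_back([&workers, i] {
                ID3D12GraphicsCommandList* cl = workers[i].list.Get();
                // ... set the root signature and viewport, bind this bucket's
                // resources, and issue its DrawIndexedInstanced calls here ...
                cl->Close();   // finish recording on the worker thread
            });
        }
        for (auto& t : threads) t.join();

        // Single submission point: one batch of command lists to the queue.
        std::vector<ID3D12CommandList*> lists;
        for (auto& w : workers) lists.push_back(w.list.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(lists.size()), lists.data());

        // A real renderer would signal a fence here and wait before reusing the
        // allocators for the next frame.
    }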