Hodgman

Moderator
  • Content count

    14617
  • Joined

  • Last visited

Community Reputation

51631 Excellent

About Hodgman

  • Rank
    Moderator - APIs & Tools

Personal Information

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

Recent Profile Visitors

80147 profile views
  1. Why A.I is impossible

    According to the people who keep attacking wage growth while also complaining that people don't spend enough money any more...
  2. DX12 Descriptor Resource Sets

    I think your terminology is a bit off -- a descriptor heap is a huge area of memory where descriptors can be allocated. You can only have a single combined SRV/CBV/UAV-type descriptor heap bound to the device at a time, and changing this binding is expensive, so you're encouraged to only ever have a single one bound. You can create extra ones as staging areas where you can pre-create SRVs, which can later be copied into your main/bound heap. Within a heap, you create descriptor tables, which get bound to the root signature.

    The resource binding model in our engine has 8 "resource list" slots, which each contain an array of SRVs. In D3D11, each "resource list" is mapped to a contiguous range of t# registers in the shader, e.g. if a shader has ResList#0 with 4 textures and ResList#1 with 2 textures, the binding system is configured to copy ResList#0 into SRV slots [0,3] and ResList#1 into SRV slots [4,5].

    In D3D12, each "resource list" is mapped to a root-descriptor-table parameter. Each shader generates a root signature where param#0 is a table of CBVs, and params #1,2,... are the res-list SRV tables. When submitting a draw call, we determine whether any res-list slots have changed since the previous draw call (or whether the previous draw call used a different root signature). If so, a descriptor table is allocated for each new res-list within a ring buffer, the SRVs for that root signature are copied into that new allocation from a non-shader-visible (staging) descriptor heap, and these new tables are set as root parameters. When creating a texture, its SRV is pre-created in the non-shader-visible descriptor heap, while the shader-visible descriptor heap is just a ring buffer of these transient tables. A rough sketch of that allocate-and-copy step is below.
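
    To make the ring-buffer idea concrete, here's a minimal sketch -- my own illustrative code, not our engine's actual source; the names and the fencing strategy are assumptions -- of allocating a transient table inside the bound shader-visible heap and filling it from the staging heap:

```cpp
#include <d3d12.h>

// Allocate 'srvCount' consecutive descriptors from a ring buffer inside the
// single bound shader-visible heap, copy the pre-created SRVs in from the
// non-shader-visible staging heap, and return the GPU handle to use as a
// root-descriptor-table parameter for this draw call.
D3D12_GPU_DESCRIPTOR_HANDLE AllocateResListTable(
    ID3D12Device* device,
    ID3D12DescriptorHeap* shaderVisibleHeap,   // the one bound ring-buffer heap
    D3D12_CPU_DESCRIPTOR_HANDLE stagingSrvs,   // first SRV of this res-list, in the staging heap
    UINT srvCount,
    UINT& ringCursor, UINT ringCapacity)
{
    const UINT stride = device->GetDescriptorHandleIncrementSize(
        D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
    if (ringCursor + srvCount > ringCapacity)
        ringCursor = 0;  // wrap -- a real engine must fence here so the GPU isn't still reading these slots

    // Copy the pre-created SRVs into the new allocation.
    D3D12_CPU_DESCRIPTOR_HANDLE dstCpu =
        shaderVisibleHeap->GetCPUDescriptorHandleForHeapStart();
    dstCpu.ptr += SIZE_T(ringCursor) * stride;
    device->CopyDescriptorsSimple(srvCount, dstCpu, stagingSrvs,
                                  D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

    // Return the matching GPU handle.
    D3D12_GPU_DESCRIPTOR_HANDLE dstGpu =
        shaderVisibleHeap->GetGPUDescriptorHandleForHeapStart();
    dstGpu.ptr += UINT64(ringCursor) * stride;
    ringCursor += srvCount;
    return dstGpu;  // pass to ID3D12GraphicsCommandList::SetGraphicsRootDescriptorTable
}
```
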
  3. For our friends down under...

    Yeah, it's pretty fun™. Long story short: a conservative government was elected in 2013, which immediately passed an "austerity" budget (and ate an onion), which was the style at the time. Part of that budget involved scrapping a $10M federal fund that invested in games industry projects, and actually banned the federal film funding agency from ever investing in gamedev whatsoever... despite the fact that this fund was actually turning a profit, making money for the taxpayer, creating jobs for the local economy, and helping the industry rebuild after the GFC had destroyed 1500 gamedev jobs a few years prior. Fun side note - it wasn't actually an austerity budget, because despite cutting billions of dollars from public projects, they then spent all of their savings and more on new policy, like making sure every school has a chaplain.

    Predictably, the industry wrote a strongly worded letter to the government. A minority party set up a senate inquiry to formally investigate this 'WTF' moment. That investigation recommended putting the $10M back in place and lifting the ban, because money and jobs and growth... and then, predictably, the govt ignored it. But they're obligated to publish some kind of response, hence the deadline mentioned in that article, which, predictably, has again been ignored. None of us are really holding our breath.

    The conservatives love to repeat "jobs and growth!" and "infrastructure!" and "innovation!", but we all know it's hot air. Over the same period they decided to cancel our national gigabit fiber-optic internet infrastructure project, and instead opted to spend the same amount of money just buying back the past-shelf-life, decrepit, failing national copper network that they privatized in the 90's, and then pretend that 20Mbps DSL is all the speed that anyone will ever need. That's the forward-thinking, innovative souls that we're dealing with here. Thankfully, the state govts are not quite as insane, and a few of them have really stepped up to help the industry rebuild, which completely offsets the attempts of the federal govt to sabotage us.
  4. Yeah, D3D11 is safe in that situation by default. To get the unsafe behaviour of D3D12/Vulkan/GL, you can use the overlapping UAV extension from AGS/NVAPI -- see the sketch below.
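
    On NVIDIA the extension is a begin/end pair around the work that you promise doesn't conflict; this is a from-memory sketch (check the NVAPI headers for exact signatures -- AGS exposes an equivalent pair for AMD):

```cpp
#include <nvapi.h>  // NVIDIA's NVAPI SDK

// Between these two calls the driver may let draws/dispatches that touch the
// same UAV overlap, instead of inserting D3D11's automatic synchronization.
NvAPI_D3D11_BeginUAVOverlap(d3dContext);  // d3dContext: your ID3D11DeviceContext
// ... issue the dispatches that you know don't actually conflict ...
NvAPI_D3D11_EndUAVOverlap(d3dContext);
```
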
  5. Why A.I is impossible

    FWIW, the one group of religious dogmas that you're referring to does not equal all religious people. Spiritualism is pretty diverse. Plenty of individuals and belief systems assign souls to animals. Not every religion is about having to "save souls" either. Many also feature a single soul, being kind of god itself, running through everything, which renders the question of whether any specific thing has a soul nonsensical. That's actually the famous Mu koan in Asia. Even Catholicism tries to incorporate this with the holy ghost, but we all know how full of contradictions it can be.

    Chemical signalling and electrical signalling are completely linked. You can't have one without the other. Any sufficiently advanced simulation would have to incorporate models of both in order to function. That's also not an impossible task. We do complex chemical, atomic and even quantum simulations all the time. It's just a matter of scale and cost...
  6. Why A.I is impossible

    Isn't that field of science called psychology? Or science itself is just applied philosophy at the end of the day... BTW, they already have metrics for measuring whether an object is conscious or not, which have been demonstrated to be able to tell the difference between normal awake brains, sleeping brains, dreaming brains, anesthetized brains, vegetative comatose brains, minimally conscious brains, and "locked-in syndrome" brains (which would otherwise appear similar to other comatose brains, but on this metric show high levels of consciousness).

    Science does peer into those mechanisms. People's free will is surprisingly easy to influence... Again, that's psychology (or hypnotism too, if you like). The exact mechanisms of how this process works, though -- or of any specific human action, when trying to explain the entire chain of consequence from genesis of thought to action -- are too complex for any human to ever understand (a thousand trillion synaptic connections, multiplied by all the other variables, is an inconceivable amount of data, even when just considering a single moment in time...). There's also the camp that believes the actual physical mechanisms behind thought are rooted in quantum behavior, which is probabilistic, which makes the whole thing "just physics" without having to say that it's deterministic (keeping the "free" part of "free will" free, and leaving the door open for a God who rolls dice).
  7. Why A.I is impossible

    Yeah, so why can't we build a robot suit that an external consciousness can wear? If humans are all just molecules and robots are just molecules, what's the problem?
  8. Why A.I is impossible

    You're doing this:
    http://highexistence.com/spiritual-bypassing-how-spirituality-sabotaged-my-growth/
    http://highexistence.com/10-spiritual-bypassing-things-people-total-bullshit/
    Even if our brains are some kind of magical antenna that channels in a magical spirit consciousness from another plane of existence... what's stopping us from building our own mechanical antennae that channel magical spirit consciousness into our AIs?
  9. I was adding PSVR support. The existing Oculus support was a complete hack - it read pitch/yaw movements from the headset and injected fake mouse x/y movement commands into some XML-routed input mammoth (so forget about HMD roll or position tracking -- sickness city coming right up!).

    As well, every engine will have some kind of GPU abstraction -- e.g. in Unreal it's called RHI -- but I found myself thoroughly confused as to why, in the PS4 renderer source files, there was a mix of Cry's own abstract types and actual D3D11 code! "WTF, you can't compile D3D11 on a PS4!" I thought... Well, some numpty apparently thought that doing their own half-complete port of some parts of D3D11 over to the PS4, and then continuing to write a mixture of partially-abstracted/partially-raw-D3D11 code, would be easier than actually making a solid GPU abstraction layer first (roughly the kind of interface sketched below)...

    FWIW, in my opinion, part of the "but can it play Crysis" meme is also because Crysis (and the original Far Cry) were graphically outstanding for their release date. Crysis was a GPU killer, but not without excuse (it drew dynamic jungles at a time when that was a dream feature)! Also, professionally, Unreal has always been known for being extremely bloated and having sluggish performance / an overly generic architecture. But that's both a feature and a bug, depending on your situation.
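
    For clarity, "a solid GPU abstraction layer" just means something like the toy interface below -- the names are my own illustration, not CryEngine's or Unreal's actual types: shared renderer code talks only to the abstract interface, and each platform backend implements it, so raw API calls never leak into shared files.

```cpp
#include <cstddef>

// Opaque, per-platform resource handles.
struct GpuTexture;
struct GpuBuffer;

// The only GPU interface that shared engine/game code is allowed to see.
class IGpuDevice
{
public:
    virtual ~IGpuDevice() = default;
    virtual GpuTexture* CreateTexture(int width, int height, int format) = 0;
    virtual GpuBuffer*  CreateBuffer(std::size_t bytes, const void* initialData) = 0;
    virtual void        Draw(int vertexCount /* + pipeline state, bindings, ... */) = 0;
};

// A D3D11 build links in a D3D11 implementation of IGpuDevice; a PS4 build
// links in a GNM one. No raw D3D11 code ever appears in shared renderer code.
```
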
  10. As mentioned above, Melbourne/Victoria is great. The Queensland/SA Governments are getting on board too. It's the federal government who are cooked (they de-funded industry support that was actually making money for the taxpayer -- austerity for the sake of it!!). If you're in Melbourne, look into Film Victoria, Creative Victoria, and local councils (e.g. City of Melbourne) for grants, and IGDAM is a good hub for community (there are likely equivalent IGDA chapters in other capitals...). There's also thearcade.melbourne (a local co-working hub), which was set up by the GDAA (who are also amazing - if you need to find your way around the local industry, they can direct you / make introductions).

    The GFC decimated the local industry back in ~2008-2010, which led to a lot of developers moving offshore at the time. Many of the ones who remained have started a huge number of indie studios, which has really changed the way that the local industry functions now -- it used to be a lot of work-for-hire for US publishers, but now there's a lot of original IP work. The annual GCAP conference attracts about 1000 developers, so there's still a decent amount of talent here.

    Wages are depressed in Aus compared to the US, and are depressed in gamedev compared to other tech firms. A veteran game dev can cost under AU$10k/mo here, so $6M to make a Diablo-style game sounds fair. Lots of "crowdfunded" games only get a small portion of their funds from crowd-funding, but use it as proof of market interest (and prototype funding) in order to attract further investment. If your founders are extremely talented developers, they might be worth over $100k each... however, as founders, perhaps they're willing to work for equity/ownership instead of salary. So instead of paying 4 founders $520k in joint salaries, you only pay them $120k bare living expenses between them all, and boom, you've just saved $400k that you can now spend on other staff (maybe 8x $50k salaried staff?).

    To keep our company viable, we work in game-dev and also in serious games, which boring government (e.g. Austrade) and corporate backers can get behind. Perhaps you can identify some technology cross-over that you could use for both a game and a training/simulation purpose? e.g. for us, it's racing games and traffic simulation / driver training.
  11. IIRC, there is no vertical UV difference between APIs. There is a difference in the texture-data loading APIs as to whether you're providing rows of pixels/blocks from the top down or the bottom up. The encoding of the blocks themselves is identical. You can use the same file format and UVs as long as you re-order the rows as required (a sketch is below). Or you can use the same file format without reordering the rows (so that your textures are flipped) and then flip all your UVs.
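
    The row re-ordering is just an in-place vertical flip over rows (or rows of blocks, for compressed formats) -- a minimal sketch with names of my own choosing:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Swap rows top-for-bottom so the same texture file works whether the target
// API expects the first stored row to be the top or the bottom of the image.
// For block-compressed formats, treat one row of blocks as a "row" here, with
// 'rowPitch' being the byte size of that block row.
void FlipRows(uint8_t* data, std::size_t rowPitch, std::size_t rowCount)
{
    std::vector<uint8_t> tmp(rowPitch);  // scratch space for one row
    for (std::size_t top = 0, bot = rowCount - 1; top < bot; ++top, --bot)
    {
        std::memcpy(tmp.data(),              data + top * rowPitch, rowPitch);
        std::memcpy(data + top * rowPitch,   data + bot * rowPitch, rowPitch);
        std::memcpy(data + bot * rowPitch,   tmp.data(),            rowPitch);
    }
}
```
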
  12. There's certainly quite a bit of overlap between them. A graphics programmer on an engine team will work with APIs like D3D/GL in order to implement rendering features, like deferred shading or shadow mapping, as well as general stuff like scene management and generic shaders. They'll also work on tools, such as importers for art files, and have to work with artists as their clients. A graphics programmer on a game team will also work on game-specific special effects, post-processing, and content challenges.

    A tech artist is not as likely to use D3D/GL/etc directly, and won't likely work on engine features such as scene management. They are the glue between the artists and the programmers, though - so anything on that interface is stuff that they will work on. That includes shader code, importers, exporters, plug-ins and scripts for art tools, automation of processes such as baking, helping with naming conventions, and making sure that artists actually follow the right conventions. They also should know how to use all the art tools that they're writing plugins/exporters/scripts for (but they don't have to be a good artist - they just need the technical knowledge of an artist's workflow).
  13. They've got billions of dollars behind them. They don't need to make money. They're trying to define and capture a market that doesn't yet exist, but which they speculate will exist in the future. If, in the future, billions of people are going to own VR devices, then having 50% market share at that point will make up for their current losses. Also look at companies like Amazon, which consistently make massive losses yet continue to consistently expand. As for the plan to get the whole body into VR - I don't have a handy link for that, but I heard it first-hand at a developer presentation back when they launched the CV1 and Touch engineering samples.
  14. VR in 2017 is not what VR will be forever. Oculus has started with getting three fingers into VR, but the goal is your whole body. FB will jump into it for holographic hangouts when the technology is capable of that. In the meantime they want to maintain market share and a position as a market leader.
  15. Does u,v,w texture mapping exist?

    Cube textures aren't flat; cube textures are a cube. The texture coordinate is a 3D vector pointing in the direction of the texel you'd like to sample...
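
    As a purely illustrative software version of that lookup (the conventions here are my own -- exact per-face orientations vary slightly between APIs): the dominant axis of the direction picks the face, and the other two components, divided by the dominant one, become the 2D position on that face.

```cpp
#include <cmath>

struct CubeTexel { int face; float u, v; };  // face: 0..5 = +X,-X,+Y,-Y,+Z,-Z

// Map a 3D direction vector to a cube face and a [0,1]^2 position on it.
CubeTexel DirectionToCubeTexel(float x, float y, float z)
{
    const float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, sc, tc;  // major-axis magnitude, and the two minor components
    CubeTexel t;
    if (ax >= ay && ax >= az)      { t.face = x > 0 ? 0 : 1; ma = ax; sc = x > 0 ? -z :  z; tc = -y; }
    else if (ay >= ax && ay >= az) { t.face = y > 0 ? 2 : 3; ma = ay; sc =  x;              tc = y > 0 ? z : -z; }
    else                           { t.face = z > 0 ? 4 : 5; ma = az; sc = z > 0 ?  x : -x; tc = -y; }
    t.u = 0.5f * (sc / ma + 1.0f);  // remap [-1,1] to [0,1]
    t.v = 0.5f * (tc / ma + 1.0f);
    return t;
}
```
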