
Hodgman

Moderator
  • Content count: 14910
  • Joined
  • Last visited
  • Days Won: 15

Hodgman last won the day on August 14

Hodgman had the most liked content!

Community Reputation

52102 Excellent

About Hodgman

Personal Information

  • Website
  • Role
    Game Designer
    Programmer
    Technical Artist
    Technical Director
  • Interests
    Programming

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

Recent Profile Visitors

89179 profile views
  1. Hodgman

    Writing portable Vulkan

    I'm focused on Dx12 rather than Vulkan, but a lot of the challenges are the same.

    If you don't use the compute queue, you're in exactly the same boat as you were with D11/GL. It's an amazing new optimisation opportunity, but old drivers don't use it at all, so if you don't use it (yet) then you're not really missing out.

    The transfer queue is most useful for creating buffers/textures. It's the only way to actually saturate a PCIe bus. Make a code path that uploads textures/buffers using this method, then make a fallback that uses the CPU to copy the data (for on-board / Intel GPUs). In D12, Intel does actually expose a DMA queue which is backed by actual hardware, but it's slower than the CPU copy implementation. It's only useful when you really want free background copies and are OK with huge latency.

    Memory management is a pain, but ultimately a better situation than GL/D11. If you went over your budget in the old APIs, the drivers would just silently insert 40ms-long paging operations in the middle of your frames, dropping you off a performance cliff with no notification. Worse, there was no way to query your application's memory budget on the old APIs, which ironically made memory scaling more important. You're in the same boat now, but without the automatic "let's break interactivity and pretend everything is fine" paging built in. There are roughly two strategies you can choose from, or you can do both...

    (1) Query your app's memory budget and try as best as you can to stay under it. When loading a level, load all the descriptions of your buffers/textures first, then, as a large batch operation, determine the memory requirements. If that would put you over budget, drop the LOD on some resources (e.g. what if you didn't load the top mip level?) and recalculate. Repeat until you've got a configuration that's in budget, and then start streaming in that set of assets.

    (2) Each frame a resource is used, mark the heap that it's allocated within as being required on this frame number. If you hit an out-of-memory condition, try to find a heap that has not been used for a while and transfer it to system memory.

    Having relocatable resources is a massive pain in the ass though, so I'd focus on doing #1 more than #2, as the second option destroys performance just to keep limping along anyway... (just like a D11/GL memory oversubscription event).

    Unfortunately, application memory budgets are dynamic (e.g. the user might start your game after a fresh reboot, then alt+tab and open 100 Chrome tabs...), so ideally you should respond to changing situations. Alternatively, you can just assume that users won't open other GPU-hungry apps while their game is running...
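    For strategy #1 on the D12 side, DXGI exposes the budget query directly. A minimal sketch, assuming you've already got hold of an IDXGIAdapter3 (how you react to going over budget is up to you):

    // Sketch: query the application's video memory budget via DXGI (D3D12 path).
    // Assumes Windows 10+, where IDXGIAdapter3 is available.
    #include <dxgi1_4.h>

    DXGI_QUERY_VIDEO_MEMORY_INFO QueryLocalBudget( IDXGIAdapter3* adapter )
    {
        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        // Node 0, local (on-GPU) memory. NON_LOCAL would be the system-memory segment.
        adapter->QueryVideoMemoryInfo( 0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info );
        // info.Budget       = how many bytes the OS thinks this app should stay under.
        // info.CurrentUsage = how many bytes the app is currently using.
        return info;
    }

    // Budgets are dynamic, so you can also ask to be told when they change:
    //   HANDLE evt = CreateEvent( nullptr, FALSE, FALSE, nullptr );
    //   DWORD cookie = 0;
    //   adapter->RegisterVideoMemoryBudgetChangeNotificationEvent( evt, &cookie );
    // ...and re-run your "fit assets into the budget" pass whenever the event fires.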
  2. Yes. This was the normal thing before commercial engine licensing became a thing. When you say "engine" nowadays, people picture the Unity or Unreal editor GUIs... but back in the day, it basically meant the massive framework that had been built up underneath any one specific game. Once you'd made one game and were about to make another one, you'd copy the first game, delete all the parts you won't need, and start making the second game from there. The parts that you've kept are your "engine". This kind of practice still carries on today at a lot of game companies!

    Early commercial engines, such as Quake or Doom, were never meant to be commercially licensed engines! People went to id Software and begged to buy the code, and they laughed and said "sure, for a million dollars", and people actually paid them... They became a commercial engine provider by total accident. However, they still didn't bother trying to make general-purpose engines. Quake 2 and Quake 3 were rewritten mostly from scratch with the specific needs of those games in mind. It just so happened that Quake 1/2/3 were generic enough as FPS games that everyone else wanted to use them as an "engine", so Half-Life, Medal of Honor, Call of Duty, etc. were born out of Quake by accident too...

    If you're making an engine, this is exactly what you should be doing. You should have a list of requirements for the game that you're making, and then build ONLY the engine features that are actually required for that game. Any other method will put you on the path of writing engine code for 6 years and still not having a game.
  3. Hodgman

    C++ Common Runtime

    Microsoft made Managed C++, followed by C++/CLI, followed by C++/CX, which are all basically this idea, using their existing CLR ecosystem. CERN has a 'C++ as a scripting language' project, which is a runtime interpreter (no JIT) for a subset of C++. As mentioned above, you can also compile C++ to JS / asm.js / WebAssembly and then run it via a JS interpreter / JITter, such as V8. If I was going to try and get something like this running, I'd probably just try to glue existing stuff together like that.

    Google also did a safe-native-code-for-the-web thing (Native Client? Salt and Pepper?), which I think has been abandoned now, but which allowed cross-platform C++ IIRC.

    However, if you want a WORA environment, you probably should just use a nicer language in the first place. C++ is a very flawed language, so if you're building something fresh, you could just pick a fresh language too! A lot of old software probably isn't portable / standards-compliant anyway, so it would need to be ported to your new runtime regardless...
  4. Also known as: people who don't realize that digital painting takes the same amount of talent and effort... Also known as: impressing people who don't know how things actually work 😆

    Depending on what the job is, that could be good or bad... If you're applying for jobs at a game company, having experience with a dozen different game engines will be much more impressive than having once made your own engine that only 3 people ever used. Moreover, experience with the particular game engine that's being used at that particular company is the most important thing.

    Not to say that homebrew engine experience is worthless -- it is definitely still something that gives you a leg up! -- I just have to point out that it's not at all a replacement for experience with other engines.
  5. If you're doing it as a hobby, then satisfaction comes first. If you're doing it as a business, then profitability/efficiency come first, and satisfaction be damned.

    For most teams, Unity is the cheapest way to get their game finished. Also, Unity is really just a game engine starter kit - most mid-sized Unity studios will still have "engine programmers" / "tools programmers" / "graphics programmers" who spend a lot of time extending the engine to suit the needs of their game. There's a whole range of satisfying development tasks that still exist here... and frustrating ones, when you have to fight the engine...

    I started my game back when UE3 wanted 25%, Unity was much crappier, and CryEngine wanted like a million dollars... So we made one, and now it's easy to stick with it out of inertia. If we were starting out right now it would be hard to justify, though.

    You also have to factor in the ease of working with content creators as you grow. Plenty of artists will know how to make great content, quickly, in UE4 or Unity or both. On a custom engine, you may discover that you have to spend extra coding time on tooling, importers, integration tasks, etc... If content creators can't use a normal workflow, can't quickly iterate on their work, or need a programmer to hand-hold them when adding content, then they will be deeply unsatisfied and will either do a bad job or take much longer to do their job.
  6. Hodgman

    Imperfect Environment Maps

    Yeah, for something like a first-person shooter it wouldn't fit as well, because the point clouds would probably exhibit a lot of light leaking in typical indoor scenes. Our track surfaces are typically full of holes anyway, so a bit of light leaking is actually desirable. I guess you could fight against leaking by aggressively using the push-pull method on the depth buffer, as in the ISM paper. Maybe generate depth maps using the ISM technique first (as a Z-pre-pass) and then generate env-maps using only the points that pass the depth test...

    I'll try to post another update soon as I refine the technique a bit more, and capture a video of it in motion. As the car races past each point, a blobby reflection passes over the car, which actually really adds to the feeling of speed! Seeing as it's completely dynamic, another enhancement I want to look into is attaching a point cloud to each vehicle as well, so that the vehicles can reflect off each other.
  7. Hodgman

    Imperfect Environment Maps

    In 22, our lighting environment is dominated by sunlight, but there are also many small emissive elements everywhere. What we want is for all of these bright sunlit metal panels and the many emissive surfaces to be reflected off the vehicles. Being a high-speed racing game, we need a technique with minimal performance impact, and at the same time we would like to avoid large baked data sets in order to support easy track editing within the game.

    This week we got around to trying a technique presented 10 years ago for generating large numbers of shadow maps extremely quickly: Imperfect Shadow Maps. In 2008, this technique was a bit ahead of its time -- as indicated by the performance data being measured at 640 x 480 resolution at 15 frames per second! It is also a technique for generating shadows, for use in conjunction with a different lighting technique -- Virtual Point Lights. In 22, we aren't using Virtual Point Lights or Imperfect Shadow Maps! However, the paper mentions that ISMs can be used to add shadows to environment map lighting... By staying up too late and misreading this section, you could get the idea that you could use the ISM point-cloud rendering ideas to actually generate large numbers of approximate environment maps at low cost... so that's what we implemented.

    Our gameplay code already had access to a point cloud of the track geometry. This data set was generated by simply extracting the vertex positions from the visual mesh of the track - a portion is visualized below:

    Next, we somehow need to associate lighting values with each of these points... Typically for static environments, you would use a light-baking system for this, which can spend a lot of time path-tracing the scene (or similar) before saving the results into the point cloud. To keep everything dynamic, we've instead taken inspiration from screen-space reflections. With SSR, the images that you're rendering anyway are re-used to provide data for reflection rays. We are reusing those images to compute lighting values for the points in our point cloud.

    After the HDR lighting is calculated, the point cloud is frustum culled and each point is projected onto the screen (after a small random offset is applied to it). If the projected point is close in depth to the stored Z-buffer value at that screen pixel, then the lighting value at that pixel is transferred to the point cloud using a moving average. The random offsets and moving average allow many different pixels near the point to contribute to its color. (There's a rough sketch of this gathering step at the end of this post.) Over many frames, the point cloud will eventually be colored in. If the lighting conditions change, then the point cloud will update, as long as it appears on screen. This works well for a racing game, as the camera is typically looking ahead at sections of track that the car is about to drive into, allowing the point cloud for those sections to be updated with fresh data right before the car drives into those areas.

    Now, if we take the points that are near a particular vehicle, project them onto a sphere, and then unwrap that sphere to 2D UV coordinates (at the moment we are using a world-space octahedral unwrapping scheme, though spheremaps, hemispheres, etc. are also applicable; using view-space instead of world-space could also help hide seams), then we get an image like the one below.
    Left is the RGB components; right is alpha, which encodes the solid angle that each point should have covered if we'd actually drawn the points as discs/spheres instead of as points. Nearby points have bright alpha, while distant points have darker alpha.

    We can then feed this data through a blurring filter. In the ISM paper they do a push-pull technique using mipmaps, which I've yet to implement. Currently, this is a separable blur weighted by the alpha channel. After blurring, I wanted to keep track of which pixels initially had valid alpha values, so a sign bit is used for this: pixels that contain data only thanks to the blurring store negative alpha values. Below, left is RGB, middle is positive alpha, right is negative alpha:

    • Pass 1 - horizontal
    • Pass 2 - vertical
    • Pass 3 - diagonal
    • Pass 4 - other diagonal, and alpha mask generation

    In the final blurring pass, the alpha channel is converted to an actual/traditional alpha value (based on artist-tweakable parameters), which will be used to blend with the regular lighting probes. A typical two-axis separable blur creates distinctive box shapes, but repeating the process with a 45° rotation produces hexagonal patterns instead, which are much closer to circular.

    The result of this is a very approximate, blobby, kind-of-correct environment map, which can be used for image-based lighting. After this step we calculate a mip-chain using standard IBL practices for roughness-based lookups.

    The big question is: how much does it cost? On my home PC with an NVidia GTX 780 (not a very modern GPU any more!), the in-game profiler showed ~45µs per vehicle to create a probe, and ~215µs to copy the screen-space lighting data to the point cloud.

    And how does it look? When teams capture sections of our tracks, emissive elements show that team's color. Below you can see a before/after comparison, where the green team color is now actually reflected on our vehicles.

    In those screens you can see the quick artist-tweaking GUI on the right side. I have to give a shout-out to Omar's Dear ImGui project, which we use to very quickly add these kinds of developer GUIs.

    • Point Radius - the size of the virtual discs that the points are drawn as (used to compute the pre-blurring alpha value, dictating the blur radius).
    • Gather Radius - the random offset added to each point (in meters) before it's projected to the screen to try and collect some lighting information.
    • Depth Threshold - how close the projected point needs to be to the current Z-buffer value in order to be able to collect lighting info from that pixel.
    • Lerp Speed - a weight for the moving average.
    • Alpha Range - after blurring, scales how softly alpha falls off at the edge of the blurred region.
    • Max Alpha - a global alpha multiplier for these dynamic probes - e.g. 0.75 means that 25% of the normal lighting probes will always be visible.
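    For anyone curious about the details, here's a rough CPU-style sketch of the screen-space gathering / moving-average step described above (in the actual game this runs on the GPU; the helper names and the random-offset scheme here are illustrative, not our real code):

    // Sketch of the "transfer screen-space lighting into the point cloud" pass.
    // Illustrative only: maths types, projection, and buffers are stand-ins.
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Float3 { float x, y, z; };

    struct PointSample {
        Float3 position;   // world-space position from the track mesh
        Float3 color;      // accumulated lighting (moving average)
    };

    struct FrameBuffers {
        int width, height;
        std::vector<float>  depth;     // linearised Z-buffer
        std::vector<Float3> lighting;  // HDR lighting buffer
    };

    // Assumed helper (not shown): project a world position with the current
    // camera, returning false if it lands outside the viewport.
    bool ProjectToScreen( const Float3& worldPos, const FrameBuffers& fb,
                          int& outX, int& outY, float& outDepth );

    void GatherLighting( std::vector<PointSample>& points, const FrameBuffers& fb,
                         float gatherRadius, float depthThreshold, float lerpSpeed )
    {
        for ( PointSample& p : points )
        {
            // Small random world-space offset so that, over successive frames,
            // many nearby pixels get a chance to contribute to this point.
            Float3 jittered = p.position;
            jittered.x += gatherRadius * ( rand() / (float)RAND_MAX - 0.5f ) * 2.0f;
            jittered.y += gatherRadius * ( rand() / (float)RAND_MAX - 0.5f ) * 2.0f;
            jittered.z += gatherRadius * ( rand() / (float)RAND_MAX - 0.5f ) * 2.0f;

            int x, y; float pointDepth;
            if ( !ProjectToScreen( jittered, fb, x, y, pointDepth ) )
                continue; // outside the frustum this frame

            int idx = y * fb.width + x;
            if ( std::fabs( fb.depth[idx] - pointDepth ) > depthThreshold )
                continue; // point is occluded (or occluding) at this pixel

            // Moving average: blend the on-screen lighting into the point's color.
            p.color.x += ( fb.lighting[idx].x - p.color.x ) * lerpSpeed;
            p.color.y += ( fb.lighting[idx].y - p.color.y ) * lerpSpeed;
            p.color.z += ( fb.lighting[idx].z - p.color.z ) * lerpSpeed;
        }
    }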
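    And for the "unwrap that sphere to 2D UV coordinates" step mentioned above, the textbook octahedral mapping (not necessarily our exact variant) looks like this:

    // Standard octahedral mapping: unit direction -> [0,1]^2 UV.
    // Textbook formulation; seam/border handling omitted.
    #include <cmath>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    Vec2 OctahedralEncode( Vec3 dir ) // dir must be normalised
    {
        float absSum = std::fabs( dir.x ) + std::fabs( dir.y ) + std::fabs( dir.z );
        float u = dir.x / absSum;
        float v = dir.y / absSum;
        if ( dir.z < 0.0f ) // fold the lower hemisphere over the upper one
        {
            float oldU = u;
            u = ( 1.0f - std::fabs( v ) )    * ( oldU >= 0.0f ? 1.0f : -1.0f );
            v = ( 1.0f - std::fabs( oldU ) ) * ( v    >= 0.0f ? 1.0f : -1.0f );
        }
        return { u * 0.5f + 0.5f, v * 0.5f + 0.5f }; // remap [-1,1] -> [0,1]
    }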
  8. Hodgman

    Handling emission in a G-buffer

    I use the "metalness" parameterisation already, where you store a single color which is used to calculate the albedo and the specular reflectance colors, based on the metalness value. So yeah, I did what you mentioned and stored a single emission value (as log luminance) and multiplied that with color to get the emissivity color. Most of the time this is completely fine for me - if something is emitting light, it's probably diffusing/reflecting light with the same color. e.g. A glowing red sign is often just red plastic over a light - it should emit red light and also have a red albedo. The situations where I've found trouble are things like TV screens that are in direct sunlight. The emissive color should show the content of the TV show that's playing, but the albedo should be very dark grey. Normally this isn't at all noticeable though, and I don't actually have any TV's in my game, so I put up with it.
  9. Hodgman

    Why Are There Fewer Sci-Fi RPGs Than Fantasy Ones?

    Sci-fi isn't about space, futuristic guns, and technology. Sci-fi is first and foremost about society and people reacting to new technology.

    If you have a story set in space, it's not necessarily sci-fi. If you have a story set in space where the implications of space travel on the fabric of society are examined, then it's sci-fi. If you have a story set in the modern day, and the social impact of some minor bit of technology is explored via the plot, then it's sci-fi. If you have a story set in space with cool space guns and robot suits, but there's no commentary or examination of the consequences of this technology on the people that inhabit the world, then it's not sci-fi; it's just fantasy too. If you have a story set in medieval times where someone discovers how to transmute lead into gold, and how to cast fireballs via combinations of runes, it could also be sci-fi if the plot is about how society reacts to these new inventions.

    Hard sci-fi is about using real, known science and keeping things factual and plausible. Soft sci-fi uses magic, just like fantasy, but usually in the form of technobabble.
  10. I use fixed-timestep physics and variable-timestep rendering, so the physics data needs to be interpolated before being consumed by the renderer. That job is done by an interpolation system, which consumes physics results and produces visual results. Some nodes of the visual skeletons need to be moved by physics, so an attachment system contains pairs of source (interpolated physics transform) indices and destination (visual node) indices to be copied after an interpolation update. Many visual nodes are parented off each other, so a visual hierarchy system then calculates all the world matrices per node, after an animation system produces all the local node matrices.
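    A bare-bones sketch of that arrangement (the structure and names are illustrative, not my actual systems):

    // Sketch: fixed-timestep physics, variable-timestep rendering, with an
    // interpolation step and an attachment copy in between.
    #include <utility>
    #include <vector>

    struct Transform { float px, py, pz; /* plus rotation, omitted for brevity */ };

    struct PhysicsState {
        std::vector<Transform> previous; // state at the last-but-one physics tick
        std::vector<Transform> current;  // state at the most recent physics tick
    };

    static Transform Lerp( const Transform& a, const Transform& b, float t )
    {
        return { a.px + ( b.px - a.px ) * t,
                 a.py + ( b.py - a.py ) * t,
                 a.pz + ( b.pz - a.pz ) * t };
    }

    void Frame( PhysicsState& physics, std::vector<Transform>& interpolated,
                std::vector<Transform>& visualNodes,
                const std::vector<std::pair<int, int>>& attachments, // {physics, node}
                float frameDt )
    {
        static float accumulator = 0.0f;
        const float fixedDt = 1.0f / 60.0f; // physics tick rate

        // 1) Run zero or more fixed physics ticks for this rendered frame.
        accumulator += frameDt;
        while ( accumulator >= fixedDt )
        {
            physics.previous = physics.current;
            // StepPhysics( physics.current, fixedDt ); // assumed, not shown
            accumulator -= fixedDt;
        }

        // 2) Interpolation system: blend previous->current by the leftover time.
        float alpha = accumulator / fixedDt;
        interpolated.resize( physics.current.size() );
        for ( size_t i = 0; i < interpolated.size(); ++i )
            interpolated[i] = Lerp( physics.previous[i], physics.current[i], alpha );

        // 3) Attachment system: copy interpolated physics transforms onto the
        //    visual nodes that are driven by physics.
        for ( auto [physIndex, nodeIndex] : attachments )
            visualNodes[nodeIndex] = interpolated[physIndex];

        // 4) Animation -> local matrices, then visual hierarchy -> world matrices
        //    (omitted), then the renderer consumes the visual nodes.
    }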
  11. Hodgman

    International contract and company law

    Contracts often state in which jurisdiction any disagreements will be settled. But yes, anyone can argue anything in any court at any time - the law is always grey. Consult a lawyer to find out the probability of particular shades of grey ending up as black or white.

    Company taxes are generally on profits, while individual taxes are generally on income. Steam's 30% cut is pre-tax. So, say your game makes $500k: Steam takes $150k (30%) - how they pay tax on their cut is their problem. You've lodged a W8-BEN / declaration-of-payments-to-foreign-business form with them (I'm reciting the name from memory, so maybe not that exact name), which allows Steam to send your 70% cut off-shore without the US IRS taxing it whatsoever. That money arrives in your German company's possession, but you don't pay company tax on it yet because it's not yet "profits". You then pay it out as wages/salaries/dividends to your staff and withhold income tax (etc.) at that time (say 25% of those payments goes to the government). A year later, any money left in the company accounts becomes profits, and the government takes, say, 20% of that leftover money (minus claimable expenses).

    All in all, if the company pays out all earnings as salaries, your income tax rate is 25%, and Steam takes 30%, then you keep a bit over half of your retail earnings.
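    Working through those example numbers: $500k retail revenue, minus Steam's 30% ($150k), leaves $350k arriving at the company. Paying all of that out as salaries with 25% income tax withheld ($87.5k) leaves $262.5k in hand - about 52.5% of the original $500k, i.e. "a bit over half".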
  12. Just to keep the perspective up, the latest Gangstar game had over 100 people work on it and probably represents around 100k to 300k man-hours of work. Or, if you did it solo at 8 hours of work per day, 365 days a year, it might be done in 50 years. Great games take time, and time is money. If you live at home working on a project for the next 50 years, eventually you're going to have to get a job, and it's going to be hard to keep up that 8-hour unpaid game-dev schedule...

    Again, don't get carried away with your plans so much. Ideas are only the starting point for a game. The crappy games that you're judging also started as an idea and then went through the grueling process of actually being made to completion. It's easy to have good ideas if they haven't been made yet. Ideas don't create games; the long, hard development time (which tests, breaks, deforms and rebirths ideas along the way) creates games. You can give 10 different studios the same idea/design, but you'd get 10 different games at the end. Some might be terrible, and some might be amazing -- all from the same starting idea.

    Also, many of the crappy games on the app store are deliberate "minimal quality" crap, created by sweatshops churning out clones in as little time as possible to hopefully make 30c from the few people who download it by accident. Those ones deserve to be judged as shitty.
  13. Planning one at a time is enough. Get building! Also, for what it's worth, GTA V has about 1000 to 2000 man-years worth of work gone into it. It's not the kind of thing that a solo dev can produce right now. If you think these kinds of products can feasibly be made without hundreds of hands toiling on them for years, then the title of the thread is very apt!
  14. AAA games cost $10M+ to make, by definition. So, you've got $20M+ investment already? That means you're not at the hobbyist / indie stage any more.
  15. The compiler reports its own error messages via the "_Out_opt_ ID3DBlob** ppErrorMsgs" argument. These will include line numbers, syntax error descriptions, etc... It's pretty hard to program without access to your compiler's error messages! You just need something like:

    ID3DBlob* errorBlob = nullptr;
    D3DCompileFromFile( ..., &errorBlob );
    if ( errorBlob )
    {
        OutputDebugStringA( (char*)errorBlob->GetBufferPointer() );
        errorBlob->Release();
    }