About AlanGameDev

Personal Information

  • Role
    Game Designer
  1. I agree that doing game logic on the GPU isn't good for everything, especially in compute shaders, which are very limited (I didn't know they were so limited before starting this, though). I've tried OpenCL before, but like pretty much everything Khronos does, it's unbearable. SYCL seems to be a saner alternative, but there's no implementation yet; there's a beta one, but it's commercial and I'm an indie. OpenCL also means no console support, and I want to be able to deploy on consoles eventually.
I could certainly use the CPU for the electricity: I could simply keep an array of the generators/network elements and store each one's position in its array entry, which would allow a decent simulation. The problem is scalability. This game is supposed to be massive on a scale never seen before, and that approach won't scale well; to make things worse, the interface I use to communicate with the GPU will already be saturated in normal gameplay (because some elements are CPU-side). Gigabytes of data are unthinkable: I want the game to run at 60 fps, so the data per frame has to be at most a few dozen MBs.
There are of course many limitations in the approach I'm taking, and compromises have to be made, but I've realized that compromises, even though they might sound bad at first, don't necessarily affect gameplay negatively. After all, what matters is having fun. A typical example is constraining positions to a grid and object sizes to a cell of that grid. That's a constraint many games in the past adopted for technical reasons, but it turns out it can be desirable, so even today, when games just use float vectors for positions and unaligned height maps for terrain, many still apply those constraints deliberately. The challenge I'm facing here is designing fun mechanics within those limitations.
EDIT: btw, your GitHub link is incorrect @JWColeman
  2. Well, to be honest I didn't have any strict design; my only requirements were something that makes sense, isn't lame (like global energy), and is doable in a compute shader. I've managed to come up with a decent solution, though: I maintain a map of 8x8 tile groups, and for each power consumer I add its consumption to the respective group, then simply propagate that number iteratively through the network, so when the electricity 'flows' it 'splits' according to the difference between the groups. One easy way to visualize my solution is to think of the power consumers as having 'weight' that deforms the terrain accordingly, with electricity flowing down the hills. It's not exactly the same, because the flow is split in proportion to the 'slope' of the neighboring cells, but you get the idea. In any case, thanks for replying @_WeirdCat_ 👍
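To make the 'flow splits by slope' idea above concrete, here is a minimal CPU sketch of one relaxation step over the 8x8 tile groups. The function name, the flat-array layout, and the 0.2 transfer rate are illustrative assumptions, not values from the actual shader:

```cpp
#include <vector>

// Hypothetical sketch: `demand` holds the summed consumption of each
// 8x8 tile group in a w*h grid of groups. One relaxation step moves a
// fraction of the difference between neighbouring groups, so load
// 'flows downhill' toward the heavier-loaded regions.
void relaxStep(std::vector<float>& demand, int w, int h) {
    std::vector<float> next = demand;   // double buffer, like a compute shader would
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const int dx[4] = { -1, 1, 0, 0 };
            const int dy[4] = { 0, 0, -1, 1 };
            float here = demand[y * w + x];
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                // move a share of the 'slope' between this group and its neighbour
                next[y * w + x] += 0.2f * (demand[ny * w + nx] - here);
            }
        }
    }
    demand.swap(next);
}
```

Because every transfer is applied symmetrically on both sides of an edge, the total is conserved; in a compute shader the same double-buffered update would run once per tick over the active chunks only.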
  3. An easy solution to this problem would be to simply split the world into chunks of, say, 8x8 tiles, store the energy in each of them, and propagate it iteratively in the shader (only in the chunks with a network, of course). Additionally, a vector field could be calculated from the energy 'flow' in order to propagate it directionally. The only problems are the gameplay implications, and that a vector field, even stored as halves, takes a lot of memory. Maybe a simplified direction could be an option. It would be nice if I could use 16 bits for the energy and 16 for the flow direction, but unfortunately the available types and interlocked operations are very limited.
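Since typed UAVs and interlocked operations are essentially limited to 32-bit integers, one common workaround for the 16+16 split mentioned above is manual bit packing. This is a hypothetical C++ sketch with illustrative names; the same shifts and masks work in HLSL on a `uint`:

```cpp
#include <cstdint>

// Hypothetical sketch: pack a 16-bit energy value and a 16-bit
// quantised flow direction into a single uint32, the only type that
// typed UAVs and Interlocked* operations reliably support.
inline uint32_t packCell(uint16_t energy, uint16_t direction) {
    return (uint32_t(direction) << 16) | energy;
}
inline uint16_t cellEnergy(uint32_t packed)    { return uint16_t(packed & 0xFFFFu); }
inline uint16_t cellDirection(uint32_t packed) { return uint16_t(packed >> 16); }
```

The caveat is that an atomic add on the packed value is only safe if the energy field can never overflow into the direction bits, so some headroom or saturation logic would still be needed.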
  4. Hello there. I'm not 100% sure this is the appropriate sub-forum, but this question is probably more about programming than design. I'm making a game with some mechanics loosely inspired by Factorio (it plays like Terraria), and since I want to allow the player to build massive machines and factories, all the tilemap logic runs on the GPU and the map data stays in GPU memory, since it's potentially several gigabytes. That imposes many constraints, especially because there's no recursion in compute shaders and they're quite limited overall (I'm using DirectCompute SM5; OpenCL is too cringy, SYCL is still in its infancy, CUDA is vendor-locked, and there's no console support for these techs anyway).
So here's the problem: I want electricity to play a role in the game mechanics. The simplest solution by far is just making it global and adding/removing from it as generators/consumers are ticked, but that's gonna suck terribly, so I've been thinking about decent solutions to the problem that still meet the technical constraints imposed. I thought about an electrified 'area' around the power generators, and also power lines like in Factorio (similar to SimCity). The problem is transmission: I'd have to calculate the electricity flow, and that's not simple to do on the GPU.
What I could do is make a bitmask for each power network, do a BFS on the CPU, and assign an ID to each connected power network; then, when I'm ticking the power generators and consumers, I get the ID of the power network for that tile and add/remove electricity from that network, so the electricity is stored in a value accessed through the ID for that map tile. One of the problems here is when power networks overlap in area but don't interconnect; I think in that case exclusion is acceptable (only one network can supply a tile), or networks that intersect could just be connected automatically. The real problem, though, is the potentially catastrophic result of having too many networks.
As I said, this is a really massive game with possibly hundreds of millions of actively updating 'tiles', and since each network will require not only a bitmask but also an entry in some array that stores the electricity, the number of networks will have to be capped well below a quantity that's reasonable for the overall magnitude of the game; maybe 16 or so, because passing data to the GPU is very slow. So while it's entirely possible to arbitrarily limit the number of power networks in the game, that goes against its basic premise. Maybe you guys have another idea on how to handle that.
The usual practices are already taken into account, like using reduced maps for the power grid (2x2 or maybe even 4x4), but those aren't solutions. A 'solution' would be to pass an int32 for each power-grid tile (holding the network ID), but that's prohibitive in terms of performance. It is possible to flood-fill the networks iteratively on the GPU so no data has to be passed, but that would not only cause some delay in converging to the correct solution, it would also require double buffering, and optimizations aren't going to be trivial: for a start, looping over the whole map and allocating values for the worst-case scenario could work, but optimizing that is going to be a major pain in the rear (especially compartmentalization) and definitely bug-prone, because it's a whole other layer, dependent on the base map data, that has to be kept in sync. So do you guys have any other idea on how this can be done?
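The CPU-side BFS labelling described above could be sketched like this. Names and the grid encoding (1 = tile carries part of a power network, 0 = empty) are illustrative assumptions, not from the actual engine:

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical sketch: flood-fill connected power networks with a BFS
// and assign each one an ID (0 = no network). 4-connectivity is assumed.
std::vector<uint16_t> labelNetworks(const std::vector<uint8_t>& grid,
                                    int w, int h, int& networkCount) {
    std::vector<uint16_t> ids(grid.size(), 0);
    uint16_t nextId = 0;
    for (int start = 0; start < w * h; ++start) {
        if (!grid[start] || ids[start]) continue;  // empty or already labelled
        ++nextId;                                  // found a new connected network
        std::queue<int> frontier;
        frontier.push(start);
        ids[start] = nextId;
        while (!frontier.empty()) {
            int c = frontier.front(); frontier.pop();
            int x = c % w, y = c / w;
            const int nx[4] = { x - 1, x + 1, x, x };
            const int ny[4] = { y, y, y - 1, y + 1 };
            for (int i = 0; i < 4; ++i) {
                if (nx[i] < 0 || nx[i] >= w || ny[i] < 0 || ny[i] >= h) continue;
                int n = ny[i] * w + nx[i];
                if (grid[n] && !ids[n]) { ids[n] = nextId; frontier.push(n); }
            }
        }
    }
    networkCount = nextId;
    return ids;
}
```

The resulting `ids` array is what would have to reach the GPU in some compressed form, so the generator/consumer tick can index the per-network energy array; the cap on `networkCount` is exactly the scalability problem discussed above.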
  5. AlanGameDev


    @CrazyCdn personally, I'm also more into indie titles these days. I haven't played RimWorld because I don't really like micromanagement games, but it's certainly a game we'll never see a AAA studio roll out. Indie titles still have that 'hit and miss' feel, though. Early access could be better, and many games are technically flawed.
Steam also has a ton of asset flips these days; most are taken down, but that's a disservice to serious indie developers. For example, a game like this, which has assets clearly ripped without permission from Driver: San Francisco; the developers didn't even remove the license plate text, and that's one of their promotional screenshots. They're so sure that Steam's curation is terrible and flawed that they simply didn't care. And games like that are the vast majority of Steam games I find these days. As an indie developer I'm disgusted by that, and profoundly disappointed in Steam for allowing that kind of outrageous product to reach their virtual shelves. Cheap asset flips are also very common, or not even flips, just projects straight out of the Unity Asset Store, built and published on Steam as-is. It's not only unfair competition; it tarnishes the image of the whole indie community. On Steam, at least, "Indie" is now synonymous with terrible-quality asset flips; that's what you get when you browse the "Indie" tag, where most games are simply pathetic.
The strange part is that you get much higher quality on itch.io, which is a much more 'open' store, so one can only assume that as long as Steam is making money, they couldn't care less. I hope some day indies will have a decent store to sell their products, one that at least forbids blatant asset flips and games with illegally ripped copyrighted content. Maybe it's going to be itch.io, maybe GOG is a decent alternative, maybe the Humble guys... I don't know.
But it's sad that Steam is the most important store for indies and yet has absolutely no respect for indies whatsoever... and for the customers too, because if you're selling these pathetic products it's pretty clear you don't care.
  6. AlanGameDev


    The real question here is why you would expect a company like that not to take that path. As long as the current model of casual/accessible (some say 'dumbed-down') games full of IAPs and shady practices keeps working, we can only expect companies to go that route. Unfortunately, these days you can't really expect these large companies to have any respect or consideration for their customers.
  7. AlanGameDev


    @CrazyCdn I think the problem is that we're all far from the typical 'mainstream' games audience these days. I personally enjoyed the 3D Fallouts, New Vegas being the best by far, but they've distanced themselves from the original games so much that they don't really feel like Fallout games anymore; at that point they lose me. But if for each old-time fan they lose they sell 10 more copies to new players, thanks to the more accessible 'Call of Duty' mechanics and less contradictory story predicaments, that's a win for them... who am I to tell them they're wrong simply because I personally didn't like the new iterations or the overall direction the franchise is taking? I really feel your pain, but as you said, people are buying it. Bethesda wants to sell games, and they're making the games the most people will buy. That's an irreversible trend in my opinion, and the old fans will have to content themselves with these indie alternatives, I guess... unless they decide to split the franchises into separate 'lite' and 'hardcore' versions, which has been done in the past, I don't know how successfully. What will determine that is whether there's money to be made from games more similar to the old-school ones. Maybe a production like that wouldn't be profitable for a large studio to begin with.
  8. AlanGameDev


    I think in the end it all comes down to the false assumption that AAA studios are somehow committed to providing a quality product to their customers. They're not; they're committed to making money. We expect that providing a decent product is part of that process, but that's been proven false (FO76, why oh why :trollface:). Their success isn't measured by the quality of their products or the satisfaction of their audience, but by the quantity of money they make. It's as simple as that. Now, I'm not saying that producing a good game isn't part of that equation, but things like appealing to a broader audience might be a lot more important. Especially with the popularization of game development, I think niche games are going to be supplied by small and indie studios, while the big companies produce games that cater to the widest audience possible, which means a pretty low common denominator in terms of game mechanics, plus incredible visuals. I think it's an irreversible trend that the big franchises are distancing themselves from their past in favor of lower entry barriers for players. Fallout these days is very different from the isometric versions, and it diverges from them further with each iteration. If you want a classic Fallout experience these days, indies have you covered with, say, Wasteland 2 or other 'spiritual successors'. The same clearly happened to some extent with Diablo, and it will only *intensify* in the future, in my opinion. I personally lost interest in Diablo 3 when they went for a WoW art style, to be honest. What a shallow dude I am :trollface:. @JTippetts I miss the :trollface: so much :sad:.
  9. AlanGameDev


    Sad reality. They don't care about their fans; they have a brand name and will use it to make as much money as possible in the shortest time possible. Whether that's a wise long-term strategy is arguable, of course, but investors want money as soon as possible, and destroying the franchise in the process isn't one of their concerns. Typos in the last sentence 🚎
  10. @ajmiles Well, I couldn't find that in the VS2017 installer, only this: I'm on Windows 7, though; I don't know if that's why it doesn't show up for me. I think the problem is the feature level, though: I just ran the DirectXTK "SimpleSampleTK", and when I select FL11.0 from the dropdown list the WARP device isn't available; it's only available for FL10.1 or below. Perhaps the WARP device that supports FL11 isn't available on Windows 7 systems?
  11. @ajmiles I've tried creating the device with `D3D_DRIVER_TYPE_WARP`, but for some reason it's failing with HRESULT 0x887A0004 (DXGI_ERROR_UNSUPPORTED). The code is like this:

```cpp
auto features = D3D_FEATURE_LEVEL_11_0;
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_WARP, nullptr,
    debug ? D3D11_CREATE_DEVICE_DEBUG : 0,
    &features, 1, D3D11_SDK_VERSION,
    &Gfx.device, nullptr, &Gfx.context);
```

Maybe there's some incompatible argument in there, I don't know. In any case, I'll try to determine where exactly the threshold is, and maybe make a simpler example with a 1D buffer. Also, I just realized that a window isn't necessary for a repro at all.
  12. @ajmiles Definitely not. I'm using Windows SDK 10.0.17134.0, which I believe is the latest. I only mentioned DXUT because I'm using SDL for windowing and I believe you don't want that dependency, so maybe I could copy-paste some DXUT snippet to handle windowing with the Win32 API. I think DXUT is still being maintained, though I could be wrong, of course.
  13. @ajmiles 🤣 It's not going to be very trivial to make a minimal repro, because there are some dependencies like DirectXTK and SDL, which I'm using for windowing and input. When the loop area is small it works fine, so I really think the problem is loop unrolling, which causes some massive code generation, but I'm not 100% sure: if I remove one of those assignments it works again (presumably because there's less code), but then if I keep the assignment and loop over 1/4 of the area the problem persists, and that should unroll into less code, so nothing is making much sense... and I don't think these problems should be happening when you pass `D3DCOMPILE_DEBUG`, which is even stranger. I also tried using `Load()` instead of the indexer just in case, but that didn't make any difference. @ajmiles I might look into it again soon, and if you're OK with those dependencies I could make a simplified branch and share the Bitbucket repo, but removing all the deps is too much work, although I guess I could copy-paste the windowing code from DXUT, I don't know. In the meantime I'm using the cbuffer to store the size, which is the ugliest workaround ever 🤷‍♂️. Thank you
  14. Nope, I don't think that's the case. Here's the full shader:

```hlsl
RWTexture2D<uint> map : register(u0);

cbuffer cbuf : register(b0)
{
    uint curLine;
};

[numthreads(1, 1, 1)]
void main(uint3 tid : SV_DispatchThreadID, uint gi : SV_GroupIndex,
          uint3 gid : SV_GroupID, uint3 gtid : SV_GroupThreadID)
{
    int w = 256;
    //w = curLine; // <- this works

    [loop] for (int x = 0; x < w; ++x)
    {
        [loop] for (int y = 1; y < w; ++y)
        {
            uint down = map[int2(x, y - 1)];
            if (!down)
            {
                map[int2(x, y - 1)] = 64; // map[int2(x, y)]; <- this works
                map[int2(x, y)] = 0;
            }
        }
    }
}
```

That runs on a 256x256 texture with random 0/64 values. As-is it doesn't work; however, if you uncomment either of the commented lines it magically works again. How do I ping the "MS guys"? Also, do you know of some place to get DirectX support?
  15. @JoeJ thanks for your reply. I thought about that too, but if that's true then there's a bug in d3dcompiler, because it shouldn't be applying any optimization whatsoever. In any case, [unroll] only affects loops whose bounds are hardcoded, and I thought that wasn't supposed to happen implicitly. However, that really does seem to be the case, so I'll see if I can find some way to force loops not to be unrolled. EDIT: after adding [loop] decorators the problem persists.