
AlanGameDev

Member
  • Content Count: 19
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About AlanGameDev

  • Rank: Member

Personal Information

  • Website
  • Role
    Game Designer
    Producer
    Programmer
  • Interests
    Art
    Audio
    Design
    DevOps
    Education
    Production
    Programming
    QA

Social

  • Twitter: @AlanGameDev
  • Github: Alan-FGR


  1. I agree that in the long run cloud gaming is probably not going to be better for the consumer, but we have to analyze the market. I'm pretty sure most people on this forum aren't favorable to cloud gaming, but most gamers don't realize all the implications that come with it. Many modern games on Steam, for example, have some kind of always-online protection, and some, like the new Hitman titles, rely on technology and copy protection that clearly steps in the direction of cloud gaming: they require online services for content anyway, and whether that content is data to be rendered locally or pre-rendered frames doesn't change the dependency model. Hitman was still one of the best games of the year according to its audience, and you don't see people questioning its future availability. Online games likewise depend on services that aren't guaranteed to be available in the future, and that doesn't prevent people from playing them.

Generally speaking, people only care about the experience a game provides immediately; most don't stop to think about whether it'll be available in the future. The last time I tried to play Max Payne 3, for example, it was simply impossible because of its online activation. Anything that requires an online action can stop working, nobody thinks about that when buying a game, and to make things worse, few people care when a relatively old game stops working: games sell mostly at release, after a few months nobody really cares, and when people try to play an old game and it doesn't work, most just move on. While it's true that game sales have a long tail (something GOG managed to profit on, although nostalgia for classics is a different market), the actual publishers and developers don't seem concerned about that at all. Rockstar couldn't care less whether their games stop working a few years after release if they're not milking players with online micro-transactions anyway. As a GTA V modder, it was evident to me that Rockstar would go out of their way to make sure the replayability of the single-player mode wasn't extended beyond the vanilla experience; they were interested in making money from GTA:O, and people are OK with that. Game publishers are not charities.

I think the very fact that there are communities that literally fix/hack games you can no longer play, simply because the publisher couldn't care less, and communities that provide hacked servers and such, is concrete proof that most gamers don't care either, at least not enough for the publishers to care (i.e., whatever people feel, it isn't enough to make them not buy the game). If keeping games working were profitable, publishers would do it, but after a AAA game goes long tail, nobody really cares. And for publishers there's yet another problem: since they give away the binaries, people will start hacking their games, which simply won't happen with a cloud model. In the cloud, if they have no commercial interest in a given title, they just untick a checkbox and nobody can play the game anymore. While that sounds terrible, most gamers will be playing other games anyway, and for the publishers it means no more money spent on support or on legal action against people 'hacking' their games.
Again, I'm 100% convinced that *if* cloud gaming becomes affordable and playable, there's no going back unless the audience doesn't buy into it. If there are no technical issues, I'm sure most people won't mind that a company has full control over their library, especially a well-known company; you can easily conclude that by looking at the current state of the internet and of other online game stores and services. Support will also be much cheaper, because technical support won't be on a per-game basis but per-customer, and there will be no local copy protection to care about, so there's not much reason for a game to ever be shut down, since little money is being spent on it anyway.

The technical aspects are surely still a big challenge, especially providing the service to geographically unfavorable locations, but as I said, that is improving. Latency is the main problem, and as I said, latency seems to be getting much worse in local systems anyway because of software compromises driven by hardware limitations, so the gap seems to be closing very quickly. As a PC and mobile developer, having control over the hardware and environment would certainly be great, and I'm pretty sure that alone would decrease development costs *very* substantially. Debugging would also be incredibly easy: in beta testing you could simply give all your users a 'bug found' hotkey or something like that. No more manual bug reports, let alone crash log forms.

It seems to me that gamers don't care enough about the implications of cloud gaming to refuse to buy into it, as long as those limitations don't affect them directly and immediately. Also, mobile is really becoming important for less casual games now, so there's a gamer audience there beyond the casual puzzle player, it seems. I can only guess that having real AAA graphics on mobile would be a very appealing feature.
  2. Hello there. My game has mechanics somewhat similar to Terraria and Starbound, although the most similar game is probably Dig or Die. At the moment I'm trying to come up with a simple control scheme for gamepads, but I can't really decide on the best default mapping, and I couldn't find a similar game that supports gamepads to steal some ideas from :P.

As I mentioned, the mechanics are the same, but the action is going to be like a twin-stick shooter. The problem is that since it's a side scroller, there's no good way to fit both the jump/jetpack controls and the building/shooting: both thumbsticks are taken, so only the triggers and bumpers are left for the actions players must be able to perform promptly: shooting, digging, placing blocks, switching weapons, and jumping. I would like a dedicated button to dig, but switching to a digging tool could also be an option if it makes sense; the problem is that unless switching also changes the button for placing blocks, it needs the same number of buttons, unless the digging tool is treated just like one of the weapons. I thought about this:

  • RT: Shoot
  • RB: Switch Weapon
  • LT: Dig
  • LB: Place

But then there's no good button for jumping. Maybe switching weapons could be done by clicking one of the sticks, but I think digging and placing blocks should be on the same side of the controller, I don't know. I would like to avoid having separate build and combat modes; a data-driven binding table (like the sketch below) would at least make the mapping easy to experiment with. So, do you guys have any suggestions, or know other games that did this right that I could check? Thanks in advance.
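To make experimenting with layouts like this easier, I'm considering a data-driven binding table, roughly like this minimal sketch (the `Action` and `Button` enums are illustrative, not from any real input library, and Jump is tentatively on the A face button):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Illustrative action/button enums; not from any real input library.
enum class Action : uint8_t { Shoot, SwitchWeapon, Dig, Place, Jump, Count };
enum class Button : uint8_t { RT, RB, LT, LB, A, StickClickR, None };

struct BindingTable {
    std::array<Button, static_cast<std::size_t>(Action::Count)> map{};
    Button  operator[](Action a) const { return map[static_cast<std::size_t>(a)]; }
    Button& operator[](Action a)       { return map[static_cast<std::size_t>(a)]; }
};

// The layout proposed above, with Jump tentatively on A (reachable with the
// right thumb between bursts of aiming). Swapping layouts is then just data.
BindingTable defaultLayout() {
    BindingTable t;
    t[Action::Shoot]        = Button::RT;
    t[Action::SwitchWeapon] = Button::RB;
    t[Action::Dig]          = Button::LT;
    t[Action::Place]        = Button::LB;
    t[Action::Jump]         = Button::A;
    return t;
}
```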
  3. Greetings, fellow developers! Today I saw an ad for yet another cloud gaming service and decided to write this.

Since I started developing games I haven't played many AAA games, and even fewer on consoles. I have fond memories of the days I used to play F-Zero GX, which was incredible at 60 Hz on a CRT display. The controls were very tight and satisfying, and the game was so fast that playing it seemed impossible at first, but it was so well designed that it felt natural and rewarding to play. I hadn't played the PS4 until recently, when I visited a friend who has the Pro version. As a PC gamer I was expecting to relive some of my experiences as a console gamer. Unfortunately, I left disappointed.

The problem was latency. The controls just weren't tight and responsive, even in top-notch exclusive games. The TV was modern and in 'game' mode, and while that certainly adds some latency, it alone shouldn't be clearly noticeable. The problem is that many (if not all) of the games I played have a fair amount of latency "by design". I know that because I've seen presentations by the very developers of some of those games showing how they did it, and as a developer I'm guilty of it too. In order to render fancier visuals and increase fluidity, games these days buffer frame tasks one way or another. To extract the maximum from modern hardware you need heavy parallelization in order to use all CPU cores. However, the throughput of a single system never scales linearly, and chances are it starts to scale terribly after very few threads. To address that reasonably, games run different (multithreaded or not) subsystems in parallel, as opposed to running multithreaded systems serially, and to further reduce the holes in the thread pool, those subsystems are processed asynchronously. A popular way to do that is to treat frames as interdependent data blobs, so even if a given thread has just finished processing, say, the particles of frame 0, it can start processing the particles of frame 1 immediately if that's the best task for it at that point (see the first sketch at the end of this post). It's efficient and works fine; the only problem is that you need buffering, and that means latency. That is the current state of many AAA games.

Of course, better-parallelized designs could solve some of that, but they have a cost, and that's exactly my point. CPUs currently scale mostly in number of cores, and I can only imagine the complexity involved in scaling systems to 32 or 64 cores; it's several orders of magnitude more complex than simply distributing tasks as we do today. In the cloud, however, none of that is a big problem. You can be inefficient and still provide the best experience at a profit: developing software that efficient is much more expensive than simply throwing more hardware at inefficient software, especially if you purchase hardware in large quantities directly from the supplier. Considering that many console games run at reduced framerates, a few frames of latency already represent more than the internet latency from coast to coast (at 30 fps, three buffered frames are already 100 ms). And the latency of both the internet and display/input devices should only get lower. So if you have a lot of horsepower to render a frame quickly, you can pay for the latency of data transmission very easily. Besides all that, modern games tend to be unresponsive by design.
You have to wait for animations and other arbitrary delays, and many games also use third-order control for movement, which could be reduced by simply designing/coding the controls correctly. For camera controls, you could simply render the surroundings and transform the image on the client machine: think of streaming video onto a cube/spheremap and rotating the camera locally (of course you wouldn't use an actual cubemap or uniform resolution; see the second sketch at the end of this post).

I see a lot of comments regarding privacy, because you'll basically be using somebody else's computer. While that's a very valid concern, most people simply don't seem to care (at least not enough to stop using Google Docs or other services), and even if they do, it's just games anyway; you won't store the nuclear codes there (:trumpface:). In my perception, not running code on the client machine is a much bigger pro than con, because cheating becomes simply impossible, which is yet another reason game development costs will drop considerably (depending on the game). Modding is of course going to be impossible without official support, but then again, only a minority of players use mods.

Another thing to keep in mind is that demand for AAA/hi-fi games on mobile devices is only increasing, and these devices have limited storage, limited power, limited battery life, and no active cooling, which prevents them from running demanding software for very long. That will certainly not change enough in the near future to match modern gaming PCs, while their popularity should only increase.

For those reasons I'm convinced that cloud gaming is the future. I can't say I'm happy with that realization, to be honest, but seeing how modern games are actively abdicating their only technical advantage over cloud gaming (which is latency), I don't see how it can unfold differently. Besides the technical aspects, there are practical advantages too: you won't need all that hardware and cabling, basically only display and input devices, and these days even your light switch has hardware capable of decoding and transforming video.

My only worry is that game development won't be accessible anymore. These days, anybody with a computer can learn to develop games. PCs, in my opinion, were a revolution not because everybody could do spreadsheets, but because they were machines capable of not only running but also developing software. You had at your disposal (roughly) the same tools the developers had, and that is empowering; the difference between a user and a developer was mostly knowledge, which was very accessible if you were willing to invest the time. If cloud gaming catches on, who knows what the future holds. Maybe it'll all run on enterprise-level specialized hardware you won't have access to except through some education program. Consoles these days aren't very accessible for indies, and it used to be much worse; I can only imagine what kind of restrictions cloud game development could have. Will cloud platforms run the same APIs we use today in the long run? If you have control over the hardware, you might want to develop software specifically for it.

You might have noticed that this post is also a rant on how AAA games these days are bad in terms of gameplay and mechanics.
When I was a kid I was also on the 'hi-fi visuals' bandwagon, without knowing that in the long run it would cost almost everything I held dear in the games I enjoyed: from mechanics to interactivity to complexity, almost everything was corrupted or simplified in favor of larger audiences and fancier visuals, to the point that many games are becoming only slightly interactive movies. The tight arcade controls I used to love were replaced by floaty simulation controls, and game development became obsessed with simulating reality as accurately as possible. I can only assume that's what most of the AAA audience wants these days, and movies are doing very well in the cloud. The games I enjoy these days are 99% indie, and they only exist because anybody can be an indie developer; you just need a PC and some determination. If the cloud happens to become the norm, I can see that changing drastically. I would like to know your opinions on this.
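To illustrate the frame buffering I described above, here's a minimal sketch of a two-stage pipeline (simulation feeding rendering through a small queue). It's a toy under my own simplifying assumptions, not any specific engine's job system, but it shows exactly where the extra frame(s) of input latency come from:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical per-frame data blob handed from simulation to rendering.
struct FrameData { int frameIndex; /* particles, transforms, draw lists... */ };

std::queue<FrameData> pipeline;                // frames in flight
std::mutex m;
std::condition_variable cv;
constexpr std::size_t kMaxFramesInFlight = 2;  // deeper buffer = smoother throughput, more latency

void simulationThread() {
    for (int frame = 0; ; ++frame) {
        FrameData data{frame};
        // ... sample input, tick gameplay, update particles for `frame` ...
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return pipeline.size() < kMaxFramesInFlight; });
        pipeline.push(data);   // frame N+1 is ready before frame N is even shown
        cv.notify_all();
    }
}

void renderThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !pipeline.empty(); });
        FrameData data = pipeline.front();
        pipeline.pop();
        cv.notify_all();
        lock.unlock();
        // ... render `data`: the input it was based on is now 1-2 frames old ...
    }
}
```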
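And a minimal sketch of the client-side camera reprojection idea, assuming the server streams something like a panorama of the surroundings; `sampleStreamedCubemap` is a hypothetical placeholder (stubbed here), and as I said, a real system wouldn't use an actual uniform cubemap:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // row-major rotation matrix

Vec3 mul(const Mat3& r, const Vec3& v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

// Hypothetical sampler into the last panorama frame received from the server;
// stubbed to return the direction itself as a debug color.
Vec3 sampleStreamedCubemap(const Vec3& dir) { return dir; }

// For each output pixel, build the view ray with the *current* local camera
// rotation and look it up in the *older* streamed panorama. Camera rotation
// feels instant even though the image itself is a few network frames old.
Vec3 shadePixel(float u, float v, float tanHalfFovX, float tanHalfFovY,
                const Mat3& currentCameraRotation) {
    Vec3 ray{ (2.0f*u - 1.0f) * tanHalfFovX, (1.0f - 2.0f*v) * tanHalfFovY, 1.0f };
    float len = std::sqrt(ray.x*ray.x + ray.y*ray.y + ray.z*ray.z);
    Vec3 dir{ ray.x/len, ray.y/len, ray.z/len };
    return sampleStreamedCubemap(mul(currentCameraRotation, dir));
}
```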
  4. I agree that doing game logic on the GPU isn't good for everything, especially in compute shaders, which are very limited (I didn't know they were so limited before starting this, though). I've tried OpenCL before, but like pretty much everything Khronos does, it's unbearable. SYCL seems to be a saner alternative, but there's no complete implementation yet; there's a beta one, but it's commercial and I'm an indie. Also, OpenCL means no console support, and I want to be able to deploy on consoles eventually. I surely could use the CPU for the electricity; I could simply keep an array of the generators/network elements with their positions stored in the array entries, which would allow me to run a decent simulation, but the problem is scalability. This game is supposed to be massive on a scale never seen before, and that approach won't scale decently; to make things worse, the interface I use to communicate with the GPU will already be saturated in normal gameplay (because some elements are CPU-side). Gigabytes of data are unthinkable; I want the game to run at 60 fps, so the data per frame has to be at most a few dozen MBs. There are of course many limitations in the approach I'm taking, and compromises have to be made, but I've realized that compromises, even though they might sound bad at first, don't necessarily affect gameplay negatively. After all, the important thing is having fun. A typical example is constraining positions to a grid and object sizes to a cell of that grid: that's a constraint many games in the past adopted because of technical limitations, but it turns out those constraints can be desirable, so even these days, when games just use float vectors for positions and unaligned height maps for terrain, many still apply them arbitrarily. The challenge I'm facing here is designing fun mechanics within those limitations. EDIT: btw, your GitHub link is incorrect @JWColeman
  5. Well, to be honest, I didn't have any strict design; my only requirements were something that makes sense and isn't lame (like global energy), and that is doable in a compute shader. I've managed to come up with a decent solution, though: I maintain a coarse map of 8x8-tile groups, and for each power consumer I add its consumption to the respective tile group; then I simply propagate that number iteratively through the network, so when the electricity 'flows', it 'splits' according to the difference between the groups (a sketch of the propagation step is below). One easy way to visualize the solution is to think of the power consumers as having 'weight' that deforms the terrain accordingly, with electricity flowing down the hills. It's not exactly the same thing, because the flow is split according to the proportion of the 'slopes' toward neighboring cells, but you get the idea. In any case, thanks for replying @_WeirdCat_ 👍
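Roughly, one propagation pass looks like this on the CPU (the real thing would be the equivalent compute-shader pass with double buffering; all names here are illustrative, and the bound on `rate` is the usual stability limit for this kind of diffusion):

```cpp
#include <cstddef>
#include <vector>

// One cell per 8x8-tile group. `level` is the quantity being propagated
// (the 'height' in the terrain-deformation analogy).
struct Cell { float level = 0.0f; bool hasNetwork = false; };

// One relaxation pass: each networked cell pushes part of its surplus to
// lower neighbors, proportionally to the 'slope' toward each of them.
void propagate(const std::vector<Cell>& src, std::vector<Cell>& dst,
               int w, int h, float rate /* in (0, 0.25] for stability */) {
    auto idx = [w](int x, int y) { return static_cast<std::size_t>(y) * w + x; };
    dst = src;  // double buffering: read src, accumulate into dst
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const Cell& c = src[idx(x, y)];
            if (!c.hasNetwork) continue;
            for (int n = 0; n < 4; ++n) {
                int nx = x + dx[n], ny = y + dy[n];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                const Cell& nb = src[idx(nx, ny)];
                if (!nb.hasNetwork) continue;
                float slope = c.level - nb.level;
                if (slope > 0.0f) {             // flow 'downhill' only
                    float flow = slope * rate;  // split proportionally to slope
                    dst[idx(x, y)].level   -= flow;
                    dst[idx(nx, ny)].level += flow;
                }
            }
        }
}
```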
  6. An easy solution to this problem would be to simply split the world into chunks of, say, 8x8 tiles, store the energy in each of them, and propagate it iteratively in the shader (only in the chunks with a network, of course). Additionally, a vector field could be calculated from the energy 'flow' in order to propagate it directionally. The only problems are that there are gameplay implications, and that a vector field, even with half-precision floats, takes a lot of memory. Maybe a simplified direction could be an option. It would be nice if I could use 16 bits for the energy and 16 for the flow direction, but unfortunately the available types and interlocked operations are very limited.
  7. Hello there. I'm not 100% sure this is the appropriate sub-forum, but this question is probably more about programming than design. I'm making a game with some mechanics loosely inspired by Factorio (it plays like Terraria), and since I want to allow the player to build massive machines and factories, all the tilemap logic runs on the GPU, and the map data stays in GPU memory, since it's potentially several gigabytes. That implies many constraints, especially because there's no recursion in compute shaders and overall they're quite limited (I'm using DirectCompute with SM5; OpenCL is too cringy, SYCL is still in its infancy, CUDA is vendor-locked, and there's no console support for those technologies anyway).

So here's the problem: I want electricity to play a role in the game mechanics. The simplest solution by far is just making it global and adding/removing from it as generators/consumers are ticked, but that's gonna suck terribly, so I've been thinking about decent solutions that still meet the technical constraints. I thought about an electrified 'area' around the power generators, and also power lines like Factorio's (similar to SimCity). The problem is transmission: I'll have to calculate the electricity flow, and that's not simple to perform on the GPU. What I could do is make a bitmask for each power network, do a BFS on the CPU to attribute an ID to each connected power network (a sketch is below), and then, when ticking the power generators and consumers, get the ID of the power network for that tile and add/remove electricity from that network, so the electricity is stored in a value accessed through the ID for that map tile. One of the problems here is when power networks overlap in area but don't interconnect; I think exclusion is acceptable in this case (only one network can supply a tile), or networks that intersect could just be connected automatically.

The real problem, though, is the potentially catastrophic result of having too many networks. As I said, this is a really massive game with possibly hundreds of millions of actively updating 'tiles', and since each network requires not only a bitmask but also an entry in some array that stores the electricity, the number of networks would have to be capped well below a quantity that's reasonable for the overall magnitude of the game, maybe 16 or so, because passing data to the GPU is very slow. While it's entirely possible to arbitrarily limit the number of power networks, that goes against the basic premise of the game. Maybe you guys have another idea of how to handle this. The usual practices are already taken into account, like using reduced maps for the power grid (2x2 or maybe even 4x4), but those aren't solutions. A 'solution' would be to pass an int32 for each power-grid tile (holding the network ID), but that's prohibitive in terms of performance. It is possible to flood-fill the networks iteratively on the GPU so no data has to be passed, but that would not only cause some delay in converging to the correct solution, it would also require double buffering, and optimizations aren't going to be trivial: for starters, looping over the whole map and allocating values for the worst-case scenario could work, but optimizing that is going to be a major pain in the rear (especially compartmentalization) and definitely bug-prone, because it's a whole other layer dependent on the base map data that has to be kept in sync. So, do you guys have any other ideas on how this can be done?
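For reference, the CPU-side BFS labeling I have in mind would be roughly this (assuming the reduced power-grid occupancy map has already been read back from the GPU; names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Assign a network ID to every connected component of power-line cells.
// `hasLine[y*w + x]` marks cells covered by a power line; the result maps
// each cell to an ID (0 = no network), ready to be uploaded to the GPU.
std::vector<uint8_t> labelNetworks(const std::vector<bool>& hasLine,
                                   int w, int h, int maxNetworks) {
    std::vector<uint8_t> id(hasLine.size(), 0);
    int next = 1;
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    for (int sy = 0; sy < h; ++sy)
        for (int sx = 0; sx < w; ++sx) {
            std::size_t s = static_cast<std::size_t>(sy) * w + sx;
            if (!hasLine[s] || id[s] != 0) continue;
            if (next > maxNetworks) return id;  // hard cap (e.g. 16) reached
            std::queue<std::pair<int, int>> q;  // BFS over 4-connected cells
            id[s] = static_cast<uint8_t>(next);
            q.push({sx, sy});
            while (!q.empty()) {
                auto [x, y] = q.front();
                q.pop();
                for (int n = 0; n < 4; ++n) {
                    int nx = x + dx[n], ny = y + dy[n];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    std::size_t c = static_cast<std::size_t>(ny) * w + nx;
                    if (hasLine[c] && id[c] == 0) {
                        id[c] = static_cast<uint8_t>(next);
                        q.push({nx, ny});
                    }
                }
            }
            ++next;
        }
    return id;
}
```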
  8. Diablo:Immortal

    @CrazyCdn Personally, I'm also more into indie titles these days. I haven't played RimWorld because I don't really like micromanagement games, but it's certainly a game we'll never see a AAA studio roll out. Indie titles still have that 'hit and miss' feel, though: early access could be better, and many games are technically flawed. Steam also has a ton of asset flips these days; most are taken down, but it's a disservice to serious indie developers. For example, there's a game with assets clearly ripped without permission from Driver: San Francisco, where the developers didn't even remove the license plate text, and that's in one of their promotional screenshots: they're so confident Steam's curation is broken that they simply didn't care. And games like that are the vast majority of Steam games I find these days. As an indie developer I'm disgusted by that, and profoundly disappointed in Steam for allowing that kind of outrageous product to reach their virtual shelves. Cheap asset flips are also very common these days, or not even flips, just projects straight out of the Unity Asset Store, built and published on Steam as-is. It's not only unfair competition; it tarnishes the image of the whole indie community. On Steam, "Indie" is now synonymous with terrible-quality asset flips; at least that's what you get when you browse the "Indie" tag, where most games are simply pathetic. The strange part is that you get much higher quality on itch.io, which is a much more 'open' store, so one can only assume that as long as Steam is making money, they couldn't care less. I hope some day indies will have a decent store to sell their products, one that at least forbids blatant asset flips and games with illegally ripped copyrighted content. Maybe it's going to be itch.io, maybe GOG is a decent alternative, maybe the Humble guys... I don't know. But it's sad that Steam is the most important store for indies and yet has absolutely no respect for indies whatsoever, nor for the customers, because if you're selling these pathetic products, it's pretty clear you don't care.
  9. Diablo:Immortal

    The real question here is why you would expect a company like that not to take this path. While the current model of casual/accessible (some say 'dumbed down') games full of IAPs and shady practices works, we can only expect companies to go that route. Unfortunately, these days you can't really expect these large companies to have any respect or consideration for their customers.
  10. Diablo:Immortal

    @CrazyCdn I think the problem is that we're all far from the typical 'mainstream' games audience these days. I personally enjoyed the 3D Fallouts, New Vegas being the best by far, but they're distancing themselves from the original games so much that it doesn't really feel like a Fallout game anymore; at that point they lose me. But if for each old-time fan they lose they sell 10 more copies to new players, because of the more accessible 'Call of Duty' mechanics and less contradictory story predicaments, that's a win for them. Who am I to tell them they're wrong simply because I personally didn't like the new iterations or the overall direction the franchise is taking? I really feel your pain, but as you said, people are buying it. Bethesda wants to sell games, and they're making the games most people will buy. That's an irreversible tendency in my opinion, and the old fans will have to content themselves with indie alternatives, I guess... unless publishers decide to split their franchises into separate 'lite' and 'hardcore' versions, which has been done in the past, though I don't know how successfully. What will determine that is whether there's money to be made from games more similar to the old-school ones. Maybe a production like that wouldn't be profitable for a large studio to begin with.
  11. Diablo:Immortal

    I think in the end it all comes down to the false assumption that AAA studios are somehow committed to providing a quality product to their customers. They're not; they're committed to making money. We expect that providing a decent product is part of that process, but that's been proven false (FO76, why oh why :trollface:). Their success isn't measured by the quality of their products or the satisfaction of their audience, but by the quantity of money they make. It's as simple as that. Now, I'm not saying that producing a good game isn't part of the equation, but things like appealing to a broader audience might be a lot more important. Especially with the popularization of game development, I think niche games are going to be supplied by small and indie studios, while the big companies produce games that cater to the widest audience possible, which means a pretty low common denominator in terms of game mechanics, plus incredible visuals. I think it's an irreversible tendency that the big franchises are distancing themselves from their past in favor of lower entry barriers for players. Fallout these days is very different from the isometric versions, and it diverges further with each iteration; if you want a classic Fallout experience, indies have you covered with, say, Wasteland 2 or other 'spiritual successors'. The same clearly happened to some extent to Diablo, and in my opinion it will only *intensify* in the future. I personally lost interest in Diablo 3 when they went for a WoW art style, to be honest. What a shallow dude I am :trollface:. @JTippetts I miss the :trollface: so much :sad:.
  12. Diablo:Immortal

    Sad reality. They don't care about their fans; they have a brand name and will use it to make as much money as possible in the shortest time possible. Whether that's a wise long-term strategy is arguable, of course, but investors want money as soon as possible, and destroying the franchise in the process isn't one of their concerns.
  13. @ajmiles Well, I couldn't find that in the VS2017 installer. I'm on Windows 7, though; I don't know if that's why it doesn't show up for me. I think the problem is the feature level, anyway: I just ran the DirectXTK "SimpleSampleTK", and when I select FL11.0 from the dropdown list, the WARP device isn't available; it's only available for FL10.1 or below. Perhaps the WARP device that supports FL11 isn't available on Windows 7 systems?
  14. @ajmiles I've tried creating the device with `D3D_DRIVER_TYPE_WARP`, but for some reason it's failing with HRESULT 0x887A0004 (DXGI_ERROR_UNSUPPORTED). The code is like this:

```cpp
auto features = D3D_FEATURE_LEVEL_11_0;
HRESULT hr = D3D11CreateDevice(
    nullptr,
    D3D_DRIVER_TYPE_WARP,
    nullptr,
    debug ? D3D11_CREATE_DEVICE_DEBUG : 0,
    &features,
    1,
    D3D11_SDK_VERSION,
    &Gfx.device,
    nullptr,
    &Gfx.context);
```

Maybe there's some incompatible argument there, I don't know; perhaps passing a whole array of feature levels would behave better (sketch below). In any case, I'll try to determine where exactly the threshold is, and maybe make a simpler example with a 1D buffer. Also, I just realized that a window isn't necessary at all for a repro.
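For reference, the fallback I'll try is passing an array of feature levels, so the runtime can create the device at the highest level the WARP device actually supports (FL10.1 on my Windows 7 box, going by the SimpleSampleTK test) instead of failing outright. A sketch, assuming the same `Gfx` holder and `debug` flag as above:

```cpp
const D3D_FEATURE_LEVEL levels[] = {
    D3D_FEATURE_LEVEL_11_0,  // preferred
    D3D_FEATURE_LEVEL_10_1,  // what WARP appears to top out at on stock Windows 7
    D3D_FEATURE_LEVEL_10_0,
};
D3D_FEATURE_LEVEL achieved = D3D_FEATURE_LEVEL_10_0;
HRESULT hr = D3D11CreateDevice(
    nullptr,
    D3D_DRIVER_TYPE_WARP,
    nullptr,
    debug ? D3D11_CREATE_DEVICE_DEBUG : 0,
    levels,
    _countof(levels),
    D3D11_SDK_VERSION,
    &Gfx.device,
    &achieved,  // reports which level was actually created
    &Gfx.context);
```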
  15. @ajmiles Definitely not. I'm using Windows SDK 10.0.17134.0, which I believe is the latest. I only mentioned DXUT because I'm using SDL for windowing, and I assumed you wouldn't want that dependency, so maybe I could copy-paste some DXUT snippet to handle windowing through the Win32 API. I think DXUT is still being maintained, though I could be wrong, of course.