About Tape_Worm


  1. So, it’s been a long time since I’ve posted anything here, mostly because I haven’t been writing much code lately and real life has been my focus. Since I’ve been gone, I’ve become a software architect at my job, and that keeps me quite busy.

    Gorgon

    I’ve uploaded Gorgon v2 (such as it is) to GitHub. I’m not entirely happy with that version, and frankly I lost my motivation to finish it. As it stands, it’s feature complete, but may have bugs, and the editor application is in an unfinished state (it’s usable, but it’s got issues). From time to time I toyed with making a new version, but never really had the motivation to go beyond prototyping. I tried Unreal and Unity, and while they’re quite impressive, they seem to… I dunno, lack something, especially in the 2D arena. Also, as a programmer, and an oldschool one at that, it felt quite unnatural to build things up in an editor like that.

    However, lately I’ve been on a kick of watching game documentaries on YouTube (and, oddly enough, CBC.ca), and while watching a documentary series on game developers from the ’80s (henceforth known as the best decade), I felt compelled to have another kick at the can.

    So, how’s it going? Well… quite well, actually. I’ve got the core graphics library done, and the 2D renderer is well under way (I can render text and sprites). I’ve opted to study how modern rendering engines do their thing to see if I could find some inspiration for performance improvements, and I did. The core Gorgon library now renders using a stateless drawing mechanism, where an immutable draw call object, configured with immutable pipeline states (similar to a D3D 12 PSO), carries everything needed for a draw. It’s quite a departure from the older versions, but I’ve seen some impressive performance gains because of it.

    I’ve also decided that I absolutely HATE Direct3D 12. I get it: it’s wonderful for squeezing every last ounce of performance from your machine so you can get incredible visuals. But it is a complicated beast, leaving it to the developer to control almost every aspect of programming the GPU. It reminds me of immediate mode back in the D3D 3.0 days (nowhere near that bad, though). So, to that end, I’ve decided to base Gorgon’s core graphics API on the latest version of Direct3D 11 available in SharpDX, which is 11.4. Honestly, from all I’ve seen, Microsoft has no intention of abandoning 11.x, since it has a firmly rooted place in development due to how much simpler it is to use than 12. That may change in the future, but for now it’s good enough for me. And this is likely to piss some people off: this new version of Gorgon requires Windows 10. And no, I don’t care to hear about how I’m being spied on.

    So, you’re probably wondering how the rendering in the 2D API performs? Well, see the image: yes, that’s 131,072 sprites, plus text, rendering at above 60 FPS. That’s running on a GeForce GTX 1080, a pretty high end GPU. And you might think, “well, for that GPU, 70+ FPS isn’t that great.” But keep in mind, those are alpha blended, textured, animated sprites (all 131,072 of them). This little test/example app has been with me since 1.0, and it’s served me well as a quasi-benchmark. To give you an idea of how much of an improvement this is from v1 to now (all on the 1080):

    V1 – ~15 FPS with the same number of sprites, and a lower window resolution (800×600 vs 1280×800). This version used D3D 9.
    V2 – 28-30 FPS. Same resolution, using D3D 11.0.
    V3 – Well… look at the screenshot. It’s an improvement.

    So yeah, that’s where I’m at. Will I keep going and release this? Who knows… I do have a v3.0 branch up on the GitHub repo, which contains just the core API for now (and which, sadly, has some bugs that I’ve since patched while creating the 2D API). So, if you want, you can grab it and mess around with it. Just keep in mind that I offer no support for that version, and it will change over time and may break your stuff.
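The stateless, immutable-draw-call idea described above can be sketched roughly like this. This is a Python stand-in for the C# design, and every name in it is made up for illustration; the point is that frozen, hashable state objects can be cached and batched instead of mutating device state per draw:

```python
from dataclasses import dataclass

# Hypothetical immutable pipeline state -- analogous in spirit to a D3D 12 PSO.
# Because it is frozen (and therefore hashable), identical states can be
# compared and cached cheaply instead of being mutated between draws.
@dataclass(frozen=True)
class PipelineState:
    blend: str = "alpha"
    depth_test: bool = True
    shader: str = "sprite_vs/sprite_ps"

# Hypothetical immutable draw call: everything needed for one draw, bundled
# with its pipeline state, so submission carries no hidden mutable state.
@dataclass(frozen=True)
class DrawCall:
    vertex_buffer: str
    vertex_count: int
    state: PipelineState

def submit(calls):
    """Group draw calls by their (immutable) pipeline state to minimize
    state changes -- one batch per distinct state."""
    batches = {}
    for call in calls:
        batches.setdefault(call.state, []).append(call)
    return batches

state = PipelineState()
calls = [DrawCall("sprites", 6, state), DrawCall("text", 12, state)]
batches = submit(calls)
# Both calls share one immutable state object, so they land in a single batch.
```

This is only a sketch of the batching idea, not Gorgon's actual API; the real renderer submits these through D3D 11.4 via SharpDX.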
View the full article
  2. Tape_Worm

    VSSetConstantBuffers1 Problems

    It's a 256-byte minimum if you're going to offset into the buffer with xxSetConstantBuffers1. I'd assume that if you don't offset into the buffer, the actual minimum size is still 16 bytes.
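In byte terms: the `xxSetConstantBuffers1` offsets are expressed in 16-byte shader constants and must be multiples of 16 constants, i.e. 256 bytes. A quick sketch of the arithmetic (Python, illustrative only):

```python
CONSTANT_SIZE = 16        # one shader constant = 16 bytes
ALIGNMENT_CONSTANTS = 16  # offsets must be multiples of 16 constants
ALIGNMENT_BYTES = CONSTANT_SIZE * ALIGNMENT_CONSTANTS  # = 256 bytes

def align_offset(byte_offset):
    """Round a byte offset up to the next 256-byte boundary."""
    return (byte_offset + ALIGNMENT_BYTES - 1) // ALIGNMENT_BYTES * ALIGNMENT_BYTES

# A 64-byte matrix written at byte 64 is not at a legal offset;
# the next usable slot starts at byte 256.
assert align_offset(64) == 256
```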
  3. Tape_Worm

    VSSetConstantBuffers1 Problems

    Oh I know. It just makes things harder to explain when documenting the functionality I'm writing, especially since I've had it drilled into me that cbuffers need 16 byte alignment. Besides, I only need 1 matrix. Whatever would I do with 2? That's just madness.
  4. Tape_Worm

    VSSetConstantBuffers1 Problems

    And... of course... I just figured that out. I had misread the documentation, so I'd thought it was 16 bytes per element. That said, isn't that kind of wasteful? I mean, if a matrix is 64 bytes, and each element is 256 bytes, that leaves a lot of unused space. I had assumed constant buffer elements were laid out using 16 byte alignments, and that viewing into them in this way would follow similar alignment rules.
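The waste is easy to quantify. A back-of-the-envelope sketch (Python, illustrative; the buffer-sizing helper is hypothetical):

```python
MATRIX_BYTES = 64   # one 4x4 float matrix
SLOT_BYTES = 256    # minimum offset granularity for xxSetConstantBuffers1

wasted = SLOT_BYTES - MATRIX_BYTES      # 192 unused bytes per slot
waste_pct = 100 * wasted / SLOT_BYTES   # 75% of each slot is padding

def buffer_size(matrix_count):
    """Bytes needed to hold N matrices at one 256-byte slot each,
    rather than the N * 64 bytes the matrices alone would occupy."""
    return matrix_count * SLOT_BYTES
```

So yes, with one matrix per slot, three quarters of the buffer is padding; the usual mitigation is to pack more per-draw data into each 256-byte slot.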
  5. Tape_Worm

    VSSetConstantBuffers1 Problems

    Well, when I use anything less than a multiple of 16, the debug spew complains that I'm not using a multiple of 16.
  6. So I've been playing around today with some things in D3D 11.1, specifically the constant buffer offset stuff. Just FYI, I'm doing this in C# with SharpDX (latest version). I got everything set up: my constant buffer populates with data during each frame, and I call VSSetConstantBuffers1, passing in the offset/count as needed. But, unfortunately, I get nothing on my screen. If I go back to using the older D3D11 SetConstantBuffers method (without the offset/count), everything works great. I get nothing from the D3D runtime debug spew, and a look in the graphics debugger tells me that my constant buffer does indeed have data at the offsets I'm providing. The data (World * Projection matrix) is correct at each offset, and the offsets, according again to the graphics debugger, are correct. I could be using it incorrectly, but what little info I found (and seriously, there's not a lot) seems to indicate that I'm doing it correctly. Here's my workflow (I'd post code, but it's rather massive):

    Frame #0:
    • Map constant buffer with discard
    • Write matrix at offset 0, count 64
    • Unmap
    • VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // Where offset is the offset above, same with count
    • Draw single triangle

    Frame #1:
    • Map constant buffer with no-overwrite
    • Write matrix at offset 64, count 64
    • Unmap
    • VSSetConstantBuffers1(0, 1, buffers, new int[] { offset }, new int[] { count }); // Where offset is the offset above, same with count
    • Draw single triangle

    Etc... it repeats until the end of the buffer, and starts over with a discard when the buffer is full. Has anyone ever used these offset cbuffer functions before? Can you help a brother out?

    Edit: I've added screenshots of what I'm seeing in the VS 2017 graphics debugger. As I said before, if I use the old VSSetConstantBuffers method, it works like a charm and I see my triangle.
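The per-frame workflow above can be sketched as a ring buffer. This is a Python stand-in, not the actual SharpDX code; the D3D calls are represented only by the parameters they would receive. Note that VSSetConstantBuffers1 takes pFirstConstant/pNumConstants in units of 16-byte constants, and each view must start on a 256-byte boundary, which is why a 64-byte write offset fails (as the replies in this thread work out):

```python
MATRIX_BYTES = 64
SLOT_BYTES = 256      # each view must start on a 256-byte boundary
BUFFER_BYTES = 1024   # room for 4 slots in this sketch

def frame_params(frame_index):
    """Compute the map mode and cbuffer view parameters for one frame."""
    slot = frame_index % (BUFFER_BYTES // SLOT_BYTES)
    byte_offset = slot * SLOT_BYTES
    # DISCARD when wrapping back to the start, NO_OVERWRITE otherwise.
    map_mode = "DISCARD" if slot == 0 else "NO_OVERWRITE"
    # VSSetConstantBuffers1 wants units of 16-byte constants:
    first_constant = byte_offset // 16  # must itself be a multiple of 16
    num_constants = SLOT_BYTES // 16    # 16 constants = 256 bytes
    return map_mode, first_constant, num_constants

# Frame 0 discards; frame 1 appends at constant 16 (byte 256); frame 4 wraps.
```

So in the workflow quoted above, frame #1's matrix would need to go at byte 256 (pFirstConstant = 16), not byte 64.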
  7. Tape_Worm

    Programming Languages

    I hemmed and hawed over whether or not to point this out (I hate being the "um, actually" guy... it's annoying). But considering that this is "For Beginners", I feel it's important that newer programmers have accurate information, especially since I see so many people come into my workplace with a BSc in Comp Sci who end up having a lot of trouble understanding the tools they're using due to bad information, or misconceptions derived from observation.

    C#, as in the language that comes with .NET/Visual Studio, or Mono, is most certainly not interpreted. That said, it could be interpreted, just as C++ could be interpreted. But for all intents and purposes, it is not. C# with .NET/Mono compiles down to IL (intermediate language; it looks similar to assembly), and then the JIT (Just In Time) compiler compiles the IL into native machine code. Remember: JIT != Interpreted. Interpreted languages are not converted to assembly; they're executed by an interpreter, hence why they're called "interpreted".

    Be aware that any interpreted language is likely going to be much slower than a compiled one. Does that matter? The answer, like so many answers when it comes to gamedev, is: it depends. But I'm just throwing that out there, because it often does impact the decision of so many developers, gaming and otherwise. I can't speak for Python, but I think JavaScript is a little murkier on the subject, especially depending on the browser you're using. However, I am not an expert in either language, so someone else might be able to clear that up.
  8. I don't know if it's possible, but it'd be real nice if we could customize our front page portal. I just noticed the current layout change when I logged in, and I found it quite jarring and busy looking. I mean, I'll get used to it, so it's no big deal. But still, it'd be nice if we could define our own layout. Also, you could offer it exclusively to one (or more) of your GDNet+ tiers, thereby incentivizing people to sign up for that feature (I would upgrade my account for this + ad free in a heartbeat). To give an example: I really don't need (or want) the sections about contractors, game jobs, image of the day, upcoming events, who's online, etc... It'd be nice if I could hide these or collapse them in some way. I know this would not be trivial to implement, but if ever you get everything done, and you're so totally bored that you need something to do, then this is something you might want to consider?
  9. Tape_Worm

    visual studio code

    It'd be helpful if you told us what you did to solve it, if only to avoid this: https://xkcd.com/979/
  10. Tape_Worm

    Upvoting and Downvoting

    Yep that did it. I see it now. Thanks
  11. Tape_Worm

    Upvoting and Downvoting

    Yeah, don't see it. At first I thought it might be uBlock filtering it out, but I disabled it and still didn't see anything. Tried Edge, and still nothing. I mean it's not the end of the world, but I just figured you guys had finally done away with that system or something.
  12. Tape_Worm

    Upvoting and Downvoting

    You do? I can't see anything related to up or down voting. Which is too bad, because I wanted to give someone an upvote for giving me some good help there the other week. Or is this some new fangled method only the young folks understand like swiping SSE or NW or some such?
  13. Well damn. I remember reading about that ages back, but totally forgot about it. Makes a lot of sense now. That said, the docs for D3D11_CPU_ACCESS_FLAG should be updated to include a link to that info, eh? I've sent feedback regarding that. Thanks for the clarification. I'd give you an upvote or whatever they're using nowadays for that, but I have no idea how that system works anymore. Clearly I'm old and can't handle change.
  14. So last night I was messing about with some old code on a Direct3D 11.4 interface, trying out some compute stuff. I had set this thing up to send data in, run the compute shader, and then output the result data into a structured buffer. To read this data back on the CPU, I copied the structured buffer into a staging buffer and retrieved the data from there. This all worked well enough. But I was curious to see if I could remove the intermediate copy to stage and read from the structured buffer directly using Map. To do this, I created the buffer using D3D11_CPU_ACCESS_READ and a usage of default, and to my shock and amazement... it worked (and no warning messages from the D3D debug log).

    However, this seems to run counter to what I've read in the documentation for D3D11_CPU_ACCESS_FLAG, which indicates that CPU read access is only for staging resources; that part is what threw me off. Here, I had a structured buffer created with default usage, and a UAV (definitely bindable to the pipeline), but I was able to map and read the data. Does this seem wrong? I'm aware that some hardware manufacturers may implement things differently, but if MS says that this flag can't be used outside of a staging resource, then shouldn't the manufacturer (NVidia) adhere to that? I can find nothing else in the documentation that says this is allowed or not allowed (beyond the description for D3D11_CPU_ACCESS_READ), and the debug output for D3D doesn't complain in the slightest. So what gives? Is it actually safe to do a map & read from a default usage resource with CPU read flags?
  15. I use a program called DB Browser for SQLite for my front end. Is it good? Meh, it's good enough; there's probably better out there if you look. I don't know of anything that'd convert from MDB to SQLite for you (but I haven't looked either; I haven't had the need to). If you can't find anything, you could write yourself a little one-off console app (in 32-bit, to keep Access happy) that just creates the SQLite database and transfers the data from your Access database into SQLite. Then you can discard the Access stuff for good and use that SQLite database going forward. That really shouldn't be too difficult to do.

    You can totally make a SQLite database for Database B at runtime. It's super easy to do. Hell, if you want, you can attach the two SQLite databases together at run time (Access provided similar functionality with linked tables, if I recall) and transfer data using just SQL queries, without having to write code to loop through a record at a time or whatever. Or you could just read from those tables in A and write to B using the same database connection, if you attach the databases. Whatever you choose is up to you, but those are some of the capabilities you can take advantage of.

    One tip, though: to get maximum performance out of SQLite when adding/updating a lot of records, do it in a transaction. It's blazingly fast that way. If you don't, massive inserts/updates can be pretty slow.
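The transaction and ATTACH tips above, sketched with Python's built-in sqlite3 module (the table names and in-memory databases are made up for the example; a real migration would open files on disk):

```python
import sqlite3

# In-memory stand-ins for "database A" and "database B".
a = sqlite3.connect(":memory:")
a.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# Bulk insert inside one transaction: far faster than autocommitting
# each row individually.
with a:  # the connection as a context manager wraps a transaction
    a.executemany("INSERT INTO items (name) VALUES (?)",
                  [(f"item {i}",) for i in range(1000)])

# Attach a second database and copy rows with plain SQL --
# no per-row loop in application code needed.
a.execute("ATTACH DATABASE ':memory:' AS b")
a.execute("CREATE TABLE b.items (id INTEGER PRIMARY KEY, name TEXT)")
with a:
    a.execute("INSERT INTO b.items SELECT * FROM items")

count = a.execute("SELECT COUNT(*) FROM b.items").fetchone()[0]
```

The same `BEGIN` / `INSERT ... SELECT` / `COMMIT` pattern applies from any language binding; the point is the single transaction and the attached database doing the copy in one statement.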