
## Popular Content

Showing content with the highest reputation since 10/20/17 in all areas

1. 9 points

## Do you think game development costs will eventually go back down again?

If you think about it, the cost of making games has gone down a lot. You can make Pac-Man for a fraction of what it cost when it was first released. A small home team could make a full GTA 1 clone for much less than it originally cost to make. The problem is that the ambitions of developers and the expectations of players are always driving up costs. So I think making games will always be expensive, no matter how good we get at it.
2. 7 points

## System that casts base class to derived class

Generally speaking this is a bad idea. If you need to render each of these objects in a different way, there's not much point having them all in the same container, which in turn brings into question whether it's worthwhile having them derive from Renderable at all. (Deriving from a base class so that you can avoid typing out those 2 buffers for each derived class is not a good reason to do it.)
6. 6 points

## Game data container

No, you don't. And that is how you solve the problem. Don't try to look for one magical pattern or process to solve this issue. What you need to do is look at each of these things, and find ways to refactor the code so that you don't need to pass them. Keep doing that until you're happy with the state of the code. An example to get you started: an Enemy class may need to be able to know where the nearest Player is, to attack it. And it might need to know about all other world objects, so it can walk around them. Instead of passing in "list of all world objects" and "list of all players", pass in a GameWorld object, and ensure that GameWorld has functions like "FindNearestPlayer" or "FindNearbyWorldObjects" so the Enemy can query for what it needs from one single object that represents the environment.
7. 6 points

## Writing api agnostic rendering layer. How to design and where to start?

I wouldn't worry too much about the use of runtime polymorphism. It's way down on the list of interesting or useful problems. The key point you need to think about is, what level will the abstraction exist on? There are two equally valid ways to do this. You can create "API-level" abstractions for buffers, textures, shaders, etc and implement common interfaces for the different APIs. Or you can define high level concepts - mesh objects, materials, scene layouts, properties - and let each API implementation make its own decisions about how to translate those things into actual GPU primitives and how to render them. There are a lot of trade-offs inherent in both approaches.
8. 6 points

## Origins of Open GL libraries

His point was that it does -- when you install the platform SDK you get d3d11.h and you get gl.h (or more importantly Wingdi.h and Opengl32.lib) -- you can't install the Windows "D3D SDK" without also getting the Windows "GL SDK". Look at the source of GLUT/etc... GLUT is in no way essential to GL; it's an application framework for people who don't want to write their own main loop. It automates more Win32 programming than it does GL programming. It would be more interesting to look at the source for something like GLEW. Khronos writes the API specifications, and from the API specification you can automatically generate the full list of enums, structures and function signatures yourself -- many projects do exactly that. e.g. see https://raw.githubusercontent.com/KhronosGroup/OpenGL-Registry/master/xml/gl.xml which could be used to generate your own gl.h file. On Windows, you don't talk to your GL driver directly, because it could come from NVidia/AMD/etc..., so you need a middle-man to load the GL implementation for you and allow you to connect to it. Every OS/platform has its own separate API for doing this step. On Windows, it's wgl, and the most important part is wglGetProcAddress. You pass it the name of a GL function, and it returns a function pointer to that function in the implementation (NVidia's/AMD's GL driver). You then cast this function pointer to the correct type (using a typedef that you automatically generated from Khronos' specs) and now you're able to call this GL function. Other platforms are similar, except with their own APIs for looking up GL functions, such as glXGetProcAddress, eglGetProcAddress, etc...
9. 5 points

## Thank you, GameDev.net!!!

A little over a month ago, some guy noticed me by my nickname on another website (I use Embassy of Time several places), and asked if I was the one who also posted on GameDev. I said yes. Apparently, he was amongst those reading my scientific ramblings (like this or this) on the site. And he also happened to be a small-time member of a network of personal investors, so-called "business angels". Now, I've run a company before (web TV and 3D animation, not game development), so I know that a lot of people make big claims, and even if those claims are true, you don't win the lottery from just being noticed. But it was an interesting talk. Then, about a week ago, he contacted me again. A couple of his colleagues (I have no idea what investors call each other) wanted to see a project suggestion on some of the things we talked about. Part of why they wanted to see this was that they had a look at my blog in here and wanted to know more. So now, I am working on a presentation of some of the things I have worked with on a serious science-based game. I am pretty nervous, and very open to ideas from people in here on how to dazzle these folks! It's not a big blog entry this time, I know, but I felt like letting people here know, and giving a big thanks to GameDev.net for being a community where some lunatic with a science fetish (me) has a chance to get noticed! If this works out well, I definitely won't forget you!
10. 5 points

## How to reduce data sizes?

Particles might number in the millions, but we don't try to send them across the network. The number of objects you can send is inversely proportional to the amount of data each one needs. Sorry for such a flippant answer, but there's no trick or magic here. You can send as much data as your network bandwidth allows (nothing much to do with packet size, incidentally), and the less data you need per entity, and the less frequently you want to update them, the more entities you can update. To get transmission sizes down, you need to think in terms of information, not in terms of data. You're not trying to copy memory locations across the wire, you're trying to send whatever information is necessary so that the recipient can reconstruct the information you have locally. e.g. If I want to send the Bible or the Koran to another computer, that's hundreds of thousands of letters I need to transmit. But if that computer knows in advance that I will be sending either the Bible or the Koran, it can pre-store copies of those locally and I only have to transmit a single bit to tell it which one to use, as there are only 2 possible values of interest here. Similarly, if I want to send a value that has 8 possible values - i.e. the directions we talked about - that's just 3 bits. I could pack 2 such directions into a single byte, and leave 2 bits spare. Or I could send 8 directions and pack them into 3 bytes (3 bytes at 8 bits per byte is 24 bits to play with). If you're not comfortable with bitpacking, maybe read this: https://gafferongames.com/post/reading_and_writing_packets/
11. 5 points

## Will it cast?

Yes, read IEEE 754 for more detail. The range of integers that can be exactly represented by a single precision float is from -0x01000000 to 0x01000000.
12. 5 points

## When/How often should I be using forward declaration?

My personal preference is to forward declare only what I need to use, and then promote to a full dependency as infrequently as possible. So if you need to use another type, forward declare when you can, but don't just throw around declarations for the fun of it.
13. 5 points

## Weird behavior for a function with optimizations turned on

I'ma just leave this here: https://godbolt.org Also, your code has a lot of room for improvement. Your API for GetProcessorName() is bad. Please don't ever write a function like that. You should either return a string object or fill a string buffer, but not both in one function. As you are discovering, this is a recipe for confusion and unhappiness. Please don't use C-strings. You have a number of issues in this code that indicate (1) you are not familiar with how C-strings work, (2) you aren't thinking carefully enough about how you manipulate C-strings, or (3) both. For example, you confuse allocation size with string length in a couple of places, and your attempts to account for NULL terminators look wrong to me. The loop-style invocation of __cpuid is overly complex and needlessly busy if the CPU returns a huge number of capabilities/extended IDs. You can write this simply as a single if check and 3 successive __cpuid calls with no loop.
14. 5 points

## Is Phil Fish a Jerk?

From watching those films, I could see him as a dick but also really empathise with him and see him as someone who's under too much stress and not aware of how other people are going to interpret and twist what they say. I think he's the perfect example of consumers enjoying hating a creator, which is a terrible phenomenon. I don't know him personally so I can't judge.
15. 5 points

## Variadic templates and tuples of wrapped types

Off the top of my head (untested): std::tuple<TypedMap<Ts>...> Storage;
16. 5 points

## Is it possible to get STL vector like debugging features for a non STL vector class

^That^ Also, as a tip for the 'watch' debugging window - say you've got something like: struct MyVec { int* begin; int* end; }; MyVec v = ...; In the watch window you can type "v.begin" but it will only show the first element... So instead you can type "v.end-v.begin" to see the size -- and let's say it's 42 -- then you can type "v.begin,42" and Visual Studio will display the entire array in the watch window.
17. 5 points

## Is it possible to get STL vector like debugging features for a non STL vector class

Yes, you can use the 'Natvis' system to customise how any type is displayed in the Visual Studio debugger. https://docs.microsoft.com/en-gb/visualstudio/debugger/create-custom-views-of-native-objects
18. 5 points

## How do people make games without a commercial engine?

Obviously it is viable - if it is possible to write an engine, and it is possible to write a game using an engine, it is possible to do both. Whatever the engine-makers deemed important to create, you can create yourself. Libraries help, because they are basically pre-packaged bits of code that other people wrote, which you can use. For example, you might use a library to load 3D model formats, or to play back audio. There are many of these, and most are available for the popular programming languages like C++, C#, Java, and Python. Another word you might hear is a 'framework' - this is usually a big library, or a collection of libraries, that does lots of different things, but which works well as a whole. SDL and SFML are frameworks for C++ which give you a lot of game-making functionality for free. An engine is basically just the logical extension of this idea - it's typically a framework that is very fully-featured and which comes with its own editor which lets you create and test levels. Unity is an example of an engine that uses C# for its code, and Unreal is an engine that uses C++. If your main aim is to be productive at making games in the short to medium term, then starting with an engine is probably a good idea. Some people prefer to learn the fundamentals and like starting with a more primitive programming environment and a simpler framework - Python and Pygame is a popular pairing, for example. Each route has pros and cons. It would be a very tough job to make a game in a 24 hour jam without using at least one game framework or a bunch of good libraries, but that's not to say it isn't possible for the right person.
19. 5 points

## Writing api agnostic rendering layer. How to design and where to start?

The cost of virtual functions is usually greatly exaggerated in many posts on the subject. That is not to say they are free, but assuming they are evil is simply short-sighted. Basically you should only concern yourself with the overhead if you think the function in question is going to be called >10000 times a frame, for instance. An example: say I have the two API calls "virtual void AddVertex(Vector3& v);" & "virtual void AddVertices(Vector<Vector3>& vs);". If you add 100000 vertices with the first call, the overhead of the indirection and lack of inlining is going to kill your performance. On the other hand, if you fill the vector with the vertices (where the addition is able to be inlined and optimized by the compiler) and then use the second call, there is very little overhead to be concerned with. So, given that the 3D APIs do not supply individual vertex get/set functions anymore and everything is in bulk containers such as vertex buffers and index buffers, there is almost nothing to be worried about regarding usage of virtual functions. My API wrapper around DX12, Vulkan & Metal is behind a bunch of pure virtual interfaces and performance does not change when I compile the DX12 lib statically and remove the interface layer. As such, I'm fairly confident that you should have no problems unless you do something silly like the above example. Just keep in mind there are many caveats involved due to CPU variations, memory speed, cache hit/miss based on usage patterns, etc., and the only true way to get numbers is to profile something working. I would consider my comments as rule-of-thumb safety in most cases, though.
20. 5 points

## Node Graphs and the Terrain Editor

I've been working on the node graph editor for noise functions in the context of the Urho3D-based Terrain Editor I have been working on. It's a thing that I work on every so often, when I'm not working on Goblinson Crusoe or when I don't have a whole lot of other things going on. Lately, it's been mostly UI stuff plus the node graph stuff. The thing is getting pretty useful, although it is still FAR from polished, and a lot of stuff is still just broken. Today, I worked on code to allow me to build and maintain a node graph library. The editor has a tool, as mentioned in the previous entry, to allow me to use a visual node graph system to edit and construct chains/trees/graphs of noise functions. These functions can be pretty complex: I'm working on code to allow me to save these graphs as they are, and also to save them as Library Nodes. Saving a graph as a Library Node works slightly differently than just saving the node chain. Saving it as a Library Node allows you to import the entire thing as a single 'black box' node. In the above graph, I have a fairly complex setup with a cellular function distorted by a couple of billow fractals. In the upper left corner are some constant and seed nodes, explicitly declared. Each node has a number of inputs that can receive a connection. If there is no connection, when the graph is traversed to build the function, those inputs are 'hardwired' to the constant value they are set to. But if you wire up an explicit seed or constant node to an input, then when the graph is saved as a Library Node, those explicit constants/seeds will be converted to the input parameters for a custom node representing the function. For example, the custom node for the above graph looks like this: Any parameter to which a constant node was attached is now tweakable, while the rest of the graph node is an internal structure that the user can not edit. 
By linking the desired inputs with a constant or seed node, they become the customizable inputs of a new node type. (A note on the difference between Constant and Seed. They are basically the same thing: a number. Any input can receive either a constant or a seed or any chain of constants, seeds, and functions. However, there are special function types such as Seeder and Fractal which can iterate a function graph and modify the value of any seed functions. This is used, for example, to re-seed the various octaves of a fractal with different seeds to use different noise patterns. Seeder lets you re-use a node or node chain with different seeds for each use. Only nodes that are marked as Seed will be altered.) With the node graph library functionality, it will be possible to construct a node graph and save it for later, useful for certain commonly-used patterns that are time-consuming to set up, which pretty much describes any node graph using domain turbulence. With that node chain in hand, it is easy enough to output the function to the heightmap: Then you can quickly apply the erosion filter to it: Follow that up with a quick Cliffify filter to set cliffs: And finish it off with a cavity map filter to place sediment in the cavities: The editor now lets you zoom the camera all the way in with the scroll wheel, then when on the ground you can use WASD to rove around the map seeing what it looks like from the ground. Still lots to do on this, such as, you know, actually saving the node graph to file. But already it's pretty fun to play with.
21. 4 points

## What do you think can help stop cheating in PUBG

Why can't it be? What are you basing that on? It sounds like you're just repeating things that you've heard rather than basing this on development experience. It's currently lagging because they don't have a dedicated server, so you're relying on other players on vastly different connections to share game-state instead of having a single reliable connection to a data centre. If 100 players all simultaneously fire a 600RPM rifle at a target that's 3km away, then after three seconds there will be 3000 projectiles in the air. If the server ticks at 20Hz it needs a ray-tracer capable of 60K rays/s, which isn't much - it would only consume a tiny fraction of the server's CPU budget. If every bullet needs to be seen by every client (e.g. to draw tracers) then this would cause a burst of about 72kbit/s in bandwidth, which is fine for anyone on DSL, and a 7mbit/s burst on the server, which is fine for a machine in a gigabit data centre. None of this impacts latency because that's not how that works... And if not every bullet has visible tracers then these numbers go down drastically. I quickly googled and apparently PUBG does already run on dedicated servers in data centres? Maybe they still do client-side hit detection for some reason though...
22. 4 points

## Marching cubes

I have had difficulties recently with the Marching Cubes algorithm, mainly because the principal source of information on the subject was kinda vague and incomplete to me. I need a lot of precision to understand something complicated. Anyhow, after a lot of struggle, I have been able to code in Java a less hardcoded program than the given source, because who doesn't like the cuteness of Java compared to the mean-looking C++? Oh, and by hardcoding, I mean something like this: cubeindex = 0; if (grid.val[0] < isolevel) cubeindex |= 1; if (grid.val[1] < isolevel) cubeindex |= 2; if (grid.val[2] < isolevel) cubeindex |= 4; if (grid.val[3] < isolevel) cubeindex |= 8; if (grid.val[4] < isolevel) cubeindex |= 16; if (grid.val[5] < isolevel) cubeindex |= 32; if (grid.val[6] < isolevel) cubeindex |= 64; if (grid.val[7] < isolevel) cubeindex |= 128; By no means am I saying that my code is better or more performant. It's actually ugly. However, I absolutely loathe hardcoding. Here's the result with a scalar field generated using the coherent noise library Joise:
24. 4 points

## Virtuals hide information on the implementation

Are you sure this matches in your code? You need to store pointers to the objects rather than the base objects themselves. The way the language usually enforces this is for your code to make the base class abstract so it cannot be instantiated. They are commonly called an ABC, or Abstract Base Class, because of this. Except in rare cases, only the endmost leaf classes should be concrete; everything above them in the hierarchy should be abstract. In that case, are you certain you are following LSP? Are the sub-objects TRULY able to be substituted for each other? Code should be able to move back up the hierarchy to base class pointers, but should never need to move back out the hierarchy, instead using functions in the interface to do whatever work needs to be done. As a code example: for( BaseThing* theThing : AllTheThings ) { theThing->DoStuff(); } versus: for( BaseThing* theThing : AllTheThings ) { // Might also be a dynamic cast, or RTTI ID, or similar auto thingType = theThing->GetType(); if( thingType == ThingType.Wizard ) { theThing->DoWizardStuff(); } else if( thingType == ThingType.Warrior ) { theThing->DoWarriorStuff(); } else ... ... // Variations for each type of thing } Also, in general it is a bad idea to have public virtual functions. Here is some reading on that by one of the most expert among C++ experts. This is different from languages like Java where interface types are created through a virtual public interface. Some people who jump between languages forget this important detail. As the code grows those public virtual functions end up causing breakages as people implement non-substitutable code in leaf classes, as it appears you have done here.
44. 3 points

## Recursion Quick Sort

Try walking through a visualization (like https://visualgo.net/en/sorting - click QUI on the top bar to switch to QuickSort).
45. 3 points

## Reinventing the wheel

Reinventing a crude wheel is good if you want to understand wheels better. For the sake of education, limited reinvention is generally beneficial. Reinventing a fancy wheel is a waste of time unless you want to be in the business of selling fancy wheels. Doing everything from scratch is rarely merited. Even then, you're most assuredly building upon other people's experience and expertise (or you're going to be a naive failure). For pragmatic decisions, only one question matters: is rolling your own going to result in a dramatic improvement over the available state of the art, so much so that it offsets the cost of doing it? If so, go for it. That's how advancements happen. Otherwise... don't waste your time and effort. Play to your strengths. I have done no research besides reading the article in question, but I sincerely doubt they are referring to writing every last bit of code from scratch. That's crazy pants. The splash screens for Witcher 3 show a dozen different middleware packages in use just in that title; obviously RED is not averse to using other people's "wheels" when it makes sense to do so. As for what you should do... It doesn't matter. Learn what you want to learn, ship what you want to finish. Be flexible. Hard and fast rules are useless.
46. 3 points

## Reinventing the wheel

Like everything, the answer is "it depends". Are you a massive studio with resources to burn and the expertise to create new and better wheels? Reinvent away! Are you a one-man indie dev who has limited time and money? I would ask myself if reinventing the wheel is the best use of my limited time. Even that's just one axis on the "should I reinvent this wheel" chart. Are you doing this project for fun or profit? What will the return be on this custom wheel? Are you really good at making wheels, or are you more of a flash paint job kinda guy? Do your (potential) customers love wheels, or will they not even see them?
47. 3 points

## Questions on buffers and their lifetime in GPU memory

Before Windows 10 (WDDM 1.x), the driver was responsible for submitting an "allocation list" and "patch location list" alongside every command buffer. These lists served two primary purposes: it let the kernel-mode driver patch addresses in the command buffer with actual physical memory addresses (this old model assumed that GPUs didn't have virtual memory capabilities), and it let the video memory manager (VIDMM) know which resources were being referenced by the command buffers. That second part is tied to residency: the video memory manager would go off those lists of referenced resources when it would try to determine what to keep resident in video memory and what to evict to system memory. So with that in mind, the act of "binding" a resource can indirectly affect residency, assuming that you issue a Draw or Dispatch call that actually references that binding. In other words, if you bind your buffer every frame and use it in 1 draw call, it's less likely to get evicted than if you didn't reference it at all for a while. With WDDM 2.0, the patch and allocation lists are gone for GPUs that support virtual addresses. Under this model residency is explicit from an OS point of view: it just responds to requests to evict resources or make them resident. In D3D12 these actions are made directly available to you on a per-resource or per-heap basis. In D3D11 you don't have those controls, so instead it's up to the driver to make changes to residency automatically, which it will (likely) do based on which resources are referenced by your Draw/Dispatch calls.
48. 3 points

## Why can it be safe to sell an open-source game library?

> To sell a C++ game library/framework/engine, does it usually have to be open-sourced?

No.

> For a C++ template class (real template), it is quite hard not to be open-sourced, correct?

Yes, in the sense of "the code is visible". 'Open source' usually implies the use of certain licenses, and using those is entirely your choice (unless you're using someone else's code too...)

> If it is open-sourced, how to protect the copyright?

Keep an eye out for infringement, then send lawyers.

> How to enforce people to just use it, but not copy?

"Open source" usually implies that copying is okay, since most open source licenses say so. But if you're selling a library with full source code but don't want it copied, it's just down to the lawyers again.

> ... I think it is practically very hard, because people can see the source code, and "rewrite" it.

That's not a question.

> Does a shop like Unity's asset store make things safer? If so, how?

No. You can sell libraries there without including the source, but you could do that anyway.
49. 3 points

## Octagon-Square tiling for world map

Well, the good thing about 3D is that you can use as much geometry as you need to. In this case, to add a river you could simply subdivide the ground tile mesh to a proper detail level, then displace the 'river' parts downward. Then add a water tile for the water surface: You can even scatter a few doodad meshes as in the above, to 'dress up' the water's edge. Again, that tile is drawn in 3 passes: ground, then water, then rocks. (Although, in a 3D engine, if the water material is partially transparent it would typically be drawn in an alpha pass after the solids.) No complicated stitching required, just 3 meshes. (Or, 3 batches, anyway; the clutter could be built as a batch using instanced meshes.)
50. 3 points

## Motion Capture pipeline

My friend runs a motion capture company here in Seattle. Here's his website: http://www.mocapnow.com/ They offer motion capture services for companies large and small. Usually, you'd rent their studio for a day or two, bring in an actor, they'd help you set up the trackers and run their system, and then you'd capture motion data. After you capture the motion data, you may have to do some data cleanup. Generally speaking, what you're trying to do is map an actor's bone structure to your animation bone structure and replay the actor's bone rotations on the animation bone rotations. It's important to note that you only want to capture bone rotations, not bone positions (because bones don't stretch!). You have two options: you can try to set up your own mocap hardware/software, or you can rent out a studio for a day or two. If you look at the costs between the two options, renting the mocap studio is generally far more economical. If you set up your own mocap studio, you are going to be purchasing hardware, space, and spending time to set up and get proficient with the technology. Not only does it cost money, it also costs you a lot of time (and time is money!), and you don't have seasoned experts to help. I would set up an internal mocap studio ONLY if I had a ton of mocap to do on a frequent basis (ie, large company with heavy production). If you're just an indie or a hobbyist, rent it out. Obviously, you want to plan out your shot list before renting studio time...
