Leaderboard


Popular Content

Showing content with the highest reputation since 10/21/17 in all areas

  1. 9 points
    If you think about it, the cost of making games has gone down a lot. You can make Pac-Man for a fraction of what it cost when it was first released. A small home team could make a full GTA 1 clone for much less than it originally cost to make. The problem is that the ambitions of developers and the expectations of players are always driving up costs. So I think making games will always be expensive, no matter how good we get at it.
  2. 7 points
    Generally speaking this is a bad idea. If you need to render each of these objects in a different way, there's not much point having them all in the same container, which in turn brings into question whether it's worthwhile having them derive from Renderable at all. (Deriving from a base class so that you can avoid typing out those 2 buffers for each derived class is not a good reason to do it.)
  3. 7 points
    Last January I was in the same situation as you, and since then I have spent a lot of time writing an abstraction layer on top of DX11, DX12 and Vulkan. Here are some tips; however, I am by no means an expert, so take everything I say with a grain of salt.
    Just go for the pure virtual interface, if that is what you are comfortable with. As others have stated, the performance does not need to be a problem if you think about how often you call the backend through the API.
    Decide early if you want to support new, lower-level APIs such as DX12, Vulkan or Metal. And if you do, develop first and foremost for those. It is a lot easier to write a DX11 or OpenGL backend for a low-level API than to write a DX12 or Vulkan backend for a high-level API. Certain things, such as fences, can just be skipped when writing a DX11 backend. But good luck trying to write a backend for DX12 when the API has no concept of a fence.
    Even if you don't plan on supporting low-level APIs right now, think about bringing some of their concepts into your API, such as PSOs and command buffers. At least I much prefer packaging state together into PSOs rather than setting state all over the codebase. They tend to make the app code cleaner, and might even help you do optimizations such as hashing state and avoiding unnecessary backend calls.
    If you plan to support Vulkan, make render passes a first-class citizen of the API. They are the only way to render in Vulkan, and they are easy to emulate in the other APIs.
    Make a unified shader system. When I create a PSO, I load a shader blob from disk and pass it to the renderer. The app programmer never needs to know which underlying shader language is used. The blob contains "language slots" for all supported backends. E.g. the blob might contain HLSL bytecode for DX11/12, SPIR-V for Vulkan and GLSL for OpenGL. The backend pulls the correct code out of the blob and the others are unused. The blob also contains uniform metadata, i.e. which binding slot is used for the constant buffer named "cbLighting", etc.
    Don't write shaders in multiple languages. Cross-compile HLSL into what you need. glslangValidator supports HLSL out of the box, and there are HLSL-to-GLSL cross compilers available. E.g. https://github.com/LukasBanana/XShaderCompiler seems promising.
    These are some things I have found helpful. Hope it helps!
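    To make the "pure virtual interface plus PSO-style state" idea concrete, here is a minimal sketch of what such an API surface might look like. All names (IDevice, IPipelineState, PipelineStateDesc) are invented for illustration and are not taken from any particular library:

        #include <cstdint>
        #include <memory>
        #include <vector>

        // Hypothetical pipeline state: shader blobs and fixed-function state bundled together,
        // mirroring the D3D12/Vulkan PSO model so low-level backends can map onto it directly.
        struct PipelineStateDesc {
            std::vector<uint8_t> vertexShaderBlob;   // backend-specific bytecode pulled from the shader blob
            std::vector<uint8_t> pixelShaderBlob;
            bool depthTestEnable = true;
            bool blendEnable = false;
            // ... rasterizer state, input layout, render target formats, etc.
        };

        class IPipelineState {
        public:
            virtual ~IPipelineState() = default;
        };

        // Pure virtual device interface; a DX11, DX12 or Vulkan backend implements it once.
        class IDevice {
        public:
            virtual ~IDevice() = default;
            virtual std::unique_ptr<IPipelineState> CreatePipelineState(const PipelineStateDesc& desc) = 0;
            // Keep calls coarse-grained (whole command buffers, whole PSOs) so the
            // virtual-dispatch overhead stays negligible.
            virtual void Submit(const std::vector<uint8_t>& recordedCommands) = 0;
        };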
  4. 6 points
    Hello! I would like to introduce Diligent Engine, a project that I've been recently working on. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, but at the same time provide support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front-end for all supported platforms and provides interoperability with the underlying native API. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub. The engine contains a shader source code converter that allows shaders authored in HLSL to be translated to GLSL. The engine currently supports Direct3D11, Direct3D12, and OpenGL/GLES on Win32, Universal Windows and Android platforms.
    API Basics
    Initialization
    The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...

        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the dll and import the GetEngineFactoryD3D12() function
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12( EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0 );
        pFactoryD3D12->CreateSwapChainD3D12( pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain );

    Creating Resources
    Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate the BufferDesc structure and call IRenderDevice::CreateBuffer().
    The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer( BuffDesc, BufferData(), &m_pConstantBuffer );

    Similarly, to create a texture, populate the TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "My texture 2D";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture( TexDesc, TextureData(), &m_pTestTex );

    Initializing Pipeline State
    Diligent Engine follows the Direct3D12 style to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).
    Creating Shaders
    To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:
    SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
    SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
    SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL to HLSL converter.
    To allow grouping of resources based on the expected frequency of change, Diligent Engine introduces a classification of shader variables:
    Static variables (SHADER_VARIABLE_TYPE_STATIC) are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
    Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
    Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.
    This post describes the resource binding model in Diligent Engine.
    The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader( Attrs, &pShader );

    Creating the Pipeline State Object
    To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

        // This is a graphics pipeline
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

    The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RasterizerDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

    When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

    Binding Shader Resources
    Shader resource binding in Diligent Engine is based on grouping variables into 3 different groups (static, mutable and dynamic). Static variables are variables that are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
    They are bound directly to the shader object:

        PixelShader->GetShaderVariable( "g_tex2DShadowMap" )->Set( pShadowMapSRV );

    Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

    Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

    The difference between mutable and dynamic resources is that mutable ones can only be set once for every instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to properly set the variable type as this may affect performance. Static variables are generally most efficient, followed by mutable; dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.
    Setting the Pipeline State and Invoking Draw Commands
    Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state should be bound to the device context:

        // Clear render target
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

    Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

    When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() can be used to execute a compute command. Note that for a draw command the graphics pipeline must be bound, and for a dispatch command the compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

    Build Instructions
    Please visit this page for detailed build instructions.
    Samples
    The engine contains two graphics samples that demonstrate how the API can be used. The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. It can also be thought of as Diligent Engine's "Hello World" example. The Atmospheric Scattering sample is a more advanced one. It demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The engine also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations. Every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.
    Integration with Unity
    Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. GhostCubePlugin shows an example of how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.
  5. 6 points
    Absolutely not a jerk. At least, not a jerk for saying "I am done, I am not finishing this, leave me alone forever." It is not a matter of saying "suck it up" or "handle trolls better" or "get a thicker skin". Many successful game developers become targets. Those aren't just minor issues; they become repeated targets of major felony crimes, including death threats against themselves and their families, sometimes with their names, addresses, photos, and the schools they attend. People on my team have received death threats, including death threats accompanied by doxxing attacks. It is stressful to everyone involved, including the police and FBI agents who were brought in to investigate. It changes the workplace environment to know that your designer's kids have death threats with their pictures and school associated with them. You know the designer won't be able to focus on making the best game; no matter how hard they're working, they will always have in the back of their mind the thought of their child with a target on them. The toxic cultures that some sub-groups encourage are a horrible thing, and it is something the industry does very little to correct. Much of the casual slang used by some immature gamers, comments like "go kill yourself" for petty annoyances, is completely unacceptable in any other social context. In many contexts it would be enough to get a person fired, arrested, or hit with civil penalties. Many sub-groups are extremely aggressive digitally. Then there are people who engage in swatting, which has gotten people shot, and who think it is a joke for Internet gold. But the highly toxic gaming culture not only accepts vitriol as normal, many sub-cultures of fans praise and promote the hatred, the attacks, the insults. In some sub-cultures these are not seen as the actual crimes of blackmail or stalking or harassment or criminal threats, but instead as a victory for their group in inducing change. Imagine if the roles were reversed. You're putting out a project, a work of passion where you invest all you have to create something amazing. And your results are hugely popular. But it is popular not among adoring fans, but among fans who are quite literally threatening to kill you over your work. Fans who write the most horrible hate messages because you haven't done whatever it is that your fan imagined you would do. At the time I hoped his leaving would make a dent in the culture, but I think it has only gotten worse. When you fear for your life because of multiple death threats, and when the government tells you that they are actual credible threats against your life and you need to take precautions, that changes things. When that happens, I agree that it is not only a good thing to step away, but to make a big deal about the crimes as well. He risked (and received) even more threats due to his visible decision to say the behavior is unacceptable. In that regard he has my complete respect and appreciation.
  6. 6 points
    No, you don't. And that is how you solve the problem. Don't try to look for one magical pattern or process to solve this issue. What you need to do is look at each of these things and find ways to refactor the code so that you don't need to pass them. Keep doing that until you're happy with the state of the code. An example to get you started: an Enemy class may need to know where the nearest Player is, to attack it. And it might need to know about all other world objects, so it can walk around them. Instead of passing in "list of all world objects" and "list of all players", pass in a GameWorld object, and ensure that GameWorld has functions like "FindNearestPlayer" or "FindNearbyWorldObjects" so the Enemy can query for what it needs from one single object that represents the environment.
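    A minimal sketch of that refactoring, with hypothetical Enemy/GameWorld/Player types invented for illustration:

        #include <vector>

        struct Vec2 { float x = 0, y = 0; };
        struct Player      { Vec2 position; };
        struct WorldObject { Vec2 position; };

        // One object that represents the environment and answers queries,
        // instead of handing every system raw lists of everything.
        class GameWorld {
        public:
            Player* FindNearestPlayer(const Vec2& from) const {
                Player* best = nullptr;
                float bestDistSq = 1e30f;
                for (Player* p : players) {
                    float dx = p->position.x - from.x, dy = p->position.y - from.y;
                    float distSq = dx * dx + dy * dy;
                    if (distSq < bestDistSq) { bestDistSq = distSq; best = p; }
                }
                return best;
            }
            std::vector<WorldObject*> FindNearbyWorldObjects(const Vec2& from, float radius) const {
                std::vector<WorldObject*> result;
                for (WorldObject* o : objects) {
                    float dx = o->position.x - from.x, dy = o->position.y - from.y;
                    if (dx * dx + dy * dy <= radius * radius) result.push_back(o);
                }
                return result;
            }
        private:
            std::vector<Player*> players;
            std::vector<WorldObject*> objects;
        };

        class Enemy {
        public:
            void Update(const GameWorld& world) {
                Player* target = world.FindNearestPlayer(position);
                auto obstacles = world.FindNearbyWorldObjects(position, 5.0f);
                // ... steer around obstacles, attack the target ...
                (void)target; (void)obstacles;
            }
        private:
            Vec2 position;
        };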
  7. 6 points
    I wouldn't worry too much about the use of runtime polymorphism. It's way down on the list of interesting or useful problems. The key point you need to think about is, what level will the abstraction exist on? There are two equally valid ways to do this. You can create "API-level" abstractions for buffers, textures, shaders, etc and implement common interfaces for the different APIs. Or you can define high level concepts - mesh objects, materials, scene layouts, properties - and let each API implementation make its own decisions about how to translate those things into actual GPU primitives and how to render them. There are a lot of trade-offs inherent in both approaches.
  8. 6 points
    His point was that it does -- when you install the platform SDK you get d3d11.h and you get gl.h (or more importantly Wingdi.h and Opengl32.lib) -- you can't install the Windows "D3D SDK" without also getting the Windows "GL SDK". Look at the source of GLUT/etc... GLUT is in no way essential to GL; it's an application framework for people who don't want to write their main loop. It automates more Win32 programming than it does GL programming. It would be more interesting to look at the source of something like GLEW. Khronos writes the API specifications, and from the API specification you can automatically generate the full list of enums, structures and function signatures yourself -- many projects do exactly that. e.g. see https://raw.githubusercontent.com/KhronosGroup/OpenGL-Registry/master/xml/gl.xml which could be used to generate your own gl.h file. On Windows, you don't talk to your GL driver directly, because it could come from NVidia/AMD/etc..., so you need a middle-man to load the GL implementation for you and allow you to connect to it. Every OS/platform has its own separate API for doing this step. On Windows, it's wgl, and the most important part is wglGetProcAddress. You pass it the name of a GL function, and it returns a function pointer to that function in the implementation (NVidia's/AMD's GL driver). You then cast this function pointer to the correct type (using a typedef that you automatically generated from Khronos' specs) and now you're able to call this GL function. Other platforms are similar, except with their own APIs for looking up GL functions, such as glXGetProcAddress, eglGetProcAddress, etc... See also:
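    As a rough illustration of that wglGetProcAddress dance (the typedef below matches the standard PFNGLCREATESHADERPROC from the Khronos headers; error handling omitted):

        #include <windows.h>
        #include <GL/gl.h>

        // Typedef of the kind you can generate from the Khronos XML registry.
        typedef GLuint (APIENTRY *PFNGLCREATESHADERPROC)(GLenum type);

        PFNGLCREATESHADERPROC glCreateShader_ptr = nullptr;

        void LoadGLFunctions()
        {
            // Requires a current GL context (wglCreateContext/wglMakeCurrent) beforehand.
            glCreateShader_ptr = (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
            // On other platforms the lookup call differs (glXGetProcAddress, eglGetProcAddress),
            // but the pattern is identical: name in, function pointer out, cast to the right type.
        }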
  9. 5 points
    Why can't it be? What are you basing that on? It sounds like you're just repeating things that you've heard rather than basing this on development experience. It's currently lagging because they don't have a dedicated server, so you're relying on other players on vastly different connections to share game-state instead of having a single reliable connection to a data centre. If 100 players all simultaneously fire a 600 RPM rifle at a target that's 3km away, then after three seconds there will be 3000 projectiles in the air. If the server ticks at 20Hz it needs a ray-tracer capable of 60K rays/s, which isn't much - it would only consume a tiny fraction of the server's CPU budget. If every bullet needs to be seen by every client (e.g. to draw tracers) then this would cause a burst of about 72kbit/s in bandwidth, which is fine for anyone on DSL, and a 7mbit/s burst on the server, which is fine for a machine in a gigabit data centre. None of this impacts latency because that's not how that works... And if not every bullet has visible tracers then these numbers go down drastically. [edit] I quickly googled and apparently PUBG does already run on dedicated servers in data centres? Maybe they still do client-side hit detection for some reason though...
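    Roughly where those numbers come from (back-of-the-envelope only; the ~9 bytes per bullet event is an assumption, not a PUBG figure):

        // 100 players * 600 rounds/min = 1000 new bullets per second.
        // A 3 km shot stays in the air roughly 3 s, so ~3000 bullets are in flight at once.
        constexpr int bulletsInFlight  = 100 * (600 / 60) * 3;   // 3000
        constexpr int raysPerSecond    = bulletsInFlight * 20;   // 60,000 at a 20 Hz tick
        // Assume ~9 bytes per bullet event on the wire:
        constexpr int clientBitsPerSec = 1000 * 9 * 8;           // 72,000 bit/s ~= 72 kbit/s per client
        constexpr int serverBitsPerSec = clientBitsPerSec * 100; // ~7.2 Mbit/s to serve 100 clients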
  10. 5 points
    A little over a month ago, some guy noticed me by my nickname on another website (I use Embassy of Time in several places), and asked if I was the one who also posted on GameDev. I said yes. Apparently, he was amongst those reading my scientific ramblings (like this or this) on the site. And he also happened to be a small-time member of a network of personal investors, so-called "business angels". Now, I've run a company before (web TV and 3D animation, not game development), so I know that a lot of people make big claims, and even if those claims are true, you don't win the lottery just from being noticed. But it was an interesting talk. Then, about a week ago, he contacted me again. A couple of his colleagues (I have no idea what investors call each other) wanted to see a project suggestion on some of the things we talked about. Part of why they wanted to see this was that they had a look at my blog on here and wanted to know more. So now, I am working on a presentation of some of the things I have worked with on a serious science-based game. I am pretty nervous, and very open to ideas from people in here on how to dazzle these folks! It's not a big blog entry this time, I know, but I felt like letting people here know, and giving a big thanks to GameDev.net for being a community where some lunatic with a science fetish (me) has a chance to get noticed! If this works out well, I definitely won't forget you.
  11. 5 points
    Particles might number in the millions, but we don't try to send them across the network. The number of objects you can send depends on how much data each one needs. Sorry for such a flippant answer, but there's no trick or magic here. You can send as much data as your network bandwidth allows (nothing much to do with packet size, incidentally), and the less data you need per entity, and the less frequently you want to update them, the more entities you can update. To get transmission sizes down, you need to think in terms of information, not in terms of data. You're not trying to copy memory locations across the wire; you're trying to send whatever information is necessary so that the recipient can reconstruct the information you have locally. e.g. If I want to send the Bible or the Koran to another computer, that's hundreds of thousands of letters I need to transmit. But if that computer knows in advance that I will be sending either the Bible or the Koran, it can pre-store copies of those locally and I only have to transmit a single bit to tell it which one to use, as there are only 2 possible values of interest here. Similarly, if I want to send a value that has 8 possible values - i.e. the directions we talked about - that's just 3 bits. I could pack 2 such directions into a single byte and leave 2 bits spare. Or I could send 8 directions and pack them into 3 bytes (3 bytes at 8 bits per byte is 24 bits to play with). If you're not comfortable with bitpacking, maybe read this: https://gafferongames.com/post/reading_and_writing_packets/
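    A tiny illustration of packing eight 3-bit directions into 3 bytes (the BitWriter here is a made-up helper, not something from the linked article):

        #include <cstdint>
        #include <vector>

        // Minimal bit writer: appends the low 'bits' bits of 'value' to a byte stream.
        struct BitWriter {
            std::vector<uint8_t> bytes;
            uint32_t scratch = 0;
            int scratchBits = 0;

            void Write(uint32_t value, int bits) {
                scratch |= (value & ((1u << bits) - 1)) << scratchBits;
                scratchBits += bits;
                while (scratchBits >= 8) {
                    bytes.push_back(uint8_t(scratch & 0xFF));
                    scratch >>= 8;
                    scratchBits -= 8;
                }
            }
            void Flush() {
                if (scratchBits > 0) { bytes.push_back(uint8_t(scratch & 0xFF)); scratch = 0; scratchBits = 0; }
            }
        };

        int main() {
            BitWriter w;
            uint8_t directions[8] = {0, 3, 7, 2, 5, 1, 6, 4};  // each value is 0..7, i.e. 3 bits of information
            for (uint8_t d : directions)
                w.Write(d, 3);                                 // 8 * 3 = 24 bits total
            w.Flush();
            // w.bytes.size() == 3: eight directions fit in 3 bytes instead of 8.
        }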
  12. 5 points
    Yes, read IEEE 754 for more detail. The range of integers that can be exactly represented by a single precision float is from -0x01000000 to 0x01000000.
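    A quick way to see where that breaks down (0x01000000 is 16,777,216 = 2^24):

        #include <cstdio>

        int main() {
            float a = 16777216.0f;   // 2^24, exactly representable
            float b = a + 1.0f;      // 16,777,217 has no float representation; it rounds back to 2^24
            std::printf("%.1f %.1f equal=%d\n", a, b, a == b);   // prints: 16777216.0 16777216.0 equal=1
        }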
  13. 5 points
    My personal preference is to forward declare only what I need to use, and then promote to a full dependency as infrequently as possible. So if you need to use another type, forward declare when you can, but don't just throw around declarations for the fun of it.
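    For example (a hypothetical header), a forward declaration is enough whenever you only hold a pointer or reference:

        // Widget.h
        #pragma once

        class Renderer;   // forward declaration: this header never needs Renderer's full definition

        class Widget {
        public:
            void Draw(Renderer& renderer);    // fine: only a reference appears in the declaration
        private:
            Renderer* renderer = nullptr;     // fine: a pointer doesn't need the complete type
        };

        // Widget.cpp would #include "Renderer.h" (promoting to a full dependency) only because it
        // actually calls methods on Renderer there.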
  14. 5 points
    I'ma just leave this here: https://godbolt.org Also, your code has a lot of room for improvement. Your API for GetProcessorName() is bad. Please don't ever write a function like that. You should either return a string object or fill a string buffer, but not both in one function. As you are discovering, this is a recipe for confusion and unhappiness. Please don't use C-strings. You have a number of issues in this code that indicate (1) you are not familiar with how C-strings work, (2) you aren't thinking carefully enough about how you manipulate C-strings, or (3) both. For example, you confuse allocation size with string length in a couple of places, and your attempts to account for NULL terminators look wrong to me. The loop-style invocation of __cpuid is overly complex and needlessly busy if the CPU returns a huge number of capabilities/extended IDs. You can write this simply as a single if check and 3 successive __cpuid calls, with no loop.
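    For instance, the brand-string query can be written with one check and three fixed calls (a sketch using the MSVC __cpuid intrinsic; the function name just mirrors the one being discussed):

        #include <intrin.h>
        #include <string>

        std::string GetProcessorName()
        {
            int regs[4] = {};
            __cpuid(regs, 0x80000000);                      // EAX = highest supported extended leaf
            if (static_cast<unsigned>(regs[0]) < 0x80000004)
                return "Unknown";

            int brand[13] = {};                             // 12 ints of data + a zero terminator
            __cpuid(brand + 0, 0x80000002);
            __cpuid(brand + 4, 0x80000003);
            __cpuid(brand + 8, 0x80000004);
            return reinterpret_cast<const char*>(brand);    // return a string object, not a raw buffer
        }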
  15. 5 points
    From watching those films, I could see him as a dick but also really empathise with him and see him as someone who's under too much stress and not aware of how other people are going to interpret and twist what they say. I think he's the perfect example of consumers enjoying hating a creator, which is a terrible phenomenon. I don't know him personally so I can't judge.
  16. 5 points
    Off the top of my head (untested): std::tuple<TypedMap<Ts>...> Storage;
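    Expanded slightly so the shape is clearer (TypedMap is a stand-in for whatever per-type map the thread is using; std::get by type then picks the right one):

        #include <tuple>
        #include <unordered_map>

        // Placeholder for the thread's TypedMap; here it just maps int keys to T values.
        template <typename T>
        struct TypedMap { std::unordered_map<int, T> data; };

        template <typename... Ts>
        class Storage {
        public:
            template <typename T>
            TypedMap<T>& Get() { return std::get<TypedMap<T>>(maps); }   // lookup by type
        private:
            std::tuple<TypedMap<Ts>...> maps;
        };

        // Usage: Storage<int, float> s;  s.Get<float>().data[0] = 1.0f;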
  17. 5 points
    ^That^ Also as a tip for the 'watch' debugging window - say you've got something like:

        struct MyVec { int* begin; int* end; };
        MyVec v = ...;

    In the watch window you can type "v.begin" but it will only show the first element... So, instead you can type "v.end-v.begin" to see the size -- and let's say it's 42 -- then you can type "v.begin,42" and Visual Studio will display the entire array in the watch window.
  18. 5 points
    Yes, you can use the 'Natvis' system to customise how any type is displayed in the Visual Studio debugger. https://docs.microsoft.com/en-gb/visualstudio/debugger/create-custom-views-of-native-objects
  19. 5 points
    Obviously it is viable - if it is possible to write an engine, and it is possible to write a game using an engine, it is possible to do both. Whatever the engine-makers deemed important to create, you can create yourself. Libraries help, because they are basically pre-packaged bits of code that other people wrote, which you can use. For example, you might use a library to load 3D model formats, or to play back audio. There are many of these, and most are available for the popular programming languages like C++, C#, Java, and Python. Another word you might hear is 'framework' - this is usually a big library, or a collection of libraries, that does lots of different things, but which works well as a whole. SDL and SFML are frameworks for C++ which give you a lot of game-making functionality for free. An engine is basically just the logical extension of this idea - it's typically a framework that is very fully-featured and which comes with its own editor which lets you create and test levels. Unity is an example of an engine that uses C# for its code, and Unreal is an engine that uses C++. If your main aim is to be productive at making games in the short to medium term, then starting with an engine is probably a good idea. Some people prefer to learn the fundamentals and like starting with a more primitive programming environment and a simpler framework - Python and Pygame are a popular pairing, for example. Each route has pros and cons. It would be a very tough job to make a game in a 24 hour jam without using at least one game framework or a bunch of good libraries, but that's not to say it isn't possible for the right person.
  20. 5 points
    The cost of virtual functions is usually greatly exaggerated in many posts on the subject. That is not to say they are free, but assuming they are evil is simply short-sighted. Basically you should only concern yourself with the overhead if you think the function in question is going to be called >10000 times a frame, for instance. An example: say I have the two API calls "virtual void AddVertex(Vector3& v);" and "virtual void AddVertices(Vector<Vector3>& vs);". If you add 100000 vertices with the first call, the overhead of the indirection and lack of inlining is going to kill your performance. On the other hand, if you fill the vector with the vertices (where the addition is able to be inlined and optimized by the compiler) and then use the second call, there is very little overhead to be concerned with. So, given that the 3D APIs do not supply individual vertex get/set functions anymore and everything is in bulk containers such as vertex buffers and index buffers, there is almost nothing to be worried about regarding usage of virtual functions. My API wrapper around DX12, Vulkan & Metal is behind a bunch of pure virtual interfaces and performance does not change when I compile the DX12 lib statically and remove the interface layer. As such, I'm fairly confident that you should have no problems unless you do something silly like the above example. Just keep in mind there are many caveats involved in this due to CPU variations, memory speed, cache hit/miss rates based on usage patterns etc., and the only true way to get numbers is to profile something working. I would consider my comments as rule-of-thumb safety in most cases though.
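    The two call patterns side by side (signatures adapted from the post into a hypothetical interface; the numbers are the rule of thumb above, not measurements):

        #include <vector>

        struct Vector3 { float x, y, z; };

        class IVertexSink {
        public:
            virtual ~IVertexSink() = default;
            virtual void AddVertex(const Vector3& v) = 0;                    // fine for occasional calls
            virtual void AddVertices(const std::vector<Vector3>& vs) = 0;    // use this for bulk data
        };

        void FillMesh(IVertexSink& sink, const std::vector<Vector3>& source)
        {
            // Bad: 100,000 virtual calls, and the hot loop cannot be inlined.
            // for (const Vector3& v : source) sink.AddVertex(v);

            // Good: build the bulk container locally (inlinable, optimizable),
            // then cross the virtual boundary once.
            sink.AddVertices(source);
        }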
  21. 5 points
    I've been working on the node graph editor for noise functions in the context of the Urho3D-based Terrain Editor I have been working on. It's a thing that I work on every so often, when I'm not working on Goblinson Crusoe or when I don't have a whole lot of other things going on. Lately, it's been mostly UI stuff plus the node graph stuff. The thing is getting pretty useful, although it is still FAR from polished, and a lot of stuff is still just broken. Today, I worked on code to allow me to build and maintain a node graph library. The editor has a tool, as mentioned in the previous entry, to allow me to use a visual node graph system to edit and construct chains/trees/graphs of noise functions. These functions can be pretty complex: I'm working on code to allow me to save these graphs as they are, and also to save them as Library Nodes. Saving a graph as a Library Node works slightly differently than just saving the node chain. Saving it as a Library Node allows you to import the entire thing as a single 'black box' node. In the above graph, I have a fairly complex setup with a cellular function distorted by a couple of billow fractals. In the upper left corner are some constant and seed nodes, explicitly declared. Each node has a number of inputs that can receive a connection. If there is no connection, when the graph is traversed to build the function, those inputs are 'hardwired' to the constant value they are set to. But if you wire up an explicit seed or constant node to an input, then when the graph is saved as a Library Node, those explicit constants/seeds will be converted to the input parameters for a custom node representing the function. For example, the custom node for the above graph looks like this: Any parameter to which a constant node was attached is now tweakable, while the rest of the graph node is an internal structure that the user can not edit. By linking the desired inputs with a constant or seed node, they become the customizable inputs of a new node type. (A note on the difference between Constant and Seed. They are basically the same thing: a number. Any input can receive either a constant or a seed or any chain of constants, seeds, and functions. However, there are special function types such as Seeder and Fractal which can iterate a function graph and modify the value of any seed functions. This is used, for example, to re-seed the various octaves of a fractal with different seeds to use different noise patterns. Seeder lets you re-use a node or node chain with different seeds for each use. Only nodes that are marked as Seed will be altered.) With the node graph library functionality, it will be possible to construct a node graph and save it for later, useful for certain commonly-used patterns that are time-consuming to set up, which pretty much describes any node graph using domain turbulence. With that node chain in hand, it is easy enough to output the function to the heightmap: Then you can quickly apply the erosion filter to it: Follow that up with a quick Cliffify filter to set cliffs: And finish it off with a cavity map filter to place sediment in the cavities: The editor now lets you zoom the camera all the way in with the scroll wheel, then when on the ground you can use WASD to rove around the map seeing what it looks like from the ground. Still lots to do on this, such as, you know, actually saving the node graph to file. but already it's pretty fun to play with.
  22. 4 points
    I have had difficulties recently with the Marching Cubes algorithm, mainly because the principal source of information on the subject was kinda vague and incomplete to me. I need a lot of precision to understand something complicated. Anyhow, after a lot of struggles, I have been able to code in Java a less hardcoded program than the given source, because who doesn't like the cuteness of Java compared to the mean-looking C++? Oh, and by hardcoding, I mean something like this:

        cubeindex = 0;
        if (grid.val[0] < isolevel) cubeindex |= 1;
        if (grid.val[1] < isolevel) cubeindex |= 2;
        if (grid.val[2] < isolevel) cubeindex |= 4;
        if (grid.val[3] < isolevel) cubeindex |= 8;
        if (grid.val[4] < isolevel) cubeindex |= 16;
        if (grid.val[5] < isolevel) cubeindex |= 32;
        if (grid.val[6] < isolevel) cubeindex |= 64;
        if (grid.val[7] < isolevel) cubeindex |= 128;

    By no means am I saying that my code is better or more performant. It's actually ugly. However, I absolutely loathe hardcoding. Here's the result with a scalar field generated using the coherent noise library Joise:
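    For reference, the same table index can be computed without the hardcoded chain, using the same grid.val/isolevel names as the snippet above:

        int cubeindex = 0;
        for (int i = 0; i < 8; ++i)
            if (grid.val[i] < isolevel)
                cubeindex |= 1 << i;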
  23. 4 points
    Pong Challenge! Make a single-player Pong, with a twist.
    Game Requirements
    The game must have:
    Start screen
    Key to return to the start screen
    Score system
    AI player
    Graphics at least for the UI and players
    Sound effects for ball and paddle and when a player scores
    A unique gameplay element that makes your Pong game stand out from others
    The actual gameplay must happen on a single screen (no camera translation)
    Art Requirements
    The game must be 2D
    The art must be your own
    Duration
    4 weeks - November 1, 2017 to November 30, 2017
    Submission
    Post your entries in this thread:
    Link to the executable (specifying the platform)
    Screenshots: if the screenshots are too big, post just a few; if they are small, you can post more, just don't take up the entire space
    A small post-mortem in a GameDev.net Blog, with a link posted in this thread, is encouraged, where you can share what went right, what went wrong, or just share a nifty trick
    A source-code link is encouraged
  24. 4 points
    Are you sure this matches what's in your code? You need to store pointers to the objects rather than the base objects themselves. The way the language usually enforces this is for your code to make the base class abstract so it cannot be instantiated. They are commonly called ABCs, or Abstract Base Classes, because of this. Except in rare cases, only the endmost leaf classes should be concrete; everything else in the hierarchy should be abstract. In that case, are you certain you are following the LSP? Are the sub-objects TRULY able to be substituted for each other? Code should be able to move back up the hierarchy to base class pointers, but should never need to move back out the hierarchy, instead using functions in the interface to do whatever work needs to be done. As a code example:

        for( BaseThing* theThing : AllTheThings )
        {
            theThing->DoStuff();
        }

    versus:

        for( BaseThing* theThing : AllTheThings )
        {
            // Might also be a dynamic cast, or RTTI ID, or similar
            auto thingType = theThing->GetType();
            if( thingType == ThingType.Wizard )
            {
                theThing->DoWizardStuff();
            }
            else if( thingType == ThingType.Warrior )
            {
                theThing->DoWarriorStuff();
            }
            else ...
            ... // Variations for each type of thing
        }

    Also, in general it is a bad idea to have public virtual functions. Here is some reading on that by one of the most expert among C++ experts. This is different from languages like Java where interface types are created through a virtual public interface. Some people who jump between languages forget this important detail. As the code grows, those public virtual functions end up causing breakages as people implement non-substitutable code in leaf classes, as it appears you have done here.
  25. 4 points
    This sounds really bad. GPUs have been moving away from specialized hardware and towards general-purpose hardware forever. You'd be better off running whatever this algorithm is on the GPU than on another chip... And if your GPU is too weak, put that extra $140 into getting a better one rather than a fancy chip. The only valid use case seems to be optionally adding extra image processing to old fixed hardware (consoles). If you're talking about building the hardware into a GPU, then again you're probably better off dedicating that space to more compute cores, and adding any extra fixed-function behavior that it requires into the existing fixed-function texture sampling/filtering hardware. That way games can decide to use a different AA method and not have some fixed-function AA chip sitting idle -- the extra compute cores are running the game's own AA solution. Or if it does have some fancy fixed-function filtering, then by integrating this into the existing texture filters you allow developers to use that hardware on other algorithms too (not just post-processing). As for upscaling though, yeah, GPUs sometimes do have a hardware upscaler built into the video output hardware. Microsoft is a fan of making sure the hardware upscaler is as good as possible... so there may be some room to add some smarts into that... but on PC, it's not common to render at 480p any more and have the GPU upscale to 1080p output... Secondly, there are two kinds of anti-aliasing techniques - ones that avoid aliasing in the first place (Toksvig, increased sampling + filters) and ones that try to hide aliasing after it's occurred (FXAA, etc). The latter aren't a solution, they're a kludge, and they're not the future. The biggest problem with filters like FXAA is that they're not temporally stable. Jaggies get smoothed out, but "smooth jaggies" still crawl along edges. The current state of the art in geometric AA involves extra samples (MSAA + temporal) and smart filters. This kind of external add-on can never have access to that extra information, so it is doomed to FXAA-style kludge filtering. The current state of the art in specular aliasing attacks the problem at the complete other end of the pipeline - filtering before lighting calculations. An add-on can never do that. [edit] Watched the video, and yeah, it's smoothing jaggies but in no way solving aliasing. This kind of flickering is everywhere:
  26. 4 points
    Hi everybody! I decided to use this challenge as an excuse for learning to use Game Maker (or I wanted to learn Game Maker and decided to do it while participating in this challenge; pick one :P). My project is called "Power Word: Pong". It's available for Windows here: https://racso.itch.io/power-word-pong I still want to improve the AI, as it's still quite dumb, and I want to give some variety to the ball (angle and speed changes), but I'm almost done with the game. Still, as always, feedback is appreciated! I hope you like the twist of the game. You may find the game source in this repository. I'll create a post-mortem after I finish working on the project. Regards! O.
  27. 4 points
    I'm going to try and limit the snark a bit here, but believe me, it's hard. Matthias, what you're doing is basically reading us one page at a time of a mystery novel, and then asking us who the killer is. Except it isn't even consecutive pages, or even chronological pages. Just rip a sheet of paper out of the book at total random, read it aloud, and then see who gets the correct killer. And you've read us only two pages so far. We don't even know the title of the book or who the characters are. We can't help you find the killer.
  28. 4 points
    If you want a billion zombies, then synchronising them over a network would be impractical... But you don't have to. You can synchronise their initial conditions and any player interactions, and then run identical updates on each client. https://www.gamasutra.com/view/feature/131503/1500_archers_on_a_288_network_.php
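    The idea in miniature (a hypothetical deterministic spawn; only the seed and the player inputs ever cross the wire):

        #include <cstdint>
        #include <vector>

        struct Zombie { float x, y; };

        // Deterministic PRNG (a simple LCG): the same seed yields the same sequence on every client.
        uint32_t NextRandom(uint32_t& state) { return state = state * 1664525u + 1013904223u; }

        std::vector<Zombie> SpawnZombies(uint32_t seed, int count) {
            std::vector<Zombie> zombies(count);
            for (Zombie& z : zombies) {
                z.x = float(NextRandom(seed) % 1000);
                z.y = float(NextRandom(seed) % 1000);
            }
            return zombies;   // identical on every client given the same seed
        }

        // Each tick, every client applies the same player inputs in the same order and runs the
        // same update function, so the horde stays in sync without ever being transmitted.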
  29. 4 points
    Try walking through a visualization (like https://visualgo.net/en/sorting - click QUI on the top bar to switch to QuickSort).
  30. 4 points
    IIRC Quake 3 did the store-as-JPEG, transcode-to-DXT thing. If your DXT encoder is fast, then the quality is almost certainly terrible. The results might look OK on casual inspection, but your game artists will eventually run into horrible 4x4 block artefacts that look like Minecraft, and horrible green/purple hue shifts (especially in areas that should be grey), and they'll start compensating by increasing their texture resolution, disabling compression, or just nagging you to fix the engine. Look up crunch and Binomial. Instead of naively using JPEG, they invent new on-disk formats that are designed to be small but also transcode directly to DXT with minimal runtime cost and quality loss. Another avenue is, when creating your DXT/DDS files, to add another constraint to your encoder -- as well as looking for block encodings that produce the best quality, you also search for ones that will produce lower entropy / will ZIP (etc.) better. You then sacrifice a little quality by choosing sub-optimal block encodings, but end up with much smaller files over the wire.
  31. 4 points
    Change the DXGI_FORMATs of the resources/SRVs/DSVs of your depth buffer, shadow maps, etc. (optional)
    Swap the near and far plane while constructing the view-to-projection and projection-to-view transformation matrices of your camera, lights, etc.
    Change D3D11_COMPARISON_LESS_EQUAL (D3D11_COMPARISON_LESS) to D3D11_COMPARISON_GREATER_EQUAL (D3D11_COMPARISON_GREATER) in your depth-stencil and PCF sampling states.
    Clear the depths of the DSVs of your depth buffer, shadow maps, etc. to 0.0f instead of 1.0f.
    Negate DepthBias, SlopeScaledDepthBias and DepthBiasClamp in the rasterizer states for your shadow maps.
    Set all far plane primitives to the near plane and vice versa. (Ideally using a macro for Z_FAR and Z_NEAR in your shaders.)
    For more info:
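    A quick illustration of two of the steps above (a D3D11 sketch; the device/context/DSV variables are assumed to already exist elsewhere):

        #include <d3d11.h>

        // Sets up a reversed-Z depth test and clears depth to 0.0f (names are placeholders).
        void ConfigureReversedZ(ID3D11Device* device, ID3D11DeviceContext* context,
                                ID3D11DepthStencilView* dsv, ID3D11DepthStencilState** outState)
        {
            D3D11_DEPTH_STENCIL_DESC dsDesc = {};
            dsDesc.DepthEnable    = TRUE;
            dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
            dsDesc.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL;   // was LESS_EQUAL
            device->CreateDepthStencilState(&dsDesc, outState);

            // With reversed Z, 0.0f is the far plane, so clear to 0.0f instead of 1.0f.
            context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);
        }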
  32. 4 points
    1. This thread was ill advised from the beginning. Even the title is clearly biased and unobjective. It verges on a personal attack which is never acceptable. 2. Labelling people as "trolls" because they do not expressly agree with you is basically just as bad as offense #1. 3. Basing your opinion of someone on an edited, selective, dramatized, and utterly incomplete third party account of their words is worse than #1 and #2 put together. A number of you need to go sit quietly and think very hard about how you relate to your fellow human beings. To facilitate this: thread locked.
  33. 4 points
    Maybe I missed something, but I just remember him saying that Japanese games suck. How is that appalling? I mean, it's just the guy's opinion. People need to really stop getting their panties in a bunch every time someone expresses an opinion.
  34. 4 points
    But it is also not too bad for just 10 points and a densely packed array of distances (i.e. cache-friendly accesses). There may be some improvements if the path consists of notably more points:
    a) You may store not the distance from one point to the next, but the distance from the very beginning to the particular point, i.e. an array of ever-increasing distance values. This allows for binary search.
    b) If the point of interest does not wildly jump back and forth on the path but "wanders" in some orderly manner (and "moving an entity along" seems to me to be such a thing), then the previously found position (i.e. its index) can be stored and used as the starting point of a linear search in the next iteration. Usually only one or just a few search steps will be necessary. You may further know the direction of search (up or down the path) in front of the search.
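    A sketch of (a), cumulative distances plus binary search (names are illustrative; std::upper_bound does the search over the monotonically increasing array):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // cumulative[i] = distance from the start of the path to point i (strictly increasing).
        // Returns the index of the segment [i, i+1] that contains 'distanceAlongPath'.
        std::size_t FindSegment(const std::vector<float>& cumulative, float distanceAlongPath)
        {
            auto it = std::upper_bound(cumulative.begin(), cumulative.end(), distanceAlongPath);
            if (it == cumulative.begin())
                return 0;
            return static_cast<std::size_t>(it - cumulative.begin()) - 1;
        }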
  35. 4 points
    An enum can be assigned a value: result = true sets the value to 1, result = false sets the value to zero. They use an enum because the rules allow using the value directly rather than placing it in a variable or anywhere in memory. And they use the typedef so you don't need to actually run the code; you can look up the value as a constant expression. The end result is that the template code optimizes away at compile time to the integer constant 1 or 0. This can be applied to other optimizations; for example, this type of thing is usually used in an if() statement, and the compiler can use the constant expression to potentially eliminate even more code at compile time.
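    The classic shape of that trick (a made-up IsPointer trait, written in the pre-C++11 style the post describes):

        // Primary template: not a pointer.
        template <typename T>
        struct IsPointer { enum { result = false }; };     // result is the compile-time constant 0

        // Partial specialization: any T* matches here.
        template <typename T>
        struct IsPointer<T*> { enum { result = true }; };  // result is the compile-time constant 1

        // Usage: the condition is a constant expression, so the compiler can
        // discard the dead branch entirely.
        // if (IsPointer<int*>::result) { /* ... */ }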
  36. 4 points
    shared_ptr alone is not a silver bullet. There are a few ways that using it can go wrong and lead to leaks and performance issues. Circular references (where you have two objects that refer to each other with a shared_ptr) are an example of that. Mitigating cases like that constitutes "memory management" to some degree. If you're working in C++ you are going to have to deal with memory management even if you're using smart pointers, mainly in the form of ownership semantics. Now, shared_ptr is intended to be used in cases where ownership is shared, hence the name. Is shared ownership really what you want here? If only one object (the scene list) will ever have ownership of the scenes, then you can represent that in code using a unique_ptr, which is a smart pointer template that enforces the idea that only one object at a time can own the memory it points to. You can use references or raw pointers if you need objects to refer to memory they don't own, and unique_ptrs to refer to the memory if the objects do own it.
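    A small sketch of that ownership split (Scene/SceneList/SceneRenderer are placeholder names):

        #include <memory>
        #include <utility>
        #include <vector>

        class Scene { /* ... */ };

        // The scene list is the sole owner of the scenes.
        class SceneList {
        public:
            Scene* Add(std::unique_ptr<Scene> scene) {
                scenes.push_back(std::move(scene));
                return scenes.back().get();   // hand out a non-owning pointer
            }
        private:
            std::vector<std::unique_ptr<Scene>> scenes;
        };

        // Anything else just observes via a raw pointer or reference; it never owns.
        class SceneRenderer {
        public:
            explicit SceneRenderer(Scene* current) : current(current) {}
        private:
            Scene* current;   // non-owning
        };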
  37. 4 points
    Like Jbadams said. It took me years to talk GD.Net into doing this, but if you plan on hosting content, you'll need to register a "copyright agent" with the Copyright office (copyright.gov), and include a DMCA takedown procedure and Counter Notification procedure in the Terms of Service for your game/server. It's a pretty straightforward process, but it's always best to consult an attorney (obviously doesn't have to be me, whoever you're comfortable with) when you're revising ToS or other legal documentation.
  38. 4 points
    XMVectorSelect is what you're looking for. It takes the masks output by functions such as LessOrEqual and each bit in the mask is used to select between A or B.
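    For example, a component-wise "pick A where A <= B, else B" (DirectXMath; this is essentially what XMVectorMin does):

        #include <DirectXMath.h>
        using namespace DirectX;

        XMVECTOR MinByMask(FXMVECTOR a, FXMVECTOR b)
        {
            // All-ones lanes where a <= b, all-zeros elsewhere.
            XMVECTOR mask = XMVectorLessOrEqual(a, b);
            // XMVectorSelect takes bits from the second operand wherever the control mask is 1.
            return XMVectorSelect(b, a, mask);
        }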
  39. 4 points
    Don't remove things from the list. Throw the whole thing away after you've rendered it and build a new list from scratch next frame.
  40. 4 points
    Well... Photometry is about what humans can see. Radiometry is about all rays, whether they are visible or not. So photometry is a subset of radiometry. Commonly, computer graphics is about what we can see, so photometry looks better suited. However, when moving through media, rays can evolve (they can lose energy, for example, but not only that...), so a ray that was at first not in the visible range can become visible when moving through media. The same kind of logic can be applied to matter that receives energy from various sources, even if all the sources have wavelengths outside the visible range. Nevertheless, real-time CG does not generally care about this: diffraction for example will change the ray direction, and polarization will tend to accept or reject rays of some wavelengths. So this can become very complex. Also, in radiometry you express the flux in watts whereas in photometry you express this flux in lumens. But both deal with the same quantities; one is just focusing on human vision whereas the other cares about everything.
  41. 4 points
    It's very common to hear engineers talking about "code reuse" - particularly in a positive light. We love to say that we'll make our designs "reusable". Most of the time the meaning of this is pretty well understood; someday, we want our code to be able to be applied to some different use case and still work without extensive changes. But in practice, code reuse tends to fall flat. A common bit of wisdom is that you shouldn't even try to make code reusable until you have three different use cases that would benefit from it. This is actually very good advice, and I've found it helps a lot to step back from the obsession with reusability for a moment and just let oneself write some "one-off" code that actually works. This hints at the possibility of a few flaws in the engineering mindset that reuse is a noble goal.
    Why Not Reuse?
    Arguing for reuse is easy: if you only have to write and debug the code once, but can benefit from it multiple times, it's clearly better than writing very similar code five or six times... right? Yes and no. Premature generalization is a very real thing. Sometimes we can't even see reuse potential until we've written similar systems repeatedly, and then it becomes clear that they could be unified. On the flip side, sometimes we design reusable components that are so generic they don't actually do what we needed them to do in the first place. This is a central theme of the story of Design Patterns as a cultural phenomenon. Patterns were originally a descriptive thing. You find a common thread in five or six different systems, and you give it a name. Accumulate enough named things, though, and people start wanting to put the cart before the horse. Patterns became prescriptive - if you want to build a Foo, you use the Bar pattern, duh! So clearly there is a balancing act here. Something is wrong with the idea that all code should be reusable, but something is equally wrong with copy/pasting functions and never unifying them. But another, more insidious factor is at play here. Most of the time we don't actually reuse code, even if it was designed to be reusable. And identifying reasons for this lapse is going to be central to making software development scalable into the future. If we keep rewriting the same few thousand systems we're never going to do anything fun.
    Identifying Why We Don't Reuse
    Here's a real world use case. I want to design a system for handling callbacks in a video game engine. But I've already got several such systems, built for me by previous development efforts in the company. Most of them are basically the exact same thing with minor tweaks:
    Define an "event source"
    Define some mechanism by which objects can tell the event source that they are "interested" in some particular events
    When the event source says so, go through the container of listeners and give them a callback to tell them that an event happened
    Easy. Except Guild Wars 2 alone has around half a dozen different mechanisms for accomplishing this basic arrangement. Some are client-side, some are server-side, some relay messages between client and server, but ultimately they all do the exact same job. This is a classic example of looking at existing code and deciding it might be good to refactor it into a simpler form. Except GW2 is a multi-million line-of-code behemoth, and I sure as hell don't want to wade through that much code to replace a fundamental mechanism. So the question becomes, if we're going to make a better version, who's gonna use it?
    For now the question is academic, but it's worth thinking about. We're certainly not going to stop making games any time soon, so eventually we should have a standardized callback library that everyone agrees on. So far so good. But what if I want to open-source the callback system, and let other people use it? If it's good enough to serve all of ArenaNet's myriad uses, surely it'd be handy elsewhere! Of course, nobody wants a callback system that's tied to implementation details of Guild Wars 2, so we need to make the code genuinely reusable. There are plenty of reasons not to use an open-source callback library, especially if you have particular needs that aren't represented by the library's design. But the single biggest killer of code reuse is dependencies. Some dependencies are obvious. Foo derives from base class Bar, therefore there is a dependency between Foo and Bar, for just one example. But others are more devilish. Say I published my callback library. Somewhere in there, the library has to maintain a container of "things that care about Event X." How do we implement the container? Code reuse is the name of the game here. The obvious answer (outside of game dev) is to use the C++ Standard Library, such as a std::vector or std::map (or both). In games, though, the standard library is often forbidden. I won't get into the argument here, but let's just say that sometimes you don't get to choose what libraries you rely on. So I have a couple of options. I can release my library with std dependencies, which immediately means it's useless to half my audience. They have to rewrite a bunch of junk to make my code interoperate with their code and suddenly we're not reusing anything anymore. The other option is to roll my own container, such as a trivial linked list. But that's even worse, because everyone has a container library, and adding yet another lousy linked list implementation to the world isn't reuse either.
    Policy-Based Programming to the Rescue
    The notion of policy-based architecture is hardly new, but it is sadly underused in most practical applications. I won't get into the whole exploration of the idea here, since that'd take a lot of space, and I mostly just want to give readers a taste of what it can do. Here's the basic idea. Let's start with a simple container dependency.

        #include <vector>

        class ThingWhatDoesCoolStuff {
            std::vector<int> Stuff;
        };

    This clearly makes our nifty class dependent on std::vector, which is not great for people who don't have std::vector in their acceptable tools list. Let's make this a bit better, shall we?

        template <typename ContainerType>
        class ThingWhatDoesCoolStuff {
            ContainerType Stuff;
        };

        // Clients do this
        ThingWhatDoesCoolStuff<std::vector<int>> Thing;

    Slightly better, but now clients have to spell a really weird name all the time (which admittedly can be solved to a great extent with a typedef and C++11 using declarations). This also breaks when we actually write code:

        template <typename ContainerType>
        class ThingWhatDoesCoolStuff {
        public:
            void AddStuff (int stuff) {
                Stuff.push_back(stuff);
            }

        private:
            ContainerType Stuff;
        };

    This works provided that the container we give it has a method called push_back. What if the method in my library is called Add instead? Now we have a compiler error, and I have to rewrite the nifty class to conform to my container's API instead of the C++ Standard Library API. So much for reuse. You know what they say, you can solve any problem by adding enough layers of indirection! So let's do that real quick.
// This goes in the reusable library
#include <utility>      // std::move

template <typename Policy>
class ThingWhatDoesCoolStuff
{
private:
    // YES I SWEAR THIS IS REAL SYNTAX
    typedef typename Policy::template ContainerType<int> Container;

    // Give us a member container of the desired type!
    Container Stuff;

public:
    void AddStuff (int stuff)
    {
        using Adapter = typename Policy::template ContainerAdapter<int>;
        Adapter::PushBack(&Stuff, std::move(stuff));
    }
};

// Users of the library just need to write this once:
#include <vector>

struct MyPolicy
{
    // This just needs to point to the container we want
    template <typename T>
    using ContainerType = std::vector<T>;

    template <typename T>
    struct ContainerAdapter
    {
        static inline void PushBack (ContainerType<T> * container, T && element)
        {
            // This would change based on the API we use
            container->push_back(std::move(element));
        }
    };
};

Let’s pull this apart and see how it works. First, we introduce a template "policy" which lets us decouple our nifty class from all the things it relies on, such as container classes. Any "reusable" code should be decoupled from its dependencies. (This is by no means the only way to do so, even in C++, but it’s a nice trick to have in your kit.)

The hairy parts of this are really just the syntax for it all. Effectively, our nifty class just says "hey, I want to use some container, and an adapter API that I know how to talk to. If you can give me an adapter to your container I’ll happily use it!" Here we use templates to avoid a lot of virtual dispatch overhead. Theoretically I could make a base class like "Container" and inherit from it and blah blah vomit I hate myself for just thinking this. Let’s not explore that notion any further.

What’s cool is that I can keep the library code 100% identical between projects that do use the C++ Standard Library, and projects which don’t. So I could publish my callback system exactly once, and nobody would have to edit the code to use it (a second example policy appears at the end of this post).

There is a cost here, and it’s worth thinking about: any time someone reuses my code, they have to write a suitable policy. In practice, this means you write a policy about once for every time you change your entire code base to use a different container API. In other words, pffffft. For things which aren’t as stable as containers, the policy cost may become more significant. This is why you want to reuse in only carefully considered ways, preferably (as mentioned earlier) when you have several use cases that can benefit from that shared abstraction.

Concluding Thoughts

One last idea to consider is how the performance of this technique measures up. In debug builds, it can be a little ugly, but optimized builds strip away literally any substantial overhead of the templates. So runtime performance is fine, but what about build times themselves? Admittedly this does require a lot of templates going around. But the hope is that you’re reusing simple and composable components, not huge swaths of logic. So it’s easy to go wrong here if you don’t carefully consider what to apply this trick to. Used judiciously, however, it’s actually a bit better of a deal than defining a lot of shared abstract interfaces to decouple your APIs.

I’ll go into the specific considerations of the actual callback system later. For now, I hope the peek at policy-based decoupling has been useful. Remember: three examples or you don’t have a valid generalization!
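As promised above, here is what a second, purely hypothetical policy might look like for a codebase whose container exposes an Add() method instead of push_back(). MyContainer and its Add method are invented for the sake of the example; the point is that the library class itself doesn't change.

template <typename T>
struct MyContainer
{
    // Stand-in for an in-house container with a different API
    void Add (T && element) { (void)element; /* store it somehow */ }
};

struct MyOtherPolicy
{
    template <typename T>
    using ContainerType = MyContainer<T>;

    template <typename T>
    struct ContainerAdapter
    {
        static inline void PushBack (ContainerType<T> * container, T && element)
        {
            container->Add(std::move(element));   // adapt to the in-house API
        }
    };
};

// Exactly the same library code, different dependencies:
ThingWhatDoesCoolStuff<MyOtherPolicy> OtherThing;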
  42. 4 points
@TobyTheRandom I don't mean to sound rude (and conciseness is generally a good thing), but this question, as stated, is too vague to be useful. First off, what kind of "beginner" are you referring to? Beginning game designer? Artist? Gameplay programmer? Musician? Some/all of the above, or perhaps something not mentioned above? What are your goals? Are you more of a tinkerer/coder, or would you prefer to use a drag-and-drop style workflow? 2D or 3D? Are you trying to make mobile games, PC, console? Etc. A bit more context would help.

And then, even after we have more context, the "what's the best engine..." type of question tends to just open the floodgates for a holy war of Unity vs Unreal vs hand-write-your-own-engine vs some other choice. There is no "right answer" to a question like this. But at least with more knowledge of what you want to do, we can point you towards tools that will help you accomplish your goals.
  43. 4 points
They're equivalent so it doesn't really matter what you pick. Global functions that share hidden static state are extremely common in C. I guess singletons are more common in C++ as they're OOish. Fighting for a better singleton is like hoping to step in a better dog turd though...
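For what it's worth, here is a minimal illustration of the two shapes being compared (the audio example and all names are invented); both boil down to one hidden, global instance.

// C-style: free functions sharing hidden static state
namespace audio
{
    static int s_volume = 100;                  // the hidden global state
    void SetVolume (int v) { s_volume = v; }
    int  GetVolume ()      { return s_volume; }
}

// C++-style: a Meyers singleton wrapping the same state
class AudioSystem
{
public:
    static AudioSystem & Instance ()
    {
        static AudioSystem instance;            // the one-and-only, lazily created
        return instance;
    }
    void SetVolume (int v)  { m_volume = v; }
    int  GetVolume () const { return m_volume; }

private:
    AudioSystem () = default;
    int m_volume = 100;
};

// Either way, callers reach for the same global thing:
//   audio::SetVolume(50);
//   AudioSystem::Instance().SetVolume(50);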
  44. 3 points
This update is going to show you guys what an idiot I can be. Be warned, this could happen to you too!

For the past month, I have been focusing almost exclusively on developing an artificial intelligence system which uses machine learning to play the characters within my game. This is a primary objective. I have spent several weeks learning more about artificial neural networks, reinforcement learning, and a few other AI methodologies. I am by no means an expert at any of this -- I'm just a novice/beginner. I have also spent considerable time working on developing my own AI system which combines the best elements from the existing methodologies but also introduces a model for actual intelligence. Digging into this has felt like a series of intellectual epiphanies exploding in my head and has been extremely rewarding. I feel like I have a strong grasp on intelligence, learning, consciousness, and sentience, and am on the verge of creating a successful conceptual model to emulate intelligence.

Okay, that's a bold claim to make. I'm going to just cut to the chase and discredit myself. This is like saying, "I found a compression algorithm which compresses anything by 97%". Strong claims like this require proof to be believed, and I have not implemented this. Instead, I'm going to share my design progress on this: AI 2.0- Reinforcement Learning.docx (~8 pages, 10 min read)

Within Spellbound, I have spent a lot of time refactoring the AI and preparing it for a machine learning system. An AI agent now has memory and gets signal inputs about the state of the world through its senses. I have implemented sight, hearing, and smell.

Sight is a cone of vision which is oriented to the character's head position and rotation. I collect a list of all objects which overlap the cone, and then I do a line trace from the eye to the object to see if there is a line of sight. If an object passes the cone test and the line of sight test, then it is registered as a visible object.

Hearing works a bit differently than sight: sight is looking out into the world to perceive objects, hearing is waiting for object noises to come to us. So, when an object creates a "noise", I create a sphere at that location and set an intensity value proportional to the decibel value of the sound. Then, I grow the radius of the sphere at the speed of sound and stop when the intensity of the sound attenuates to zero (via the inverse square law). If this sphere overlaps an ear, and the ear's hearing threshold is capable of hearing the current intensity of the sound, then we register the object as being "heard".

The sense of smell is a bit different. Some objects emit odors over time, and the odor of an object slowly radiates outward (probably following an inverse square law for intensity as well). The important distinction about odor is that an object is continuously emitting it over time. Again, an odor is going to be represented by a sphere which grows over time and only overlaps noses. As an odor-emitting object moves, it emits more odor spheres, and we get an "odor trail". A smart creature with a nose can detect an odor, find what direction the odor gets stronger, and then follow the odor trail to the odor source. So, a hungry zombie can smell living flesh and follow the smell to a living person.

All of the sensory inputs are stored in the short term memory of a brain.
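Here's a rough sketch of the sight test as described above. This is my own paraphrase of the approach, not the Spellbound source; the types and the LineTraceBlocked hook are invented stand-ins for engine functionality.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub (const Vec3 & a, const Vec3 & b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot (const Vec3 & a, const Vec3 & b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stand-in for the engine's ray cast ("line trace"); a real version would ask the physics system.
static bool LineTraceBlocked (const Vec3 & /*from*/, const Vec3 & /*to*/) { return false; }

struct SightSense
{
    Vec3  EyePosition;
    Vec3  EyeForward;        // unit vector derived from head position/rotation
    float ConeHalfAngleCos;  // cosine of half the cone angle
    float MaxDistance;

    // An object is "seen" only if it passes the cone test and the line-of-sight test
    bool CanSee (const Vec3 & target) const
    {
        Vec3  toTarget = Sub(target, EyePosition);
        float distSq   = Dot(toTarget, toTarget);
        if (distSq > MaxDistance * MaxDistance)
            return false;                                   // too far away to matter
        if (distSq < 1e-6f)
            return true;                                    // target is at the eye itself
        float invLen = 1.0f / std::sqrt(distSq);
        Vec3  dir    = { toTarget.x * invLen, toTarget.y * invLen, toTarget.z * invLen };
        if (Dot(dir, EyeForward) < ConeHalfAngleCos)
            return false;                                   // outside the vision cone
        return !LineTraceBlocked(EyePosition, target);      // eye-to-object line of sight
    }
};

Hearing and smell would follow the same pattern, except the test is an expanding sphere overlapping an ear or a nose rather than a cone anchored at the eye.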
If a sensed object is no longer sensed, the lack of sensory information doesn't mean the object stopped existing -- it's still persistent in memory. Instead of using raw sensory information as our input stream, I use short term memory as the input stream to drive behavior. We can think of memory as a representation of world state for the AI agent, and then operate based off of that state. We can either use this state information as inputs into a state machine based expert system, or we can feed it into a machine learning system. However we handle the inputs shouldn't matter: the output of both systems should be the most optimal behavior for the given state.

Currently, Spellbound uses the state machine based expert system for AI. It works. It makes a believable illusion of intelligence. The code for each character's behavior is roughly 200 lines. That's kind of manageable, right? Keep in mind, the behavior is currently scripted only to suit what is necessary for the prelude chapter of the game, so every time I want to add new game mechanics or capabilities, I will have to script out more behavior.

This is where I become an idiot. My line of reasoning: "Okay, I am probably going to eventually have up to 50 different characters. They will all need AI scripts to drive their behaviors. My game is constantly changing, so that means every time I make a significant change to the game, I will have to update each AI script. That sounds like a lot of work! Okay... I also don't even know what the most optimal behavior for every character will be, so that may mean that some expert systems are not going to be very good. Wouldn't machine learning be able to handle this gracefully? Let's do that."

Okay. Why am I an idiot? Because machine learning is a trap for engineers. I recently learned about this new term called "Nerd Sniping". Let's take a reality check on what I'm trying to do here. I'm trying to create a generalized AI system which is so good that it learns how to play any character in my game as an expert, without any coaching or training from me. To date, the DeepMind team funded by Google has been able to create AI systems which play Go so well that they can beat world champions, they can play Atari games perfectly, etc. Now, I'm asking that same type of intelligence to play any character in my game? It's possible, yes, but it's not easy.

After spending a month doing R&D on this, I realize how hard this would be to accomplish, and if I were to put an estimate on how long it would take for a novice/beginner like me to implement this, I'd be looking at a minimum of 3-6 months. That's 3-6 months in developer time, which means the realistic estimate would be multiplied by any number between two and ten. Just as I have the capability to write my own game engine from scratch (which I spent 12 months on!), I also have the capability to create this kind of general AI system. The hard question is, "Is this really the most important thing for me to work on for the next six months?". Let's put this into a different context: "Is the alternative approach cheaper and faster and less risky? (yes)" and "Do you want to ship games today or build technologies for 6-12 months away?"

You know that "Always be closing" scene from "Glengarry Glen Ross"? The equivalent for game development is "Always be shipping!". The focusing question for everyone on the team should be, "What am I doing to make this game ship as soon as possible?" And the other question: "What am I doing to increase sales?"
If you don't sell, you don't make money. If you don't make money, like it or not, you are on your way out of the industry. If you don't ship, you don't sell. If you ship garbage, you don't sell. Therefore, by hypothetical syllogism: if you don't ship or you ship garbage, then you are on your way out of the industry. If you want to stay in the industry, then ship fast, ship quality, ship often.

My AI system, while interesting, does not help me ship my game faster. It's a trap. It's a premature optimization to solve a problem I don't have yet (and may not ever have!). There's a bigger problem to solve: I need to ship quality content asap. I could continue working on this AI system and I could develop it to work and be so good that it can be reusable within any game, and I could turn it into its own product/technology and license it out to other companies, even outside of the game industry. I eventually plan on doing this. However, the right time is not right now. I need to ship my game, and whether I have machine learning AI or expert system AI won't matter to 99% of the customers.

So, I'm going to focus on wrapping up the production of the Prelude episode, shipping it asap, and then switching gears to working on the content for Episode 1. I'm going to design the content for Episode 1 to keep AI behavior relatively simple for now. If I ship Episode 1 and sales completely suck despite my best marketing efforts, then there's no point in creating Episode 2. If Episode 1 is a success, then I can build Episode 2, and *that* may be the right time to build out my machine learning AI. Certainly by Episode 3. The key is to build a customer base first so that releases actually have an audience to see them.

I've recently been talking by email with a developer relations rep from Oculus. He wanted to drop the price of my game for their online store to better reflect the content. Initially I resisted the idea, but started thinking about alternatives. I currently offer my game on Steam for $20 in early access, which has Episodes 1-3 included whenever the production finishes. I like the simplicity of that structure because it means I can just release builds which have content updates included. Another option I am now considering is releasing each episode as a separate purchase. I could release the prelude for free to act as a teaser/loss leader, and then have in-app purchases for each episode. It might be a good way to build out a customer base, because people like free shit, and if the content for the free episode is great and I leave a huge cliff hanger at the end, people will want to buy the next episode.

The problem is that the current build on Steam only contains the prelude, so if I give the prelude away for free, nobody has any incentive to purchase the full set of content for $20. So, my current solution is to set the current build to the "premium" version when Episode 1 is released, and then make the prelude free. Existing customers won't feel cheated because they'll get all of the future episodes included. If people want to buy episodes individually, they'll cost a little more per episode ($7.99?) than a bulk purchase of $20. This also solves a future problem: what if a future customer isn't interested in the "Red Wizards Tale", but they do want to experience the content for "The Sorceress of Light"? Instead of spending $20 for content they don't want, they can spend it on what they do want.

Anyways, I need to refocus my efforts on shipping the final update for the Prelude. Get it done!
The next step is to write the script for episodes 1-3. I've been reading a lot of fantasy books lately to get a better feel for writing fantasy, but I now just feel like an amateur writer in comparison to J.K. Rowling, Brent Weeks and George R.R. Martin. I guess it's important to remember that writing a good story is like constructing a skyscraper. If you only look at the finished product in wonder, you won't see any of the scaffolding it took to build it. The scaffolding of a writer is 20+ iterations of the story. So, I have to write my script about 20 times. The first five drafts will probably be garbage, and will mostly be about trying to find the story I want to tell. The remaining 15 drafts will be refining the story I found and polishing it to perfection.

And writing for VR games is more like writing a movie script than writing a book. The reason I need to write out the full story in advance is that I need to know where the story is going, so that I can go back to earlier sections of the plot and drop foreshadowing hints, add cliff hangers, and sharpen plot twists.
  45. 3 points
I wouldn't agree with that statement. They're all independent concerns.

Every modern console game is multi-threaded. My workstation has roughly the same CPU model as the XbOne/PS4, except that the consoles are clocked around 1.6GHz, while my PC is clocked at 4GHz. The consoles are slowwwwwwwwww... but they have 8 cores. So you have to write your code so that it will actually run across 8 cores. The PS3/Xb360 were the first platforms to force this change in mindset, so multi-threaded games have been a thing for about a decade now.

By latency I assume you mean the latency between the user physically pressing a button and the monitor showing some change as a result of that button press -- input-to-photon latency. In a typical game this will be about 3-4 frames, or 2-3 on a CRT/projector... In the best possible situation it looks something like:

* user presses button; the game will poll for the button state at the beginning of the next frame. This will be somewhere from 0 to 1 frames from now (+0.5 frames on average).
* game starts a new frame, polls the input devices, updates game state (+1 frame)
* game finishes the frame by issuing the rendering commands to the GPU.
* GPU then spends an entire frame executing these rendering commands (+1 frame).
* Your nice LCD monitor then decides to buffer the image for about a frame before displaying it (+1 frame)

At 60Hz, that's around 50-60ms. The easiest way to reduce those numbers is to just run at a higher framerate. If you can run the loop at 200Hz (no vsync) then the latency will be <20ms. Games may make tradeoffs that produce worse latency than this, but not because of multi-threading/determinism.

A common one is using a fixed-time-step for your gameplay update loop, in order to keep the physics more stable... actually yeah, stable physics is a determinism concern, so you're right there! In order to use a fixed-time-step and keep the visual animation perfectly smooth you have three choices:

1) use a very small time-step (like 1000Hz) where the jittery movement won't be noticed.
2) buffer an extra frame of game-state and interpolate object positions (see the sketch at the end of this post).
3) extrapolate object positions instead of interpolating, though this still causes jittery movement when objects change direction/collide/etc.

Choice #2 adds another frame to the above calculations.

Another reason to add an extra frame of latency is if input rhythm / relative timing of inputs is important. To get perfect rhythm, you can poll the input device very fast (e.g. at 1000Hz) and push all the inputs into a queue with a timestamp of when they were recorded. The game can buffer up a whole frame's worth of inputs in this manner, and then in the gameplay logic it can process the most recent frame's worth (e.g. a 16.67ms slice of inputs) at once, taking the timestamps into account. You're then able to process inputs with sub-frame accuracy -- e.g. even though you still receive all inputs right at the start of an update cycle, you can determine facts such as: the player pushed this button 3/4ths of the way through the frame.

Determinism is often down to our desire for fast floating point math. The IEEE has a specification on how floats should behave, and you can tell your compiler to follow that specification to the letter, and then you know that your calculations are reproducible... However, this is often a hell of a lot slower than telling your compiler to ignore the IEEE specification.
Then there's loads of other things like being very careful how you generate random numbers, and very careful about when and how any kind of user input is allowed to affect the simulation. e.g. in a peer-to-peer RTS game, you might need to transmit everyone's user-inputs to every other player first, get acknowledgement, then apply everyone's user inputs simultaneously on an agreed upon simulation frame. Ok... that's also a case where determinism does necessitate higher input latency -- I'm getting your post now! However, in that situation, the local client can start playing sound effects and animations immediately, which makes it seem like there's no input latency, even though they're not allowed to actually modify the simulation state for another 500ms.

There are situations where certain multi-threading approaches can introduce non-determinism, but all of those approaches are wrong. If you're multi-threading your game in such a way where your support for threading means that the game is no longer deterministic, then you're doing something terribly wrong and need to turn around and go back. I can't stress that one enough. There's no reason that multi-threaded gameplay code shouldn't behave exactly the same as a single-threaded version.
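As a follow-up to choice #2 above, here is a minimal sketch of a fixed-time-step loop that buffers one extra game state and interpolates for rendering. The GameState contents, the timing values and the commented-out Simulate/Render hooks are invented for the example.

#include <algorithm>

struct GameState { float playerX; };            // stand-in for the real simulation state

static GameState Interpolate (const GameState & a, const GameState & b, float alpha)
{
    return { a.playerX + (b.playerX - a.playerX) * alpha };
}

void RunLoop ()
{
    const float fixedDt = 1.0f / 60.0f;          // fixed simulation step
    float accumulator   = 0.0f;
    GameState previous  = { 0.0f };
    GameState current   = { 0.0f };

    while (true /* game is running */)
    {
        float frameDt = 0.016f;                  // really: measured wall-clock frame time
        accumulator += std::min(frameDt, 0.25f); // clamp to avoid a spiral of death

        // Step the simulation zero or more times at the fixed rate
        while (accumulator >= fixedDt)
        {
            previous = current;
            // current = Simulate(current, fixedDt);   // deterministic update goes here
            accumulator -= fixedDt;
        }

        // Render between the two buffered states -- this is the extra frame of
        // latency mentioned for choice #2.
        float alpha = accumulator / fixedDt;
        GameState visible = Interpolate(previous, current, alpha);
        // Render(visible);
        (void)visible;
    }
}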
  46. 3 points
    The underlying resource will stay alive as long as there is at least one D3D resource or NT shared handle referencing it. In order to destroy it, you must release the original (created) resource, the NT handle, and the opened resource.
  47. 3 points
Reinventing a crude wheel is good if you want to understand wheels better. For the sake of education, limited reinvention is generally beneficial. Reinventing a fancy wheel is a waste of time unless you want to be in the business of selling fancy wheels. Doing everything from scratch is rarely merited. Even then, you're most assuredly building upon other people's experience and expertise (or you're going to be a naive failure).

For pragmatic decisions, only one question matters: is rolling your own going to result in a dramatic improvement over the available state of the art, so much so that it offsets the cost of doing it? If so, go for it. That's how advancements happen. Otherwise... don't waste your time and effort. Play to your strengths.

I have done no research besides reading the article in question, but I sincerely doubt they are referring to writing every last bit of code from scratch. That's crazy pants. The splash screens for Witcher 3 show a dozen different middleware packages in use just in that title; obviously RED is not averse to using other people's "wheels" when it makes sense to do so.

As for what you should do... It doesn't matter. Learn what you want to learn, ship what you want to finish. Be flexible. Hard and fast rules are useless.
  48. 3 points
    Yes. The people putting together the documentary get to choose what lines to include, what clips to exclude, and how to edit it all together. All documentaries, news reports, and stories have bias like that. Learning to recognize them is an important skill. As for the people saying "deal with it", or "set some boundaries", I think those are the unhealthy reactions. While these situations happen in the community far too often, recognize that they are serious crimes. If you happen to live in a crime-ridden community you do need a proverbially thick skin to survive there, but that doesn't mean you should ignore the crime, do nothing to fix it, or ignore victims.
  49. 3 points
    Well, the good thing about 3D is that you can use as much geometry as you need to. In this case, to add a river you could simply subdivide the ground tile mesh to a proper detail level, then displace the 'river' parts downward. Then add a water tile for the water surface: You can even scatter a few doodad meshes as in the above, to 'dress up' the water's edge. Again, that tile is drawn in 3 passes: ground, then water, then rocks. (Although, in a 3D engine, if the water material is partially transparent it would typically be drawn in an alpha pass after the solids.) No complicated stitching required, just 3 meshes. (Or, 3 batches, anyway; the clutter could be built as a batch using instanced meshes.)
  50. 3 points
    Check your parentheses. You have more opening parentheses than you do closing parentheses. :^)