GuyWithBeard

Member
  • Content count

    522
  • Joined

  • Last visited

Community Reputation

1911 Excellent

3 Followers

About GuyWithBeard

  • Rank
    Advanced Member

Personal Information

  • Industry Role
    3D Artist
    Level Designer
    Programmer
  • Interests
    Art
    Design
    Programming

Recent Profile Visitors

12568 profile views
  1. GuyWithBeard

    Networked Physics sanity check

    Seems like a solid version of the Quake networking model. We do something similar, except we keep the server ahead of the clients, i.e. we never extrapolate on the clients, only interpolate. One thing missing from your description is lag compensation. Depending on the gameplay this may or may not be needed, but if you have hit-scan weapons you might want the server to do some kind of lag compensation to counter the fact that the client and server are on different timelines.
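
    As a hedged illustration of the lag-compensation idea (not the poster's or Quake's actual code; the snapshot layout and the one-second history window are assumptions), the server can keep a short history of world snapshots and rewind to the state the shooter actually saw before testing a hit-scan shot:

        // Minimal sketch of server-side lag compensation for hit-scan
        // weapons. All names are illustrative.
        #include <cmath>
        #include <deque>
        #include <vector>

        struct Snapshot {
            double time;                   // server time of this snapshot
            std::vector<float> positionsX; // one entry per player (toy 1D example)
        };

        class LagCompensator {
        public:
            void record(double serverTime, std::vector<float> posX) {
                history.push_back({serverTime, std::move(posX)});
                // Keep roughly one second of history.
                while (!history.empty() && serverTime - history.front().time > 1.0)
                    history.pop_front();
            }

            // Return the snapshot closest to the time the shooter saw,
            // i.e. serverTime minus the shooter's latency.
            const Snapshot* rewind(double serverTime, double shooterLatency) const {
                const double target = serverTime - shooterLatency;
                const Snapshot* best = nullptr;
                for (const auto& s : history)
                    if (!best || std::abs(s.time - target) < std::abs(best->time - target))
                        best = &s;
                return best;
            }

        private:
            std::deque<Snapshot> history;
        };

    The hit test is then performed against the rewound positions, after which the server continues on its normal timeline.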
  2. GuyWithBeard

    DirectX - Vulkan clip space

    I tried out the parameter last night and it does what it says on the can, so that's probably the easiest way to fix it up without having to flip the coordinate manually in the shader code. Not related to this, but I had some very strange issues with the glslangValidator from SDK 1.1.73.0 generating sub-optimal SPIR-V, almost as if the spirv-opt step that's supposed to be built into the LunarG version of the program was not run. Anyway, I had to revert to 1.1.70.0, but as it turned out the --invert-y parameter is available in that version too :)
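
    For reference, an invocation would presumably look something like this (file names made up; -V requests Vulkan-semantics SPIR-V output and -o names the output file):

        glslangValidator -V --invert-y shader.vert -o shader.vert.spv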
  3. GuyWithBeard

    DirectX - Vulkan clip space

    After updating to the LunarG SDK version 1.1.73.0 I noticed that glslangValidator has gained this parameter:

        --invert-y | --iy    invert position.Y output in vertex shader

    Sounds like exactly what we need. I haven't tried it out yet though.
  4. I don't know if this is something you can use, but maybe it will give you some ideas. I wrote a preprocessor that is run before the actual shader compilation is performed. I let the preprocessor generate all the bindings for all shader inputs, according to some rules I have set up. This system is completely deterministic and I can decide exactly what I want and where it goes. I compile HLSL to both DX bytecode for DX11/12 and to SPIR-V for Vulkan. I might have something like this:

        #autobind Texture2D gTexture;
        #autobind cbuffer gConstantBuffer
        #autobind sampler gSampler;

     where #autobind is a preprocessor directive that gets expanded to something like this:

        layout(set=0,binding=0) Texture2D gTexture : register(t0);
        layout(set=0,binding=1) cbuffer gConstantBuffer : register(b0)
        layout(set=0,binding=2) sampler gSampler : register(s0);

     Using this system I have full control over where my inputs get bound, e.g. I can always bind shadow map inputs to certain slots etc. I then use D3DReflect() on DX and SPIRV-Cross on Vulkan to verify that the compiler generated what I want. E.g. in certain cases the compiler optimizes away an input in DXBC but not in SPIR-V, or vice versa. This does not let you "re-bind" things in the already generated SPIR-V, but it does let you decide where they are put in the first place.
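
     As a toy sketch of what such an expansion pass might look like (the rules and names are illustrative, not the poster's actual implementation), the preprocessor can scan for the directive and hand out binding and register indices deterministically:

        // Expand "#autobind <decl>" lines into explicit layout/register
        // bindings. Textures get t-registers, cbuffers b, samplers s.
        #include <cstdio>
        #include <string>
        #include <vector>

        std::string expandAutobind(const std::vector<std::string>& lines) {
            int binding = 0, t = 0, b = 0, s = 0;
            std::string out;
            for (const auto& line : lines) {
                const std::string tag = "#autobind ";
                const auto pos = line.find(tag);
                if (pos == std::string::npos) { out += line + "\n"; continue; }

                std::string decl = line.substr(pos + tag.size());
                const bool semi = !decl.empty() && decl.back() == ';';
                if (semi) decl.pop_back();

                char space = 't'; int* slot = &t;                    // default: texture
                if (decl.rfind("cbuffer", 0) == 0) { space = 'b'; slot = &b; }
                else if (decl.rfind("sampler", 0) == 0) { space = 's'; slot = &s; }

                char buf[512];
                std::snprintf(buf, sizeof(buf),
                              "layout(set=0,binding=%d) %s : register(%c%d)%s\n",
                              binding++, decl.c_str(), space, (*slot)++,
                              semi ? ";" : "");
                out += buf;
            }
            return out;
        }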
  5. GuyWithBeard

    Send one color per surface to shader?

    Not sure what the "best practice" is in this fairly simple case, but using vertex colors like you do now is not "bad practice". Another way to do it (and I am sure there are more still) is to set up a constant buffer with a single color value and use that, but it seems silly to have to create, update and bind a CB for just one color value. In the end it is all about what you are trying to do and what your data looks like. E.g. if you have hundreds of thousands of vertices you could save a bit by not making the color part of the vertex, but with fewer vertices it won't be a problem.
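
    For completeness, a sketch of the constant-buffer alternative on D3D11 (error handling omitted, names made up):

        // A single-color constant buffer: create once, update per draw.
        #include <d3d11.h>
        #include <DirectXMath.h>

        struct ColorCB {
            DirectX::XMFLOAT4 color; // 16 bytes, satisfies CB size alignment
        };

        ID3D11Buffer* createColorCB(ID3D11Device* device) {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth      = sizeof(ColorCB);
            desc.Usage          = D3D11_USAGE_DYNAMIC;
            desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
            ID3D11Buffer* cb = nullptr;
            device->CreateBuffer(&desc, nullptr, &cb);
            return cb;
        }

        void updateAndBind(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                           const DirectX::XMFLOAT4& color) {
            D3D11_MAPPED_SUBRESOURCE mapped = {};
            ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
            static_cast<ColorCB*>(mapped.pData)->color = color;
            ctx->Unmap(cb, 0);
            ctx->PSSetConstantBuffers(0, 1, &cb); // slot b0 in the pixel shader
        }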
  6. How is game_pVertexShader created? Normally you call CreateVertexShader() on the device to create a vertex shader. The first parameter to that function is the shader byte code.
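
     The usual creation path looks roughly like this (file name and entry point are placeholders):

        // Compile HLSL from file, then create the vertex shader from the
        // resulting byte code. Error handling omitted for brevity.
        #include <d3d11.h>
        #include <d3dcompiler.h>
        #pragma comment(lib, "d3dcompiler.lib")

        ID3D11VertexShader* createVS(ID3D11Device* device) {
            ID3DBlob* bytecode = nullptr;
            ID3DBlob* errors   = nullptr;
            D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                               "VSMain", "vs_5_0", 0, 0, &bytecode, &errors);

            ID3D11VertexShader* vs = nullptr;
            device->CreateVertexShader(bytecode->GetBufferPointer(),
                                       bytecode->GetBufferSize(),
                                       nullptr, &vs); // first param: byte code
            bytecode->Release();
            return vs;
        }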
  7. GuyWithBeard

    D3DReflect unresolved external symbol

    You also need to link against dxguid.lib. Try adding

        #pragma comment(lib, "dxguid.lib")

    to your code. I don't know why the docs don't mention this.
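
    The GUID the linker is missing lives in dxguid.lib, which a minimal reflection call makes apparent:

        // D3DReflect needs IID_ID3D11ShaderReflection, a GUID symbol
        // defined in dxguid.lib.
        #include <d3d11shader.h>
        #include <d3dcompiler.h>
        #pragma comment(lib, "d3dcompiler.lib")
        #pragma comment(lib, "dxguid.lib") // resolves the IID_* symbols

        ID3D11ShaderReflection* reflect(const void* bytecode, size_t size) {
            ID3D11ShaderReflection* reflection = nullptr;
            D3DReflect(bytecode, size, IID_ID3D11ShaderReflection,
                       reinterpret_cast<void**>(&reflection));
            return reflection;
        }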
  8. GuyWithBeard

    Tips on porting to Apple platforms

    Don't know how I missed your response, I only saw your message now. Anyway, thanks! Yes, it seems like the safest way to go on Apple platforms (and probably most power-aware platforms, i.e. mobile) is to have the OS update the app rather than doing it yourself. Luckily I planned for this some years ago when setting up my engine base: the engine lets the application take care of the main loop however it wants, as long as it calls certain things in a certain order during one update. I found this article (http://floooh.github.io/2016/01/15/oryol-metal-tour.html) which suggests doing the same, i.e. hooking the update into the draw callback of MTKView, which is probably what I am going to do. However, after spending years developing only on MSVC it will take some time before I can even build the core engine library on clang, so it will be a while before I get to implement this. But it's all good, the codebase becomes more mature through all this. I have also found some subtle bugs and some questionable code that MSVC has (erroneously) swallowed without a warning.
  9. GuyWithBeard

    Gameplay in C++

    In my experience, the most common reason to write gameplay code in C++ is that you already have an engine or base framework in C++ and don't want to deal with another language. Even if you have a well-working scripting layer, there is power and convenience in being able to write and debug the whole game from top to bottom using just a C++ IDE. This is of course not just the case with C++ but with any language.
  10. GuyWithBeard

    Tips on porting to Apple platforms

    Really? No one knows? This seems like a fairly typical scenario to me, iOS being as popular as it is. Anyway, I took the sword, as it is dangerous to go alone, and started porting my core library.

    For now I am going with a Cocoa Touch Shared Library, as it seems to be the closest to what I have on Windows. Apparently iOS supports loading shared libraries since iOS 8, so I should be good there. It seems like this produces a "framework", which is a bundle containing the dylib file plus headers and resources, which is cool I guess. Still not sure if I actually need that or not. For the plugins I am thinking about having straight dylib files, but I am still unsure if I need resources to create a .nib file, or whatever the windowing resources are called nowadays, in which case a framework would be better.

    Seems like I should be able to drive the update/render logic the way I normally do, but it remains to be seen if that is a good idea or not. CADisplayLink seems to be the preferred way to render, so I might want to add support for that later. It will be a while before I have even the core lib compiling on clang, so I don't have to worry about that right now. Thoughts?
  11. Hey folks, I have been thinking about porting my tech stack (engine + game prototype) to Apple products for some time now. I have done some Mac development in the past, small scale Cocoa desktop development 10 years ago or so. I realize I have fallen out of the Apple development loop completely and don't really know what's what anymore. So I ask you for pointers and tips on how to proceed.

     Some background:
     - My game is a classical Win32 app linking to my main engine library, which is a DLL. The engine can then load any number of plugin DLLs at runtime. E.g. the rendering backends are loaded as plugins. If you know how OGRE loads render systems you get the idea, as that was what inspired me back in the day.
     - The current version of the engine has rendering backends for DX11, DX12 and Vulkan, and the abstraction layer is written with the modern APIs in mind, i.e. it exposes command queues, command buffers, PSOs, fences etc. Because of this I imagine I should target Metal rather than try to bolt an OpenGL backend onto it for the Apple platforms.

     My first order of business would be to get a simple test app (e.g. a spinning cube) running on an iPad Air 2. I have a few questions on how to get started:

     1. What kind of projects do I want to set up in Xcode? What should the actual game be? Xcode offers me something called a "Game". Is that the right one? Or perhaps "Single View App"? Keep in mind that the game would probably not be the thing that instantiates the window. In my Windows builds the game contains mostly game-specific code. The engine library contains the code for loading a renderer backend DLL, which then sets up the actual window (a plain Win32 window for the D3D backends and a GLFW window for Vulkan). Can I do the same on iOS? Or does the game itself have to create the window on iOS?

     2. What about the engine library and the plugin libraries? What kinds of projects do I create for them? Xcode offers something called a Cocoa Touch Library, but that sounds like it is related to the touch UI somehow.

     3. On Windows the game runs in a typical main loop, something like:

        void main()
        {
            startup();
            while (running)
            {
                update();
                render();
            }
            shutdown();
        }

     Can I use the same kind of structure on iOS with a Metal backend? Most Metal tutorials I see use some kind of view that has a draw callback that it fires when it is time to draw things. That's fine as well if that is how the OS wants me to structure the game, but it would require some changes to how the game is set up. If that's the case, where do I put all of the non-rendering code, i.e. the world update etc.? Is there a separate place for that, or do I put all of the update in the draw callback?

     4. Should I be using something like SDL for this? I did some googling and it seems people have made Metal work with SDL, but it is all highly WIP or unofficial. SDL seems to offer the plain old update/render loop structure I mentioned above, even on iOS, but does that work well with Metal? Or is it better to let the OS call into your code for updating/rendering?

     Thanks!
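
     Regarding question 3, one hedged way to bridge the two models (names are illustrative, not from any particular engine) is to split the body of the owned loop into a single tick that either a manual loop or an OS-driven draw callback can call:

        // One frame of work, callable from any driver.
        class Engine {
        public:
            void startup()  {}
            void shutdown() {}

            // The body of the old while(running) loop as one step.
            void tick() {
                update(); // world/gameplay update first
                render(); // then draw the frame
            }

        private:
            void update() {}
            void render() {}
        };

        // Windows-style driver:
        //   engine.startup(); while (running) engine.tick(); engine.shutdown();
        //
        // iOS-style driver: the MTKView delegate's per-frame draw callback
        // (or a CADisplayLink handler) calls engine.tick() once per refresh.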
  12. GuyWithBeard

    My renderer stopped working

    Alright then, glad you figured it out! Still, I would recommend enabling the debug layers. They have helped me many times in the past.
  13. GuyWithBeard

    My renderer stopped working

    Okay, first of all you haven't told us how it "isn't working". What's the expected output? What's the actual output? Does it crash? Does it render incorrectly? Does it render at all? Try enabling the debug layers and make them output any DX11 warnings/errors. That might help you narrow down the problem. Looky here: http://blogs.msdn.com/b/chuckw/archive/2012/11/30/direct3d-sdk-debug-layer-tricks.aspx I haven't looked through all your code (because c'mon), but one thing I noticed is that your DX pixel shader always outputs a constant color value while your GLSL fragment shader outputs whatever color is passed into it. Not sure if this is by design or not.
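
    For reference, enabling the debug layer happens at device creation (debug builds only; helper name made up):

        // Create a D3D11 device with the debug layer so warnings/errors
        // show up in the debugger output window.
        #include <d3d11.h>
        #pragma comment(lib, "d3d11.lib")

        HRESULT createDevice(ID3D11Device** device, ID3D11DeviceContext** ctx) {
            UINT flags = 0;
        #if defined(_DEBUG)
            flags |= D3D11_CREATE_DEVICE_DEBUG;
        #endif
            return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                     flags, nullptr, 0, D3D11_SDK_VERSION,
                                     device, nullptr, ctx);
        }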
  14. GuyWithBeard

    AngelScript 2.32.0 is out

    Thanks for your hard work!
  15. GuyWithBeard

    C++ Custom Memory Allocation

    It's a pointer to a u8 value, not the actual value itself. The pointer is still going to be 32 or 64 bits, depending on the architecture. You need to cast it to a u8 pointer so that the pointer arithmetic treats the memory as a byte array, which is obviously what you want when allocating memory.
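
    To make the arithmetic concrete, a minimal sketch:

        #include <cstddef>
        #include <cstdint>

        // Pointer arithmetic advances in units of the pointee's size, so
        // offsetting raw memory by N bytes needs a byte-sized pointee.
        void* offsetPointer(void* base, std::size_t offsetBytes) {
            return static_cast<std::uint8_t*>(base) + offsetBytes;
        }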