
GuyWithBeard

Member
  • Content count

    516
  • Joined

  • Last visited

Community Reputation

1908 Excellent

3 Followers

About GuyWithBeard

  • Rank
    Advanced Member

Personal Information

  • Interests
    Art
    Programming

Recent Profile Visitors

12108 profile views
  1. DX11 D3DReflect unresolved external symbol

    You also need to link against dxguid.lib. Try adding #pragma comment(lib, "dxguid.lib") to your code. I don't know why the docs don't mention this.
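    For reference, a minimal sketch of the reflection call (assuming "blob" holds compiled shader bytecode, e.g. from D3DCompile):

        #include <d3dcompiler.h>
        #include <d3d11shader.h>
        #pragma comment(lib, "d3dcompiler.lib")
        #pragma comment(lib, "dxguid.lib") // defines IID_ID3D11ShaderReflection

        ID3D11ShaderReflection* reflect(ID3DBlob* blob) {
            ID3D11ShaderReflection* reflector = nullptr;
            D3DReflect(blob->GetBufferPointer(), blob->GetBufferSize(),
                       IID_ID3D11ShaderReflection,
                       reinterpret_cast<void**>(&reflector));
            return reflector;
        }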
  2. Tips on porting to Apple platforms

    Don't know how I missed your response; I only saw your message now. Anyway, thanks! Yes, it seems like the safest way to go on Apple platforms (and probably most power-aware platforms, i.e. mobile) is to have the OS update the app rather than doing it yourself. Luckily I planned for this some years ago when setting up my engine base: the engine lets the application take care of the main loop however it wants, as long as it calls certain things in a certain order during one update. I found this article (http://floooh.github.io/2016/01/15/oryol-metal-tour.html) which suggests doing the same, i.e. hooking the update into the draw callback of MTKView, which is probably what I am going to do.

    However, after spending years developing only on MSVC it will still take some time before I can even build the core engine library on clang, so it will be a while before I get to implement this. But it's all good; the codebase becomes more mature through all this. I have also found some subtle bugs and some questionable code that MSVC has (erroneously) swallowed without a warning.
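    To make that concrete, here is a rough sketch of the engine-side entry point (the names are hypothetical, not my actual API). The platform layer, e.g. the MTKView draw callback on the Objective-C side, calls one tick per frame instead of the engine owning a while-loop:

        // One engine update per platform callback; the OS owns the loop.
        class Engine {
        public:
            void tickOneFrame(double dt) {
                pumpEvents();     // input / OS events
                updateWorld(dt);  // game logic
                renderFrame();    // record and submit GPU work
            }
        private:
            void pumpEvents() {}
            void updateWorld(double dt) { (void)dt; }
            void renderFrame() {}
        };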
  3. C++ Gameplay in C++

    In my experience, the most common reason to write gameplay code in C++ is that you already have an engine or base framework in C++ and don't want to deal with another language. Even if you have a well-working scripting layer, there is power and convenience in being able to write and debug the whole game from top to bottom using just a C++ IDE. This is of course not just the case with C++ but with any language.
  4. Tips on porting to Apple platforms

    Really? No one knows? This seems like a fairly typical scenario to me, iOS being as popular as it is. Anyway, I took the sword, as it is dangerous to go alone, and started porting my core library.

    For now I am going with a Cocoa Touch Shared Library, as it seems to be the closest to what I have on Windows. Apparently iOS has supported loading shared libraries since iOS 8, so I should be good on that front. It seems like this produces a "framework", which is a bundle containing the dylib file plus headers and resources, which is cool I guess. Still not sure if I actually need that or not. For the plugins I am thinking about having straight dylib files (a rough sketch of the loading is at the end of this post), but I am still unsure whether I need resources to create a .nib file, or whatever the windowing resources are called nowadays, in which case a framework would be better.

    Seems like I should be able to drive the update/render logic the way I normally do, but it remains to be seen whether that is a good idea or not. CADisplayLink seems to be the preferred way to drive rendering, so I might want to add support for that later. It will be a while before I have even the core lib compiling on clang, so I don't have to worry about that right now. Thoughts?
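    For what it's worth, the dylib side of the plugin loading would look roughly like this; dlopen/dlsym are the real calls, but the library and entry point names are made up:

        #include <dlfcn.h>

        // Each plugin exports a single C factory function.
        typedef void* (*CreateBackendFn)();

        void* loadBackend(const char* path) {  // e.g. "libMetalBackend.dylib"
            void* handle = dlopen(path, RTLD_NOW);
            if (!handle) return nullptr;
            CreateBackendFn create =
                reinterpret_cast<CreateBackendFn>(dlsym(handle, "createRenderBackend"));
            return create ? create() : nullptr;
        }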
  5. Tips on porting to Apple platforms

    Hey folks, I have been thinking about porting my tech stack (engine + game prototype) to Apple products for some time now. I have done some Mac development in the past, small-scale Cocoa desktop development 10 years ago or so, but I realize I have fallen out of the Apple development loop completely and don't really know what's what anymore. So I am asking you for pointers and tips on how to proceed. Some background:

    - My game is a classical Win32 app linking to my main engine library, which is a DLL. The engine can then load any number of plugin DLLs at runtime; e.g. the rendering backends are loaded as plugins (a rough sketch of this loading is at the end of the post). If you know how OGRE loads render systems you get the idea, as that was what inspired me back in the day.

    - The current version of the engine has rendering backends for DX11, DX12 and Vulkan, and the abstraction layer is written with the modern APIs in mind, i.e. it exposes command queues, command buffers, PSOs, fences etc. Because of this I imagine I should target Metal rather than try to bolt an OpenGL backend onto it for the Apple platforms.

    My first order of business would be to get a simple test app (e.g. a spinning cube) running on an iPad Air 2. I have a few questions on how to get started:

    1. What kind of projects do I want to set up in Xcode? What should the actual game be? Xcode offers me something called a "Game". Is that the right one? Or perhaps "Single View App"? Keep in mind that the game would probably not be the thing that instantiates the window. In my Windows builds the game contains mostly game-specific code; the engine library contains the code for loading a renderer backend DLL, which then sets up the actual window (a plain Win32 window for the D3D backends and a GLFW window for Vulkan). Can I do the same on iOS, or does the game itself have to create the window there?

    2. What about the engine library and the plugin libraries? What kinds of projects do I create for them? Xcode offers something called a Cocoa Touch Library, but that sounds like it is related to the touch UI somehow.

    3. On Windows the game runs in a typical main loop, something like:

        int main() {
            startup();
            while (running) {
                update();
                render();
            }
            shutdown();
            return 0;
        }

    Can I use the same kind of structure on iOS with a Metal backend? Most Metal tutorials I see use some kind of view with a draw callback that fires when it is time to draw things. That's fine as well, if that is how the OS wants me to structure the game, but it would require some changes to how the game is set up. If that's the case, where do I put all of the non-rendering code, i.e. the world update etc.? Is there a separate place for that, or does all of the updating go in the draw callback?

    4. Should I be using something like SDL for this? I did some googling and it seems people have made Metal work with SDL, but it is all highly WIP or unofficial. SDL seems to offer the plain old update/render loop structure I mentioned above, even on iOS, but does that work well with Metal? Or is it better to let the OS call into your code for updating/rendering?

    Thanks!
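    For reference, the Windows-side plugin loading mentioned above is roughly this (the entry point name is simplified):

        #include <windows.h>

        typedef void* (*CreateBackendFn)();

        void* loadBackend(const char* dllName) {  // e.g. "VulkanBackend.dll"
            HMODULE dll = LoadLibraryA(dllName);
            if (!dll) return nullptr;
            CreateBackendFn create =
                reinterpret_cast<CreateBackendFn>(GetProcAddress(dll, "createRenderBackend"));
            return create ? create() : nullptr;
        }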
  6. DX11 My renderer stopped working

    All right then, glad you figured it out! Still, I would recommend enabling the debug layers; they have helped me many times in the past.
  7. DX11 My renderer stopped working

    Okay, first of all, you haven't told us how it "isn't working". What's the expected output? What's the actual output? Does it crash? Does it render incorrectly? Does it render at all?

    Try enabling the debug layers and have them output any DX11 warnings/errors; that might help you narrow down the problem. Looky here: http://blogs.msdn.com/b/chuckw/archive/2012/11/30/direct3d-sdk-debug-layer-tricks.aspx

    I haven't looked through all your code (because c'mon), but one thing I noticed is that your DX pixel shader always outputs a constant color value while your GLSL fragment shader outputs whatever color is passed into it. Not sure if this is by design or not.
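    If it helps, enabling the debug layer is a one-flag change at device creation; a minimal sketch:

        #include <d3d11.h>

        HRESULT createDevice(ID3D11Device** device, ID3D11DeviceContext** context) {
            UINT flags = 0;
        #ifdef _DEBUG
            flags |= D3D11_CREATE_DEVICE_DEBUG; // turns on the debug layer
        #endif
            return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                     flags, nullptr, 0, D3D11_SDK_VERSION,
                                     device, nullptr, context);
        }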
  8. AngelScript 2.32.0 is out

    Thanks for your hard work!
  9. C++ C++ Custom Memory Allocation

    It's a pointer to a u8 value, not the actual value itself. The pointer is still going to be 32 or 64 bits, depending on the architecture. You need to cast it to a u8 pointer to have the pointer arithmetic treat it as a byte array, which is obviously what you want when allocating memory.
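    To illustrate (with u8 standing for an unsigned 8-bit integer, e.g. std::uint8_t):

        #include <cstdint>
        #include <cstdlib>

        using u8 = std::uint8_t;

        void example() {
            void* block = std::malloc(1024);
            u8* bytes = static_cast<u8*>(block);
            u8* p = bytes + 16;  // advances exactly 16 bytes
            // With an int* instead, "+ 16" would advance 16 * sizeof(int) bytes.
            (void)p;
            std::free(block);
        }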
  10. DX12 PSO Management in practice

    Not sure if this is something within your reach, but if possible I would suggest going the other way, i.e. have your API expose a DX12-style PSO and emulate it on DX11. You can easily hash the DX11 rasterizer state, depth-stencil state and blend state, and only set them if they actually change. In my engine I always set the shaders and input layout on DX11 when switching PSOs, and it hasn't bitten me performance-wise, at least not yet. I would assume you could hash those too if you want, and only set them if they change. On DX12, setting the PSO is straightforward, and Vulkan is pretty much the same if you want to go that route at some point.
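    A rough sketch of what I mean, with simplified types and the hashing left out (the hashes here would be computed from the state descriptions at PSO creation):

        #include <d3d11.h>
        #include <cstdint>

        // DX12-style PSO emulated on DX11: created state objects plus
        // hashes of the descriptions they were created from.
        struct PipelineStateDX11 {
            ID3D11RasterizerState*   rasterizer;
            ID3D11DepthStencilState* depthStencil;
            ID3D11BlendState*        blend;
            std::uint64_t rasterizerHash, depthStencilHash, blendHash;
        };

        void bindPipelineState(ID3D11DeviceContext* ctx,
                               const PipelineStateDX11& pso,
                               PipelineStateDX11& current) {
            // Only set the sub-states whose hashes actually changed.
            if (pso.rasterizerHash != current.rasterizerHash)
                ctx->RSSetState(pso.rasterizer);
            if (pso.depthStencilHash != current.depthStencilHash)
                ctx->OMSetDepthStencilState(pso.depthStencil, 0);
            if (pso.blendHash != current.blendHash)
                ctx->OMSetBlendState(pso.blend, nullptr, 0xffffffff);
            current = pso;
        }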
  11. Gamma correction - sanity check

    Yep. I missed that exact flag. However, texconv seems to do linear-to-sRGB conversion even if you leave it out, as long as you specify an sRGB format as the output format. So, what I assume happened was that my input texture, already in sRGB, was treated as linear and converted to sRGB a second time. This caused the texture to come out very bright.
  12. Gamma correction - sanity check

    Well, I managed to get the textures to look correct. I incorrectly assumed that texconv (of DX SDK fame) would be able to detect if an image was already in sRGB format or not. Turns out my images lacked the necessary metadata, so now my texture converter assumes a texture is in sRGB if it is set to output into sRGB, and that took care of the overly bright textures.
  13. Gamma correction - sanity check

    No need to apologize, this is very helpful. Some of my confusion came from the fact that I tried to do the same sanity check in Unity, by setting the project to use linear color space and clearing the screen to [128, 128, 128, 1], and it actually comes out as 128 when color-picked in GIMP, while my render window is cleared to 186. Although I realize I have no idea what Unity is doing behind the scenes, so this is probably not helpful at all. Anyway, I realized that I have been converting my textures to sRGB incorrectly, which probably explains why they are showing up brighter than they should. I'll fix that, output one of them on screen and see how they look...
  14. Gamma correction - sanity check

    I feel like you are dodging my actual question. Anyway, thanks thus far. I will have to read up on this a bit, I feel... EDIT: For the record, Matt77hias edited his previous posts to provide more info. Thanks dude!
  15. Gamma correction - sanity check

    Yes, but in this case I render the color instead of sampling it. Would rendering the color to screen and doing a print-screen of that equate to your third sampling step? I.e. should I expect 128 or 186? (It actually came out as 188.)
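    For what it's worth, both numbers fall out of the encoding you pick; a quick check, assuming 128/255 as the linear input:

        #include <cmath>
        #include <cstdio>

        int main() {
            double x = 128.0 / 255.0;
            // Exact sRGB encoding (valid for x > 0.0031308)
            double srgb  = 1.055 * std::pow(x, 1.0 / 2.4) - 0.055; // ~0.737 -> ~188
            // Plain gamma 2.2 approximation
            double gamma = std::pow(x, 1.0 / 2.2);                 // ~0.731 -> ~186
            std::printf("%.1f %.1f\n", srgb * 255.0, gamma * 255.0);
            return 0;
        }

    So the 186 figure comes from the gamma 2.2 approximation, while the exact sRGB curve gives roughly 188, which matches the screen capture.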