malboro

Member

  • Content Count: 5
  • Joined
  • Last visited

Community Reputation

  105 Neutral

About malboro

  • Rank: Newbie

Personal Information

  • Interests: Programming
  1. OMG, look who replied - I'm sorry for this fanboy moment, I just want to thank you for ALL of your input here. I've even become a little paranoid - since all those great sites like AltDevBlogADay tend to disappear quickly, I wrote a tool that backs up all your replies for my great-grandsons. I hope one day I will be as good as you (and several others on this forum). Yes, this is far better, since some passes might be shared between different stages - allowing us to precompute/optimize the command buffer. It is also way more decoupled. Exactly, I'm more familiar with this setup - I believe it was also used in the old (now deprecated) DirectX Effect Framework. As a side note - I implemented such an approach in my previous engine and it was a nightmare (I even deferred the effect+technique+pass lookup all the way to the render loop (facepalm)) - this is the reason I wanted to design it from scratch in a more sensible way. Now I'm thinking I wanted to allow too much flexibility (and tried to solve problems that don't exist yet)... I will go down the same path as you (as always) since I like preprocessing/offline baking as much as possible. Thank you so much.
  2. Sorry, but I have to bump this thread. I would really love to hear any feedback on what is a good (if not the best) way to define a "Technique". I'm starting to think that a Technique should implement a shader stage ("DepthOnly", "Deferred", "Forward") - but I'm still not confident / have mixed feelings about it. Thanks
  3. Hi, I'm designing a new shader system for my engine, so I decided to check out how others do it... I stumbled upon Hodgman's response, I saw Bitsquid's presentation about their data-driven renderer, and finally FrameGraph (a DAG applied to yet another thing). And... now I'm overloaded with information... Initially I thought that I had understood Hodgman's post, so I tried to wrap my shaders into Passes, and those into Techniques. Since I know Lua pretty well, I also decided to use it for custom "effect" files:

     Shader "common" [[
         // GLSL snippet
     ]]

     Shader "simpleVertexShader" [[
         // GLSL snippet
     ]]

     Shader "simpleFragShader" [[
         // Another GLSL snippet
     ]]

     ShaderProgram "simpleProgram" {
         vertex_shader   = { "common", "simpleVertexShader" },
         fragment_shader = { "simpleFragShader" }
     }

     ShaderProgram "simplerProgram" {
         options         = { maxLights=2, cheapApproximations=true },
         vertex_shader   = { "common", "simpleVertexShader" },
         fragment_shader = { "simpleFragShader" }
     }

     Technique "normal" {
         { name="opaque_pass", layer="opaque", program="simpleProgram" },
     }

     Technique "selected" {
         { name="outline_pass", layer="opaque", program="simplerProgram",
           uniforms={ outlineColor={0.5, 0.1, 0.1, 1.0} } },
         { name="opaque", layer="opaque", program="simpleProgram" }
     }

So basically a Technique is a container for TechniquePasses; each Pass has a name, a shading program, and is assigned to a Layer. A Layer is a special place in a pipeline, and it has its own RenderQueue and an optional profiling scope (e.g. "shadow mapping").

My thought process was as follows: I can have multiple Pipelines (Forward, Shadow, Deferred, Postprocess, Raytracing). Pipelines are divided into stages ("layers") that run at different times, and they enforce the ultimate render order (the connections between pipelines build RenderPaths - my version of FrameGraph). TechniquePasses are also guaranteed to execute in declaration order (not necessarily one right after another, but the order won't change - I reserved some bits in the sort key for the pass index). During rendering I look up the Material, which gives me a Technique (the Material-Technique connection is actually baked offline), then I iterate over its passes, submitting drawables to the respective render queues (layers) - a rough sketch of this layout is appended after these posts. In theory a "Toon Shading" rendering technique should be pipeline-independent (it should be possible to use it both forward and deferred). So should "Phong" and "BlinnPhong", but there might be issues with deferred pipelines (different G-Buffer layouts in some cases)...

I started implementing it, but it went too far for my taste - it feels too coupled (high-level rendering mixed with low-level) and carries a high "mental tax". Swapping Techniques at runtime is hard and the overall complexity just skyrocketed... and this is supposed to be a low-level render wrapper.

So, taking two steps back - what actually is a "Technique"? Should it describe how to render a material in different pipelines (e.g. "DepthOnly", "Deferred", "Forward"), shading methods ("Phong", "BlinnPhong", "PBR" techniques), or more basic variations ("selected", "nightVision", "normal", "glowing" - currently I'm handling such variations as shader options)? Up to this point I always imagined a "Technique" as a dumb container for passes, either in the same stage (like in toon shading: contour pass + normal pass) or for special effects like some kind of magic shield (opaque + transparent passes), but those are not that popular these days. Well, I believe the last time I used them was when XNA was becoming popular.

And one more question: should the shading system allow for technique swaps at run-time? I'm starting to think that it should be static, in order to make Materials less complicated (no dynamic lookups, a single UBO/cbuffer layout, etc.)
  4. Thank you for your quick but very informative answer! Now I think that I've shot myself in the foot, because from day one my engine has been an executable - think of a typical "player" looking for a compiled blob - instead of a library. I'm a typical Linux user, so I liked the "one application = one purpose" principle (instead of an über-editor)... Starting to regret that.

I have another question: how, from a very high-level view, is an editor designed/implemented? Is it yet another engine application (a game, but for editing)? Does the engine need to know/understand the source data? For now I have small cross-compilers that understand my data-definition language and prepare the runtime data for the engine (a tiny sketch of that split is appended after these posts). I'm also still wondering whether the editors you listed use the operating-system UI or implement it themselves.

Best regards
malboro
  5. Hello everyone! This is my first post ever (I've always tried to avoid asking others, because I like to dig and figure everything out by myself...), so I apologize if I did something wrong.

I want to create a level editor that uses my engine to display the scene. The problem is that I don't want to link the editor directly to the game engine, but rather use the "clean" way in which the Bitsquid engine communicates with its tools [1].

My engine uses OpenGL and SDL2 to abstract the platform details. I quickly prototyped the editor in wxPython and C#: I get the window handle (HWND on Windows, X Window ID on Linux) and try to "attach" my engine to the editor with SDL_CreateWindowFrom((void *)handle), but every attempt to create the rendering context returns NULL (a minimal sketch of what I'm trying is appended after these posts). As a side note, in C# using the Control.Handle property as a window handle/ID on Linux generates a BadWindow error. At first I guessed that SDL was missing something, so I tried EGL directly, with the same result.

Is such a tool architecture even possible with OpenGL (something like a shared context, but across processes)? I guess the Bitsquid guys used swap chains from DirectX, but I have no idea how to emulate them in OpenGL (while still being cross-platform).

[1] http://bitsquid.blogspot.com/2010/04/our-tool-architecture.html

Thanks in advance for any hints
malboro

PS: I want to run my tools (including the level editor) at least on Windows, Linux, and maybe Mac OS X.
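A minimal C++ sketch of the Technique/Pass/Layer layout and sort-key packing described in post 3. All names and bit widths (TechniquePass, makeSortKey, the three bits reserved for the pass index, and so on) are illustrative assumptions, not taken from any particular engine:

    #include <cstdint>
    #include <string>
    #include <vector>

    // One pass of a technique: which program to run and in which layer (stage).
    struct TechniquePass {
        std::string name;       // e.g. "outline_pass"
        uint32_t    layerId;    // index of the layer / render queue it targets
        uint32_t    programId;  // baked shader-program handle
    };

    // A technique is an ordered list of passes; declaration order is preserved.
    struct Technique {
        std::string                name;    // e.g. "selected"
        std::vector<TechniquePass> passes;
    };

    // Illustrative sort-key layout: [ layer | pass index | depth | program ].
    // Reserving a few bits for the pass index keeps the passes of one technique
    // in declaration order within the same layer.
    inline uint64_t makeSortKey(uint32_t layer, uint32_t passIndex,
                                uint32_t depth, uint32_t program) {
        return (uint64_t(layer     & 0xFF)     << 56) |
               (uint64_t(passIndex & 0x7)      << 53) |
               (uint64_t(depth     & 0xFFFFFF) << 29) |
               (uint64_t(program   & 0xFFFF));
    }

    struct DrawItem    { uint64_t sortKey; uint32_t drawableId; };
    struct RenderQueue { std::vector<DrawItem> items; };

    // Submission: Material -> Technique (baked offline) -> one DrawItem per pass,
    // pushed into the render queue of the layer that the pass targets.
    void submit(const Technique& tech, uint32_t drawableId, uint32_t depth,
                std::vector<RenderQueue>& layers) {
        for (uint32_t i = 0; i < tech.passes.size(); ++i) {
            const TechniquePass& pass = tech.passes[i];
            layers[pass.layerId].items.push_back(
                { makeSortKey(pass.layerId, i, depth, pass.programId), drawableId });
        }
    }

Sorting each queue by sortKey then yields the layer-major order with pass indices preserved, as the post describes.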
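A small sketch of the offline-compiler / runtime-loader split mentioned in post 4, assuming a purely hypothetical material blob format; the header fields, magic value, and function names are invented for illustration only:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical runtime-ready blob header: the offline tool writes it, and the
    // engine only ever reads it, never seeing the source data-definition language.
    struct MaterialBlobHeader {
        uint32_t magic;        // identifies the blob type, e.g. "MTRL"
        uint32_t techniqueId;  // Material -> Technique binding baked offline
        uint32_t uniformBytes; // size of the baked uniform block that follows
    };

    static const uint32_t kMaterialMagic = 0x4C52544Du; // "MTRL" in memory (little-endian)

    // Offline compiler (a separate, single-purpose executable): source text in, blob out.
    bool compileMaterial(const char* /*sourceText*/, const char* outPath) {
        MaterialBlobHeader header{kMaterialMagic, /*techniqueId*/ 1, /*uniformBytes*/ 0};
        std::FILE* f = std::fopen(outPath, "wb");
        if (!f) return false;
        bool ok = std::fwrite(&header, sizeof(header), 1, f) == 1;
        std::fclose(f);
        return ok;
    }

    // Runtime loader (inside the engine executable): trusts the precompiled data.
    bool loadMaterial(const char* path, MaterialBlobHeader& out) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return false;
        bool ok = std::fread(&out, sizeof(out), 1, f) == 1 && out.magic == kMaterialMagic;
        std::fclose(f);
        return ok;
    }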
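A minimal sketch of what post 5 attempts: adopting a foreign window handle with SDL2 and creating a GL context on it. The first thing to inspect when SDL_GL_CreateContext() returns NULL is SDL_GetError(); note also that, as far as I know, SDL 2.0.18 and later expose a "SDL_VIDEO_FOREIGN_WINDOW_OPENGL" hint so that windows adopted via SDL_CreateWindowFrom() get a GL-capable pixel format or X visual - treat that hint as an assumption to verify against the SDL version in use:

    #include <SDL.h>
    #include <cstdio>

    // nativeHandle is the HWND (Windows) or X11 Window id (Linux) obtained from
    // the editor process / UI toolkit.
    SDL_GLContext attachEngineToEditorWindow(void* nativeHandle) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            std::printf("SDL_Init failed: %s\n", SDL_GetError());
            return nullptr;
        }

        // Assumption: newer SDL2 releases honor this hint so that foreign windows
        // are set up with an OpenGL-capable pixel format / X visual.
        SDL_SetHint("SDL_VIDEO_FOREIGN_WINDOW_OPENGL", "1");

        SDL_Window* window = SDL_CreateWindowFrom(nativeHandle);
        if (!window) {
            std::printf("SDL_CreateWindowFrom failed: %s\n", SDL_GetError());
            return nullptr;
        }

        SDL_GLContext context = SDL_GL_CreateContext(window);
        if (!context) {
            std::printf("SDL_GL_CreateContext failed: %s\n", SDL_GetError());
            return nullptr;
        }
        return context;
    }

Cross-process sharing of a single GL context is not something OpenGL offers, but rendering from the engine process directly into a window owned by the editor process (as sketched above) is generally workable on both Windows and X11.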