

AvengerDr

Member Since 06 Dec 2003
Offline Last Active Nov 24 2014 07:15 AM

#5192237 ECS

Posted by AvengerDr on 11 November 2014 - 03:49 AM

If you want to have a look at another implementation, my engine includes an ECS inspired by the Artemis framework. Look in the Source/Odyssey.Talos subfolder. It's in C#.

 

Edit: Talos has now been renamed to "Epos" :)




#5178818 How to implement globe style mouse only camera control?

Posted by AvengerDr on 08 September 2014 - 03:51 AM

You have to search for "Arcball" rotation. Here is a very basic implementation. That will change the object's orientation. If you want to orbit around it, you can try this one.
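In case the links above no longer resolve, here is a rough sketch of the arcball idea (SharpDX-style Vector3/Quaternion types assumed; pressX/currentX, viewportWidth/viewportHeight and orientation are placeholders for your own input and camera state):

// Map a mouse position (in pixels) onto a virtual unit sphere centred on the viewport.
static Vector3 MapToSphere(float mouseX, float mouseY, float width, float height)
{
    float x = (2.0f * mouseX - width) / width;    // normalise to [-1, 1]
    float y = (height - 2.0f * mouseY) / height;  // flip so +Y points up
    float lengthSq = x * x + y * y;
    if (lengthSq <= 1.0f)
        return new Vector3(x, y, (float)Math.Sqrt(1.0f - lengthSq));  // point on the sphere
    float norm = 1.0f / (float)Math.Sqrt(lengthSq);                   // outside: clamp to the edge
    return new Vector3(x * norm, y * norm, 0.0f);
}

// On mouse move: rotate by the arc between the press point and the current point.
Vector3 from = MapToSphere(pressX, pressY, viewportWidth, viewportHeight);
Vector3 to = MapToSphere(currentX, currentY, viewportWidth, viewportHeight);
Vector3 axis = Vector3.Cross(from, to);
if (axis.LengthSquared() > 1e-6f)
{
    float angle = (float)Math.Acos(Math.Min(1.0f, Vector3.Dot(from, to)));
    // Combine with the current orientation (check your math library's multiplication order).
    orientation = Quaternion.RotationAxis(axis, angle) * orientation;
}

To orbit the camera around the object instead, apply the same rotation to the camera position relative to the target point rather than to the object's orientation.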




#5177183 How to clearly highlight GUI object regardless of its color

Posted by AvengerDr on 31 August 2014 - 05:52 AM

Do you really need an automatic approach? I would think that having different "themes" for each UI would work best. In my UI library I have implemented support for parsing UI themes from a XAML-like file. This way you can specify the entire UI appearance without needing to recompile anything. You can have a look at this post in my journal.




#5173323 Text and ECS (Entity Component System) architecture questions

Posted by AvengerDr on 13 August 2014 - 06:51 AM

I faced a similar problem so here's my perspective, though you might not like it.

Not everything has to be framed in the context of an ECS. It's the same problem I see with developers using the MVVM pattern. I myself have at times become so obsessive about it that everything had to follow the pattern, to the point where things that would have taken 10 seconds to implement in the code-behind required complex XAML-only solutions.

 

I am building this UI library, which I use together with my ECS framework. The solution I adopted was to treat the whole HUD as a single entity. The UI system's only responsibilities are therefore polling input devices and rendering the whole HUD (though I might split those eventually). I assume you wanted an approach in which every UI element is a separate entity? Or something along the lines of having different UI components, such as a "ButtonComponent", "PanelComponent", "LabelComponent" and so on?

 

The only advantage I can see is that you would then be able to attach UI elements to 3D objects. The downside is that you'll have to find some way to have the different UI components communicate with each other.
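To illustrate the HUD-as-a-single-entity approach, here is a rough sketch (type and member names are illustrative, not the actual Odyssey API):

// Illustrative stand-ins for the ECS and UI types involved.
public interface IComponent { }

public class Overlay
{
    public void Update() { /* poll input devices and route events to the controls */ }
    public void Render() { /* draw the whole panel/button/label tree in one pass */ }
}

// The entire HUD is a single component attached to a single entity...
public class HudComponent : IComponent
{
    public Overlay Overlay { get; } = new Overlay();
}

// ...and the UI system's only responsibilities are updating and rendering that overlay.
public class UserInterfaceSystem
{
    public void Process(HudComponent hud)
    {
        hud.Overlay.Update();
        hud.Overlay.Render();
    }
}

The individual controls communicate through the normal retained-mode UI tree instead of through ECS components.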




#5168969 Get Array of Points that Make a Circle And Displace Them

Posted by AvengerDr on 24 July 2014 - 02:34 PM

Are you talking about the parametric equation of a circle? You have to decide how many points your circle will have and then you can calculate theta accordingly, e.g.: 

Vector2[] circle = new Vector2[numVertices];
float delta = (2 * (float)Math.PI) / numVertices;

for (int i = 0; i < numVertices; i++)
{
    float t = delta * i;                      // angle of the i-th point
    float x = x0 + (float)Math.Cos(t) * r;
    float y = y0 + (float)Math.Sin(t) * r;
    circle[i] = new Vector2(x, y);
}

Here x0, y0 are the coordinates of your center and r is the radius of the circle (all floats). As the loop runs, t increases until it covers the whole circle. The more points you calculate, the more closely the result approximates a circle.




#5167546 Understanding concept of creating resources in SharpDX

Posted by AvengerDr on 18 July 2014 - 02:48 AM

RenderTargetView and DepthStencilView are classes that provide access to an underlying render target or depth/stencil resource. In order to create them you need to pass the correct resource: for example, an RTV is usually created for the back buffer texture, while the DSV is created for a depth/stencil buffer texture.
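As a rough sketch of what that looks like in SharpDX (device and swapChain are assumed to exist already; the size and format are just examples):

using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Render target view created from the swap chain's back buffer.
var backBuffer = swapChain.GetBackBuffer<Texture2D>(0);
var renderTargetView = new RenderTargetView(device, backBuffer);

// Depth stencil view created from a texture bound as a depth/stencil buffer.
var depthBuffer = new Texture2D(device, new Texture2DDescription
{
    Format = Format.D24_UNorm_S8_UInt,
    Width = 1280,
    Height = 720,
    ArraySize = 1,
    MipLevels = 1,
    BindFlags = BindFlags.DepthStencil,
    Usage = ResourceUsage.Default,
    SampleDescription = new SampleDescription(1, 0)
});
var depthStencilView = new DepthStencilView(device, depthBuffer);

// Both views are then bound together before drawing.
device.ImmediateContext.OutputMerger.SetTargets(depthStencilView, renderTargetView);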




#5167355 Understanding concept of creating resources in SharpDX

Posted by AvengerDr on 17 July 2014 - 04:39 AM

What are you trying to do?

The GraphicsDevice class in the SharpDX Toolkit can be passed implicitly as a parameter to any method accepting a Direct3D11.Device. They have implemented an "implicit conversion operator" that automatically returns the underlying native device.
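The mechanism looks roughly like this (a simplified illustration of how a C# implicit operator works, not the actual toolkit source; the field name is made up):

public class GraphicsDevice
{
    private readonly SharpDX.Direct3D11.Device nativeDevice;

    public GraphicsDevice(SharpDX.Direct3D11.Device device)
    {
        nativeDevice = device;
    }

    // Lets a GraphicsDevice be passed wherever a Direct3D11.Device is expected.
    public static implicit operator SharpDX.Direct3D11.Device(GraphicsDevice graphicsDevice)
    {
        return graphicsDevice.nativeDevice;
    }
}

// Usage: a method declared as void CreateResources(SharpDX.Direct3D11.Device device)
// can be called as CreateResources(graphicsDevice); the compiler inserts the conversion.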




#5166369 Entity manager size

Posted by AvengerDr on 12 July 2014 - 03:36 AM

I guess it depends on what kind of game you are building. For instance, in an RPG, are your spells considered their own entities? If so, whenever you cast a spell (e.g. a fireball) you will have to create a new entity. The same applies to summoned creatures. I thought about the same issue: I'm building a turn-based tactical game, and if a spaceship fires a missile that takes more than one turn to reach the target, I'd like the player to be able to select it, so it has to be an entity in its own right.




#5165613 How to use external variables in HLSL?

Posted by AvengerDr on 08 July 2014 - 12:54 PM

Indeed, that's very true. If some day I find the time to build a graphical editor, that should address it. However, the main reason I did it was the above-mentioned automatic initialization; now I don't have to worry about that anymore. And I really don't like all the #include stuff. It's easy to manage for a small demo, but a real project will have dozens of shaders and it can get messy really fast.

 

I haven't written a tutorial on how to use it (yet). Being a lone developer, most of the code is unfortunately uncommented. I have now uploaded the source code to the repository: look up the Odyssey.Daedalus project in OdysseyTools.sln. There are more examples in the Shaders/Techniques folder (though some might not work, as I have added a lot more functionality and I need to check what happened to the older ones). If you are interested in the initialization code, look at the variable factory classes. Some of the static properties defined there instantiate objects that are tagged with the relevant engine value to use. Then in the Odyssey.Talos project (yes, I like fancy names; it's my ECS framework) you can see how it works inside the Initializers folder.




#5165529 How to use external variables in HLSL?

Posted by AvengerDr on 08 July 2014 - 06:15 AM

If you have time to spare you could also go the totally overkill route, as I did. I wrote a shader generation tool that allows me to define shaders as a tree of "nodes". For example, here is a Gaussian blur shader.

DeclarationNode nColor = DeclarationNode.InitNode("color", Shaders.Type.Float4, 0, 0, 0, 0);
ArrayNode fOffsetWeight = new ArrayNode() {Input = fOffsetsAndWeights, Index = "i"};

AdditionNode nBlur = new AdditionNode
{
    PreCondition = new ForBlock()
    {
        PreCondition = nColor,
        StartCondition = new ScalarNode(){Value = 0},
        EndCondition = new ScalarNode(){Value = 15}
    },
    OpensBlock = true,
    Input1 = nColor,
    Input2 = new MultiplyNode()
    {
        Input1 = new TextureSampleNode()
        {
            Texture = tDiffuse,
            Sampler = sLinear,
            Coordinates = new AdditionNode()
            {
                Input1 = new ReferenceNode() {Value = InputStruct[Param.SemanticVariables.Texture]},
                Input2 = new SwizzleNode(){Input = fOffsetWeight, Swizzle = new []{Swizzle.X, Swizzle.Y}},
            }
        },
        Input2 = new SwizzleNode() { Input = fOffsetWeight, Swizzle = new[] { Swizzle.Z } },
    },
    ClosesBlock = true,
    IsVerbose = true,
    Declare = false,
    AssignToInput1 = true,
    Output = nColor.Output
};

OutputStruct = Struct.PixelShaderOutput;
Result = new PSOutputNode
{
    FinalColor = nBlur,
    Output = OutputStruct
};

And the resulting shader code (cbuffer declarations omitted)

PSOut GaussianBlurPS(VSOut input) : SV_Target
{
    float4 color = float4(0, 0, 0, 0);
    for (int i = 0; i < 15; i++)
    {
        color += tDiffuseMap.Sample(sLinear, input.Texture + fOffsetsAndWeights[i].xy) * fOffsetsAndWeights[i].z;
    }
    PSOut output;
    output.Color = color;
    return output;
}

The issue is that shaders need to be declared as combinations of nodes. There's no graphical editor as of yet (someday!). On the other hand, this allows me to tailor it to the needs of my engine, since I can annotate every variable with the corresponding engine reference. For example, if a cbuffer requires a World matrix, the corresponding variable is tagged with that reference, so when the shader initializer system encounters it, it automatically supplies the correct data without the shader having to be initialized on an ad hoc basis.

Further, I can have "function" nodes, such as one representing your "DoBlur" method (either as a combination of other nodes or as a plain-text method). When the shader is generated it contains only the code strictly necessary (and nothing else) for that shader to work, without messy includes and whatnot. But wait, there's more! The generated shaders are part of a "ShaderCollection" structure that holds various technique mappings. For example, technique A might use SM4 while technique B uses SM5, so a ShaderCollection object can hold different versions of the same shader type, making the choice of the correct version simpler. And it also comes with shader preview functionality (still experimental).

 

I just noticed that I forgot to add it to my GitHub repository, but it will be there when I get back home.




#5104358 Yet another shader generation approach

Posted by AvengerDr on 25 October 2013 - 08:27 AM

In the past few months I found myself juggling different projects aimed at several different platforms (Windows 7, 8, RT and Phone). Some of those have different capabilities, so some shaders needed to be modified in order to work correctly. I know that premature optimization is a bad idea, but in this specific situation I thought that addressing the problem sooner rather than later would be the right choice.
 
To address this problem I created a little tool that allows me to dynamically generate a set of shaders through a graph-like structure, which is nothing new, as it is usually the basis for this kind of application. I probably reinvented a lot of wheels, but since I couldn't use MS's shader designer (it only works with C++, I think) nor Unity's equivalent (as I have my own puny little engine), I decided to roll my own. I am writing here to get some feedback on my architecture and to find out whether there is something I overlooked.
 
Basically I have defined classes for most of the HLSL language. Then there are nodes such as constants, math operations and special function nodes. The latter are the most important ones, as they correspond to high-level functions such as Phong lighting, shadow algorithms and so on. Each of these function nodes exposes several switches that let me enable or disable specific features. For example, if I set a Phong node's "Shadows" switch to true, it will generate a different signature for the function than if it were set to false. Once the structure is complete, the graph is traversed and the actual shader code is generated line by line. From my understanding, dynamic shader linking works similarly, but I've not been able to find a lot of information on the subject.
 
Right now shaders can only be defined in code; in the future I could build a graphical editor. A classic Phong lighting pixel shader looks like this and this is the generated output. It is also possible to configure the amount of "verbosity". The interesting thing is that once the shader is compiled, it gets serialized to a custom format that contains additional information. Variables and nodes that are part of the shader are decorated with engine references. If I add a reference to the camera position, for example, that variable tells the engine that it has to look for that value when initialising the shader. The same goes for the values needed to assemble constant buffers (like world/view/projection matrices).

Once the shader is serialised, all this metadata helps the engine automatically assign the right values to each shader variable or cbuffer. Previously, each shader class in my engine had huge chunks of code that fetched the needed values from somewhere else in the engine. Now all of that has been deleted and is taken care of automatically, as long as the shaders are loaded in this format.
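To give an idea of what I mean by "decorating" variables, the mechanism boils down to something like this (a simplified sketch; the type and member names are illustrative, not the actual Odyssey classes):

using System;
using System.Collections.Generic;

public enum EngineReference { CameraPosition, WorldMatrix, ViewMatrix, ProjectionMatrix }

// Each serialized shader variable carries a tag saying which engine value it maps to.
public class ShaderVariable
{
    public string Name { get; set; }
    public EngineReference? Reference { get; set; }   // null = no automatic binding
}

public class ShaderInitializer
{
    // The engine registers one value provider per known reference (camera, matrices, ...).
    private readonly Dictionary<EngineReference, Func<object>> providers;

    public ShaderInitializer(Dictionary<EngineReference, Func<object>> providers)
    {
        this.providers = providers;
    }

    // At load time, walk the serialized variables and bind the known references automatically,
    // so there is no per-shader glue code fetching values from the rest of the engine.
    public void Initialize(IEnumerable<ShaderVariable> variables, Action<string, object> setValue)
    {
        foreach (var variable in variables)
        {
            if (variable.Reference.HasValue && providers.TryGetValue(variable.Reference.Value, out var provider))
                setValue(variable.Name, provider());
        }
    }
}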
 
[Attached image: ShaderGenV01.png]

Another neat feature is that within the tool I can define different techniques, e.g. a regular Phong shader, one using a diffuse map, one using a shadow map. Each technique maps to a different combination of vertex and pixel shaders. The decoration that I mentioned earlier helps the tool generate a "TechniqueKey" for each shader, which the engine then uses to fetch the right shader from the file on disk. For example, the PhongDiffusePS shader is decorated with attributes stating its use of a diffuse map (among other things). When I enable the DiffuseMap feature in the actual application, the shader system checks whether that feature is supported by the current set of shaders assigned to the material. If a suitable technique is found, the system enables the relevant parameters. In this way it is also possible to check for different feature levels and so on.

Probably something like this is overkill for a lot of small projects, and I reckon it is not as easy to fix something in this tool's generated code as it is to make changes in the actual source code itself. But now that it finally works, the automatic configuration of shader variables is something I like (at least compared to my previous implementation; I don't know how everyone else handles that). What I am asking is how extensible this approach is (maybe it is too late to ask this kind of question?). Right now I have a handful of shaders defined in the system. If you had a look at the code, what kind of problems am I likely to run into when adding nodes to deal with geometry shaders and other advanced features?

Finally, if anyone is interested in having a look at the tool, I'm happy to share it on GitHub.


#5077315 Should game objects render themselves, or should an object manager render them?

Posted by AvengerDr on 13 July 2013 - 04:56 AM

my "engine" uses different object representations: object are added to the world using a scene graph but that is not used for rendering as it would not be the most efficient way. Rather, after the scene is complete, a "SceneManager" examines the graph and computes the most efficient way to render it. As it has been said, objects are grouped according to materials, geometry used, rendering order and other properties. This scene manager returns a list of "commands" that the rendering loop executs. Commands can be of various types, i.e.: generate a shadow map, activate blending, render objects and so on. 

 

Another thing I've been doing is separating the object class from the geometry class. In my engine, the object represents the high-level properties of a mesh, such as its local position, rotation, etc. (local because the absolute values are derived from the scene graph), whereas the geometry class contains the actual vertex/index buffers. There is only one geometry instance for each unique 3D object in the world.

 

This helps further improve rendering efficiency. After the objects have been grouped by material, I further group them according to the geometry used. Then, for each material/geometry pair, I issue a "render command" that renders all the objects sharing that material and geometry. This way there is only one set-VB/IB command per group. It also helps with hardware instancing: if a material supports it, I just use the list of object instances to compute an instance buffer.
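A rough sketch of that grouping step (the types and names are illustrative stand-ins, not the actual SceneManager code):

using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins for the engine types involved.
public class Material { }
public class Geometry { }
public class SceneObject
{
    public Material Material;
    public Geometry Geometry;
    // ...plus per-object data (world matrix, etc.) used later to fill the instance buffer
}

// One render command = one material, one geometry, all the objects that share them.
public class RenderCommand
{
    public Material Material;
    public Geometry Geometry;
    public List<SceneObject> Instances;
}

public static class SceneCompiler
{
    // One command per material/geometry pair: a single VB/IB bind and a single (instanced) draw each.
    public static List<RenderCommand> BuildCommands(IEnumerable<SceneObject> visibleObjects)
    {
        return visibleObjects
            .GroupBy(o => new { o.Material, o.Geometry })
            .Select(g => new RenderCommand
            {
                Material = g.Key.Material,
                Geometry = g.Key.Geometry,
                Instances = g.ToList()
            })
            .ToList();
    }
}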




#5075673 DX11 - Instancing - The choice of method

Posted by AvengerDr on 06 July 2013 - 03:57 AM

In general you take a single geometry object (i.e. a vertex buffer and possibly an index buffer) and replicate it as needed. Where and how many times to replicate it is determined by the instance buffer: here you put, for example, the world matrix of each bullet in the world. In your shader you multiply the model's vertices by the particular instance's world matrix to determine where in the world that instance is. It's as if you were using the geometry you want to instance as a stamp; the instance buffer then contains the locations where you need to stamp :) (and other properties, like the colour, for example).
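A minimal sketch of what that looks like with SharpDX, assuming device, context, the shared vertexBuffer/indexBuffer (with vertexStride and indexCount), a bullets collection exposing a World matrix, and an input layout that reads the per-instance matrix from slot 1:

using SharpDX;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using System.Linq;

// Per-instance data: one world matrix per bullet.
Matrix[] worldMatrices = bullets.Select(b => b.World).ToArray();
var instanceBuffer = Buffer.Create(device, BindFlags.VertexBuffer, worldMatrices);

// Slot 0: the shared geometry; slot 1: the per-instance matrices.
context.InputAssembler.SetVertexBuffers(0,
    new VertexBufferBinding(vertexBuffer, vertexStride, 0),
    new VertexBufferBinding(instanceBuffer, Utilities.SizeOf<Matrix>(), 0));
context.InputAssembler.SetIndexBuffer(indexBuffer, Format.R32_UInt, 0);

// A single draw call renders every bullet.
context.DrawIndexedInstanced(indexCount, worldMatrices.Length, 0, 0, 0);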

 

If the number of instances changes, one approach is to recompute the whole buffer and re-bind it. If your geometry is mostly static, another approach is to create a "large enough" buffer: you can bind it even if it is not full, and when the need arises you simply add more entries to it without having to recreate the existing ones. This also sets a limit for your rendering engine, as it's like saying that you can render up to X instances.

In the specific case of spaceships, though, I think you'd need to update them each frame. BUT! If you go for the hard-science approach, you don't need to worry about any of this at all: beam weapons are supposed to travel at the speed of light, so... problem solved!




#5026527 Real time + pause Vs Turn Based for a Space tactical game on smartphones

Posted by AvengerDr on 28 January 2013 - 03:31 PM

Well, casual gamers are obviously out of the question. Ideally anyone who's not going to be put off by the game's lack of awesome 3D graphics.

 

If I had the resources, I would totally go for something like Homeworld. As things stand, having a professional artist create the graphics is out of my reach at the moment. That's why I'd like to stick to a minimalistic approach, similar to Introversion's Defcon. First, it's relatively easy to draw iconic symbols for ships and the like; second, it supports the idea of the player being in some sort of "situation room" rather than in a fighter's cockpit.




#4861723 [SlimDX] Compatibility with NVIDIA 3D Vision?

Posted by AvengerDr on 14 September 2011 - 02:46 PM

I've been experimenting with 3D Vision myself. If you're going to use 3D Vision Automatic, you simply need to hit CTRL+T and put on your glasses. There's a way of enabling it programmatically if you hook up some methods from NVAPI, but I've not yet attempted this, as currently there's no managed port and it only supports DX10.


Each DirectX app can theoretically be supported by 3D Vision Automatic. 3D Vision Automatic is not "true" stereoscopy: the driver itself takes care of duplicating the render calls from two different points of view. There's a hack that lets you control the stereoization process yourself: the "NV_STEREO_IMAGE_SIGNATURE" method, which consists in combining the eye-specific images into a texture twice the width, writing a special signature value in its last row, and then presenting the result. This signature is picked up by the driver, which starts synchronizing the texture with the glasses. I've tested it with DX9 and DX10 and it definitely works.
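For reference, the header written into that extra last row looks roughly like this (based on NVIDIA's nvstereo.h sample header; I'm reproducing the constants from memory, so verify them against the original before relying on them):

using System.Runtime.InteropServices;

// Layout of the signature row described in NVIDIA's stereo whitepaper / nvstereo.h sample.
[StructLayout(LayoutKind.Sequential)]
struct NvStereoImageHeader
{
    public uint Signature;   // NVSTEREO_IMAGE_SIGNATURE = 0x4433564e ("NV3D")
    public uint Width;       // width of the combined, double-wide image in pixels
    public uint Height;      // height of the image in pixels
    public uint Bpp;         // bits per pixel, e.g. 32
    public uint Flags;       // e.g. SIH_SWAP_EYES = 0x1, SIH_SCALE_TO_FIT = 0x2
}

// Outline of the trick:
// 1. Render the left-eye and right-eye views side by side into a texture of size
//    (2 * backBufferWidth) x (backBufferHeight + 1); the extra row holds the header.
// 2. Write the header into the start of that last row.
// 3. Copy/stretch the texture onto the back buffer and Present(); the driver detects
//    the signature and starts feeding the two halves to the glasses.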



