

AvengerDr

Member Since 06 Dec 2003
Offline Last Active Nov 24 2014 07:15 AM

Topics I've Started

B-Spline sampling

23 August 2014 - 06:15 AM

Hi all,

I've built an algorithm that displays a B-spline, but I'm not sure it is working correctly. For example, take this picture:

 

[Image: the sampled B-spline, with red control points and blue curve points]

 

The red points are the control points and the blue points are the points sampled along the curve. Each blue point is calculated at a fixed timestep. The knot vector of this (clamped) curve is [0 0 0 0 0.5 1 1 1 1], and the curve is evaluated with de Boor's algorithm over the interval [alpha, 1 - alpha]. In this clamped case the first and last control points are added back at the end (there might be some issues with the control points being drawn at their top-left corner instead of their center, but that shouldn't matter).

 

As you can see, the points towards the first and last control points appear more "sparse". My explanation is that the other control points concentrate the curve in its upper part, while the number of points to calculate stays constant (100, with alpha incrementing by 0.01).

 

Are there better ways to sample the curve so that consecutive points are evenly spaced along it?
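
One approach I'm considering is arc-length reparameterization: sample the curve densely once, build a cumulative chord-length table, then invert that table to pick parameter values that are evenly spaced along the curve. A rough C# sketch of what I mean (evaluate stands in for the de Boor evaluation; Vector2 is SharpDX's, though System.Numerics would work the same):

Vector2[] ResampleByArcLength(Func<float, Vector2> evaluate, int outputCount)
{
    const int TableSize = 512;                  // dense pre-sampling resolution
    var parameters = new float[TableSize + 1];
    var lengths = new float[TableSize + 1];     // cumulative chord length

    Vector2 previous = evaluate(0f);
    for (int i = 1; i <= TableSize; i++)
    {
        float t = i / (float)TableSize;
        Vector2 current = evaluate(t);
        parameters[i] = t;
        lengths[i] = lengths[i - 1] + Vector2.Distance(previous, current);
        previous = current;
    }

    // Walk the table and pick parameters at equal arc-length intervals,
    // interpolating between the two bracketing table entries.
    var result = new Vector2[outputCount];
    float step = lengths[TableSize] / (outputCount - 1);
    int cursor = 0;
    for (int i = 0; i < outputCount; i++)
    {
        float target = i * step;
        while (cursor < TableSize - 1 && lengths[cursor + 1] < target) cursor++;
        float segment = lengths[cursor + 1] - lengths[cursor];
        float blend = segment > 0f ? (target - lengths[cursor]) / segment : 0f;
        result[i] = evaluate(parameters[cursor] + blend * (parameters[cursor + 1] - parameters[cursor]));
    }
    return result;
}

The chord-length table only approximates the true arc length, but at 512 samples the error should be well below a pixel for a curve like the one above.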


Changing colors of an ID2D1Linear/RadialGradientBrush at runtime

14 August 2014 - 03:57 AM

Hi all,

If you've had a look at my dev journal, you'll know I'm trying to build an animatable UI framework. I'm going to use C# terminology because I'm using D2D through SharpDX, but it should apply to C++ just the same. Every property that can be written to, such as the Opacity of a gradient or its RadiusX/Y (in the case of a radial gradient), can be animated just fine.

 

I'd like to implement the possibility of animating the colors of a gradient brush. Once you create one, it seems you can only get back a copy of the GradientStops you used, through the GetGradientStops method. Since a GradientStop is a structure, changing its colors would have no effect on the actual brush being used.

 

This seems like a big limitation, so before going any further I'd like to know whether anyone has attempted something similar. Is there really no other way? The Color property of a SolidColorBrush can be written to, so it can be animated over time. How can I do the same with a gradient brush? Is the only option to create intermediate gradients, so that instead of animating the colors I swap the whole brush? A decent animation could require dozens of slightly different brushes, which seems terribly inefficient.

 

I have two alternative approaches in mind:

  1. Layer the source and target gradients one on top of the other, then animate their opacities so that one goes from 0 to 1 while the other goes from 1 to 0 over the same interval of time. Though I'm not sure the effect would be the same.
  2. Have a greyscale gradient and animate a blended SolidColorBrush on top of it.

But these seem more like hacks, so I was wondering if there was a better way to do this.
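
For reference, here is roughly what the "swap the whole brush" route would look like if the intermediate gradients are built on the fly rather than pre-created: interpolate the stop colors each frame and recreate the collection and brush. This is only a sketch against SharpDX's Direct2D types as I remember them, so the exact signatures need checking, and both gradients are assumed to have the same number of stops:

// Dispose the previously returned brush (and its collection) before
// calling this again; GradientStopCollection is a device resource.
LinearGradientBrush CreateBlendedBrush(RenderTarget target,
    GradientStop[] from, GradientStop[] to, float t)
{
    var blended = new GradientStop[from.Length];
    for (int i = 0; i < from.Length; i++)
    {
        blended[i].Position = from[i].Position;
        blended[i].Color = Color4.Lerp(from[i].Color, to[i].Color, t);
    }

    var stops = new GradientStopCollection(target, blended);
    return new LinearGradientBrush(target,
        new LinearGradientBrushProperties
        {
            StartPoint = new Vector2(0, 0),
            EndPoint = new Vector2(0, 100)   // placeholder geometry
        },
        stops);
}

Creating two Direct2D objects per frame still feels unpleasant, which is why the two opacity-based alternatives above might end up being cheaper.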


Making a mesh appear face by face

14 July 2014 - 03:12 AM

Hi all, I'd like to build the following animation effect: it should animate the appearance of a mesh over time by having its faces appear either randomly (i.e. from zero faces to all of them) or following some sort of pattern (e.g. given a starting face, have all the others appear within an expanding radius). One simple way to do it would be to animate the VertexBuffer, but that doesn't seem the brightest idea.

 

Another idea I had is to use a vertex struct carrying alpha information and then, in the vertex shader, fetch the corresponding alpha value for the given vertex from a 1D texture (or a float array; which would be faster?). Perhaps it would be necessary to have a vertexId value in the vertex struct itself, so that in the shader I could do (HLSL):

output.color.a = alphaValues[input.vertexId];

In theory, if I set the alpha of all three of a face's vertices to zero, that face should disappear, correct? Assume that I am not using indices.
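
The CPU side I have in mind would look roughly like this, sketched with SharpDX's D3D11 API (revealOrder is a hypothetical pre-shuffled list of face indices, alphaBuffer a dynamic buffer with one float per vertex, and the Map/Unmap signatures are from memory):

// For a non-indexed mesh, vertex i of face f sits at index f * 3 + i.
void UpdateFaceAlphas(DeviceContext context, SharpDX.Direct3D11.Buffer alphaBuffer,
    float[] alphas, int[] revealOrder, float progress /* 0..1 */)
{
    int visibleFaces = (int)(progress * revealOrder.Length);
    Array.Clear(alphas, 0, alphas.Length);
    for (int i = 0; i < visibleFaces; i++)
    {
        int face = revealOrder[i];
        alphas[face * 3 + 0] = 1f;   // all three vertices of the face
        alphas[face * 3 + 1] = 1f;
        alphas[face * 3 + 2] = 1f;
    }

    // Upload to the GPU; the vertex shader then reads alphaValues[input.vertexId].
    DataStream stream;
    context.MapSubresource(alphaBuffer, MapMode.WriteDiscard, MapFlags.None, out stream);
    stream.WriteRange(alphas);
    context.UnmapSubresource(alphaBuffer, 0);
}

Also, since the draw is non-indexed, SV_VertexID in the vertex shader should already give the right index, so the vertexId field in the vertex struct may not be needed at all.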

Are there any other possibly better ways to do it?


3D Vector Art graphical effect

29 May 2014 - 01:50 PM

Hi all,

What would be the best way to achieve a "modern" vector art-like rendering effect? For example, see:

[Image: example screenshot of the vector art style]

 

In my mind it would need to be a bit more complex than a simple wireframe shader with a glow post-processing effect: I wouldn't want it to render back faces, or every edge of every face. What kind of techniques should I look into?
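
To clarify the "not every edge" part: one direction I thought of is extracting only boundary and crease edges on the CPU and rendering those as a line list, instead of wireframing every triangle. A rough C# sketch (mesh layout and names are just for illustration):

// Map each undirected edge to the faces that share it.
Dictionary<(int, int), List<int>> BuildEdgeFaces(int[] indices)
{
    var edgeFaces = new Dictionary<(int, int), List<int>>();
    for (int face = 0; face < indices.Length / 3; face++)
    {
        for (int e = 0; e < 3; e++)
        {
            int a = indices[face * 3 + e];
            int b = indices[face * 3 + (e + 1) % 3];
            var key = a < b ? (a, b) : (b, a);
            if (!edgeFaces.TryGetValue(key, out var faces))
                edgeFaces[key] = faces = new List<int>();
            faces.Add(face);
        }
    }
    return edgeFaces;
}

// Keep an edge if it lies on the mesh boundary or if the normals of its two
// faces disagree by more than the crease angle (creaseCos = cos of that angle).
bool IsFeatureEdge(List<int> faces, Vector3[] faceNormals, float creaseCos)
{
    if (faces.Count == 1) return true;   // boundary edge
    return Vector3.Dot(faceNormals[faces[0]], faceNormals[faces[1]]) < creaseCos;
}

That still leaves silhouette edges, which depend on the view direction, so they would have to be recomputed per frame or handled in a shader. Hence the question.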

 


Yet another shader generation approach

25 October 2013 - 08:27 AM

In the past few months I have found myself juggling different projects aimed at several different platforms (Windows 7, 8, RT and Phone). Some of those have different capabilities, so some shaders needed to be modified in order to work correctly. I know this risks being premature optimization, but in this specific situation I thought that addressing the problem sooner rather than later was the right choice.
 
To address this problem I created a little tool that allows me to dynamically generate a set of shaders through a graph-like structure. This is nothing new, as it is usually the basis for this kind of application. I probably reinvented a lot of wheels, but since I couldn't use MS's shader designer (it only works with C++, I think) nor Unity's equivalent (I have my own puny little engine), I decided to roll my own. I am writing here to get some feedback on my architecture and to find out whether there is something I've overlooked.
 
Basically, I have defined classes for most of the HLSL language. On top of those there are nodes such as constants, math operations and special function nodes. The latter are the most important, as they correspond to high-level functions such as Phong lighting, shadow algorithms and so on. Each function node exposes several switches that let me enable or disable specific features. For example, if I set a Phong node's "Shadows" switch to true, it will generate a different signature for the function than if it were set to false. Once the structure is complete, the graph is traversed and the actual shader code is generated line by line. From my understanding, dynamic shader linking works similarly, but I've not been able to find much information on the subject.
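In heavily simplified C#, the idea looks something like this (the names are made up for illustration and do not match the actual classes):

using System.Collections.Generic;
using System.Text;

abstract class ShaderNode
{
    public readonly List<ShaderNode> Inputs = new List<ShaderNode>();

    // Depth-first traversal: emit the dependencies first, then this node.
    public void Generate(StringBuilder output)
    {
        foreach (var input in Inputs)
            input.Generate(output);
        output.AppendLine(EmitHlsl());
    }

    protected abstract string EmitHlsl();
}

class PhongNode : ShaderNode
{
    public bool Shadows;   // feature switch: changes the generated signature

    protected override string EmitHlsl()
    {
        return Shadows
            ? "float4 ComputePhong(float3 normal, float3 light, float shadow) { /* ... */ }"
            : "float4 ComputePhong(float3 normal, float3 light) { /* ... */ }";
    }
}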
 
Right now shaders can only be defined in code; in the future I could build a graphical editor. A classic Phong lighting pixel shader looks like this, and this is the generated output. It is also possible to configure the amount of "verbosity". The interesting thing is that once the shader is compiled, it gets serialized to a custom format that contains additional information. Variables and nodes that are part of the shader are decorated with engine references. If I add a reference to the camera position, for example, that variable tells the engine that it has to look up that value when initialising the shader. The same goes for the values needed to assemble constant buffers (like the world/view/projection matrices).

Once the shader is serialised, all this metadata helps the engine automatically assign the right values to each shader variable or cbuffer. Previously, each shader class in my engine contained huge chunks of code that fetched the needed values from elsewhere in the engine. All of that has now been deleted and is taken care of automatically, as long as the shaders are loaded in this format.
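
A simplified illustration of what I mean by decoration (again with made-up names; the engine reflects over these attributes at load time and wires each variable to the corresponding engine value):

using System;

enum EngineValue { CameraPosition, WorldViewProjection }

[AttributeUsage(AttributeTargets.Field)]
class EngineReferenceAttribute : Attribute
{
    public readonly EngineValue Source;
    public EngineReferenceAttribute(EngineValue source) { Source = source; }
}

// Matrix and Vector3 here are SharpDX math types.
class PhongShaderConstants
{
    [EngineReference(EngineValue.WorldViewProjection)]
    public Matrix WorldViewProjection;   // filled in by the engine at load time

    [EngineReference(EngineValue.CameraPosition)]
    public Vector3 CameraPosition;
}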
 
[Attached image: ShaderGenV01.png]

Another neat feature is that within the tool I can define different techniques, e.g. a regular Phong shader, one using a diffuse map, one using a shadow map. Each technique maps to a different combination of vertex and pixel shaders. The decoration I mentioned earlier helps the tool generate a "TechniqueKey" for each shader, which the engine then uses to fetch the right shader from the file on disk. For example, the PhongDiffusePS shader is decorated with attributes declaring its use of a diffuse map (among other things). When I enable the DiffuseMap feature in the actual application, the shader system checks whether that feature is supported by the current set of shaders assigned to the material. If a suitable technique is found, the system enables the relevant parameters. In this way it is also possible to check for different feature levels and so on.
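
The key itself can be as simple as a flags enum (simplified; the real key also encodes feature levels and the like):

[Flags]
enum TechniqueKey
{
    None       = 0,
    Phong      = 1 << 0,
    DiffuseMap = 1 << 1,
    ShadowMap  = 1 << 2
}

// A technique matches a request when it contains every requested feature bit.
static bool Supports(TechniqueKey technique, TechniqueKey requested)
{
    return (technique & requested) == requested;
}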

Probably something like this is overkill for a lot of small projects, and I reckon it is not as easy to fix something in the tool's generated code as it is to make changes in actual source code. But now that it finally works, the automatic configuration of shader variables is something that I like (at least compared to my previous implementation; I don't know how everyone else handles that). What I am asking is how extensible or expandable this approach is (maybe it is too late to ask this kind of question :D). Right now I have a handful of shaders defined in the system. If you had a look at the code, what kind of problems am I likely to run into when adding nodes to deal with geometry shaders and other advanced features?

Finally, if anyone is interested in having a look at the tool, I'm happy to share it on GitHub.
