Jump to content

  • Log In with Google      Sign In   
  • Create Account

NFAA - A Post-Process Anti-Aliasing Filter (Results, Implementation Details).


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
27 replies to this topic

#1 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 24 August 2010 - 05:21 PM

Helloooo people of GameDev.net! How ya been?

I've got something I'd like to share with you, along with implementation details and results. I'm sure some of you with a deferred renderer will appreciate this.

It's a new method of post-processing anti-aliasing that I finished today. I haven't thought of a name for it yet (if anyone has any ideas, I'd be infinitely grateful). - (I found a name, see bottom of post) -

Anyway, Here's how it works:

1. Gather samples around the current pixel for edge detection. I use 8, 1 for each neighboring pixel (up, down, left, right, diagonals), I initially used 3 but that wasn't enough.
2. Use them to create a normal-map image of the entire screen. This is the edge detection. How you do this is entirely up to you. Scroll down the page to see some sample code of how I do it.
3. Average 5 samples of the scene using the new normal image as offset coordinates (quincux pattern).

As simple as it sounds it actually works extremely well in most cases, as seen below. Images rendered with CryEngine2.

Left is post-AA, right is no AA. Image scaled by 200%.


Gif animation comparing the two. Also scaled by 200%.


Here's the heart of my algorithm, the edge detection.


Pros:
- Easily implemented: it's just a post effect.
- Pretty fast, mostly texture-fetch heavy (8 samples for edge detection, 5 for scene). < 1ms on a 9800GTX+ at 1920x800.
- Can solve aliasing on textures, lighting, shadows, etc.
- Flexible: in a deferred renderer, it can be applied to any buffer you want, so you can use real AA on the main buffer and post-AA on things like lighting.
- Works on shader model 2.0
- Since it's merely a post-process pixel shader, it scales very well with resolution

Cons:
- Can soften the image due to filtering textures (double-edged sword).
- Inconsistent results (some edges appear similar to 16x MSAA while others look like 2x).
- Can cause artifacts with too high of a normal-map strength/radius.
- Doesn't account for "walking jaggies" (temporal aliasing) and can't compensate for details on a sub-pixel level (like poles or wires in the distance), since that requires creating detail out of thin air.

Notes on the cons and how they might possibly be fixed:
-Image softening can be solved by using a depth-based approach instead.
-Inconsistent results and artifacts can probably be solved with higher sample counts.
-Last one will require some form of temporal aliasing (check Crytek's method in "Reaching the speed of light" if you want to do it via post-processing as well.)

That's it. Use the idea as you want, but if you get a job or something because of it, let 'em know I exist. ;)

Edit: New name = Normal Filter AA, because we're doing a normal filter for the edge detection. Less confusing than the previous name. :)

[Edited by - Styves on September 3, 2010 1:41:37 AM]

Sponsor:

#2 Hodgman   Moderators   -  Reputation: 31968

Like
0Likes
Like

Posted 24 August 2010 - 05:38 PM

Nice results.

Just to clarify -
* The normal is based on the rate of change in color, luminosity, ...?
* If the rate of change is large vertically, then your normal points in the horizontal direction?
* When doing the averaging, you take 4 points in a line, oriented in the direction of the normal?

#3 MJP   Moderators   -  Reputation: 11790

Like
0Likes
Like

Posted 24 August 2010 - 06:38 PM

The quality's definitely not bad for a post-process technique. It's not MLAA quality, but it's obviously many times simpler and cheaper. :P

Personally I'm still not a fan of color-only approaches. They work well for large high-contrast edges, but without sub-pixel information you totally fail deal two of the most annoying results of aliasing: low-frequency artifacts from thin wire-shaped geometry, and "crawling edges" caused by pixel-sized movements. You can also end up inadvertently filtering out high-frequency details in your textures or shading.

In regards to improvements, I don't think CryTek's solution for addressing temporal artifacts is a particularly good one...I think you really need sub-pixel coverage information to properly deal with that problem without just blurring things even further. Also depth works well for the silhouettes of your meshes, but you need normals as well if you want to handle intersecting geometry and also avoid false-positives for triangles parallel to the view direction.

Sorry if I sound overly negative...I've just been spending some time on this subject recently and I've been frustrated with the shortcomings. :P

#4 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 25 August 2010 - 08:16 PM

1. Anything you want. I'm using color for the moment, but I may switch to luminosity or use a different color space (I read a GPU MLAA paper that suggested using LAB color space). I tried depth and normals, which also works, albeit you need to do a bit of tweaking to get the most edges possible.
2. I'm not quite sure I get what you mean by that.
3. Maybe some pseudo-code might help:


float4 Scene0 = tex2D(sceneSampler, TexCoord.xy);
float4 Scene1 = tex2D(sceneSampler, TexCoord.xy + Normal.xy);
float4 Scene2 = tex2D(sceneSampler, TexCoord.xy - Normal.xy);
float4 Scene3 = tex2D(sceneSampler, TexCoord.xy + float2(Normal.x, -Normal.y));
float4 Scene4 = tex2D(sceneSampler, TexCoord.xy - float2(Normal.x, -Normal.y));

return (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;


That's pretty much it, should give you a better explanation than me trying to type it out.

I hear y'a MJP, this definitely isn't a magic solution for AA. It just helps reduce aliasing from edges and alpha-tested textures, which is better than nothing (especially in a game that uses a lot of vegetation, like Crysis for example). :)

[Edited by - Styves on August 26, 2010 1:16:04 PM]

#5 InvalidPointer   Members   -  Reputation: 1445

Like
0Likes
Like

Posted 26 August 2010 - 03:21 AM

Quote:
Original post by Styves
As for the name, me and some friends decided to call it "Tangent Space Anti-Aliasing", 'cause we're basically working with a tangent-space normal map for the edge detection.


Nitpick: This is most definitely in screen space, not tangent space :)

Have you experimented at all with temporal reprojection? If you've read any of the new SIGGRAPH 2010 materials on CryEngine 3, Crytek demonstrates pretty good results for their new edge antialiasing technique by jittering the camera to capture sub-pixel detail-- it works surprisingly well. I have link(s) if you're interested.



#6 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 26 August 2010 - 07:17 AM

Edit: Changed the name to "Normal Filter AA", since we're doing a normal-map filter (more appropriate, less confusing :D).

I linked the CryEngine3 papers in my original post, but if you have any other links I'd be glad to have 'em. I do most (if not all) of my work in CryEngine2, so I've got quite a few limitations (can't provide my own engine information, can't create custom shader passes, can't assign certain shaders custom textures, etc.) It's a real pain.

Anyway, I'm definitely interested in at least attempting it to see how far I can get. :)

[Edited by - Styves on August 26, 2010 1:17:16 PM]

#7 Volgut   Members   -  Reputation: 256

Like
0Likes
Like

Posted 01 September 2010 - 10:25 PM

Hello,

I was looking for a good anti-aliasing technique for my deferred rendering for a long time. Your idea sounds very interesting and I would like to test it in my engine, but I am not sure if I fully understood how it works. How exactly do you calculate normals from color? In my engine I used luminosity, but my results are not as good as yours.

Here is my shader code:


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal
float2 upOffset = float2( 0, vPixelViewport.y );
float2 rightOffset = float2( vPixelViewport.x, 0 );

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset+upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset-upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset+upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset-upOffset).rgb );

float xDifference = (2*rightHeight+rightTopHeight+rightBottomHeight)/4.0f - (2*leftHeight+leftTopHeight+leftBottomHeight)/4.0f;
float yDifference = (2*topHeight+leftTopHeight+rightTopHeight)/4.0f - (2*bottomHeight+rightBottomHeight+leftBottomHeight)/4.0f;
float3 vec1 = float3( 1, 0, xDifference );
float3 vec2 = float3( 0, 1, yDifference );
float3 Normal = normalize( cross( vec1, vec2 ) );

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;




And this is how I calculate luminosity:

float GetColorLuminance( float3 i_vColor )
{
return dot( i_vColor, float3( 0.2126f, 0.7152f, 0.0722f ) );
}




And at the end I would like to show you screenshots with and without AA. Images scaled by 200%.


[Edited by - Volgut on September 2, 2010 4:25:22 AM]

#8 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 02 September 2010 - 05:54 PM

Volgut, I'm very glad to see someone interested in the idea. :D

I don't use a cross computation to get my normal-image. I use an add/subtract system in order to form my edges.

What I do is sum up samples that form triangles then subtract the opposite triangle.

I've modified your code to use the same edge detection algorithm, try it out and let me know how it works (also added documentation for convenience). :)


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

// Normal, scale it up 3x for a better coverage area
float2 upOffset = float2( 0, vPixelViewport.y ) * 3;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * 3;

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset+upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy-rightOffset-upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset+upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy+rightOffset-upOffset).rgb );

// Normal map creation, this is where it differs.
float sum0 = rightTopHeight+ topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + TopRight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight ;

// Then for the final vectors, just subtract the opposite sample set.
// The amount of "antialiasing" is directly related to "filterStrength".
// Higher gives better AA, but too high causes artifacts.
float filterStrength = 1;
float vec1 = (sum1 - sum0) * filterStrength;
float vec2 = (sum2 - sum3) * filterStrength;

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;

// To debug the normal image, use this:
// o_Color .xyz = normalize(float3(vec1, vec2 , 1) * 0.5 + 0.5);
// using vec1 and vec2 for the debug output as Normal won't display anything (due to the pixel scale applied to it).














You should get something similar to this when using the debug output.



As I said before, I'm using pure color. It's rather simpler really, just do the same thing as usual but sample the colors (just change the samples to float3), and just before you put them together to form "Normal", you add all three color channels together.

[Edited by - Styves on September 3, 2010 1:54:58 AM]

#9 Volgut   Members   -  Reputation: 256

Like
0Likes
Like

Posted 03 September 2010 - 02:10 AM

Hello Styves,

Thank you for helping me :) Your code was very useful. I only modified it a bit and I implemented normal scale/offset based on pixel distance from the camera. With fixed scale/offset I was getting some artefacts in the distance (bleeding).

Here you can see two screenshots (scaled 200%) with and without anti-aliasing.


Your anti-aliasing technique works very well. As you wrote, it doesn't solve all problems, but it definitely can reduce aliasing effect. Now I will try to look into Crytek's temporal aliasing.

Thanks again.

Regards,
Tomasz


#10 Hodgman   Moderators   -  Reputation: 31968

Like
0Likes
Like

Posted 03 September 2010 - 02:44 AM

Quote:
Original post by Styves
I don't use a cross computation to get my normal-image. I use an add/subtract system in order to form my edges.
What I do is sum up samples that form triangles then subtract the opposite triangle.
Hey that's cool. I actually implemented that once on the Wii, except my triangles were rotated 45 degrees compared to yours! I never ended up getting as good quality as this out of it though, so we scrapped it.
I'm going to have to give this another shot now, thanks ;)

#11 __sprite   Members   -  Reputation: 461

Like
0Likes
Like

Posted 08 September 2010 - 10:55 AM

Hi. I've been testing this out. Seems pretty cool so far

Quote:
Original post by Styves

source:

// Put them together and multiply them by the offset scale for the final result.
float2 Normal = float2( vec1, vec2) * vPixelViewport;

// Color
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur



This seems wrong to me though. I don't think you need to multiply by vPixelViewport a second time?

#12 Volgut   Members   -  Reputation: 256

Like
0Likes
Like

Posted 08 September 2010 - 07:17 PM

Yes, you are right. The vector should be only scaled by variable called "filterStrength".

#13 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 08 September 2010 - 10:22 PM

Ah yes, you're right. The second multiplication isn't needed, you just need to do it once. :)

Feel free to post images and ideas you have regarding the technique! I'm curious to see how it's helping people :D.

#14 venzon   Members   -  Reputation: 256

Like
0Likes
Like

Posted 01 October 2010 - 05:32 PM

I implemented this in OpenGL/GLSL and at first I was disappointed with the results... but after playing with it a bit I got pretty decent results. The most important thing I found was clamping Normal.xy before doing the texture lookups:
Normal.xy = clamp(Normal,-float2(1.0,1.0)*0.4,float2(1.0,1.0)*0.4); //Prevent over-blurring in high-contrast areas! -venzon
Normal.xy *= vPixelViewport * 2; // Increase pixel size to get more blur
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) );


[Edited by - venzon on October 2, 2010 12:32:22 AM]

#15 Styves   Members   -  Reputation: 1078

Like
0Likes
Like

Posted 30 October 2010 - 11:57 AM

I've made some changes to the shader. I've removed the need to clamp values or scale the edges. The source of the problem seemed to be the diagonal samples at the end of the shader. I've tweaked the shader code from this thread to do the same. Of course, if you're using the shader on an HDR image then clamping may still be necessary. :)


float2 vPixelViewport = float2( 1.0f / VIEWPORT_WIDTH, 1.0f / VIEWPORT_HEIGHT );

const float fScale = 3;

// Offset coordinates
float2 upOffset = float2( 0, vPixelViewport.y ) * fScale;
float2 rightOffset = float2( vPixelViewport.x, 0 ) * fScale;

float topHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + upOffset).rgb );
float bottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - upOffset).rgb );
float rightHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset).rgb );
float leftHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset).rgb );
float leftTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset + upOffset).rgb );
float leftBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy - rightOffset - upOffset).rgb );
float rightBottomHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset + upOffset).rgb );
float rightTopHeight = GetColorLuminance( tex2D( ColorTextureSampler, i_TexCoord.xy + rightOffset - upOffset).rgb );

// Normal map creation
float sum0 = rightTopHeight+ topHeight + rightBottomHeight;
float sum1 = leftTopHeight + bottomHeight + leftBottomHeight;
float sum2 = leftTopHeight + leftHeight + TopRight;
float sum3 = leftBottomHeight + rightHeight + rightBottomHeight;

float vec1 = (sum1 - sum0);
float vec2 = (sum2 - sum3);

// Put them together and scale.
float2 Normal = float2( vec1, vec2) * vPixelViewport * fScale;

// Color
float4 Scene0 = tex2D( ColorTextureSampler, i_TexCoord.xy );
float4 Scene1 = tex2D( ColorTextureSampler, i_TexCoord.xy + Normal.xy );
float4 Scene2 = tex2D( ColorTextureSampler, i_TexCoord.xy - Normal.xy );
float4 Scene3 = tex2D( ColorTextureSampler, i_TexCoord.xy + float2(Normal.x, -Normal.y) * 0.5 );
float4 Scene4 = tex2D( ColorTextureSampler, i_TexCoord.xy - float2(Normal.x, -Normal.y) * 0.5 );

// Final color
o_Color = (Scene0 + Scene1 + Scene2 + Scene3 + Scene4) * 0.2;





#16 b34r   Members   -  Reputation: 365

Like
1Likes
Like

Posted 04 November 2010 - 04:39 AM

Hi,

Very nice results! Unfortunately I could not reproduce them no matter how. So I came up with my own version of your filter. It requires slightly less samples and still gives very good results.

I made a blog entry about it and here is the GLSL filter. w controls the width of the filter (how large the edges will get) and the threshold (hard-coded comparison to 1.0/6.0) prevents over blurring edges that might not need it.


float lumRGB(vec3 v)
{ return dot(v, vec3(0.212, 0.716, 0.072)); }

void main()
{
vec2 UV = gl_FragCoord.xy * inverse_buffer_size;

float w = 1.75;

float t = lumRGB(texture2D(source, UV + vec2(0.0, -1.0) * w * inverse_buffer_size).xyz),
l = lumRGB(texture2D(source, UV + vec2(-1.0, 0.0) * w * inverse_buffer_size).xyz),
r = lumRGB(texture2D(source, UV + vec2(1.0, 0.0) * w * inverse_buffer_size).xyz),
b = lumRGB(texture2D(source, UV + vec2(0.0, 1.0) * w * inverse_buffer_size).xyz);

vec2 n = vec2(-(t - b), r - l);
float nl = length(n);

if (nl < (1.0 / 16.0))
gl_FragColor = texture2D(source, UV);

else
{
n *= inverse_buffer_size / nl;

vec4 o = texture2D(source, UV),
t0 = texture2D(source, UV + n * 0.5) * 0.9,
t1 = texture2D(source, UV - n * 0.5) * 0.9,
t2 = texture2D(source, UV + n) * 0.75,
t3 = texture2D(source, UV - n) * 0.75;

gl_FragColor = (o + t0 + t1 + t2 + t3) / 4.3;
}
}




Btw, I played with temporal anti-aliasing and there really is something great to be done on this front too. The main issue to solve is about how to combine jittered frames without introducing too much ghosting artifacts.

Cheers

#17 0xffffffff   Members   -  Reputation: 150

Like
0Likes
Like

Posted 23 January 2011 - 07:23 PM

We had great success with this approach. Our implementation is somewhat different than that described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA.

#18 JorenJoestar   Members   -  Reputation: 173

Like
0Likes
Like

Posted 26 January 2011 - 06:28 AM

We had great success with this approach. Our implementation is somewhat different than that described above.

We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA.


Can you describe better the process you are using? Are you using both normal and depth???

Thanks!
---------------------------------------http://badfoolprototype.blogspot.com/

#19 0xffffffff   Members   -  Reputation: 150

Like
0Likes
Like

Posted 26 January 2011 - 03:07 PM

Can you describe better the process you are using? Are you using both normal and depth???

Using only the color information. At each pixel the edge direction and contrast is estimated using color differences between neighboring pixels (in our version, the color of the "central" pixel is ignored for this purpose). E.g., if it's bright on the right side and dark on the left side, then the edge has a strong vertical component (put another way, the edge normal points to the left).

In practice our sample placement represents a slightly rotated pair of basis vectors relative to the rows and columns of the image. The color differences are reduced to scalars (somehow) which become the weights for a sum of these basis vectors, and that sum is the "step size" for the directional blur.

aa filter.jpg

#20 Sikkpin1   Members   -  Reputation: 100

Like
0Likes
Like

Posted 27 January 2011 - 10:18 AM

I did pretty much the same thing a little bit ago that really did work rather well, considering how simple it is. The shader code can be found here. It's for Crysis and it's in the "PostEffects.cfx" file. Searching for "// sikk---> Edge AA" in the file will take you right to it.




Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.



PARTNERS