2 questions I have

corntown
Hello all, I have 2 questions I hope you can answer for me:

1. I can't seem to understand the difference between two FVFs:
D3DFVF_XYZ - the coordinates of an untransformed vertex.
D3DFVF_XYZRHW - already-transformed coordinates.
I don't understand what "transformed" means in those cases.

2. In the book I'm reading, a cube is created from these vertices:
{
// face 1: front (z = -64), blue
{ -64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ -64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
// face 2: back (z = 64), red
{ -64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ -64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
// face 3: top (y = 64), green
{ -64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ -64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,255,0)},
// face 4: bottom (y = -64), red
{ -64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ -64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,0,0)},
// face 5: right (x = 64), white
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,255,255)},
// face 6: left (x = -64), gray
{-64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,80,80,80)},
};

I have a problem understanding how the coordinates are located on the screen. I think if the Z coordinate is negative it's further away, and if it's positive it's closer... but I don't understand what negative X and Y coordinates mean. Can anyone tell me how the x, y, z axes are located on the screen? Until now I thought X and Y could only be positive: the bigger Y gets, the further down the screen, and the bigger X gets, the further right. But when a coordinate is negative, is it off the screen, or what? Thanks.

sirob
The two questions you asked are similar in nature. The difference between transformed vertices and untransformed vertices is the same as the difference between the points of the cube as they are presented by the book (in world space) and the points of the cube as you are expecting them to be (in screen space).

Vertices marked as XYZRHW provide coordinates in pixels on the screen. This means that a point at 512, 384 will be at the center of a 1024x768 screen. In this case, Z is the depth "into the screen" (in the range [0..1]) and RHW (the reciprocal of the homogeneous W) should usually be 1.

Untransformed vertices, on the other hand, are not in screen space. They are in world space. That means their position is relative to "the world". While "the world" isn't anything specific, in most cases "world space" is the playing field for your 3D game. If you imagine a soccer field, that could be your world. One corner might be at 0,0,0 and another at 100, 0, 20. How you use this space is up to you, and the size of each unit is also up to you.

When untransformed vertices are rendered, they go through a transformation process that moves them from one space to another: in this case, from world space to screen space. This is how a 3D scene is turned into a 2D picture, using transformation. The actual transformation depends on several things, such as the position of the "camera" taking the picture. These are all defined using transformation matrices, which determine how a vertex is transferred from being XYZ to being XYZRHW.

Going back to your cube, the positions are specified in some "world space". Where this space is, or how big each unit in it is, hasn't been specified yet; it will be specified when the transformation matrices are set. In this space, negative or positive numbers don't make much difference.

I hope this clears things up a bit. If you have more questions, feel free to ask.

corntown
Quote:
Original post by sirob
The two questions you asked are similar in nature. The difference between transformed vertices and untransformed vertices is the same as the difference between the points of the cube as they are presented by the book (in world space) and the points of the cube as you are expecting them to be (in screen space). [...]


Thanks a lot for your response.

I have some questions about what you wrote:

- I have a bit of a problem understanding where the "world space" is defined in my program:


// Include the Windows header file that's needed for all Windows applications
#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <iostream>
#include <string> // needed for the std::string parameter of getSurfaceFromBitmap
using namespace std;


#define Move_Speed 10





// a structure for your custom vertex type
// a structure for your custom vertex type
typedef struct CUSTOMVERTEX
{
FLOAT x, y, z; // the untransformed 3D position of the vertex
DWORD color; // the vertex color
} customV;

customV g_Vertices[] =
{
// face 1: front (z = -64), blue
{ -64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ -64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,0,0,255)},
// face 2: back (z = 64), red
{ -64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ -64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
// face 3: top (y = 64), green
{ -64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ -64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,255,0)},
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,0,255,0)},
// face 4: bottom (y = -64), red
{ -64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ -64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,0,0)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,0,0)},
// face 5: right (x = 64), white
{ 64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,255,255,255)},
{ 64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,255,255,255)},
// face 6: left (x = -64), gray
{-64.0f, 64.0f, -64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, -64.0f, -64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, 64.0f, 64.0f, D3DCOLOR_ARGB(0,80,80,80)},
{-64.0f, -64.0f, 64.0f, D3DCOLOR_ARGB(0,80,80,80)},
};




//------------------------------------------------------------------------




HINSTANCE hInst; // global handle to hold the application instance
HWND wndHandle; // global variable to hold the window handle
// forward declarations

LPDIRECT3D9 pD3D; // the Direct3D object
LPDIRECT3DDEVICE9 pd3dDevice; // the Direct3D device

IDirect3DSurface9* surface; //personal surface pointer
IDirect3DSurface9* surface2;

LARGE_INTEGER timeStart; // holds the starting count
LARGE_INTEGER timeEnd; // holds the ending count
LARGE_INTEGER timerFreq; // holds the frequency of the counter

float anim_rate;


LPDIRECT3DVERTEXBUFFER9 buffer = NULL;
VOID* pVertices;

D3DXMATRIX matProj;
D3DXMATRIX matView;

float CubeUp=400.0f;
float CubeLeft=-80.0f;


//-------------------------------------------------
bool initWindow( HINSTANCE hInstance );
LRESULT CALLBACK WndProc( HWND, UINT, WPARAM, LPARAM );

bool initDirect3D(void);
void render(void);
void cleanUp (void);

IDirect3DSurface9* getSurfaceFromBitmap(std::string filename);

HRESULT SetupVB(void);

void createCamera(float nearClip, float farClip);
void pointCamera(D3DXVECTOR3 cameraPosition, D3DXVECTOR3 cameraLook);


//-------------------------------------------------




// This is winmain, the main entry point for Windows applications
int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPTSTR lpCmdLine, int nCmdShow )
{









// Initialize the window
if ( !initWindow( hInstance ) ) //creates window (given instance)
return false;

if ( !initDirect3D( ) ) //init window for 3d usage
return false;




if (FAILED(SetupVB()))
return false;


QueryPerformanceFrequency(&timerFreq);



// main message loop:
MSG msg;
ZeroMemory( &msg, sizeof( msg ) );

while( msg.message!=WM_QUIT )
{
if( PeekMessage( &msg, NULL, 0U, 0U, PM_REMOVE ) )
{
TranslateMessage ( &msg );
DispatchMessage ( &msg );
}
else
{


QueryPerformanceCounter(&timeStart);

render( );

QueryPerformanceCounter(&timeEnd);
anim_rate =((float)timeEnd.QuadPart - (float)timeStart.QuadPart ) /timerFreq.QuadPart;


}
}
cleanUp ();
return (int) msg.wParam;

}



/******************************************************************************
* bool initWindow( HINSTANCE hInstance )
* initWindow registers the window class for the application, creates the window
******************************************************************************/

bool initWindow( HINSTANCE hInstance )
{
WNDCLASSEX wcex;
// Fill in the WNDCLASSEX structure. This describes how the window
// will look to the system
wcex.cbSize = sizeof(WNDCLASSEX); // the size of the structure
wcex.style = CS_HREDRAW | CS_VREDRAW; // the class style
wcex.lpfnWndProc = (WNDPROC)WndProc; // the window procedure callback
wcex.cbClsExtra = 0; // extra bytes to allocate for this class
wcex.cbWndExtra = 0; // extra bytes to allocate for this instance
wcex.hInstance = hInstance; // handle to the application instance
wcex.hIcon = 0; // icon to associate with the application
wcex.hCursor = LoadCursor(NULL, IDC_ARROW);// the default cursor
wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); // the background color
wcex.lpszMenuName = NULL; // the resource name for the menu
wcex.lpszClassName = "DirectXExample"; // the class name being created
wcex.hIconSm = 0; // the handle to the small icon
RegisterClassEx(&wcex);


// Create the window
wndHandle = CreateWindow(
"DirectXExample",
"DirectXExample",
WS_EX_TOPMOST | WS_POPUP | WS_VISIBLE,
// the window class to use
// the title bar text
// the window style
CW_USEDEFAULT, // the starting x coordinate
CW_USEDEFAULT, // the starting y coordinate
640, // the pixel width of the window
480, // the pixel height of the window
NULL, // the parent window; NULL for desktop
NULL, // the menu for the application; NULL for
// none
hInstance, // the handle to the application instance
NULL); // no values passed to the window
// Make sure that the window handle that is created is valid

if (!wndHandle)
return false;

// Display the window on the screen
ShowWindow(wndHandle, SW_SHOW);
UpdateWindow(wndHandle);
return true;
}


/******************************************************************************
* LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam,
* LPARAM lParam)
* The window procedure
******************************************************************************/

LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
// Check any available messages from the queue
switch (message)
{
case WM_DESTROY:
{
MessageBox(NULL,"Window has ended","End",MB_OK);
PostQuitMessage(0);
}
break;

case WM_KEYDOWN:
{
switch( wParam )
{
case VK_ESCAPE:
PostQuitMessage(0);
break;

case VK_LEFT:
CubeLeft+=Move_Speed;
break;

case VK_RIGHT:
CubeLeft-=Move_Speed;
break;

case VK_UP:
CubeUp+=Move_Speed;
break;

case VK_DOWN:
CubeUp-=Move_Speed;
break;

//case VK_ASTERISK:{}


}


}
break;

}

return DefWindowProc(hWnd, message, wParam, lParam);
}

/*********************************************************************
* initDirect3D
*********************************************************************/

bool initDirect3D(void)
{
pD3D = NULL;
pd3dDevice = NULL;


if( NULL == ( pD3D = Direct3DCreate9( D3D_SDK_VERSION ) ) )
{
return false;
}




D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory( &d3dpp, sizeof( d3dpp ) );
d3dpp.Windowed = FALSE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;
d3dpp.BackBufferCount = 1;
d3dpp.BackBufferHeight = 480;
d3dpp.BackBufferWidth = 640;
d3dpp.hDeviceWindow = wndHandle;





if( FAILED( pD3D->CreateDevice( D3DADAPTER_DEFAULT,D3DDEVTYPE_REF,wndHandle,
D3DCREATE_SOFTWARE_VERTEXPROCESSING,&d3dpp,&pd3dDevice ) ) )
{
return false;
}
return true;
}

void render(void)
{

HRESULT hr;
D3DXMATRIX objMat, matRotate, finalMat,matTranslate,matScale;

D3DXMATRIX matFinal;

IDirect3DSurface9* backbuffer = NULL;
if( NULL == pd3dDevice )
return;




pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET,
D3DCOLOR_XRGB( 0,0,0 ), 1.0f, 0 );



if ( SUCCEEDED( pd3dDevice->BeginScene( ) ) )
{
pd3dDevice->SetStreamSource ( 0, buffer, 0, sizeof(customV) );

pd3dDevice->SetFVF(D3DFVF_XYZ | D3DFVF_DIFFUSE);

createCamera(1.0f, 2000.0f); // near and far clip distances; (0.0f, 0.0f) would make the projection degenerate
pointCamera(D3DXVECTOR3 (-64.0f, 64.0f, -500.0f), D3DXVECTOR3 (0.0f, 0.0f, 0.0f));


pd3dDevice->SetRenderState( D3DRS_LIGHTING, FALSE );

// Set meshMat to identity
D3DXMatrixIdentity(&objMat);
// Set the rotation
D3DXMatrixRotationY(&matRotate, timeGetTime()/1000.0f);
// Multiply the scaling and rotation matrices to create the objMat matrix
D3DXMatrixMultiply(&finalMat, &objMat, &matRotate);

pd3dDevice->SetTransform(D3DTS_WORLD, &finalMat);



pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );
pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 4, 2 );
pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 8, 2 );
pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 12, 2 );
pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 16, 2 );
pd3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 20, 2 );



hr=pd3dDevice->EndScene();

if ( FAILED ( hr ) )
return ;


}




//pd3dDevice->GetBackBuffer( 0,0,D3DBACKBUFFER_TYPE_MONO,&backbuffer );


// Present the back buffer contents to the display
pd3dDevice->Present( NULL, NULL, NULL, NULL );




}


void cleanUp (void)
{
// Release the device and the Direct3D object
if( pd3dDevice != NULL )
pd3dDevice->Release( );
if( pD3D != NULL )
pD3D->Release( );
}



/**********************************************************
* getSurfaceFromBitmap
**********************************************************/

IDirect3DSurface9* getSurfaceFromBitmap(std::string filename)
{
HRESULT hResult;
IDirect3DSurface9* surface = NULL;
D3DXIMAGE_INFO imageInfo; // holds details concerning this bitmap

// Get the width and height info from this bitmap
hResult = D3DXGetImageInfoFromFile(filename.c_str(), &imageInfo);
// Make sure that the call to D3DXGetImageInfoFromFile succeeded
if FAILED (hResult)
return NULL;

// Create the offscreen surface that will hold the bitmap
hResult = pd3dDevice->CreateOffscreenPlainSurface( 640,
480,
D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,
&surface,
NULL );

// Make sure that this function call did not fail; if it did,
// exit this function
if ( FAILED( hResult ) )
return NULL;

// Load the bitmap into the surface that was created earlier
hResult = D3DXLoadSurfaceFromFile( surface,
NULL,
NULL,
filename.c_str( ),
NULL,
D3DX_DEFAULT,
0,
NULL );
if ( FAILED( hResult ) )
return NULL;
return surface;
}






HRESULT SetupVB()
{
HRESULT hr;
// Define the vertices to be used in the buffer


// Create the vertex buffer
hr = pd3dDevice->CreateVertexBuffer(
sizeof(g_Vertices), // total size in bytes; sizeof already counts every vertex
0,
D3DFVF_XYZ | D3DFVF_DIFFUSE,
D3DPOOL_DEFAULT,
&buffer,
NULL );

if FAILED ( hr )
return E_FAIL;




hr = buffer->Lock( 0,sizeof(g_Vertices), ( void** ) &pVertices, 0 );

if FAILED (hr)
return E_FAIL;

memcpy( pVertices, g_Vertices, sizeof(g_Vertices) );

buffer->Unlock();

return S_OK;
}


void createCamera(float nearClip, float farClip)
{
// Here, you specify the field of view, aspect ratio,
// and near and far clipping planes
D3DXMatrixPerspectiveFovLH(&matProj, D3DX_PI/4, 640.0f/480.0f, nearClip, farClip); // note: 640/480 in integer math is 1, giving a wrong aspect ratio
// Apply the matProj matrix to the projection stage of the pipeline
pd3dDevice->SetTransform(D3DTS_PROJECTION, &matProj);
}

/*************************************************************************
* pointCamera
* points the camera at a location specified by the passed vector
*************************************************************************/

void pointCamera(D3DXVECTOR3 cameraPosition, D3DXVECTOR3 cameraLook)
{
D3DXMatrixLookAtLH (&matView,
&cameraPosition, //camera position
&cameraLook, //look at position
&D3DXVECTOR3 (0.0f, 1.0f, 0.0f)); //up direction
// Apply the matrix to the view stage of the pipeline
pd3dDevice->SetTransform (D3DTS_VIEW, &matView);
}





- Is there maybe a default "world space"?

- And what are the limits of the "world space" (if, for example, my "screen space" is 640x480)?

- Is the createCamera function what creates the transformation from XYZ to XYZRHW?

- Can you explain the purpose of nearClip and farClip in the createCamera function?

thanks again

Driv3MeFar
You should read up a bit on how the 3D pipeline works. World space is not something that needs to be "defined", nor does it have limits. It is simply a coordinate system in which you can describe all objects in your world with respect to the global origin, (0,0,0).

Objects in world space get transformed into screen space by a series of transforms. The view (/eye/camera) transform is one of these, but the real magic happens in the perspective projection matrix. This basically creates a frustum (a truncated pyramid) using the near and far planes you provide, as well as the viewing angle and screen aspect ratio. The perspective matrix transforms the frustum and everything in it into a small box (in Direct3D, x and y run from -1 to 1 and z from 0 to 1), from which mapping to screen space is trivial.

As I said, the near and far planes are required to build your view frustum. Abstractly, they provide the clipping planes in front of and behind which objects will not get drawn.

Edit: Now that I look through your code, yes, the (poorly named) createCamera function is where you create the transform that converts coordinates from XYZ->XYZRHW.

[Edited by - Driv3MeFar on May 7, 2007 6:47:56 PM]

Lifepower
I have a small clarification to add to what sirob said.

Quote:
Original post by sirob
Untransformed vertices, on the other hand, are not in screen space. They are in world space. That means their position is relative to "the world". While "the world" isn't anything specific, in most cases "world space" is the playing field for your 3D game. If you imagine a soccer field, that could be your world. One corner might be at 0,0,0 and another at 100, 0, 20. How you use this space is up to you, and the size of each unit is also up to you.

Untransformed vertices, specifically those used with the D3DFVF_XYZ flag as corntown mentioned, are not specified in world space. They are specified in local (or object) space, which is transformed to world space using D3DTS_WORLD in IDirect3DDevice9::SetTransform. That is, these vertices represent your 3D object around its own point of reference (it doesn't really matter where). This means that you can have one or several 3D meshes, each built around its own point of reference, and by transforming each of them by a particular matrix (commonly called the world matrix), you place them all around a shared point of reference; after this operation, what you have is a 3D scene with its 3D objects placed in it.

As you move the 3D camera in your world, you see this 3D world from another point of reference (your camera), so you have to transform the 3D scene by another matrix, which is commonly called the view matrix.

Finally, you have to calculate the screen coordinates where the polygons will be drawn. This is achieved by transforming your vertices once again, by a so-called projection matrix. The resulting vertices will have x and y coordinates in the [-1..1] range (and z in [0..1]), which need to be mapped to your screen size (e.g. 640x480). This last step is done by Direct3D.

A "transformation" is just a multiplication between a vector and a matrix. By multiplying the vector by some matrix, you "transform" it to a new position.

If you specify the D3DFVF_XYZRHW flag, it means that the coordinates you are passing to Direct3D have already been transformed by all three matrices and mapped to the screen size, so they are already in a range like [0..639], [0..479].

Generally, there are no limits in a given "space". Basically, a "space" is merely some point of reference, nothing more. Multiple spaces are used for convenience: for example, so that you can move your 3D camera and immediately see the change in position of all 3D objects.

As for the nearClip and farClip parameters of the projection matrix: these are part of a "trick". They scale the depth of the resulting vector so that it ends up within a specific range. This helps both with precision when rasterizing triangles (the depth is used for the perspective divide and for depth testing) and with clipping, so you only see the triangles in front of you.

I hope this clarifies things a little. [smile]
