Murdocki

Member Since 30 Jan 2010
Offline Last Active Feb 16 2014 05:37 AM

Topics I've Started

Downcasting an application-registered interface

30 September 2013 - 04:26 AM

Hello AngelScript users,

 

I'm trying to downcast an object handle from an application-registered interface to a script-defined class. I want to do this as described in the 'Returning script classes' section on this page, but that page doesn't describe what you can and cannot do with the returned handle, and that's where it's going wrong for me.

My final goal is to be able to attach script instances to entities (as behaviours), but I also want to get those script instances from inside other scripts. So, for example, I want the scene's main script to be able to get a behaviour script that's attached to an entity in that scene. I've added the relevant code below; I've provided quite a lot of it to give the full context.

 

 

The application registers the interface I'm using to pass around the script instance like so:

engine->RegisterInterface( "Script" );

Then the behaviours that are attached to an entity are included by the script builder like so:

int IncludeCallback( const char* include, const char* from, CScriptBuilder* builder, void* userParam )
{
	...(other predefined includes)
	else if( std::string( include ).compare( "EntityBehaviour" ) == 0 )
	{
		static std::string entityBehaviourClass =
		"class EntityBehaviour : Script\n\
		{\n\
			Entity@ entity;\n\
			ComponentLight@ light;\n\
			ComponentModel@ model;\n\
			...(other components)
		}";
		return builder->AddSectionFromMemory( "EntityBehaviour", entityBehaviourClass.c_str() );
	}
	...(includes from files)
}

The final scripted behaviour includes and implements EntityBehaviour.

#include "EntityBehaviour"
class TestAnimPlayer : EntityBehaviour
...(class implementation)

All of this works fine by itself, but when I try to add the functionality for getting the behaviours from an entity, something goes wrong. I've added the function that should return the script class. This is registered outside the AngelScript package, so it's wrapped; the returned void* is actually an asIScriptObject*.

void* GetScriptBehaviour( Entity& entity, const std::string& typeName )
{
...(looks up the behaviour by its name and returns an asIScriptObject*)
}
scriptManager.RegisterObjectMethod( "Entity", "Script@ GetScriptBehaviour( const string& in )", asFUNCTION( GetScriptBehaviour ), EECore::ScriptManager::CALL_CDECL_OBJFIRST );
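
For context, here's a simplified sketch of what the lookup roughly looks like. The behaviours container and GetBehaviours() accessor are made up; only the asIScriptObject/asIObjectType calls are real AngelScript API. Note the AddRef before returning: since the function is registered as returning a handle, the usual convention is to hand out an extra reference that the script side releases.

// Simplified sketch only: the behaviours container is hypothetical.
void* GetScriptBehaviour( Entity& entity, const std::string& typeName )
{
	const std::vector<asIScriptObject*>& behaviours = entity.GetBehaviours(); // hypothetical accessor
	for( size_t i = 0; i < behaviours.size(); ++i )
	{
		if( typeName == behaviours[ i ]->GetObjectType()->GetName() )
		{
			// Registered as returning Script@, so hand out an extra reference.
			behaviours[ i ]->AddRef();
			return behaviours[ i ];
		}
	}
	return 0; // becomes a null handle in the script
}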

And here's the final script where it's going wrong:

Entity@ lb = scene.GetEntity( "LB animated" ); //This handle is valid.
Script@ playerScript = lb.GetScriptBehaviour( "TestAnimPlayer" ); //This handle is valid.
//DebugScriptType( playerScript );
if( playerScript is null )
	Print( "no player script found" );
else
	Print( "Player script was found" ); //This is printed (since the andle is valid)
EntityBehaviour@ behaviour = cast< EntityBehaviour >( playerScript ); //Will be null
if( behaviour is null )
	Print( "cast failed" ); //Prints this
else
	Print( "cast succeeded" );
TestAnimPlayer@ animPlayer = cast< TestAnimPlayer >( playerScript ); //Will be null
if( animPlayer is null )
	Print( "cast failed" ); //Prints this
else
	Print( "cast succeeded" );
//animPlayer.animationClip = "dance";

The page on inheritance describes the '@handle_to_B = cast<B>(handle_to_A);' behaviour, which is what I'm trying to do. Though I'm not sure: shouldn't that cast to a B@?

For debugging purposes I've added the DebugScriptType function; if I enable it, this method gets called:

void DebugScriptType( asIScriptObject* obj )
{
	obj->Release();
}
engine->RegisterGlobalFunction( "void DebugScriptType( Script@ obj )", asFUNCTION( DebugScriptType ), asCALL_CDECL );

When inspecting obj in the debugger, it's filled in like this:

obj
  -objType
    -name
      -dynamic: TestAnimPlayer
    -interfaces
      -[0].name
        -local: Script
    -derived from
      -name
        -dynamic: EntityBehaviour
      -interfaces
        -[0].name
          -local: Script

This type hierarchy looks correct to me.

 

I hope somebody can help me out a bit with this. Or would it be better to use the script handle add-on?

 

Thanks in advance.


MSVC++ reinstall - result: crashes

29 August 2012 - 12:49 PM

Hey guys,

Recently I reinstalled my PC, which also meant a new Visual Studio 2010 installation. With this new installation I'm experiencing a lot of Visual Studio crashes (about once per minute). Here are some things I was doing when VS crashed:
  • Linking
  • Typing a . (opening autocomplete)
  • Pressing F5 to start a debug session
  • Any time the debugger hits a breakpoint
  • Any time a breakpoint is placed while a debug session is in progress
Also, the generated output (an executable) seems to be corrupt or something similar. I am able to run the application, but the scripting environment (AngelScript) goes completely whack: for example, a perfectly fine Vec3 is interpreted as all float max values.

This all happens with a codebase that was working perfectly fine before, which suggests the issue is tool related rather than a source or solution problem.
I have installed all available Windows updates and Visual Studio updates (e.g. VS SP1). I am missing a hotfix I had before the reinstall, but I needed that hotfix because builds were hanging on '.exe in use', so I don't think it would fix all of these issues.

I'm just wondering if anyone has any tips on where to go from here (I have already tried another reinstall and that didn't help).

I appreciate any help,
Murdocki

Transformation Gizmo

25 February 2012 - 03:49 PM

Hey guys,

I'm trying to make a transformation gizmo for my scene editor so I can easily transform objects around, but I've bumped into a little problem I need some help with. I've started with the translation gizmo, and I'd like it to be a series of three axis-aligned billboards. I'd also like the gizmo to be a consistent size regardless of distance.

To do this I've created a custom shader, and I'm building a custom ortho matrix especially for rendering the gizmo. While there are still some problems with positioning the gizmo correctly, the issue I'd like help with is that I'd like the axis-aligned billboards to use perspective so that they still show parallel to the selected object's axes. I'm not quite sure how to do this; currently I'm thinking I might need to render with a perspective matrix after all and do some custom scaling.
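
For what it's worth, here's a rough sketch of that scaling idea: render the gizmo with the regular perspective projection and scale its world matrix with the distance to the camera so it covers a roughly constant fraction of the viewport. BuildScaleMatrix is a hypothetical helper; the other names come from the code further below.

// Rough sketch only: keep the gizmo a constant on-screen size under a perspective
// projection by scaling it proportionally to its distance from the camera.
Vector3f toGizmo = currentSelection->GetPosition() - mainCamera->GetOwner()->GetPosition();
float distance = sqrt( toGizmo.DotProduct( toGizmo ) );

// Height of the view frustum at that distance; make the gizmo roughly 10% of it.
float frustumHeight = 2.0f * distance * tan( mainCamera->GetFieldOfView() / 2.0f );
float gizmoScale = 0.1f * frustumHeight;

EEData::Matrix4x4 translation;
translation.AsTranslation( currentSelection->GetPosition() );
// BuildScaleMatrix is hypothetical; the multiplication order may need swapping
// depending on the row/column vector convention.
EEData::Matrix4x4 gizmoWorld = EEData::Matrix4x4::BuildScaleMatrix( gizmoScale ) * translation;

renderer.ApplyMatrix( Renderer::PROJECTION, mainCamera->GetProjectionMatrix() );
renderer.DrawMesh( transformMesh->GetId(), gizmoWorld );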

Here's the rendering code; it also includes building the ortho matrix and the altered position:

//Create an orthographic projection matrix that can see everything the perspective camera can see.
//Since the perspective camera's view is biggest near the far clipping plane we'll have to calculate
//the orthographic matrix to include that area.
float height = tan( mainCamera->GetFieldOfView() / 2.0f ) * mainCamera->GetFarClipDistance() * 2;
float width = height * mainCamera->GetAspectRatio();
EEData::Matrix4x4 ortho = EEData::Matrix4x4::BuildOrthoMatrix( width, height, mainCamera->GetNearClipDistance(), mainCamera->GetFarClipDistance() );
renderer.ApplyMatrix( Renderer::PROJECTION, ortho );

//Since the object we're placing the gizmo on is rendered using a perspective view we'll
//need to convert its origin to a position that results in the same position on screen if
//we're using an orthographic camera. This is needed so that the gizmo always stays at
//the selection's origin.
EEData::Matrix4x4 worldMatrix;

//Find the position on screen.
Vector3f clipPos = mainCamera->WorldSpaceToClipSpace( currentSelection->GetPosition() );
Logger::Instance().QuickPrint( "x: %f, y: %f, z: %f", clipPos.x, clipPos.y, clipPos.z );
//Fill in the new position using the screen position combined with the camera's coordinate system.
Vector3f newPos;
newPos += mainCamera->GetOwner()->GetMatrix().GetTangent() * (clipPos.x * width / 2.0f);
newPos += mainCamera->GetOwner()->GetMatrix().GetNormal() * (clipPos.y * height / 2.0f);
newPos += mainCamera->GetOwner()->GetMatrix().GetBiNormal() * mainCamera->GetOwner()->GetMatrix().GetBiNormal().DotProduct( currentSelection->GetPosition() - mainCamera->GetOwner()->GetPosition() );
worldMatrix.AsTranslation( newPos );

bool oldDepthTest = renderer.SetRenderState( Renderer::ECHORS_ZENABLE, false );
renderer.DrawMesh( transformMesh->GetId(), worldMatrix );
renderer.SetRenderState( Renderer::ECHORS_ZENABLE, oldDepthTest );

//Apply the camera's perspective matrix again for any further rendering.
renderer.ApplyMatrix( Renderer::PROJECTION, mainCamera->GetProjectionMatrix() );

Some extra references for the WorldSpaceToClipSpace function:

/**
* Converts a world position to clip space position.
*
* @return: The clip space position. The left side of the screen is x = -1, the right
* side of the screen is x = 1. The top side of the screen is y = 1, the bottom side of
* the screen is y = -1;
*/

Vector3f ComponentCamera::WorldSpaceToClipSpace( const Vector3f& worldPosition ) const
{
	EEData::Matrix4x4 viewProj = viewMatrix * projectionMatrix;
	Vector4f result = Vector4f( worldPosition.x, worldPosition.y, worldPosition.z, 1.0f );
	viewProj.TransformPoint( result );
	double rhw = 1.0 / result.w;
	result = Vector4f( (result.x * rhw),
	                   (result.y * rhw),
	                   (result.z * rhw),
	                   rhw );
	return result.XYZ();
}


void Matrix4x4::TransformPoint( Vector4f& point ) const
{
	float x = point.x * values[ 0 ] +
	          point.y * values[ 4 ] +
	          point.z * values[ 8 ] +
	          point.w * values[ 12 ];
	float y = point.x * values[ 1 ] +
	          point.y * values[ 5 ] +
	          point.z * values[ 9 ] +
	          point.w * values[ 13 ];
	float z = point.x * values[ 2 ] +
	          point.y * values[ 6 ] +
	          point.z * values[ 10 ] +
	          point.w * values[ 14 ];
	float w = point.x * values[ 3 ] +
	          point.y * values[ 7 ] +
	          point.z * values[ 11 ] +
	          point.w * values[ 15 ];
	point.x = x;
	point.y = y;
	point.z = z;
	point.w = w;
}

While there are probably still some errors in there, I don't think they're affecting the gizmo's shape, are they? I think the error must be somewhere in my shader, so I'll post that as well. The shader assumes a quite specific vertex format, so a quick layout (there's a small sketch of it after this list):
  • gl_Vertex.xy: the AABillboard's size, where x indicates width and y indicates length in the direction of the aligned axis.
  • gl_Vertex.z: color index, used to color the quads. Colors are provided through a uniform array.
  • gl_Normal: the axis to which this billboard should be aligned. So, for example, 1.0f, 0.0f, 0.0f should result in a billboard that rotates around the x axis.
  • Texture coordinates are standard; I'll use these later to put an arrow texture or something on the billboards.
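
To make that layout concrete, here's a hypothetical sketch of one billboard quad as it would be filled into the vertex buffer (the struct name and the values are purely illustrative, not my actual mesh code):

// Illustrative only: the per-vertex data the gizmo shader expects.
struct GizmoVertex
{
	float size[ 2 ];      // goes into gl_Vertex.xy: width and length along the aligned axis
	float colorIndex;     // goes into gl_Vertex.z: index into the Colors uniform array
	float axis[ 3 ];      // goes into gl_Normal: the axis this billboard rotates around
	float uv[ 2 ];        // standard texture coordinates
};

// One quad of the x-axis billboard (color 0, aligned to +X).
GizmoVertex xAxisQuad[ 4 ] =
{
	{ { -0.05f, 0.0f }, 0.0f, { 1.0f, 0.0f, 0.0f }, { 0.0f, 0.0f } },
	{ {  0.05f, 0.0f }, 0.0f, { 1.0f, 0.0f, 0.0f }, { 1.0f, 0.0f } },
	{ { -0.05f, 1.0f }, 0.0f, { 1.0f, 0.0f, 0.0f }, { 0.0f, 1.0f } },
	{ {  0.05f, 1.0f }, 0.0f, { 1.0f, 0.0f, 0.0f }, { 1.0f, 1.0f } },
};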

The shader responsible for rendering the gizmo:
Shader GizmoTransform
{
  RenderQueue AlphaOff
  Properties
  {
	Matrix World
	Matrix ViewInverse
	Matrix Proj
	Matrix WorldViewProj
	Vec3 CameraPosition
	Vec3Array Colors
  }
  Pass p0
  {
	VertexShader
	{
	  #version 110
	  uniform mat4 World;
	  uniform mat4 ViewInverse;
	  uniform mat4 WorldViewProj;
	  uniform vec3 CameraPosition;
	  uniform vec3 Colors[ 3 ];

	  varying vec2 texCoord;
	  varying vec3 color;
	  void main()
	  {
		vec3 origin;
		origin.x = World[ 3 ][ 0 ];
		origin.y = World[ 3 ][ 1 ];
		origin.z = World[ 3 ][ 2 ];

		vec3 forward = normalize( origin - CameraPosition );
		forward = normalize( vec3( ViewInverse[2][0], ViewInverse[2][1], ViewInverse[2][2] ) );
		forward *= (vec3(1.0) - gl_Normal);
		forward = normalize( forward );

		vec3 up = gl_Normal;
		vec3 right = normalize( cross( up, forward ) );

		vec2 direction = gl_Vertex.xy;
		vec3 vRight = direction.xxx * right;
		vec3 vUp = direction.yyy * up;
		vec4 vPos = vec4( vRight + vUp, 1 );

		gl_Position = WorldViewProj * vPos;
		//gl_Position.xyz *= (gl_Position.w);
		texCoord = vec2( gl_MultiTexCoord0.x, gl_MultiTexCoord0.y );
		color = Colors[ int(gl_Vertex.z) ];
	  }
	}
	FragmentShader
	{
	  #version 110
	  varying vec2 texCoord;
	  varying vec3 color;
	  void main()
	  {
		//gl_FragColor = vec4( texCoord.x, texCoord.y, 0, 1 );
		gl_FragColor = vec4( color, 1 );
	  }
	}
  }
}

The GLSL code can be found in the "VertexShader" and "FragmentShader" blocks; the rest is used by the engine and has no effect on the resulting shader whatsoever.
And finally a picture to show what I mean:
[image: screenshot of the gizmo as currently rendered]

Please help me with any issues you spot. Suggestions for a better approach to what I'm trying are welcome as well.

*Edit: Oh right, I forgot to mention my matrices are row-major and are not transposed when set as uniforms. This allows me to write matrix * vector rather than vector * matrix.

Deferred lighting, lit value influenced by camera

30 October 2011 - 09:42 AM

Hey,

I'm having a problem calculating the light value when rendering a full-screen quad in the deferred lighting light pass. The problem is that the calculated light value changes when the camera's position changes. I'm seeing a pattern in the error: a larger lit value than expected between the camera's position and the 0, 0, 0 position. Here are some images to show what I mean (far first, near second):
[images: the lit scene rendered from a far and a near camera position]


On the left, behind the bright specular-ish looking bulb, is the 0, 0, 0 position. On the right, the houses seem to be lit correctly, but the wall behind them now suffers from the same problem.

Some details about my setup:
-32bit hardware Depth buffer
-RGB32F texture for normals
-Wrap s/t: Clamp to edge
-min mag filter: linear
-Row major matrices
-View = inverted camera's world
-WorldView = World * View
-WorldViewProj = World * View * Proj
-ProjInverse = inverted Proj
-Vertex provided texture coordinates top-left = 0, 0.

Geometry pass shader:
Pass p1 : PREPASS1
  {
    VertexShader
    {
      #version 110
      uniform mat4 WorldViewProj;
      uniform mat4 WorldView;
      varying vec4 normal;
      void main()
      {
        gl_Position = WorldViewProj * gl_Vertex;
        normal = normalize( WorldView * vec4( gl_Normal, 0 )  );
      }
    }
    FragmentShader
    {
      #version 110
      varying vec4 normal;
      void main()
      {
        gl_FragData[ 0 ] = normal * 0.5 + 0.5;
      }
    }
  }


Light pass shader:
Pass p0
  {
    VertexShader
    {
      #version 110
      uniform ivec2 ScreenSize;
      varying vec2 texCoord;
      void main()
      {
        vec4 vPos;
        vPos.x = (gl_Vertex.x) / (float(ScreenSize.x)/2.0) - 1.0;
        vPos.y = 1.0 - (gl_Vertex.y) / (float(ScreenSize.y)/2.0);
        vPos.z = 0.0;
        vPos.w = 1.0;
        gl_Position = vPos;
        texCoord = vec2( gl_MultiTexCoord0.x, 1.0-gl_MultiTexCoord0.y );
      }
    }
    FragmentShader
    {
		#version 110
		uniform sampler2D Texture0;//normal
		uniform sampler2D Texture1;//depth
		varying vec2 texCoord;
		uniform mat4 View;
		uniform mat4 ProjInverse;
		void main()
		{
			float depth = texture2D( Texture1, texCoord ).r;
			vec3 normal = (texture2D( Texture0, texCoord ).rgb - 0.5)*2;
			vec4 projectedPos = vec4( 1.0 );
			projectedPos.x = texCoord.x * 2 - 1;
			projectedPos.y = texCoord.y * 2 - 1;
			projectedPos.z = depth;
			vec4 posVS4d = ProjInverse * projectedPos;
			vec3 posVS3d = posVS4d.xyz / posVS4d.w;
			vec4 lightPos = vec4( 0, 0, 0, 1 );
			lightPos = View * lightPos;
			vec3 toLight = normalize( lightPos.xyz - posVS3d );
			float lit = max( 0.0, dot( toLight, normal ) );
			if( depth > 0.9999 )
				gl_FragData[ 0 ] = vec4( 0.0, 1.0, 1.0, 1.0 );
			else
				gl_FragData[ 0 ] = vec4( lit );
		}
    }
  }


For now I'm using a light hardcoded at position 0, 0, 0. I'm trying to do the calculations in view space (I also tried world space, but without luck). I've followed some tutorials, but none seem to have this particular problem. I've also seen there is a different approach using a view ray extracted from the frustum, but I'd like to get this to work using unprojection first.
My question is whether someone can spot an error in what I'm doing.
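
To sanity-check the unprojection math in isolation, here's a small standalone round trip (it uses GLM purely for the check; the engine's own matrices should behave the same). It projects a known view-space point, stores depth the way a depth buffer would, and reconstructs it; it also makes the [0,1] to [-1,1] depth remap explicit, which may or may not apply here depending on how the depth texture is written and sampled.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
	glm::mat4 proj = glm::perspective( glm::radians( 60.0f ), 16.0f / 9.0f, 0.1f, 100.0f );

	glm::vec3 posVS( 1.0f, 2.0f, -10.0f );                  // known view-space point
	glm::vec4 clip  = proj * glm::vec4( posVS, 1.0f );      // project
	glm::vec3 ndc   = glm::vec3( clip ) / clip.w;           // perspective divide, all in [-1,1]
	float depthBuf  = ndc.z * 0.5f + 0.5f;                  // what a GL depth buffer stores by default

	// Reconstruction as in the light pass, but with z remapped back to [-1,1].
	glm::vec4 p   = glm::inverse( proj ) * glm::vec4( ndc.x, ndc.y, depthBuf * 2.0f - 1.0f, 1.0f );
	glm::vec3 rec = glm::vec3( p ) / p.w;

	printf( "%f %f %f\n", rec.x, rec.y, rec.z );            // expect roughly 1 2 -10
	return 0;
}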

GL_UNSIGNED_BYTE indices

09 October 2011 - 09:45 AM

Hey,

I'm currently working on some geometry tools, and I'm having problems with the first one, which should create a cube. The cube it generates uses a pos/nor/uv vertex format (interleaved VBO) and uses an IBO to render. The indices are of type unsigned char and are correct when debugging with MSVC++.
The problem I'm having is that the first cube I create and load seems to have incorrect index buffer contents according to gDEBugger. When I load the exact same cube a second time, that cube has correct index data. I'm wondering if this has anything to do with the GL_UNSIGNED_BYTE index type, because when I force the cube's indices to 16 bits each, the problem seems to be gone.
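
For reference, here's a simplified sketch of how the byte indices get uploaded and drawn (not my actual code, just the general shape; it assumes a GL context, GL headers, and the usual 24-vertex / 36-index cube):

// GL_UNSIGNED_BYTE is a legal index type for glDrawElements; each index just has to fit in 0..255.
unsigned char indices[ 36 ] =
{
	 0,  1,  2,   2,  1,  3,    4,  5,  6,   6,  5,  7,
	 8,  9, 10,  10,  9, 11,   12, 13, 14,  14, 13, 15,
	16, 17, 18,  18, 17, 19,   20, 21, 22,  22, 21, 23,
};

GLuint ibo = 0;
glGenBuffers( 1, &ibo );
glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
glBufferData( GL_ELEMENT_ARRAY_BUFFER, sizeof( indices ), indices, GL_STATIC_DRAW );

// ...bind the interleaved VBO and set up the vertex attributes here...

glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, ibo );
glDrawElements( GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, 0 );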

For completeness, here's gDEBugger's output of the indices (the incorrect first-loaded cube first, the correct one second):
First cube (incorrect):
244, 222, 244, 222, 244, 222, 244, 222, 244, 222, 244, 222,
244, 222, 244, 222, 244, 222, 244, 222, 244, 222, 244, 222,
244, 222, 244, 222, 244, 222, 244, 222, 244, 222, 244, 222

Second cube (correct):
0, 1, 2, 2, 1, 3,
4, 5, 6, 6, 5, 7,
8, 9, 10, 10, 9, 11,
12, 13, 14, 14, 13, 15,
16, 17, 18, 18, 17, 19,
20, 21, 22, 22, 21, 23

Looking at the incorrect indices, they keep repeating 244 and 222 (is this some super secret error indication or something?).

I also found "Clients must align data elements consistent with the requirements of the client platform, with an additional base-level requirement that an offset within a buffer to a datum comprising N bytes be a multiple of N." in the glBufferData specs. I've got no idea what this means, but might it be related?
I'm just asking to see if someone recognizes this problem and knows of a known error or fix.

Thanks in advance,
Murdocki
