BirdiePeep

Member Since 20 May 2007
Offline Last Active Apr 22 2013 02:30 PM

Topics I've Started

Crash on reference obj method call OSX x64

29 March 2013 - 01:17 PM

My default assumption has been that I've done something wrong. After a few days of looking into the issue, however, I've reached an impasse and could use input from people who are more familiar with the internals.


The issue is fairly simple. I have a registered reference type, and in script I call a method on it. On the C++ side the code asserts because the object pointer is not correct. Below is the information I've gathered about the crash.


System: Mac OS X 10.8.2

IDE: Xcode 4

Architecture: 64-bit

Compiler: Apple LLVM 4.2 (default); also occurs with LLVM GCC 4.2

Config: Appears in both Debug and Release builds

AngelScript: 2.26.1


- The issue appears to be related to functions which return a specific registered value type (Vector)

- Other method calls on the same object work fine

- Multiple object types suffer from this crash

- Occurs with both direct method calls and method wrapper functions

- The pointer to the object stored in AngelScript is correct

- I have also tracked the object pointer up to the assembly code in x64_CallFunction, and it appears to be fine until then

- The same codebase works fine on Windows with VS2010.  I'm currently porting the code to OS X.

- I've attempted to re-create the issue with a new class value type and new methods, but those work fine.

- The issue also occurs if I use the "LLVM GCC 4.2" compiler


For reference, here is an example configuration of the classes in question (not the full config). Again, this code works fine on the Windows side, so I think the issue is deeper.


//This is the suspect value type
ret = engine->RegisterObjectType("Vector", sizeof(GtVector), asOBJ_VALUE | asOBJ_APP_CLASS_CDAK); assert( ret >= 0 );

//Example of object method that fails
ret = engine->RegisterObjectType("Entity", 0, asOBJ_REF); assert( ret >= 0 );
ret = engine->RegisterObjectMethod("Entity", "Vector getOrigin()", asMETHOD(Entity, getOrigin), asCALL_THISCALL); assert( ret >= 0 );

//Another example of object method that fails
ret = engine->RegisterObjectType("Level", 0, asOBJ_REF | asOBJ_NOCOUNT); assert( ret >= 0 );
ret = engine->RegisterObjectMethod("Level", "Vector getBlockOrigin(const Vector& in)", asFUNCTION(_Level_getBlockOrigin), asCALL_CDECL_OBJFIRST); assert( ret >= 0 );

//Example of code that will break
Entity@ entity = CreateEntity("test");  //Works
entity.setOrigin(Vector(1,1,1));  //Works
Vector vector = entity.getOrigin(); //Fails: Entity::getOrigin() is called, but obj pointer is not right


At this point I believe the issue may involve my compile configuration and/or as_callfunc_x64_gcc.cpp. However, I'm not well versed in assembler, so I'm stuck on how to proceed. Any information anyone can provide is welcome. If there is more information I can give about my issue, please let me know.
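
One experiment I still plan to try, based on my reading of the AngelScript docs on application class flags (so this is an assumption, not a confirmed fix): if GtVector contains only float members, the x64 System V ABI returns it in SSE registers, and AngelScript needs the asOBJ_APP_CLASS_ALLFLOATS hint to know that. If it instead assumes a hidden return pointer, every argument register shifts by one, which would explain the bad object pointer.

//Experiment (assumption): GtVector is presumed to hold only floats (e.g. x, y, z)
ret = engine->RegisterObjectType("Vector", sizeof(GtVector),
	asOBJ_VALUE | asOBJ_APP_CLASS_CDAK | asOBJ_APP_CLASS_ALLFLOATS); assert( ret >= 0 );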


Scene graph with composable shaders

18 April 2011 - 09:21 PM

I'm using a custom scene graph based on Wild Magic. It was originally intended for a fixed-function graphics pipeline, but my requirements have evolved and I now want a shader-based implementation. Moving away from the OpenGL fixed-function pipeline has been a big transition; it has changed the scene graph a lot and I've had to learn many new techniques.

My main concern with the scene graph is supporting pseudo-composite shaders, so that I don't have to write a custom shader for every variation I want. Obviously OpenGL does not support this directly, but I've read a few articles suggesting that you can achieve similar functionality with a skeleton "master" shader: parts of one main shader are switched on and off depending on which other shader modules are added to it at compile time.

For example:

Main Shader
void main(void)
{
	vec4 color;
	
	#ifdef LIGHTING
		color = lightingFunction();
	#else
		color = vec4(1.0, 1.0, 1.0, 1.0);
	#endif
	
	// etc...
}

Module Shader
#define LIGHTING

vec4 lightingFunction()
{
	// Some lighting functionality...
}
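
For what it's worth, one way to stitch a module and the master shader together (just a sketch, not necessarily how it must be done; moduleSrc and masterSrc are placeholder strings) is glShaderSource's multi-string form, which concatenates the array of sources so the module's #define block simply precedes the master source:

//Sketch: composing a module with the master shader via glShaderSource's
//multi-string form. moduleSrc and masterSrc are hypothetical source strings.
const GLchar* sources[2] = { moduleSrc, masterSrc };  //module's #defines come first
GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(shader, 2, sources, NULL);  //NULL lengths => null-terminated strings
glCompileShader(shader);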

I know the above technique works; I've used it before, and it provides the modular functionality that helps reduce the number of unique shaders you need to write. The issue is that my scene graph doesn't know or care about this technique, and I want to restructure it to better facilitate this. Below is how I propose to do it...

Each scene node has a list of "shader" decorators, and these decorators are propagated down to the geometry leaf nodes during a render-state update pass. This means each leaf node knows all of the shaders attached to it in the hierarchy. Using this list of shaders, each geometry piece can then compile its own program that defines exactly how it will render. I know this technique will work, and it solves some unique issues...

Up Side
- Allows the creation of modular/composite like shaders inside of the scene graph
- Allows me to add and manage a shader at a single node rather than at all individual leaves

Down Side
- Creating many programs could be slow
- Many of the programs created may be completely identical (see the sketch after this list for one way to deduplicate them)
- No verification that a particular shader is compatible with the master skeleton shader
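
One mitigation I'm considering for the duplicate-program downside (only a sketch; GLuint assumes the usual GL headers, and compileProgram is a hypothetical helper that does the actual glShaderSource/glCompileShader/glLinkProgram work): key a program cache on the sorted set of attached shader module names, so identical combinations share a single compiled program.

#include <map>
#include <set>
#include <string>
#include <vector>

typedef std::set<std::string> ModuleKey;  //sorted set => order-independent key

std::map<ModuleKey, GLuint> g_programCache;

GLuint getProgram(const std::vector<std::string>& moduleNames)
{
	ModuleKey key(moduleNames.begin(), moduleNames.end());
	std::map<ModuleKey, GLuint>::const_iterator it = g_programCache.find(key);
	if (it != g_programCache.end())
		return it->second;  //an identical module combination was already compiled

	GLuint program = compileProgram(moduleNames);  //hypothetical compile/link step
	g_programCache[key] = program;
	return program;
}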

At the moment I feel the above solution, despite its downsides, is the best for what I want to do. But I want to know what people think of it, and if you have solved some of these issues in a different way I would love to hear about it. Unfortunately, there isn't a lot of good information on these more advanced topics that doesn't get weighed down in specifics.

So tell me what you think, if you do something different, or if you think I'm missing something.

GUIs that can be skinned

09 July 2010 - 10:33 AM

There are many GUI libraries out there, each with its own strengths and weaknesses. I'm looking for people's opinions on available GUIs that can be skinned. This is a major deciding factor for my current project, and I would love to hear other people's experiences.

For the project we are going to be using C++ and OpenGL. However, if you know of or have experience with a good skinnable GUI using a different back end, I would still like to hear about it.

Fast blit using GDI and PBO

18 August 2009 - 05:24 AM

This seems like the most appropriate place to ask this question, though it involves two different APIs: OpenGL and Windows GDI. I'm attempting to improve the speed of a custom operation. My implementation is fast, but at this resolution it's a bit too slow. I use a hidden OpenGL context to render my scene, read the data back with glReadPixels into a DIB, then BitBlt that to a window:

- Render the scene
- glReadPixels into the DIB data
- BitBlt into the window

At 2560x1600, the glReadPixels takes about 33ms and the BitBlt takes 3-4ms. I did some research and learned that a PBO might speed up the operation. I tested it, and the actual readback was about 10ms faster, but the problem is that the PBO puts the data into its own memory location. I can copy the data from there into the DIB, but then I lose the speedup I just gained. I would copy the PBO data directly to the window, but GDI does not seem to provide a good function for that.

So here are some specific questions for anyone who might know:

- Is there a way to have a PBO map into my own allocated memory? I assume not.
- Is there a better way to blit the data to the window?
- Is there a way to directly change the bits pointer of a DIB bitmap? If so, this might also solve the issue, as I could intelligently swap the pointers around.
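
For reference, here is the double-buffered PBO pattern I may try next (a sketch under assumptions: WIDTH, HEIGHT, dibBits, and the persistent frame counter are placeholders). By alternating between two PBOs, the asynchronous glReadPixels for the current frame overlaps with mapping and copying the previous frame, which should hide most of the readback latency:

//Sketch: ping-pong PBO readback; assumes GL 2.1+ and <cstring> for memcpy.
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i)
{
	glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
	glBufferData(GL_PIXEL_PACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_READ);
}

//Each frame, after rendering:
int cur  = frame % 2;        //PBO that receives this frame's pixels
int prev = (frame + 1) % 2;  //PBO filled last frame, now ready to map

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
glReadPixels(0, 0, WIDTH, HEIGHT, GL_BGRA, GL_UNSIGNED_BYTE, 0);  //async into the PBO

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (src)
{
	memcpy(dibBits, src, WIDTH * HEIGHT * 4);  //into the DIB, then BitBlt as before
	glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
++frame;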

How does the STL vector clean up memory?

15 September 2008 - 10:24 AM

It's known that when you remove an object from an STL vector, the vector correctly calls its destructor. Then, when the vector itself is destroyed, it also calls the destructors of the remaining objects. However, I'm a little confused about how it produces this last result. For example, say we had a vector with a size of 6 and enough memory allocated for 12 objects. When the vector "delete"s that memory, what prevents the last 6 memory slots from trying to call the destructor for the class type? I would imagine there is some memory trick involved, but I have yet to find anything that confirms this.
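
From what I've pieced together since posting, the trick appears to be that vector separates allocation from construction: it grabs raw memory through its allocator, builds elements with placement new, destroys exactly size() elements with explicit destructor calls, and then frees the raw memory with a routine that runs no destructors at all. A minimal sketch of the same mechanism (Widget is a hypothetical element type):

#include <new>  //placement new

struct Widget
{
	Widget()  {}
	~Widget() {}
};

int main()
{
	//Allocate raw memory for 12 Widgets; no constructors run here.
	void* raw = ::operator new(12 * sizeof(Widget));
	Widget* data = static_cast<Widget*>(raw);

	//Construct only 6 elements with placement new, like a vector's [0, size()) range.
	for (int i = 0; i < 6; ++i)
		new (data + i) Widget();

	//Destroy exactly the 6 constructed objects with explicit destructor calls;
	//the other 6 slots are untouched raw memory and never see a destructor.
	for (int i = 0; i < 6; ++i)
		data[i].~Widget();

	//Release the raw memory; ::operator delete runs no destructors.
	::operator delete(raw);
	return 0;
}

- Kiro
www.wingsout.com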
