

Community Reputation

131 Neutral

About CodeReaver

  1. Hi,

    I'm trying to compile assimp for use in Visual Studio Express 2013 on Windows 8, and I'm having trouble getting CMake to build the project. I attempted to follow this video https://www.youtube.com/watch?v=_vjs0cH8qls, but because of updates to CMake (which I've never used before) or to assimp itself, I'm unable to create the make file. I downloaded the source-only package from http://sourceforge.net/projects/assimp/files/assimp-3.0/. The generator in CMake complained that I hadn't set any compilers and didn't give me anywhere to set them, and since the only compilers I know are on my machine are in VS Express, I chose the generator for that instead. I then got a few errors about DirectX 9, which disappeared once I disabled BUILD_ASSIMP_TOOLS, and errors about Boost being missing, which disappeared once I downloaded it or enabled the workaround option. After generating the files, the make file wasn't included, so I was unable to start the project.

    After that, I tried to upgrade the VC9 solution that was included with the source code, but that didn't work because it always wanted to build the tools version and had undefined variables anyway.

    I finally considered downloading assimp-sdk-3.0-setup.exe and using either the DLL or the libraries, but I'm not used to setting those up other than for DirectX, and I can't remember whether that just hooks everything up itself when installed or what to do otherwise.

    Ultimately I don't mind how I get it working, as long as I can incorporate the model-loading functionality into the project I've set up. Has anyone gotten this working with Visual Studio Express 2013 on Windows 8 who can let me know a simple way of doing so?

    Sorry if I'm not supposed to embed the YouTube video; I didn't know how to prevent it.
  2. Marching cube algorithm problem

    EDIT: The edit button showed up, but now I can't delete.
  3. Marching cube algorithm problem

    Hi, I'm trying to do the same sort of thing, but I'm getting a bit confused by the terminology. What exactly is an isovalue? I currently have a cloud of points and a bounding box containing them. I intend to divide the bounding box into equal-sized cubes, loop through the lot of them, eliminate the ones that don't contain any points, and use the rest as input for the Polygonise function. Unfortunately, that only gives me the input for XYZ p[8] in the GridCell, and I don't know where to get the rest of the input.

    EDIT: I should probably explain that I'm attempting to get an approximation of a convex hull that surrounds the cloud of points. I realise I won't get a perfect convex hull, but my goal is to get as close as I can. I'm hoping to get something like the nearest point on a grid to where the convex hull would be, something that would be acceptable as a collision mesh.
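To make the isovalue question above concrete: the isovalue at each cube corner is just a sample of some scalar field, and for a bare point cloud one simple stand-in field is "distance to the nearest point", with the isolevel set to roughly one cell size. This is only a sketch under that assumption; Vec3, IsoField and SampleIso are made-up names, not part of the Polygonise code.

```csharp
using System;
using System.Linq;

// Minimal scalar-field sketch for feeding marching cubes: the "isovalue" at a
// grid corner is a sample of a scalar field. For a raw point cloud, one easy
// field is the distance from the corner to the nearest cloud point; the
// extracted surface then lies where that distance equals the chosen isolevel.
struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public float DistanceTo(Vec3 o)
    {
        float dx = X - o.X, dy = Y - o.Y, dz = Z - o.Z;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}

static class IsoField
{
    // Isovalue for one grid corner: nearest-point distance over the cloud.
    public static float SampleIso(Vec3 corner, Vec3[] cloud)
    {
        return cloud.Min(p => corner.DistanceTo(p));
    }
}

class Demo
{
    static void Main()
    {
        Vec3[] cloud = { new Vec3(0, 0, 0), new Vec3(1, 0, 0) };
        // A corner sitting on a cloud point samples 0; half a unit away samples 0.5.
        Console.WriteLine(IsoField.SampleIso(new Vec3(0, 0, 0), cloud));
        Console.WriteLine(IsoField.SampleIso(new Vec3(0.5f, 0, 0), cloud));
    }
}
```

So alongside the eight XYZ p[8] corner positions, the remaining val[8] entries of the GridCell would be SampleIso at those corners, and the isolevel passed to Polygonise decides how tightly the surface hugs the cloud.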
  4. I have an old MDX program that I'm preparing to convert to XNA, so I need to make sure my shader technique does the same as my fixed-function one. I also want to make sure I'm not setting any device states that were only needed for the FFP, as they'll either do nothing or prevent the effect file from loading. I think I remember reading that ColorOp and AlphaOp were removed with the fixed-function pipeline, so I'm guessing ColorArg1, ColorArg2, AlphaArg1 and AlphaArg2 will have gone too, along with MaterialAmbient, MaterialDiffuse, MaterialAmbientSource, MaterialDiffuseSource, etc. I've used these in the effect below along with several other effect states, but I haven't been able to find a definitive list of which states were removed with the FFP. Would anyone be able to refer me to one, or if possible have a look over my effect file below? I imagine there are still a few more FFP leftovers in it. Another thing I wanted to ask is how I might use lighting and vertex colors at the same time. I think I managed to do that in the FFP, but not in my shader. I guess I'd just multiply the final lighting diffuse by the diffuse in the vertex color.
[code]
texture xTexture0;

bool bLighting;
bool bColorVertex;
int iCullMode;
int iFillMode;
int iColorOperation;
int iAlphaOperation;
int iTextureFilter;
int iAmbientMaterialSource;
int iDiffuseMaterialSource;

float4x4 xWorld : WORLD;
float4x4 xView : VIEW;
float4x4 xWorldViewProj : WORLDVIEWPROJ;

float4 xMaterialAmbient;
float4 xMaterialDiffuse;
float4 xMaterialSpecular;
float4 xMaterialEmissive;

struct Light
{
    float4 Position;
    float4 Direction;
    float4 Diffuse;
    float4 Specular;
    float4 Ambient;
    float4 Attenuation;
};

Light xLights[1];

// application to vertex structure
struct a2v
{
    float4 position : POSITION0;
    float3 normal   : NORMAL;
    float4 diffuse  : COLOR0;
    float2 tex0     : TEXCOORD0;
};

// vertex to pixel processing structure
struct v2p
{
    float4 position : POSITION0;
    float4 color    : COLOR0;
    float2 tex0     : TEXCOORD0;
};

void vs(in a2v IN, out v2p OUT)
{
    OUT.position = mul(IN.position, xWorldViewProj);

    float3 xWorldNormal = normalize(mul(IN.normal, xWorld));
    float3 xWorldPosition = mul(IN.position, xWorld);
    float3 xCameraPosition = mul(xWorldPosition, xView);

    if (bLighting)
    {
        float4 diffuse = float4(0, 0, 0, 1);
        float4 specular = float4(0, 0, 0, 1);

        for (int i = 0; i < 1; i++)
        {
            float3 toLight = xLights[i].Position.xyz - xWorldPosition;
            float lightDist = length(toLight);
            float fAttenuation = 1.0 / dot(xLights[i].Attenuation, float4(1, lightDist, lightDist * lightDist, 0));
            float3 lightDir = normalize(xLights[i].Direction); // normalize(toLight);
            float3 halfAngle = normalize(normalize(-xCameraPosition) + lightDir);

            diffuse += max(0, dot(lightDir, xWorldNormal) * xLights[i].Diffuse * fAttenuation) + xLights[i].Ambient;
            specular += max(0, pow(dot(halfAngle, xWorldNormal), 64) * xLights[i].Specular * fAttenuation);
        }

        OUT.color = diffuse + specular;
    }
    else
    {
        OUT.color = IN.diffuse; // float4(1,1,1,1);
    }

    OUT.tex0 = IN.tex0;
}

float4 ps() : COLOR
{
    return float4(1, 1, 1, 1);
}

technique FixedFunctionPipeline
{
    pass p0
    {
        Texture[0] = <xTexture0>;
        MaterialAmbient = <xMaterialAmbient>;
        MaterialDiffuse = <xMaterialDiffuse>;
        Lighting = <bLighting>;
        ColorVertex = <bColorVertex>;
        CullMode = <iCullMode>;
        FillMode = <iFillMode>;
        ZEnable = TRUE;
        ZWriteEnable = TRUE;
        DitherEnable = TRUE;
        SpecularEnable = FALSE;
        NormalizeNormals = TRUE;
        AlphaTestEnable = TRUE;
        AlphaFunc = GREATER;
        AlphaRef = 0xA0;
        ColorOp[0] = <iColorOperation>;
        ColorArg1[0] = TEXTURE;
        ColorArg2[0] = CURRENT;
        AlphaOp[0] = <iAlphaOperation>;
        AlphaArg1[0] = TEXTURE;
        AlphaArg2[0] = DIFFUSE;
        MagFilter[0] = <iTextureFilter>;
        MinFilter[0] = <iTextureFilter>;
        AmbientMaterialSource = <iAmbientMaterialSource>;
        DiffuseMaterialSource = <iDiffuseMaterialSource>;
        WorldTransform[0] = <xWorld>;
    }
}

technique Shader
{
    pass p0
    {
        vertexshader = compile vs_1_1 vs();
        //pixelshader = compile ps_1_1 ps();

        Texture[0] = <xTexture0>;
        MaterialAmbient = <xMaterialAmbient>;
        MaterialDiffuse = <xMaterialDiffuse>;
        Lighting = <bLighting>;
        ColorVertex = <bColorVertex>;
        CullMode = <iCullMode>;
        FillMode = <iFillMode>;
        ZEnable = TRUE;
        ZWriteEnable = TRUE;
        DitherEnable = TRUE;
        SpecularEnable = FALSE;
        NormalizeNormals = TRUE;
        AlphaTestEnable = TRUE;
        AlphaFunc = GREATER;
        AlphaRef = 0xA0;
        ColorOp[0] = <iColorOperation>;
        ColorArg1[0] = TEXTURE;
        ColorArg2[0] = CURRENT;
        AlphaOp[0] = <iAlphaOperation>;
        AlphaArg1[0] = TEXTURE;
        AlphaArg2[0] = DIFFUSE;
        MagFilter[0] = <iTextureFilter>;
        MinFilter[0] = <iTextureFilter>;
        AmbientMaterialSource = <iAmbientMaterialSource>;
        DiffuseMaterialSource = <iDiffuseMaterialSource>;
        WorldTransform[0] = <xWorld>;
    }
}
[/code]
  5. I'm trying to use an interface to expose the base type of its entries, but I couldn't find any examples of how to do that. The idea is that I can create a list of ModelExTextureInstances and then expose it as a fixed-length list of ModelExTextures in order to hide some of its functions. Does this look like it would work?

[code]
class ModelExTextureList : List<ModelExTextureInstance>, ModelExList<ModelExTexture>
{
    public new ModelExTexture this[int t]
    {
        get { return base[t]; }
    }

    public new IEnumerator<ModelExTexture> GetEnumerator()
    {
        for (int t = 0; t < Count; t++)
        {
            yield return base[t];
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
[/code]
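As a sanity check, the same pattern does compile and behave as described when tried with simple stand-in types. Base, Derived, IFixedList and DerivedList below are assumed names standing in for the ModelEx types; through the interface, consumers only ever see the base type.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Stand-ins for ModelExTexture / ModelExTextureInstance.
class Base { public string Name = "base"; }
class Derived : Base { public void Mutate() { Name = "changed"; } }

// Stand-in for ModelExList<T>: a fixed-length, read-only view.
interface IFixedList<T> : IEnumerable<T>
{
    T this[int index] { get; }
    int Length { get; }
}

class DerivedList : List<Derived>, IFixedList<Base>
{
    // 'new' hides List<Derived>'s read/write indexer; through IFixedList<Base>
    // callers only get a Base back, with no setter.
    public new Base this[int index] { get { return base[index]; } }

    public int Length { get { return Count; } }

    IEnumerator<Base> IEnumerable<Base>.GetEnumerator()
    {
        for (int i = 0; i < Count; i++)
            yield return base[i];
    }
}

class Demo
{
    static void Main()
    {
        var list = new DerivedList { new Derived() };
        IFixedList<Base> view = list;       // expose only the base type
        Console.WriteLine(view[0].Name);    // Mutate() is not reachable here
    }
}
```

From .NET 4.5 onwards, the covariant IReadOnlyList&lt;out T&gt; interface gives much the same effect without the custom interface, since a List&lt;Derived&gt; can be used directly as an IReadOnlyList&lt;Base&gt;.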
  6. Yeah, but when one of those imports a *.x file, they're going to have to interpret the container data. I haven't really used them myself, so I don't know how they'd treat the different parts of a mesh hierarchy, or whether it depends on any plugins they use.
  7. Anyone know what the typical usage is for MeshContainer.NextContainer? Is it used for alternate skins, for stuff like depth bias, or as a more advanced way of splitting the mesh into subsets? I'm writing a mesh export tool for character models that have a choice of skins for the animation set, and I need to know how something like 3DS Max or Maya would interpret it.
  8. Yeah, resource manager sounds more like what I meant. The idea is that I request a resource from the resource manager, the manager searches its list, and if it finds the resource, it returns it. If the resource isn't in the list, it loads the resource from a file, adds it to the list, and then returns it. However, since each resource can be referenced multiple times, I want the classes referencing it to be able to set the texture/effect as the current one, but not alter its data.
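The lookup-then-load flow described above can be sketched in a few lines. ResourceManager, Resource and Request are placeholder names for this sketch, not MDX types; a real manager would wrap textures or effects and do an actual file load.

```csharp
using System;
using System.Collections.Generic;

// Placeholder resource type; a real one would wrap a texture or effect
// plus extra data such as where the file was loaded from.
class Resource
{
    public string Path { get; private set; }
    public Resource(string path) { Path = path; }
}

class ResourceManager
{
    readonly Dictionary<string, Resource> m_cache = new Dictionary<string, Resource>();
    public int LoadCount { get; private set; } // how many real loads happened

    public Resource Request(string path)
    {
        Resource res;
        if (m_cache.TryGetValue(path, out res))
            return res;            // found in the list: reuse it

        res = Load(path);          // not in the list: load from file...
        m_cache[path] = res;       // ...add it to the list...
        return res;                // ...and return it
    }

    Resource Load(string path)
    {
        LoadCount++;
        return new Resource(path); // stand-in for the real file load
    }
}

class Demo
{
    static void Main()
    {
        var manager = new ResourceManager();
        Resource a = manager.Request("textures/grass.png");
        Resource b = manager.Request("textures/grass.png");
        Console.WriteLine(ReferenceEquals(a, b)); // both callers share one instance
        Console.WriteLine(manager.LoadCount);     // the file was only loaded once
    }
}
```

Because every caller gets the same shared instance, the "can use it but not alter it" requirement is usually handled by handing out an interface or wrapper that exposes only getters, which is the subject of posts 9 and 10 below.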
  9. An assembly's an exe or a dll, isn't it? I'd prefer not to have to split the final app into separate files, as it's a fairly small program at the moment and it's easier for people to keep track of that way. However, I always find myself restarting my projects due to being too stubborn to do stuff like that.
  10. I'm trying to come up with a file management system for use with a model editor I'm working on. I want the file manager class to create instances of texture and effect classes, which would contain some additional data such as where the files were loaded from. However, I also want the file manager class to be the only thing that controls the data inside those classes. In C++ I'd make the file manager class a friend of the other classes, but in C# I can't do that, so I'm looking for some programming techniques I could use instead. I know I can share internal members between assemblies, but I think that requires DLLs or multiple exes or something, and I'd prefer to stick with just the one exe. I've also seen some people use nested classes to get around this type of problem. However, nested classes start to look a bit messy in the examples I've seen, which have mostly been short snippets on forums and stuff. I guess I'm not leaving myself many options here, but I've heard that needing friend classes is a sign of poor code design anyway. Does anyone have any tips or examples of how similar problems have been dealt with?
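One common single-assembly substitute for friend, along the nested-class lines mentioned above: keep the concrete class as a private nested type of the manager and hand out only a read-only interface. Since the concrete type's name is private to the manager, nothing else in the exe can construct it or cast to it to reach the setters. FileManager, ITextureResource and Relocate are illustrative names, not from the original code.

```csharp
using System;

class FileManager
{
    // The read-only view the rest of the program gets: no setters.
    public interface ITextureResource
    {
        string SourcePath { get; }
    }

    // The concrete class is a private nested type: only FileManager can name
    // it, so only FileManager can construct it or reach its setter.
    private class TextureResource : ITextureResource
    {
        public string SourcePath { get; set; }
    }

    public ITextureResource LoadTexture(string path)
    {
        // Stand-in for the real file load.
        return new TextureResource { SourcePath = path };
    }

    public void Relocate(ITextureResource res, string newPath)
    {
        // Only the manager can cast back to the concrete type and mutate it.
        ((TextureResource)res).SourcePath = newPath;
    }
}

class Demo
{
    static void Main()
    {
        var manager = new FileManager();
        FileManager.ITextureResource tex = manager.LoadTexture("skin.png");
        manager.Relocate(tex, "skin2.png");
        Console.WriteLine(tex.SourcePath);
        // tex.SourcePath = "hack.png";  // does not compile: the interface has no setter
    }
}
```

The other standard option, if splitting into assemblies ever becomes acceptable, is the internal modifier plus the InternalsVisibleTo attribute, which is the closest thing C# has to friend.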
  11. Well, I assign the blank array of EffectInstances (with the number of attributes as its length) elsewhere in the code, and when it reaches the part in my other post, it just looks like the same blank array.
  12. Does anyone have any clue about this? Could these errors be caused by invalid data in the new EffectInstance? I'd have assumed it would throw an exception while filling in the EffectInstance's values, but with it being a struct, maybe it doesn't notice until I try to pass the values to a function. I still can't tell from the error messages, though.
  13. I'm trying to replace the EffectInstances that are stored in a MeshContainer with a modified set, but whenever I try to do so, the values either don't change or I get an error message.

[code]
EffectInstance xEffectInstance = new EffectInstance();
xEffectInstance.EffectFilename = strFileName;
xEffectInstance.SetDefaults(axEffectDefaults);

EffectInstance[] axEffectInstances = new EffectInstance[m_xMeshContainer.GetEffectInstances().Length];
for (int i = 0; i < m_xMeshContainer.GetEffectInstances().Length; i++)
{
    axEffectInstances[i] = m_xMeshContainer.GetEffectInstances()[i];
}
axEffectInstances[iIndex] = xEffectInstance;

m_xMeshContainer.SetEffectInstances(axEffectInstances);
// m_xMeshContainer.GetEffectInstances()[iIndex] = xEffectInstance;
[/code]

    When I view it in the debugger, xEffectInstance and axEffectInstances are both set up properly and contain the correct data, but if I try to replace the whole array, I get the error "Object contains non-primitive or non-blittable data.", and when I use the commented line instead, the EffectInstances in the MeshContainer don't get replaced with the new value. Is this anything to do with EffectInstance being a struct instead of a class? (I'm using C#.)
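The commented line failing silently is at least consistent with copy semantics, independent of anything MDX-specific: if a getter hands back a copy of its internal array (as a marshalling layer typically does), then writing into the returned array's elements never touches the container's own data, and only the setter call sticks. Instance and Container below are stand-ins for EffectInstance and MeshContainer, built on that assumption.

```csharp
using System;

// Stand-in value type for EffectInstance.
struct Instance
{
    public string FileName;
}

// Stand-in for MeshContainer: the getter returns a defensive copy of the
// internal array, so element writes through it are lost.
class Container
{
    Instance[] m_instances = new Instance[2];

    public Instance[] GetInstances()
    {
        return (Instance[])m_instances.Clone(); // copy, not the live array
    }

    public void SetInstances(Instance[] instances)
    {
        m_instances = (Instance[])instances.Clone();
    }
}

class Demo
{
    static void Main()
    {
        var container = new Container();

        // Writing through the getter mutates only the copy: the change is lost.
        container.GetInstances()[0].FileName = "lost.fx";

        // Rebuilding the array and calling the setter does stick.
        Instance[] replacement = container.GetInstances();
        replacement[0].FileName = "kept.fx";
        container.SetInstances(replacement);

        Console.WriteLine(container.GetInstances()[0].FileName); // "kept.fx"
    }
}
```

The "non-primitive or non-blittable" error is a separate marshalling complaint about the array's contents, but the lost write through the getter is plain value/copy semantics, which is why the rebuild-and-set approach is the right shape.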
  14. Efficiency for camera:

    OK, I think I've figured out how to explain my problem. I have functions for setting the position, the target, and the rotation itself. If I set the position, the rotation becomes out of date, and in the fly-cam mode so does the target (because it's at a fixed distance). If I set the target, then the rotation and the position become out of date for the same reasons. If I set the rotation directly as a quaternion or a matrix, the target and the position become out of date. I'm anticipating the need to change the rotation more than once per frame (for example, when chasing a moving object while receiving input from the mouse/keyboard), so I don't think I'll be able to get the updating down to once per frame. But it looks like you guys are saying I can defer the updating until the "get" functions, or any other function that needs the out-of-date values, are called.
  15. Efficiency for camera:

    If I store the overall rotation as vectors, and I want to set the rotation using the accessor (the arc ball requires something like this as well), I'd have to recalculate the vectors every time the rotation changed and convert them back to a matrix again whenever I needed to retrieve the rotation. However, if I store the rotation as a quaternion or a matrix, I'd have to recalculate the quat or matrix every time I changed any of the vectors, and recalculate the vectors any time I needed to retrieve them. It looks like I'll need to update the rotation when I change the vectors, and update the vectors every time I change the rotation, if I want to support both ways of moving my camera. I suppose it's more efficient to only update things when I need to instead of every frame anyway.
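The defer-until-read idea running through these last two posts is usually implemented as a dirty flag: each setter only marks the cached representation stale, and each getter rebuilds it at most once per batch of changes. The sketch below shrinks "rotation" to a single yaw angle and its cosine/sine pair so the pattern stays visible without a math library; Camera, SetYaw and GetRotation are assumed names.

```csharp
using System;

// Minimal dirty-flag sketch for the camera discussion above.
class Camera
{
    float m_yaw;         // authoritative value set by the app
    float m_cos, m_sin;  // cached derived form (stand-in for the matrix)
    bool m_dirty = true; // is the derived form out of date?
    public int RebuildCount { get; private set; }

    public void SetYaw(float yaw)
    {
        m_yaw = yaw;
        m_dirty = true;  // cheap: just mark the cache stale
    }

    public void GetRotation(out float c, out float s)
    {
        if (m_dirty)     // rebuild only when a reader actually needs it
        {
            m_cos = (float)Math.Cos(m_yaw);
            m_sin = (float)Math.Sin(m_yaw);
            m_dirty = false;
            RebuildCount++;
        }
        c = m_cos;
        s = m_sin;
    }
}

class Demo
{
    static void Main()
    {
        var cam = new Camera();
        cam.SetYaw(0.1f);
        cam.SetYaw(0.2f); // several cheap sets per frame...
        float c, s;
        cam.GetRotation(out c, out s);
        cam.GetRotation(out c, out s);
        Console.WriteLine(cam.RebuildCount); // ...cost only one rebuild
    }
}
```

The same shape works with both representations cached at once (vectors and quaternion/matrix, each with its own dirty flag), so setting either form stays cheap and conversions only happen on read.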