Nokobon

Members
  • Content count: 18

Community Reputation
  102 Neutral

About Nokobon
  • Rank: Member
  1. [quote name='alvaro' timestamp='1336477315' post='4938345'] You are right. The word "oriented" threw me off: I thought an arbitrary bounding box would just be called a "bounding box". After a quick web search I see that your name is perfectly common. I am pretty sure you cannot deduce the distance between two boxes from the projections alone, but perhaps there is something else I don't understand, since you seem to be testing 15 axes, and I can only think of 6 that matter. Would you mind explaining where the 15 axes come from? [/quote] 6 axes: the face normals of both boxes. 9 axes: the pairwise cross products of the edge directions of the two boxes (3*3). The boxes need to be projected onto these 15 axes. If the projections don't overlap on at least one axis, there is a plane that separates the two boxes. If the projections overlap on all 15 axes, the boxes intersect. The penetration depth is the smallest overlap across the axes, so the minimum translation vector to resolve the collision is parallel to the axis with the smallest overlap. The problem is that the smallest distance between two non-intersecting boxes is not necessarily parallel to one of the 15 axes (e.g. when the minimum distance is between two vertices), so I don't know if there is a way to deduce the distance from the projections...
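The 15-axis test described above can be sketched roughly like this. This is a minimal sketch: the OBB representation (center, unit axes, half-extents) and all the names are illustrative, not taken from the original code.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct OBB {
    Vec3 center;
    Vec3 axis[3];     // local unit axes (box orientation)
    double extent[3]; // half-sizes along each local axis
};

// Radius of the box's projection interval on axis L. L need not be unit
// length: both radii and the projected center distance scale by |L|, so
// the overlap comparison still holds.
static double projectionRadius(const OBB& b, Vec3 L) {
    return b.extent[0] * std::fabs(dot(b.axis[0], L))
         + b.extent[1] * std::fabs(dot(b.axis[1], L))
         + b.extent[2] * std::fabs(dot(b.axis[2], L));
}

// True if no separating axis exists among the 15 candidates, i.e. the boxes intersect.
bool obbIntersect(const OBB& a, const OBB& b) {
    Vec3 d = sub(b.center, a.center);
    Vec3 axes[15];
    int n = 0;
    for (int i = 0; i < 3; ++i) axes[n++] = a.axis[i];  // 3 face normals of A
    for (int i = 0; i < 3; ++i) axes[n++] = b.axis[i];  // 3 face normals of B
    for (int i = 0; i < 3; ++i)                         // 9 pairwise edge
        for (int j = 0; j < 3; ++j)                     // cross products
            axes[n++] = cross(a.axis[i], b.axis[j]);

    for (int k = 0; k < 15; ++k) {
        Vec3 L = axes[k];
        if (dot(L, L) < 1e-12) continue; // cross product of (near-)parallel edges
        double centerDist = std::fabs(dot(d, L));
        if (centerDist > projectionRadius(a, L) + projectionRadius(b, L))
            return false; // projections don't overlap -> separating plane found
    }
    return true;
}
```

Tracking the smallest positive overlap inside the loop instead of returning early would give the penetration depth and its axis, as described in the post.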
  2. Thanks for your ideas. [quote name='alvaro' timestamp='1336411199' post='4938116'] Is this what you want? [code]sqrt(pow(x_separation, 2) + pow(y_separation, 2) + pow(z_separation, 2))[/code] [/quote] I think this only works for axis-aligned bounding boxes? I use oriented bounding boxes... [quote name='wildbunny' timestamp='1336425074' post='4938191'] Ahh yes, of course - I'd forgotten it worked differently for separation compared with penetration. With separation you need to account for more than just the separating axis themselves, you also need to account for the geometry of the faces and edges. Penetration is different because you only need to worry about the actual axis formed by the faces of both objects and the cross-products of the edges. In this article I wrote, you can see the problem you need to address: [url="http://www.wildbunny.co.uk/blog/2011/04/20/collision-detection-for-dummies/"]http://www.wildbunny...on-for-dummies/[/url] Scroll down to the section on OBB vs OBB - there is a little applet demonstrating the difference I was describing. The axis is shown as the line from the centre of the applet to a face on B-A. When this axis is aligned with a face of A or B, the minimum distance is aligned with a separating axis. However, when that axis is not aligned, the distance is actually that of two vertices. In 3d there is vertex vs vertex and edge vs edge distance as well. Hope that helps! Cheers, Paul. [/quote] Interesting approach. This might be a good solution to the problem, but it would require a whole new implementation, and I could discard my separating axis test. My main question is: is the distance parallel to one of the 15 axes, as the penetration depth is? If so, I could use the separating axis test results to calculate it...
  3. Hello, I implemented a separating axis test for 3D collision detection with oriented bounding boxes. I can get the collision depth by looking for the minimum overlap on the axes. Now I also need the distance between non-colliding boxes, but I can't figure out whether it can be calculated from the projections of the separating axis test. So is there a way that uses the data already available from the separating axis test, or do I have to implement a distance calculation algorithm separately? I would be glad to hear any ideas...
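On the open question in the posts above: the projection gaps from the separating axis test give only a lower bound on the true distance, not the exact value in general, because projecting onto a unit axis never increases distances. A small numeric check with two axis-aligned boxes of half-extent 1 (illustrative values, not from the original code):

```cpp
#include <cmath>

// Boxes centered at (0,0,0) and (4,4,0), half-extent 1 on every axis.
// On each candidate axis the projection gap is |center distance| - r1 - r2;
// here the x and y gaps are both 4 - 1 - 1 = 2. The true minimum distance,
// however, runs between the corners (1,1,z) and (3,3,z): sqrt(8) ~ 2.83,
// strictly larger than any projection gap.
double projectionGap(double centerDist, double r1, double r2) {
    return std::fabs(centerDist) - r1 - r2;
}

double cornerDistance(double dx, double dy) {
    return std::sqrt(dx * dx + dy * dy);
}
```

So the projections alone can only underestimate the separation; an exact distance needs the vertex/edge/face geometry that wildbunny's reply describes.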
  4. You are right, a BufferedImage is created with all pixels fully transparent. So my first solution was to create a new BufferedImage each frame and only change those pixels which are not transparent. For performance reasons this might be suboptimal. But your suggestion with the Color constructor solved my problem. I didn't expect getRGB() to include the alpha value. So now I do it this way: [code] image.setRGB(x, y, new Color(0.0f, 0.0f, 0.0f, 0.0f).getRGB()); [/code] Thank you!
  5. Hi, I've got a problem with BufferedImage and transparency. I am creating a BufferedImage: [code]image = new BufferedImage(xRes, yRes, BufferedImage.TYPE_INT_ARGB);[/code] I am setting every pixel value by hand using: [code]image.setRGB(x, y, Color.BLACK.getRGB());[/code] But some pixels need to be completely transparent... I just can't figure out how to do this. Any ideas? Nokobon
  6. I figured out that it works when using the 32-bit glew version on my 64-bit system, so I don't need to compile it myself. But why? Does that mean that although I'm using 64-bit Windows and 64-bit Visual Studio, it builds for 32-bit? And thanks for the tip regarding the .dll file. But linking to the .lib file this way, I don't really need the .dll, do I?
  7. Yes, you are right, the linker can't find the .lib file. I definitely need to add it to "additional dependencies". But even though I set the library path (the directory containing glew32.lib), it cannot be found. The problem appears when using VS2010 64-bit on Windows 7 64-bit. On Windows 7 32-bit with VS2010 32-bit it works properly. Could there be a problem with the glew32.lib file on 64-bit systems?
  8. Hi, for hours now I have been trying to use glew on Windows 7 with Visual Studio 2010. This is what I did:
     1. downloaded the glew binaries
     2. added the glew include directory to my project
     3. added the glew lib directory to the Library Directories of my project
     4. added glew32.lib to additional dependencies
     5. copied glew32.dll to Windows/System32
     I tried to compile this source code:
     [code]
     #include <GL/glew.h>
     #include <cstdlib>

     int main() {
         if (GLEW_OK != glewInit())
             exit(1);
         return 0;
     }
     [/code]
     On my 32-bit system I have no problems, but using Windows 7 64-bit I get the following linker errors:
     [code]
     error LNK2019: unresolved external symbol __imp__glewInit referenced in function _main
     error LNK1120: 1 unresolved externals
     [/code]
     Any idea what's wrong? Thanks in advance, Nokobon
  9. [quote name='ryan20fun' timestamp='1307136116' post='4819225'] do you need to include that header in your header or cant you include that in your source file ? [/quote] That's it! I don't need the header in my DLL's header. There are function declarations that need the third-party header, but those functions are not exported by my DLL, so I can put their declarations in the .cpp file. Now it works fine. In fact it makes sense: a project using my DLL would need to know the location of the third-party headers if they were included in my DLL's header. Thanks to both of you!
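The arrangement described in this post can be sketched as follows. This is a self-contained illustration: file boundaries are shown as comments, the OpenNI types are stubbed in so the sketch compiles on its own, and names like `startTracking` are made up for the example.

```cpp
// ---- stub standing in for the third-party header (<XnOpenNI.h>) ----
// Provided here only so the sketch is self-contained.
typedef int XnStatus;
const XnStatus XN_STATUS_OK = 0;
static XnStatus xnInitStub() { return XN_STATUS_OK; }

// ---- mydll.h : the only header a client of the DLL sees ----
// It mentions no third-party types, so client projects need no extra
// include paths. In the real DLL this declaration would also carry
// __declspec(dllexport) / __declspec(dllimport).
bool startTracking();

// ---- mydll.cpp : the third-party header is included only here ----
// #include <XnOpenNI.h>   // stays hidden inside the implementation

// Internal helper that uses third-party types: declared and defined in
// the .cpp, never mentioned in mydll.h, so it is not part of the API.
static XnStatus initContext() { return xnInitStub(); }

bool startTracking() { return initContext() == XN_STATUS_OK; }
```

Clients then include only mydll.h and link against the import library; the third-party dependency never leaks into their build settings at compile time.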
  10. Sorry, I don't really get your point... I want my DLL to provide only the 3 functions I wrote. The third-party library should be hidden inside completely. I don't think I have a .def file, as I exported my functions using the __declspec(dllexport) keyword.
  11. [quote name='ryan20fun' timestamp='1307110934' post='4819058'] i have my own dlls and us it in another project and i dont need to do that. odd, im also using VS2010. what error(s) do you get if you dont include thos other headers ? [/quote] Do your DLLs also depend on other libraries? My DLL depends on the OpenNI library, so when compiling a project that uses my DLL I get this error: [code]error C1083: Cannot open include file: 'XnOpenNI.h': No such file or directory[/code] In my DLL project I added the OpenNI include folder to the Include Directories. I guess I have to put these headers directly into my project, as OpenNI is not a standard library. How should I do this?
  12. Hi, I am trying to create a DLL using Microsoft Visual Studio 2010. My DLL uses another library, so I added that library's .lib file to my DLL project and added its header directory to my DLL's include directories. No problems so far, but when I want to use my DLL in another project, I need to include the header files of the third-party library again to get my DLL working properly. So how do I add the header files of that library to my DLL project so that I don't need to include them again when using my DLL? Thanks in advance, Nokobon
  13. Hi, I am trying to run the "First Triangle" example code from OpenGL Superbible 5th Edition on Ubuntu 10.04. Unfortunately there is no explanation of how to set up a project including GLTools on Linux, and I'm not experienced with using libraries in C++ with Eclipse. I've got the source and header files of GLTools (including Glew) from the homepage of the Superbible. OpenGL and Glut are working properly. I added the header-files folder of GLTools to the include directory of my C++ compiler in Eclipse. Now when building the project I get some errors saying there are no matching functions for some calls. For example, at [code]triangleBatch.CopyVertexData3f(vVerts);[/code] this error occurs: [code]no matching function for call to ‘GLBatch::CopyVertexData3f(GLfloat [9])’[/code] The headers are included without any problem. So what do I need to do? I assume I have to compile GLTools myself, don't I? If yes, can you tell me how to do that? Here's the source code example:
      [code]
      // Triangle.cpp
      // Our first OpenGL program that will just draw a triangle on the screen.

      #include <GLTools.h>            // OpenGL toolkit
      #include <GLShaderManager.h>    // Shader Manager Class

      #ifdef __APPLE__
      #include <glut/glut.h>          // OS X version of GLUT
      #else
      #define FREEGLUT_STATIC
      #include <GL/glut.h>            // Windows FreeGlut equivalent
      #endif

      GLBatch triangleBatch;
      GLShaderManager shaderManager;

      ///////////////////////////////////////////////////////////////////////////////
      // Window has changed size, or has just been created. In either case, we need
      // to use the window dimensions to set the viewport and the projection matrix.
      void ChangeSize(int w, int h)
      {
          glViewport(0, 0, w, h);
      }

      ///////////////////////////////////////////////////////////////////////////////
      // This function does any needed initialization on the rendering context.
      // This is the first opportunity to do any OpenGL related tasks.
      void SetupRC()
      {
          // Blue background
          glClearColor(0.0f, 0.0f, 1.0f, 1.0f);

          shaderManager.InitializeStockShaders();

          // Load up a triangle
          GLfloat vVerts[] = { -0.5f, 0.0f, 0.0f,
                                0.5f, 0.0f, 0.0f,
                                0.0f, 0.5f, 0.0f };

          triangleBatch.Begin(GL_TRIANGLES, 3);
          triangleBatch.CopyVertexData3f(vVerts);
          triangleBatch.End();
      }

      ///////////////////////////////////////////////////////////////////////////////
      // Called to draw scene
      void RenderScene(void)
      {
          // Clear the window with current clearing color
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

          GLfloat vRed[] = { 1.0f, 0.0f, 0.0f, 1.0f };
          shaderManager.UseStockShader(GLT_SHADER_IDENTITY, vRed);
          triangleBatch.Draw();

          // Perform the buffer swap to display back buffer
          glutSwapBuffers();
      }

      ///////////////////////////////////////////////////////////////////////////////
      // Main entry point for GLUT based programs
      int main(int argc, char* argv[])
      {
          gltSetWorkingDirectory(argv[0]);
          glutInit(&argc, argv);
          glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
          glutInitWindowSize(800, 600);
          glutCreateWindow("Triangle");
          glutReshapeFunc(ChangeSize);
          glutDisplayFunc(RenderScene);

          GLenum err = glewInit();
          if (GLEW_OK != err) {
              fprintf(stderr, "GLEW Error: %s\n", glewGetErrorString(err));
              return 1;
          }

          SetupRC();
          glutMainLoop();
          return 0;
      }
      [/code]
  14. [quote name='hplus0603' timestamp='1298331693' post='4777296'] Yeah, that's a typo. It's "insert(vec.end(), from, to)" on the vector template. I use an enum because you can't easily define the value of a member variable within the declaration of the class in C++. Also, a variable cannot be used as a constant in future template specializations, for example. [/quote] Okay, now I do understand. Thank you!
  15. [quote name='Kylotan' timestamp='1298249863' post='4776847'] Enums are better than numeric values because it's virtually impossible to accidentally store an invalid number in there. The append function here adds char data to the end of the vector of chars, so that in the end you have one long set of chars to send, which you can do in one call, passing the data directly to the networking call. That's quite different from having a vector of char* which is pretty useless really - if you push a char* onto a vector then you're just storing a set of pointers to the strings, not the strings themselves. Not only does this mean you don't have a single string which you can write out, but it can lead to all sorts of errors as your pointers can end up pointing to invalid memory and so on. This is a common mistake made by beginners - perhaps ask on the "For Beginners" forum why using char* for strings is a bad idea! [/quote] Okay, so I understand why to use vector<char>: it's like a dynamic array, and it guarantees that the characters are stored contiguously in memory. In fact we could also use char[] instead, but the disadvantage would be that it has a fixed size, right? But I still don't get why "append" is used here? It's a function of the string class, but you use it on a vector...