D.V.D

Members
  • Content count: 114
  • Joined
  • Last visited

Community Reputation: 1029 Excellent

About D.V.D
  • Rank: Member
  1. That's correct. The matrix multiplication operation does not care what is in the matrices. They might not contain basis vectors at all, but perhaps weightings of how much you like different ice-cream flavours. Regardless of what kind of mathematical convention you're using (whether you're writing your basis vectors horizontally or vertically), your matrix multiplication function will be implemented the same way.

     However, your 1D-array-storage convention does matter, e.g. if you have float data[16]; and you write data[2], then is that row-0/column-2, or is it row-2/column-0?

     That depends on what you mean. Row-major and column-major ordering generally refer to the computer-science topic of how you map a 2D array to a 1D array. This is just an internal detail of how your math library decides to store the matrix elements in memory.

     Row vectors and column vectors generally refer to the math topic of whether you're writing vectors horizontally or vertically. This choice actually does affect your math (e.g. do you write projection * view, or view * projection).

     However, the terms "row major" and "column major" also sometimes get used to describe the mathematical conventions, which makes everything pretty confusing. If someone is writing their basis vectors in the rows of a matrix, they might sometimes say that it's a "row major matrix" -- here they're talking about their math, not about computer-science arrays :(

     Okay, thanks for clarifying!!
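     To make the storage-convention point concrete, here is a minimal sketch; the accessor names are made up for illustration, not from any particular library:

         // Row-major storage: the elements of a row are contiguous in memory.
         float RowMajorAt(const float data[16], int row, int col) { return data[row * 4 + col]; }

         // Column-major storage: the elements of a column are contiguous in memory.
         float ColMajorAt(const float data[16], int row, int col) { return data[col * 4 + row]; }

         // The math is identical either way; only the (row, col) -> index mapping
         // differs, so data[2] is row 0/column 2 in the first convention and
         // row 2/column 0 in the second.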
  2. Yeah, the easiest way that I find to create a view matrix is to construct a "local-to-world" (aka world) matrix as if the camera were an object in the world, and then invert this matrix to get a "world-to-camera" (aka view) matrix. If a 3x3 matrix only contains the three axes, then transposing it is the same as inverting it (and cheaper).

     No they don't. You can use column-major maths, which looks on paper like:

     $$\begin{bmatrix} Right.x & Up.x & Forward.x & Pos.x \\ Right.y & Up.y & Forward.y & Pos.y \\ Right.z & Up.z & Forward.z & Pos.z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

     or row-major maths, which looks on paper like:

     $$\begin{bmatrix} Right.x & Right.y & Right.z & 0 \\ Up.x & Up.y & Up.z & 0 \\ Forward.x & Forward.y & Forward.z & 0 \\ Pos.x & Pos.y & Pos.z & 1 \end{bmatrix}$$

     And you can use column-major arrays, or row-major arrays. If you use column-major maths with column-major arrays, or if you use row-major maths with row-major arrays, then your array of 16 floats will look like:

     Right.x, Right.y, Right.z, 0, Up.x, Up.y, Up.z, 0, Forward.x, Forward.y, Forward.z, 0, Pos.x, Pos.y, Pos.z, 1

     If you use column-major maths with row-major arrays, or if you use row-major maths with column-major arrays, then your array of 16 floats will look like:

     Right.x, Up.x, Forward.x, Pos.x, Right.y, Up.y, Forward.y, Pos.y, Right.z, Up.z, Forward.z, Pos.z, 0, 0, 0, 1

     All four of those choices of conventions are supported by D3D and OpenGL. The mathematical convention alters how you write your math, e.g. whether you write vOut = vIn * projection * view * world, or vOut = world * view * projection * vIn. The array convention alters how you write your matrix library, and whether you write column_major float4x4 myMatrix; or row_major float4x4 myMatrix; in your shader code. If you're using an existing matrix library, then both of these choices may have already been made for you.

     Okay, this makes sense. Just to clarify though, are the basis vectors in a row-order matrix the rows, or are they always the columns? I found a blog post on the ryg blog about matrix ordering which says that whatever algorithm you use for matrix multiplication, it doesn't depend on how the matrices are stored. Currently, I think of matrix ordering as: you write matrices a certain way and the columns are always the basis vectors, but you can choose to store things such that either the rows or the columns are sequential in memory.

     This makes some sense; I'll go over it a bit to understand it better. In the videos, matrix multiplication is not explained as dot products (as it usually is in other resources), since the lectures try to explain matrices more as a change of basis vectors and as linear transformations. I know it's equivalent, but the matrix multiplication formula in the videos is easier to understand, though it requires knowing what your basis vectors are. If B is some matrix that A can multiply with, and B has n basis vectors, then matrix multiplication is defined as:

     $$AB = \begin{bmatrix} A\,b_0 \mid A\,b_1 \mid \cdots \mid A\,b_n \end{bmatrix}$$

     where the result of A times the i-th basis vector becomes the i-th column of the resulting matrix. You can then decompose matrix-vector multiplication into each component of the vector times the corresponding basis vector in the matrix, with all of those results added together.
     Basically, it makes the code become something super simple like this:

         inline v4 operator*(m4 A, v4 B)
         {
             v4 Result = {};
             Result = B.x*A.v[0] + B.y*A.v[1] + B.z*A.v[2] + B.w*A.v[3];
             return Result;
         }

         inline m4 operator*(m4 A, m4 B)
         {
             m4 Result = {};
             Result.v[0] = A*B.v[0];
             Result.v[1] = A*B.v[1];
             Result.v[2] = A*B.v[2];
             Result.v[3] = A*B.v[3];
             return Result;
         }

     (v[0-3] are the columns, or basis vectors, stored in the matrix.) This probably isn't the most efficient code for matrix multiplication, but it's conceptually easy to understand and not as complicated as other code with a ton of inner loops and whatnot.
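     A quick usage sketch under the same column-vector convention (the matrices here are assumed to come from helpers like the SetRotation/SetScale constructors in post 13 below; the point is only the composition order):

         // Column-vector math: the rightmost matrix is applied to the vector
         // first, so this scales, then rotates, then translates, then maps
         // the point into camera space.
         m4 Transform = CameraMat * TranslationMat * RotationMat * ScaleMat;
         v4 ViewPoint = Transform * v4{ 1.0f, 2.0f, 3.0f, 1.0f };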
  3. Oh okay, so the reason that the view matrix isn't what I think it is, is because we are trying to perform the opposite of the rotations that apply to the camera, and that happens to be the transpose of the 3x3 rotation matrix? So if the camera's view is rotated to the left by 90 degrees, the view matrix will contain a rotation by -90 degrees instead, right?
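     For the record, the transpose-equals-inverse fact can be checked directly for a rotation about a single axis (the 2D case shown here is just an illustrative choice):

     $$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad R(\theta)^{T} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} = R(-\theta)$$

     so a camera rotated by 90 degrees does indeed produce a view matrix containing the rotation by -90 degrees.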
  4. Hey guys,

     I've been watching 3blue1brown's video series on linear algebra (https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) and I decided to try to implement matrices and vectors myself instead of blindly following tutorial code without really understanding it. I've run into a problem with my camera matrix, specifically the rotation aspect of it. I'm working with column-ordered matrices and in a left-handed coordinate system.

     From the videos, he explains that matrices are just a set of new axes, or basis vectors, which define the new x, y, z, ... axes for some vector multiplied by that matrix. As I understand it, the camera matrix (if the camera is at the origin) should transform a vector such that its x axis is the camera's horizontal vector, its y axis is the camera's up vector, and its z axis is the camera's target vector. But everywhere I look, they say that for column-ordered matrices, the first column should be [Horizontal.x, Up.x, Target.x, 0], the second column should be [Horizontal.y, Up.y, Target.y, 0], and so on. The videos say that the columns of a matrix are the new axis vectors, so that would mean the camera matrix transforms some vector such that its x axis is the x components of the horizontal, up, and target vectors, its y axis is the y components of the horizontal, up, and target vectors, and so on.

     My question is, how does that make sense? Shouldn't the new axis vectors be Horizontal, Up, and Target instead?
  5. Hey guys, I've been trying to set up a batch file that builds a native activity into an apk which I can then run and debug in Visual Studio 2015. I managed to get the apk built and signed properly, but whenever I try to debug it with Visual Studio, I get the following error:

     "Unable to start debugging. Android command run-as failed. Package com.example.native_activity is not debuggable."

     The app gets installed just fine on the emulator, and it runs properly on one of the two emulators that I tried. However, in both cases, I can't actually debug the apk that I built, and I tried setting everything to debug that I could, but it still doesn't work. The code I'm using for my app is the example code for a native activity from Google: http://brian.io/android-ndk-r10c-docs/Programmers_Guide/html/md_2__samples_sample--nativeactivity.html

     My AndroidManifest.xml does have debuggable set to true:

         <?xml version="1.0" encoding="utf-8"?>
         <!-- BEGIN_INCLUDE(manifest) -->
         <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                   package="com.example.native_activity"
                   android:versionCode="1"
                   android:versionName="1.0"
                   android:debuggable="true">

             <!-- This is the platform API where NativeActivity was introduced. -->
             <uses-sdk android:minSdkVersion="9" />

             <!-- This .apk has no Java code itself, so set hasCode to false. -->
             <application android:label="@string/app_name" android:hasCode="false">

                 <!-- Our activity is the built-in NativeActivity framework class.
                      This will take care of integrating with our NDK code. -->
                 <activity android:name="android.app.NativeActivity"
                           android:label="@string/app_name"
                           android:configChanges="orientation|keyboardHidden">
                     <!-- Tell NativeActivity the name of our .so -->
                     <meta-data android:name="android.app.lib_name"
                                android:value="native-activity" />
                     <intent-filter>
                         <action android:name="android.intent.action.MAIN" />
                         <category android:name="android.intent.category.LAUNCHER" />
                     </intent-filter>
                 </activity>
             </application>
         </manifest>
         <!-- END_INCLUDE(manifest) -->

     I'm using the android_native_app_glue static lib, which I'm not sure is set to debuggable. I tried to build it myself (which worked), but I don't know how to link the version that I built with my native app code (I think it still links with the lib that's given by the NDK).
     This is what my build.bat looks like:

         @echo off

         set CodeDir=W:\untitled\code
         set OutputDir=W:\untitled\build_android
         set AndroidDir=%ProgramFiles(x86)%\Android\android-sdk
         set AndroidCmdDir=%AndroidDir%\build-tools\21.1.2
         set GlueDir=W:/untitled/code/android/glue

         call ndk-build -B NDK_DEBUG=1 APP_BUILD_SCRIPT=%GlueDir%\Android.mk NDK_APPLICATION_MK=%GlueDir%\Application.mk -C %GlueDir% NDK_PROJECT_PATH=%GlueDir% NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj
         call ndk-build -B NDK_DEBUG=1 APP_BUILD_SCRIPT=%CodeDir%\android\Android.mk NDK_APPLICATION_MK=%CodeDir%\android\Application.mk -C %CodeDir%\android NDK_PROJECT_PATH=%CodeDir%\android NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj

         REM Create keystore for signing our apk
         REM call keytool -genkey -v -keystore %OutputDir%\debug.keystore -storepass android -alias androiddebugkey -dname "filled in with relevant info" -keyalg RSA -keysize 2048 -validity 20000

         pushd %OutputDir%
         del *.apk >NUL 2> NUL
         popd

         REM Create APK file
         call "%AndroidCmdDir%\aapt" package -v -f -M %CodeDir%\android\AndroidManifest.xml -S %CodeDir%\android\res -I "%AndroidDir%/platforms/android-19/android.jar" -F %OutputDir%\AndroidTest.unsigned.apk %OutputDir%
         call "%AndroidCmdDir%\aapt" add W:\untitled\build_android\AndroidTest.unsigned.apk W:\untitled\build_android\lib\x86\libnative-activity.so

         REM Sign the apk with our keystore
         call jarsigner -sigalg SHA1withRSA -digestalg SHA1 -storepass android -keypass android -keystore %OutputDir%\debug.keystore -signedjar %OutputDir%\AndroidTest.signed.apk %OutputDir%\AndroidTest.unsigned.apk androiddebugkey

         "%AndroidCmdDir%\zipalign" -v 4 %OutputDir%\AndroidTest.signed.apk %OutputDir%\AndroidTest.aligned.apk

     The debug key already exists; I just don't recreate it on every build, which is why that line is commented out. The first ndk-build builds the native glue while the second one builds the native activity. The Android.mk for the native glue is the same as the one provided in the NDK with no changes, and its Application.mk is the same as the one I use for the native activity. This is what my Android.mk and Application.mk look like for the native activity:

         # Android.mk
         LOCAL_PATH := $(call my-dir)

         include $(CLEAR_VARS)
         LOCAL_MODULE := native-activity
         LOCAL_SRC_FILES := main.c
         LOCAL_LDLIBS := -llog -landroid -lEGL -lGLESv1_CM
         LOCAL_STATIC_LIBRARIES := android_native_app_glue
         include $(BUILD_SHARED_LIBRARY)

         $(call import-module,android/native_app_glue)

         # Application.mk
         APP_ABI := x86
         APP_PLATFORM := android-9

     I looked online, and they say that one way to make sure your apk is debuggable is to unzip it and see if the lib folder has the gdbserver files. I did that for mine and the gdbserver files were there, so I'm not sure why my apk is not debuggable. Is it because I'm not properly linking with my own version of the native glue, and if so, how do I make my makefile link with my version of the native glue instead of the default provided by the NDK?
  6. Ahh, of course it was that simple. Thanks a lot, this looks like it fixed my problem!
  7. Hello, I've been building a batch file to build my Android projects with the NDK. My problem is that after I call ndk-build in the batch file to build my C/C++ code into a lib, none of the commands that come after it in the batch file execute. Here is what it looks like:

         ndk-build -B NDK_DEBUG=1 NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj
         mkdir A

     My batch file sets the paths for OutputDir and only has these 2 calls (for testing), yet the mkdir never executes because the folder A is never created. Once I remove the ndk-build command, mkdir executes. This also seems to happen when I call these 2 commands:

         %AndroidCmdDir%\dx --dex --output="classes.dex" "fibpackage\FibLib.class" "fibpackage\FibActivity.class"
         %AndroidCmdDir%\aapt package -v -f -M \AndroidManifest.xml -I %AndroidDir%/platforms/android-23/android.jar -F %OutputDir%/unsigned.apk %OutputDir%

     If I have other commands after either of these 2 calls, they also don't get executed. I looked online, but it doesn't seem like anyone else is having this issue. I'm using Android NDK 12, which should be the latest one currently (downloaded from Android Studio's NDK manager), and I'm building for android-23.
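     Judging by the "it was that simple" reply above and the later build.bat in post 5 (which prefixes these tools with call), the likely culprit: on Windows, ndk-build and dx are themselves batch scripts (ndk-build.cmd, dx.bat), and running a batch script from another batch file without call transfers control to it and never returns. A minimal sketch of the fix:

         REM "call" runs the child batch script and then returns control here.
         call ndk-build -B NDK_DEBUG=1 NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj
         mkdir A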
  8. I skimmed through the discussion and didn't see anyone post this here, but Jonathan Blow has talked a lot about his issues with C++ and has actually implemented his own language. He shows why his language is better suited for games than C++ is. He started off with lectures and then moved on to demos. You can find the first lectures here (the demos are on his channel): https://www.youtube.com/watch?v=TH9VCN6UkyQ

     To give a quick summary of his points about why he doesn't like C++:

     1) A language should understand that shipping code is not what the code will look like throughout development. He says that certain code modifications should be fast and seamless so that we have less busy/braindead work. One example is the fact that accessing a member of a class by value uses the "." operator while accessing via pointer uses the "->" operator. In reality, both are exactly the same, but as you write your code and switch between accessing a struct by value or by reference, you constantly have to modify these operators even though the logic is the same (which can kill a lot of time depending on how many times you switch and how many members are part of the struct).

     Another example was the process of taking code that starts to repeat itself and putting it into a lambda, and then into a proper function. The idea here is that if I have a function and I notice that I'm repeating some lines of code, then maybe it's useful to have a lambda which defines that process and to call it where I need it in that one function. I don't want to make it a proper function just yet, because I'm only using it in one function, so when someone comes along to analyze the code, they don't have to assume that the given function can be called from anywhere. Keeping it a lambda keeps it more local. Once I actually need that functionality somewhere else, I want to easily move my lambda out into a proper function. In C++, this requires a ton of work due to how lambdas are defined, and going from a lambda to a proper function requires more busy work since the layout of everything changes again. Jonathan's approach is like Python's, where you just have nested functions with the exact same syntax as normal functions. This makes transforming code from one state to another as you write it much easier, since you only copy-paste the function to some other location, unlike C++ where functions and lambdas are defined completely differently.

     2) Another point he makes is that since this language treats games as a priority, it should support common things game developers do for optimization. For example, if I have a struct for a model with a pointer to the vertices and a pointer to the indices, it's a lot faster to allocate one block of memory for these 2 pointers and just say that the second pointer points to where the vertex data ends (saving an allocation). In C++, I have to do a lot of array indexing or pointer arithmetic to achieve that, but he adds a keyword which tells the language that the memory the 2 pointers point to should be one block. Then when we deallocate the memory for the struct, we deallocate the entire block, and the language knows the size and all that. This is a really specific feature, but he noticed it's something he and his friends use a lot, so supporting it with one keyword versus something like 15 lines of code that you feel unsafe about is a much better option. (A rough sketch of the manual C++ version of this pattern follows below.)
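     For reference, this is roughly what the manual version of point 2 looks like in today's C++; the names and struct layout are illustrative only, not Blow's syntax:

         #include <cstdlib>
         #include <cstdint>

         struct Vertex { float X, Y, Z; };

         struct Model
         {
             Vertex*   Vertices; // start of the shared block
             uint32_t* Indices;  // points just past the vertex data
         };

         // One allocation serves both arrays, so a single free() releases both.
         // (sizeof(Vertex) is a multiple of 4 here, so the index array stays aligned.)
         Model CreateModel(size_t VertexCount, size_t IndexCount)
         {
             size_t VertexBytes = VertexCount * sizeof(Vertex);
             char* Block = (char*)malloc(VertexBytes + IndexCount * sizeof(uint32_t));

             Model Result;
             Result.Vertices = (Vertex*)Block;
             Result.Indices  = (uint32_t*)(Block + VertexBytes);
             return Result;
         }

         void DestroyModel(Model* M)
         {
             free(M->Vertices); // frees the whole block, indices included
             M->Vertices = 0;
             M->Indices  = 0;
         }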
     3) Instead of using templates or macros for preprocessing, he argues: why not just run the language's own code directly at compile time? You use a keyword that runs a given function during compilation, computes whatever it returns, and sticks the result into the line where it is found. Instead of playing around with ugly template syntax and messing around with a different language like macros, we just run the language we are already using, but during compilation.

     4) Jonathan argues for removing header files and just having one type of file, like in Java. The idea here is to remove the busy work of declaring a function in one file and then jumping to another file to write its implementation. He also wants to get rid of #pragma once and the ordering of function definitions, and just have the compiler sort out dependencies itself like many high-level languages do (again, getting rid of needless busy work).

     ... And more.

     He talks about a lot more stuff, and the best part is that all of these features and many more have already been implemented by him, and he's constantly refining the language, seeing what works and what doesn't. The stuff I talked about is mostly from his 2 lecture videos, but he does a whole lot more in the demo videos, and it's pretty amazing stuff. In my opinion, his language is a really good alternative to C++ for game developers, since it focuses on getting rid of lots of busy work and on spending more time writing easy-to-understand code, so that you don't feel uneasy about what you just wrote (and it seems to succeed in its goals).
  9. I also don't really use a cubemap. I just identify, for each screen ray, which side of the cube it would correspond to. I've just drawn a link between my algorithm and bcmpinc's old one in saying that I project the octree onto a cubemap, which is effectively what I do.

     That's impressive, but I can't seem to find where bcmpinc mentions this.

     I don't try to mimic his algorithm; I just took inspiration from it for this GPU raycaster. The benefit I see with it is that the ray-voxel intersection is much simpler and the front-to-back sorting of the octree comes for free. This should theoretically be a benefit, but in practice it isn't. The question is: why is that?

     I had the same questions about how his code works, and you can view our exchange in the comment section at the following link (https://bcmpinc.wordpress.com/2013/11/03/a-sorting-based-rendering-method/). My user name was D.V.D and I asked him in more detail exactly how he does his rendering.

     From what I understand, he basically projects the screen onto each corner and finds how far in each axis (l, r, t, b) it is from the screen bounds to the corner. He then uses this to interpolate, in world space, how these bounds change when he subdivides the octree into its children (this is all linear since it's done in world space). At this point, he checks his quadtree to see if the current octree node is inside the frustum bounds of the current quadtree node. If it is, he subdivides the quadtree into 4 children, creates a sub-frustum for each new quadtree node, and calls the rendering function recursively on each of the quadtree children with each of the 4 sub-frustums. If he ever finds the octree node to be larger than the current frustum, he recursively splits the octree into its 8 children and calls the rendering function recursively on the octree children. Once he hits a leaf node of the quadtree, if the current octree node is inside the leaf's frustum, he just sets the pixel's color to the octree node's color.

     It basically becomes a method of splitting up the view frustum in world space until eventually each sub-frustum is a pixel in size. At that point, if a node is inside the frustum bounds, he just sets the pixel in the quadtree leaf. He doesn't have to do any division, because all of this is done in world space (the frustum checks and whatnot).

     As to the benefits of his algorithm: in pure software, with SIMD and a single thread, he gets between 5-10 fps, so this isn't a really fast algorithm, unfortunately. On a GPU, I guess you can skip the quadtree entirely by just doing the frustum checks for each pixel in parallel, but then that just amounts to raycasting, and you re-traverse the octree for each pixel. His algorithm does all of this hierarchically, so it traverses the octree once, but that makes it very linear. I haven't ever worked with GPU volume rendering, so I'm not sure why your approach is slow, but I did my own version of bcmpinc's renderer in software, so if you know how his algorithm works, it might help identify why your approach is slow.
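     A rough C++ sketch of the traversal as I described it above; the types and the three geometric tests are placeholders for illustration, not bcmpinc's actual code:

         #include <cstdint>

         struct Frustum { float L, R, T, B; }; // world-space bounds per axis
         struct OctreeNode { OctreeNode* Child[8]; uint32_t Color; };
         struct QuadtreeNode { QuadtreeNode* Child[4]; bool IsLeaf; int X, Y; };

         // Placeholders for the world-space tests described above.
         bool Intersects(const Frustum& Bounds, const OctreeNode* Node);
         bool NodeLargerThanFrustum(const OctreeNode* Node, const Frustum& Bounds);
         Frustum SubFrustum(const Frustum& Bounds, int Quadrant);
         void SetPixel(int X, int Y, uint32_t Color);

         // Split the octree when the node is larger than the current frustum;
         // otherwise split the screen (quadtree) into 4 sub-frustums, and write
         // a pixel when a quadtree leaf still contains an octree node.
         void Render(OctreeNode* Node, QuadtreeNode* Quad, Frustum Bounds)
         {
             if (!Intersects(Bounds, Node)) return;

             if (NodeLargerThanFrustum(Node, Bounds))
             {
                 for (int i = 0; i < 8; ++i)
                     if (Node->Child[i]) Render(Node->Child[i], Quad, Bounds);
                 return;
             }

             if (Quad->IsLeaf)
             {
                 SetPixel(Quad->X, Quad->Y, Node->Color);
                 return;
             }

             for (int i = 0; i < 4; ++i)
                 Render(Node, Quad->Child[i], SubFrustum(Bounds, i));
         }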
  10. I'm not sure if it will make a difference, but bcmpinc changed his method partway through to ditch the cubemap and instead simply project the screen onto the octree and do frustum checks using a quadtree as a hierarchical z-buffer. In his latest source code, the cubemap is not part of the main render loop, and his PDF describes the old technique he was using. He stated that the new method of just projecting the screen onto the octree was a lot faster than creating the cubemap, so it might help to see what his current method is doing, unless you specifically want to use the cubemap and trace rays.

      Lastly, the main premise of his algorithm was to get rid of divisions in the code and to not use the concept of rays (at which he succeeded: no perspective division at all in the code), so I'm not entirely sure why you are raycasting when attempting to mimic his algorithm. He has a couple of posts on getting his algorithm onto the GPU that might be worth checking out if you're interested.
  11. But it can't be propagated down an octree. For example, the vertices that make up the outline for the root of the octree are not the same vertices that make up the outline for the children of that root. Unfortunately, it looks like I have to recalculate the outline on each traversal until my cube is entirely in the top-left, top-right, bottom-left, or bottom-right of the screen (it can't be in 2 parts at the same time). This is probably going to be a lot more performance-intensive, but I'll give it an implementation. Also, I don't want to approximate the cube with a quad that encompasses it in screen space, because I'm trying to minimize my traversals, and using quads covers more area than the object actually takes up (creating many false positives). Sorry for leaving for a while; some personal stuff and work ate up all my time.
  12. Ah, makes sense. I'm not entirely sure what your masks represent. Could you give some more insight? It looks really interesting. I probably made a mistake with my math; I'll look over how I call it. Does anyone have any ideas about my post on how the outline isn't always the same and can't be propagated down the octree traversal? Is my only option to recalculate it until my nodes are in one of the 4 screen quadrants?
  13. For your snippet, are box min and box max the corners of the cube? Are they expected to be axis-aligned (an AABB)? I'm attempting to implement your snippet and get it working, but I'm not sure what space your function is assuming. It seems like it's world space, but when I implement it, it doesn't generate proper values:

          Mat4x4f MatPos = SetWorldPos(Pos);
          Mat4x4f MatRot = SetRotation(Rot);
          Mat4x4f MatScale = SetScale(Scale);
          Mat4x4f MatCamera = SetCamera(Camera);
          Mat4x4f MatProject = SetProjection(Screen, 90.0f, 0.01f, 100.0f);
          Mat4x4f Transform = MatCamera * MatPos * MatRot * MatScale;

          Vector3f Outline[6];
          int32 len = GetOutline(MatPos * MatRot * MatScale * Vector3f{ -0.5f, -0.5f, -0.5f },
                                 MatPos * MatRot * MatScale * Vector3f{ 0.5f, 0.5f, 0.5f },
                                 Camera->Pos, Outline);

          for (int32 i = 0; i < 6; ++i)
          {
              // Converts from NDC to screen coords
              Outline[i] = ProjectToScreen(Screen, MatProject * MatCamera * Outline[i]);
          }

          RenderCubeOutline(Screen, Outline[0], Outline[1], Outline[2], Outline[3],
                            Outline[4], Outline[5], 0xFFFFFFFF);
  14. Hmm, I'd have to research a bit on that. I'm not entirely sure what you mean, but it sounds interesting. I'll look up what stock algos are and the approach with a 2D convex hull. Thanks a lot!

      Man, that looks a lot simpler than my code, but I haven't tested it out yet. Will do in just one second. I will for sure have questions on how some of this works.

      I encountered an error with this approach in general. My idea for getting the outer edges of a cube was so that I could render octrees: figure out the outline for the root, and which points of the octree's root are in it (and their order). When I subdivide my tree, I render the children but use the same points as I did for the root to generate an outline for them, and while my outline is correct, I ran into this error:

      [screenshot of the rendering error]

      It may not be obvious what is happening at first, so I'll single out 2 nodes and render their 6 visible vertices.

      [screenshot of the 2 nodes]

      Notice how different faces are visible for these 2 nodes. For the node on the left, only 2 faces are visible, while for the node on the right, 3 faces are visible. Here is an illustration (the perspective is exaggerated):

      [illustration]

      I'm a little confused about how to approach this. I do not want to find the outline on a per-node basis, since I can expect at least a million traversals a frame. What's more, the paper I'm trying to replicate (http://www.cs.princeton.edu/courses/archive/spr01/cs598b/papers/greene96.pdf) uses the same approach where you only render the outline, and I doubt they find the outline on a per-node basis given their performance (it would seem like way too large a drawback). I'm thinking there may be a way of tweaking the visible faces given the cube's screen position, but each child cube shares the exact same orientation as its root, so it would seem that if only 2 faces of the root are visible, then only 2 should be visible for all the children as well.

      EDIT: And this is what the root node looks like for the above example:

      [screenshot of the root node]
  15. The camera location is the origin, facing the positive z direction. I think I fixed the error. I did the same dot product test, but now my vertices are in projection space (I multiplied them by the projection matrix), and the points chosen are proper and in the right order. Here is how it looks:

      [screenshot]

      This version changes the points given different view directions, whereas the previous version was static as the cube moved to different parts of the screen. My code doesn't account for the 4-visible-vertex case yet, but I can add that special case. Basically, the difference is that instead of having the vertices in view space, they are in screen space, and it all ended up working out. Here is the current code; it's a little long, and I found some resources online to help me with it, but it seems to be working properly:

          Vector3f center;

          bool CompareCcwOrder(Vector3f a, Vector3f b)
          {
              if (a.x - center.x >= 0.0f && b.x - center.x < 0.0f) { return true; }
              if (a.x - center.x < 0.0f && b.x - center.x >= 0.0f) { return false; }
              if (a.x - center.x == 0.0f && b.x - center.x == 0.0f)
              {
                  if (a.y - center.y >= 0.0f || b.y - center.y >= 0.0f) { return a.y > b.y; }
                  return b.y > a.y;
              }

              // Compute cross product of (center -> a) x (center -> b)
              float32 det = (a.x - center.x) * (b.y - center.y) - (b.x - center.x) * (a.y - center.y);
              if (det < 0.0f) { return true; }
              if (det > 0.0f) { return false; }

              // Points a and b are on the same line from the center;
              // check which point is closer to the center
              float32 d1 = (a.x - center.x) * (a.x - center.x) + (a.y - center.y) * (a.y - center.y);
              float32 d2 = (b.x - center.x) * (b.x - center.x) + (b.y - center.y) * (b.y - center.y);
              return d1 > d2;
          }

          int32* GetOutlineList(Vector3f Corners[8])
          {
              // Assume winding order is CCW
              Vector3f FaceNormal[6];
              FaceNormal[0] = GetQuadNormal(Corners[3], Corners[2], Corners[0]); //(Corners[0], Corners[2], Corners[3]); // Front Face
              FaceNormal[1] = GetQuadNormal(Corners[5], Corners[1], Corners[0]); //(Corners[0], Corners[1], Corners[5]); // Right Face
              FaceNormal[2] = GetQuadNormal(Corners[7], Corners[5], Corners[4]); //(Corners[4], Corners[5], Corners[7]); // Back Face
              FaceNormal[3] = GetQuadNormal(Corners[3], Corners[7], Corners[6]); //(Corners[6], Corners[7], Corners[3]); // Left Face
              FaceNormal[4] = GetQuadNormal(Corners[6], Corners[4], Corners[0]); //(Corners[0], Corners[4], Corners[6]); // Top Face
              FaceNormal[5] = GetQuadNormal(Corners[1], Corners[5], Corners[7]); //(Corners[7], Corners[5], Corners[1]); // Bottom Face

              // Check if facing camera
              bool FacesScreen[6];
              for (int32 i = 0; i < 6; ++i)
              {
                  FacesScreen[i] = FaceNormal[i].z > 0;
              }

              // Find visible points in CCW order
              std::vector<Vector3f> IndexArray;
              if (FacesScreen[0] != FacesScreen[1]) // Front and Right
              {
                  IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
                  IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
              }
              if (FacesScreen[1] != FacesScreen[2]) // Right and Back
              {
                  IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
                  IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
              }
              if (FacesScreen[2] != FacesScreen[3]) // Back and Left
              {
                  IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
                  IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
              }
              if (FacesScreen[3] != FacesScreen[0]) // Left and Front
              {
                  IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
                  IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
              }
              if (FacesScreen[0] != FacesScreen[4]) // Front and Top
              {
                  IndexArray.push_back({ Corners[0].x, Corners[0].y, 0 });
                  IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
              }
              if (FacesScreen[0] != FacesScreen[5]) // Front and Bottom
              {
                  IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
                  IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
              }
              if (FacesScreen[1] != FacesScreen[4]) // Right and Top
              {
                  IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
                  IndexArray.push_back({ Corners[0].x, Corners[0].y, 0 });
              }
              if (FacesScreen[1] != FacesScreen[5]) // Right and Bottom
              {
                  IndexArray.push_back({ Corners[1].x, Corners[1].y, 1 });
                  IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
              }
              if (FacesScreen[2] != FacesScreen[4]) // Back and Top
              {
                  IndexArray.push_back({ Corners[4].x, Corners[4].y, 4 });
                  IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
              }
              if (FacesScreen[2] != FacesScreen[5]) // Back and Bottom
              {
                  IndexArray.push_back({ Corners[5].x, Corners[5].y, 5 });
                  IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
              }
              if (FacesScreen[3] != FacesScreen[4]) // Left and Top
              {
                  IndexArray.push_back({ Corners[2].x, Corners[2].y, 2 });
                  IndexArray.push_back({ Corners[6].x, Corners[6].y, 6 });
              }
              if (FacesScreen[3] != FacesScreen[5]) // Left and Bottom
              {
                  IndexArray.push_back({ Corners[7].x, Corners[7].y, 7 });
                  IndexArray.push_back({ Corners[3].x, Corners[3].y, 3 });
              }

              // Calculate center
              center = {};
              for (int32 i = 0; i < IndexArray.size(); ++i)
              {
                  center.x += IndexArray[i].x;
                  center.y += IndexArray[i].y;
              }
              center.x /= IndexArray.size();
              center.y /= IndexArray.size();

              // Sort the elements in counter-clockwise order
              std::sort(IndexArray.begin(), IndexArray.end(), CompareCcwOrder);

              // Only take every second point since there are 2 copies of every point in IndexArray
              int32* Result = new int32[IndexArray.size() / 2];
              for (int32 i = 0; i < IndexArray.size() / 2; ++i)
              {
                  Result[i] = IndexArray[i * 2].z;
              }
              return Result;
          }

      One thing that sucks about the above code is that for every edge, I need to write out in code which points describe it, which makes the function really long. What also sucks is that every point is included twice, since 2 edges can share a point, which is another inefficiency I want to get rid of. So far, however, except for the 4-vertex case, this code seems to be working properly. Also, when I choose vertices for edges, I write the index of the vertex into the z component, since I don't actually need z for sorting in counter-clockwise order. I need to get rid of the global center variable and maybe make the comparison a functor instead (a sketch of that change is below). If there are any modifications that can make this faster or simpler, please leave a comment. Thanks a lot for the help getting this working, everyone!
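      Since the post above mentions replacing the global center with a functor, here is a minimal sketch of that change, reusing the same Vector3f/float32 types; the comparator logic is copied from CompareCcwOrder above:

          struct CcwComparator
          {
              Vector3f Center; // carried with the comparator instead of a global

              bool operator()(const Vector3f& a, const Vector3f& b) const
              {
                  if (a.x - Center.x >= 0.0f && b.x - Center.x < 0.0f) { return true; }
                  if (a.x - Center.x < 0.0f && b.x - Center.x >= 0.0f) { return false; }
                  if (a.x - Center.x == 0.0f && b.x - Center.x == 0.0f)
                  {
                      if (a.y - Center.y >= 0.0f || b.y - Center.y >= 0.0f) { return a.y > b.y; }
                      return b.y > a.y;
                  }

                  float32 det = (a.x - Center.x) * (b.y - Center.y) - (b.x - Center.x) * (a.y - Center.y);
                  if (det < 0.0f) { return true; }
                  if (det > 0.0f) { return false; }

                  float32 d1 = (a.x - Center.x) * (a.x - Center.x) + (a.y - Center.y) * (a.y - Center.y);
                  float32 d2 = (b.x - Center.x) * (b.x - Center.x) + (b.y - Center.y) * (b.y - Center.y);
                  return d1 > d2;
              }
          };

          // Usage inside GetOutlineList, after computing the center:
          //     std::sort(IndexArray.begin(), IndexArray.end(), CcwComparator{ center });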