# D.V.D

1. ## Camera Matrix and Axis Vectors

That's correct. The matrix multiplication operation does not care what is in the matrices. They might not contain basis vectors at all, but perhaps weightings of how much you like different ice-cream flavours. Regardless of which mathematical convention you're using (whether you write your basis vectors horizontally or vertically), your matrix multiplication function will be implemented the same way.

However, your 1D-array-storage convention does matter. For example, if you have `float data[16];` and you write `data[2]`, is that row-0/column-2, or is it row-2/column-0?

That depends on what you mean. Row-major and column-major ordering generally refer to the computer-science topic of how you map a 2D array to a 1D array. This is just an internal detail of how your math library decides to store the matrix elements in memory.

Row-vectors and column-vectors generally refer to the math topic of whether you write vectors horizontally or vertically. This choice actually does affect your math (e.g. whether you write projection * view or view * projection).

However, the terms "row major" and "column major" also sometimes get used to describe the mathematical conventions, which makes everything pretty confusing. If someone writes their basis vectors in the rows of a matrix, they might say that it's a "row major matrix" -- there they're talking about their math, not about computer-science arrays :(

Okay, thanks for clarifying!!
2. ## Camera Matrix and Axis Vectors

Yeah, the easiest way that I find to create a view matrix is to construct a "local-to-world" (aka world) matrix as if the camera were an object in the world, and then invert this matrix to get a "world-to-camera" (aka view) matrix. If a 3x3 matrix only contains the three axes, then transposing it is the same as inverting it (and cheaper).

No they don't. You can use column-major maths, which looks on paper like:

$$\begin{bmatrix} Right.x & Up.x & Forward.x & Pos.x \\ Right.y & Up.y & Forward.y & Pos.y \\ Right.z & Up.z & Forward.z & Pos.z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

or row-major maths, which looks on paper like:

$$\begin{bmatrix} Right.x & Right.y & Right.z & 0 \\ Up.x & Up.y & Up.z & 0 \\ Forward.x & Forward.y & Forward.z & 0 \\ Pos.x & Pos.y & Pos.z & 1 \end{bmatrix}$$

And you can use column-major arrays, or row-major arrays.

If you use column-major maths with column-major arrays, or if you use row-major maths with row-major arrays, then your array of 16 floats will look like:

Right.x, Right.y, Right.z, 0, Up.x, Up.y, Up.z, 0, Forward.x, Forward.y, Forward.z, 0, Pos.x, Pos.y, Pos.z, 1

If you use column-major maths with row-major arrays, or if you use row-major maths with column-major arrays, then your array of 16 floats will look like:

Right.x, Up.x, Forward.x, Pos.x, Right.y, Up.y, Forward.y, Pos.y, Right.z, Up.z, Forward.z, Pos.z, 0, 0, 0, 1

All four of those combinations of conventions are supported by D3D and OpenGL. The mathematical convention alters how you write your math, e.g. whether you write vOut = vIn * projection * view * world or vOut = world * view * projection * vIn. The array convention alters how you write your matrix library, and whether you write `column_major float4x4 myMatrix;` or `row_major float4x4 myMatrix;` in your shader code.

If you're using an existing matrix library, then both of these choices may have already been made for you.

Okay, this makes sense.
Just to clarify though, are the basis vectors in a row-order matrix the rows, or are they always the columns? I found a blog post on the ryg blog that talks about matrix ordering, and he says that whatever algorithm you use for matrix multiplication, it doesn't depend on the ordering of the matrices. Currently, I think of matrix ordering as: you write matrices a certain way and the columns are always the basis vectors, but you can choose to store things such that either rows or columns are sequential in memory.

This makes some sense; I'll go over it a bit to better understand it. In the videos, though, matrix multiplication is not explained as dot products (as it usually is in other resources), since the lectures explain matrices more as a change of basis vectors and as linear transformations. I know it's equivalent, but the matrix multiplication formula in the videos is easier to understand, though it requires knowing what your basis vectors are.

If B is some matrix that A can multiply with, and B has n basis vectors, then matrix multiplication is defined as:

A*B = A*Basis0 | A*Basis1 | ... | A*Basisn

where each result of A times the i-th basis vector becomes the i-th column of the resulting matrix. Then you can decompose matrix-vector multiplication into each component of the vector times the corresponding basis vector in the matrix, and you add all of those results together. Basically, it makes the code become something super simple like this:

```cpp
inline v4 operator*(m4 A, v4 B)
{
    v4 Result = B.x*A.v[0] + B.y*A.v[1] + B.z*A.v[2] + B.w*A.v[3];
    return Result;
}

inline m4 operator*(m4 A, m4 B)
{
    m4 Result = {};
    Result.v[0] = A*B.v[0];
    Result.v[1] = A*B.v[1];
    Result.v[2] = A*B.v[2];
    Result.v[3] = A*B.v[3];
    return Result;
}
```

(v[0-3] are the columns, or basis vectors, stored in the matrix.) This probably isn't the most efficient code for matrix multiplication, but it's conceptually easy to understand, and it's not as complicated as other code with a ton of inner loops and whatnot.
3. ## Camera Matrix and Axis Vectors

Oh okay, so the reason that the view matrix isn't what I think it is is that we are trying to perform the opposite of the rotations applied to the camera, and that happens to be the transpose of the 3x3 rotation matrix? So if the camera's view is rotated to the left by 90 degrees, the view matrix will contain a rotation by -90 degrees instead, right?
4. ## Camera Matrix and Axis Vectors

Hey guys,

I've been watching 3blue1brown's video series on linear algebra (https://www.youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab), and I decided to try to implement matrices and vectors myself instead of blindly following tutorial code without really understanding it. I've run into a problem with my camera matrix, specifically the rotation aspect of it. I'm working with column-ordered matrices and in a left-handed coordinate system.

From the videos, he explains that matrices are just a set of new axes, or basis vectors, which define the new x, y, z, ... axes for some vector multiplied by that matrix. As I understand it, the camera matrix (if the camera is at the origin) should transform a vector such that its x axis is the camera's horizontal vector, its y axis is the camera's up vector, and its z axis is the camera's target vector. But everywhere I look, they say that for column-ordered matrices, the first column should be [Horizontal.x, Up.x, Target.x, 0], the second column should be [Horizontal.y, Up.y, Target.y, 0], and so on. The videos say that the columns of a matrix are the new axis vectors, so that would mean the camera matrix transforms some vector such that its x axis is the x components of the horizontal, up, and target vectors, its y axis is the y components of those vectors, and so on.

My question is, how does that make sense? Shouldn't the new axis vectors be Horizontal, Up, and Target instead?
5. ## Help With Building A Debuggable Native Apk

Hey guys, I've been trying to set up a batch file that builds a native activity into an apk which I can then run and debug in Visual Studio 2015. I managed to get the apk built and signed properly, but whenever I try to debug it with Visual Studio, I get the following error:

"Unable to start debugging. Android command run-as failed. Package com.example.native_activity is not debuggable."

The app gets installed just fine on the emulator, and it runs properly on one of the two emulators that I tried. However, in both cases, I can't actually debug the apk that I built, and I tried setting everything to debug that I could, but it still doesn't work. The code I'm using for my app is the example code for a native activity from Google: http://brian.io/android-ndk-r10c-docs/Programmers_Guide/html/md_2__samples_sample--nativeactivity.html

My AndroidManifest.xml does have debuggable set to true:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- BEGIN_INCLUDE(manifest) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.native_activity"
          android:versionCode="1"
          android:versionName="1.0"
          android:debuggable="true">
    <!-- This is the platform API where NativeActivity was introduced. -->
    <uses-sdk android:minSdkVersion="9" />
    <!-- This .apk has no Java code itself, so set hasCode to false. -->
    <application android:label="@string/app_name" android:hasCode="false">
        <!-- Our activity is the built-in NativeActivity framework class.
             This will take care of integrating with our NDK code. -->
        <activity android:name="android.app.NativeActivity"
                  android:label="@string/app_name"
                  android:configChanges="orientation|keyboardHidden">
            <!-- Tell NativeActivity the name of our .so -->
            <meta-data android:name="android.app.lib_name"
                       android:value="native-activity" />
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
<!-- END_INCLUDE(manifest) -->
```

I'm using the android_native_app_glue static lib, which I'm not sure is set to debuggable, so I tried to build it myself (which worked), but I don't know how to link the version that I built with my native app code (I think it still links with the lib that's given by the NDK). This is what my build.bat looks like:

```bat
@echo off

set CodeDir=W:\untitled\code
set OutputDir=W:\untitled\build_android
set AndroidDir=%ProgramFiles(x86)%\Android\android-sdk
set AndroidCmdDir=%AndroidDir%\build-tools\21.1.2
set GlueDir=W:/untitled/code/android/glue

call ndk-build -B NDK_DEBUG=1 APP_BUILD_SCRIPT=%GlueDir%\Android.mk NDK_APPLICATION_MK=%GlueDir%\Application.mk -C %GlueDir% NDK_PROJECT_PATH=%GlueDir% NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj
call ndk-build -B NDK_DEBUG=1 APP_BUILD_SCRIPT=%CodeDir%\android\Android.mk NDK_APPLICATION_MK=%CodeDir%\android\Application.mk -C %CodeDir%\android NDK_PROJECT_PATH=%CodeDir%\android NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj

REM Create keystore for signing our apk
REM call keytool -genkey -v -keystore %OutputDir%\debug.keystore -storepass android -alias androiddebugkey -dname "filled in with relevant info" -keyalg RSA -keysize 2048 -validity 20000

pushd %OutputDir%
del *.apk >NUL 2> NUL
popd

REM Create APK file
call "%AndroidCmdDir%\aapt" package -v -f -M %CodeDir%\android\AndroidManifest.xml -S %CodeDir%\android\res -I "%AndroidDir%/platforms/android-19/android.jar" -F %OutputDir%\AndroidTest.unsigned.apk %OutputDir%
call "%AndroidCmdDir%\aapt" add W:\untitled\build_android\AndroidTest.unsigned.apk W:\untitled\build_android\lib\x86\libnative-activity.so

REM Sign the apk with our keystore
call jarsigner -sigalg SHA1withRSA -digestalg SHA1 -storepass android -keypass android -keystore %OutputDir%\debug.keystore -signedjar %OutputDir%\AndroidTest.signed.apk %OutputDir%\AndroidTest.unsigned.apk androiddebugkey
"%AndroidCmdDir%\zipalign" -v 4 %OutputDir%\AndroidTest.signed.apk %OutputDir%\AndroidTest.aligned.apk
```

The debug key already exists; I just don't recreate it in every build, which is why that part is commented out. The first ndk-build builds the native glue, while the second one builds the native activity. The Android.mk for the glue is the same as the one provided in the NDK, with no changes. The Application.mk is the same as the one I use for the native activity. This is what my Android.mk and Application.mk look like for the native activity:

```make
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := native-activity
LOCAL_SRC_FILES := main.c
LOCAL_LDLIBS := -llog -landroid -lEGL -lGLESv1_CM
LOCAL_STATIC_LIBRARIES := android_native_app_glue

include $(BUILD_SHARED_LIBRARY)

$(call import-module,android/native_app_glue)
```

```make
APP_ABI := x86
APP_PLATFORM := android-9
```

I looked online, and they say one way to make sure your apk is debuggable is to unzip it and see if the lib folder has the gdbserver files. I did that for mine and the gdbserver files were there, so I'm not sure why my apk is not debuggable. Is it because I'm not properly linking with my own version of the native glue, and if so, how do I make my makefile link with my version of the glue rather than the default provided by the NDK?
6. ## Ndk-Build Stops All Next Commands In Batch File

Ah, of course it was that simple. Thanks a lot, this looks like it fixed my problem!
7. ## Ndk-Build Stops All Next Commands In Batch File

Hello, I've been building a batch file to build my Android projects with the NDK. My problem is that after I call ndk-build in the batch file to build my C/C++ code into a lib, none of the commands that come after it in the batch file execute. Here is what it looks like:

```bat
ndk-build -B NDK_DEBUG=1 NDK_LIBS_OUT=%OutputDir%\lib NDK_OUT=%OutputDir%\obj
mkdir A
```

My batch file sets the path for OutputDir and only has these two calls (for testing), yet the mkdir never executes, because the folder A is never created. Once I remove the ndk-build command, mkdir executes. The same thing seems to happen when I call these two commands:

```bat
%AndroidCmdDir%\dx --dex --output="classes.dex" "fibpackage\FibLib.class" "fibpackage\FibActivity.class"
%AndroidCmdDir%\aapt package -v -f -M \AndroidManifest.xml -I %AndroidDir%/platforms/android-23/android.jar -F %OutputDir%/unsigned.apk %OutputDir%
```

If I have other commands after either of these two calls, they also don't get executed. I looked online, but it doesn't seem like anyone else is having this issue. I'm using Android NDK 12, which should be the latest one currently (downloaded from Android Studio's NDK manager), and I'm building for android-23.

10. ## Performance of an experimental SVO raycaster

I'm not sure if it will make a difference, but bcmpinc changed his method partway through to ditch the cube map and instead simply project the screen onto the octree, doing frustum checks using a quadtree hierarchical z-buffer. In his latest source code he doesn't have the cube map as part of the main render loop, and his pdf describes the old technique he was using. He stated that the new method of just projecting the screen onto the octree was a lot faster than creating the cube map, so it might help to see what his current method is doing, unless you specifically want to use the cube map and trace rays.

Lastly, the main premise of his algorithm was to get rid of divisions in the code and to not use the concept of rays (at which he succeeded; there is no perspective division at all in the code), so I'm not entirely sure why you are raycasting when attempting to mimic his algorithm. He has a couple of posts on getting his algorithm onto the GPU that might be worth checking out if you're interested.
11. ## Compute cube edges in screenspace

But it can't be propagated down an octree. For example, the vertices that make up the outline for the root of the octree are not the same vertices that make up the outline for the children of that root. Unfortunately, it looks like I have to recalculate the outline on each traversal until my cube is entirely in the top-left, top-right, bottom-left, or bottom-right quadrant (so that it can't be in two parts at the same time). This is probably going to be a lot more performance intensive, but I'll give it an implementation. Also, I don't want to approximate a cube with a quad that encompasses it in screen space, because I'm trying to minimize my traversals, and using quads covers more area than the object actually takes up (creating many false positives). Sorry for leaving for a while; some personal stuff and work ate up all my time.
12. ## Compute cube edges in screenspace

Ah, that makes sense. I'm not entirely sure what your masks represent; could you give some more insight? It looks really interesting. I probably made a mistake with my math; I'll look over how I call it. Does anyone have any ideas about my post on how the outline isn't always the same and can't be propagated down the octree traversal? Is my only option to recalculate it until my nodes are in one of the four screen quadrants?
13. ## Compute cube edges in screenspace

For your snippet, are box min and box max the corners of the cube? Are they expected to be axis-aligned (an AABB)? I'm attempting to implement your snippet and get it working, but I'm not sure what space your function assumes. It seems like world space, but when I implement it, it doesn't generate proper values:

```cpp
Mat4x4f MatPos = SetWorldPos(Pos);
Mat4x4f MatRot = SetRotation(Rot);
Mat4x4f MatScale = SetScale(Scale);
Mat4x4f MatCamera = SetCamera(Camera);
Mat4x4f MatProject = SetProjection(Screen, 90.0f, 0.01f, 100.0f);
Mat4x4f Transform = MatCamera * MatPos * MatRot * MatScale;

Vector3f Outline[6];
int32 len = GetOutline(MatPos * MatRot * MatScale * Vector3f{ -0.5f, -0.5f, -0.5f },
                       MatPos * MatRot * MatScale * Vector3f{ 0.5f, 0.5f, 0.5f },
                       Camera->Pos, Outline);
for (int32 i = 0; i < 6; ++i)
{
    // Converts from NDC to screen coords
    Outline[i] = ProjectToScreen(Screen, MatProject * MatCamera * Outline[i]);
}
RenderCubeOutline(Screen, Outline[0], Outline[1], Outline[2],
                  Outline[3], Outline[4], Outline[5], 0xFFFFFFFF);
```
14. ## Compute cube edges in screenspace

Hmm, I'd have to research that a bit; I'm not entirely sure what you mean, but it sounds interesting. I'll look up what stock algorithms there are, along with the approach using a 2D convex hull. Thanks a lot!

Man, that looks a lot simpler than my code, but I haven't tested it out yet. Will do in just one second. I will for sure have questions on how some of this works.

I encountered an error with this approach in general. My idea for getting the outer edges of a cube was so that I could render octrees, figure out the outline for the root, and find which points of the octree's root are in it (and their order). When I subdivide my tree, I render the children but use the same points as I did for the root to generate an outline for them, and while my outline is correct, I ran into this error:

It may not be obvious what is happening at first, so I'll single out 2 nodes and render their 6 visible vertices.

Notice how different faces are visible for these 2 nodes. For the node on the left, only 2 faces are visible, while for the node on the right, 3 faces are visible. Here is an illustration (the perspective is exaggerated):

I'm a little confused about how to approach this. I do not want to find the outline on a per-node basis, since I can expect at least a million traversals per frame. What's more, the paper I'm trying to replicate (http://www.cs.princeton.edu/courses/archive/spr01/cs598b/papers/greene96.pdf) uses the same approach where you only render the outline, and I doubt they find the outline on a per-node basis given their performance (it would seem like way too large of a drawback). I'm thinking there may be a way of tweaking the visible faces given the cube's screen position, but the cube shares the exact same orientation as its root, so it would seem that if only 2 faces of the root are visible then only 2 should be visible for all the children as well.

EDIT: And this is what the root node looks like for the above example