

Member Since 12 Feb 2011
Offline Last Active Mar 05 2015 01:15 AM

Posts I've Made

In Topic: Illegal Instruction, possible causes?

28 October 2013 - 12:47 AM

I think I found the issue. My compiler's "Enhanced Instruction Set" setting was "AVX". I changed this and I no longer get illegal instructions on the desktop. I suspect my laptop's CPU supports AVX, while my desktop's does not.

In Topic: Illegal Instruction, possible causes?

13 October 2013 - 03:45 PM

Hello PixelArtist,


For each machine, I am compiling in VS2012 with the same property files, except for paths to supporting libraries. I am also compiling on the machine that runs the program. One difference is that my laptop has a GeForce 700-series card, while my desktop has a 400-series card driving the monitors and a 200-series card so games can offload PhysX, CUDA, etc. Both should support OpenGL 4.2, which my program targets. A simple program that just sets the clear color and clears the screen each frame, using the same API calls and the same supporting boilerplate libraries that crash my bigger program, runs without complaint.


What kind of configuration would be the most likely candidate?


Thanks for any continued assistance.

In Topic: very beginer

30 June 2011 - 04:02 AM

After a google search, the information in this thread seems to be what I am looking for, but I have some questions:

- So "row-major" and "column-major" refer to how a 4x4 (in this case) matrix is laid out in memory, and
- "row-vector" and "column-vector" refer to whether <x,y,z,w> is a 1x4 or a 4x1 matrix, respectively?
- Pre-multiply is v' = v*M (implying "row-vector"), and post-multiply is v' = M*v (implying "column-vector")?

The thing that confuses me is that being row- or column-major should not change the actual "pencil and paper" math. Say A and B are 4x4 matrices. Regardless of whether they are arranged in memory in row-major or column-major order, A * B = C should always produce the same C. However, many posts seem to associate an order of transformations with a "row-major" or "column-major" layout. For transformation multiplication order, I'd think that whether you pre- or post-multiply would be the determining factor.

If I am just trying to create a single transformation matrix, it will be a different matrix depending on whether I am pre- or post-multiplying; one will be the transpose of the other, correct?

Now, how does this all translate to pre OpenGL 3.2 transform stacks? Take the following pseudo code for example.

pushMatrix( lookAt(eye, center, up) );
pushMatrix( translate(x, y, z) );
pushMatrix( rotate(axis, theta) );

Thank you for any assistance.