Kwizatz

Member Since 07 Apr 2000
Offline Last Active Jun 19 2014 05:08 PM

Topics I've Started

Best way to get 2D overlays or images on screen (for a GUI).

31 May 2013 - 12:17 PM

I am finally putting some effort into my open-source GUI library, which is meant to be generic but has OpenGL 3.x as the main focus for now, since that is what my engine uses.

I decided to shed a lot of the cruft it had accumulated, dropping OpenGL 1.5 support and focusing on the OpenGL 3.2+ core profile.

To keep the library generic enough that it could eventually be used with D3D, SDL, or some other graphics API, I have separated it into modules: a core, which handles all graphics-API (GAPI) independent operations, and a renderer, which handles the API-specific ones.

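As an illustration of that split (hypothetical names, not the library's actual interface), the core could talk to any backend through an abstract renderer along these lines:

#include <cstdint>

// The seam between the GAPI-independent core and a specific renderer.
class Renderer
{
public:
    virtual ~Renderer() {}
    // The core hands over the updated software image buffer.
    virtual void UpdateScreenImage ( const uint8_t* pixels, uint32_t width, uint32_t height ) = 0;
    // The backend draws that image over the frame.
    virtual void RenderScreen() = 0;
};
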
I keep an image buffer that represents the screen, or rather the drawable area of the window hosting the GAPI, and the base renderer class draws lines, rectangles, and so on into this buffer.

In the specific GAPI renderer, OpenGL in this case, I keep a texture object. Each frame (for now; later I will do it only when necessary), I issue a glTexSubImage2D with the contents of the system-memory image buffer, then render a quad (a triangle strip, really) textured with it over the whole screen.

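In OpenGL terms, the per-frame step is something like the following (a sketch assuming RGBA8 data, a texture whose storage was already allocated with glTexImage2D, and placeholder names for the texture, VAO, and buffer):

// Upload the whole software buffer into the existing texture storage.
glBindTexture ( GL_TEXTURE_2D, screen_texture );
glTexSubImage2D ( GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer );

// Then draw the fullscreen quad as a 4-vertex triangle strip.
glBindVertexArray ( screen_quad_vao );
glDrawArrays ( GL_TRIANGLE_STRIP, 0, 4 );
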
So I was wondering whether this is the way to go, or whether I should be looking at some of the new fancy stuff in OpenGL 3.x+. It would be nice to keep the image data entirely on the GPU, but I don't think there is a simple way to access that memory to change, say, only the 15 pixels worth of a text caret. Is there?

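For what it's worth, glTexSubImage2D can already update just a sub-rectangle of the texture, so a caret-sized change does not have to re-upload the whole buffer. A sketch, assuming the core tracks a dirty rectangle (x, y, w, h) inside the full-width image:

// Tell GL how to step through the full-size source image.
glPixelStorei ( GL_UNPACK_ROW_LENGTH, width );
glPixelStorei ( GL_UNPACK_SKIP_PIXELS, x );
glPixelStorei ( GL_UNPACK_SKIP_ROWS, y );
// Upload only the dirty rectangle.
glTexSubImage2D ( GL_TEXTURE_2D, 0, x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer );
// Restore the defaults.
glPixelStorei ( GL_UNPACK_ROW_LENGTH, 0 );
glPixelStorei ( GL_UNPACK_SKIP_PIXELS, 0 );
glPixelStorei ( GL_UNPACK_SKIP_ROWS, 0 );
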
I considered keeping multiple textures (one per window/widget), but I don't know how optimal that would be; I guess as long as the window dimensions don't change much, it may be an improvement.


Why won't this compile?

30 January 2013 - 08:54 PM

class Base
{
public:
    virtual bool Load() = 0;
    bool Load(const char* filename);
};

bool Base::Load(const char* filename)
{
    /* read file to some memory buffer member */
    bool success = Load();
    /* close file */
    return success;
}

class Derived : public Base
{
public:
    bool Load();
};

bool Derived::Load()
{
    /* do whatever needs to be done */
    return true;
}

void SomeOtherClass::Load()
{
    Derived* derived = new Derived;
    derived->Load("somefile");
}

I am getting a "function does not take 1 arguments" error on the call to Load in SomeOtherClass::Load(). If I rename the Load(const char*) function to something else, or cast the derived pointer to the base class, it compiles and links fine.

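For reference, Derived::Load() hides the inherited overload, so the cast workaround mentioned above finds it again by calling through the base type; a using-declaration inside Derived (shown as a comment) is the other common way to re-expose it:

// Calling through the base class finds Load(const char*) again.
static_cast<Base*>(derived)->Load("somefile");

// Alternatively:
// class Derived : public Base
// {
// public:
//     using Base::Load; // brings Base::Load(const char*) into Derived's scope
//     bool Load();
// };
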
Same error on Visual Studio 2010 and 2012.

Any ideas?

Thanks!


Adding or Concatenating Transforms (Scale, Rotation, Translation)

08 January 2013 - 05:18 PM

I keep my transformations as a scale vector, a rotation quaternion, and a translation vector (SRT) rather than as a single matrix; when I need a matrix, I convert the two vectors and the quaternion into a 4x4 matrix.

To concatenate two transforms, what I do is generate the matrices from the two SRT transforms and multiply them together, which is fine since I rarely need to convert back from matrix to SRT.

In the long run I think some matrix multiplications could be shaved off if I could "add transforms" directly, so I am trying to do that, but I ran into some issues.

My basic approach is as follows:

I have two SRT structures which I want to concatenate.

I do an element-wise multiplication of the scale vectors, S = (S1[0]*S2[0], S1[1]*S2[1], S1[2]*S2[2]). This seems to work, but for now I have no scaling, so all values are 1; I just need to know whether this is correct or whether something is wrong there.

I do a simple quaternion multiplication, R1 * R2. Again, this works as expected: the 3x3 sub-matrix is the same as when I convert to matrices and multiply them.

I do an element-wise addition of the translation vectors, so T = (T1[0]+T2[0], T1[1]+T2[1], T1[2]+T2[2]), and this is where it all goes wrong: the values in the matrices are different. Now that I think of it, this may have to do with the last element of the matrix, the 4th component of a position vector...

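Writing the product out in block form suggests why plain addition fails (a sketch assuming column vectors and the translate * rotate * scale order that the R1 * R2 observation above implies):

M_1 M_2 = (T_1 R_1 S_1)(T_2 R_2 S_2) \qquad \Rightarrow \qquad t = t_1 + R_1 \, (S_1 \circ t_2)

Here \circ is element-wise scaling: the second transform's translation has to be scaled and rotated by the first transform before being added, so T1 + T2 only matches when R1 is the identity and S1 is all ones.
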
TL;DR version:

So anyway, long story short: I want to concatenate/add transforms in scale, rotation, translation format and only then convert the result to a 4x4 matrix, rather than converting the SRTs to matrices and multiplying those, but the translation vector addition is giving me trouble.

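In code, the whole concatenation might look like the sketch below, assuming hypothetical Vec3/Quat types with element-wise operators and a Rotate method (a sketch of the math above, not working library code):

struct SRT
{
    Vec3 scale;
    Quat rotation;
    Vec3 translation;
};

// Concatenates two transforms so the result matches the matrix product M1 * M2.
SRT Concatenate ( const SRT& t1, const SRT& t2 )
{
    SRT result;
    // Element-wise; only exact while the scales stay axis-aligned (e.g. uniform scale).
    result.scale = t1.scale * t2.scale;
    result.rotation = t1.rotation * t2.rotation;
    // The second translation must be scaled and rotated by the first transform first.
    result.translation = t1.translation + t1.rotation.Rotate ( t1.scale * t2.translation );
    return result;
}
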
Any ideas?

Thanks in advance!


Matrix Multiplication vs Compression Dilemma

18 September 2012 - 03:28 PM

I am almost done with my skeletal animation file format specification, but I have come across a dilemma.

I am using animation channels, which are arrays containing one value per frame. I came up with the idea of not storing the array if all the values are the same, so the value is stored only once; I call this, obviously, a constant channel.

From there I thought I could do something about other channels that change very little, for example some sort of trigger channel, where values are mostly zeros with ones here and there to signal a sound or a footprint, so I added Run Length Encoding (RLE) channels.

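To illustrate the three kinds of channel (a sketch with hypothetical names and layout, not the actual format specification):

#include <cstddef>
#include <cstdint>
#include <vector>

enum class ChannelType : uint8_t { Raw, Constant, RLE };

struct Channel
{
    ChannelType type;
    std::vector<float> values;  // Raw: one per frame; Constant: one; RLE: one per run
    std::vector<uint32_t> runs; // RLE only: run lengths, summing to the frame count
};

// Returns the channel value at a given frame regardless of encoding.
float Sample ( const Channel& channel, uint32_t frame )
{
    switch ( channel.type )
    {
    case ChannelType::Constant:
        return channel.values[0];
    case ChannelType::Raw:
        return channel.values[frame];
    case ChannelType::RLE:
        for ( std::size_t i = 0; i < channel.runs.size(); ++i )
        {
            if ( frame < channel.runs[i] )
            {
                return channel.values[i];
            }
            frame -= channel.runs[i];
        }
        break;
    }
    return 0.0f;
}
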
So far so good, but! Once I came to store joint transformations, I noticed that when I store full model-space transforms, the child joints change as the parent changes, because each transform 'includes' the parent's transform, so the transformation vectors do not compress well this way.

However, if I store the joint transforms in 'parent space', then if a joint never moves, rotates, or scales, its values never change, and very good compression ratios can be achieved.

Now, the problem is that if I store the transforms in parent space, I still need to calculate the model-space transforms at run time, and that may involve several matrix multiplications that are avoided when the transforms are precomputed, as in the first case.

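The run-time cost in question is roughly one matrix multiply per joint per frame. A sketch, assuming joints are stored parent-before-child and a hypothetical Mat4 type:

#include <cstddef>
#include <vector>

// parent[j] is the index of joint j's parent, or -1 for the root.
void ToModelSpace ( const std::vector<Mat4>& local, const std::vector<int>& parent, std::vector<Mat4>& model )
{
    for ( std::size_t j = 0; j < local.size(); ++j )
    {
        model[j] = ( parent[j] < 0 ) ? local[j] : model[parent[j]] * local[j];
    }
}
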
So my question is: what should I do? Should I go for (perceived) speed or for size? Is this even a valid concern? Perhaps the overhead is negligible, or maybe the smaller memory footprint compensates for the computation cost; I don't know.

Ideas? Discuss!

From GL_UNSIGNED_BYTE to uvec4?

05 September 2012 - 12:14 AM

Hello All,

I am writing some GLSL skeletal animation code, and I am passing the weight information to the vertex shader as unsigned bytes, something like this:


glVertexAttribPointer ( weight_indices, 4, GL_UNSIGNED_BYTE, GL_FALSE, 8, ( ( char * ) NULL + ( 0 ) ) );
glVertexAttribPointer ( weights, 4, GL_UNSIGNED_BYTE, GL_TRUE, 8, ( ( char * ) NULL + ( 4 ) ) );

In the shader (I am using #version 140) I receive those like so:


in uvec4 weight_indices;
in vec4 weights;

The weights come out right: normalization turns them into floats in the range [0,1], and they add up to 1 or thereabouts. However, weight_indices is not getting the values I expect, which would be a simple cast from uint8_t to uint32_t.

I think there must be something I am missing. If I read the manual page for glVertexAttribPointer, integer values are converted to floats, so there may be some sort of casting issue when the float is cast back to int... but I am just assuming here. Any ideas?

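For comparison, core OpenGL 3.0+ also has a separate entry point that keeps integer attributes as integers instead of converting them to floats. A sketch using the same stride and offset as above:

// Integer variant: the values arrive in the shader's uvec4 without any float conversion.
glVertexAttribIPointer ( weight_indices, 4, GL_UNSIGNED_BYTE, 8, ( ( char * ) NULL + ( 0 ) ) );
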
Thanks!
