

kloffy

Member Since 10 Apr 2005
Offline Last Active Dec 31 2014 05:28 AM

Topics I've Started

Critique my approach to writing RAII wrappers

04 September 2014 - 04:21 AM

Recently, I have played around with wrapping up a couple of handy C libraries in order to have a nicer, more modern C++11 programming interface. I am aware that there can be associated problems; however, I still find RAII wrappers handy in many cases.

So, the straightforward approach would be something like this (untested):
class Texture
{
public:
	Texture():
		_id{0}
	{
		glGenTextures(1, &_id);
		
		if (!*this)
		{
			throw std::runtime_error("Failed to acquire texture.");
		}
	}
	
	~Texture()
	{
		if (*this)
		{
			glDeleteTextures(1, &_id);
		}
	}
	
	explicit operator bool() const
	{
		return _id != 0;
	}
	
	
	// Non-Copyable
	Texture(Texture const& other) = delete;
	Texture& operator=(Texture const& other) = delete;
	
	// Moveable
	Texture(Texture&& other):
		_id{0}
	{
		swap(*this, other);
	}
	Texture& operator=(Texture&& other)
	{
		swap(*this, other);
		return *this;
	}
	
	// Wrapper Methods
	//void bind();
	//void image(...);
	//void subImage(...);
	
private:
	GLuint _id;
	
	friend void swap(Texture& lhs, Texture& rhs)
	{
		using std::swap;
		swap(lhs._id, rhs._id);
	}
};

Initially this works fine, but it tries to do two things at the same time: provide wrapper methods and manage lifetime. This becomes a problem when you receive a handle from the C API whose lifetime is already managed in some other way. In that case, you cannot use the wrapper methods. For example, this isn't possible:
Texture getBoundTexture()
{
	GLuint boundTexture = 0;
	glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*) &boundTexture);
	return {boundTexture};
}

Even if there were a matching constructor, this code would be buggy, since there would now be two RAII objects managing one lifetime. When one of them is destroyed, the resource gets freed, and the second destructor results in a double delete. Really, it should be possible to distinguish between objects that manage lifetimes and those that don't. Well, you might say there already is such a thing: smart pointers! I could create my Texture object on the heap and use smart pointers to manage its lifetime. However, there is really no need to create objects on the heap if we instead generalize smart pointers. In fact, this has been proposed. However, it is not part of the standard library yet and - what I think is even worse - it is not easily possible to associate wrapper methods with the handle.
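For comparison, the heap-based alternative I mean would look something like this (a sketch only; useTexture is a hypothetical consumer, and <memory> is assumed):
// One owning Texture (the RAII class above) lives on the heap; the smart pointer
// decides when its destructor - and thus glDeleteTextures - runs, exactly once.
auto texture = std::make_shared<Texture>();
useTexture(texture); // hypothetical consumer taking a std::shared_ptr<Texture>
// Non-owning users could simply hold a Texture* or Texture& instead.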

So, instead I have come up with the following approach (which I am sure has its own problems, which is why I am looking for feedback):

First, lifetime management is encapsulated in a separate class (similar to the proposed unique_resource):
template<typename T>
class Unique
{
public:
	template<typename... Args>
	Unique(Args&&... args):
		_value{}
	{
		T::New(_value, std::forward<Args>(args)...);
		
		if (!_value)
		{
			throw std::runtime_error("Failed to acquire resource.");
		}
	}
	
	~Unique()
	{
		if (_value)
		{
			T::Delete(_value);
		}
	}
	
	Unique(Unique const& other) = delete;
	Unique& operator=(Unique const& other) = delete;
	
	Unique(Unique&& other):
		_value{}
	{
		swap(*this, other);
	}
	Unique& operator=(Unique&& other)
	{
		swap(*this, other);
		return *this;
	}
	
	T& 			operator*() 		{ return _value; }
	T const& 	operator*() const 	{ return _value; }
	
	T* 			operator->() 		{ return &_value; }
	T const* 	operator->() const 	{ return &_value; }
	
private:
	T _value;
	
	friend void swap(Unique& lhs, Unique& rhs)
	{
		using std::swap;
		swap(lhs._value, rhs._value);
	}
};

And the wrappers look like this:
class Texture
{
public:
	typedef GLuint Handle;
	
	static void New(Texture& object)
	{
		glGenTextures(1, &object._handle);
	}
	
	static void Delete(Texture& object)
	{
		glDeleteTextures(1, &object._handle);
	}
	
	Texture(): _handle{0} {}
	Texture(Handle const& handle): _handle{handle} {}
	
	Handle const& get() const { return _handle; }
	
	explicit operator bool() const
	{
		return _handle != 0;
	}
	
	// Wrapper Methods
	//void bind();
	//void image(...);
	//void subImage(...);
	
private:
	Handle _handle;
	
	friend void swap(Texture& lhs, Texture& rhs)
	{
		using std::swap;
		swap(lhs._handle, rhs._handle);
	}
};

The usage could be as follows (artificial example):
{
	Texture bound = getBoundTexture(); // Implementation as before, now works.

	Unique<Texture> managed;
	
	//Setup texture etc.
	//managed->image(...);
	
	managed->bind();
	
	// Draw Something

	bound.bind();
}

So, the wrappers behave like plain pointers, whereas if you want their lifetime managed, you wrap them in Unique<Texture>. At the end of the block, the managed texture is destroyed, whereas the plain one is untouched. Of course, it would also be possible to implement a Shared<Texture> in the future.
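For what it is worth, a Shared<T> could probably reuse the same New/Delete hooks and delegate the reference counting to std::shared_ptr with a custom deleter. A rough, untested sketch (assuming <memory> and <stdexcept>):
template<typename T>
class Shared
{
public:
	template<typename... Args>
	Shared(Args&&... args):
		_value(new T{}, [](T* value) { if (*value) { T::Delete(*value); } delete value; })
	{
		T::New(*_value, std::forward<Args>(args)...);

		if (!*_value)
		{
			throw std::runtime_error("Failed to acquire resource.");
		}
	}

	// Copies share ownership; the handle is released when the last copy goes away.
	T& 			operator*() 		{ return *_value; }
	T const& 	operator*() const 	{ return *_value; }

	T* 			operator->() 		{ return _value.get(); }
	T const* 	operator->() const 	{ return _value.get(); }

private:
	std::shared_ptr<T> _value;
};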

Anyway, I am curious to hear your thoughts. Please critique away...

Metafunction to check whether a template specialization exists

14 March 2013 - 01:06 PM

This should be pretty basic template metaprogramming, but everything I have found on Google isn't quite the thing I am looking for. Suppose I have a template with an empty body that users can specialize:

template<typename Target, typename Source, typename Enable = void>
struct conversion
{
	static const bool value = false;

	static Target to(Source const& value);
	static Source from(Target const& value);
};

 

And a possible specialization might look like this:

template<typename Target>
struct conversion<Target, std::string>
{
	static const bool value = true;

	static Target to(std::string const& value)
	{
		return boost::lexical_cast<Target>(value);
	}

	static std::string from(Target const& value)
	{
		return boost::lexical_cast<std::string>(value);
	}
};

 

Now, I can check whether a conversion exists by checking conversion<Target, Source>::value. However, I would like to simplify this further, eliminating the need to explicitly define the integral value constant. This can probably be done using a check like sizeof(?) == sizeof(char). However, since there are a few pitfalls, I wonder if someone could point me towards the "best practice" approach.
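For illustration, the kind of sizeof-based check I have in mind would look roughly like this (untested sketch; it assumes the primary template is left undefined, replacing the version with the value constant above, so that an unspecialized conversion<T, S> stays an incomplete type):

// Primary template declared but never defined (replaces the defined version above);
// every specialization must provide a body.
template<typename Target, typename Source, typename Enable = void>
struct conversion;

template<typename Target, typename Source>
class has_conversion
{
	typedef char yes[1];
	typedef char no[2];

	// sizeof(conversion<T, S>) is only well-formed for a defined specialization,
	// so substitution fails (and the ellipsis overload is picked) otherwise.
	template<typename T, typename S>
	static yes& test(char (*)[sizeof(conversion<T, S>)]);
	template<typename, typename>
	static no& test(...);

public:
	static const bool value = sizeof(test<Target, Source>(nullptr)) == sizeof(yes);
};

// has_conversion<Target, Source>::value is true exactly when a defined
// specialization of conversion<Target, Source> exists.

The main pitfall I am aware of with completeness checks like this is that the result can depend on whether the specialization is visible at the point of first instantiation, which is part of why I am asking for the best-practice version.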


Convert binary number (fixed point) to string of decimal digits

08 January 2013 - 05:05 PM

I am hacking together a quick implementation of a rudimentary big number library. I am aware that there are plenty of those around, but mine is for a very specific purpose (I want it to work under the restrictions of C++ AMP). Besides, I figured that it would be a good exercise.

So, all of my arithmetic operations work (using basic textbook long multiplication and division). However, I am starting to get tired of thinking in binary. I would like to output my numbers in decimal digits. For integers, I have just implemented the naive approach:
str = "";
num = x;
do {
    digit = num%base;
    num = num/base;
    str = to_char(digit) + str;
} while (num>0);

Edit: Uhm, not sure where the rest of this went. Basically, my question was how to do the same for fixed-point numbers. I described an overly complicated solution, and Álvaro whipped up the right answer straight away.
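For reference, the textbook method for the fractional part (a sketch with hypothetical names, shown for a single 32-bit word for simplicity; the same loop works with big-number arithmetic, and it needs <string> and <cstdint>) is to repeatedly multiply the fraction by ten and peel off the integer part as the next digit:
// 'frac' holds the fractional bits of a fixed-point value, i.e. value = frac / 2^fracBits.
std::string fractionToString(uint32_t frac, int fracBits, int digits)
{
	std::string str;
	for (int i = 0; i < digits; ++i)
	{
		uint64_t scaled = static_cast<uint64_t>(frac) * 10;                        // value * 10
		str += static_cast<char>('0' + (scaled >> fracBits));                      // integer part is the next digit
		frac = static_cast<uint32_t>(scaled & ((uint64_t(1) << fracBits) - 1));    // keep the new fractional part
	}
	return str;
}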

VAO - Is it necessary to redo setup each time buffer data changes?

11 March 2011 - 04:21 AM

I am a bit surprised that my VAOs seem to get invalidated each time I provide new buffer data. Are you really supposed to call glEnableClientState and set up the gl*Pointers all over again after each glBufferSubData? That would make VAOs pretty useless for dynamic vertex data, wouldn't it?

I was hoping that it would be more like this:

	// Initialization (Once)

	glGenVertexArrays(1, &vao);
	glGenBuffers(1, &vbo);
	
	glBindVertexArray(vao);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);

	glBufferData(GL_ARRAY_BUFFER, size * sizeof(vertex_t), vertices, GL_DYNAMIC_DRAW);

	glEnableClientState(GL_VERTEX_ARRAY);
	glEnableClientState(GL_NORMAL_ARRAY);
	glEnableClientState(GL_COLOR_ARRAY);	
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);
	
	glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(0 * sizeof(vec3_t)));
	glNormalPointer(GL_FLOAT, sizeof(vertex_t), (GLvoid*)(1 * sizeof(vec3_t)));
	glColorPointer(4, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(2 * sizeof(vec3_t)));
	glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(2 * sizeof(vec3_t) + sizeof(vec4_t)));

	glBindVertexArray(0);

	// Update (Each Frame)

	glBindVertexArray(vao);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);

	glBufferSubData(GL_ARRAY_BUFFER, 0, size * sizeof(vertex_t), vertices);

	glBindVertexArray(0);

	// Render (Each Frame)

	glBindVertexArray(vao);
	glBindTexture(GL_TEXTURE_2D, texture);
    	
	glDrawArrays(GL_TRIANGLES, 0, size);
	
	glBindVertexArray(0);

However, it seems like I need to do this in order for it to work:

 	// Update (Each Frame)

	glBindVertexArray(vao);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);

	glBufferSubData(GL_ARRAY_BUFFER, 0, size * sizeof(vertex_t), vertices);

	glEnableClientState(GL_VERTEX_ARRAY);
	glEnableClientState(GL_NORMAL_ARRAY);
	glEnableClientState(GL_COLOR_ARRAY);	
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);
	
	glVertexPointer(3, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(0 * sizeof(vec3_t)));
	glNormalPointer(GL_FLOAT, sizeof(vertex_t), (GLvoid*)(1 * sizeof(vec3_t)));
	glColorPointer(4, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(2 * sizeof(vec3_t)));
	glTexCoordPointer(2, GL_FLOAT, sizeof(vertex_t), (GLvoid*)(2 * sizeof(vec3_t) + sizeof(vec4_t)));

	glBindVertexArray(0);

Is this really necessary, or am I doing something else wrong?

GL_ALPHA_TEST replacement in OpenGL ES 1.1 (no shaders)

03 March 2011 - 10:10 AM

So it seems like the performance of GL_ALPHA_TEST on iOS is very poor. To quote Apple:

Graphics hardware often performs depth testing early in the graphics pipeline, before calculating the fragment’s color value. If your application uses an alpha test in OpenGL ES 1.1 or the discard instruction in an OpenGL ES 2.0 fragment shader, some hardware depth-buffer optimizations must be disabled. In particular, this may require a fragment’s color to be completely calculated only to be discarded because the fragment is not visible.

An alternative to using alpha test or discard to kill pixels is to use alpha blending with alpha forced to zero. This effectively eliminates any contribution to the framebuffer color while retaining the Z-buffer optimizations. This does change the value stored in the depth buffer and so may require back-to-front sorting of the transparent primitives.

If you need to use alpha testing or a discard instruction, draw these objects separately in the scene after processing any primitives that do not require it. Place the discard instruction early in the fragment shader to avoid performing calculations whose results are unused.
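For reference, plain alpha blending in ES 1.1 is simple enough (this part is clear to me):
glDisable(GL_ALPHA_TEST);                            // no fragments are killed, so early depth testing stays effective
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // fragments with alpha 0 contribute no color
// Transparent geometry still has to be drawn after the opaque geometry, sorted back to front.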

I am wondering what exactly they mean by "use alpha blending with alpha forced to zero". How can you accomplish this? Alternatively, is there any other way to omit/hide pixels based on their alpha value?
