
Question on using parts of a single texture with multiple instances of the same geometry


Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.


#1 timothyjlaird   Members   


Posted 26 January 2014 - 02:53 PM

What is the best way to use parts of a single texture with multiple instances of the same geometry?


I am using WebGL (OpenGL ES).


I have this texture here...

[attached image: a single texture divided into four equal quadrants]

This is the vertex array buffer (simple quad) that I want to re-use...

//position (x, y), point size, color (r, g, b), texture (s, t) interleaved
var vertData = new Float32Array([
    -0.25,  0.25,    4.0,    1.0, 0.0, 0.0,    0.0, 0.5,
    -0.25, -0.25,    4.0,    0.0, 1.0, 0.0,    0.0, 0.0,
     0.25,  0.25,    4.0,    0.0, 0.0, 1.0,    0.5, 0.5,
     0.25, -0.25,    4.0,    1.0, 1.0, 0.0,    0.5, 0.0
]);

Those (s,t) coordinates select the bottom-left quarter of the image when rendered in WebGL. What I want to do is use this as sort of a 'texture atlas' (I think that's the buzzword) so I can use a single texture for 'n' (say >100) quads without having to repeat all of the vertex data each time I want to use a different part of the single texture. So what's the best way to do that? I can think of some solutions but I'm kind of a novice so I don't know what is best to use. I could...


a) Change the shaders to pass in what part of the texture I want to use

b) Repeat the vertex buffer data four times (in this case), once for each piece of the texture I want to use, with different st coordinates.

c) Get rid of the 'texture atlas' and just use four (in this case) different textures.


How do most people solve this problem? I know it's sort of trivial with such simple geometry but it seems like it could get out of hand fast...
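To make option (a) a bit more concrete, the offset math could look something like this (a hypothetical helper of my own naming, assuming an n-by-n atlas of equal-sized tiles; the offset would then be fed to the shader as a uniform):

```javascript
// Sketch: compute the (s, t) offset of tile (col, row) in an n-by-n atlas
// of equal-sized tiles. All names are made up for illustration.
function atlasOffset(col, row, tilesPerSide) {
    var tileSize = 1.0 / tilesPerSide;
    return [col * tileSize, row * tileSize];
}

// For the 2x2 atlas above: bottom-left tile is (0, 0), top-right is (1, 1).
var bottomLeft = atlasOffset(0, 0, 2); // [0.0, 0.0]
var topRight   = atlasOffset(1, 1, 2); // [0.5, 0.5]
```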

#2 haegarr   Members   


Posted 26 January 2014 - 03:34 PM

Solution c) is probably the worst: it not only requires calling the drawing method once for each quad, but also switching the texture for each call.


Solution a) presumably means using a uniform vector to pass in a texture offset?! If so, you still have to invoke a draw method once for each quad, this time with changing uniforms.


Solution b) may mean that you have a VBO ready to use on the GPU, and issue a draw call for the sub-mesh to use. Again, you would call a drawing method once for each quad.


OTOH, b) may mean that you build the VBO as needed and send it to the GPU (a.k.a. geometry batching). Assuming the vertex structure shown is needed, a VBO for 100 quads consumes 100 * 4 * 8 * 4 = 12,800 bytes. (For comparison: a texture will consume significantly more; e.g. an uncompressed RGB texture at 512*512 texels consumes 786,432 bytes.) There is a break-even point where the overhead of issuing that many draw calls becomes less efficient than sending the small VBO to the GPU. I don't know where it is. But in general I think that batching the geometry and sending it as a dynamic draw VBO is the way to go. Think of a way to make the VBO data smaller (is "point size" really needed?).
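A rough sketch of what that batching could look like on the JavaScript side (a simplified vertex layout with only position and st, and all names made up; the result would then go to the GPU with gl.bufferData(gl.ARRAY_BUFFER, out, gl.DYNAMIC_DRAW)):

```javascript
// Sketch of geometry batching: one big Float32Array holding n quads, each a
// copy of the same template quad shifted by its own (s, t) atlas offset.
// Layout per vertex here: x, y, s, t (4 floats) to keep the example small.
var FLOATS_PER_VERTEX = 4;
var VERTS_PER_QUAD = 4;

function buildBatch(template, stOffsets) {
    var n = stOffsets.length;
    var out = new Float32Array(n * VERTS_PER_QUAD * FLOATS_PER_VERTEX);
    for (var q = 0; q < n; q++) {
        for (var v = 0; v < VERTS_PER_QUAD; v++) {
            var src = v * FLOATS_PER_VERTEX;
            var dst = (q * VERTS_PER_QUAD + v) * FLOATS_PER_VERTEX;
            out[dst]     = template[src];                       // x
            out[dst + 1] = template[src + 1];                   // y
            out[dst + 2] = template[src + 2] + stOffsets[q][0]; // s + offset
            out[dst + 3] = template[src + 3] + stOffsets[q][1]; // t + offset
        }
    }
    return out;
}
```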


BTW: the particular sub-images should not touch each other, or else interpolation may cause color bleeding from neighbors.

#3 mhagain   Members   


Posted 26 January 2014 - 09:44 PM

For your solution (a), and since you're using equal-sized subimages in your atlas, you could just add s-offset and t-offset as an additional 2-float vertex attrib per-instance.
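The data side of such a per-instance attrib might be sketched like this (a hypothetical helper; the actual GL calls are left as comments since they need a live context, with ANGLE_instanced_arrays being the WebGL 1 extension that provides the attribute divisor):

```javascript
// Sketch: one (sOffset, tOffset) pair per instance, for use with instanced
// drawing. Tiles are given as (col, row) in an n-by-n atlas of equal tiles.
function buildInstanceOffsets(tiles, tilesPerSide) {
    var tileSize = 1.0 / tilesPerSide;
    var data = new Float32Array(tiles.length * 2);
    for (var i = 0; i < tiles.length; i++) {
        data[i * 2]     = tiles[i][0] * tileSize; // sOffset
        data[i * 2 + 1] = tiles[i][1] * tileSize; // tOffset
    }
    return data;
}

// With the ANGLE_instanced_arrays extension (WebGL 1), roughly:
// var ext = gl.getExtension('ANGLE_instanced_arrays');
// gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
// ext.vertexAttribDivisorANGLE(offsetAttribLoc, 1); // advance once per instance
// ext.drawArraysInstancedANGLE(gl.TRIANGLE_STRIP, 0, 4, numInstances);
```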


With full OpenGL, and where texture arrays are available, using a texture array instead of an atlas would be the preferred approach here: it would avoid the problem of adjacent images bleeding into each other, and you would also be able to mipmap them properly.  In that case, a single float indicating which array slice to use would be your additional per-instance attrib.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

#4 MaxDZ8   Members   


Posted 27 January 2014 - 12:05 AM

I strongly support mhagain and his proposal of using texture arrays.

There's really no other way around it. Tiling will screw you up sooner or later. Just say NO.

Previously "Krohm"

#5 timothyjlaird   Members   


Posted 28 January 2014 - 07:06 PM

Do texture units in WebGL have performance comparable to texture arrays in OpenGL? Or am I thinking apples and oranges?

#6 haegarr   Members   


Posted 29 January 2014 - 01:58 AM

A texture unit is the place where a texture can be bound. The equivalent in the shader script is a sampler used to access the bound texture. A GPU nowadays provides several units to work with; the exact number can be queried at runtime.


An array texture (considering that the basis is 2D here) means that a texture has a 3rd dimension to sample, i.e. it is built of layered 2D texel images. As an array, the 3rd dimension is not interpolated. How many layers are available, and at what maximal 2D size, can also be queried at runtime.


When bound, an array texture needs only a single texture unit and a single sampler. The 3rd texture co-ordinate is used to select which layer you want to access. With the image shown in your OP in mind, the array texture could be built of 4 layers, each one holding one quadrant of the original image.
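To make that concrete, here is a hypothetical mapping from an atlas-space (s, t) in the 2x2 case to the array-texture form, i.e. which layer plus the (s, t) within that layer; in the shader you would then sample with a 3-component co-ordinate like vec3(s, t, layer):

```javascript
// Sketch: map an atlas-space (s, t) in a 2x2 atlas to an array-texture
// lookup: the layer index, plus the (s, t) within that layer. Layers are
// numbered row-major from the bottom-left quadrant. Names are made up.
function atlasToArray(s, t) {
    var col = s < 0.5 ? 0 : 1;
    var row = t < 0.5 ? 0 : 1;
    return {
        layer: row * 2 + col,     // e.g. bottom-left quadrant -> layer 0
        s: (s - col * 0.5) * 2.0, // rescale to [0, 1] within the layer
        t: (t - row * 0.5) * 2.0
    };
}
```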


Notice that this method is fine if all of your sub-images are nearly equal in size. As soon as a few sub-images are very big compared to the majority of the remaining ones, using an array texture this way is wasteful w.r.t. memory (mhagain has mentioned this already: "… using equal-sized subimages ...").


The reasoning for using an array texture here is that it supports mip-mapping. When using mip-mapping with a texture atlas instead, the sub-images must lie in regions bounded by power-of-two co-ordinates. This may play a role if the textures are used for billboards, particles, decals, and so on. OTOH, if they are used for sprites or UI glyphs, mip-mapping normally plays no role, and using an atlas is still an option. If both ways work equally well for you, you should prefer array textures.
