# OpenGL ES - Texture Projection?

## Recommended Posts

Hey guys, not sure where to ask this, but I've searched high and low and I'm more stuck than a glue sniffer on a Tuesday. Can someone give me some help working out how to do texture projection in OpenGL ES? It doesn't have the EYE_LINEAR texgen stuff, which means it all has to be done manually, which is fine if you know how. :) I've googled for hours today and just keep finding the same examples using the OpenGL eye-linear stuff. In standard OpenGL, no worries, but ES is hampering me. Any help or links to examples or anything would be well and truly appreciated! Thanks!

##### Share on other sites
Do you mean 'projective texturing'? Where you have a 'projector' (light source) that projects an image (texture) onto the objects in your scene? I.e. where, instead of manually supplying a texture coordinate for each vertex, you calculate them such that it looks like an image is being projected onto the objects?

If so, I learned it from "The Cg Tutorial", chapter 9.3. The book is now available online for free at: http://developer.nvidia.com/object/cg_tutorial_home.html

(I could have written a longer explanation here, but I'm not yet 100% sure we're talking about the same thing.)

Note, I don't know anything about OpenGL ES, but it seems like you do have shaders. Also, I've never used glTexGen. If you have shaders, glTexGen is unnecessary, since you can calculate the texture coords however you want in your shaders yourself.
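For reference, "calculate them yourself" usually amounts to one extra matrix multiply in the vertex shader. A minimal sketch, with the GLSL kept as a C string; the names (u_MVP, u_ProjectorMatrix, a_Position, v_ProjTexCoord) are made up for illustration, and u_ProjectorMatrix would be bias * projectorProjection * projectorView * model:

```c
/* Vertex shader for projective texturing (GLSL ES 1.00 style), stored as a
 * C string.  The fragment shader would sample the projected texture with
 * texture2DProj(tex, v_ProjTexCoord), which performs the divide by q. */
static const char *projective_vs =
    "uniform mat4 u_MVP;              /* camera model-view-projection        */\n"
    "uniform mat4 u_ProjectorMatrix;  /* bias * projProj * projView * model  */\n"
    "attribute vec4 a_Position;       /* object-space vertex position        */\n"
    "varying vec4 v_ProjTexCoord;\n"
    "void main()\n"
    "{\n"
    "    gl_Position    = u_MVP * a_Position;\n"
    "    v_ProjTexCoord = u_ProjectorMatrix * a_Position;\n"
    "}\n";
```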

##### Share on other sites
Quote:
Original post by ZaiPpA
Note, I don't know anything about OpenGL ES, but it seems like you do have shaders. Also, I've never used glTexGen. If you have shaders, glTexGen is unnecessary, since you can calculate the texture coords however you want in your shaders yourself.
OpenGL ES 2.0 has shaders, ES 1.1 does not. Unfortunately, ES 2.0 devices are still relatively rare, while 1.1 devices (such as the iPhone) are quite common.

##### Share on other sites
Thanks, but yeah, unfortunately no shader support under OpenGL ES 1.1.

That's exactly what I'm trying to do though: project a texture like a slide projector, but I need to do the matrix maths all manually.

Any ideas?

Thanks again.

##### Share on other sites
I have the same question.
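For anyone else who ends up on this thread: the usual glTexGen-free approach is to let the ES 1.1 texture matrix do the work. Feed the object-space vertex positions in a second time as texture coordinates, and load bias * projectorProjection * projectorView * model into the texture matrix; the fixed-function pipeline then divides by the resulting q per fragment, which gives the slide-projector effect. A rough, untested sketch (buildProjectorMatrix is a hypothetical helper, and it assumes your ES 1.1 implementation handles the q divide the way desktop GL does):

```c
#include <GLES/gl.h>

/* Hypothetical helper: writes, in column-major order,
 *     bias * projectorProjection * projectorView * model
 * where bias maps clip space [-1,1] into texture space [0,1]:
 *     | 0.5  0    0    0.5 |
 *     | 0    0.5  0    0.5 |
 *     | 0    0    0.5  0.5 |
 *     | 0    0    0    1   |
 */
extern void buildProjectorMatrix(GLfloat out[16]);

void drawProjected(const GLfloat *positions, GLsizei vertexCount,
                   GLuint projectorTexture)
{
    GLfloat texMat[16];
    buildProjectorMatrix(texMat);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, projectorTexture);

    /* The positions double as texture coordinates; with size 3 the fixed
     * function pipeline treats each one as (x, y, z, 1). */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(3, GL_FLOAT, 0, positions);

    /* The texture matrix applies the projector transform; the resulting
     * (s, t, r, q) is divided by q during rasterisation. */
    glMatrixMode(GL_TEXTURE);
    glLoadMatrixf(texMat);
    glMatrixMode(GL_MODELVIEW);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    /* Reset the texture matrix so ordinary texturing is unaffected. */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
}
```

Back-projection (the image also showing up behind the projector) is a separate problem, same as with the desktop texgen path.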

### Similar Content

• Hi guys,
With OpenGL not having a dedicated SDK, how were libraries like GLUT and the like ever written?
Could someone write an OpenGL library from scratch these days? How would you even go about it?
Obviously this question stems from the fact that there is no OpenGL SDK.
DirectX is a bit different, as MS has the advantage of its relationships with the vendors, plus full access to the OS source code and the whole works.
If I were to attempt to write the most basic lib possible to access OpenGL on the GPU, how would I go about it?
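For what it's worth, loaders like GLEW and glad don't need an SDK: the driver exposes everything at runtime, and the loader just asks for function pointers by name once a context exists. A rough Windows-flavoured sketch of the idea (elsewhere it's dlopen plus glXGetProcAddress or eglGetProcAddress); treat it as an illustration rather than production code:

```c
#include <windows.h>
#include <GL/gl.h>

/* Resolve a GL entry point by name.  Requires a current GL context.
 * wglGetProcAddress only returns pointers for post-1.1 functions, so fall
 * back to opengl32.dll for the old core ones; some drivers also return
 * small bogus values instead of NULL, hence the extra checks. */
static void *getGLProc(const char *name)
{
    void *p = (void *)wglGetProcAddress(name);
    if (p == NULL || p == (void *)1 || p == (void *)2 ||
        p == (void *)3 || p == (void *)-1)
    {
        HMODULE gl = LoadLibraryA("opengl32.dll");
        p = (void *)GetProcAddress(gl, name);
    }
    return p;
}

/* Usage: grab a modern function and call it through the pointer.
 * The typedef matches the glGenBuffers prototype from glext.h. */
typedef void (APIENTRY *PFNGLGENBUFFERS)(GLsizei n, GLuint *buffers);

void example(void)
{
    PFNGLGENBUFFERS myGenBuffers = (PFNGLGENBUFFERS)getGLProc("glGenBuffers");
    GLuint buf;
    if (myGenBuffers)
        myGenBuffers(1, &buf);
}
```

A full loader is essentially this repeated for every function in the headers, which is why they tend to be generated rather than written by hand.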

• Hello! As an exercise for delving into modern OpenGL, I'm creating a simple .obj renderer. I want to support things like varying degrees of specularity, geometry opacity, things like that, on a per-material basis. Different materials can also have different textures. Basic .obj necessities. I've done this in old school OpenGL, but modern OpenGL has its own thing going on, and I'd like to conform as closely to the standards as possible so as to keep the program running correctly, and I'm hoping to avoid picking up bad habits this early on.
Reading around on the OpenGL Wiki, one tip in particular really stands out to me on this page:
For something like a renderer for .obj files, this sort of thing seems almost ideal, but according to the wiki, it's a bad idea. Interesting to note!
So, here's what the plan is so far as far as loading goes:
Set up a type for materials so that materials can be created and destroyed. They will contain things like diffuse color, diffuse texture, geometry opacity, and so on, for each material in the .mtl file. Since .obj files are conveniently split up by material, I can load different groups of vertices/normals/UVs and triangles into different blocks of data for different models.
When it comes to the rendering, I get a bit lost. I can either:
• Between drawing triangle groups, call glUseProgram to use a different shader for that particular geometry (so a unique shader just for the material that is shared by this triangle group), or
• Between drawing triangle groups, call glUniform a few times to adjust different parameters within the "master shader", such as specularity, diffuse color, and geometry opacity.
In both cases, I still have to call glBindTexture between drawing triangle groups in order to bind the diffuse texture used by the material, so there doesn't seem to be a way around having the CPU do *something* during the rendering process instead of letting the GPU do everything all at once.
The second option seems less cluttered, however. There are fewer shaders to keep up with while one "master shader" handles it all, and I don't have to duplicate any code or compile multiple shaders. Arguably, I could always have the shader program for each material be embedded in the material itself and be auto-generated upon loading the material from the .mtl file, but this still leads to constantly calling glUseProgram, much more than is probably necessary to properly render the .obj. There seem to be differing opinions on whether it's okay to use hundreds of shaders or whether it's best to stick to tens of them.
So, ultimately, what is the "right" way to do this? Does using a "master shader" (or a few variants of one) bog down the system compared to using hundreds of shader programs, each dedicated to its own corresponding material? Keeping in mind that the "master shader" would have to track these additional uniforms and potentially have numerous branches of ifs, those ifs may lead to additional and unnecessary processing. But would that be more expensive than constantly calling glUseProgram to switch shaders, or than storing all those shaders to begin with?
With all these angles to consider, it's difficult to come to a conclusion. Both possible methods work, and both seem rather convenient for their own reasons, but which is the most performant? Please help this beginner/dummy understand. Thank you!
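For what it's worth, the "master shader" option usually ends up looking something like the sketch below: one program stays bound, and between triangle groups only a few uniforms and the material's diffuse texture change before the draw call. The struct layout and names (Material, TriangleGroup, uDiffuseColor, ...) are made up for illustration:

```c
/* GL declarations come from whichever loader you use (glad, GLEW, ...). */
#include <GL/glew.h>

/* Hypothetical per-material data parsed from the .mtl file. */
typedef struct {
    GLfloat diffuse[4];      /* Kd, with the dissolve/opacity in .a */
    GLfloat specularity;     /* Ns                                  */
    GLuint  diffuseTexture;  /* map_Kd                              */
} Material;

/* Hypothetical triangle group: one index range per material. */
typedef struct {
    const Material *material;
    GLsizei         indexCount;
    GLsizeiptr      byteOffset;  /* offset into the bound element buffer */
} TriangleGroup;

/* One glUseProgram, then per group only uniforms + texture change. */
void drawGroups(GLuint masterProgram, GLint uDiffuseColor, GLint uSpecularity,
                const TriangleGroup *groups, int groupCount)
{
    glUseProgram(masterProgram);

    for (int i = 0; i < groupCount; ++i)
    {
        const Material *m = groups[i].material;

        glUniform4fv(uDiffuseColor, 1, m->diffuse);
        glUniform1f(uSpecularity, m->specularity);
        glBindTexture(GL_TEXTURE_2D, m->diffuseTexture);

        glDrawElements(GL_TRIANGLES, groups[i].indexCount, GL_UNSIGNED_INT,
                       (const void *)groups[i].byteOffset);
    }
}
```

Whether the extra uniforms and branches in the master shader cost more than switching programs is hardware-dependent, so the honest answer is usually "profile both".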

• I want to make a professional Java 3D game with a server program, a database, packet handling for multiplayer and client-server communication, map rendering, models, and so on, like Minecraft or World of Tanks. Which aspects of Java should I learn, and where can I learn Java, LWJGL, and OpenGL rendering?

• A friend of mine and I are making a 2D game engine as a learning experience and to hopefully build upon the experience in the long run.

- What I'm using:
• C++. Since I'm learning this language while in college and it's one of the more popular languages to make games with, why not.
• Visual Studio. I'm using Windows, so yeah.
• SDL or GLFW. I was thinking about SDL, since the research I've done on it caught my interest, but I hear SDL is a huge package compared to GLFW, so I may start with GLFW while learning, since I might get overwhelmed by SDL.
- Questions:
• Knowing what we want in the engine, what should our main focus be in terms of learning?
• File management, with headers, functions, etc.: how can I properly manage files without confusing myself and my friend when sharing code?
• Alternatives to Visual Studio: my friend has a Mac and can't properly use Visual Studio; is there another alternative to it?

• Both functions are available since 3.0, and I'm currently using glMapBuffer(), which works fine.
But I was wondering if anyone has experienced an advantage in using glMapBufferRange(), which lets you specify the range of the buffer to be mapped. Could this be only a safety measure, or does it improve performance?
Note: I'm not asking about glBufferSubData()/glBufferData. Those two are irrelevant in this case.
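For what it's worth, the interesting part of glMapBufferRange() isn't so much the range as the access flags that only it accepts: GL_MAP_INVALIDATE_RANGE_BIT, GL_MAP_INVALIDATE_BUFFER_BIT and GL_MAP_UNSYNCHRONIZED_BIT let you tell the driver it doesn't have to preserve the old contents or wait for the GPU to finish with them, which glMapBuffer(GL_WRITE_ONLY) can't express. A small sketch of a streaming-style partial update (untested, names made up):

```c
#include <GL/glew.h>   /* any loader exposing the GL 3.0 entry points */
#include <string.h>

/* Overwrite `size` bytes at `offset` of an existing VBO.  The invalidate
 * flag tells the driver we will fully overwrite that range, so it may hand
 * back scratch memory instead of stalling until the GPU is done with it. */
void updateBufferRange(GLuint vbo, GLintptr offset, GLsizeiptr size,
                       const void *data)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_RANGE_BIT);
    if (ptr)
    {
        memcpy(ptr, data, (size_t)size);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}
```

Whether that turns into a measurable win depends on the driver and on how the buffer is used, so it's worth benchmarking with your own workload.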
