Is it possible to render a 2D scene from the side?

Started by blueboy35
10 comments, last by Ravyne 15 years, 2 months ago
Hi. I have this app... let's call it a 2D game where some critters do stuff in their 2D world. My critters decide what to do based on what they see around them. "Around them" meaning a bitmap gathered with glReadPixels that is centered on them. Now I want to give them a perspective view... which in their 2D world would mean a 1D view. The problem is that polygon thickness is... well... zero, so if I try to render a 2D scene from the side, I get darkness. I suppose I could extend each edge of each polygon with a perpendicular polygon that matches the color of the edge, but I don't want that. So is there any other way to project a 2D scene to 1D in OpenGL? Thanks in advance.
Well, I think you covered it... 2D is MxN, where 1D is 1xN. You need to figure out a way to represent that 2D data in 1 dimension. I'm not sure what you mean by "I don't want that". You either have a visible polygon, or you have nothingness.

What exactly are you trying to accomplish? Maybe we can help you think of other approaches.
Sketch out what you want.
_______________________"You're using a screwdriver to nail some glue to a ming vase. " -ToohrVyk
Well, the critters are controlled by a neural net that gets its inputs from color data, which right now is something like a 50x50 pixel bitmap. They get a bird's-eye view of their immediate surroundings... which means lots of data to process.
I want to give them a perspective view which would mean 2 things:
- less data to process (just a vector instead of a matrix)
- the ability to see objects that are further away
So I'm wondering if there's anything like gluLookAt, which projects a 3D scene onto a 2D screen, to project a 2D scene onto a 1D "screen".
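There is no stock GLU helper for a 2D-to-1D projection, but the math is the same idea one dimension down: transform each world point into the critter's view space and divide the lateral offset by the depth. A minimal sketch in C++ (function and parameter names here are only illustrative, not from the thread):

    #include <cmath>

    // Project a 2D world point onto a 1D "screen" for a critter at (ex, ey)
    // looking along angle 'heading'. Returns false if the point is behind the
    // eye or outside the field of view. 'fov' is the horizontal field of view
    // in radians; 'screenWidth' is the number of 1D pixels.
    bool projectTo1D(float px, float py,          // world point
                     float ex, float ey,          // eye position
                     float heading, float fov,
                     int screenWidth, int* outPixel)
    {
        // Transform into eye space: 'forward' is depth, 'lateral' is sideways offset.
        float dx = px - ex, dy = py - ey;
        float forward =  std::cos(heading) * dx + std::sin(heading) * dy;
        float lateral = -std::sin(heading) * dx + std::cos(heading) * dy;
        if (forward <= 0.0f) return false;        // behind the critter

        // Perspective divide: the same offset shrinks with distance.
        float ndc = (lateral / forward) / std::tan(fov * 0.5f);  // roughly [-1, 1]
        if (ndc < -1.0f || ndc > 1.0f) return false;             // outside the view

        *outPixel = static_cast<int>((ndc * 0.5f + 0.5f) * (screenWidth - 1));
        return true;
    }

Mapping every visible object through something like this, nearest-depth-wins per 1D pixel, would give the same result a renderer would, just done by hand.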
It just dawned on me that perhaps you are allowing your knowledge of the implementation to confuse you a little bit. Sure, you are rendering your game onto a 2D view (a polygon)... but that is simply how you are representing a world which is understood to be 3D. If you want a view from the perspective of a creature on the ground, that's just a different way of viewing it.

An example from above (like yours) and from the ground (like what you want):
[Image: the scene viewed from above]

[Image: the scene viewed from the ground]
Quote:Original post by blueboy35
Well, the critters are controlled by a neural net that gets its inputs from color data, which right now is something like a 50x50 pixel bitmap. They get a bird's-eye view of their immediate surroundings... which means lots of data to process.
I want to give them a perspective view which would mean 2 things:
- less data to process (just a vector instead of a matrix)
- the ability to see objects that are further away
So I'm wondering if there's anything like gluLookAt, which projects a 3D scene onto a 2D screen, to project a 2D scene onto a 1D "screen".


I see... I didn't understand your question. Instead of using "colors", I would suggest that you calculate some useful values to load into the vector. As an example, instead of using RGBA colors in 1D, you could use closest object on +X/-X/+Y/-Y or anything else that might be useful, and feed THAT into the neural net.
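For instance, if the world is kept in (or can be sampled into) a simple occupancy grid, those "closest object along an axis" values are just linear scans from the critter's cell. A rough sketch, assuming a hypothetical row-major grid where nonzero cells are obstacles (none of these names come from the thread):

    #include <cstddef>
    #include <vector>

    // Distance (in cells) from (cx, cy) to the nearest nonzero cell along one axis.
    // Returns maxRange if nothing is found within range or the scan leaves the grid.
    static int scanAxis(const std::vector<int>& grid, int width, int height,
                        int cx, int cy, int stepX, int stepY, int maxRange)
    {
        for (int d = 1; d <= maxRange; ++d) {
            int x = cx + stepX * d, y = cy + stepY * d;
            if (x < 0 || x >= width || y < 0 || y >= height) return maxRange;
            if (grid[static_cast<std::size_t>(y) * width + x] != 0) return d;
        }
        return maxRange;
    }

    // Build the 4-element feature vector {+X, -X, +Y, -Y} to feed the neural net.
    std::vector<int> nearestObstacleFeatures(const std::vector<int>& grid,
                                             int width, int height,
                                             int cx, int cy, int maxRange)
    {
        return { scanAxis(grid, width, height, cx, cy, +1,  0, maxRange),
                 scanAxis(grid, width, height, cx, cy, -1,  0, maxRange),
                 scanAxis(grid, width, height, cx, cy,  0, +1, maxRange),
                 scanAxis(grid, width, height, cx, cy,  0, -1, maxRange) };
    }

Four distances is a much smaller input than 2,500 pixels, which is the point of the suggestion.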
Quote:Original post by smitty1276
It just dawned on me that perhaps you are allowing your knowledge of the implementation to confuse you a little bit. Sure, you are rendering your game onto a 2D view (a polygon)... but that is simply how you are representing a world which is understood to be 3D. If you want a view from the perspective of a creature on the ground, that's just a different way of viewing it.


You would be right... except the world isn't understood to be 3D... it is 2D... so I guess I'll just have to extend it to 3D like I said in my original post to make a projection work in OpenGL and then just extract one line of pixels from that projection.
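That extrusion is straightforward: each edge of each 2D polygon becomes a vertical quad whose color matches the edge. A rough immediate-mode sketch, assuming a hypothetical Edge2D structure (the names are mine, not from the thread):

    #include <GL/gl.h>

    // Hypothetical edge of a 2D polygon: endpoints in the XY plane plus a color.
    struct Edge2D { float x0, y0, x1, y1, r, g, b; };

    // Extrude one 2D edge into a vertical quad of height 'h', so the flat scene
    // has something to show when viewed edge-on (immediate mode for brevity).
    void drawExtrudedEdge(const Edge2D& e, float h)
    {
        glColor3f(e.r, e.g, e.b);
        glBegin(GL_QUADS);
            glVertex3f(e.x0, e.y0, 0.0f);
            glVertex3f(e.x1, e.y1, 0.0f);
            glVertex3f(e.x1, e.y1, h);
            glVertex3f(e.x0, e.y0, h);
        glEnd();
    }

With every edge drawn this way, a normal gluPerspective/gluLookAt camera placed at the critter's position no longer sees pure darkness, and one scanline of the result is the 1D view.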
Quote:Original post by blueboy35
You would be right... except the world isn't understood to be 3D... it is 2D... so I guess I'll just have to extend it to 3D like I said in my original post to make a projection work in OpenGL and then just extract one line of pixels from that projection.
Again, you seem to be confusing the rendering techniques with the "world" that your critters are living in. You can go to Google Maps and look at a 2D satellite image of your house. That doesn't mean you live in a 2D world. Nor does rendering a quad that is orthogonal to your 2D view somehow change the "world" in which your critters are living... it is simply another way of viewing the world.

This is neither here nor there, of course, since you're just trying to generate inputs for a function. I realized that and had some ideas in my last comment.

Let me repeat: sketch out what exactly you are talking about. A picture is worth a thousand words.
_______________________"You're using a screwdriver to nail some glue to a ming vase. " -ToohrVyk
You could raytrace it.

Set up how much resolution you require (I don't know how one would make it equal resolution), and cast a ray from one point on the 2D map to another point.

When it hits an object that is not part of the ground, render it as black.

Otherwise, render it as white.

There's probably a very easy way to do this ray test, but one way would be to represent each pixel as a square and do intersection tests. (probably a bad idea, unless you implement some acceleration scheme).
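A cheaper variant of that ray test is to march each ray across a grid in fixed steps and stop at the first occupied cell, one ray per 1D pixel spread across the field of view. A rough sketch, assuming the same kind of hypothetical occupancy grid as above (all names are illustrative):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Cast 'numRays' rays from (ex, ey) across a field of view centered on 'heading'.
    // Each entry is 1 for "hit something" (the thread's "render it as black") and
    // 0 for "only ground". 'grid' is row-major; nonzero cells are obstacles.
    std::vector<int> castView1D(const std::vector<int>& grid, int width, int height,
                                float ex, float ey, float heading, float fov,
                                int numRays, float maxDist, float step)
    {
        std::vector<int> view(numRays, 0);
        for (int i = 0; i < numRays; ++i) {
            float a = heading - fov * 0.5f + fov * (i + 0.5f) / numRays;
            float dx = std::cos(a), dy = std::sin(a);
            for (float t = 0.0f; t < maxDist; t += step) {
                int x = static_cast<int>(ex + dx * t);
                int y = static_cast<int>(ey + dy * t);
                if (x < 0 || x >= width || y < 0 || y >= height) break;
                if (grid[static_cast<std::size_t>(y) * width + x] != 0) {
                    view[i] = 1;
                    break;
                }
            }
        }
        return view;
    }

Fixed stepping can skip thin obstacles if the step is too large; a grid-traversal (DDA-style) walk would be the exact version, but the idea is the same.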

If you were to use rasterization to do this, you could try making a bunch of polygons that represent raised portions of your 2D map. Then simply render a 1 x width image with OpenGL from the position of the viewer.

I might be confused, however.
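Reading that 1-pixel-tall image back is the same glReadPixels trick the original 50x50 bitmap uses, just with a one-row viewport. A minimal sketch (renderSceneFromCritter is a placeholder for whatever draws the raised geometry with a camera at the critter's position; it is not from the thread):

    #include <GL/gl.h>
    #include <cstddef>
    #include <vector>

    // Render the scene into a 1-pixel-tall strip and read it back, giving a
    // width-long row of RGB values to feed the neural net.
    std::vector<unsigned char> readView1D(int width, void (*renderSceneFromCritter)())
    {
        glViewport(0, 0, width, 1);                 // one-row viewport
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderSceneFromCritter();

        std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);        // tightly packed rows
        glReadPixels(0, 0, width, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
        return pixels;
    }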

This topic is closed to new replies.
