libraries for 2D using 3D



I was reading through the Isometric forum and it occurred to me that one of the main problems affecting people using 2D is that they don't have access to a lot of the cool lighting acceleration that 3D programmers take for granted. The main feature people seem to need this for is day/night transitions, but it could also apply to 101 other game concepts, such as moving light sources (carrying a torch down a tunnel). As a result, they have to either use palette switching, which restricts them to 256 colours, or learn a 3D API, which takes a lot of time and trouble (in my opinion, anyway), or perhaps use the gamma ramps, which work well for day/night but not so well for individual lights.

My first thought was: isn't there some sort of library that will let you initialise D3D or OpenGL, and then later render an alpha-blended primitive to the screen to give the impression of night-time, while still allowing you to use DirectDraw or SDL or whatever for 2D graphics? I figured that was probably unreasonable, as the library wouldn't know exactly where in video memory the frame buffer is stored.

So my second thought was: is there a simple library that will initialise a 3D API for you to some sane (although not necessarily optimal) settings, and expose a simple API for things like alpha-blended blits, simple light sources, and so on? I'm sure I recall seeing one once, but that was quite a while ago. And before anyone says so, I don't consider D3DX or SDL-with-OpenGL to be anything like as simple as I mean. SDL using the glSDL back-end may be a start, however.

And if there isn't a decent library of this nature anywhere, would it be worth developing one?

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost | Asking Questions | Organising code files ]
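(To make the alpha-blended overlay idea concrete, here is a rough sketch of the kind of call involved, assuming an OpenGL context already set up with an orthographic projection matching the screen, and that the 2D scene has already been drawn. The function name and parameters are purely illustrative, not from any existing library.)

// Darken the whole frame after the 2D scene has been drawn.
// darkness = 0 means full daylight, 1 means completely black.
void DrawNightOverlay(int width, int height, float darkness)
{
    glDisable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor4f(0.0f, 0.0f, 0.1f, darkness);   // dark blue tint; alpha controls strength
    glBegin(GL_QUADS);
        glVertex2f(0.0f, 0.0f);
        glVertex2f((float)width, 0.0f);
        glVertex2f((float)width, (float)height);
        glVertex2f(0.0f, (float)height);
    glEnd();

    glDisable(GL_BLEND);
}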

I'm in the process of adding an optional OpenGL renderer to a 2D open-source game, and it might have been nice to have a library like you've suggested. It turns out that there isn't a whole lot of code involved; mostly, it's a matter of managing sprites as textures.

On the other hand, I'm working on this mostly to learn a bit about OpenGL, so I wouldn't have wanted a library to insulate me from the details.
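("Managing sprites as textures" mostly amounts to uploading each sprite once and drawing it as a textured quad. A rough sketch, assuming an orthographic projection in screen coordinates and power-of-two sprite dimensions, as 2002-era hardware requires; the names are illustrative.)

// Upload a 32-bit RGBA sprite once, at load time.
GLuint UploadSprite(const unsigned char *pixels, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

// "Blit" the sprite each frame as a textured quad.
void DrawSprite(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x, y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x, y + h);
    glEnd();
}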

For most 2D game developers, I expect the Direct3D initialisation code would be horrendous. There's masses of it, and it's all ugly. OpenGL was a lot more compact, but is that as likely to work on as many computers? I really don't know, not being graphically-minded.

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost | Asking Questions | Organising code files ]

quote:
Original post by Kylotan
OpenGL was a lot more compact, but is that as likely to work on as many computers? I really don't know, not being graphically-minded.



It's likely to work on even MORE computers than D3D, OpenGL being ported to a zillion platforms already, and D3D not.



quote:
Original post by Kylotan
For most 2D game developers, I expect the Direct3D initialisation code would be horrendous. There's masses of it, and it's all ugly.


Um, what are you talking about?
  
// Assumes width, height and hwnd are already defined elsewhere.
LPDIRECT3D8 pd3d = NULL;
LPDIRECT3DDEVICE8 pd3dDevice = NULL;

// Direct3DCreate8 returns a pointer, not an HRESULT, so check against NULL.
if((pd3d = Direct3DCreate8(D3D_SDK_VERSION)) == NULL)
{
    // Error
}

D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));

d3dpp.Windowed = FALSE;
d3dpp.BackBufferWidth = width;
d3dpp.BackBufferHeight = height;
d3dpp.BackBufferCount = 1;
d3dpp.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
d3dpp.FullScreen_PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
d3dpp.MultiSampleType = D3DMULTISAMPLE_NONE;
d3dpp.SwapEffect = D3DSWAPEFFECT_FLIP;
d3dpp.BackBufferFormat = D3DFMT_R5G6B5;

if(FAILED(pd3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                             D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpp, &pd3dDevice)))
{
    // No hardware vertex processing; try software vertex processing
    if(FAILED(pd3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &pd3dDevice)))
    {
        // Hardware driver failed, or unknown error; fall back to the reference rasterizer
        if(FAILED(pd3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_REF, hwnd,
                                     D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &pd3dDevice)))
        {
            // Couldn't create a device at all
        }
    }
}

I wouldn't exactly call that much, and I've also included a lot of error checking and fallbacks in there, which you don't need.



CEO Platoon Studios

quote:
Original post by Kylotan
For most 2D game developers, I expect the Direct3D initialisation code would be horrendous. There's masses of it, and it's all ugly.


sigh... and from a moderator, no less...

If you really want to count lines of code (a silly metric IMHO), then perhaps there is more code for D3D, but most of that code is there to give you lots of control.

As far as the original question goes, D3DXSprite gets you quite far. The "2D in DX8" article outlines a very simple framework that gives you more control. In general a "2D wrapper" for D3D is very thin. As soon as you get into lighting and "effects", you would probably want to scrap the wrapper and deal with D3D directly.

Given the thinness of a 2D D3D wrapper, the idea of using D3D for alpha and DirectDraw for drawing becomes counterproductive because of the overhead associated with switching.
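(For illustration, drawing with D3DXSprite looks roughly like the following. This is written from memory of the DX8-era D3DX interfaces, so treat the exact signatures as approximate; the file name and variables are made up.)

// Load the image once; D3DX handles the file format and texture creation.
LPDIRECT3DTEXTURE8 pTexture = NULL;
LPD3DXSPRITE       pSprite  = NULL;
D3DXCreateTextureFromFile(pd3dDevice, "player.bmp", &pTexture);
D3DXCreateSprite(pd3dDevice, &pSprite);

// Each frame: draw it at (x, y); the last argument tints/fades the sprite.
float x = 100.0f, y = 50.0f;
D3DXVECTOR2 pos(x, y);
pSprite->Begin();
pSprite->Draw(pTexture, NULL, NULL, NULL, 0.0f, &pos, 0xFFFFFFFF);
pSprite->End();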

We must move past this "2D API" vs. "3D API" business. It's a "Graphics API".



For isometric-like games, I think using plain old 2D blitters is a bit outdated. There's a bunch of effects you can do with a 3D API, like "real" lighting, zooming in and out of the map view, or particle systems for explosions... and it's usually much faster. Setting up a 3D viewport, projecting triangles to pixel level, and then doing your calculations in 2D is not that hard in D3D or OpenGL.

Going back to the original topic, a 2D engine in a 3D environment that supports lighting: yes, there is definitely a need for something like this, and yes, I think it would definitely be worth developing. The only thing close that I can think of is CDX, which uses DirectDraw.

So let's list what features would be desirable:

0. Basic 2D engine features like a scrolling map, sprites, particles...
1. Moveable light sources
2. Zooming in and out on the map
3. Alpha-blended blits
4. Of course this would all apply to iso, or whatever, views

I've used a library before (Javanerds GameFrame for Java) and I didn't like it because it restricted me from using the OO design that I wanted. So I would like to see a library that is fully OO and extensible.
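(Purely as a sketch of what a public interface covering that feature list might look like; none of these classes exist, and every name here is invented.)

// Hypothetical public interface for a hardware-accelerated 2D library.
class Sprite2D;     // an image, managed internally as a texture
class LightSource;  // a moveable point light, blended over the scene by the backend

class Renderer2D
{
public:
    // ~10 lines of D3D/OpenGL setup hidden behind one call
    virtual bool Init(int width, int height, bool fullscreen) = 0;

    virtual Sprite2D *LoadSprite(const char *filename) = 0;

    // Basic engine features: scrolling map view and zooming
    virtual void SetView(float worldX, float worldY, float zoom) = 0;

    // Blits, with and without alpha
    virtual void Draw(Sprite2D *s, float x, float y) = 0;
    virtual void DrawBlended(Sprite2D *s, float x, float y, float alpha) = 0;

    // Moveable light sources
    virtual LightSource *AddLight(float x, float y, float radius, unsigned long colour) = 0;

    virtual void Present() = 0;
    virtual ~Renderer2D() {}
};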

I have thought about a 3D API for 2D games quite a bit... I've been using Allegro for some time now, and I wish there was a simple library that would allow me to make these calls, but using a 3D API such as OpenGL or D3D.

blit(source bitmap, dest bitmap, pos x, pos y) seems much simpler to me than all that crazy polygon crap with culling and whatever all that stuff is; I'm not a 3D guy yet.

My downfall has definitely been lighting! It makes any game look better (especially 2D ones), and all the lighting I have access to with a 2D library is crappy blending-type functions (check every pixel and add/subtract specified amounts to make it appear lit).

I would love to help out with a library that would allow me to do this, because I've been trying to find one for a long time.

Some other useful functions:

blit(), rotate_blit(), lit_blit(), stretch_blit(), blend_blit(), etc.
And possibly simplified loading commands and surface creation...
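(For what it's worth, a lit_blit-style call maps almost directly onto what the hardware does anyway: draw the textured quad with a vertex colour and the card multiplies the texture by it. A rough OpenGL sketch follows; the function name is invented and this is not Allegro's API.)

// Draw a sprite (already uploaded as a GL texture) modulated by a light level.
// light = 1.0 is fully lit, 0.0 is black.
void lit_blit_gl(GLuint tex, float x, float y, float w, float h, float light)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glColor3f(light, light, light);   // texel colour is multiplied by this (GL_MODULATE)
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x, y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x, y + h);
    glEnd();
    glColor3f(1, 1, 1);               // restore default modulation
}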


[edited by - sic_nemesis on July 25, 2002 2:17:55 PM]

Actually, I'm currently working on a template-based lightweight OpenGL wrapper. It will have other non-OpenGL game-related features like sound and resource management, and higher-level classes built using the core wrapper to aid common 2D and 3D features. I'm not even sure if it's classed as a generic engine or a library.

If there is enough demand for it I could release it under the BSD license or something (when it's done, and I have no idea how long it will be before I produce a workable version I'm happy with).

quote:
Original post by CrazedGenius
sigh... and from a moderator, no less...

If you really want to count lines of code (a silly metric IMHO), then perhaps there is more code for D3D, but most of that code is there to give you lots of control.

More lines of code means more opportunity for mistakes and bugs. It's that simple. Control is a good thing, but it should be optional. I am not saying D3D is a worse API than OpenGL; I am saying that it was not exactly designed with the 2D programmer in mind.

quote:
In general a "2D wrapper" for D3D is very thin. As soon as you get into lighting and "effects", you would probably want to scrap the wrapper and deal with D3D directly.

Well no, actually, you don't, which is why more and more people are moving to SDL all the time. You may like D3D, but it's not perfect for everything. I don't eat my dinner using a Swiss Army knife, and a lot of people don't want to program with the equivalent.

As for your points about D3DXSprite, I can't agree. That stuff is still ugly and awkward for someone used to a clean and simple API. Does it not still require loading 'textures' (a term that is largely meaningless in the context of a simple sprite or tile)? It's clean and simple to someone like myself who's had to put up with crap like DX 5, 6, and 7, but the system as a whole is still not optimal.

quote:
We must move past this "2D API" vs. "3D API" business. It''s a "Graphics API".

That's a pretty odd view - a 3D API has to deal with transforming coordinate systems, projections, and so on. A 2D API maps directly to the screen surface and can therefore be simplified. There's no need to treat everything as 3D. And it's not about one API 'versus' another. It's about picking the appropriate tool for the job.

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost | Asking Questions | Organising code files ]

Elis-Cool, you're absolutely right: Direct3D 8 is a lot easier to set up than previous DirectX versions. I have 700 lines here setting up Direct3D 7, getting basic lighting, a material, a depth buffer, enumerating devices and formats, etc. But on the other hand, D3D8 is harder for 2D people to use, since a lot of the common stuff like blitting is effectively gone. I am just thinking of a library that gives the best of both worlds: initialisation in no more than 10 lines, some simple blitting functions with names intuitive to 2D programmers, and hardware acceleration transparent to the programmer.
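(To show the "no more than 10 lines" figure isn't fanciful, here is roughly what initialisation looks like with SDL 1.2 plus OpenGL; a sketch with error checking omitted and the resolution hard-coded.)

// Roughly ten lines from nothing to an accelerated, 2D-ready context.
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_SetVideoMode(800, 600, 0, SDL_OPENGL | SDL_FULLSCREEN);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, -1, 1);      // screen coordinates, origin top-left
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);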

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost | Asking Questions | Organising code files ]

quote:
Original post by Kylotan
That's a pretty odd view - a 3D API has to deal with transforming coordinate systems, projections, and so on


quote:
I don't consider D3DX or SDL-with-OpenGL to be anything like as simple as I mean


If you use transformed vertices, you never think about coordinate systems (beyond screen coordinates), projections, and so on. It can be EXTREMELY simple.
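(To make "transformed vertices" concrete: in D3D8 you can hand the card screen-space coordinates directly and skip the transform pipeline entirely. A sketch, assuming a device and texture already exist; the struct and function names are illustrative.)

// Pre-transformed, textured vertex: x/y are screen pixels, rhw = 1.
struct TLVERTEX
{
    float x, y, z, rhw;
    DWORD colour;
    float u, v;
};
#define FVF_TLVERTEX (D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1)

// Draw a w x h sprite at (x, y) as two triangles, no matrices involved.
void DrawQuad(LPDIRECT3DDEVICE8 pDev, LPDIRECT3DTEXTURE8 pTex,
              float x, float y, float w, float h)
{
    TLVERTEX v[4] =
    {
        { x,     y,     0.0f, 1.0f, 0xFFFFFFFF, 0.0f, 0.0f },
        { x + w, y,     0.0f, 1.0f, 0xFFFFFFFF, 1.0f, 0.0f },
        { x,     y + h, 0.0f, 1.0f, 0xFFFFFFFF, 0.0f, 1.0f },
        { x + w, y + h, 0.0f, 1.0f, 0xFFFFFFFF, 1.0f, 1.0f },
    };

    pDev->SetTexture(0, pTex);
    pDev->SetVertexShader(FVF_TLVERTEX);           // in D3D8 an FVF code can be passed here
    pDev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, v, sizeof(TLVERTEX));
}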

Your point is exactly correct, and it's not a 3D API. The purpose of an API is first and foremost to provide an interface to some lower level functionality. Like it or not, the cards now think in terms of the data types "vertex" and "texture". I'm not sure why you are so offended by the idea of loading a texture. How is that different than copying bits to a surface? It is in fact VERY similar from a loading perspective. The similarities only break down at the point where you can do so much more than copy rectangles around. That's arguably a good thing.

It's not just 2D. I think we're about to see a similar shift with 3D as well. At some point, people are going to start to ask "why do I need to write a shader, why can't I just set a light??". The answer will be "you can, but it's not the best." This is already true on shader cards - you can write a shader that creates far more than the number of hardware lights. And then one day, hardware lights might go away...

I haven't used DirectDraw for a very long time because I was using OpenGL before DX8, but I've always felt that 2D in OpenGL and now DX8 was actually much cleaner than DirectDraw. It's slightly more code, but for a *little* extra hassle you get many many more features (alpha, stretch, skew, etc.).

Consider OpenGL for a moment. It's *not* explicitly a 3D API. It features a 2D "blitting" subset but very very few pieces of hardware support it. In the OpenGL world, if you want to draw *anything* you think in terms of textures and vertices. Your comments about D3D not being perfect for everything break down in the OpenGL world. On an SGI, the "3D API" IS perfect for everything (including 2D) and very few people seem to complain. On an SGI, it's a graphics API, not "3D".

Now, having said all that, I agree with you that wrappers can be very good things. That's true for everything from COM ports to 2D graphics. I'm only being argumentative because I think there are serious misconceptions out there. I'm trying to make the point that you have to be willing to think about things differently. Once you do, you'll see that things aren't that hard.

If/when you decide to write your wrapper, you'll find that you can put something decent together in a couple of hours. I'm looking at my 2D class right now and I'm pretty sure that this post is longer...


[edited by - CrazedGenius on July 26, 2002 1:29:40 AM]

Guest Anonymous Poster
blah! "2D" is just orthographic projection in a 3D world.
"2D" is just having everything with a ''Z'' of 0...

why "blit" when you can simply update the scene graph and
"post" the new scene?

people who think in terms of "2D" are hopelessly stuck in
the 1980''s and 1990''s. wake up to the 21st century and
embrace 3D!

Guest Anonymous Poster
Yes, and while we're at it, let's all forget basic gameplay too! THAT IS SOOOOOO 1987!

Guest Anonymous Poster
Making 2D be a kind of 3D is just plain stupid. If the hardware wants to deal with it that way, that's fine with me. But why should I have to deal with all that extra 3D stuff if I don't want to? That's just all there is to it, other than laziness on the part of library writers.

quote:
Original post by Anonymous Poster
Making 2D be a kind of 3D is just plain stupid. If the hardware wants to deal with it that way, that's fine with me. But why should I have to deal with all that extra 3D stuff if I don't want to? That's just all there is to it, other than laziness on the part of library writers.



isn't it laziness on *your* part? as in, you're too *lazy* to work with "that 3d stuff"?



Kami no Itte ga ore ni zettai naru! ("The Divine Move will surely be mine!")

quote:
Original post by tangentz
isn't it laziness on *your* part? as in, you're too *lazy*
to work with "that 3d stuff"?

Explain to me why "vertices", a concept that probably has little to no meaning to the average newbie, should enter into the equation. Explain to me why "textures", a term that is unfamiliar to newbies, should enter the equation. Explain to me why the programmer should think about such things instead of "sprites" or "pictures".

Show me a 2D-in-3D library with terms such as "picture", "draw", and so on. That's what Kylotan is aiming for, and if it's so easy to do then for f*ck's sake go make one and offer it to the community.

2D-in-3D programming should not be different from straight 2D with the right package. Expecting people to put up with miscellaneous crap to use extra features such as alpha blending is really quite shocking. WHY SHOULD PROGRAMMING BE DIFFICULT? IF YOU DON'T STRIVE FOR SIMPLICITY THEN YOU SHOULD NOT BE PROGRAMMING. If you don't yearn for simple APIs then you should not be programming.

3D APIs do 3D well, as that's the task for which they're primarily designed. They do *not* make 2D programming intuitive. You can say that "it's not difficult to do x, y, z... in (some 3D API)." You'd be right, but it's counter-intuitive and is something that should be taken care of by the system. Explain, for example, what a newbie would make of having to use "glOrtho" to set 2D mode, using glLoadIdentity, glMatrixMode, and so on. That's a pretty tricky one, because it's not something that the newbie would want to have to deal with when programming.

Tell me whether you'd rather, if you were a newbie, use Direct3D or OpenGL stuff or use some library that allowed you to do this:

Picture myPicture("somefile.bmp");
Picture myLogo("logo.bmp");
Screen.Draw(myPicture, 20, 20);
Screen.DrawBlended(myLogo, 30, 30);

...and other code to that effect. Don't kid yourself.
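(And the interface above really is buildable as a thin layer over a 3D API. A hypothetical skeleton of such a wrapper follows; every name is invented, and the 3D details stay hidden in the implementation.)

// Hypothetical wrapper matching the calls above; nothing here is a real library.
class Picture
{
public:
    explicit Picture(const char *filename);   // loads the file, uploads it as a texture
    int Width() const;
    int Height() const;
private:
    unsigned int m_texture;   // e.g. an OpenGL texture name; the caller never sees it
    int m_width, m_height;
};

class ScreenClass
{
public:
    bool Open(int width, int height, bool fullscreen);    // hides all API setup
    void Draw(const Picture &p, int x, int y);             // plain blit
    void DrawBlended(const Picture &p, int x, int y, float alpha = 0.5f);
    void Flip();                                            // present the frame
};

extern ScreenClass Screen;   // so user code can write Screen.Draw(myPicture, 20, 20);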

We, as programmers, have become accustomed to lots of 3D jargon. We're not newbies any more. Do you expect people entering the programming game to learn that too, instead of getting on with the business of *using the graphics package to do things*?

You can argue that "doing things the hard way helps you learn what's going on underneath the hood." That's a load of crap! What's the first thing a newbie wants to do? Plot a pixel. Draw an image onto the screen (ex: see deClavier's recent post in the For Beginner's forum). Learn what's underneath the hood in good time, after you've done the simple things first. Guess what the result would be if I tried to teach a newbie about Delphi by showing them my pixel-based effects? Their eyes would glaze over and they'd say "I'll never understand that code."

There's no doubt that using 3D hardware for 2D is the way to go. However, most people don't (look at the number of posts in the Isometric Land forum that talk about wanting light, etc.). Logically, if it's better then people would use it, right? Wrong. Clearly, if it offers so much but people don't want to use it straight off, then there's a problem.

[edited by - Alimonster on July 26, 2002 10:51:50 AM]

quote:
Original post by Alimonster
Explain to me why "vertices", a concept that probably has little to no meaning to the average newbie, should enter into the equation. Explain to me why "textures", a term that is unfamiliar to newbies, should enter the equation. Explain to me why the programmer should think about such things instead of "sprites" or "pictures".



Aren't you forgetting that the (X,Y) "coordinate" of the screen *IS* a vertex, with a Z-component of 0? Texture, image, bitmaps, aren't they the "same" things? Why would a "newbie" find it easier to understand "bitmap" than "texture"? They're just RGB triplets saved in whatever formats on disk.

A "sprite" is just a movable "thing" in the game world. It can be implemented as a 3D mesh as easily as a 2D image.

quote:

Show me a 2D-in-3D library with terms such as "picture", "draw", and so on.



"Picture" is "texture". "Draw" is "render". Same things.

quote:

2D-in-3D programming should not be different from straight 2D with the right package. Expecting people to put up with miscellaneous crap to use extra features such as alpha blending is really quite shocking. WHY SHOULD PROGRAMMING BE DIFFICULT? IF YOU DON'T STRIVE FOR SIMPLICITY THEN YOU SHOULD NOT BE PROGRAMMING. If you don't yearn for simple APIs then you should not be programming.



Uh, you don't have to use alpha-blending if you don't want to. You seem to equate 3D with alpha-blending and multitexturing and other such advanced topics? Programming graphics can be as simple or complex as you want it, 3D or not.

quote:

3D APIs do 3D well as that's the task for which they're primarily designed. They do *not* make 2D programming intuitive. You can say that "it's not difficult to do x, y, z... in (some 3d API)." You'd be right, but it's counter-intuitive and is something that should be taken care of by the system .



Counter-intuitive in what sense? If you stop thinking in
terms of pixels and start thinking "camera" and "perspective",
then it's all intuitive. That seems to be the root of the
problem. Games from the 80's and 90's were 2D, and even
though now we have advanced 3D hardware to make it a reality,
people are still "stuck" in that era.

quote:

Explain, for example, what a newbie would make of having to use "glOrtho" to set 2D mode, using glLoadIdentity, glMatrixMode, and so on. That's a pretty tricky one, because it's not something that the newbie would want to have to deal with when programming.



If a newbie finds this stuff "hard", there are "harder" things down the road. They may as well quit before wasting more time.

quote:

Tell me whether you'd rather, if you were a newbie, use Direct3D or OpenGL stuff or use some library that allowed you to do this:

Picture myPicture("somefile.bmp");
Picture myLogo("logo.bmp");
Screen.Draw(myPicture, 20, 20);
Screen.DrawBlended(myLogo, 30, 30);

...and other code to that effect. Don't kid yourself.



I wrote a TexturedQuad class and this is what I do to display
an image:

EnterOrtho();
TexturedQuad Logo("logo.png", 200, 200);
Logo.Render(100, 100);
LeaveOrtho();
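(EnterOrtho/LeaveOrtho aren't shown above; in OpenGL they are usually just a couple of matrix pushes and pops, something along these lines. This is a guess at the common pattern with a hard-coded 800x600 window, not the poster's actual code.)

// Typical helpers for temporarily switching to pixel-aligned 2D drawing.
void EnterOrtho()
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, -1, 1);   // assumes an 800x600 window, origin top-left
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
}

void LeaveOrtho()
{
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}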

quote:

You can argue that "doing things the hard way helps you learn what's going on underneath the hood." That's a load of crap! What's the first thing a newbie wants to do? Plot a pixel. Draw an image onto the screen (ex: see deClavier's recent post in the For Beginner's forum).



Yeah, and that's the OBSOLETE way of thinking in 2D. You "plot" a vertex in the game world, position the camera, and orient it however you want, e.g. orthographic.

Actually, there are some uses for 2D. I'm talking about GUI.
That's about the only time where 3D does not apply.

This post is long enough. But all you're saying is that you
are not willing to "think out of the 2D box".



Kami no Itte ga ore ni zettai naru! ("The Divine Move will surely be mine!")

[edited by - tangentz on July 26, 2002 11:29:04 AM]

quote:
Original post by Alimonster
Explain to me why "textures", a term that is unfamiliar to newbies, should enter the equation.



How is the word "Texture" more offensive than "Surface" or "Buffer". If your problem is with the verbiage, I don''t think anyone can help you.

quote:
Original post by Alimonster
Show me a 2D-in-3D library with terms such as "picture", "draw", and so on. That's what Kylotan is aiming for, and if it's so easy to do then for f*ck's sake go make one and offer it to the community.



See the "2D in DX8 article on this site." The article itself took longer to write than the code. The article involves matrices, etc. but you can actually do it much more easily if you want to use T&L vertices. This is the complete interface to a class built similarly (notice that there is no mention of the dreaded word "Texture"):
class CPanel
{
public:
    void    Terminate();
    void    SetPosition(long X, long Y);
    void    Draw(float Alpha);
    HRESULT Initialize(LPDIRECT3DDEVICE8 pDevice, char *pFileName);
    CPanel();
    virtual ~CPanel();
};

The implementation is somewhere between 25 and 50 lines of code, depending on how you count. It is simple enough that it is worthwhile to understand the underlying code, because there is so much more that you can do with it once you understand.

For instance, once you understand how to use an ortho matrix, you realize that it is trivial to create a GUI/HUD that adapts to any screen resolution. Once you take the time to peruse the texture stage states, you realize that you can have tremendous control over alpha blending.
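(As an illustration of the ortho-matrix point: a sketch using the D3DX helpers, assuming a "virtual" 640x480 coordinate space that the projection then maps onto whatever the real back buffer size is.)

// Lay out the GUI in a fixed 640x480 "virtual" space; the ortho projection
// stretches it to the actual back buffer size, whatever that happens to be.
D3DXMATRIX matOrtho;
D3DXMatrixOrthoOffCenterLH(&matOrtho,
                           0.0f, 640.0f,    // left, right  (virtual coordinates)
                           480.0f, 0.0f,    // bottom, top  (flipped so y grows downward)
                           0.0f, 1.0f);     // near, far

D3DXMATRIX matIdentity;
D3DXMatrixIdentity(&matIdentity);

pd3dDevice->SetTransform(D3DTS_PROJECTION, &matOrtho);
pd3dDevice->SetTransform(D3DTS_VIEW,       &matIdentity);
pd3dDevice->SetTransform(D3DTS_WORLD,      &matIdentity);
// Untransformed quads with x in [0,640] and y in [0,480] now fill the screen
// at 640x480, 1024x768, or any other mode.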

quote:
Original post by tangentz
Aren't you forgetting that the (X,Y) "coordinate" of the screen *IS* a vertex, with a Z-component of 0? Texture, image, bitmaps, aren't they the "same" things? Why would a "newbie" find it easier to understand "bitmap" than "texture"? They're just RGB triplets saved in whatever formats on disk.

A "sprite" is just a movable "thing" in the game world. It can be implemented as a 3D mesh as easily as a 2D image.

I'm not forgetting anything. What's more intuitive:

1) Draw a picture as an entire object ("Screen.Draw")
2) Bind a texture and draw each edge separately (repeated vertex calls)

The answer is clearly 1, about which there can be no debate. That's one example of how 3D APIs need to be wrapped. A newbie would *not* expect to draw each vertex independently. It just *does not make sense* relative to what they'd expect. It's unintuitive.

quote:
"Picture" is "texture". "Draw" is "render". Same things.

It's unfamiliar terminology, thus it's bad. You should want to use "image" or "picture", not "texture". You'll definitely expect "draw" instead of "render" if you're new to the game.

Think about it: I pick up a pen. Do I...

1) Draw onto a piece of paper
2) Render something onto the paper

Well, duh! Terminology is important.

quote:
Uh, you don't have to use alpha-blending if you don't want to. You seem to equate 3D with alpha-blending and multitexturing and other such advanced topics? Programming graphics can be as simple or complex as you want it, 3D or not.

It was an example: I expected the reader to generalize. Apparently, you haven't - that's not my fault. Maybe I should have said "alpha blending et al."

Setting up the 3D system for 2D should not be any more difficult than using the 2D system straight off the bat. I was using alpha blending as an example. This can be extended to other techniques (anti-aliasing, stretching, skewing, etc.).

The general point: just because you get extra features doesn't mean that it should automatically be more difficult. You can do these things in 2D in software. You don't want to - you want to use 3D. Why should you have to pay a heavy productivity price in unfamiliar terminology and ugly setup code to do so?

quote:
Counter-intuitive in what sense? If you stop thinking in terms of pixels and start thinking "camera" and "perspective", then it's all intuitive. That seems to be the root of the problem. Games from the 80's and 90's were 2D, and even though now we have advanced 3D hardware to make it a reality, people are still "stuck" in that era.

Counter-intuitive in the sense that... they do not follow the intuitive process. What's so complicated about that statement?

People do *not* think about "camera", "perspective", etc. when they start programming. They just don't, so don't pretend they do. The only bad thing about 2D is that its technology is dated, something that can be removed by using 3D hardware with a 2D interface. This "3D or die" attitude towards games astonishes me. Clearly, someone making a 2D game won't expect to sell it commercially (unless it is really beautiful). They'd be doing it to learn game mechanics. You don't need 3D for that. True, they use some underlying principles, but the idea that you need to use 3D to learn is pretty strange.

quote:
If a newbie finds this stuff "hard", there are "harder" things down the road. They may as well quit before wasting more time.

I'll say this out loud: BULLSH*T!

The way people learn is by seeing something on the screen. When you start off you want to see something on the screen that represents what you've coded. That's why it can be harder to debug abstract concepts than simple graphical stuff.

quote:
I wrote a TexturedQuad class and this is what I do to display
an image:

EnterOrtho();
TexturedQuad Logo("logo.png", 200, 200);
Logo.Render(100, 100);
LeaveOrtho();

That's getting towards the idea. But ask yourself... if you were just beginning programming, would you expect to see the abbreviation "Ortho" in there? Do you really expect a newbie to understand the concept of orthographic projection and why it's good for 2D? Something simpler, such as "Enter2DMode", would be just as effective and is much clearer. You always want to express your intent when coding. Don't get too caught up in jargon unless it helps you.

TexturedQuad - would a newbie instinctively think about "textured quads" or "drawing a picture"? It's fine if you understand the concepts (it depends on what level of programmer the functions are aimed at), but it could be named more intuitively for the newcomer.

LeaveOrtho? Again, that's an implementation detail. What about "Restore3DMode" or something similar? Why not express what you want to do, rather than how it's done?

quote:
Yeah, and that's the OBSOLETE way of thinking in 2D. You "plot" a vertex in the game world, position the camera, and orient it however you want, e.g. orthographic.

Actually, there are some uses for 2D. I'm talking about GUI. That's about the only time where 3D does not apply.

This post is long enough. But all you're saying is that you are not willing to "think out of the 2D box".

It's how people learn. Don't pretend that 3D is the best way to learn, because it is not. No amount of saying it is will change the fact that 2D concepts are simpler. They just are - people think about 2D concepts and want to represent them on the screen. People see a monitor that has a 2D surface. It's more intuitive to them to think of placing something on that 2D surface than to extend that into a 3D world and crush it into a 2D space via perspective.

Oh, and btw - can the crap about "not thinking outside the 2D box". I can do 3D stuff just fine - it's simply that I don't want to inflict that onto others who are learning. I don't hate others, so letting them do things the easy way is much better than ramming 3D math down their throats when they simply want to see their actions represented on the screen in the simplest way possible.

The idea is to create a 2D library using hardware. People will use this, so why do you care if it deals with 2D ideas? They'll learn 3D in time. Why shouldn't they learn to program first so the transition is simpler?

If people want to use 3D concepts then they can use the 3D APIs. If they want to use 2D concepts then they should **not** deal with 3D concepts - the idea simply does not make sense.

What would I actually have to do if I wanted to write an isometric engine featuring DirectX Graphics?
Especially, what must be done in order to have a candle that realistically illuminates the wall?
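(One common approach, sketched under the assumption that the walls and floor are drawn as quads whose vertices carry a diffuse colour: work out each vertex's distance to the candle and scale its colour by a falloff, and the hardware interpolates the light across each tile. All names below are illustrative; sqrtf needs <math.h>.)

// Per-vertex "candle light": brightness falls off with distance from the flame.
DWORD CandleLightAt(float vx, float vy,          // vertex position
                    float cx, float cy,          // candle position
                    float radius)                // distance at which the light fades out
{
    float dx = vx - cx;
    float dy = vy - cy;
    float dist = sqrtf(dx * dx + dy * dy);

    float intensity = 1.0f - dist / radius;      // linear falloff
    if(intensity < 0.15f) intensity = 0.15f;     // ambient floor so nothing goes pitch black
    if(intensity > 1.0f)  intensity = 1.0f;

    // Warm candle colour, scaled by intensity, packed as ARGB.
    int r = (int)(255 * intensity);
    int g = (int)(220 * intensity);
    int b = (int)(160 * intensity);
    return (DWORD)(0xFF000000 | (r << 16) | (g << 8) | b);
}

// A flicker effect can be added by jittering 'radius' slightly each frame.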

In the beginning was the deed...
Faust

Guest Anonymous Poster
quote:
Original post by Alimonster

Explain to me why "vertices", a concept that probably has little to no meaning to the average newbie, should enter into the equation. Explain to me why "textures", a term that is unfamiliar to newbies, should enter the equation.



Explain to me why in HELL a newbie should be concerned with this issue at all? If you aren't willing to learn it, shut the fuck up.
