
Archived

This topic is now archived and is closed to further replies.

marcus256

To T&L or not to T&L

7 posts in this topic

There has been a lot of discussion about whether to use the driver-supplied T&L implementation (software or hardware) or to go through the complicated process of detecting whether there is hardware T&L support and, if not, using your own "super optimized" T&L routines. Well, here is a little story on a related (?) topic:

Once upon a time there was a wonderful computer called the Amiga. It had cool graphics hardware with lots of cool acceleration features which allowed for cool 2D animations (in fact, a 7 MHz Amiga was way faster than a 66 MHz PC in most 2D games). Enter Wolfenstein 3D and Doom. Oops! Bitplane graphics were not good at all for software 3D rendering! For a couple of years the "elite" among programmers spent their valuable time writing "chunky to planar" converter routines, each one faster and more elaborate than the others. Of course, there was an operating system function that did just that, but it was super-slow (probably 20-40% slower than the fastest routines) = not acceptable.

Enter the 24-bit graphics cards for the Amiga. Oops! 95% of the games and programs which used the custom converter routines did not even run on these new cards, since they were so tightly coupled to the Amiga hardware (my 3D engine was one of those programs). The remaining 5% ran slower on the new graphics cards than they did before (the OS had to do an extra conversion pass back to the "chunky" graphics that the cards used). ALL of the programs would have run MUCH faster if they had used the operating system function, since it sent the graphics unmodified to the new cards!

The point is: if you are writing your own custom functions to do stuff that is already supported by standard functions, you had better know pretty darn well what you are doing, because some day there WILL be some super-cool hardware that you would never have dreamt of, which would have made your program super-fast if only you had used that standard function instead of your own custom routine.

Generally, writing custom routines will improve the overall speed of an application by 1-10%. What is that worth compared to the constant ongoing speed increase of CPUs, graphics cards and memory? There is probably a small percentage of programs out there that can gain a noticeable amount of performance from custom routines. How many programs would have gained performance by using the standard T&L functionality of OpenGL?

There, now I have shared my view on the topic...

Marcus
>>The point is: if you are writing your own custom functions to do stuff that is already supported by standard functions, you had better know pretty darn well what you are doing, because some day there WILL be some super-cool hardware that you would never have dreamt of, which would have made your program super-fast if only you had used that standard function instead of your own custom routine<<

Exactly. This is what Unreal/UT did: people bought their GeForces hoping to get a major speed boost, yet it didn't happen. Why not? Because its programmers decided to ignore the OpenGL matrix routines etc., whereas a game like Q3A did use them, and now you can run it at 200fps with a GeForce3 (can't think of a reason why you would want to, but...), though with UT you're stuck at 70fps.
You MUST use the OpenGL matrix functions! It is true that on some old cards you can gain a few FPS with your own custom routines, but that is driver dependent. If you use your own routines, the acceleration will only benefit people who don't have a 3D card, because the Microsoft OpenGL software driver is much slower than the others (not true for some old ATI cards, where software mode is faster than hardware!).
This is a bit of an age-old problem, really. As the developer you have to strike the right balance. In some situations you just can't use OpenGL's matrix mult functions.

It's all about getting at the data once OpenGL has calculated it. With these new 3D cards, the temptation is to build objects of massive complexity - loads of polygons. Using the OpenGL matrix routines is of course the preferred way to transform these around your 'world'.

However, you start running into problems when you want your super-complex space ship (or whatever) to interact with other objects in the world. Collision detection requires at least a bounding sphere in the same position as your space ship. Obviously, you need to use your own math routines to get it into position, because if you used OpenGL's and the appropriate glGet you'd slow the rendering down.

But bounding spheres really aren't much use to anyone. You need a more complex approximation of your space ship - and again, this needs to be transformed by your own routines so you can get at the data to perform collision detection etc. As the complexity of the models increases, so does that of your 'low detail' model. So you end up writing all these math routines anyway.

To avoid the player throwing the computer out the window at "It missed me by miles!", these 'low detail' models are actually quite high detail.

I guess this isn't relevant to what you were saying - and it supports what's been said about QIII and UT.

Use OGL's functions for the display geometry, but don't think for a minute that'll prevent you from getting your hands dirty with your own routines...

Paul Groves
pauls opengl page

Edited by - Pauly on August 29, 2001 9:14:53 AM
True, but collision models usually have a lot fewer polygons than the drawn model. E.g. I believe Q3 uses 3 boxes to represent the collision volume of a player, instead of the 1000+ tris.
QuakeIII's a bad example though; with humanoid figures you can get away with very few bounding boxes, like you said. You don't need to go into proper point-in-poly intersection. I was talking about games where you 'fly' over stuff - space ship games and things of that ilk.

Paul Groves
pauls opengl page
Dunno about that; humanoids are probably among the hardest things out there (as with a lot of CG, we can model buildings, trees, cars etc. that look pretty realistic, and in some films look like reality, yet when it comes to people it always looks like CG, e.g. Final Fantasy).
With a human you have a head, body, arms and legs, and each can move in a variety of ways. With a spacecraft you have the fuselage + the wings, but they don't move in relation to each other (big difference).
Ideally, a person standing on uneven ground will have one leg higher than the other (you'll need to model this using bones), or else one of the legs will go into the ground, which looks bad (Quake, Tomb Raider etc.; unfortunately this is also what my game does).
quote:
Original post by Pauly
This is a bit of an age-old problem, really. As the developer you have to strike the right balance. In some situations you just can't use OpenGL's matrix mult functions.

...

Obviously, you need to use your own math routines to get this into position, because if you used OpenGL's and the appropriate glGet you'd slow the rendering down.

Yes, you're right. In many situations you need to do some transformations "on your own" (collision detection is perhaps the most important example). What I meant in my original post was that you should not replace standard routines with your own just for the sake of speed. I think we all agree that glGet is almost always a "don't do" thing, since it would only make sense on totally un-pipelined and un-optimised hardware.

However, you can still use OpenGL matrix operations for the screen projection, which is still a significant part of the transformation operations. You can also use OpenGL lighting in most situations without interfering too much with your internal geometry representation and calculations (unless you need to do very special scene lighting).

If people did use the standard T&L, we would probably see better optimised hardware drivers (and hardware). From what I've understood, the Voodoo 5 drivers were never very good at T&L, probably because the driver developers did not expect programmers to use it.

/Marcus