Nacho

Graphics in ASM still a necessary thing?

After some months of learning DirectX, I was wondering: would learning to write some drawing routines in asm still be considered a good learning experience? What do you think about the subject?

C++ evangelists will say "no"; assembly evangelists will say "yes".

IMHO, learning 3D programming theory is more important than learning assembly language.

I personally enjoy programming in assembly language (non-x86) and can code quite quickly in it, but I use C++ for most of my large projects.

To the vast majority of mankind, nothing is more agreeable than to escape the need for mental exertion... To most people, nothing is more troublesome than the effort of thinking.

If you want to use SSE, SSE2, MMX, 3DNow!, etc., you will have to code in assembly (unless you happen to have Intel's latest compiler, though obviously there's no 3DNow! support in that). There is only one other compiler I know of that uses SIMD instructions (with some help), made by some third party; unfortunately I don't remember its name (no, it's not by Borland nor by MS).

Assembly coding in games will NEVER die, since no compiler can compete with a good assembly coder, though they can get pretty close. Personally, I use assembly ONLY when using the SIMD instruction sets; otherwise I trust the compiler. I am not the best asm coder, nor do I feel a 1-2 percent increase is worth the work required; on the other hand, I have had MMX increase speed by 25-50 percent.
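For a taste of what that looks like without dropping all the way down to hand-written asm, here is a minimal sketch using SSE intrinsics (my example, not the poster's; assumes a compiler that ships xmmintrin.h):

#include <xmmintrin.h>  // SSE intrinsics

// Adds two float arrays four elements at a time.
// Assumes n is a multiple of 4 and the pointers are 16-byte aligned.
void add_sse(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_load_ps(a + i);             // load 4 floats from a
        __m128 vb = _mm_load_ps(b + i);             // load 4 floats from b
        _mm_store_ps(out + i, _mm_add_ps(va, vb));  // 4 additions at once
    }
}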

Generally, normal CPU assembly (like x86) is very rarely used on higher and next-gen platforms for games. And it should only be done when profiling (with tools such as TrueTime, VTune, etc.) reveals that the program is spending a lot of its time on a small, repetitive task over a simple chunk of data. An example of that kind of place in *some* applications for, say, SSE/3DNow! would be matrix multiplication in an app which had a lot of skinning and hierarchical scene work.
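To make that concrete, here is a rough sketch (mine, not Simon's) of that kind of hotspot done with SSE intrinsics: a vector transformed by a 4x4 matrix. The column-major layout and the type names are assumptions for illustration:

#include <xmmintrin.h>  // SSE intrinsics

struct Vec4 { float v[4]; };
struct Mat4 { float col[4][4]; };  // stored column-major for SSE friendliness

Vec4 transform(const Mat4& m, const Vec4& in)
{
    // out = x*col0 + y*col1 + z*col2 + w*col3, computed four lanes at a time
    __m128 r = _mm_mul_ps(_mm_loadu_ps(m.col[0]), _mm_set1_ps(in.v[0]));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m.col[1]), _mm_set1_ps(in.v[1])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m.col[2]), _mm_set1_ps(in.v[2])));
    r = _mm_add_ps(r, _mm_mul_ps(_mm_loadu_ps(m.col[3]), _mm_set1_ps(in.v[3])));

    Vec4 out;
    _mm_storeu_ps(out.v, r);
    return out;
}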

As mentioned, optimising compilers are good enough for at least 90% of cases (many games are released with absolutely no assembly code).

Older and smaller gaming platforms (PS1, GB, GBA etc) tend to use much more assembly language in their graphics cores.

Some current 'next-gen' platforms do use assembly language, but it isn't for programming the CPU; instead it's for specialist co-processors. Examples would be vertex and pixel shaders on the PC and Xbox, and the vector units and certain IOP modules on the PS2.
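For example, here is a minimal DX8-style vertex shader in assembly (my sketch, not something from this post). It transforms the input position v0 by a matrix assumed to have been loaded into constant registers c0-c3 with SetVertexShaderConstant:

vs.1.1                 ; shader model declaration
dp4 oPos.x, v0, c0     ; x = dot(position, matrix row 0)
dp4 oPos.y, v0, c1     ; y = dot(position, matrix row 1)
dp4 oPos.z, v0, c2     ; z = dot(position, matrix row 2)
dp4 oPos.w, v0, c3     ; w = dot(position, matrix row 3)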

Knowledge of how assembly language works, and experience of optimising it (instruction pairing, reordering to avoid stalls, cunning reuse of registers/output values, etc.), are still useful to have. Partly to be able to program the specialist processors mentioned above, but also to have a better understanding of what's happening between your program and the CPU: if you've only ever programmed in higher-level languages, you can end up blind to designs which seem the 'perfect' way to implement something but perform really badly in hardware terms.

The days of hand-tuned software transforms on the CPU, though, are largely gone unless you're writing for 1997-level PC hardware.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

That's kinda sad; relying on our modern hardware alone instead of optimizing shows how lazy many of us have become.

I don't think it really means we are lazy. It mostly means we can put our time into other parts of application development. Slow software that is done is always better than fast software that spent too long in development and either gets cancelled or is too dated to matter.

Of course, I am not saying doing assembly takes forever, or that using assembly always brings great results. I am just glad the Bresenham speed debates are dying down.

But when someone asks me if they need to learn it, I ask if they have a little spare time. If they answer yes, I say by all means. Always keep learning.

I too would say it's more a question of time than laziness. As languages and machines evolve, we get to code at a higher level of abstraction. We can do more, faster.

I remember reading about a fascinating concept in a book about the programmers of tomorrow. Imagine software so evolved that instead of coding, you "herd" the code to evolve in the right direction. OK, I admit it: it's pure science fiction and has no basis in reality, nor do I see it in the future, but it does sound like fun ^_^.

It all comes down to optimization, but at a more general level.

As processors become faster, and memory and disk space become cheaper, technological advancement itself acts as an implicit "optimizer" when it comes to the Speed and Size issues. Software becomes automatically more "optimized" in these ways long after its development is over, simply by users executing it on a better machine. I'm not saying this as an excuse to ignore performance issues, but more as a reminder that improvement comes in many forms.

In contrast, the march of time has also raised expectations of what good software should be. While hardware improvements have made our programs faster, the time taken to develop high-quality software has increased by similar orders of magnitude. Attempting to create a product that meets or raises the bar (in whatever area of software the product is competing) has become a task of extraordinary expense, both human and monetary.

Given the above, the primary focus for optimization these days should be the increase of productivity and a corresponding decrease of development time. In other words, process optimization rather than product optimization.

Of course, while product optimization can be performed by digging into the nuts and bolts (oftentimes with reckless hacking), process optimization can't really be done without a solid foundation of software engineering and related principles.

When looking for bottlenecks, don't just focus on what you're making, but how you're making it.

- Chris

Maybe I was not so clear in my first post. What I meant was: would learning 2D/3D drawing routines in asm help me understand how the D3D API works internally? Up to now I know how to draw certain things on the screen with the aid of the API, but I would like to know what is really going on in there. If I learn how to draw in asm, will I understand the foundations the API is built on? Thanks!


I suppose with "2d/3d drawing" routines in DX you mean
software rendering.

Well, DX software mode sucks, it''s slow, I doubt
that one line asm was coded for it.
They''re not so much interested in optimizing their SW routines,
for nowadays everyone has graphics boards with HW
rendering support.

So, I wouldn''t say learning asm helps to learn __how
DX works internally__ , but if you''re interested
in how such drawing routines work, a bit of graphics theory
may help, and when you understood how it works in general,
THEN it could be useful to know some asm, to optimize.

But I wouldn''t write ie. a texture mapper completely in
asm, maybe the inner loop is a good candidate for that.





It doesn't have to be DX software mode; DOS is fine for me, too. I found Abrash's Black Book on the Internet, and I was trying to find out whether learning from it would be a good idea or not.

I'm sure assembly would improve the performance of this line algorithm:

glBegin(GL_LINES);
glVertex2f(x1,y1); glVertex2f(x2,y2);
glEnd();

ASM isn't useless, but not needing it doesn't make you lazy. Rendering is usually the most time-demanding part of a game, and it has been for quite some time. These days, though, most of the hard work is done by the card itself. The trick now is making sure that you're using the hardware efficiently (minimizing GL state changes, etc.).
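A hypothetical sketch of what that means in practice: sort draw calls by texture so the expensive state change (glBindTexture) happens once per texture instead of once per primitive. The Sprite struct and the vertex-submission step are placeholders, not any real engine's API:

#include <GL/gl.h>
#include <algorithm>
#include <cstddef>
#include <vector>

struct Sprite { GLuint texture; float x, y, w, h; };

static bool byTexture(const Sprite& a, const Sprite& b)
{
    return a.texture < b.texture;
}

void drawSprites(std::vector<Sprite>& sprites)
{
    std::sort(sprites.begin(), sprites.end(), byTexture);  // batch by texture

    GLuint bound = 0;  // 0 is never a name returned by glGenTextures
    for (std::size_t i = 0; i < sprites.size(); ++i)
    {
        if (sprites[i].texture != bound)  // rebind only when the texture changes
        {
            glBindTexture(GL_TEXTURE_2D, sprites[i].texture);
            bound = sprites[i].texture;
        }
        // ... submit the quad's vertices for sprites[i] here ...
    }
}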

"The best optimizer is between your ears"

Where can I get information on how to use the hardware efficiently? The books I've read on D3D say nothing about the subject.

Learn it if you have time, or have nothing on your plate right now. Knowing it won't harm you in any way, and it's just another skill in your bag. It's actually very useful and enjoyable on a GBA, and probably on a host of other platforms, and I doubt ASM will ever die outside of the PC.

------------
- outRider -

For learning 3D theory and how the video card renders 3D scenes, you don't need to know how it's done in assembly; that will merely obfuscate the knowledge you are trying to gather. Writing your own software engine in C, WITHOUT stealing source, is probably more rewarding than reading through a book of assembly techniques (since such books try to optimize things instead of fully explaining them). Plus, if you don't know assembly already, using it to learn something else will be even more difficult and time-consuming.
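As a taste of the kind of routine such an engine is built from (my sketch, not from any particular book), here is the classic integer-only Bresenham line; putpixel() is assumed to exist, e.g. as a framebuffer write:

#include <cstdlib>  // std::abs

void putpixel(int x, int y, unsigned char color);  // assumed to exist elsewhere

void line(int x0, int y0, int x1, int y1, unsigned char color)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // error term trades x steps against y steps

    for (;;)
    {
        putpixel(x0, y0, color);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
}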

As to DX software being slow: that's because it's a REFERENCE implementation, designed to ensure that what you see from the hardware is the correct result. The older versions of D3D which had a software renderer were not that slow if used in 8-bit mode or MMX mode (16-bit color depth) on a more modern CPU (i.e. 266 MHz+). The last version to have a software mode, I believe, was DX6.

We're learning some assembler as part of my degree course. Currently I'm also fiddling around with some ARM and THUMB stuff for the GBA (really kick-ass). I don't think it's needed for the majority of programming tasks faced today, but when you're presented with limited power it's sometimes helpful to be able to tweak portions yourself.

I know some asm from programming microcontrollers, and now I'm learning 80x86. I'm in my last year of high school, so I'm not in a rush to finish a cool demo in DX by gathering code from many internet sites in order to get a job in the industry. Instead, I prefer to learn the HOWs and the WHYs of what I'm programming.

After learning some graphics programming theory and writing my own software rasterizer in C, as a person suggested, is it worth learning old technology like ModeX? Am I going to benefit from that, or should I discard it and learn more about DX/new technology?

My problem is that the idea behind 2D graphics seems clear to me, but in 3D I just tell the D3D API DrawPrimitive(), specify how many points, the buffers, etc., and the line appears correctly on the screen. I would like to know HOW the API manages to render it correctly on my screen. I hope you get my point. Thanks!

quote:
Original post by Sorensen
quote:
I would like to know HOW the API manages to render it correctly on my screen.

It tells the hardware to do it.

But how? Where can I learn those principles?

quote:
Original post by Ignacio Liverotti
quote:
Original post by Sorensen
quote:
I would like to know HOW the API manages to render it correctly on my screen.

It tells the hardware to do it.

But how? Where can I learn those principles?



Continue pursuing x86 assembly language.

If you're doing things in DOS (which I assume you will be if you're using Abrash's book), start with mode 0x13, which is a linear 8-bit 320x200 mode. Learn about the palette registers and do some stuff with them. None of this will be particularly insightful (except perhaps the VGA palette registers, if you haven't dealt with them before), but it's a great place to start learning.
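A minimal sketch of that first step, assuming a 16-bit DOS compiler in the Borland/Turbo mould (dos.h providing int86 and MK_FP):

#include <dos.h>  // int86, MK_FP (Borland/Turbo-style DOS compiler assumed)

void set_mode_13h()
{
    union REGS r;
    r.x.ax = 0x0013;      // AH=0x00 (set video mode), AL=0x13
    int86(0x10, &r, &r);  // VGA BIOS video interrupt
}

void put_pixel(int x, int y, unsigned char color)
{
    // Mode 0x13 is linear: one byte per pixel, 320 bytes per scanline,
    // starting at segment 0xA000.
    unsigned char far* vram = (unsigned char far*)MK_FP(0xA000, 0);
    vram[y * 320 + x] = color;
}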

Then, if you have the time, it might not be such a bad idea to take a look at 320x240 or 320x200 ModeX (planar). Nowadays you will only have to worry about linear modes, but trying out a planar mode is an interesting exercise and will teach you how old games worked back in the Stone Age. You'll learn how and why the VGA hardware uses planes (hint: the 64KB segment limit in real mode), and you'll get hands-on experience with the VGA plane control registers. Just so you know, you don't have to worry too much about HOW to get into ModeX (learning the functions of all the registers used to set the mode would be rather pointless), but the rest is interesting stuff. Many consoles of yesteryear (prior to the 3D 32-bit systems) used planar graphics, and it is still an important concept to understand if you ever want to write code for, or emulate, older console and arcade systems.
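And here is roughly what a single pixel write looks like once you are in an unchained (planar) 320-wide mode; the mode-setting itself is omitted, as discussed above:

#include <dos.h>  // outportb, MK_FP (Borland/Turbo-style compiler assumed)

// Each byte of VRAM covers four pixels, one per plane. The Sequencer's
// Map Mask register (index 0x02 at port 0x3C4) selects which plane(s)
// the write lands in.
void put_pixel_planar(int x, int y, unsigned char color)
{
    unsigned char far* vram = (unsigned char far*)MK_FP(0xA000, 0);
    outportb(0x3C4, 0x02);            // select the Map Mask register
    outportb(0x3C5, 1 << (x & 3));    // enable only this pixel's plane
    vram[y * 80 + (x >> 2)] = color;  // 320 pixels / 4 planes = 80 bytes per line
}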

I don't know how Abrash teaches ModeX, as I skipped over that portion of the book, but I learned how to use it by looking at the tutorial included with the Tweak package. Grab it here: http://home.nvg.org/~rsc/programming.html

Anyway, if you get that far and want more, you can either keep working under DOS with the VESA Video BIOS Extensions (a standardized set of SVGA BIOS calls for accessing high resolutions and color depths, using bank switching or true linear frame buffer support in protected mode), or you can go back to Windows.
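Setting a VBE mode is just another BIOS call. A small sketch (0x101 is the 640x480, 8 bits-per-pixel entry in the standard VBE mode list):

#include <dos.h>  // int86 (Borland/Turbo-style compiler assumed)

int set_vbe_mode(unsigned mode)  // returns nonzero on success
{
    union REGS r;
    r.x.ax = 0x4F02;          // VBE function 0x02: set SuperVGA video mode
    r.x.bx = mode;            // e.g. 0x101 = 640x480x256
    int86(0x10, &r, &r);
    return r.x.ax == 0x004F;  // AL=0x4F: function supported, AH=0x00: success
}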

As to how APIs work for rendering 3D graphics, the best place to learn is the existing documentation for 3D hardware. This sort of hardware often works by having the CPU send commands to registers (or the card's RAM), and the specified polygons end up getting drawn into a frame buffer which is displayed on the screen. Some console (PSX, Saturn, possibly N64) and arcade (Hard Drivin' in MAME) hardware is documented, and that's a great place to learn about this stuff, since PC cards are fairly similar concept-wise.
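As a toy illustration of that pattern (the port address and command word below are invented for this sketch, not taken from any real card, though documented hardware like the PSX GPU works along these lines):

#include <stdint.h>

#define GPU_FIFO ((volatile uint32_t*)0xFEE00000)  // hypothetical command port
#define CMD_FLAT_TRIANGLE 0x20000000u              // hypothetical opcode

// The CPU pokes a command plus vertex data into the memory-mapped FIFO,
// and the chip rasterizes the triangle into its frame buffer on its own.
void gpu_flat_triangle(uint32_t bgr,
                       int x0, int y0, int x1, int y1, int x2, int y2)
{
    *GPU_FIFO = CMD_FLAT_TRIANGLE | (bgr & 0x00FFFFFFu);  // command word + color
    *GPU_FIFO = ((uint32_t)(y0 & 0xFFFF) << 16) | (uint32_t)(x0 & 0xFFFF);  // vertex 0
    *GPU_FIFO = ((uint32_t)(y1 & 0xFFFF) << 16) | (uint32_t)(x1 & 0xFFFF);  // vertex 1
    *GPU_FIFO = ((uint32_t)(y2 & 0xFFFF) << 16) | (uint32_t)(x2 & 0xFFFF);  // vertex 2
}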

There are probably articles and papers out there which deal with this as well (IEEE computer publications sometimes have articles on this stuff).

Have fun! You are clearly curious about this stuff, and rather than accepting half-baked answers like "the API just _does_" or "it's not important to know how", you are taking steps to educate yourself. That's a good thing.

---
Bart

How does the API tell the hardware to do things? Simple: it calls driver function stubs. This is why you need newer drivers to get the newer features of newer versions of DX. You should look at Glide (the 3dfx-only 3D API) and the DDK for DX8 (at MS's site), and of course you could try to find the white papers on the hardware and try to bypass the drivers.

Though I think you are getting confused about asm: it's just another language like C, with the exception that it's 1:1 with machine code (i.e. what you see is what you get). You should not waste time learning how an API does things with the video card unless you expect to write video card drivers for an API. There is nothing terribly interesting there gaming-wise, and you won't learn much that will help you when you code games. Learning how the card does things (i.e. writing a software rasterizer) is more beneficial. Also, learning ModeX is pointless; DirectDraw will give you nearly the exact same type of access. Most of what you learn won't help you with newer games, but it may be of interest to see how coders of the old days got past certain limits of the hardware/OS.

I think you will confuse yourself by trying to understand the mess of the API's interface to the drivers, which in turn interface with the OS and the video card. It can't hurt to learn, though, if you have the time and patience, but don't expect a much clearer outlook on how things work, or one which will help you make the next great 3D engine.

Read Computer Graphics: Principles and Practice by Foley, van Dam, Feiner, et al. That book is the definitive work on the algorithms, data structures and procedures requisite to displaying 2- and 3-dimensional images on a computer screen. Also read other texts: texts on projection algebra, on matrix math; read texts on anything that interests you and may possibly answer your questions. Read about Fourier transforms if you're into audio and frequency modulation.

Read.

[ GDNet Start Here | GDNet Search Tool | GDNet FAQ | MS RTFM [MSDN] | SGI STL Docs | Google! ]
Thanks to Kylotan for the idea!

I agree with Oluseyi: read Computer Graphics: Principles and Practice and a linear algebra text. THAT will teach you how 3D works, not asm. ASM is a means, not an end.
