
Archived

This topic is now archived and is closed to further replies.

Nacho

Graphics in ASM still a necessary thing?



After some months learning DirectX, I was wondering if learning some drawing routines in asm could still be considered a good learning experience? What do you think about the subject?

C++ evangelists will say "no". Assembly evangelists will say "yes".

IMHO, learning 3D programming theory is more important than learning assembly language.

I personally enjoy programming in assembly language (non-x86), and can code quite quickly in it. But I use C++ for most of my large projects.

To the vast majority of mankind, nothing is more agreeable than to escape the need for mental exertion... To most people, nothing is more troublesome than the effort of thinking.

If you want to use SSE, SSE2, MMX, 3DNow!, etc., you will have to code in assembly (unless you happen to have Intel's latest compiler, though obviously no 3DNow! support in that). There is only one other compiler I know of that uses SIMD instructions (with some help), made by some third party; unfortunately I don't remember the name of the compiler (no, it's not by Borland nor by MS). Assembly coding in games will NEVER die, since no compiler can compete with a good assembly coder, but they can get pretty close. Personally I use assembly ONLY when using the SIMD instruction sets; otherwise I trust the compiler (I am not the best asm coder, nor do I feel a 1-2 percent increase is worth it for the work required; on the other hand, I have had MMX increase speed by 25-50 percent).

Generally, normal CPU assembly (like x86) is very rarely used on higher and next-gen platforms for games. And it should only be done when profiling (with tools such as TrueTime, VTune etc) reveals that the program is spending a lot of its time on a small repetitive task on a simple repetitive chunk of data. An example of that kind of place in *some* applications for, say, SSE/3DNow! would be matrix multiplication in an app which had a lot of skinning and hierarchical scene stuff.

As mentioned, optimising compilers are good enough for at least 90% of cases (many games are released with absolutely no assembly code).

Older and smaller gaming platforms (PS1, GB, GBA etc) tend to use much more assembly language in their graphics cores.

Some current 'next-gen' platforms do use assembly language, but it isn't for programming the CPU - instead it's for specialist co-processors. Examples would be vertex and pixel shaders on the PC and Xbox, vector units and certain IOP modules on the PS2, etc.

Knowledge of how assembly language works - and experience of optimising it, such as instruction pairing, reordering to avoid stalls, cunning reuse of registers/output values etc - is still useful. Partly to be able to program specialist processors as mentioned above, but also to have a better understanding of what's happening between your program and the CPU - if you've only ever programmed higher-level languages, you can end up blind to things which seem the 'perfect' way to implement something, but perform really badly in hardware terms.

The days of hand-tuned software transforms on the CPU, though, are largely gone unless you're writing for 1997-level PC hardware.

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com

That's kinda sad, that we are relying on our modern hardware alone instead of optimizing; it shows how lazy many of us have become.

I don't think it really means we are lazy. It mostly means we can put our time into other parts of application development. Slow software that is done is always better than fast software that spent too long in development and either gets cancelled or is too dated to matter.

Of course, I am not saying doing assembly takes forever, or that using assembly always brings great results. I am just glad the Bresenham speed debates are becoming less common.

But when someone asks me if they need to learn it, I ask if they have a little spare time. If they answer yes, I say by all means. Always keep learning.

I too would say it's more a question of time than laziness. As languages and machines evolve, they allow us to code at a higher level of abstraction. We can do more, and faster.

I remember reading about a fascinating concept in a book about the programmers of tomorrow. Imagine software so evolved that instead of coding, you "herd" the code to evolve in the right direction. OK, I admit it, it's pure science fiction and has no basis in reality, nor do I see it in the future, but it does sound like fun ^_^.

It all comes down to optimization, but at a more general level.

As processors become faster, and memory and disk space become cheaper, technological advancement itself acts as an implicit "optimizer" when it comes to the Speed and Size issues. Software becomes automatically more "optimized" in these ways long after its development is over, simply by users executing it on a better machine. I'm not saying this as an excuse to ignore performance issues, but more as a reminder that improvement comes in many forms.

In contrast, the march of time has also raised expectations of what good software should be. While hardware improvements have made our programs faster, the time taken to develop high-quality software has increased by similar orders of magnitude. Attempting to create a product that meets or raises the bar (in whatever area of software the product is competing) has become a task of extraordinary expense, both human and monetary.

Given the above, the primary focus for optimization these days should be the increase of productivity and a corresponding decrease of development time. In other words, process optimization rather than product optimization.

Of course, while product optimization can be performed by digging into the nuts and bolts (oftentimes with reckless hacking), process optimization can't really be done without a solid foundation of software engineering and related principles.

When looking for bottlenecks, don't just focus on what you're making, but how you're making it.

- Chris

Maybe I was not so clear in my first post. What I meant is: would learning 2D/3D drawing routines in asm help me to understand how the D3D API works internally? Up to now I know how to draw certain things on the screen with the aid of the API, but I would like to know what is really going on in there. If I learn how to draw in asm, will I understand the foundations the API is built on? Thanks!
