
Thoughts on Nasm, etc..


15 replies to this topic

#1 iGoogleThis   Members   -  Reputation: 205


Posted 20 March 2013 - 08:40 PM

Hey GDers, I was studying some Nasm today and had something of an epiphany. It might be odd, but I realized something about assembly programming that may or may not have dawned on others before me. In order to avoid ranting, I have noted several specific things about the Netwide Assembler that I believe make writing code in assembly - and call me crazy - easier (for better or worse) than writing in some high-level languages:

 

IMO

 

Nasm makes the math easier

 

- I very much hated math in high school because, like for a lot of people, it just wouldn't stick. Folks have gone back and forth about whether or not this plays any significant role in a programmer's ability. From my perspective it doesn't really matter, so long as you recognize the mathematical formulas as they were taught from the get-go. Chances are you'll have to morph the way you think for the language itself anyway (cough cough, X,Y coordinates). In turn, assembly, being so damn manual, can turn a complicated expression into something much more systematic, so that under the hood you're using ADD and SUB (INC, DEC, etc.) instead of "if" statements laced with multiplication and division. Granted, that's a double-edged sword, isn't it? Which brings me to my next point.
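For instance, here's the flavor of it sketched in C rather than asm (a toy example; the same shift-and-add trick works instruction-for-instruction in Nasm): multiplying by 5 with no multiply at all.

#include <stdio.h>

/* Strength reduction by hand: x * 5 rewritten as a shift and an add,
   the kind of rewrite that becomes second nature in assembly
   (and one that optimizing compilers also perform on their own). */
int times5(int x)
{
    return (x << 2) + x;  /* x*4 + x == x*5, no multiply needed */
}

int main(void)
{
    printf("%d\n", times5(7)); /* 35 */
    return 0;
}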

 

It's like, the computer is the tree..and I'm the hugger.  And then you have these chainsaws..

 

- You can jerry-rig whatever the hell you need with some well-written assembly. The way I see it, there's an apple in a tree (a computer problem). You can either build a fine staircase with assembly, straight up climb the tree yourself with C, or chop the tree down with, say, Perl and walk off with all the fruit in a bag made of leaves; screw what the tree looks like now, right? I like being close to the hardware, but that's not the point. In fact I've always just wanted to use that analogy, so don't mind me. I'll just move on and say...

 

The syntax doesn't bother me

 

- You will want to run the other way when you see perfectly good grammar being slaughtered (in a good way) for the sake of making syntax "easier". If you're anything like me, you'll see Perl's "elsif" statement (I love Perl) and go 0_o, ugh. I'm not even going to bring up graphics libraries; those mouths are just filthy. mov, dd, db, and even EAX make more sense to me because they look like they're supposed to be abbreviated. But I love high-level languages nonetheless. It's like pseudo-code on top of code on top of comments.

 

Assembly literally makes you feel the structure

 

- Which is what I think the real allure is. It's the fact that you can really get into the nooks and get stuff done. Also, I want to share a little tip with folks learning x86 assembly because it helped me out; if any pros out there want to share/correct anything in turn, that'd be badass, but here I go.

 

operation    destination    source

 

is a common format seen in the assembler. The best way I memorize and apply it is by thinking:

 

The mission, the target, and the rendezvous (except you rendezvous before taking out the target; hitmen can be team players too).

 

Or, if you're not weird, just zone in on the destination being the middle, most accessible part of a road of some sort, and by adding labels and directives etc. you're adding lanes to it. But have these lanes be specific, similar to how there are bike lanes and HOV lanes and, hell, trolley lanes in real life. Just my two cents; there's a quick illustration below. What are your thoughts on assembly in 2013?
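And here's that quick illustration: C statements with rough NASM (Intel-syntax) equivalents in the comments. The register choices are just for the example, not anything an assembler would pick for you.

#include <stdio.h>

int main(void)
{
    int x, y;
    x = 5;      /* mov eax, 5    ; destination eax, source: immediate 5  */
    y = x;      /* mov ebx, eax  ; destination ebx, source: register eax */
    x = x + y;  /* add eax, ebx  ; result lands in the destination, eax  */
    x = x - 1;  /* sub eax, 1    ; or simply: dec eax                    */
    printf("%d\n", x);  /* 9 */
    return 0;
}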


Edited by iGoogleThis, 20 March 2013 - 08:52 PM.

Chris LeJohn

Build Engineer (RE)
Gnovahex Computing



#2 Vortez   Crossbones+   -  Reputation: 2697


Posted 20 March 2013 - 09:05 PM

Assembly is a thing of the past. It *can* be useful to write a couple of lines of inline assembly code for beginners, like me once, who didn't know how to do some simple operations in C++, like playing with bitfields, simple stuff like that. However, I doubt you'll get very far nowadays writing only in assembly. Try making a 10,000-line project in asm and tell me how it goes :P. About the only things I could think of using it for are bit rotation (ror, rol), emulators, or making floppy boot disks, which are almost non-existent in 2013. If you want to decrease your productivity by 10x (compared to C++) or 100x (compared to C# and Delphi), then go for it... And I don't see how it makes the math easier either... Have you tried multiplication, division, signed integers, floating point??? Probably not...


Edited by jbadams, 04 April 2013 - 05:43 AM.
Restored post contents from history.


#3 KulSeran   Members   -  Reputation: 2472


Posted 20 March 2013 - 09:49 PM

I think assembly, a compilers class, and an OS class are all majorly useful things to understand.

 

Knowing what a compiler or OS is doing takes a lot of the mystery out of what the computer is doing. When you realize that while/for loops can be written with if/goto, and that functions are just gotos that clean up after themselves, you'll see the answers to a lot of beginner questions like "when do I use while(){} vs do{}while vs for?".
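A minimal illustration of that lowering (a compiler's actual output will differ, but the skeleton is the same):

#include <stdio.h>

int main(void)
{
    /* the familiar form */
    for (int i = 0; i < 10; ++i)
        printf("%d\n", i);

    /* the if/goto skeleton it lowers to */
    int j = 0;
top:
    if (j < 10) {
        printf("%d\n", j);
        ++j;
        goto top;
    }
    return 0;
}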

 

Also, compilers optimize. I've been in a lot of situations where writing assembly isn't important but reading it is, and understanding what the compiler probably did is equally important. The "release" build of most code looks nothing like the output of a debug build. When you get a crash, chances are it doesn't line up well with the code the debugger claims crashed. Even when it does line up, logic and values are often folded out by the compiler. Being able to read the assembly for what the code is actually doing is the only way to discover some types of bugs efficiently.
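A toy case of that folding (exactly what happens depends on your compiler and flags): the release disassembly of a function like this often contains no loop at all, so single-stepping through it lines up with nothing in the source.

#include <stdio.h>

/* Sum of 0..n-1. A debug build's disassembly mirrors the loop; an
   optimizing compiler will typically fold it into the closed form
   n*(n-1)/2, leaving nothing that resembles the source to step through. */
int sum_upto(int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += i;
    return s;
}

int main(void)
{
    printf("%d\n", sum_upto(10)); /* 45 */
    return 0;
}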



#4 iGoogleThis   Members   -  Reputation: 205


Posted 20 March 2013 - 09:52 PM

Have you tried multiplication, division, signed integers, floating point??? Probably not...

 

Of course I have, that's the law of the land. 

 

I want to note that assembly is not my primary language; I'm learning it because I write code in C primarily. As KulSeran said:

 

Being able to read the assembly for what the code is actually doing is the only way to discover some types of bugs efficiently.


Chris LeJohn

Build Engineer (RE)
Gnovahex Computing


#5 JTippetts   Moderators   -  Reputation: 8333


Posted 20 March 2013 - 10:16 PM

I spent many years hand-tuning assembly for EGA, CGA, and later VGA/Mode X. Back then, bare-metal programming was not only "all the rage", it was necessary if you wanted to achieve acceptable levels of performance. You had to know your assembly.

Now, though... while I won't argue that knowing things at the assembly level isn't useful (all knowledge is useful, in one way or another) and fun, it is inarguably far less vital. It is possible to become a very competent developer without ever touching it. In fact, I'd even go so far as to say that for developers up to a certain (undefined) level of expertise, it might be dangerous to dabble too deeply in assembly.

Writing assembly now is pretty far removed from what it once was. Optimization is just so different from what it used to be that you are almost always better off letting the compiler do it rather than trying to do it yourself. And I'm not convinced that it really can help you become a better programmer anymore, unless you're a kernel or driver hacker. Modern programming takes place on top of so many layers of abstraction that the actual details of the underlying hardware are almost irrelevant. And you certainly won't find very many job listings that include it as a requirement. It arguably might make you marginally better at debugging, but if you are planning your education arc around something that will only come in handy in a small number of cases, I worry you might be wasting your time.

Now, if you're doing it for the joy/learning/bragging rights, or you want to be a kernel developer, a driver developer, or a compiler developer, then knock yourself out. But beyond a small handful of niche specialties, it's just not all that useful anymore.

Edited by JTippetts, 20 March 2013 - 10:17 PM.


#6 iGoogleThis   Members   -  Reputation: 205


Posted 21 March 2013 - 12:24 AM

JTippetts, I didn't want to believe it! But coming from someone who's been there, I can definitely see what you're saying. Still, for the sake of grunt development (kernels, drivers, etc.), what's the best way to really go about applying it? I find the documentation is alright, but the application is very limited. I suppose this goes back to what you said about it just not really having a place anymore.


Chris LeJohn

Build Engineer (RE)
Gnovahex Computing


#7 Hodgman   Moderators   -  Reputation: 29298


Posted 21 March 2013 - 12:42 AM

Assembly, being so damn manual, can turn a complicated expression into something much more systematic, so that under the hood you're using ADD and SUB (INC, DEC, etc.) instead of "if" statements laced with multiplication and division.

You can write in that style in a higher-level language if you like:
#include <cstdio>

void SUB( int& a, int b, int c ) { a = b - c; }
void ADD( int& a, int b, int c ) { a = b + c; }
void SET( int& a, int b )        { a = b; }
void PRINT( int a )              { printf("%d\n", a ); }

int main()
{
  int foo, bar, baz;
  SET( foo, 1 );
  SET( bar, 2 );
  ADD( baz, foo, bar ); // baz = 1 + 2 = 3
  SUB( foo, baz, foo ); // foo = 3 - 1 = 2
  PRINT( foo );         // prints 2
  return 0;
}
In high school we had a class where we learnt to use a "CPU simulator": a PC program simulating a simple RISC CPU, with a simple assembly language and a small array of bytes of RAM to operate on.
I still remember being absolutely stumped when asked to write a routine on this "CPU" to calculate the square root of a user-entered value, and then being ridiculously excited once I started putting together the ASM building blocks of a solution (and then competing with the other kids to shave instructions off our solutions to get the most compact one)!

It is a great learning exercise, but it's very rare to actually need to use assembly in day to day programming tasks.
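For the curious, one way such a routine can be built from nothing but add/sub/compare/branch -- sketched here in C, and not necessarily what anyone's simulator routine actually looked like -- uses the fact that 1 + 3 + 5 + ... + (2k-1) = k*k:

#include <stdio.h>

/* Integer square root from add/sub/branch alone: keep subtracting
   successive odd numbers; the count of subtractions is the root. */
unsigned isqrt(unsigned v)
{
    unsigned odd = 1, root = 0;
    while (v >= odd) {
        v -= odd;     /* subtract 1, 3, 5, ... */
        odd += 2;
        ++root;
    }
    return root;
}

int main(void)
{
    printf("%u\n", isqrt(10)); /* 3 (floor of the square root) */
    return 0;
}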

#8 Orangeatang   Members   -  Reputation: 1448


Posted 21 March 2013 - 02:28 AM

Assembly is a thing of the past

Not if you've spent any time doing some serious debugging (especially on the PS3).

 

Inline assembly, maybe, but if you're trying to track down a bug that only occurs in your retail/release build, it's pretty much essential.


Edited by Orangeatang, 21 March 2013 - 02:30 AM.


#9 Olof Hedman   Crossbones+   -  Reputation: 2737


Posted 21 March 2013 - 05:19 AM

I can definitely understand your fascination with assembly language :)

 

I cut my teeth as a budding software developer on 68k and Z80 assembler for my calculator.

Almost no OS in the way, and full control. C compilers felt ridiculously bloated. 

 

Professionally though, over the last 10 years, I've produced a single line of assembly that made it into production code. (That single line of asm is in 200+ million phones worldwide though, so not bad :) )

 

As everyone else says, with compilers getting better and hardware getting more complex, hand-tuned assembly optimization is getting less and less important.

 

And knowledge is never a bad thing!

For anyone who wants to write performance-critical code, it's very useful to know your asm well enough to at least inspect your compiler's output.

 

I'd never want to be without my OOP when the problems get a bit more complex...


Edited by Olof Hedman, 21 March 2013 - 05:20 AM.


#10 Vortez   Crossbones+   -  Reputation: 2697


Posted 21 March 2013 - 11:57 AM

I'm not saying it's dead; in fact I know a good deal of assembly language, but I wouldn't say it's better than a higher-level language. Still, it's fun to play with, and you can learn a great deal about how the processor really works.


Edited by jbadams, 04 April 2013 - 05:44 AM.
Restored post contents from history.


#11 carangil   Members   -  Reputation: 490


Posted 21 March 2013 - 01:59 PM

When I was a kid, the progression for learning programming was BASIC first, then assembly. You couldn't get a C compiler for free back then, and DOS came with 'debug', letting you write little assembly programs. Later I moved on to C(++), Java, etc., and never went back. But I do understand your fascination with it, and at times I think about getting into x64 assembly programming. Maybe later.

 

Don't let others give you grief about it, though. I'm a hardcore C programmer, and even program C in an OO style (function pointers in structs for polymorphism, etc.), and some people give me trouble for it. I understand C++ is there, but for my own personal projects (not at work: at work I use whatever the rest of the team is using, because it's not my project) I choose C.

 

Keep doing what you're doing. We need more assembly programmers, because somebody needs to fix the compiler when it spits out broken code, or write those high-performance hardware drivers!



#12 Ravyne   Crossbones+   -  Reputation: 7116


Posted 21 March 2013 - 03:14 PM

To touch a little more on what JTippetts said, in production it's potentially dangerous (to your performance, that is) for a non-expert programmer to delve into assembly directly. The reason, at its core, is this: modern CPUs are so complex that even an engineer from Intel or AMD, given a small routine written in assembly, can't tell you for sure how many cycles it'll take to execute, even ignoring the effect of memory latencies. At best they can give you a window: "between 37 and 108 cycles". Performance today just depends on so much more than the stream of opcodes you're interested in.

 

Today's CPUs break even single assembly instructions down into potentially many smaller micro-operations, and re-order them on the fly based on which execution units are available and whose operands are ready. In fact, and I'm not making this up, only around 1% of the transistors in a modern CPU are actually used to compute anything -- around 75% of the transistors are spent on caches (to hide memory latency), and the remaining 24% hide latency in other ways, like instruction re-ordering. All of this complexity exists on a micro scale, and is compounded by the number of different CPU microarchitectures with different properties.

 

Up a level or two from that are questions like "How much better or worse will my program perform if I inline this function?", "What about at this call site?", "What about at that call site?", "What if I allocate this variable in a register?", "What if I allocate that variable in the register instead?". It's not that a determined human with the right set of tools couldn't figure out the optimal instruction sequence at this level given enough time; it's that a human can't perform this function for the tens of thousands of permutations that are necessary to produce globally optimized code. This is why even very skilled assembly programmers today touch only those functions that are most performance-critical, usually out near the very leaves of the program's call graph -- there's more to be lost than gained in writing entire (or even large sections of) programs in assembly today, and the cost is large.

 

I'm glad that you have an interest in assembly; it's knowledge that's somewhat arcane but also indispensable when you really need it. Furthermore, knowing what high-level program constructs (say, a loop, a switch statement, a virtual function table) or patterns (say, Duff's Device) will look like after the compiler translates them to assembly can help you be a more conscientious programmer. All good things.

 

However, as a practical exercise today, assembly-level programming is rightly relegated to the margins. When writing and optimizing a program, or any part thereof, you should follow these steps:

  1. Write first for correctness, properly weighing performance against maintainability, and choosing good algorithms.
  2. Profile to identify performance-critical functions.
  3. IFF profiling reveals hot spots, consider optimizing them by following the next steps; otherwise stop here.
  4. Consider algorithmic optimizations, or ways to reduce the amount of work that needs to be performed in a given time. Space-partitioning algorithms are a good example of this type of optimization. I would also consider this step to include methods of optimizing cache locality, often called Data-Oriented Design (DoD), and transforming the problem to utilize vector hardware (SSE, AVX, etc.).
  5. IFF algorithmic optimization and work reduction still do not provide sufficient performance (as opposed to ultimate performance), consider further optimization by following the next steps; otherwise stop here.
  6. Consider re-writing the code using intrinsic functions to get closer to the metal without tying the compiler's hands for its usual optimizations (see the sketch after this list). In general, a compiler or assembler will not optimize hand-written assembly code at all. It's assumed that if you're writing assembler, you really mean to do exactly what you're doing, resulting complications be damned.
  7. IFF, after all of this, you can prove to yourself that you can generate better code than the compiler and its intrinsics, consider further optimization by following the next step; otherwise stop here.
  8. Consider re-writing in assembly language. If you follow these steps honestly, you should almost never end up here except in a few cases: namely, that you as a programmer have insights into the code that the compiler cannot have, that you can employ hardware resources or CPU instructions that the compiler cannot (or has no means for you to express at a higher level), or that, for whatever reason, the compiler simply does The Wrong Thing™ with this particular piece of code.
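To make step 6 concrete, here's a minimal sketch of intrinsics use in C. It assumes SSE hardware, n a multiple of 4, and 16-byte-aligned pointers -- a toy, not production code. Note the compiler still handles register allocation and instruction scheduling around these calls:

#include <stddef.h>
#include <xmmintrin.h> /* SSE intrinsics */

/* Add two float arrays four lanes at a time. */
void add_arrays(float* dst, const float* a, const float* b, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);            /* load 4 floats */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(dst + i, _mm_add_ps(va, vb)); /* add and store */
    }
}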
     


#13 iGoogleThis   Members   -  Reputation: 205


Posted 21 March 2013 - 06:01 PM

You can write in that style in a higher-level language if you like:

 

Are those prototypes up there above the main function, or just globals? Either way, I can't wait to get good enough at it to throw a few for-fun versions of that down off the top of my head!

 

  1. Write first for correctness, properly weighing performance against maintainability, and choosing good algorithms.

 

Ravyne, your entire post was super insightful! That number 1 is definitely the key; figure if you can do that part right, then optimization becomes less and less of an issue. For me, I'd rehash the thing till it's blazing fast, and after a while, if you're good enough at that, I suppose you can almost fit all of those steps into just number one from the start. My C code can be unruly at times because I like to avoid using too many pointers, but that's a whole different topic. I suppose it goes back to performance being essential.

 

I'm a hardcore C programmer, and even program C in an OO style (function pointers in structs for polymorphism, etc.), and some people give me trouble for it. I understand C++ is there, but for my own personal projects (not at work: at work I use whatever the rest of the team is using, because it's not my project) I choose C.

 

 

DracoLacorente, was that style something you eased into on your own? I'm self-taught, so it's a lot of give and take when it comes to style. I actually started off learning Python somewhat formally, but its significant whitespace didn't agree with me. The freedom C gives you while staying kind of centered is why I like it. That, and of course the inline assembly capability that people frown upon. I don't want to be that guy, but it honestly just rings my bells.

 

I've seen it answered a few times, but how does inline assembly fare as far as portability goes? I don't do it at all yet, but would it make code slightly more portable, or the opposite?


Edited by iGoogleThis, 21 March 2013 - 06:03 PM.

Chris LeJohn

Build Engineer (RE)
Gnovahex Computing


#14 Ravyne   Crossbones+   -  Reputation: 7116


Posted 21 March 2013 - 07:15 PM

Ravyne, your entire post was super insightful! That number 1 is definitely the key; figure if you can do that part right, then optimization becomes less and less of an issue. For me, I'd rehash the thing till it's blazing fast, and after a while, if you're good enough at that, I suppose you can almost fit all of those steps into just number one from the start. My C code can be unruly at times because I like to avoid using too many pointers, but that's a whole different topic. I suppose it goes back to performance being essential.

 

I'm glad you found it useful, but I'm worried you've misinterpreted it a bit -- I want to drive home the point that it is in no way meant to be viewed as something you can compress into a single step, nor should you want to. It's a process that aims to put your effort squarely where it belongs, while leaving code at the most abstracted level at which it can meet performance requirements. Trying to compress it all into one step is impossible without making dangerous assumptions that will end up costing you time, maintainability, and performance. An expert might skip steps 6 and 7 if, and only if, they are certain through wisdom and experience that the compiler cannot or will not generate the best code -- but they will never simply assume that to be the case, nor would they skip the earlier steps without first having hard data proving that the code falls into a performance hot spot.

 

I'll share something I learned just yesterday which is tangential, but illustrative of why the thing you think will work best often doesn't.

 

I attended a meeting of the Northwest C++ Users Group last night. The topic was Visual Studio's Profile-Guided Optimization (PoGO, for short) feature and how it works. In brief, PoGO works by instrumenting a build of your application, which you then run through various performance-sensitive scenarios in order to train it with real data. Then you do the real (release) build in a way that incorporates that training, to help inform the compiler how to generate the best code for real use cases. For example, based on whether a conditional is likely to be true or false, it might swap the order of the branches in an 'if' statement so that the processor speculatively executes the correct branch in the majority of cases. If it does so, the CPU stalls less and performance increases. It also applies what it's learned about how often, and from where, every function call is made (this influences whether the function should be inlined or not). It's all very complex, and I'm simplifying here, but that's what it does in a nutshell.
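(The manual analog of that branch-ordering hint, in GCC/Clang-flavored C, is __builtin_expect -- PoGO derives the same information from real training runs instead of a programmer's guess. A sketch, for illustration:)

#include <stdio.h>

#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

int process(int value)
{
    if (UNLIKELY(value < 0)) {  /* error path, rarely taken */
        fprintf(stderr, "bad value\n");
        return -1;
    }
    return value * 2;           /* hot path, laid out as fall-through */
}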

 

When we got to the end of the presentation and the speaker compared the results of PoGO-compiled code against code compiled with -O2 (the highest level of non-PoGO optimization Visual Studio's compiler supports), there were some really interesting results. Not only was the PoGO-compiled code faster, it was also smaller, and it inlined only about 5% of the overall call sites, vs. around 20% or higher inlined by the -O2-compiled code. Now, it performs a number of other optimizations to achieve that, but think about those stats on their own -- the PoGO code used far less inlining than the -O2 code, and was faster and smaller at the same time. Best performance is not achieved by being most aggressive with potential optimizations; it's achieved by being really smart about where optimizations are applied, and by applying them in the context of real data about real scenarios.

 

Let's think about that another way: for all of the thousands of PhD-hours and hundreds of millions of dollars thrown into compiler research over the decades, not even the compiler (and one of the best in the world, at that) can generate its best code without profiling it first!


Edited by Ravyne, 21 March 2013 - 07:23 PM.


#15 carangil   Members   -  Reputation: 490


Posted 21 March 2013 - 09:22 PM


I'm a hardcore C programmer, and even program C in an OO style (function pointers in structs for polymorphism, etc.), and some people give me trouble for it. I understand C++ is there, but for my own personal projects (not at work: at work I use whatever the rest of the team is using, because it's not my project) I choose C.




DracoLacorente, was that style something you eased into on your own? I'm self-taught, so it's a lot of give and take when it comes to style. I actually started off learning Python somewhat formally, but its significant whitespace didn't agree with me. The freedom C gives you while staying kind of centered is why I like it. That, and of course the inline assembly capability that people frown upon. I don't want to be that guy, but it honestly just rings my bells.

 

 

I had zero style throughout high school and most of college. Even after college my style was somewhat poor. It was after working at my current company for about a year or so that some of the style and design of the code at work began to rub off on me and influence my code at home. A few key experiences have shaken up my style quite dramatically:

 

 

  • I took a Java class at school. The ideas of objects and polymorphism were cool, but I saw a lot of stuff that seemed over-OO-ified (the Integer class, for example). But the idea of interfaces and abstraction really influenced my style.

 

 

  • At work, one of the projects I help maintain is a large C codebase. It has fully manual memory management. It sucks. There is one part of that code base (a data tree structure) that uses reference counting. I've adopted reference counting in my home-coded C projects.

This is typical of my style at home:

 

#include <stdio.h>
// z_alloc, z_addref, z_free, z_strdup are my refcounting helpers
// (a sketch of them follows below)

typedef struct inner_s {
   char* str;
   void (*do_something)(struct inner_s* inner);
} inner_t;

typedef struct {
   inner_t* i1;
   inner_t* i2;
} outer_t;

// destructors: called by z_free when the last reference goes away,
// releasing the references this object holds in turn
void free_outer(outer_t* x)
{
   z_free(x->i1);
   z_free(x->i2);
}

void free_inner(inner_t* x)
{
   z_free(x->str);
}

void foo(outer_t* x)
{
   x->i1->do_something(x->i1);
   x->i2->do_something(x->i2);
}

void inner_do_something_interesting(inner_t* in)
{
   printf(" oh my god %s!\n", in->str);
}

void inner_do_something_boring(inner_t* in)
{
   printf("I'm bored\n");
}

inner_t* inner_mk(char* string)
{
   inner_t* inner = z_alloc(sizeof(*inner), free_inner);
   inner->str = z_addref(string);
   inner->do_something = inner_do_something_interesting;
   return inner;
}

int main(void)
{
   outer_t* a = z_alloc(sizeof(*a), free_outer);

   char* reused_string = z_strdup("hello");

   a->i1 = inner_mk(reused_string);
   a->i2 = inner_mk(reused_string);

   // sometimes you want to override a method
   a->i2->do_something = inner_do_something_boring;

   z_free(reused_string); // don't need this anymore, inner_mk addref'ed it

   foo(a); // this calls the 'do_something' method on both i1 and i2

   // sometime later, something else copies a reference to a:
   outer_t* a_copy = z_addref(a);

   // ...and then maybe the original a goes out of scope:
   z_free(a);

   // then eventually the copy goes out of scope:
   z_free(a_copy);

   // now a->i1 and a->i2 are freed, recursively freeing the last
   // reference to 'reused_string', which is finally destroyed
   return 0;
}

C++ programmers tell me the above is crazy and I should have made C++ classes for inner and outer, and then used boost smart pointers.  At work I program how I'm supposed to program to get things done.  At home, I program for fun, whatever way I want.
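(For illustration only -- this isn't his posted implementation -- z_* helpers like these could be as simple as a refcount and destructor stored in a header just ahead of the payload, with the destructor taking void* for simplicity:)

#include <stdlib.h>
#include <string.h>

typedef struct {
    int refs;
    void (*dtor)(void* payload);
} z_hdr;

void* z_alloc(size_t size, void (*dtor)(void*))
{
    z_hdr* h = calloc(1, sizeof(z_hdr) + size);
    h->refs = 1;
    h->dtor = dtor;
    return h + 1;                 /* hand back the payload */
}

void* z_addref(void* p)
{
    z_hdr* h = (z_hdr*)p - 1;
    h->refs++;
    return p;
}

void z_free(void* p)
{
    z_hdr* h = (z_hdr*)p - 1;
    if (--h->refs == 0) {
        if (h->dtor) h->dtor(p);  /* let the payload drop its own refs */
        free(h);
    }
}

char* z_strdup(const char* s)
{
    char* d = z_alloc(strlen(s) + 1, NULL);
    strcpy(d, s);
    return d;
}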



#16 Cornstalks   Crossbones+   -  Reputation: 6974


Posted 21 March 2013 - 09:45 PM

C++ programmers tell me the above is crazy and I should have made C++ classes for inner and outer, and then used boost smart pointers.  At work I program how I'm supposed to program to get things done.  At home, I program for fun, whatever way I want.

If you were programming in C++, I might say they have a point. But since you're programming in C, they're clearly crazy for making such suggestions :)

But to add to the discussion: I think people have made some excellent points, and I won't repeat everything. I've primarily found knowing assembly useful when debugging release builds (where I haven't been able to reproduce the bug in a debug build), and it has been really helpful. It doesn't happen a lot, but when it does... well, somebody has to get their hands dirty.

Edited by Cornstalks, 21 March 2013 - 09:54 PM.

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]



