Ryan_001

Implementing x to the power of y


Ryan_001    3476
I'm working on a fixed-point math library (just for fun/educational value), and I need to implement a few basic math operations. The x-to-the-power-of-y operation has me stumped. It's pretty straightforward when x and y are both integers, but when y has a fractional component things get complicated. Calculating it by converting the decimal to a fraction, then taking the nth root followed by the power, would theoretically work, but I can't see this being remotely efficient or having very good precision.

Using various identities I can transform b^x into e^(x * ln(b)), which seems like it would be easier to compute. I imagine implementing ln(x) and e^x would be easier, and it's nice to have those functions anyway. But here I'm kinda stumped. I've looked all over and haven't found any info on how to go about implementing these. I imagine it'd be somewhere along the lines of a Taylor series expansion of some sort plus a lookup table, but to be honest I'm not really sure where to start. It'd also be nice to use 2^(x * log2(b)) instead, I imagine. But again, I'm kinda stumped at this point and not quite sure where to start.

I'm not really looking for a ready-made answer; just a point/link in the right direction would be greatly appreciated, or even some general reading material if anyone has stumbled across any on the internet.
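For reference, the "straightforward" integer-exponent case might look like this in a fixed-point library. This is an illustrative Python sketch assuming a hypothetical Q16.16 format (names and format are mine, not from the thread):

```python
FRAC_BITS = 16              # hypothetical Q16.16 fixed-point format
ONE = 1 << FRAC_BITS        # 1.0 in fixed point

def to_fixed(f):
    return int(round(f * ONE))

def to_float(q):
    return q / ONE

def fx_mul(a, b):
    # the product of two Q16.16 numbers has 32 fractional bits; shift back to 16
    return (a * b) >> FRAC_BITS

def fx_ipow(x, n):
    # x**n for integer n >= 0 by square-and-multiply
    result = ONE
    while n:
        if n & 1:
            result = fx_mul(result, x)
        x = fx_mul(x, x)
        n >>= 1
    return result
```

The fractional-exponent case discussed below is what this simple routine can't handle.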

Emergent    982
Here's my suggestion, pending replies from more expert posters:

Any real exponent can be approximated arbitrarily well by some rational number m/n. So I'll start by assuming that we're really working with a rational exponent.

You want to find

x = b^(m/n)

or equivalently, the solution to

x^n = b^m

or

x^n - b^m = 0

So this becomes "just" a root-finding problem in x. I figure you probably know a bit about numerical root finding, so I'll leave it at that.

Sound reasonable?
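A minimal sketch of that root-finding idea, assuming b > 0 and using a deliberately crude starting guess (the details here are illustrative choices, not Emergent's):

```python
def pow_rational(b, m, n, iters=60):
    # approximate b**(m/n) for b > 0 by solving x**n - b**m = 0
    # with Newton's method: x <- x - (x**n - c) / (n * x**(n-1))
    c = b ** m                  # integer power: the easy case
    x = max(c, 1.0)             # crude but safe starting guess for c > 0
    for _ in range(iters):
        x -= (x ** n - c) / (n * x ** (n - 1))
    return x
```

A smarter initial guess (e.g. from the exponent's magnitude) would cut the iteration count considerably.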

Ryan_001    3476
Interesting, I never thought of doing it like that. I was implementing e^x with (I think it's called) a power series approximation, and then was going to use that with Newton-Raphson to approximate ln(x). And if I understand correctly, Newton's method is a root-finding algorithm. Of course, skipping the extra steps (computing e^x and ln(x)) might be faster.

Thanks, sounds reasonable to me ;)
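The plan described above (a power series for e^x, then Newton-Raphson to invert it for ln) can be sketched as follows; the truncation lengths and starting guesses are illustrative choices, not anything from the thread:

```python
def exp_series(x, terms=30):
    # power series: e**x = 1 + x + x**2/2! + x**3/3! + ...
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k           # term is now x**k / k!
        total += term
    return total

def ln_newton(a, iters=40):
    # solve e**y - a = 0 for y with Newton-Raphson: y <- y - (e**y - a) / e**y
    y = a - 1.0                 # rough guess; assumes a > 0 and not huge
    for _ in range(iters):
        ey = exp_series(y)
        y -= (ey - a) / ey
    return y

def pow_via_identity(b, x):
    # b**x = e**(x * ln b), the identity from the original post (b > 0)
    return exp_series(x * ln_newton(b))
```

Note the poster later abandons the raw Taylor series for CORDIC precisely because convergence is slow for arguments far from zero; range reduction (e.g. splitting off the integer part of x / ln 2) is the usual fix.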

Emergent    982
Quote:
Original post by Ryan_001
Interesting, never thought of doing it like that. [...] And if I understand correctly, Newton's method is a root-finding algorithm.

Quote:
Original post by Ryan_001
Calculating it by converting the decimal to a fraction, then taking the nth root followed by the power would theoretically work, but I can't see this as being remotely efficient, or having very good precision.


Funny thing is, come to think of it, that's pretty much exactly what I suggested. ;-)

Quote:
Original post by Ryan_001
I was implementing e^x with a (I think its called) power series approximation and then was going to use that with a Newton-Raphson to approximate ln(x).


That's what some guys on another forum were doing. I don't know how it compares. Maybe try both and see which one's faster?

Also, I've got to think about this accuracy issue. Are there any numerical analysts in the house?

TheUnbeliever    963
Quote:
Calculating it by converting the decimal to a fraction, then taking the nth root followed by the power would theoretically work, but I can't see this as being remotely efficient, or having very good precision.


Not that I have any better suggestions than this, but wouldn't you want to take the power then the root (rather than losing information when you take the root, and amplifying any resultant errors when you take the power)? (And - potentially - vice versa for -1 < base < 1?)

EDIT: On second thoughts, you probably want to ignore me. ;-) The resolution of floats decreases as their magnitude increases.

beun    160
Quote:
Using various identities I can transform b^x into e^(x * ln(b)) which seems like it would be easier to compute. I imagine implementing ln(x) and e^x would be easier, and its nice to have those functions


Well, that method is used in most pocket calculators (http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Exponential_identity).

So I guess it's probably a good method, especially if, as you said, you'd want to implement the exponential and natural log functions anyway.

Ryan_001    3476
Ok, I decided to use a CORDIC implementation. I tried Taylor series, but they can take forever to converge (there are ways around that, but it didn't seem like the best approach).

Everything worked quite well; I got sin, cos, and tan working perfectly. Arcsin and arccos were good everywhere except near the very edges of their domain, where they seem to get 'stuck' (i.e. the values over a small range of the domain never change), but I think it may be a precision issue in the lookup table. Has anyone else played with a CORDIC implementation and seen this?
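For reference, the circular CORDIC rotation mode being described might be sketched like this (floating-point Python for clarity; a fixed-point version replaces the floats and the atan values with scaled integers from a lookup table):

```python
import math

def cordic_sin_cos(theta, n=24):
    # circular CORDIC in rotation mode; theta must lie within the convergence
    # range of roughly +/-1.74 rad (the sum of atan(2**-i)), else range-reduce first
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-scale by 1/gain
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0           # rotate toward z == 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y, x                                  # (sin(theta), cos(theta))
```

The vectoring mode used for arcsin/arccos is the same loop driving y (rather than z) to zero, which is where the edge-of-domain trouble mentioned above tends to show up.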

Now, the hyperbolic functions (sinh, cosh, etc.) are used to compute ln and exp, which in turn can be used to calculate power. CORDIC works for those too, but I seem to have a rather weird problem.

So, for any CORDIC gurus around (I should probably try this in its own thread, or perhaps a DSP forum, but who knows):

All of the documents/papers I've read state that certain iterations have to be repeated in order to get proper convergence (for the hyperbolic functions). Most papers state the iteration sequence as { 1, 2, 3, 4, 4, ..., i, ..., 3i + 1, ... } or something like that, which I would interpret as: repeat the 4th, 7th, 10th, 13th, etc. iteration. I saw one paper write that the repeated indices were 4, 13, 40, 121, ..., which leaves me kinda confused as to how 3i + 1 gives { 4, 13, 40, 121, ... }. Nonetheless, it's all kinda a moot point, because when comparing my sinh against two references (the standard sinh in math.h and my Sharp calculator), having no extra iterations (i.e. not repeating the 4th, 7th, or anything of the sort) gives the most accurate answer.

So as is, I'm rather confused. Are the papers I'm reading wrong (and mind you, I've gathered info from at least 5-6 different sources, though maybe they're all referencing the same one faulty document)? Are the implementations in both math.h and my calculator off? Or am I missing something here (granted, the most likely answer)?
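For what it's worth, one common reading of those papers is that 3i + 1 is a recurrence, not a formula over every index: the repeated indices are 4, 13, 40, 121, ..., where each repeated index k is followed by the next repeat at 3k + 1. A floating-point sketch under that assumption (this is my reading, not the thread's actual code):

```python
import math

def hyperbolic_indices(n):
    # shift indices start at 1; indices 4, 13, 40, 121, ... are each used twice,
    # and each repeated index k is followed by the next repeat at 3*k + 1
    idx, i, rep = [], 1, 4
    while len(idx) < n:
        idx.append(i)
        if i == rep:
            idx.append(i)       # use this index a second time
            rep = 3 * rep + 1
        i += 1
    return idx[:n]

def cordic_sinh_cosh(theta, n=30):
    # hyperbolic CORDIC in rotation mode; converges for |theta| < ~1.118
    idx = hyperbolic_indices(n)
    K = 1.0
    for i in idx:
        K *= math.sqrt(1.0 - 2.0 ** (-2 * i))   # per-step shrink factor
    x, y, z = 1.0 / K, 0.0, theta               # pre-scale so the result lands on x*x - y*y == 1
    for i in idx:
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x + d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atanh(2.0 ** -i)
    return y, x                                  # (sinh(theta), cosh(theta))
```

Without the repeats, the residual angles atanh(2^-i) shrink faster than the remaining steps can correct, so convergence is only guaranteed when they are included.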

Ryan_001    3476
Ok, after a few days of playing around I finally figured it all out ;) I thought I'd wrap up this thread with a few tips in case anyone stumbles across it in a search in the future.

The error above was due to incorrect scaling (I had computed the wrong gain). I found the easiest way to compute the CORDIC gain was simply the method found here:

http://www.voidware.com/cordic.htm

Rather than trying to compute the product of all the terms, simply rotate a unit vector by zero degrees (or compute cos(0), same thing). The result will be the gain and/or scale factor (whatever the paper happens to call it) for the number of iterations you need. Much quicker and much less error prone.

The above website covers almost all the functions you'll need, but most of them are only valid for a small range of inputs. This paper:

http://spanky.fractint.org/pub/fractals/docs/cordic.math

has a list of identities to use (tables 3 and 4 in particular are what you're looking for).
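The zero-rotation trick for the gain can be sketched like this (floating-point Python for clarity; in a real fixed-point library the same loop would run over a scaled-integer atan table):

```python
import math

def cordic_gain(n=30):
    # rotate the unit vector (1, 0) by zero degrees: z starts at 0, so the steps
    # just alternate direction, and the final x is the accumulated gain for n steps
    x, y, z = 1.0, 0.0, 0.0
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x
```

Because the gain depends only on the iteration count, it can be computed once (or baked in as a constant) and folded into the starting vector, exactly as the voidware page describes.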

