Archived

This topic is now archived and is closed to further replies.

wolverine

NP-completeness and dynamic programming

Recommended Posts

Hello. I'm looking for some online info about dynamic programming and the NP-completeness classes (especially reductions, used to prove that a problem is NP-complete). I can't seem to find any good info about this on the net. It always points to some book, or it's just too *heavy* to understand. Can you help me with some links, please? Thanks.

Dynamic programming: the problem is decomposed into subproblems, similarly to recursion, but unlike plain recursion, you save the results of the subproblems so that you don't have to solve them over and over again.
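To make that concrete, here's a minimal Python sketch using Fibonacci (my own illustration, not anyone's official example): the plain recursive version recomputes the same subproblems exponentially often, while the memoized version solves each subproblem exactly once.

```python
from functools import lru_cache

# Plain recursion: exponential time, because the same
# subproblems (e.g. fib_naive(n - 2)) are recomputed many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic programming via memoization: each subproblem is
# solved once and its result is cached, giving O(n) time.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Both compute the same values; only the amount of repeated work differs.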

P: the class of problems that can be solved in polynomial time by a deterministic Turing machine.
NP: the class of problems that can be solved in polynomial time by a nondeterministic Turing machine.

NP-complete: the problem is in NP and is NP-hard.
NP-hard: every problem in NP can be reduced in polynomial time to this problem.

Reduction: you show that there is a way to convert your problem into another problem, so that the second problem can be used to solve the first.
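As an illustrative sketch (the function names and the brute-force solver are my own, purely for demonstration), here is the classic reduction from Independent Set to Clique: a set of vertices is independent in G exactly when it forms a clique in the complement of G, so any clique solver can be used to answer independent-set questions.

```python
from itertools import combinations

def complement(n, edges):
    """Complement of a graph on vertices 0..n-1."""
    present = {frozenset(e) for e in edges}
    return [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in present]

def has_clique(n, edges, k):
    """Brute-force clique check (exponential -- fine for a toy instance)."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in es for p in combinations(group, 2))
               for group in combinations(range(n), k))

def has_independent_set(n, edges, k):
    # The reduction: S is independent in G
    # iff S is a clique in the complement of G.
    return has_clique(n, complement(n, edges), k)
```

A real NP-hardness proof would also argue that the transformation runs in polynomial time (building the complement clearly does).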

It _is_ a complex, "heavy" subject (taught as a graduate-level CS class here; hey, I get to TA it), so I'm not surprised you find explanations 'heavy'. If you point me to one such explanation and tell me what you don't understand (don't answer 'everything'), maybe I can help you... or tell you "but this is crystal-clear".

1. Dynamic Programming

I find it hard to believe that you've searched the Internet and not yet found enough information on dynamic programming. Search harder.

2. NP-Hard Problems

I wonder if we'd be doing your homework for you? I'm afraid that there's nothing on the web (or otherwise) that will make P vs. NP proofs easy to understand. Michael Sipser's book "Introduction to the Theory of Computation" is a good place to start learning.

quote:
Original post by Graylien
1. Dynamic Programming

I find it hard to believe that you've searched the Internet and not yet found enough information on dynamic programming. Search harder.

2. NP-Hard Problems

I wonder if we'd be doing your homework for you? I'm afraid that there's nothing on the web (or otherwise) that will make P vs. NP proofs easy to understand. Michael Sipser's book "Introduction to the Theory of Computation" is a good place to start learning.



1. Dynamic Programming
I have found information on the net regarding dynamic programming. I just haven't found information on what I needed. By dynamic programming I don't just mean: here's Fibonacci, make it polynomial.
By dynamic programming, I mean the steps needed, given a certain problem, to build an algorithm based on dynamic programming. And on that, the net is short.

2. NP-Hard Problems

I wonder if we'd be doing your homework for you?
I DON'T have this habit. Do you?


I'm afraid that there's nothing on the web (or otherwise) that will make P vs. NP proofs easy to understand. Michael Sipser's book "Introduction to the Theory of Computation" is a good place to start learning.


Yes, it's a complicated topic. I've read Introduction to Algorithms by Cormen, Leiserson, and Rivest.


The only reason I'm making this post is that when I study hard material I like to have different sources of knowledge. It helps me better understand the matter at hand. And since I don't have the money to buy every book I see, I turn to the net. Can you blame me for this?



On dynamic programming, I'm having some trouble with the initial step of the algorithm (how to find and fill the right table for the problem).


On NP-completeness, the reductions are what trip me up.

quote:
Original post by wolverine
On dynamic programming, I'm having some trouble with the initial step of the algorithm (how to find and fill the right table for the problem).



The initialization step will, of course, be different for every problem. What do you mean by the "right" table? Do you mean the main dynamic-programming table, or the traceback table? (Dynamic programming isn't an off-the-shelf algorithm, but rather a methodology for solving a wide range of problems.)

quote:
Original post by wolverine
I DON'T have this habit. Do you?


I'm glad to hear that this is not the case. I correct a lot of homework that is copied directly from Internet sites. Don't be insulted because I mentioned that your post looked suspiciously like someone trying to get their homework done the easy way. I've seen scholastic careers ruined by cheating; it's a shame when it happens (and hence the reason I make it a point to say something).

quote:
Original post by wolverine
The only reason I'm making this post is that when I study hard material I like to have different sources of knowledge. It helps me better understand the matter at hand.



As do I. It helps to see things from different angles.

quote:
Original post by wolverine
And since I don't have the money to buy every book I see, I turn to the net. Can you blame me for this?


Did I blame you? No. Can you blame me for suggesting the best introductory book I've read on the topic? No. Since I don't remember any good websites on the topic, I thought I could help by suggesting a book. It should be available in most school libraries.

Sorry if I came down too hard on you.

Hmm, for example, when you solve the longest common subsequence problem, you use a different table than for the pocket-change problem. My main problem with dynamic programming is really the starting point. Like you said, dynamic programming isn't an off-the-shelf algorithm. What I'm lacking is, I think, the correct way of looking at the problem and starting to formulate the solution.
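To illustrate how different the tables can be (a hedged sketch; the function and variable names are my own), here is the pocket-change problem as minimum coin change in Python. Unlike the two-dimensional LCS table indexed by positions in two strings, this table is one-dimensional, indexed by a single amount.

```python
def min_coins(coins, amount):
    """table[a] = fewest coins needed to make amount a
    (float('inf') if the amount cannot be made at all)."""
    INF = float("inf")
    table = [0] + [INF] * amount          # base case: amount 0 needs 0 coins
    for a in range(1, amount + 1):
        for c in coins:
            # Using a coin of value c reduces the problem to amount a - c.
            if c <= a and table[a - c] + 1 < table[a]:
                table[a] = table[a - c] + 1
    return table[amount]
```

The starting point is the same question in both problems: what does each table entry *mean*, and which smaller entries does it depend on?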






OK, here goes. Obviously I cannot give a complete lesson in dynamic programming, but here's something (without pictures it's impossible to explain certain things).

I know that you already understand the basics of dynamic programming, so I'm just going to ramble on for a bit. My hope is that I might use a different wording than you're used to for one or two key points, and that might make a difference.

Dynamic programming is not a hard topic, so there's a chance that your confusion is due to some trivial misunderstanding.



-------------------------------------------------
Cormen, Leiserson and Rivest (CLR) decompose the general dynamic programming strategy into four steps (chapter 16 in the first edition):


1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in bottom-up fashion
4. Construct the optimal solution from computed information


Consider the longest common subsequence (LCS) problem. This is a very basic example, but it illustrates very well how to use dynamic programming. You've mentioned that you have the CLR book, so I won't go into detail, but let me know if there's any trouble.

As you know, when solving the LCS problem using dynamic programming we start with a table. Above the top row we place the first string (call it t), and the second string (s) goes along the left edge of the table.

Obviously, up until now everything is simple. The next step isn't hard, but it's the most crucial: you must decide on the recurrence relation that defines the optimal substructure.

The initial conditions for the LCS define the degenerate case when the LCS of two strings has length zero, i.e. the two input strings have not a single character in common. This situation is awarded the lowest possible score, which we'll define as 0.

Consider now what it means to move from cell to cell in our dynamic-programming table. Moving down one cell implies that we're skipping a character from s; moving one cell to the right means we're skipping the next character from t. It should be clear now why, when solving the LCS problem, all the entries in the first row and first column of the dynamic-programming matrix are set to zero.

The final type of movement is in the diagonal direction. When this happens we consume one character from each of s and t, i.e. there's a match.

This then leads to the scoring function. We have already decided that for the LCS problem the lowest score shall be zero. Furthermore, we note that two characters either match or they don't; the scoring function is thus a binary mapping.

The final thing to consider is how to add up the scores. When the algorithm is in full swing, we always have three options: move one cell down, move one cell to the right, or move diagonally down. Since we're interested in finding matches, we always take the diagonal option when it's present. Otherwise we choose the maximum of the scores in the cells above and to the left of the current one. We'll use these scores like Hansel and Gretel used breadcrumbs to find our way back (but in our case we're interested in the best path). The cells in the dynamic-programming matrix can be filled in in row-major or column-major order, as desired.
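The recurrence just described can be written down directly. Here is a score-only Python sketch of the LCS table (the names are my own, not CLR's):

```python
def lcs_length(s, t):
    """L[i][j] = length of an LCS of s[:i] and t[:j]."""
    m, n = len(s), len(t)
    # First row and first column stay zero: the degenerate case above.
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                # Diagonal move: a match, scored 1.
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                # Skip a character of s (move down) or of t (move right).
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]
```

Recovering the subsequence itself would follow the breadcrumbs back from L[m][n] (the traceback step), which I've omitted here.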

-------------------------------------------------

For many simple problems it all boils down to choosing the appropriate initial conditions. It's almost more artistic than mathematical.

When I choose my initial conditions and form the recurrence relation, I always ask myself what it means to move from cell to cell in the dynamic-programming matrix. Once you understand this, everything tends to fall into place. From then on it becomes much easier to quickly apply dynamic programming to a wide range of problems.

I hope this helps, even if it's only a tiny bit. When I started writing this I didn't think it would be so hard to describe this stuff without mathematical notation and pictures.

Good Luck.

Working on new and different problems is, of course, a great idea. Since you already know the LCS problem, might I suggest that you try one of the various sequence-alignment problems common in computational molecular biology? These problems are similar to the LCS problem, with slight differences. If you can work through one or two of those, you'll master dynamic programming in no time. Try searching for "Amino Acid Global Alignment" or "Local Alignment Affine Gap Penalty" in conjunction with the words "dynamic programming". I bet you'll find a million links (Google is our best friend).
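As one possible starting point (a sketch under simple assumptions, not a reference implementation; the scoring parameters are arbitrary choices of mine), here is a score-only global alignment table in the Needleman-Wunsch style. It differs from LCS exactly where it matters for practice: gaps and mismatches carry penalties, so the first row and column are no longer zero.

```python
def global_alignment_score(s, t, match=1, mismatch=-1, gap=-2):
    """D[i][j] = best score aligning s[:i] against t[:j]."""
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + gap        # first column: s aligned to gaps
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + gap        # first row: t aligned to gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # align the two characters
                          D[i - 1][j] + gap,       # gap in t
                          D[i][j - 1] + gap)       # gap in s
    return D[m][n]
```

Notice how changing the initial conditions and the scoring function turns the LCS-style table into an alignment table; that is exactly the "what does moving from cell to cell mean" exercise from the previous post.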
