Pattern matching. What is it used for?

I know this is a complicated subject, but I'm such a beginner in it. However, my question is pretty straightforward: When or why would I use pattern-matching? In what sort of program or problem would it be useful? Does anyone have an example (of code) to demonstrate its use (in any language)?


The first thing that comes to mind is gesture-based input, like spell-casting in Black & White or most any Wii game.
Pattern matching is an overloaded term. Do you mean pattern matching the language feature, as found in Haskell or Scala, used to break data structures apart? If so, it is useful for pulling apart a structured entity in a syntactically clean way that resembles (sometimes exactly matches) the syntax used to create the entity.

Haskell in particular uses this technique very effectively almost everywhere. A quick google for Haskell Pattern Matching will give excellent examples. Here's one from the wikibook:
dropThree (x:y:z:xs) = xs
This defines a function that drops the first three items from a list. It takes one argument, the list, and matches it against the pattern of three elements x, y, z consed onto the rest of the list, xs. It returns (is equal to, in Haskell syntax) xs. So in effect, the same syntax you would use to build a list by prepending three items is used in the argument position to pull those three items off instead. Very cool!
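A single equation like that only covers lists with at least three elements; to make the function total you would typically add a catch-all case, roughly like this sketch:

dropThree :: [a] -> [a]
dropThree (_:_:_:xs) = xs   -- underscores are wildcard patterns: the dropped items need no names
dropThree _          = []   -- anything shorter than three elements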

Is there a particular language you are looking at that caused you to ask? The particulars vary.
regex?
In addition to Koobs' answer, it's often used as an inline branching mechanism in functional languages, where you would've used switch statements (or a pile of if/else blocks) in imperative languages. Since functional languages often use more list/tuple manipulation and stuff like Maybe/Nothing, the scenario arises more often there.

[edit: so the Maybe/Nothing stuff (and even the tuples) are examples of what are properly called Algebraic Data Types. The common example is working with a tree:

match node with
| Leaf x -> // do stuff with x
| Branch left right ->
    // recurse left
    // recurse right

]
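For the curious, here is the same tree idea as actual Haskell (just a sketch; the Tree type and the total function are made up for illustration):

-- A tiny binary tree of Ints, defined only for this example.
data Tree = Leaf Int
          | Branch Tree Tree

-- Sum the leaves by branching on the shape of each node.
total :: Tree -> Int
total (Leaf x)            = x
total (Branch left right) = total left + total right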

Well, I'm using Scheme, and I know it has some pattern matching; Haskell and OCaml have it as well.


I don't know Scheme, but it looks like it has pattern matching comparable to Haskell's (less pretty, IMHO, since you have to use an explicit 'match' form). Telastyn and I summed it up pretty well. It allows branching and destructuring on values. http://docs.racket-l...ence/match.html has some good examples of what PLT Scheme (now Racket) can do with matching, but I imagine you already read the examples and want to know about real-world stuff. Worth noting, though, is the sheer number of things you can match on, including regexes!

In Haskell (which is the language I have most pattern matching experience in), you can pattern match in a lot of places implicitly so you end up using it often. I often break tuple data apart into useful names instead of using fst, snd, and the like. If I have data that can be one of several forms (online examples usually mention expressions for parsers), I would use the branching feature in place of a switch or cond. I also break the data into useful names rather than using named fields.
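A couple of trivial snippets of what I mean (the names distance and describe are just made up for the example):

-- Name the components of each pair right in the pattern,
-- instead of reaching for fst p and snd p.
distance :: (Double, Double) -> (Double, Double) -> Double
distance (x1, y1) (x2, y2) = sqrt ((x2 - x1) ^ 2 + (y2 - y1) ^ 2)

-- Branch on a Maybe the way you'd reach for a switch or cond elsewhere.
describe :: Maybe Int -> String
describe Nothing  = "no value"
describe (Just n) = "got " ++ show n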

Contrived example in messy pseudocode: Say you had a network protocol with a message type like

Message = Move Int,Int or Login String or Logout or Say String,String


I would write the function

def Handle(Message)
    Move X, Y: <move the character by X and Y>
    Login Name: print 'Hello, ' + Name
    Logout: print 'Bye!'
    Say Name, Text: print Name + " says " + Text

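And roughly the same thing as real Haskell, purely as a sketch (the Message type and handle function are invented for the example):

data Message = Move Int Int
             | Login String
             | Logout
             | Say String String

handle :: Message -> IO ()
handle (Move x y)      = putStrLn ("<move the character by " ++ show x ++ " and " ++ show y ++ ">")
handle (Login name)    = putStrLn ("Hello, " ++ name)
handle Logout          = putStrLn "Bye!"
handle (Say name text) = putStrLn (name ++ " says " ++ text)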

Is this helpful?
C++ template specialization is a crude form of pattern matching. For better or worse, it's what allows a wide variety of compile-time meta-programming techniques.
Pattern matching is the functional way of doing things; in a sense it's orthogonal to the OO way. Given code which you may not alter, OO does allow you to define extra subclasses (for which you need to implement the necessary abstract methods, etc.), but you cannot add new methods to an existing class hierarchy "from the outside". The functional way allows you to define new functions on an existing data type, but you cannot add extra "subtypes" as you can in OO (you would have to change the data type's definition and add the extra cases to all other functions operating on it).

Common Lisp is an example of a language which lets you work in both dimensions: you can add a new subtype and, without having to modify existing code, extend all existing function/method definitions so that they know how to operate on it.
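A tiny Haskell illustration of that trade-off (the Shape type is made up for the example):

data Shape = Circle Double
           | Rectangle Double Double

area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h

-- Adding a new *function* over Shape requires no changes to existing code:
perimeter :: Shape -> Double
perimeter (Circle r)      = 2 * pi * r
perimeter (Rectangle w h) = 2 * (w + h)

-- But adding a new constructor (say, Triangle) would force every function
-- that pattern matches on Shape to be edited; in OO the situation is reversed.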

Pattern matching also has the advantage that you can use nested patterns. An example (possibly full of bugs)


data Formula = Var Char
             | Or Formula Formula
             | And Formula Formula
             | ForAll Char Formula
             | Exists Char Formula
             | Implies Formula Formula
             | Not Formula
             deriving (Show)


normalize :: Formula -> Formula
-------------------------------
normalize (Not (Exists x f)) = normalize $ ForAll x $ Not f
normalize (Not (ForAll x f)) = normalize $ Exists x $ Not f
normalize (Not (Or f f')) = normalize $ And (Not f) (Not f')
normalize (Not (And f f')) = normalize $ Or (Not f) (Not f')
normalize (Not (Not f)) = normalize f
normalize (Or (And f f') f'') = normalize $ And (Or f f'') (Or f' f'')
normalize (Or f (And f' f'')) = normalize $ And (Or f f') (Or f f'')
normalize (Implies f f') = normalize $ Or (Not f) f'
normalize (And f f') = And (normalize f) (normalize f')
normalize (Or f f') = Or (normalize f) (normalize f')
normalize (Not f) = Not $ normalize f
normalize (ForAll x f) = ForAll x $ normalize f
normalize (Exists x f) = Exists x $ normalize f
normalize f@(Var _) = f
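
To see it in action (assuming the above compiles as intended), a negated conjunction gets pushed inward, De Morgan style:

-- ghci> normalize (Not (And (Var 'p') (Var 'q')))
-- Or (Not (Var 'p')) (Not (Var 'q'))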


Inductive types and matching on their values are also important for proving things, as they allow for induction hypotheses, etc., but that might be a bit out of scope.

Given code which you may not alter, OO does allow you to define extra subclasses (for which you need to implement the necessary abstract methods, etc.), but you cannot add new methods to an existing class hierarchy "from the outside".


There's nothing about OO in and of itself that prevents this, though the languages that served to popularise OO tend not to support features such as structural typing or pattern matching.
