
# Pattern matching. What is it used for?

12 replies to this topic

### #1Alpheus  GDNet+

Posted 22 January 2012 - 11:49 PM

I know this can be a complicated subject, and I'm a complete beginner at it, but my question is pretty straightforward: when or why would I use pattern matching? In what sort of program or problem would it be useful? Does anyone have an example (of code) to demonstrate its use (in any language)?
External Articulation of Concepts Materializes Innate Knowledge of One's Craft and Science

Super Mario Bros clone tutorial written in XNA 4.0 [MonoGame, ANX, and MonoXNA] by Scott Haley

If you have found any of the posts helpful, please show your appreciation by clicking the up arrow on those posts


### #2Hodgman  Moderators

Posted 23 January 2012 - 12:10 AM

First thing that comes to mind is gesture-based input -- like spell-casting in Black & White, or almost any Wii game.

### #3Snarkerd  Members

Posted 23 January 2012 - 02:05 AM

Pattern matching is an overloaded term. Do you mean pattern matching the language feature, as found in Haskell or Scala, used to break data structures apart? If so, it is useful for pulling apart a structured entity in a syntactically clean way that resembles (sometimes exactly matches) the syntax used to create the entity.

Haskell in particular uses this technique very effectively almost everywhere. A quick Google search for "Haskell pattern matching" will give excellent examples. Here's one adapted from the wikibook:
dropThree (x:y:z:xs) = xs
This defines a function that drops the first three items from a list. It takes one argument, the list, which it matches against the pattern of three elements x, y, and z consed onto the rest of the list xs. (Note that you can match on the cons operator :, because it is a data constructor, but not on ++, which is an ordinary function.) It returns (is equal to, in Haskell syntax) xs. So in effect, the same syntax that prepends three items to a list is placed in the argument position to pull three items off instead. Very cool!
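For completeness, here is a self-contained version you could load in GHCi; the final wildcard case is my addition, to keep the function total for lists with fewer than three elements:

```haskell
-- dropThree discards the first three elements of a list.
-- The cons patterns destructure the list; the wildcard case
-- (my addition) handles lists that are too short.
dropThree :: [a] -> [a]
dropThree (_:_:_:xs) = xs
dropThree _          = []

main :: IO ()
main = do
  print (dropThree [1..5 :: Int])  -- [4,5]
  print (dropThree "ab")           -- ""
```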

Is there a particular language you are looking at that caused you to ask? The particulars vary.

### #4szigeti_roland  Members

Posted 23 January 2012 - 02:22 AM

regex?

### #5Telastyn  Members

Posted 23 January 2012 - 08:30 AM

In addition to Koobs' answer, it's often used as an inline branching mechanism in functional languages where you would've used switch statements (or a pile of if/else blocks) in imperative languages. Since functional languages often use more list/tuple manipulation and stuff like Maybe/Nothing the scenario arises more often there.

[edit: so the Maybe/Nothing stuff (and even the tuples) are properly called Algebraic Data Types. The common example is working with a tree:
match node with
| Leaf x -> // do stuff with x
| Branch left right ->
// recurse left
// recurse right

]
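The sketch above can be written as real Haskell; the Tree type and the summing function are my own illustrative names, but they show how one match both branches on the constructor and binds its fields:

```haskell
-- An algebraic tree type and a function over it: matching on a
-- constructor selects the branch and binds the fields in one step.
data Tree = Leaf Int | Branch Tree Tree

sumTree :: Tree -> Int
sumTree (Leaf x)            = x                            -- "do stuff with x"
sumTree (Branch left right) = sumTree left + sumTree right -- recurse both sides

main :: IO ()
main = print (sumTree (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3))))  -- 6
```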

### #6Alpheus  GDNet+

Posted 23 January 2012 - 09:29 AM

> Pattern matching is an overloaded term. Do you mean pattern matching the language feature, as found in Haskell or Scala, used to break data structures apart? If so, it is useful for pulling apart a structured entity in a syntactically clean way that resembles (sometimes exactly matches) the syntax used to create the entity.
>
> Haskell in particular uses this technique very effectively almost everywhere. A quick Google search for "Haskell pattern matching" will give excellent examples. Here's one adapted from the wikibook:
>
> dropThree (x:y:z:xs) = xs
>
> This defines a function that drops the first three items from a list. It takes one argument, the list, which it matches against the pattern of three elements x, y, and z consed onto the rest of the list xs. It returns (is equal to, in Haskell syntax) xs. So in effect, the same syntax that prepends three items to a list is placed in the argument position to pull three items off instead. Very cool!
>
> Is there a particular language you are looking at that caused you to ask? The particulars vary.

Well, I'm using Scheme, and I know it has some pattern matching; Haskell and OCaml have it in their languages as well.

### #7Snarkerd  Members

Posted 23 January 2012 - 01:15 PM

I don't know Scheme, but it looks like it has pattern matching comparable to Haskell's (less pretty, IMHO, since you have to use an explicit 'match' form). Telastyn and I summed it up pretty well: it allows branching and destructuring on values. http://docs.racket-l...ence/match.html has some good examples of what Racket can do with matching, but I imagine you already read the examples and want to know about real-world stuff. Worth noting, though, is the sheer number of things you can match on, including regexes!

In Haskell (which is the language I have most pattern matching experience in), you can pattern match in a lot of places implicitly so you end up using it often. I often break tuple data apart into useful names instead of using fst, snd, and the like. If I have data that can be one of several forms (online examples usually mention expressions for parsers), I would use the branching feature in place of a switch or cond. I also break the data into useful names rather than using named fields.

Contrived example in messy pseudocode: say you had a network protocol with a message type like

Message = Move Int,Int or Login String or Logout or Say String,String

I would write the function

def Handle(Message)
    Move X, Y: <move the character by X and Y>
    Login Name: print 'Hello, ' + Name
    Logout: print 'Bye!'
    Say Name, Text: print Name + " says " + Text
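The same sketch as real Haskell (all of the type and function names here are mine, invented for illustration): one constructor per message kind, and one handler equation per constructor, with the fields bound by the match itself:

```haskell
-- A message type with one constructor per kind of message.
data Message = Move Int Int
             | Login String
             | Logout
             | Say String String

-- The handler pattern-matches each message apart, binding
-- its fields to names in the same step as the branch.
handle :: Message -> String
handle (Move x y)      = "moving by " ++ show x ++ "," ++ show y
handle (Login name)    = "Hello, " ++ name
handle Logout          = "Bye!"
handle (Say name text) = name ++ " says " ++ text

main :: IO ()
main = mapM_ (putStrLn . handle)
             [Login "Ann", Move 2 3, Say "Ann" "hi", Logout]
```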


### #8edd  Members

Posted 23 January 2012 - 01:35 PM

C++ template specialization is a crude form of pattern matching. For better or worse, it's what allows a wide variety of compile-time meta-programming techniques.

### #9SamLowry  Members

Posted 25 January 2012 - 08:02 AM

Pattern matching is the functional way of doing things; in a sense it's orthogonal to the OO way. Given code which you may not alter, OO allows you to define extra subclasses (for which you need to implement the necessary abstract methods, etc.), but you cannot add new methods to an existing class hierarchy "from the outside". The functional way allows you to define new functions on an existing data type, but you cannot add extra "subtypes" as you can in OO (you would have to change the data type's definition and add the extra cases to all other functions operating on it). Common Lisp is an example of a language which lets you work in both dimensions: you can add a new subtype and update (without having to modify existing code) all existing function/method definitions so that they know how to operate on the new subtype.
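That trade-off (one half of the so-called expression problem) can be sketched in a few lines of Haskell; the Shape type and perimeter function are hypothetical names of my own:

```haskell
-- A pre-existing data type we may not alter:
data Shape = Circle Double | Rect Double Double

-- A new function added later, "from the outside", without
-- touching Shape's definition -- easy in the functional style.
perimeter :: Shape -> Double
perimeter (Circle r) = 2 * pi * r
perimeter (Rect w h) = 2 * (w + h)

-- But adding a new constructor (say, Triangle) would force an
-- extra case in perimeter and in every other function on Shape.

main :: IO ()
main = print (perimeter (Rect 2 3))  -- 10.0
```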

Pattern matching also has the advantage that you can use nested patterns. An example (possibly full of bugs):

data Formula = Var Char
             | Or Formula Formula
             | And Formula Formula
             | ForAll Char Formula
             | Exists Char Formula
             | Implies Formula Formula
             | Not Formula
             deriving (Show)

normalize :: Formula -> Formula
normalize (Not (Exists x f))  = normalize $ ForAll x $ Not f
normalize (Not (ForAll x f))  = normalize $ Exists x $ Not f
normalize (Not (Or f f'))     = normalize $ And (Not f) (Not f')
normalize (Not (And f f'))    = normalize $ Or (Not f) (Not f')
normalize (Not (Not f))       = normalize f
normalize (Or (And f f') f'') = normalize $ And (Or f f'') (Or f' f'')
normalize (Or f (And f' f'')) = normalize $ And (Or f f') (Or f f'')
normalize (Implies f f')      = normalize $ Or (Not f) f'
normalize (And f f')          = And (normalize f) (normalize f')
normalize (Or f f')           = Or (normalize f) (normalize f')
normalize (Not f)             = Not $ normalize f
normalize (ForAll x f)        = ForAll x $ normalize f
normalize (Exists x f)        = Exists x $ normalize f
normalize f@(Var _)           = f


Inductive types and matching on their values is also important for proving stuff, as it allows for induction hypotheses, etc. but this might be a bit out of scope.

### #10edd  Members

Posted 25 January 2012 - 03:47 PM

> Given code which you may not alter, OO does allow you to define extra subclasses (for which you need to implement the necessary abstract methods, etc.), but you cannot add new methods to an existing class hierarchy "from the outside".

There's nothing about OO in and of itself that prevents this, though the languages that served to popularise OO tend not to support features such as structural typing or pattern matching.

### #11SamLowry  Members

Posted 26 January 2012 - 01:32 AM

> > Given code which you may not alter, OO does allow you to define extra subclasses (for which you need to implement the necessary abstract methods, etc.), but you cannot add new methods to an existing class hierarchy "from the outside".
>
> There's nothing about OO in and of itself that prevents this. Though the languages that served to popularise OO tend not to support features such as structural typing or pattern matching.

Depends on your definition of OO. If you take it to be "whatever C++/Java/C#/Eiffel implement", then there's no simple way of adding methods. But of course, in the "true" definition of OO, I'm sure no one includes "and it absolutely must be impossible to do this". Do you know of a language which supports adding methods and performs some sort of static verification?

### #12Telastyn  Members

Posted 26 January 2012 - 08:07 AM

C#'s extension methods serve this purpose, though they're not strictly 'added', depending on your definition/requirements.

### #13SamLowry  Members

Posted 26 January 2012 - 11:12 AM

> C#'s extension methods serve this purpose though they're not strictly 'added' depending on your definition/requirements.

C# extension methods don't really count in my eyes: they're nothing more than syntactic sugar for static methods in a separate class. There's no dynamic dispatch going on. (But admittedly, they're still quite useful.)

Someone told me that some future version of Java will have some sort of extension mechanism, but based on the information I got, it would still require a lot of instanceof checks to operate on existing hierarchies.
