
List as a Monad

Much ink has been spilled on the subject of “Lists as Monads”. Mostly, though, it just states the fact that they are, of course, and not what that means for you. This doesn’t really seem surprising to me; there are better monads to use to describe how monads work. Additionally, lists have their own special functions for working with them; you don’t fmap a list, you just call map.

Presumably, Lists as Monads is something that seasoned Monadimancers understand. Maybe it’s some sort of rite of passage. But no more. Today I’ll shed light on the arcane mysteries of Lists as Monads.

Motivation

But first we need a problem to solve. For my example I’ll be using the cartesian product of two lists. The cartesian product of two sets (we’ll be using lists) is defined as:

A × B is the set of all ordered pairs (a, b) such that a is a member of set A and b is a member of set B

…translated into Haskell we should get a function with the following signature:

cartProd :: [a] -> [b] -> [(a, b)]

How might this function look? Well, we’d need three functions. The first function should have the signature:

cartProd'' :: a -> b -> (a, b)

This function is pretty straightforward: it pairs an a and a b. Next we’d need the function:

cartProd' :: [b] -> a -> [(a, b)]

This function maps (cartProd'' a) over the list of bs, producing the list [(a, b)]. Finally, we need a function to tie it all together:

cartProd :: [a] -> [b] -> [(a, b)]

This function maps (cartProd' b) over a, then concats the resulting list of lists. An implementation might look like this:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = concat $ map (cartProd' r) l
  where cartProd' :: [b] -> a -> [(a, b)]
        cartProd' r' l' = map (cartProd'' l') r'
          where cartProd'' :: a -> b -> (a, b)
                cartProd'' a b = (a, b)

Did you go cross-eyed? Not exactly the most concise function that’s ever been written. Surely there is a better way…

Binding Our Lists

You may remember a few weeks back I talked about using the Maybe Monad. The gist of that post was that if you’re inside a do block, you can treat your Maybe as if it were just a plain value. It turns out that we can do something similar with Lists.

The Monad instance for [] is somewhat complicated, but its operation is straightforward. Recall that >>= takes a Monad a, a function that takes an a and returns a Monad b, and returns a Monad b. So what happens if we bind a simple function to [1]?

Prelude> [1] >>= (\a -> return $ a + 1)
[2]

…ok, so it added 1 to 1, and stuffed it into a list. Seems pretty straightforward. Let’s try something a little more complicated:

Prelude> [1, 2] >>= (\a -> return $ a + 1)
[2,3]

…so this time it added 1 to each element of the list, as if we’d called map ((+) 1) [1, 2]. Let’s try something else:

Prelude> [1] >>= (\a -> [(a + 1), (a - 1)])
[2,0]

…this time we tried it in reverse. We bound [1] to a function that returns a list with two elements. The resulting list contained two elements. Again, nothing ground breaking here, but what if we do both?

Prelude> [1, 2] >>= (\a -> [(a + 1), (a - 1)])
[2,0,3,1]

…now we get a list with four elements: 1+1, 1-1, 2+1, and 2-1. To replicate this behavior we can’t just map the lambda over the original list. We need to add a call to concat. Let’s expand this out a bit further:

Prelude> [1, 2] >>= (\a -> [(a + 1), (a - 1)]) >>= (\b -> [b, b])
[2,2,0,0,3,3,1,1]

…all simple functions, but if we were to try to do this without the use of List’s monadic functions, it’d become a mess like my cartProd function.
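In fact, what we just observed is the whole trick: for lists, >>= maps the function over the list and then concats the results. A minimal sketch of an equivalent definition (the real Prelude instance is written differently, but behaves the same way; listBind is my own name for it):

listBind :: [a] -> (a -> [b]) -> [b]
listBind xs f = concat (map f xs)

Loaded into GHCi, it reproduces the earlier result:

Prelude> listBind [1, 2] (\a -> [(a + 1), (a - 1)])
[2,0,3,1]

Speaking of that mess of a cartProd function…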

The Better Way

Getting back to our original topic. Now that we have a feel for how List’s monadic interface works, how could we re-implement cartProd to not be terrible? Simple:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = do
  l' <- l
  r' <- r
  [(l', r')]

I’m often hesitant to toot my own horn and call something I wrote “elegant”, but it’s hard to argue that this function isn’t. In three short lines, I managed to re-implement that nested monstrosity I wrote before. There is one fairly massive gotcha here…

In the first and second lines, we bind each of our two lists to a label. It’s important to note that the order of these matters. The lists are cycled through from the bottom up, meaning that for each element of l, all elements of r will be evaluated. For example, using our cartProd:

Prelude> cartProd [1,2] [3,4]
[(1,3),(1,4),(2,3),(2,4)]

1 is paired with 3, then 1 is paired with 4, then 2 is paired with 3 and 2 is paired with 4. Were we to swap the order in which we bind l and r to look like this:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = do
  r' <- r
  l' <- l
  [(l', r')]

Then the output would look like this:

Prelude> cartProd [1,2] [3,4]
[(1,3),(2,3),(1,4),(2,4)]

1 is paired with 3, then 2 is paired with 3, then 1 is paired with 4, then 2 is paired with 4. Before, the results were grouped by the first element of the tuple, whereas here they’re grouped by the second. For our purposes, both results are valid per the definition of the Cartesian product, but that may not hold for you, so be careful.
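If it isn’t obvious why the order matters, it helps to look at what the do block desugars to. Roughly speaking (glossing over a few details of how do notation is translated), the first version is equivalent to:

cartProd :: [a] -> [b] -> [(a, b)]
cartProd l r = l >>= (\l' -> r >>= (\r' -> [(l', r')]))

The list bound first becomes the outermost >>=, so it varies slowest; every binding below it is cycled through completely for each element above it.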

Maybe I Should Be In The Maybe Monad

If you’ve spent any time with Haskell, then you’ve surely encountered Maybe. Maybe is Haskell’s version of testing your pointer for NULL, only it’s better because it’s impossible to accidentally dereference a Nothing.

You’ve also probably thought it was just so annoying. You test your Maybe a to ensure it’s not Nothing, but you still have to go about getting the value out of the Maybe so you can actually do something with it.

The Problem

Let’s forgo the usual contrived examples and look at an actual problem I faced. While working on the Server Console, I was faced with dealing with a query string. The end of a query string contains key/value pairs. Happstack conveniently decodes these into this type:

[(String, Input)]

I needed to write a function to look up a key within this list, and return its value. As we all know, there’s no way to guarantee that a given key is in the list, so the function must be able to handle this. There are a few ways we could go about this, but this seems to me to be an ideal place to use a Maybe. Suppose we write our lookup function like so:

lookup :: Request -> String -> Maybe String
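Setting the Happstack specifics aside for a moment, the core of such a function is just an association-list lookup that may fail. A simplified sketch over plain String pairs (myLookup is my own name, chosen to avoid clashing with the Prelude’s lookup; the real version also has to dig the [(String, Input)] out of the Request and turn the Input into a String):

myLookup :: String -> [(String, String)] -> Maybe String
myLookup _ [] = Nothing
myLookup key ((k, v):rest)
  | key == k  = Just v
  | otherwise = myLookup key rest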

This is logically sound, but now we have an annoying Maybe to work with. Suppose we’re working in a ServerPart Response. We might write a response function like so:

handler :: ServerPart Response
handler = do
  req <- askRq
  paths <- return $ rqPaths req
  page <- return $ lookup req "page_number"
  case page of
    Nothing -> mzero
    (Just a) -> do
      items <- return $ lookup req "items_per_page"
      case items of
        Nothing -> mzero
        (Just b) -> h' paths a b

Yucky! After each call to lookup, we check to see if the call succeeded. This gives us a giant tree that’s surely pushing off the right side of my blog page. There must be a better way.

Doing It Wrong

Shockingly, this is not the best way to do this. It turns out that writing our functions in the Maybe monad is the answer. Take the following function:

hTrpl :: Request -> Maybe ([String], String, String)
hTrpl r = do
  paths <- return $ rqPaths r
  page <- lookup r "page_number"
  items <- lookup r "items_per_page"
  return (paths, page, items)

… now we can re-write handler like so:

handler :: ServerPart Response
handler = do
  req <- askRq
  triple <- return $ hTrpl req
  case triple of
    Nothing -> mzero
    (Just (a, b, c)) -> h' a b c

Much better, right? But why don’t we have to test the return values of lookup? The answer to that question lies in the implementation of Maybe‘s >>= operator:

instance Monad Maybe where
  (Just x) >>= k = k x
  Nothing  >>= _ = Nothing

Recall that do notation is just syntactic sugar around >>= and a whole bunch of lambdas. With that in mind, you can see that we are actually binding functions together. Per Maybe‘s bind implementation, if you bind Just a to a function, it calls the function on a. If you bind Nothing to a function, it ignores the function, and just returns Nothing.

What this means for us is that so long as we’re inside the Maybe monad, we can pretend all functions return successful values. Maybe allows us to defer testing for failure! The first time a Nothing is returned, functions stop getting called, so we don’t even have to worry about performance losses from not immediately returning from the function! So long as we’re inside of Maybe, there will be Peace On Earth. We code our successful code branch, and then when all is said and done and the dust has settled, we can see if it all worked out.
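If you want to see the short-circuiting for yourself, Debug.Trace makes it visible (loud and the demo names here are mine, purely for illustration):

import Debug.Trace (trace)

-- A Maybe-returning function that announces itself when it actually runs
loud :: Int -> Maybe Int
loud x = trace "loud was called" (Just (x + 1))

demo :: Maybe Int
demo = Just 1 >>= loud >>= loud          -- prints the trace twice, yields Just 3

demoNothing :: Maybe Int
demoNothing = Nothing >>= loud >>= loud  -- prints nothing, yields Nothing

Evaluating demoNothing never calls loud at all; the Nothing just falls straight through.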

Next time you find yourself testing a Maybe more than once in a function, ask yourself: should I be in the Maybe monad right now?

A Trip To The Magical Land Of Monads

Monads can be a tricky topic. On the surface, they’re not hard at all. Take Maybe, for instance:

Prelude> Just 1
Just 1
Prelude> Nothing
Nothing

That couldn’t possibly be easier. Something is either Just [something], or it is Nothing. However, Monad world is a magical world of gotchas, unenforceable rules, and magical syntax. I had been planning on writing a post on Monads, much like I did with Applicatives and Functors. While researching this topic, I’ve determined that I’m not qualified to speak on this subject yet.

I am qualified to discuss the usage of Monads. I feel that saying you are going to “learn how to do Monads” is similar to saying you are going to “learn how to do data structures”. Data structures are similar to each other in that they serve a common purpose: to contain data. However, a Vector is nothing like a Linked List, which is nothing like a Hash Map.

Similarly, a Maybe is nothing like an IO, which is nothing like a Writer. While they are all Monads, they serve different purposes. Today, I’d like to lay some groundwork on the topic and talk about binding functions.

On Magical Syntax

Much like Functor and Applicative, Monad brings functions that allow you to use normal values with monadic values. Monad brings the >>= operator, called the bind operator. (The name matters for monads in general, but I won’t get into that today; just know that the operator’s name is bind.) The signature for >>= is:

(>>=) :: m a -> (a -> m b) -> m b

As you can see, it takes a Monad that contains an a, a function that takes an a and returns a Monad b, and returns a Monad b. Basically, it calls the function on the value contained in the monad, does whatever additional action is appropriate for the monad, and returns a monad containing the result. In short: it’s like fmap for Monads, except that the function you bind produces a monadic value itself. Let’s take a look at a quick example for Maybe:

Prelude> Just 1 >>= (\a -> Just $ a + 5)
Just 6

As you can see, we bind Just 1 to a function that takes a value, adds it to 5, and wraps the result in a Just, which results in Just 6. Bind will correctly handle Nothing as well:

Prelude> Nothing >>= (\a -> Just $ a + 5)
Nothing

Still with me? Good, because things are about to get magical.

Do Notation

Monads are so special, they have their own magical syntax! When working with monads, you may use do notation. What does do notation look like? Let’s take a look at a sample function:

justAdd :: (Num a) => a -> a -> Maybe a
justAdd a b = Just $ a + b

This function takes 2 numbers, adds them, and wraps them up in a Maybe. Nothing earth shattering here. Let’s take a look at how to work with these using Bind:

justAddDemoBind = justAdd 2 2 >>= (justAdd 4)

justAddDemoBindNothing = Nothing >>= (justAdd 4)

I’ve defined 2 simple functions here. The first calls justAdd 2 2, which returns Just 4. The function (justAdd 4) is then applied to it using the bind operator, which will return Just 8. The second attempts to apply the function (justAdd 4) to Nothing. Since the bind operator is smart enough to handle this, the final result of this function is Nothing. Simple, really. Now, let’s see do notation in action:

justAddDemoDo = do
  first <- justAdd 2 2
  justAdd 4 first

justAddDemoDoNothing = do
  first <- Nothing
  justAdd 4 first

Looks like a completely different language, right? In fact, these two functions do the exact same thing as the previous two. The purpose of do notation is to make working with Monads easier; in practice, if your logic is non-trivial, chaining binds by hand leaves you with hugely nested lambdas. To see what’s going on, let’s break justAddDemoDo down:

justAddDemoDo = do

In the first line, we open our do block.

first <- justAdd 2 2

In the second line, we call justAdd 2 2, and assign the result to the name first. Notice that <- operator? It works basically the same as it does in list comprehensions: it does the assigning.

justAdd 4 first

Finally, we add 4 to first, resulting in Just 8, which is the value returned from the function. Notice that we get to treat first as a regular value, even though it came out of a Maybe. This is part of the magic of do notation. In fact, if this line were written as justAdd first 4, it would have worked.

Another very important thing to note is that the last line of a do block must be an expression that returns a Monad! GHCi will throw a fit if it’s not.
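To make the connection to >>= concrete, here’s roughly what justAddDemoDo and justAddDemoDoNothing desugar to (the names with the Desugared suffix are mine; the compiler’s actual translation differs slightly, but this is the idea):

justAddDemoDoDesugared = justAdd 2 2 >>= (\first -> justAdd 4 first)

justAddDemoDoNothingDesugared = Nothing >>= (\first -> justAdd 4 first)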

The Pervasiveness Of Do

As you can see, under the hood, do notation just uses the >>= operator. You can also see that it is much simpler and cleaner looking than using the bind operator. However, that doesn’t mean that do is better. Like many things, it comes down to personal choice.

When reading literature on Haskell, you should be prepared to interpret do blocks, and usage of the bind operator. Like any tool, do and bind are not always the correct one. Picking the correct tool for the job is part of being a programmer. Hopefully this tutorial gave you enough familiarity to be able to use both.

What Functors Can Do For You

One of the major aspects of Functional Programming, its lack of side effects, can be one of its major stumbling blocks. Sure, taking a value as a parameter and using it without creating a variable is a simple matter for basic types, but what of more complicated structures? How does one go about unwrapping a structure, operating on its value, and re-wrapping the value in said structure without losing its data? The answer is by using a convoluted set of functions that take way too many arguments just to preserve the state. Or, you could use a functor.

So I Overload Operator()?

No, we aren’t talking about C++ classes with an operator() implementation to make them look like functions. Functors in Haskell are a completely different animal. The easiest way to explain them is by example, so let’s take a look at the Maybe type:

data Maybe a = Nothing | Just a deriving (Eq, Ord)

As you can see, Maybe has two data constructors: Nothing and Just something. Maybe is used to represent a calculation that may have failed. If the calculation failed, Nothing is returned. If it succeeded, Just whatever is returned. This is all fine and good, but Just whatever isn’t the same thing as whatever; it is encapsulated in a Maybe, and must be extracted if it is to be used.

How do you do that? Well, first you have to account for a Nothing value. Assuming you have a Just, you have to extract it, probably using a new function. Then you have to use the value. Afterwards, you probably want to put it back into the Maybe. Sounds like a lot of work, right? Let’s take a look at what Functor has to offer us:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

So, a Functor defines 1 function (actually it defines more, but they have default implementations and can be ignored in 99% of cases): fmap, which takes a function that takes 1 argument of type a and returns a result of type b, a Functor containing an a, and returns a Functor containing a b. Basically, fmap takes a function, calls it on the value inside the passed-in functor, and returns a functor wrapping the result.

Maybe This Is Helpful?

Let’s take a look at how this works with Maybe. Here is Maybe‘s Functor implementation:

instance Functor Maybe where
  fmap _ Nothing  = Nothing
  fmap f (Just a) = Just (f a)

Basically, if fmap is called on Nothing, Nothing is returned. Otherwise, the passed-in function is called on the value contained in the Just, and a new Just is returned with the result. This is a much more elegant way to deal with structures: there is no checking required; fmap knows how to do the right thing.
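A quick illustration in GHCi, using nothing beyond what the instance above already tells us:

Prelude> fmap (*2) (Just 21)
Just 42
Prelude> fmap (*2) Nothing
Nothing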

A Bit Limiting

You may be thinking to yourself “Isn’t this a bit limiting? I can only fmap using a function that takes 1 argument!” You’d be right, if not for Haskell’s partial function application. Say we want to add 2 to the Int contained in a Just. If we didn’t have partial function application, we would have to do something ugly like this:

fmap (\a -> a + 2) (Just 2)

… or maybe something truly awful like this:

addTwo :: Int -> Int
addTwo a = a + 2

Luckily for us, we live in a just and righteous world where we can just partially apply +:

fmap (+2) (Just 2)

While this is a trivial example, I feel it adequately illustrates the point; you can fmap a function that takes any amount of arguments if you partially apply it!

This leads me to an important point that most literature leaves out: arguments are applied from left to right, so the order of arguments is important! When you’re writing functions, think about how users might want to partially apply them. Take these two functions for example:

padString1 :: String -> Int -> String
padString1 s 0 = s
padString1 s n = padString1 (s ++ "-") (n - 1)

padString2 :: Int -> String -> String
padString2 0 s = s
padString2 n s = padString2 (n - 1) (s ++ "-")

Both of these functions do the same thing: they take a String and an Int, and append n -‘s to the string. The difference is how they can be partially applied: padString1 has the String first, so it can only be partially applied with the String. Conversely, padString2 has the Int first, so it can only be partially applied with the Int. For this reason, when creating functions, take time to consider these issues. There’s probably a whole blog post to be written about not messing this up. Maybe when I feel qualified, I’ll write it! For now, just keep it in mind.
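For example, assuming the definitions above are loaded, padString2 partially applies nicely as something to map, while padString1 partially applies nicely over a count you might not have yet:

Prelude> map (padString2 3) ["a", "b"]
["a---","b---"]
Prelude> fmap (padString1 "a") (Just 3)
Just "a---"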

Sounds Familiar

You’ve heard this word before, right? Map… But where?

instance Functor [] where
  fmap = map

That’s right, Lists are Functors! The fmap implementation for List just calls the map function that you’re probably familiar with. (If you need a refresher: it takes a function and a list, and returns a list with the function called on all members of the original list.)
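So these two expressions are interchangeable:

Prelude> fmap (+1) [1, 2, 3]
[2,3,4]
Prelude> map (+1) [1, 2, 3]
[2,3,4]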

In fact, most structures in Haskell are Functors. All Monads in the standard library are Functors, so it’s a good bet that you can fmap them. Either way, judicious use of fmap and Functors can make your code much cleaner and more manageable.

Baby’s First Web Application Framework

Deep in the depths of the DMP Lunar Bunker, I have a file server running. I prefer to do my computing on various internet connected devices rather than be tied to one computer. To this end, I try to keep my documents, images, and things of that nature centralized on the server so that I can get to them anywhere. This is all fine and good on a computer, but my iPad and various smartphones don’t have native support for mounting an SMB share.

To that end, a project that I’ve had planned for a long time is to develop a web app running on my server to serve that stuff up through a web browser. That said, this post isn’t announcing the beginning of that project. I have enough on my plate: DMP Photo Booth still needs polish, I want to learn Haskell, I have a wedding to plan, and I have to spend this month figuring out what “Linear Algebra” is. Not to mention that the file server is in need of a software refresh. All of those things are higher priority. Still, as an exercise in learning Haskell, I’ve created a toy HTML generator.

doopHtml

That would be the name of my little suite of functions. I feel its name reflects the gravity of the project. All it really is is a few types and customized Show instances. First, let’s take a look at my types:

type TagName = String
type Attributes = [(String, String)]

data Child = NullChild | TagChild Tag | StringChild String

data Tag = NullTag | Tag TagName Attributes [Child]

First of all, forgive my sloppy indentation style. I haven’t really settled on a code style for Haskell yet. With that out of the way, let’s get to business.

TagName

This is just a type alias for a String. The concept of a math function is one that I understand, but I’ve always hated the trend of giving things useless names like a, i, or Σ instead of names like numerator, index, or Summation over. Sure, it was good enough for Aristotle, but I live in an age with things like “find and replace” and “code completion”. Maybe thousands of years from now, scholars will remember Chris Tetreault, father of the descriptive variable name. (HA!)

But I digress. I’ve created this alias to document when a String in a function argument or type constructor is a tag name.

Attributes

This is a type alias for a list of String pairs. This type represents the attribute list of a Tag: things like src="https://doingmyprogramming.com/". It’s pretty self-explanatory. An empty list is an allowed value, and represents a tag with no attributes.

Child

Now we’re getting a bit more complicated. If you picture an HTML document as a tree, then the children are the branches hanging off a tag. There are three kinds of Child: NullChild is the absence of a child, TagChild is a nested Tag, and StringChild is just some free-floating text.

Tag

And now the meat of it. Tag has two data constructors: NullTag, which is the absence of a tag, and Tag itself.

Tag takes 3 arguments: a TagName, an Attributes, and a list of Child. This is all you really need to represent a basic HTML document. (no embedded CSS)
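For instance, a simple link could be represented with these constructors like so (just an illustration; the pretty output comes from the Show instances below):

Tag "a" [("href", "doingmyprogramming.com")] [StringChild "Doing My Programming..."]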

None of these types have deriving(Show), because they require a custom implementation. Let’s look at that next.

Show

First, let’s take a look at Tag‘s Show implementation:

instance Show Tag where
  show NullTag = ""
  show (Tag n [] []) = "<" ++ n ++ ">"
  show (Tag n a []) = "<" ++ n ++ flattenAttributes a ++ ">"
  show (Tag n [] c) = show (Tag n [] []) ++ foldl1 (++) (map show c) ++ "</" ++ n ++ ">"
  show (Tag n a c) = show (Tag n a []) ++ foldl1 (++) (map show c) ++ "</" ++ n ++ ">"

The Show implementation for Tag has 5 patterns:

  • The first pattern is a NullTag. It returns an empty string.
  • The second pattern is a Tag that only has a TagName. It places the TagName between angle brackets.
  • The third pattern is one that also has an Attributes. It flattens the Attributes, and places it and the TagName between angle brackets.
  • The fourth pattern is one that has a TagName and a Child list. It:
    • Recursively calls show with the argument (Tag n [] []). This results in the opening tag being printed.
    • Maps show over the Child list, then folds the returned list of Strings up using foldl1 (++), concatenating all the returned Strings into 1 large String.
    • Closes the tag.
  • The fifth pattern has all fields. It works the same as the fourth, but calls show (Tag n a []) instead.

The Show implementation for Child is much simpler, and I don’t feel it requires explanation:

instance Show Child where
  show NullChild = ""
  show (TagChild x) = show x
  show (StringChild x) = x

But what about that flattenAttributes function earlier? It looks like this:

flattenAttributes :: Attributes -> String
flattenAttributes [] = []
flattenAttributes x = foldl (\acc (k,v) -> acc ++ " " ++ k ++ "=" ++ v) "" x

If given an empty list, it returns an empty list. In Haskell, [] is the same thing as "" for Strings, so this functions as an empty String. However, if the list isn’t empty, there’s a bit more work to do.

First, foldl is called. foldl’s function signature is: (a -> b -> a) -> a -> [b] -> a, which reads as: “foldl takes a function that takes an a and a b, and returns an a. It also takes an a, and a list of b, and it returns an a”. In English, it needs a starting value, a list to fold up, and a function to be called on each list element. I call foldl with an empty string as the starting value and x, which is the Attributes passed into the function.

The funky bit in the middle of all of that, (\acc (k,v) -> acc ++ " " ++ k ++ "=" ++ v), is called a lambda. If you’ve ever written an anonymous function in Java, you’ve done this before. The \ at the beginning tells Haskell that what follows is a lambda. After that, you have your arguments. Remember the signature of the function to be passed into foldl: (a -> b -> a). In this lambda, acc is a, and it is a String. (k,v) is b, a tuple of two Strings. The part following the -> is the function body. In this lambda, we concatenate the key and value together into the form " k=v". That is then concatenated onto acc, which is the result of all previous calls. The final result looks something like " k1=v1 k2=v2 ... kn=vn", which is returned from the function.
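Given that definition, a quick check in GHCi (assuming flattenAttributes is loaded):

Prelude> flattenAttributes [("href","doingmyprogramming.com"),("target","_blank")]
" href=doingmyprogramming.com target=_blank"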

Some Helpers

So, now that’s all done, and the Tag type is usable. It’s not user friendly though:

Tag "html" [] [TagChild $ Tag "head" [] [TagChild $ Tag "title" [] [StringChild $ "DMP"]], TagChild $ Tag "body" [] [TagChild $ Tag "a" [("href","doingmyprogramming.com")] [StringChild "Doing My Programming..."]]]
<html><head><title>DMP</title></head><body><a href=doingmyprogramming.com>Doing My Programming...</a></body></html>

Barf, right? It’s like writing HTML, only harder! Clearly we need some helper functions to make working with this easier.

tagHtml :: [Child] -> Tag
tagHtml c = Tag "html" [] c

tagHead :: Attributes -> [Child] -> Tag
tagHead a c = Tag "head" a c

tagTitle :: String -> Tag
tagTitle t = Tag "title" [] [StringChild t]

tagBody :: Attributes -> [Child] -> Tag
tagBody a c = Tag "body" a c

pageSkeleton :: String -> [Child] -> Tag
pageSkeleton t b = tagHtml [TagChild $ tagHead [] [TagChild $ tagTitle t], TagChild $ tagBody [] b]

Here are some basic helpers I whipped up while writing this. The first four take just the fields needed to produce their respective tags, and pageSkeleton combines them into a basic page. Now we can make a basic page skeleton by calling the pageSkeleton function:

pageSkeleton "DMP" [TagChild $ Tag "a" [("href","http://doingmyprogramming.com")] [StringChild $ "Doing My Programming..."]]
<html><head><title>DMP</title></head><body><a href=http://doingmyprogramming.com>Doing My Programming...</a></body></html>

While that could use some work, it’s enough to see where this is going. Much like Perl’s CGI module, functions can be created to automate much of the boilerplate.
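For example, a helper for anchor tags in the same spirit might look like this (tagA is a hypothetical addition, not something defined above):

tagA :: String -> String -> Tag
tagA href text = Tag "a" [("href", href)] [StringChild text]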

Reinventing The Wheel

All this is fine and good, but surely this has been done already, right? Considering Haskell’s hackage server is implemented in Haskell, it seems that there is almost certainly a better web development framework for Haskell than doopHtml. I encourage you to not develop your next web application with the doopHtml framework, nor should you include your expertise in doopHtml on your resume.

It was good practice though!
