## Posts Tagged ‘lazy’

### Programming Praxis – Streaming Knapsack

May 15, 2012

In today’s Programming Praxis exercise, our goal is to write a function that finds the first possible knapsack, i.e. the first combination of k values from a lazy list that sums to a given value n. The list should not be evaluated further than necessary. Let’s get started, shall we?

Some imports:

```haskell
import Data.List
import Debug.Trace
import System.IO
```

Rather than constructing a matrix, we use a solution that probably isn’t as efficient, but is significantly more compact and easier to understand: just take all possible combinations of numbers and find the first one of length k that sums to n.

```haskell
knapsack :: (Eq a, Num a) => Int -> a -> [a] -> Maybe [a]
knapsack k n = find (\s -> length s == k && sum s == n) . subsequences
```
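As a quick sanity check (the inputs here are my own, not from the exercise): `subsequences` enumerates shorter combinations first, so `find` returns the earliest match without forcing the rest.

```haskell
import Data.List (find, subsequences)

knapsack :: (Eq a, Num a) => Int -> a -> [a] -> Maybe [a]
knapsack k n = find (\s -> length s == k && sum s == n) . subsequences

main :: IO ()
main = do
  -- [1,2,3] is the first length-3 subsequence, and it already sums to 6.
  print (knapsack 3 6 [1..10 :: Int])   -- Just [1,2,3]
  -- No pair drawn from [1..10] sums to 100, so the search is exhausted.
  print (knapsack 2 100 [1..10 :: Int]) -- Nothing
```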

To demonstrate that the function is indeed lazy, we apply it to a list that reports when one of its elements is evaluated. The hFlush makes sure the result of the first call is printed before the second list is evaluated; without it, both results would be printed at the end.

```haskell
main :: IO ()
main = do print . knapsack 7 37 $ map debugEval [1..20]
          hFlush stdout
          print . knapsack 7 17 $ map debugEval [1..20]
    where debugEval n = trace ("evaluated: " ++ show n) n
```

### Programming Praxis – Carl Hewitt’s Same-Fringe Problem

August 3, 2010

In today’s Programming Praxis exercise we have to implement Carl Hewitt’s same-fringe algorithm, which determines whether two binary trees have the same leaves in the same order, regardless of their respective structures. The simple solution of flattening both trees and comparing them doesn’t work in most languages, since their strict evaluation means the entire flattened list must be held in memory, which will fail on big trees. Haskell, on the other hand, is lazy by default, so we don’t need to do anything complicated to have things work correctly. Let’s get started, shall we?

Since we’ll be working with binary trees, we’ll need to define a data structure for them:

```haskell
data Tree a = Node (Tree a) (Tree a) | Leaf a
```

Flattening a tree is trivial.

```haskell
flatten :: Tree a -> [a]
flatten (Node l r) = flatten l ++ flatten r
flatten (Leaf x)   = [x]
```
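One caveat: left-nested `++` can make this flatten quadratic on left-leaning trees. A common alternative (a sketch of my own, not part of the original; the names `flattenAcc` and `flatten'` are made up) threads an accumulator so each leaf is consed exactly once, while staying just as lazy:

```haskell
data Tree a = Node (Tree a) (Tree a) | Leaf a

-- Accumulator-passing flatten: each leaf is prepended to the list built
-- from the subtrees to its right, so no intermediate lists are copied.
flattenAcc :: Tree a -> [a] -> [a]
flattenAcc (Node l r) acc = flattenAcc l (flattenAcc r acc)
flattenAcc (Leaf x)   acc = x : acc

flatten' :: Tree a -> [a]
flatten' t = flattenAcc t []

main :: IO ()
main = print (flatten' (Node (Node (Leaf 1) (Leaf 2)) (Leaf 3)))
-- [1,2,3]
```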

As mentioned in the intro, all we need to do is check whether the two flattened lists are equal.

```haskell
sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe a b = flatten a == flatten b
```
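Laziness also means the comparison itself can stop early: list equality fails at the first mismatching leaf, so neither tree is flattened any further. A small sketch of my own, in the style of the debugEval trick from the knapsack post, where only the first pair of leaves gets traced before False comes back:

```haskell
import Debug.Trace (trace)

data Tree a = Node (Tree a) (Tree a) | Leaf a

flatten :: Tree a -> [a]
flatten (Node l r) = flatten l ++ flatten r
flatten (Leaf x)   = [x]

sameFringe :: Eq a => Tree a -> Tree a -> Bool
sameFringe a b = flatten a == flatten b

main :: IO ()
main = print $ sameFringe (Node (tleaf 1) (Node (tleaf 2) (tleaf 3)))
                          (Node (tleaf 9) (Node (tleaf 2) (tleaf 3)))
  -- the fringes differ at the very first leaf (1 vs 9), so the
  -- remaining leaves are never evaluated or traced
  where tleaf n = Leaf (trace ("evaluated: " ++ show (n :: Int)) n)
```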

As always, a test to see if things are working correctly:

```haskell
main :: IO ()
main = print $ sameFringe (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))
                          (Node (Node (Leaf 1) (Leaf 2)) (Leaf 3))
```

Looks like they are. But how do we prove this method doesn’t require a big memory overhead? Well, let’s just create some big trees and find out. A binary tree of depth 28 has 2^28 = over 268 million leaves. At the very least, that would result in a list of 1 GB, and that’s assuming 32 bits per integer and no overhead. Since Haskell’s default lists are linked lists, the real figure is probably at least double that. However, the following program

```haskell
main = print $ sameFringe (treeOfDepth 28 (Leaf 1))
                          (treeOfDepth 28 (Leaf 1))
    where treeOfDepth n t = iterate (\x -> Node x x) t !! (n - 1)
```

hums along at a constant memory use of less than 20 MB when interpreted and less than 3 MB when compiled, with the compiled version finishing in just under 32 seconds. To quote Larry Kersten, “Hard work often pays off after time, but laziness always pays off now.” 🙂