Programming Praxis – Text File Databases: Part 2

In today’s Programming Praxis exercise, our task is to define functions to map, filter, fold and foreach over the records in the text file databases for which we wrote parsers in the previous exercise.

However, due to the way we wrote the functions last time, there really isn’t much point in doing so. Since the parsers already return a list of records (albeit wrapped in an Either and an IO), you can simply use the map, filter, foldl and mapM_ functions from the Prelude to process them. This makes a little more sense in the Scheme solution, since there the parsers return only one record at a time. Even then, though, I’d personally just write a function that returns all the records in a file and then process them like any other list: it saves you from having to duplicate a lot of existing functions, and it makes function composition much easier, since the database-specific functions cannot be composed.

Of the four functions mentioned, the only one that warrants a function in Haskell is foreach (or in Haskell terminology, mapM_), since it requires doing something with the potential parse error:

dbMapM_ :: Monad m => (a -> m b) -> Either l [a] -> m ()
dbMapM_ = either (const $ return ()) . mapM_

The other three can just be fmapped over the result of readDB. I won’t bore you with the implementations for map, filter and foldl, since they would be largely identical to the ones found in the Prelude.
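For the curious, here is a minimal sketch of what they might look like, simply lifting the Prelude functions over the Either with fmap (the names are my own invention, not part of the exercise; a Left parse error passes through untouched):

```haskell
-- Sketches of the remaining three database functions, obtained by
-- lifting the Prelude versions over the Either result with fmap.
dbMap :: (a -> b) -> Either l [a] -> Either l [b]
dbMap = fmap . map

dbFilter :: (a -> Bool) -> Either l [a] -> Either l [a]
dbFilter = fmap . filter

dbFoldl :: (b -> a -> b) -> b -> Either l [a] -> Either l b
dbFoldl f z = fmap (foldl f z)
```

As you can see, each is a one-liner over its Prelude counterpart, which is exactly why writing them barely seems worth the trouble.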

main :: IO ()
main = do db <- readDB (fixedLength [5,3,4]) "db_fl.txt" -- parser from Part 1
          print $ map head <$> db                  -- map
          print $ foldl (const . succ) 0 <$> db    -- fold (counts the records)
          print $ filter (odd . length) <$> db     -- filter
          dbMapM_ print db                         -- foreach


2 Responses to “Programming Praxis – Text File Databases: Part 2”

  1. programmingpraxis Says:

    Laziness helps you. In Haskell, when you return a list of records from the port, you get the list one record at a time. If I were to write the equivalent function in Scheme, I would have to store the entire list — that would be inconvenient, and for a very large file, might not even be possible.

  2. Remco Niemeijer Says:

    Granted, but even in C# or Python you could make a generator function that lazily returns records. I would imagine you can do the same thing in Scheme. Just because a language isn’t lazy by default doesn’t mean you can’t do lazy evaluation. Besides, by that metric your current map and filter implementations are already inconvenient, since mapping the identity function or filtering with an always true predicate gives you the entire list of records just the same.
