Eighteen months ago we launched The Haskell Unfolder, a YouTube series where we discuss all things Haskell. From the beginning the goal was to cover as broad a spectrum of topics as possible, as well as showcase the work of many different people in the Haskell world. We’d like to think we succeeded at that; below you will find a summary of the topics covered in the first year and a half, organized roughly by category.
All episodes are live-streamed, and we welcome interactivity through questions or comments submitted in the video chat. If you’d like to catch us live, episodes are generally on Wednesdays at 8:30 pm CEST/CET (18:30 UTC/19:30 UTC), usually once every two weeks. New episodes are announced on YouTube, the Well-Typed blog, Reddit, Discourse, Twitter and Mastodon. Most episodes are roughly half an hour long. The code samples discussed in the episodes are all available in the Unfolder GitHub repository.
If you enjoy the show, a like-and-subscribe is always appreciated, as is sharing the show with other people who might be interested. There is also some Unfolder merch available.1
Episodes
Beginner friendly
unfoldr (episode 1)
In this first eponymous episode we take a look at the unfoldr function
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
and discuss how it works and how it can be used.
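As a simple illustration (our own example, not necessarily the episode’s), unfoldr can be used to count down from a number:

import Data.List (unfoldr)

-- Build the list [n, n-1 .. 1] by repeatedly unfolding the seed until it
-- drops to 0 (or below).
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))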
Composing left folds (episode 5)
Based on a former Well-Typed interview problem, we discuss how to perform multiple simultaneous computations on a text file, gathering some statistics. We use this as an example to discuss issues of performance, lazy evaluation, and elegance. We also take a look at how we can use the foldl library [Gabriella Gonzalez] to write well-performing code without giving up on compositionality.
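To give a flavour of the library (a minimal sketch, not the episode’s exact code): folds from the foldl library compose applicatively, so several statistics can be computed in a single pass over the input:

import qualified Control.Foldl as L

-- Combine two folds applicatively; the input is traversed only once.
mean :: L.Fold Double Double
mean = (/) <$> L.sum <*> L.genericLength

-- Example: L.fold mean [1, 2, 3, 4]  ==  2.5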
Learning by testing (aka “Boolean blindness”, episode 7)
We discuss how Booleans convey little information about the outcome of a test,
and how replacing Booleans by other datatypes that produce a witness of the
success or failure of a test can lead to more robust and therefore better code.
For example, instead of filter
filter :: (a -> Bool) -> [a] -> [a]
we might prefer mapMaybe
mapMaybe :: (a -> Maybe b) -> [a] -> [b]
This idea is known under many names, such as “Learning by testing” [Conor McBride], “Boolean blindness” [Bob Harper] or “Parse, don’t validate” [Alexis King].
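As a small illustration of the idea (our own example): instead of first testing a string with a Boolean predicate and then separately converting it, we return the evidence of a successful parse directly:

import Data.Maybe (mapMaybe)
import Text.Read (readMaybe)

-- readMaybe acts as the test; its result is a witness of success (the parsed
-- number) or failure (Nothing), so nothing needs to be re-checked later.
keepNumbers :: [String] -> [Int]
keepNumbers = mapMaybe readMaybe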
A new perspective on foldl' (episode 19)
We introduce a useful combinator called repeatedly
repeatedly :: (a -> b -> b) -> ([a] -> b -> b)
which captures the concept of “repeatedly executing an action for a bunch of arguments”. We discuss both how to implement this combinator and some of its use cases.
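One possible implementation, as a sketch (the episode discusses the definition and its consequences in more detail):

import Data.List (foldl')

-- Apply the action to each argument in turn, threading the accumulator
-- through with a strict left fold.
repeatedly :: (a -> b -> b) -> ([a] -> b -> b)
repeatedly f as b = foldl' (flip f) b as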
Dijkstra’s shortest paths (episode 20)
We use Dijkstra’s shortest paths algorithm as an example of how one can go about implementing an algorithm given in imperative pseudo-code in idiomatic Haskell. For an alternative take, check out the Monday Morning Haskell [James Bowen] episode on the same topic.
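For reference, here is one compact way such an implementation can look (an illustrative sketch using a Set as the priority queue, not the episode’s code):

import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- vertex -> list of (neighbour, edge weight)
type Graph = Map.Map Int [(Int, Int)]

-- Distances from the source to every reachable vertex.
dijkstra :: Graph -> Int -> Map.Map Int Int
dijkstra g source = go (Set.singleton (0, source)) Map.empty
  where
    go queue dist = case Set.minView queue of
      Nothing -> dist
      Just ((d, v), queue')
        | v `Map.member` dist -> go queue' dist   -- already settled, skip
        | otherwise ->
            let dist'   = Map.insert v d dist
                relax q (w, c)
                  | w `Map.member` dist' = q
                  | otherwise            = Set.insert (d + c, w) q
                queue'' = foldl relax queue' (Map.findWithDefault [] v g)
            in go queue'' dist'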
From Java to Haskell (episode 25)
We translate a gRPC server written in Java (from a blog post by Carl Mastrangelo) to Haskell. We use this as an example to demonstrate some of the conceptual differences between the two languages, but also observe that the end result of the translation looks perhaps more similar to the Java version than one might expect. Unlike most of our episodes, we hope that this one is understandable to any software developer, even people without any previous exposure to Haskell. Of course, we won’t be able to explain everything, but the example used should help to establish an idea of the look and feel of Haskell code, and perhaps convey a bit more about the relationship between the object-oriented and functional programming paradigms.
This episode is not about doing object-oriented programming in Haskell; if you’d like to know more about that, see our blog post on this topic, as well as episode 13 of the Haskell Unfolder on open recursion.
Duality (episode 27)
Duality is the idea that two concepts are “similar but opposite” in some precise sense. The discovery of a duality enables us to use our understanding of one concept to help us understand the dual concept, and vice versa. It is a powerful technique in many disciplines, including computer science. We discuss how we can use duality in a very practical way, as a guiding principle leading to better library interfaces and a tool to find bugs in our code.
This episode focuses on design philosophy rather than advanced Haskell concepts, and should consequently be of interest to beginners and advanced Haskell programmers alike (we do not use any Haskell beyond Haskell 2010). Indeed, the concepts apply in other languages also (but we assume familiarity with Haskell syntax).
Episode 24 on generic folds and unfolds discusses another example of duality often exploited in Haskell, though that application is perhaps somewhat more advanced.
Solving tic-tac-toe (episode 32)
We develop an implementation of a simple game from scratch: tic-tac-toe. After having implemented the rules, we show how to actually solve the game and allow optimal play by producing a complete game tree and using a naive minimax algorithm for evaluating states.
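To illustrate the evaluation step (a minimal sketch over a generic game tree, not the episode’s actual types):

-- A game tree: a position together with the subtrees for all possible moves.
data GameTree pos = Node pos [GameTree pos]

-- Naive minimax: leaves are scored directly; inner nodes take the best score
-- for the player to move, alternating between maximising and minimising.
minimax :: (pos -> Int) -> Bool -> GameTree pos -> Int
minimax score _          (Node pos []) = score pos
minimax score maximising (Node _ children)
  | maximising = maximum (map (minimax score False) children)
  | otherwise  = minimum (map (minimax score True) children)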
Generating visualizations with diagrams (episode 33)
We look at the diagrams package [Brent Yorgey], which provides a domain-specific language embedded into Haskell for describing all sorts of pictures and visualisations. Concretely, we visualise the game tree of tic-tac-toe that we computed in episode 32, with the goal of producing a picture similar to the one in xkcd “Tic-Tac-Toe”. However, this episode is understandable without having watched the previous episode.
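For a first impression of the library (a minimal sketch assuming the diagrams-svg backend, unrelated to the game-tree picture):

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine

-- A circle next to a square, rendered via the SVG command-line wrapper.
main :: IO ()
main = mainWith (circle 1 ||| square 2 :: Diagram B)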
Reasoning
Laws (episode 8)
Many of Haskell’s abstractions come with laws; well-known examples include the Functor type class with the functor laws and the Monad type class with the monad laws, but this is not limited to type classes; for example, lenses come with a set of laws, too. To people without a mathematical background such laws may seem intimidating; how would one even start to think about proving them for our own abstractions? We discuss examples of these laws, show how to prove them, and discuss common techniques for such proofs, including induction. We also discuss why these laws matter: what goes wrong when they do not hold?
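For concreteness, these are the two functor laws referred to above (equational properties that GHC itself does not verify):
fmap id == id
fmap (g . f) == fmap g . fmap f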
Parametricity (episode 12)
We look at the concept of parametricity, also known as “theorems for free”. The idea of parametricity is that you can make non-trivial statements about polymorphic functions simply by looking at their types, and without knowing anything about their implementation. We focus on the intuition, not the theory, and explain how parametricity can generally help with reasoning about programs. We also briefly talk about how parametricity can help with property based testing [J.P. Bernardy, P. Jansson and K. Claessen].
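As an example of such a free theorem (a standard one, not necessarily the one used in the episode): any function f :: [a] -> [a] can only rearrange, drop or duplicate list elements, since it cannot inspect them; consequently, for every function g we have
map g . f == f . map g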
For a more in-depth discussion of the theory of parametricity, refer to the Parametricity Tutorial part 1 and part 2 on the Well-Typed blog.
Performance and optimization
In addition to the Unfolder episodes mentioned in this section, Well-Typed offers a course on Performance and Optimisation.
GHC Core (episode 9)
After parsing, renaming and type checking, GHC translates Haskell programs into its internal Core language. It is Core where most optimisations happen, so in order to get a better idea of what your program gets compiled to, being able to read and understand Core code is quite useful. Core is in essence a drastically simplified and more explicit version of Haskell. We look at a number of simple Haskell programs and see how they get represented in Core.
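As an illustration of what this looks like in practice (the exact output depends on the GHC version; the Core shown is approximate):

-- Compile with, for example:  ghc -O -ddump-simpl -dsuppress-all Example.hs
add1 :: Int -> Int
add1 x = x + 1

-- In Core the boxing of Int becomes explicit; the output is roughly:
--   add1 = \ x -> case x of { I# x# -> I# (+# x# 1#) }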
foldr-build fusion (episode 22)
When composing several list-processing functions, GHC employs an optimisation called foldr-build fusion, sometimes also known as short-cut fusion or deforestation [A. Gill, J. Launchbury, S. Peyton Jones]. Fusion combines functions in such a way that intermediate lists can often be eliminated completely. We look at how this optimisation works, and at how it is implemented in GHC: not as built-in compiler magic, but rather via user-definable rewrite rules.
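The essence of the technique, as a sketch (GHC’s real definitions and rewrite rules live in the base library):

import GHC.Exts (build)

-- A “good producer” expressed via build: it abstracts over cons and nil,
-- so a consumer can later be substituted in directly.
upTo :: Int -> [Int]
upTo n = build (\cons nil ->
  let go i = if i > n then nil else i `cons` go (i + 1)
  in go 1)

-- A “good consumer” expressed via foldr. The rewrite rule
--   foldr k z (build g)  ==>  g k z
-- lets GHC fuse the two, so no intermediate list is ever constructed.
total :: Int -> Int
total n = foldr (+) 0 (upTo n)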
Specialisation (episode 23)
Overloaded functions are common in Haskell, but they come with a cost. Thankfully, the GHC specialiser is extremely good at removing that cost. We can therefore write high-level, polymorphic programs and be confident that GHC will compile them into very efficient, monomorphised code. In this episode, we demystify the seemingly magical things that GHC is doing to achieve this.
This episode is hosted by a guest speaker, Finley McIlwaine, who has also written two blog posts on this topic: Choreographing a dance with the GHC specializer (Part 1) and part 2.
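As a small example of nudging the specialiser explicitly (the episode and the blog posts discuss when this is and is not necessary):

-- An overloaded function; the pragma asks GHC to also generate a copy
-- specialised (monomorphised) to Int.
sumSquares :: Num a => [a] -> a
sumSquares = sum . map (\x -> x * x)
{-# SPECIALISE sumSquares :: [Int] -> Int #-}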
Types
If you are interested in the topics around Haskell’s advanced type system, you might also be interested in the Well-Typed course on the Haskell Type System.
Quantified constraints (episode 2)
We discuss the QuantifiedConstraints language extension, which was introduced in the paper Quantified Class Constraints [G. Bottu, G. Karachalias, T. Schrijvers, B. Oliveira and P. Wadler]. We also briefly talk about the problems of combining quantified constraints with type families, as also discussed in GHC ticket #14860.
We assume familiarity with type classes. An understanding of type families is helpful for a part of the episode, but is not a requirement.
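A small taste of the extension (a minimal sketch; the episode goes considerably further):

{-# LANGUAGE QuantifiedConstraints, RankNTypes #-}

-- The quantified constraint says: whenever the element type is showable, so
-- is the container. Together with Show b this yields Show (f b).
showContainer :: (forall a. Show a => Show (f a), Show b) => f b -> String
showContainer = show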
Injectivity (episode 3)
We discuss in what way parameterised datatypes are injective and type families are not. We also discuss injective type families, a relatively recent extension based on the paper Injective Type Families [J. Stolarek, S. Peyton Jones, R. Eisenberg].
We assume familiarity with Haskell fundamentals, in particular datatypes. Knowledge of type families is certainly helpful, but not required.
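As a taste of what injective type families look like (a minimal sketch):

{-# LANGUAGE TypeFamilies, TypeFamilyDependencies #-}

-- The annotation “r -> a” declares the family injective: the result
-- determines the argument, so GHC can infer a from F a.
type family F a = r | r -> a where
  F Int  = Bool
  F Bool = Int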
Computing type class dictionaries (episode 6)
A function with a Show a constraint
f :: Show a => ...
wants evidence that type a has a Show instance. But what if we want to return such evidence from a function?
f :: ..... -> .. Show a ..
We discuss what the signature of such a function looks like, how we can compute such constraints, and when we might want to.
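One common shape such an answer takes (a sketch using a hand-rolled Dict; the constraints package offers a ready-made version):

{-# LANGUAGE ConstraintKinds, GADTs #-}

-- Dict reifies a constraint as a first-class value.
data Dict c where
  Dict :: c => Dict c

-- Compute Show evidence for a pair from evidence for its components.
pairDict :: Dict (Show a) -> Dict (Show b) -> Dict (Show (a, b))
pairDict Dict Dict = Dict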
Higher-kinded types (episode 14)
We take a look at the common design pattern where we abstract all the fields of a record type over a type constructor which can then be instantiated to the identity to get the original record type back, but also to various other interesting type constructors.
data Episode f = MkEpisode {
      number   :: f Int
    , title    :: f Text
    , abstract :: f Text
    }
We look at a few examples, and discuss how common operations on such types naturally lead to the use of higher-rank polymorphic types.
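For instance (a small sketch of how the same record can be instantiated in different ways):

import Data.Functor.Identity (Identity)
import Data.Text (Text)

type Complete = Episode Identity   -- every field present
type Partial  = Episode Maybe      -- every field optional, e.g. while parsing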
There are a number of packages on Hackage that implement (variations on) this pattern: hkd [Edward Kmett], barbies [Daniel Gorin], rel8 [Oliver Charles], beam [Travis Athougies], as well as our own package sop-core.
Computing constraints (episode 18)
Sometimes, for example when working with type-level lists, you have to compute with constraints. For example, you might want to say that a constraint holds for all types in a type-level list:
type All :: (Type -> Constraint) -> [Type] -> Constraint
type family All c xs :: Constraint where
..
We explore this special case of type-level programming in Haskell. We also revisit type class aliases and take a closer look at exactly how and why they work.
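One way such a definition can be completed, as a sketch:

{-# LANGUAGE ConstraintKinds, DataKinds, TypeFamilies, TypeOperators #-}

import Data.Kind (Constraint, Type)

-- Recurse over the type-level list, collecting the constraints in a tuple.
type family All (c :: Type -> Constraint) (xs :: [Type]) :: Constraint where
  All c '[]       = ()
  All c (x ': xs) = (c x, All c xs)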
Type families and overlapping instances (episode 28)
We discuss a programming technique which allows us to replace overlapping instances with a decision procedure implemented using type families. The result is a bit more verbose, but arguably clearer and more flexible.
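A condensed sketch of the technique (illustrative names, not the episode’s example): a closed type family computes a type-level Bool, and an auxiliary class dispatches on the result instead of relying on overlapping instances:

{-# LANGUAGE DataKinds, FlexibleContexts, FlexibleInstances, KindSignatures,
             MultiParamTypeClasses, ScopedTypeVariables, TypeFamilies,
             UndecidableInstances #-}

import Data.Kind (Type)
import Data.Proxy (Proxy (..))

-- The decision procedure: is the type a list?
type family IsList (a :: Type) :: Bool where
  IsList [x] = 'True
  IsList a   = 'False

-- Auxiliary class indexed by the outcome of the decision.
class Describe' (flag :: Bool) a where
  describe' :: Proxy flag -> a -> String

instance Show x => Describe' 'True [x] where
  describe' _ xs = "a list: " ++ show xs

instance Describe' 'False a where
  describe' _ _ = "not a list"

-- The user-facing class simply dispatches on IsList a.
class Describe a where
  describe :: a -> String

instance Describe' (IsList a) a => Describe a where
  describe = describe' (Proxy :: Proxy (IsList a))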
The type of runST (episode 30)
In Haskell, the ST type offers a restricted subset of the IO functionality: it provides mutable variables, but nothing else. The advantage is that we can use mutable storage locally, because unlike IO, ST allows us to escape from its realm via the function runST. However, runST has a so-called rank-2 type:
runST :: (forall s. ST s a) -> a
We discuss why this seemingly complicated type is necessary to preserve the safety of the operation.
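A small example of that local use of mutable state (a sketch; the episode focuses on why the forall s in the type is essential):

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure from the outside, despite using a mutable accumulator internally.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc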
Testing
falsify (episode 4)
We discuss falsify, our new library for property based testing in Haskell, inspired by the Hypothesis [David MacIver] library for Python.
For a more in-depth discussion, see also the blog post falsify: Hypothesis-inspired shrinking for Haskell or the paper falsify: Internal shrinking reimagined for Haskell [Edsko de Vries].
Testing without a reference (episode 21)
The best case scenario when testing a piece of software is when we have a reference implementation to compare against. Often, however, such a reference is not available, raising the question of how to test a function if we cannot verify what that function computes exactly. We consider how to define properties to verify the implementation of Dijkstra’s shortest path algorithm we discussed in episode 20; you may wish to watch that episode first, but it’s not required: we mostly treat the algorithm as a black box for the sake of testing it.
We can only scratch the surface here; for an in-depth discussion of this topic, we highly recommend How to Specify It!: A Guide to Writing Properties of Pure Functions [John Hughes].
Exceptions, annotations and backtraces (episode 29)
Version 9.10 of GHC introduces an extremely useful new feature: exception annotations and automatic exception backtraces. This new feature, four years in the making, can be a life-saver when debugging code and has not received nearly as much attention as it deserves. We therefore give an overview of the changes and discuss how we can take advantage of them.
Note: in the episode we mention that part 4 of the proposal, on WhileHandling, was still under discussion; it has since been accepted.
Debugging and preventing space leaks with nothunks (episode 31)
Debugging space leaks can be one of the more difficult aspects of writing professional Haskell code. An important source of space leaks is unevaluated thunks in long-lived application data; we take a look at how we can take advantage of the nothunks library to make debugging and preventing these kinds of leaks significantly easier.
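To give an idea of the workflow (a minimal sketch assuming the nothunks package; in practice the check would typically live inside a test or an assertion):

{-# LANGUAGE DeriveAnyClass, DeriveGeneric #-}

import GHC.Generics (Generic)
import NoThunks.Class (NoThunks, unsafeNoThunks)

-- Long-lived state in which we never want thunks to accumulate.
data Stats = Stats { count :: !Int, label :: String }
  deriving (Generic, NoThunks)

-- unsafeNoThunks reports the first unexpected thunk it finds, if any.
checkStats :: Stats -> Bool
checkStats s = case unsafeNoThunks s of
  Nothing -> True    -- no hidden thunks
  Just _  -> False   -- found a thunk; a test would report the details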
Programming techniques
generalBracket (episode 10)
Exception handling is difficult, especially in the presence of asynchronous exceptions. In this episode we revise the basics of bracket and why it’s so important. We then discuss its generalisation generalBracket and its application in monad stacks.
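As a reminder of the basic pattern (a standard bracket example, not specifically the episode’s):

import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hGetContents', openFile)

-- Acquire, use, release: the handle is closed even if an exception
-- (including an asynchronous one) interrupts the body.
readWholeFile :: FilePath -> IO String
readWholeFile path = bracket (openFile path ReadMode) hClose hGetContents'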
Open recursion (episode 13)
Open recursion is a technique for defining objects in Haskell whose behaviour can be adjusted after they have been defined. It can be used to do some form of object-oriented programming in Haskell, and is also an interesting technique in its own right. Part of the episode is based on the paper The Different Aspects of Monads and Mixins [B. Oliveira].
If you’d like to step through the evaluation of the examples discussed in the episode yourself, you can find them in the announcement of the episode. For a more in-depth discussion of using open recursion for object oriented programming in Haskell, see the blog post Object Oriented Programming in Haskell.
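The core idea in a few lines (a minimal sketch, not the episode’s examples):

import Data.Function (fix)

-- An “open” definition: the recursive call goes through the extra parameter,
-- so the behaviour can still be adjusted before we tie the knot.
factOpen :: (Int -> Int) -> Int -> Int
factOpen self n = if n <= 1 then 1 else n * self (n - 1)

-- Tie the knot unchanged ...
fact :: Int -> Int
fact = fix factOpen

-- ... or override one case first, without touching factOpen.
factPatched :: Int -> Int
factPatched = fix (\self n -> if n == 3 then 0 else factOpen self n)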
Interruptible operations (episode 15)
In episode 10 on generalBracket
we
discussed asynchronous exceptions: exceptions that can be thrown to a thread at
any point. In that episode we saw that correct exception handling in the
presence of asynchronous exceptions relies on carefully controlling precisely
when they are delivered by masking (temporarily postponing) asynchronous
exceptions at specific points. However, even when asynchronous exceptions are
masked, some specific instructions can still be interrupted by asynchronous
exceptions (technically, these are now synchronous). We discuss how this works,
why it is important, and how to take interruptibility into account.
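A sketch of the kind of situation involved (illustrative code, not from the episode): even inside mask, a blocking operation such as takeMVar remains interruptible, so cleanup has to be arranged so that an interruption there cannot leak the resource:

import Control.Concurrent.MVar (MVar, putMVar, takeMVar)
import Control.Exception (mask, onException)

withLock :: MVar () -> IO a -> IO a
withLock lock body = mask $ \restore -> do
  takeMVar lock  -- interruptible: an async exception here is fine, nothing is held yet
  r <- restore body `onException` putMVar lock ()
  putMVar lock ()
  return r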
Monads and deriving-via (episode 16)
DerivingVia is a GHC extension based on the paper Deriving Via: or, How to Turn Hand-Written Instances into an Anti-Pattern [B. Blöndal, Andres Löh, R. Scott]. We discuss how this extension can be used to capture rules that relate type classes to each other. As a specific example, we discuss the definition of the Monad type class: ever since this definition was changed back in 2015 in the Applicative-Monad Proposal, instantiating Monad for a new datatype requires quite a bit of boilerplate code. By making the relation between “classic monads” and “new monads” explicit and using deriving-via, we can eliminate the boilerplate.
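For readers new to the extension, here is a tiny, unrelated DerivingVia example (not the episode’s monad construction), just to show the mechanism:

{-# LANGUAGE DerivingVia #-}

import Data.Monoid (Sum (..))

-- Reuse the Semigroup and Monoid instances of Sum Int by coercing through it.
newtype Score = Score Int
  deriving (Semigroup, Monoid) via Sum Int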
This episode was inspired by a Mastodon post by Martin Escardo.
Circular programs (episode 17)
A circular program is a program that depends on its own result. It may be surprising that this works at all, but laziness makes it possible if output becomes available sooner than it is required. We take a look at several examples of circular programs: the classic yet somewhat contrived RepMin problem [Richard Bird], the naming of bound variables in a lambda expression [E. Axelsson and K. Claessen], and breadth-first labelling [G. Jones and J. Gibbons].
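For reference, the classic RepMin program in a few lines (a standard formulation, not necessarily the episode’s exact code):

data Tree = Leaf Int | Node Tree Tree

-- Replace every leaf by the overall minimum of the tree, in a single pass:
-- the minimum m produced by the traversal is fed back in as an input.
repMin :: Tree -> Tree
repMin t = t'
  where
    (m, t') = go t
    go (Leaf x)   = (x, Leaf m)
    go (Node l r) = let (ml, l') = go l
                        (mr, r') = go r
                    in (min ml mr, Node l' r')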
Generic (un)folds (episode 24)
We connect back to the very beginning of the Haskell Unfolder and talk about unfolds and folds. But this time, not only on lists, but on a much wider class of datatypes, namely those that can be written as a fixed point of a functor.
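The key definitions underlying this generalisation (a compact sketch):

-- The fixed point of a functor, with the generic fold (catamorphism)
-- and unfold (anamorphism) over it.
newtype Fix f = Fix (f (Fix f))

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg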
Variable-arity functions (episode 26)
We take a look at how one can use Haskell’s class system to encode functions that take a variable number of arguments, and also discuss some examples where such functions can be useful.
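A classic instance of the encoding (a minimal sketch, not necessarily the episode’s example):

{-# LANGUAGE FlexibleInstances #-}

-- sumOf accepts any number of Int arguments; the class dispatches on the
-- result type to decide whether to keep accumulating or to stop.
class SumRes r where
  sumOf' :: Int -> r

instance SumRes Int where
  sumOf' = id

instance SumRes r => SumRes (Int -> r) where
  sumOf' acc n = sumOf' (acc + n)

sumOf :: SumRes r => r
sumOf = sumOf' 0

-- Example: sumOf 1 2 3 :: Int  evaluates to 6.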
Community
Haskell at ICFP (episode 11)
We chat about ICFP (the International Conference on Functional Programming), the Haskell Symposium, and HIW (the Haskell Implementors’ Workshop) from 4-9 September 2023 in Seattle. We highlight a few select papers/presentations from these events:
- HasChor: Functional Choreographic Programming for All [G. Shen, S. Kashiwa, L. Kuper]
- An Exceptional Actor System [P. Redmond, L. Kuper]
- GHC Plugin for Setting Breakpoints [A. Allen]
Going forward
Of course this is only the beginning. The Haskell Unfolder continues, with the next episode airing next week (October 16). If you have any feedback on the Unfolder, or if there are specific topics you’d like to see covered, we’d love to hear from you! Feel free to email us at info@well-typed.com or leave a comment below any of the videos.
Well-Typed makes no money off the Unfolder merch sales; your money goes straight to Spreadshirt.↩︎