Revision as of 05:21, 14 June 2011


1 Introduction

This page is intended as a brief overview of delimited continuations and related constructs, and how they can be used in Haskell. It uses the library CC-delcont as a vehicle for doing so, but the examples should be general enough that, if you have another implementation, they should be relatively straightforward to port (whenever possible, I have endeavored not to use the operators on abstract prompt and sub-continuation types from CC-delcont, instead using the more typical, functional operators).

2 The basics

2.1 Undelimited continuations

If you've taken university courses in computer science, or done much investigation of language design, you've probably encountered continuations before. The author first recalls learning about them in a class on said subject, where they were covered very briefly, and it was mentioned (without proof; and no proof will be provided here) that they could be used as a basis upon which all control flow operators could be built. At the time, they seemed rather abstract and unwieldy. Perhaps they could be used to implement any more common control flow pattern, but why bother, when, as far as language implementation concerns go, it's easier to implement (and understand) most common control flow directly than it is to implement continuations?

As far as usage goes, continuations are probably most closely associated with Scheme and its call-with-current-continuation function (referred to from now on by the name of its Haskell counterpart, callCC), although many other languages have them (undelimited continuations for Haskell are provided by the Cont monad and the ContT transformer). They're often regarded as difficult to understand, as their use can cause very complex control flow patterns (much like GOTO, although more sophisticated), though reduced to their basics, they aren't that hard to grasp.

A continuation of an expression is, in a loose sense, 'the stuff that happens after the expression.' An example to refer to may help:

m >>= f >>= g >>= h

Here we have an ordinary monadic pipeline. A computation m is run, and its result is fed into f, and so on. We might ask what the continuation of 'm' is, the portion of the program that executes after m, and it looks something like:

\x -> f x >>= g >>= h

The continuation takes the value produced by m, and feeds it into 'the rest of the program.' But, the fact that we can represent this using functions as above should be a clue that continuations can be built up using them, and indeed, this is the case. There is a standard way to transform a program written normally (or in a monadic style, as above) into a program in which continuations, represented as functions, are passed around explicitly (known as the CPS transform), and this is what Cont/ContT does.
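To make the CPS transform concrete, here is a minimal sketch with continuations represented as plain functions. The names C, ret, and bind are illustrative only; they are not part of CC-delcont or the standard Cont monad, though Cont is built in essentially this way.

```haskell
-- Continuations as plain functions: a computation of type C r a is
-- given "the rest of the program" (a -> r) and produces the final r.
type C r a = (a -> r) -> r

ret :: a -> C r a
ret a k = k a

bind :: C r a -> (a -> C r b) -> C r b
bind m f k = m (\a -> f a k)

-- the direct-style computation (1 + 2) * 10, written in CPS
pipeline :: C r Int
pipeline = ret 1 `bind` \x ->
           ret (x + 2) `bind` \y ->
           ret (y * 10)

main :: IO ()
main = print (pipeline id)  -- prints 30
```

Running the computation means supplying the trivial continuation id as "the rest of the program".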

However, such a transform would be of little use if the passed continuations were inaccessible (as with any monad), and callCC is just the operator for the job. It will call a function with the implicitly passed continuation, so in:

callCC (\k -> e) >>= f >>= g >>= h

'k' will be set to a function that behaves something like the above '\x -> f x >>= g >>= h'. However, in some sense, it is not an ordinary function, as it will never return to the point where it is invoked. Instead, calling 'k' should be viewed as execution jumping to the point where callCC was invoked, with the entire 'callCC (..)' expression replaced by the value passed to 'k'. So k is not merely a normal function, but a way of feeding a value into an execution context (and this is reflected in its monadic type: a -> Cont r b).

So, what is all this good for? Well, a standard example is that one can use continuations to capture a method of escaping from loops (particularly nested ones), and if you ponder for a while, you might be able to imagine implementing some sort of exception mechanism with them. A simple example is computing the product of a list of numbers:

prod l = callCC (\k -> loop k l)
 where
 loop _ []     = return 1
 loop k (0:_)  = k 0
 loop k (x:xs) = do n <- loop k xs ; return (n*x)

Under normal circumstances, the loop will simply multiply all the numbers. However, if a 0 is detected, there is no need to multiply anything, the answer will always be 0. So, the continuation is invoked, and 0 is returned immediately, without performing any multiplications.
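The early exit can be made visible without any library by hand-rolling callCC over the same bare function representation. This is only a sketch; callCC' below is illustrative and is not the mtl callCC, though it captures the same behavior.

```haskell
-- Continuations as plain functions
type C r a = (a -> r) -> r

-- callCC': pass f an escape function that discards the local
-- continuation and jumps straight back to the capture point.
callCC' :: ((a -> C r b) -> C r a) -> C r a
callCC' f k = f (\a _ -> k a) k

-- multiply a list, escaping immediately when a 0 is seen
prod :: [Int] -> Int
prod l = callCC' (\k -> loop k l) id
 where
  loop _ []     = \c -> c 1
  loop k (0:_)  = k 0
  loop k (x:xs) = \c -> loop k xs (\n -> c (n * x))

main :: IO ()
main = do print (prod [1,2,3,4])  -- 24: all multiplications run
          print (prod [1,2,0,4])  -- 0: k jumps straight out
```

Note how the second clause of loop never builds any multiplications: invoking k throws away the pending continuation c entirely.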

2.2 Delimited continuations

So, continuations (hopefully) seem pretty clear, and at least theoretically useful. Where do delimited continuations come into the picture?

The story (according to the hearsay the author has come across) goes back again to Scheme. As was mentioned earlier, callCC is often associated with it. Another thing closely associated with Scheme (and Lisp in general) is interactive environments in which code can be defined and run (much like our own Hugs and GHCi). Naturally, it would be nice if such environments could themselves be written in Scheme.

However, continuations in Scheme are not implemented as they are in Haskell. In Haskell, continuation-using code is tagged with a monadic type, one must use runCont(T) to run such computations, and the effects can't escape it. In Scheme, continuations are native: all code can capture them, and capturing them captures not 'the rest of the Cont(T) computation,' but 'the rest of the program.' If the interactive loop is written in Scheme, this includes the loop itself, so programs run within the session can affect the session itself.

Now, this might be a minor nit, but it is a nit nonetheless, and luckily for us, it led to the idea of delimited continuations. The idea was, of course, to tag a point at which the interactive loop invoked some sub-program, and then control flow operators such as callCC would only be able to capture a portion of the program up to the marker. To the sub-program, this is all that's of interest anyhow. Such a setup would solve the issue nicely.

However, once one has the ability to create such markers, why not put them in the hands of the programmer? Then, instead of them being able to capture 'the rest of the program's execution,' they would be able to delimit, capture and manipulate arbitrary portions of their programs. And indeed, such operations can be useful.

3 Samples

3.1 Iterators

So, what are delimited continuations good for? Well, suppose we have a binary tree data type like so:

data Tree a = Leaf | Branch a (Tree a) (Tree a)
 
empty = Leaf
singleton a = Branch a Leaf Leaf
 
insert b Leaf = Branch b Leaf Leaf
insert b (Branch a l r)
    | b < a = Branch a (insert b l) r
    | otherwise = Branch a l (insert b r)
 
fold :: (a -> b -> b -> b) -> b -> Tree a -> b
fold f z Leaf = z
fold f z (Branch a l r) = f a (fold f z l) (fold f z r)
 
for :: Monad m => Tree a -> (a -> m b) -> m ()
for t f = fold (\a l r -> l >> f a >> r) (return ()) t

Now, we have a fold over our data type, and as shown, we can therefore write a monadic iteration function 'for' over it (this is actually done for arbitrary data types in Data.Foldable). The fold is a fine method of traversing the data structure to operate on elements in most cases. However, what if we wanted something more like an iterator object, which somehow captured the traversal of the tree, remembering what element we're currently at, and which comes next?

Well, it turns out one can build just such an object using continuations. It is indeed possible to build it using undelimited continuations, but it's rather complex to do so (I'll not include code that does, as I don't feel like figuring out all the details). However, it turns out it's easy using delimited continuations:

data Iterator m a = Done | Cur a (m (Iterator m a))
 
begin :: MonadDelimitedCont p s m => Tree a -> m (Iterator m a)
begin t = reset $ \p ->
            for t (\a ->
                shift p (\k -> return (Cur a (k $ return ())))) >> return Done
 
current :: Iterator m a -> Maybe a
current Done      = Nothing
current (Cur a _) = Just a
 
next :: Monad m => Iterator m a -> m (Iterator m a)
next Done      = return Done
next (Cur _ i) = i
 
finished :: Iterator m a -> Bool
finished Done = True
finished _    = False

So, clearly, Iterator is the type of iterators over a tree. current, next and finished are some utility functions for operating on them. The interesting work is done in the begin function.

There are two delimited control operators in play here. First is reset, which is a way to place a delimiter around a computation. The term 'p' is simply a way to reference that delimiter; the library I'm working with allows for many named delimiters to exist, and for control operators to specify which delimiters they're working with (so a control operator may capture the continuation up to p, even if it runs into a delimiter q sooner, provided p /= q).

The other operator is shift, which is used to capture the delimited continuation. In many ways, it's like callCC, but with an important difference: it aborts the captured continuation. When callCC is called on a function f, if f returns normally, execution will pick up from just after the callCC. However, when shift is called, the continuation between the call and the enclosing prompt is packaged up (into 'k' here), and passed to the function, and a normal return will return to the place where the delimiter was set, not where shift was called.

With this in mind, we can begin to analyze the 'begin' function. First, it delimits a computation with the delimiter 'p'. Next, it begins to loop over the tree. For each element, we use 'shift' to capture "the rest of the loop", calling it 'k'. We then package that, and the current tree element, into an Iterator object, and return it. Since the shift has aborted the rest of the loop (for the time being), it returns to where 'reset' was called, and the function returns the iterator object (wrapped in a monad, of course).

The main remaining piece of interest is what happens when next goes to get the next element of the traversal. When this happens, 'k $ return ()' is executed, which invokes the captured continuation (with the value (), because the loop doesn't take the return value of the traversal function into account anyway). This, essentially, re-enters the loop. If there is a next element, the traversal function is called with it, shift once again captures 'the rest of the loop' (from a later point than before, though), and an iterator object with the new current element and continuation is returned. If there are no new elements, control passes out of the loop to the following computation, which is, in this case, 'return Done'. So in either case, an Iterator object is the result, and the types work out.
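The same inversion can be sketched in a pure setting, with shift hand-rolled over bare functions (shiftC and the other helpers below are illustrative names, not CC-delcont operators, and there is only one implicit prompt, so no prompt argument is needed). With no effects involved, the inverted loop degenerates into a lazy list, which is one way to see why old iterators can be reused safely.

```haskell
data Tree a     = Leaf | Branch a (Tree a) (Tree a)
data Iterator a = IDone | Cur a (Iterator a)

type C r a = (a -> r) -> r

ret :: a -> C r a
ret a c = c a

bind :: C r a -> (a -> C r b) -> C r b
bind m f c = m (\a -> f a c)

-- capture "the rest of the loop" up to the (implicit, single) delimiter
shiftC :: ((a -> r) -> C r r) -> C r a
shiftC f c = f c id

-- in-order iteration written in CPS
forT :: Tree a -> (a -> C r ()) -> C r ()
forT Leaf _           = ret ()
forT (Branch a l r) f =
  forT l f `bind` \_ -> f a `bind` \_ -> forT r f

begin :: Tree a -> Iterator a
begin t =
  (forT t (\a -> shiftC (\k -> ret (Cur a (k ())))) `bind` \_ -> ret IDone) id

collect :: Iterator a -> [a]
collect IDone     = []
collect (Cur a i) = a : collect i

main :: IO ()
main = print (collect (begin (Branch 2 (Branch 1 Leaf Leaf)
                                       (Branch 3 Leaf Leaf))))
-- prints [1,2,3]
```

Each suspension packages the current element together with the rest of the loop; re-entering one suspension never disturbs another, just as with the monadic iterators above.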

We can test our iterator like so:

main :: IO ()
main = runCCT $ do t <- randomTree 10
                   i <- begin t
                   doStuff i
 where
 doStuff i
    | finished i = return ()
    | otherwise  = do i'  <- next i
                      i'' <- next i
                      liftIO $ print (fromJust $ current i :: Int)
                      doStuff i'
 
randomTree n = rt empty n
 where
 rt t 0 = return t
 rt t n = do r <- liftIO randomIO
             rt (insert r t) (n - 1)

The output of which might go something like:

  -1937814587
  -1171184756
  -1068642732
  -741588272
  -553872051
  -499564662
  -421862876
  -59900888
  315891595
  1868487875

The example shows one possibly interesting property: one can re-use old iterators without affecting new ones. In this case, we call 'next' on the same iterator twice, but it doesn't advance the iterator twice. Our iterators behave like an ordinary functional data structure, even though they're built out of somewhat out-of-the-ordinary components.

3.2 Breadth-first Traversal

This example is an adaptation of an example from a set of slides by Olivier Danvy, Delimited-Continuations Blues. It involves the traversal of a binary tree, so let's first define such a type:

data Tree a = Node a (Tree a) (Tree a) | Leaf a
 
t = Node 1 (Node 2 (Leaf 3)
                   (Node 4  (Leaf 5)
                            (Leaf 6)))
           (Node 7 (Node 8  (Leaf 9)
                            (Leaf 10))
                   (Leaf 11))
 
toList (Leaf i) = [i]
toList (Node a t1 t2) = a : toList t1 ++ toList t2

toList is a pre-order, depth-first traversal, and t is ordered so that such a traversal yields [1 .. 11]. Depth-first traversals are clearly the easiest to write in a language like Haskell, since recursive descent on the trees can be used. To perform a breadth-first traversal, one would likely keep a list of sub-trees at a given level, and pass through the list at each level, visiting roots and producing a new list of the children one level down, which is a bit more bookkeeping. However, it turns out that delimited control allows one to write a breadth-first traversal in a recursive descent style similar to the depth-first traversal (modulo the need for monads):
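For contrast, here is a sketch of the conventional bookkeeping version just described: keep a list of the sub-trees at the current level, visit their roots, and descend one level at a time (bfList is an illustrative name, not from the slides).

```haskell
data Tree a = Node a (Tree a) (Tree a) | Leaf a

-- level-by-level traversal via an explicit list of pending sub-trees
bfList :: Tree a -> [a]
bfList t0 = go [t0]
 where
  go []  = []
  go lvl = map root lvl ++ go (concatMap children lvl)

  root (Leaf i)     = i
  root (Node i _ _) = i

  children (Leaf _)     = []
  children (Node _ l r) = [l, r]

t :: Tree Int
t = Node 1 (Node 2 (Leaf 3)
                   (Node 4  (Leaf 5)
                            (Leaf 6)))
           (Node 7 (Node 8  (Leaf 9)
                            (Leaf 10))
                   (Leaf 11))

main :: IO ()
main = print (bfList t)  -- [1,2,7,3,4,8,11,5,6,9,10]
```

The recursion here is over levels rather than over the tree's own structure, which is exactly the extra bookkeeping the delimited-control version avoids.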

visit :: MonadDelimitedCont p s m => p [a] -> Tree a -> m ()
visit p = visit'
 where
 visit' (Leaf i)       = control p $ \k -> (i:) `liftM` k (return ())
 visit' (Node i t1 t2) = control p $ \k -> do a <- k (return ())
                                              visit' t2
                                              visit' t1
                                              (i:) `liftM` return a
 
bf :: MonadDelimitedCont p s m => Tree a -> m [a]
bf t = reset $ \p -> visit p t >> return []

And, a quick check shows that 'bf t' yields:

   [5,6,9,10,3,4,8,11,2,7,1]

(Note that in this example, since elements are always pre-pended, the element visited first will be last in the list, and vice versa; so this is a pre-order breadth-first traversal).

So, how exactly does this work? As the slides say, the key idea is to "return before recursively traversing the subtrees." This is accomplished through the use of the delimited control operator 'control.' At the Node stage of a traversal, control is used to capture the sub-continuation that comes after said Node (which is, effectively, the traversal over the rest of the nodes at the same level). However, instead of descending depth-first style, that sub-continuation is immediately invoked, the result being called a. Only after that are the sub-trees descended into.

It should be noted, also, that this particular example can be used to display a difference between 'shift' (the so-called 'static' delimited operator) and 'control' (which is one of the 'dynamic' operators). The difference between the two is that in 'shift p (\k -> e)' calls to k are delimited by the prompt p, whereas in control, they are not (in both, e is). This has important consequences. For instance, at some point in a traversal an evaluation may look something like:

delimited (visit' t2 >> visit' t1)

Which, using some simplified notation/traversal, expands to:

delimited (control (\k -> k () >> visit' t22 >> visit' t21)
  >> control (\k -> k () >> visit' t12 >> visit' t11))

Which, due to the effects of control turns into:

delimited ((control (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)
 
 ==>
 
delimited (visit' t22 >> visit' t21 >> visit' t12 >> visit' t11)

In other words, using 'control' ends up building and executing a sequence of traversals over a whole level, after the actions for the level above have been performed by the 'k ()' calls. The control operators of the lower level are then free to close over, and manipulate, all the visitations on their level. This is why the result is a breadth-first traversal. However, replacing control with shift, we get:

delimited (visit' t2 >> visit' t1)
 
  ==>
 
delimited ((shift (\k -> k () >> visit' t22 >> visit' t21))
  >> (shift (\k -> k () >> visit' t12 >> visit' t11)))
 
  ==>
 
delimited (delimited (shift (\k -> k () >> visit' t12 >> visit' t11)) >> visit' t22 >> visit' t21)

And already we can see a difference. The sub-traversal of t1 is now isolated, and control effects (via shift, at least) therein cannot affect the sub-traversal of t2. So, control effects no longer affect an entire level of the whole tree; instead they are localized to a given node and its descendants. In such a case, we end up with an ordinary depth-first traversal (although the sub-continuations allow the visitation of each node to look a bit different from toList, and since we're always pre-pending as we get to a node, the results are reversed compared to toList).

In any case, the desired result has been achieved: A slightly modified recursive descent traversal has allowed us to express breadth-first search (and depth-first search in the same style is a matter of substitution of control operators) without having to do the normal list-of-sub-trees sort of bookkeeping (although the added weight of working with delimited control may more than outweigh that).

For a more in-depth discussion of the differences between shift, control and other, similar operators, see Shift to Control, cited below.

3.3 Resumable Parsing

Our next example concerns a Haskell version of a post to the OCaml mailing list. The translation was originally given on the haskell-cafe mailing list, and complete code and some additional discussion can be found there.

The problem is similar to the above iterator example. Specifically, we are in need of a parser that can take fragments of input at a time, suspending for more input after each fragment, until such time as it can be provided. However, there are already plenty of fine parsing libraries available, and ideally, we don't want to have to re-write a new library from scratch just to have this resumable parser feature.

As it turns out, delimited continuations provide a fairly straightforward way to have our cake and eat it too in this case. First, we'll need a data type for the resumable parser.

data Request m a = Done a | ReqChar (Maybe Char -> m (Request m a))

Such a parser is either complete, or in a state of requesting more characters. Again, we'll have some convenience functions for working on the data type:

provide :: Monad m => Char -> Request m a -> m (Request m a)
provide _ d@(Done _)  = return d
provide c (ReqChar k) = k (Just c)
 
provideString :: Monad m => String -> Request m a -> m (Request m a)
provideString []     s = return s
provideString (x:xs) s = provide x s >>= provideString xs
 
finish :: Monad m => Request m a -> m (Request m a)
finish d@(Done _)  = return d
finish (ReqChar k) = k Nothing

So, 'provide' feeds a character into a parser, 'provideString' feeds in a string, and 'finish' informs the parser that there are no more characters to be had.

Finally, we need to have some way of suspending parsing and waiting for characters. This is exactly what delimited continuations do for us. The hook we'll use to get control over the parser is through the character stream it takes as input:

toList :: Monad m => m (Maybe a) -> m [a]
toList gen = gen >>= maybe (return []) (\c -> liftM (c:) $ toList gen)
 
streamInvert :: MonadDelimitedCont p s m => p (Request m a) -> m (Maybe Char)
streamInvert p = shift p (\k -> return $ ReqChar (k . return))
 
invertParse :: MonadDelimitedCont p s m => (String -> a) -> m (Request m a)
invertParse parser = reset $ \p -> (Done . parser) `liftM` toList (streamInvert p)

So, 'toList' simply takes a monadic action that may produce a character, and uses it to produce a list of characters (stopping when it sees a 'Nothing'). 'streamInvert' is just such a monadic, character-producing action (given a delimiter). Each time it is run, it captures a sub-continuation (here, 'the rest of the list generation'), and puts it in a Request object. We can then pass around the Request object, and feed characters in as desired (via 'provide' and 'provideString' above), gradually building the list of characters to be parsed.

In the 'invertParse' method, this gradually produced list is fed through a parser (of type String -> a, so it doesn't need to know about the delimited continuation monad we're using), and the output of the parser is packaged in a finished (Done) Request object, so when we finally call 'finish', we will be able to access the results of the parser.
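The same stream inversion can be sketched purely, with shift/reset written directly over functions (there is one implicit prompt, so no prompt argument is needed; all names below are illustrative rather than CC-delcont's). This shows the Request protocol at work without any library:

```haskell
data Request a = Done a | ReqChar (Maybe Char -> Request a)

type C r a = (a -> r) -> r

ret :: a -> C r a
ret a c = c a

bind :: C r a -> (a -> C r b) -> C r b
bind m f c = m (\a -> f a c)

shiftC :: ((a -> r) -> C r r) -> C r a
shiftC f c = f c id

-- suspend, asking the outside world for the next character
askChar :: C (Request a) (Maybe Char)
askChar = shiftC (\k -> ret (ReqChar k))

-- build the input list one requested character at a time
chars :: C (Request a) String
chars = askChar `bind` \mc -> case mc of
          Nothing -> ret []
          Just c  -> chars `bind` \cs -> ret (c : cs)

invert :: (String -> a) -> Request a
invert parser = (chars `bind` (ret . Done . parser)) id

provide :: Char -> Request a -> Request a
provide _ d@(Done _)  = d
provide c (ReqChar k) = k (Just c)

finish :: Request a -> Request a
finish d@(Done _)  = d
finish (ReqChar k) = k Nothing

main :: IO ()
main = case finish (foldl (flip provide) (invert words) "ab cd ef") of
         Done ws -> print ws  -- ["ab","cd","ef"]
         _       -> putStrLn "still waiting for input"
```

Each ReqChar holds "the rest of the list construction" as its continuation, and feeding in a character simply resumes it; finish resumes it with Nothing, letting the parser run on the completed list.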

For this example, the words function suffices as a parser:

gradualParse :: [String]
gradualParse = runCC $ do p1 <- invertParse words
                          p2 <- provideString "The quick" p1
                          p3 <- provideString " brown fox jum" p2
                          p4 <- provideString "ps over the laz" p3
                          p5 <- provideString "y dog" p4 >>= finish
                          p6 <- provideString "iest dog" p4 >>= finish
                          let (Done l1) = p5
                              (Done l2) = p6
                          return (l1 ++ l2)
 
main :: IO ()
main = mapM_ putStrLn gradualParse

And we get output:

 The
 quick
 brown
 fox
 jumps
 over
 the
 lazy
 dog
 The
 quick
 brown
 fox
 jumps
 over
 the
 laziest
 dog

So, the resumable parser works. It will pause at arbitrary places in the parse, even in the middle of tokens, and wait for more input. And one can resume a parse from any point to which a Request pointer is saved without interfering with other resumable parser objects.

(A note: depending on what exactly one wants to do with such parsers, there are a few nits in the above implementation; it doesn't exactly match the semantics of the OCaml parser. For more information, see the linked mailing-list thread, which discusses the issues and their causes, and provides an alternate implementation (changing mostly the parser, not the delimited continuation end) that matches the OCaml version much more closely.)

4 CC-delcont

4.1 Installation

Packages are available on Hackage:

 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/CC-delcont

The library is cabalized, so installation should be as simple as:

 runhaskell Setup.lhs configure
 runhaskell Setup.lhs build
 sudo runhaskell Setup.lhs install

(to install to the default directory, /usr/local/lib on Unix)

5 More information

A Google search for delimited continuations will likely yield plenty of interesting resources on the subject. However, the following resources proved especially useful to the author when he was investigating them:

  • Delimited continuations in operating systems -- This paper provides excellent insight into how delimited continuations can arise as a natural solution/model for real problems, specifically in the context of implementing an operating system.
  • A Monadic Framework for Delimited Continuations -- This is the paper from which the implementation of the above library was derived. It's quite thorough in its explanation of the motivations for the interface, and also has several possible implementations thereof (though CC-delcont uses only one).
  • Delimited Dynamic Binding -- This paper, and related code, served as the basis for the dynamically scoped variable portion of the CC-delcont library. It explains the rationale for having dynamic scoping and delimited control interact in the way they do in the library, and goes through the implementation of dynamic variables in terms of delimited continuations.
  • Shift to control -- This paper explores four different sets of delimited control operators (all of which are implemented in CC-delcont), and their implementation. Though it's not directly relevant to this particular library, it provides some good insight into delimited continuations and their implementation in general.
  • Oleg Kiselyov's continuation page -- Contains plenty of excellent information on delimited continuations and the like (including some of the above papers), including examples of their use in Haskell.