state of the cabal (preprocessors)

Henrik Nilsson nhn at Cs.Nott.AC.UK
Wed Oct 27 14:12:45 EDT 2004


Hi Isaac,
CC Others

 > Right, but I was actually talking about the 'build' step, which is a
 > bit funny... For the build step, we have to do preprocessing still, so
 > that the developer or user of VS can use the code.
 >
 > For the install step, unlike the other systems, we're proposing that
 > we do the preprocessing here rather than at the build step since
 > presumably it would be more convenient to be platform independent at
 > that point?
 >
 > I'm not sure why this is such a great idea.  For every other Haskell
 > implementation, once you build, it's tied to the platform, and you no
 > longer need the build tools.  Why not do the same thing for Hugs?
 >
 > Otherwise, the install step won't be based on the sources produced
 > during the build step, but rather on the unpreprocessed sources.
 > Usually the build step prepares the sources.
 >
 > Also, what if there are other preprocessors that have to be run after
 > cpp?  Are we going to require these to be installed on the target
 > machine too?
 >
 > Sorry if I'm going over ground we've already covered... I am in a
 > training class this week and don't have much internet time.  Feel free
 > to point me to the archive URLs for any of my questions  :)

I'm not sure I quite understand. But maybe it would be useful if I
outline the basic approach we took in the Yale/Yampa build system
in this respect.

One main design goal was that compilers and interpreters should
be treated as similarly as possible. To that end, we decided that
preprocessed sources (all forms of preprocessing: cpp, arrows, ...)
are to an interpreter what object code is to a compiler.
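
To make the analogy concrete, here is a minimal Haskell sketch; the
type and names are purely illustrative, not actual Yale/Yampa or
Cabal code:

    import System.FilePath (replaceExtension)

    -- Illustrative only: what counts as the "built" artefact for a
    -- given Haskell implementation under this principle.
    data HaskellImpl = GHC | Hugs   -- a compiler and an interpreter

    builtArtefact :: HaskellImpl -> FilePath -> FilePath
    builtArtefact GHC  src = replaceExtension src "o"   -- object code
    builtArtefact Hugs src = replaceExtension src "hs"  -- fully preprocessed source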

[We even applied this principle to applications: when building for
an interpreter, we generated a wrapper that invokes the application
(using runhugs in the case of Hugs), playing the role of the linked
executable that a compiler would produce.]
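
For instance, a wrapper generator along the following lines would do.
This is just a sketch; the function name and the exact script our
build system emitted are illustrative:

    import System.Directory (getPermissions, setPermissions, setOwnerExecutable)

    -- Sketch: generate a small shell wrapper that plays the role of
    -- the linked executable when the target is an interpreter (Hugs).
    writeRunhugsWrapper :: FilePath -> FilePath -> IO ()
    writeRunhugsWrapper wrapper mainModule = do
      writeFile wrapper (unlines
        [ "#!/bin/sh"
        , "exec runhugs " ++ mainModule ++ " \"$@\""
        ])
      p <- getPermissions wrapper
      setPermissions wrapper (setOwnerExecutable True p)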

Thus, for both compilers and interpreters, building a library or
an application from a source distribution means that the entire
tool chain has to be available.

In our system, we don't put generated code in a dist/build directory
for this case (possibly a mistake). Things are built "in place", and
then copied to the installation directories.

Cpp-ing for Hugs is a special case and is delayed until the installation
step.

This does not mean that building is a no-op for an interpreter
like Hugs. There can still be quite a bit of building,
e.g. running Happy and Greencard. Moreover, we also run the
arrows preprocessor during the build step (we adopted special
file-name extensions to identify arrowized sources).
But I guess that arrow preprocessing will have to be handled
more like cpp-ing under Cabal.
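
In a Cabal-like setting this might look something like the sketch
below. The file extensions and command lines are only assumptions
for the sake of illustration, not our actual conventions:

    import System.FilePath (takeExtension, replaceExtension)
    import System.Process  (callCommand)

    -- Sketch: choose a preprocessor by file extension during the
    -- build step.  Cpp for Hugs is deliberately left out, since in
    -- our scheme it is deferred to the installation step.
    preprocessForBuild :: FilePath -> IO FilePath
    preprocessForBuild src =
      let out = replaceExtension src "hs"
      in case takeExtension src of
           ".y"  -> callCommand ("happy " ++ src ++ " -o " ++ out) >> return out
           ".as" -> callCommand ("arrowp " ++ src ++ " > " ++ out) >> return out
           _     -> return src   -- plain Haskell: nothing to do yet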

Our idea for handling binary distributions (although we never got
around to implementing it fully) was to "install" to a temporary
directory, and then to wrap up that directory along with a script
(a specially generated Makefile in our case) that could carry out
the real installation later.
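
A rough sketch of that idea follows; the paths and the generated
script are only illustrative (our real system generated a Makefile
with rather more structure):

    import System.Directory (createDirectoryIfMissing, copyFile)
    import System.FilePath  ((</>), takeFileName)
    import System.Process   (callCommand)

    -- Sketch: "install" the build products into a staging directory,
    -- add a generated install script, and pack the whole thing up.
    makeBinaryDist :: [FilePath] -> FilePath -> IO ()
    makeBinaryDist builtFiles prefix = do
      let stage = "dist-stage"
      createDirectoryIfMissing True stage
      mapM_ (\f -> copyFile f (stage </> takeFileName f)) builtFiles
      writeFile (stage </> "install.sh") (unlines
        [ "#!/bin/sh"
        , "mkdir -p " ++ prefix
        , "cp *.hs " ++ prefix      -- the real installation step
        ])
      callCommand ("tar czf bindist.tar.gz " ++ stage)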

Thus, for a compiler, a binary installation is what you would expect.
In particular, no special tools are needed when the end-user installs
on his or her system.

For an interpreter, a "binary distribution" means that the preprocessed
sources end up in the distribution. Again, the end-user then does not
need any special tools for installation. But of course, just as for a
normal binary distribution, the preprocessed sources are likely to be
platform specific.

Thus, the same pros and cons would hold for binary versus source
distributions for both compilers and interpreters. If one wants the
flexibility of choosing the Haskell system and the operating system
platform at installation time, then one needs a source distribution.
If one is interested in installing a library for a particular
Haskell system on a particular platform with the least amount
of fuss (and without a complete tool chain), then one can pick a
"binary" distribution if available.

I hope this at least clarifies the approach we took in the Yale/Yampa
build system. I think it is fairly natural and general, although
possibly a bit naive when it comes to distributions.

Best regards,

/Henrik

-- 
Henrik Nilsson
School of Computer Science and Information Technology
The University of Nottingham
nhn at cs.nott.ac.uk
