[Haskell-cafe] Re: GSoC: Improving Cabal's Test Support

Rogan Creswick creswick at gmail.com
Tue Apr 6 19:43:54 EDT 2010


On Tue, Apr 6, 2010 at 4:03 PM, Gregory Crosswhite
<gcross at phys.washington.edu> wrote:
>
> Rather than starting from scratch, you should strongly consider adapting something like test-framework to this task, as it already has done the heavy work of creating a way to combine tests from different frameworks into a single suite.

I want to second this -- test-framework would be a good place to
start, and you would be able to accomplish quite a lot more during the
summer.  Your proposal addresses (at least!) two different problems:

  * updating cabal so that it can handle the build/test process; and,
  * combining HUnit / QuickCheck / etc. to present a uniform interface.

test-framework and test-runner both address the second problem, and
those solutions can be kept separate, at least for now.  Figuring out
the best way to specify test commands, dependencies, build/execution
order, etc. is going to take some substantial effort, and I think that
should be the first goal of the project.

More comments in-line below...

On Apr 6, 2010, at 3:51 PM, Thomas Tuegel wrote:
>    Package Description File Syntax
>
> The syntax for designating test executables in package description
> files will be based on the existing syntax for describing executables.
> Such a stanza in the hypothetical package's description file would
> look like:
>
>> Test foo-tests
>>    main-is: foo-tests.hs
>>    build-depends: haskell-foo, Cabal, QuickCheck
>
> This example is obviously minimal; this is really an 'Executable'
> stanza by another name, so any options recognized there would also be
> valid here.

Cabal allows for multiple executable sections -- are multiple test
sections allowed? If so, how are they handled when 'cabal test' is
invoked?  If not, will there be any support for running multiple test
suites? (more on this below).

While the test executable could be configured to run different sets of
tests (at runtime? hm.. we may need more flags to 'cabal test'), there
are some situations where it's necessary to build multiple test suites
because of odd library dependencies.  (For example, testing certain
combinations of libraries -- imagine supporting multiple versions of
ghc.)
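For instance, a package might declare one suite per supported compiler.
The stanza and field names below follow the proposal's hypothetical
'Test' syntax; the package, file, and dependency names are made up for
illustration:

```cabal
-- Hypothetical: two test suites with different dependency sets,
-- one per supported ghc version.
Test foo-tests-ghc6.6
    main-is: foo-tests-66.hs
    build-depends: haskell-foo, Cabal, QuickCheck

Test foo-tests-ghc6.8
    main-is: foo-tests-68.hs
    build-depends: haskell-foo, Cabal, QuickCheck
```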

The existing Executable sections may serve the need fine, if we could
specify how to run the tests in a different way.  Perhaps a list of
test commands could be specified instead, e.g.:

> TestCommands: foo-test-ghc6.6,
>               foo-test-ghc6.8,
>               foo-props --all

Anyhow, just food for thought.

> described by 'Executable' stanzas.  With tests enabled, the test
> programs will be executed and their results collected by Cabal during
> the 'test' stage.

Where are the results collected, and in what format? My preference is
to choose a sensible default (dist/test-results?) and allow it to be
overridden in the cabal file.

>> module Distribution.Test where
>>
>> type Name = String
>> type Result = Maybe Bool

I think you're reinventing the wheel a bit here (see comments above
about test-framework).

That aside, Result is still too restrictive.  Ignored tests may well
need justification (why were they not run?).  There may also be
multiple ways to ignore tests, and it isn't clear to me what those
are, or which are important.

I also feel pretty strongly that Result should distinguish between
test failures and tests that failed due to exceptional circumstances.
I don't know of any frameworks that do this in Haskell yet, but it has
proven to be a useful distinction in other languages.
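That distinction could be captured with a small sum type in place of
'Maybe Bool'.  This is only a sketch of the idea; the constructor
names and the 'suiteOk' helper are illustrative assumptions, not part
of the proposal:

```haskell
-- A richer test result: distinguishes ordinary assertion failures
-- from errors (exceptions, broken setup), and records why a test
-- was skipped.  All names here are illustrative.
data Result
  = Pass
  | Fail String     -- assertion failed; message explains how
  | Error String    -- test aborted due to an exceptional condition
  | Skipped String  -- test not run; justification required
  deriving (Eq, Show)

-- Did the whole suite succeed?  Skipped tests do not count as
-- failures, but both Fail and Error do.
suiteOk :: [Result] -> Bool
suiteOk = all ok
  where
    ok Pass        = True
    ok (Skipped _) = True
    ok _           = False
```

A report generator could then format Fail and Error results
differently, and list the justification string for each Skipped test.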

I'm not commenting on the rest of the framework proposal because I
don't think the point of this SoC project is to write another testing
framework.

> The 'cabal test' command will run tests by default, but support two
> other options:
>
>    1.  '--report [file]', which will produce a nicely formatted
> report of the test results stored in the named file, or of the last
> run of the package's test suite if no file is specified, and
>    2.  '--diff file1 file2', which will show the differences between
> the test results stored in two different files.

See my comments above about running multiple test suites and
parameterized test suites.  I think richer parameters are necessary.
(Possibly just a '--pass-through' flag that hands all subsequent
parameters off to the test executable(s).)

--Rogan
