Any remaining test patches?

Johan Tibell johan.tibell at gmail.com
Mon May 23 15:39:45 CEST 2011


On Sat, May 21, 2011 at 4:20 PM, Duncan Coutts
<duncan.coutts at googlemail.com> wrote:
> Here's the equivalent bit of my design (the TestResult is the same):
>
> data TestInstance
>   = TestInstance {
>       run            :: IO TestResult,
>       name           :: String,
>
>       concurrentSafe :: Bool,
>       expectedFail   :: Bool,
>
>       options        :: [OptionDescr],
>       setOption      :: String -> String -> Either String TestInstance
>     }

I cannot think of a straightforward way to implement setOption in the
above design: one would have to "store" the options in the run
closure, rebuilding the whole record on every setOption call.
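Roughly like this (a sketch against the record above; parseOpt and
runProperty are hypothetical framework functions):

-- Sketch: setOption for the record design, accumulating parsed
-- options and rebuilding the record (and hence the run closure).
mkTest :: Property -> [MyOptionType] -> TestInstance
mkTest prop opts = TestInstance
  { run            = runProperty prop opts
  , name           = "my test"
  , concurrentSafe = True
  , expectedFail   = False
  , options        = []  -- OptionDescrs elided
  , setOption      = \k v -> case parseOpt k v of
      Left err  -> Left err
      Right opt -> Right (mkTest prop (opt : opts))  -- rebuild with new option
  }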
A type class approach would instead let the test framework store the
options in extra fields of the type that implements the class, e.g.:

class TestInstance a where
    setOption :: String -> String -> a -> Either String a
    run :: a -> IO TestResult

data MyTestType = MyTestType
    { options  :: [MyOptionType]  -- Test framework specific
    , property :: Property        -- Test framework specific
    }

instance TestInstance MyTestType where
    setOption k v t = case parse k v of
        Left _    -> Left "Failed to parse option"
        Right opt -> Right $ t { options = opt : options t }

    run t = runProperty (property t) (options t)

-- | Convert a framework-specific test into the test type wrapper.
test :: Property -> MyTestType
test prop = MyTestType [] prop
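
For what it's worth, usage would then look something like this (the
option key "max-success" and its value are made up):

-- Hypothetical usage: set an option, then run the configured test.
runExample :: Property -> IO TestResult
runExample prop = case setOption "max-success" "1000" (test prop) of
    Left err -> error err
    Right t  -> run t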

> Minor difference: extra flags for indicating whether the test is safe
> for concurrent execution (e.g. if it's pure or has only local side
> effects) and whether the test case is in fact expected to fail (so
> numbers of expected vs unexpected failures can be reported).

I prefer "exclusive" to "concurrentSafe", as there might be tests that
are concurrency-safe but should still be run in isolation. Not a big
difference in practice, though.

Do we really need expectedFail? It feels quite GHC-specific, and there
are other options, like commenting out the test or using a tags
mechanism (see my reply to your other email).

Here are some other attributes we might want to consider:

* size - How long is this test expected to take? You might want to run
all small- and medium-sized tests on every commit but reserve large and
huge tests for just before a release.

* timeout - The maximum amount of time the test is allowed to run; the
test agent should kill the test after this time has passed. The
timeout could get a default value based on size. Test agents should
probably apply some sort of timeout even if we don't let users specify
it on a per-test basis. A rough sketch of both attributes follows below.
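
To make the size/timeout interaction concrete, here is a minimal
sketch (TestSize, defaultTimeout, runWithTimeout, and the specific
default values are my assumptions, not a proposed API; note that
System.Timeout.timeout takes microseconds):

import System.Timeout (timeout)

-- Sketch: a size attribute with a size-derived default timeout.
data TestSize = Small | Medium | Large | Huge
    deriving (Eq, Ord, Show)

-- Assumed default timeouts in seconds, derived from the test's size.
defaultTimeout :: TestSize -> Int
defaultTimeout Small  = 60
defaultTimeout Medium = 300
defaultTimeout Large  = 900
defaultTimeout Huge   = 3600

-- The test agent kills the test when the timeout expires;
-- Nothing means the test was killed.
runWithTimeout :: TestSize -> IO TestResult -> IO (Maybe TestResult)
runWithTimeout size act = timeout (defaultTimeout size * 1000000) act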

Johan


