qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman michael at snoyman.com
Tue Feb 25 20:38:38 UTC 2014


On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg at gregorycollins.net> wrote:

>
> On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <michael at snoyman.com> wrote:
>
>> But that's only one half of the "package interoperability" issue. I face
>> this first hand on a daily basis with my Stackage maintenance. I spend far
>> more time reporting issues of restrictive upper bounds than I do with
>> broken builds from upstream changes. So I look at this as purely a game of
>> statistics: are you more likely to have code break because version 1.2 of
>> text changes the type of the map function and you didn't have an upper
>> bound, or because two dependencies of yours have *conflicting* version
>> bounds on a package like aeson[2]? In my experience, the latter occurs far
>> more often than the former.
>
>
> That's because you maintain a lot of packages, and you're considering
> buildability on short time frames (i.e. you mostly care about "does all the
> latest stuff build right now?"). The consequences of violating the PVP are
> that as a piece of code ages, the probability that it still builds goes to
> *zero*, even if you go and dig out the old GHC version that you were
> using at the time. I find this really unacceptable, and believe that people
> who are choosing not to be compliant with the policy are BREAKING HACKAGE
> and making life harder for everyone by trading convenience now for
> guaranteed pain later. In fact, in my opinion the server ought to be
> machine-checking PVP compliance and refusing to accept packages that don't
> obey the policy.
>
> Like Ed said, this is pretty cut and dried: we have a policy, you're
> choosing not to follow it, you're not in compliance, you're breaking stuff.
> We can have a discussion about changing the policy (and this has definitely
> been discussed to death before), but I don't think your side has the
> required consensus/votes needed to change the policy. As such, I really
> wish that you would reconsider your stance here.
>
>
I really don't like this appeal to authority. I don't know who the "royal
we" is that you are referring to here, and I don't accept the premise that
the rest of us must simply adhere to a policy because "it was decided." "My
side," as you refer to it, is pointing out concrete negative consequences of
the PVP. I'd expect "your side" to respond in kind, not simply assert that
we're "breaking Hackage" and other such hyperbole.

Now, I think I understand what you're alluding to, and if I do, then you're
advocating irresponsible development. I maintain codebases which use older
versions of packages, and I know
others who do the same. The rule for this is simple: if your development
process only works when third parties adhere to some rules you've
established, you're in for a world of hurt. You're correct: if everyone
rigidly followed the PVP, *and* no one ever made any mistakes, *and* the
PVP solved all concerns, then you could get away with the development
practices you're talking about.

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a
transitive dependency introduces new exports, or provides new typeclass
instances, a fully PVP-compliant stack can be broken. (If anyone doubts
this claim, there's a sketch of how it happens right after this list. This
has come up in practice.)
* People make mistakes. I've been bitten by people making breaking changes
in point releases by mistake. If the only way your build will succeed is by
assuming no one will ever mess up, you're in trouble.
* Just because your code *builds*, doesn't mean your code *works*.
Semantics can change: bugs can be introduced, bugs that you depended upon
can be resolved, performance characteristics can change in breaking ways,
etc.
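
To spell that first point out, here is a minimal sketch. None of it comes
from a real package: "foo", Data.Foo and the version numbers are made up
purely for illustration.

    -- Data/Foo.hs: stands in for the dependency, at version 1.0.2.
    -- In 1.0.1 the export list was just (encode); adding 'decode' only
    -- requires bumping the third version component under the PVP.
    module Data.Foo (encode, decode) where

    encode :: [String] -> String
    encode = unwords

    decode :: String -> [String]    -- new in 1.0.2
    decode = words

    -- Main.hs: downstream code, written and tested against foo-1.0.1 with
    -- the PVP-compliant bound "foo >= 1.0.1 && < 1.1", which also accepts
    -- 1.0.2.
    module Main where

    import Data.Foo                 -- unqualified, as is common

    -- A local helper that happens to share the newly added name.
    decode :: String -> [String]
    decode = words

    main :: IO ()
    main = putStrLn (encode (decode "this built fine against foo-1.0.1"))
    -- Against foo-1.0.1 this compiles. Against foo-1.0.2 GHC rejects it
    -- with "Ambiguous occurrence 'decode'", even though every bound
    -- involved is PVP-compliant.

New non-orphan instances are allowed in a minor release as well, and they
can collide with instances a downstream package already defines in just the
same way.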

I absolutely believe that, if you want to have code that builds reliably,
you have to specify all of your deep dependencies. That's what I do for any
production software, and it's what I recommend to anyone who will listen to
me. Trying to push this off as a responsibility of every Hackage package
author is (1) shifting the burden to the wrong place, and (2)
irresponsible, since some maintainer out in the rest of the world has no
obligation to make sure your code keeps working. That's your responsibility.


> I've long maintained that the solution to this issue should be tooling.
> The dependency graph that you stipulate in your cabal file should be a
> *warrant* that "this package is known to be compatible with these versions
> of these packages". If a new major version of package "foo" comes out, a
> bumper tool should be able to try relaxing the dependency and seeing if
> your package still builds, bumping your version number accordingly based on
> the PVP rules. Someone released a tool to attempt to do this a couple of
> days ago --- I haven't tried it yet but surely with a bit of group effort
> we can improve these tools so that they're really fast and easy to use.
>
> Of course, people who want to follow PVP are also going to need tooling to
> make sure their programs still build in the future because so many people
> have broken the policy in the past -- that's where proposed kludges like
> "cabal freeze" are going to come in.
>
>
This is where we apparently fundamentally disagree. cabal freeze IMO is not
at all a kludge. It's the only sane approach to reliable builds. If I ran
my test suite against foo version 1.0.1, performed manual testing on 1.0.1,
did my load testing against 1.0.1, I don't want some hotfix build to
automatically get upgraded to version 1.0.2, based on the assumption that
foo's author didn't break anything.
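
Concretely, the freeze I'm describing is nothing more than a constraints
file that pins every package in the install plan, foo included, to the
exact versions that were tested. A sketch, assuming a cabal.config-style
constraints block; all of the package names and versions below are made up:

    -- Every package in the install plan pinned to the exact version the
    -- application was tested against. Illustrative names and versions only.
    constraints: aeson ==0.7.0.1,
                 attoparsec ==0.11.2.1,
                 foo ==1.0.1,
                 text ==1.1.0.0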

Michael