Herbert Valerio Riedel | 16 Feb 11:14 2014

Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property

Hello *,

Right now, there seems to be no "defined" way to create a zero
'Bits' value (one with all bits cleared) without also requiring a
'Num' instance or using at least two operations from the 'Bits' class
(e.g. `clearBit (bit 0) 0` or `let x = bit 0 in xor x x`).
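For illustration, the two workarounds just mentioned, packaged as
'Num'-free definitions (a sketch; the helper names are made up):

    import Data.Bits (Bits, bit, clearBit, xor)

    -- Two 'Num'-free ways to build an all-bits-cleared value using
    -- only methods of 'Bits':
    zeroViaClear :: Bits a => a
    zeroViaClear = clearBit (bit 0) 0        -- set bit 0, then clear it

    zeroViaXor :: Bits a => a
    zeroViaXor = let x = bit 0 in x `xor` x  -- x `xor` x clears all bits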

OTOH, introducing a new method 'class Bits a where bitZero :: a' seems
overkill to me.

However, "bit (-1)"[1] seems to result in just such a zero-value for all
'Bits' instances from base, so I'd hereby propose to simply document
this as an expected property of 'bit', as well as the recommended way to
introduce a zero-value (for when 'Num' is not available).

Discussion period: 2 weeks

 [1]: ...or more generally 'bit n == 0' for n < 0, as 'bit' is usually
      implemented as 'bit n = 1 `shiftL` n'
Henning Thielemann | 16 Feb 13:17 2014

Re: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property

On 16.02.2014 11:14, Herbert Valerio Riedel wrote:
> Hello *,
>
> Right now, there seems to be no "defined" way to create a zero
> 'Bits' value (one with all bits cleared) without also requiring a
> 'Num' instance or using at least two operations from the 'Bits' class
> (e.g. `clearBit (bit 0) 0` or `let x = bit 0 in xor x x`).
>
> OTOH, introducing a new method 'class Bits a where bitZero :: a' seems
> overkill to me.
>
> However, "bit (-1)"

It would be better to forbid "bit (-1)" via the type system, if that 
were possible. If only indices that actually exist were allowed, then 
we would have a nice law like "popCount (bit n) == 1". I would not 
like the exceptional "bit (-1)" to become the blessed way to create 
zeros. An additional "zero" method with a default implementation would 
be the clean way.
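For what it's worth, that law can already be checked over the indices
that actually exist for a concrete type (a sketch, restricted to
Word64; the function name is made up):

    import Data.Bits (bit, popCount)
    import Data.Word (Word64)

    -- Henning's desired law, checked only over valid indices
    -- (0..63 for Word64):
    popCountBitLaw :: Bool
    popCountBitLaw =
        and [ popCount (bit n :: Word64) == 1 | n <- [0 .. 63] ]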
ARJANEN Loïc Jean David | 16 Feb 15:10 2014

Re: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property

Hello,

I'll have to come down against that proposal because, at least on amd64 for 
64-bit types (Int, Int64, Word & Word64), it doesn't work.
That's probably because bit n is usually implemented as 1 `shiftL` n and, on 
this architecture, the arithmetic left-shift instruction takes only the shift 
count's 6 low-order bits into account. So we have bit (-1) = shiftL 1 (-1) = 
shiftL 1 63 = 2 ^ 63.
I suspect the same may happen for 32-bit types on x86, where only the 5 
low-order bits of the shift count are considered.
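For reference, a GHCi session mirroring this observation might look as
follows (a sketch; the result was observed on amd64, and since negative
shift counts are outside the documented contract of shiftL, other
platforms can behave differently):

    -- GHCi transcript (amd64; the result is platform-dependent,
    -- not guaranteed):
    --
    -- >>> import Data.Bits
    -- >>> import Data.Word
    -- >>> bit (-1) :: Word64
    -- 9223372036854775808    -- i.e. 2^63, not the hoped-for 0
    -- >>> popCount (bit (-1) :: Word64)
    -- 1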

Regards,
ARJANEN Loïc

On Sunday 16 February 2014 at 11:14:12, Herbert Valerio Riedel wrote:
> Hello *,
> 
> Right now, there seems to be no "defined" way to create a zero
> 'Bits' value (one with all bits cleared) without also requiring a
> 'Num' instance or using at least two operations from the 'Bits' class
> (e.g. `clearBit (bit 0) 0` or `let x = bit 0 in xor x x`).
> 
> OTOH, introducing a new method 'class Bits a where bitZero :: a' seems
> overkill to me.
> 
> However, "bit (-1)"[1] seems to result in just such a zero value for all
> 'Bits' instances from base, so I hereby propose to simply document
> this as an expected property of 'bit', and as the recommended way to
> introduce a zero value (for when 'Num' is not available).

Herbert Valerio Riedel | 16 Feb 17:42 2014

Proposal: add new Data.Bits.Bits(bitZero) method (was: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property)

Hi,

On 2014-02-16 at 15:10:42 +0100, ARJANEN Loïc Jean David wrote:
> I'll have to come down against that proposal because, at least on amd64 for
> 64-bit types (Int, Int64, Word & Word64), it doesn't work.

You're right, I don't know how I could have missed that :-/

Since the presumed pre-condition for the proposal (that 'bit (-1) == 0'
already holds) turned out to be false, I hereby amend the proposal to

> Introduce a new class method
>
>   class Bits a where
>       ...
>       -- | Value with all bits cleared
>       bitZero :: a
>       ...
>
> modulo naming of 'bitZero'

(I'm hesitant to consume "zero" from the namespace as was suggested by Henning)
Edward Kmett | 16 Feb 17:45 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method (was: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property)

`bitZero` sounds like it should equal `bit 0`, which it doesn't. 

zeroBits ?

-Edward


Eric Mertens | 16 Feb 17:51 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method (was: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property)

The last time this topic came up (back when Num was split off) the discussion was derailed by the idea of a Bool class and then forgotten. I've hoped for an explicit zero element since then.

+1 zeroBits

Henning Thielemann | 16 Feb 18:02 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

On 16.02.2014 17:51, Eric Mertens wrote:

> The last time this topic came up (back when Num was split off) the
> discussion was derailed by the idea of a Bool class and then forgotten.
> I've hoped for an explicit zero element since then.

I think it was this discussion:
   https://www.haskell.org/pipermail/libraries/2011-October/016919.html
wren ng thornton | 27 Feb 02:30 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method (was: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property)

I'm +1 for the following slightly extended proposal:

* add zeroBits
* with default implementation: clearBit (bit 0) 0
* and requiring the following laws for all n valid for the type:
    * clearBit zeroBits n == zeroBits
    * setBit zeroBits n == bit n
    * testBit zeroBits n == False
    * popCount zeroBits == 0
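These laws translate directly into executable checks, e.g.
exhaustively over Word8 (a sketch; since 'zeroBits' does not exist
yet, the proposed default implementation is written out locally):

    import Data.Bits (bit, clearBit, popCount, setBit, testBit)
    import Data.Word (Word8)

    -- The proposed default implementation, spelled out for Word8:
    zeroW8 :: Word8
    zeroW8 = clearBit (bit 0) 0

    -- The four laws above, checked over all valid indices of Word8:
    lawsHold :: Bool
    lawsHold = and
        [ and [ clearBit zeroW8 n == zeroW8 | n <- idx ]
        , and [ setBit zeroW8 n == bit n    | n <- idx ]
        , and [ not (testBit zeroW8 n)      | n <- idx ]
        , popCount zeroW8 == 0
        ]
      where
        idx = [0 .. 7]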

-- 
Live well,
~wren
Edward Kmett | 27 Feb 22:39 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method (was: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property)

I like those laws as far as they go.

It'd be nice to sit down and specify the rest of the Data.Bits laws some day. (Not volunteering!)

-Edward


Herbert Valerio Riedel | 28 Feb 10:36 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method


Fyi, here's a draft patch:

 https://github.com/hvr/packages-base/commit/03c015951a385533b9f419863c37b7df1f791190

feel free to comment on the patch (by annotating on GitHub, or
discussing it here), as this is what I'll probably push to GHC HEAD
after the deadline.

Note: I still have to go over the existing Bits instances, to see whether
GHC properly inlines/constant-folds 'clearBit (bit 0) 0' with the
default implementation, or whether I need to add a couple of explicit
'zeroBits = 0' definitions to the instances in `base`.
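The shape of such an instance-level override might look like this (a
sketch, not the actual patch; 'MyBits' is a stand-in imitating the
extended Data.Bits.Bits class):

    import qualified Data.Bits as B

    class MyBits a where
        bit      :: Int -> a
        clearBit :: a -> Int -> a
        zeroBits :: a
        zeroBits = clearBit (bit 0) 0    -- generic default

    instance MyBits Int where
        bit      = B.bit
        clearBit = B.clearBit
        zeroBits = 0                     -- constant override: nothing
                                         -- left to constant-fold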

On 2014-02-27 at 22:39:27 +0100, Edward Kmett wrote:
> I like those laws as far as they go.
>
> It'd be nice to sit down and specify the rest of the Data.Bits laws some
> day. (Not volunteering!)
>
> -Edward

-- 
"Elegance is not optional" -- Richard O'Keefe
Twan van Laarhoven | 17 Feb 14:18 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

+1 to `zeroBits`. The name fits in with `setBit` etc.
I'm also okay with the name `zero`.

Twan

On 16/02/14 17:45, Edward Kmett wrote:
> `bitZero` sounds like it should equal `bit 0`, which it doesn't.
>
> zeroBits ?
>
> -Edward
Henning Thielemann | 16 Feb 17:49 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

On 16.02.2014 17:42, Herbert Valerio Riedel wrote:

>> Introduce a new class method
>>
>>    class Bits a where
>>        ...
>>        -- | Value with all bits cleared
>>        bitZero :: a
>>        ...
>>
>> modulo naming of 'bitZero'
>
> (I'm hesitant to consume "zero" from the namespace as was suggested by Henning)

Let me defend "zero": functions in the module that contain "Bit" in 
their name (clearBit, setBit, testBit) access a single bit. All 
functions that access many bits do not contain "Bit" (rotate, shift, 
xor, popCount). I propose to import the "zero" function qualified, 
just like the other functions from the module.

I also propose to provide a default implementation of "zero", like 
"clearBit (bit 0) 0". This way, the "zero" method can be introduced 
without breaking existing code.
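Spelled out, this variant might look as follows (a sketch with a
stand-in class 'MyBits' and a toy instance; the point is that an
instance written before the new method existed keeps compiling and
inherits a working "zero"):

    class MyBits a where
        bit      :: Int -> a
        clearBit :: a -> Int -> a
        zero     :: a
        zero = clearBit (bit 0) 0    -- proposed default

    -- A pre-existing instance that never defines 'zero' still
    -- compiles, and gets 'zero' for free via the default:
    newtype Mask = Mask Integer deriving (Eq, Show)

    instance MyBits Mask where
        bit n = Mask (2 ^ n)
        clearBit (Mask x) n
            | odd (x `div` 2 ^ n) = Mask (x - 2 ^ n)  -- bit n was set
            | otherwise           = Mask x            -- already clear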
ARJANEN Loïc Jean David | 16 Feb 19:16 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

On Sunday 16 February 2014 at 17:49:23, Henning Thielemann wrote:
> Let me defend "zero": functions in the module that contain "Bit" in
> their name (clearBit, setBit, testBit) access a single bit. All
> functions that access many bits do not contain "Bit" (rotate, shift,
> xor, popCount). I propose to import the "zero" function qualified,
> just like the other functions from the module.
> 
> I also propose to provide a default implementation of "zero", like
> "clearBit (bit 0) 0". This way, the "zero" method can be introduced
> without breaking existing code.

I'm in favour of that modified proposal, so +1 for a "zero" member of Bits.

And if we fear problems with collisions, we could always name the member 
something like zeroValue or zeroBits.
Henning Thielemann | 16 Feb 19:29 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

On 16.02.2014 19:16, ARJANEN Loïc Jean David wrote:

> I'm in favour of that modified proposal, so +1 for a "zero" member of Bits.
>
> And if we fear problems with collisions, we could always name the member
> something like zeroValue or zeroBits.

If people adhere to the package versioning policy, then there cannot be 
collisions. That is,

   import Data.Bits

requires strict version bounds (>=x.y.z.w && <x.y.z+1)

and both of

   import Data.Bits (zero)
   import qualified Data.Bits as Bits

allow lax version bounds (>=x.y.z && <x.y+1).
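Stated as code, the pairing of import style and version bound might be
summarized like this (a sketch; it relies on the PVP rule that mere
additions only bump the third version component):

    import qualified Data.Bits as Bits  -- new exports cannot clash:
                                        -- lax bound >=x.y.z && <x.y+1
    import Data.Bits (popCount)         -- explicit list: likewise safe
    -- an open 'import Data.Bits' would need the strict bound
    -- >=x.y.z.w && <x.y.z+1, because any newly exported name could
    -- collide with a local one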

Btw. "rotate" may already cause collisions with geometry libraries.
Herbert Valerio Riedel | 22 Feb 11:03 2014

[Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

Hello *,

Here's a mid-discussion summary of the proposal

>> Introduce a new class method
>>
>>   class Bits a where
>>       ...
>>       -- | Value with all bits cleared
>>       <0-value-method> :: a
>>       ...
>>
>> modulo naming of '<0-value-method>'

from my point of view:

 - The idea came up already in 2011, when the Num superclass was to be removed from Bits
   (but the discussion derailed)

 - So far there is general consensus (i.e. no "-1"s afaics) that it is desirable
   to have a method in 'Bits' that introduces an all-bits-clear value

 - Use "clearBit (bit 0) 0" as the default implementation, for a smooth upgrade path

 - Naming for <0-value-method> boils down to two candidates:

    a) 'Data.Bits.zero'

        - based on the idea that 'Data.Bits' ought to be imported
          qualified (or with an explicit import list) anyway
          (-> thus following PVP practice)

        - many existing Data.Bits.Bits methods, such as 'rotate',
          'complement', 'popCount', 'xor', or 'shift', don't have
          'bit' in their name (and the few that do operate on
          single bits)

        - supporters (in no particular order):

           - ARJANEN Loïc Jean David
           - Henning Thielemann
           - Herbert Valerio Riedel (+0.99)
           - Twan van Laarhoven

    b) 'Data.Bits.zeroBits'

        - more verbose name reduces risk of namespace conflicts with unqualified imports

        - supporters (in no particular order):

           - Edward Kmett
           - Eric Mertens
           - Herbert Valerio Riedel
           - Twan van Laarhoven
           - (maybe?) ARJANEN Loïc Jean David

    So far there doesn't seem to be a very clear preference for
    'zeroBits' over 'zero'. It might help if those who expressed some
    kind of support for both variants could clarify whether their
    preference has any bias towards 'zeroBits' or 'zero'.

Cheers,
   hvr
Carter Schonwald | 22 Feb 15:56 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

'zero' is a big bit of namespace.  I'd favor 'zeroBits' over 'zero'.

Henning Thielemann | 24 Feb 10:47 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method

On 22.02.2014 15:56, Carter Schonwald wrote:

> 'zero' is a big bit of namespace.  I'd favor 'zeroBits' over 'zero'.

... only if you plan to add another "zero" thing to the Data.Bits 
module. Conflicts with other modules must be resolved with importing 
mechanisms, not by inventing names that differ from all other 
identifiers in all other modules in all other packages. It makes no 
sense to have a nice module system but then use it like the #include 
directive of the C preprocessor, and eventually complain about the 
deficiencies of the current module system.
Daniel Trstenjak | 24 Feb 11:14 2014

Re: Proposal: add new Data.Bits.Bits(bitZero) method


On Mon, Feb 24, 2014 at 10:47:47AM +0100, Henning Thielemann wrote:
> ... only if you plan to add another "zero" thing to the Data.Bits
> module. Conflicts with other modules must be resolved with importing
> mechanisms, not by inventing names that differ from all other
> identifiers in all other modules in all other packages. It makes no
> sense to have a nice module system but then use it like the #include
> directive of the C preprocessor, and eventually complain about the
> deficiencies of the current module system.

Yes, I'm thinking along the same lines.

Having "unique" names also seems to encourage importing modules
unqualified, which is sometimes understandable - and I won't claim
that I never do it ;) - but in the end it has its own issues.

Sure, the chance of getting future conflicts by importing 'zero'
unqualified might be higher than with 'zeroBits', but the qualified
name 'Bits.zero' isn't any longer, and it is more likely that you
won't get conflicts in the future that way.

And if you're writing some local, algorithmic code, suffixes like
'Bits' just clutter the code.

Greetings,
Daniel
Edward Kmett | 22 Feb 16:48 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

I am notably strongly against taking 'zero' as it is much more appropriately used by more algebraic classes, and it is an annoyingly common name to take for such an often unqualified import.

-Edward


Twan van Laarhoven | 24 Feb 18:34 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

I agree with Edward that it would be better to reserve 'zero' for something like 
an additive identity.

I'll withdraw my +1 for the name `zero`, in favor of +1 for `zeroBits`.

Twan

On 22/02/14 16:48, Edward Kmett wrote:
> I am notably strongly against taking 'zero' as it is much more appropriately
> used by more algebraic classes, and it is an annoyingly common name to take for
> such an often unqualified import.
>
> -Edward
Henning Thielemann | 24 Feb 18:41 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 24.02.2014 18:34, Twan van Laarhoven wrote:
> I agree with Edward that it would be better to reserve 'zero' for
> something like an additive identity.

Why would you want to reserve Bits.zero for an additive zero? This makes 
no sense.

> I'll withdraw my +1 for the name `zero`, in favor of +1 for `zeroBits`.

What is bad about Bits.zero?
Brandon Allbery | 24 Feb 18:57 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Mon, Feb 24, 2014 at 12:41 PM, Henning Thielemann <schlepptop@henning-thielemann.de> wrote:
> On 24.02.2014 18:34, Twan van Laarhoven wrote:
>> I agree with Edward that it would be better to reserve 'zero' for
>> something like an additive identity.
>
> Why would you want to reserve Bits.zero for an additive zero? This makes no sense.

There is something vaguely smelly about specifically omitting the context

> it is an annoyingly common name to take for
> such an often unqualified import.

in the original message. Yes, we're quite aware you do not consider it legitimate. Distorting someone else's meaning to press your point is also not legitimate.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b@gmail.com                                  ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Henning Thielemann | 24 Feb 19:09 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 24.02.2014 18:57, Brandon Allbery wrote:
> On Mon, Feb 24, 2014 at 12:41 PM, Henning Thielemann
> <schlepptop@henning-thielemann.de> wrote:
>
>> On 24.02.2014 18:34, Twan van Laarhoven wrote:
>>
>>> I agree with Edward that it would be better to reserve 'zero' for
>>> something like an additive identity.
>>
>> Why would you want to reserve Bits.zero for an additive zero? This
>> makes no sense.
>
> There is something vaguely smelly about specifically omitting the context
>
>> it is an annoyingly common name to take for
>> such an often unqualified import.
>
> in the original message. Yes, we're quite aware you do not consider it
> legitimate. Distorting someone else's meaning to press your point is
> also not legitimate.

The phrase "reserve 'zero'" suggests that once we choose Bits.zero, the 
identifier 'zero' is reserved once and for all and cannot be used for 
something different anymore. That is, this phrasing removes the option 
of qualified imports from the scope and thus generates the wrong context.

Can someone please, please tell me why we must avoid qualified imports 
at all costs? Why is this option repeatedly ignored when just saying 
zeroBits (+1) or zero (-1)?
Edward Kmett | 24 Feb 20:56 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

Henning,

As far as I know the only serious proponent of using qualified imports for all imports all the time is you.

Name conflicts don't affect you. We get that. We got that loud and clear virtually every time the naming of pretty much anything has arisen on this mailing list for the last few years.

That doesn't change the fact that your practice and common practice diverge.

I'm hard pressed to like an option that causes pain for the sloppy majority.

-Edward


Evan Laforge | 24 Feb 21:28 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Mon, Feb 24, 2014 at 11:56 AM, Edward Kmett <ekmett@gmail.com> wrote:
> Henning,
>
> As far as I know the only serious proponent of using qualified imports for
> all imports all the time is you.

Well, so am I, but I'm not the crusading sort.  Unqualified import as
the default is definitely the dominant style, to the point where
language extensions tend to assume it, e.g. record puns, or even the
whole shared-record-names debate.

Interestingly, Python and Java seem to lean the other way.  Not sure
about OCaml, but I remember qualified names from back in the day.  A
culture thing, I guess.  Infix operators and backticks are uniquely
Haskelly things that probably contribute a little.
Henning Thielemann | 24 Feb 21:30 2014

qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 24.02.2014 20:56, Edward Kmett wrote:

> As far as I know the only serious proponent of using qualified imports
> for all imports /all/ the time is you.

First, there are some important packages like containers and bytestring 
that are clearly intended for qualified import, and I really like that 
insertFM was replaced by Map.insert in the past. Second, your argument 
addresses the person, not the issue.

> Name conflicts don't affect you. We get that. We got that loud and clear
> virtually every time the naming of pretty much anything has arisen on
> this mailing list for the last few years.

Sure, because there is no general discussion about the pros and cons of 
the various naming styles. That's why it is discussed for every single 
identifier. What would a general discussion look like? What could be its 
outcome? That I am no longer allowed to propose the qualified-import style?

> That doesn't change the fact that your practice and common practice diverge.

And I challenge the common practice, because it prefers convenience for 
a few package authors over convenience for many package users (who may 
not even be Haskell programmers).

For an example, let me look at your lens package. You use unqualified and 
implicit imports. According to the PVP you would then need to use 
tight version bounds like "containers >=0.4.0 && <0.5.0.1", but you 
don't. That is, your package does not conform to the PVP. This implies 
that users may have to fix your package when one of the imported 
packages starts to export identifiers that clash with those from "lens". 
Of course, there is no need to conform to the PVP. But it works best if 
many people adhere to it.

I did not write the PVP, that is, there must be at least one other 
person who cares. I understand the need for it and try to comply with it. 
Maybe I am in a minority. Looking at Hackage, it seems I am in a 
minority. But I believe I am in the minority that cares whether packages 
work together. Shall I capitulate to the majority that does not seem to 
care about package interoperability?

I am also OK if sloppy common practice happens in many Hackage packages. 
I do not need to use them. But I need to use 'base'.
Henning Thielemann | 24 Feb 22:43 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 24.02.2014 21:30, Henning Thielemann wrote:

> For an example, let me look at your lens package. You use unqualified and
> implicit imports. According to the PVP you would then need to use
> tight version bounds like "containers >=0.4.0 && <0.5.0.1", but you
> don't.

Sorry, it must be "containers >=0.4.0 && <0.5.1", but it is
"containers >=0.4.0 && <0.6".
Edward A Kmett | 24 Feb 22:53 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

...and I have had a half dozen problems caused by this policy in 5 years,
precisely because people are careful with their names in the packages I
do depend upon.

What I maintain now is approximately a full-time job. Depending on minor
versions and multiplying my workload by a nontrivial factor to eliminate
a non-problem isn't going to happen.

-Edward

> On Feb 24, 2014, at 4:43 PM, Henning Thielemann <schlepptop@henning-thielemann.de> wrote:
> 
> On 24.02.2014 21:30, Henning Thielemann wrote:
> 
>> For an example, let me look at your lens package. You use unqualified and
>> implicit imports. According to the PVP you would then need to use
>> tight version bounds like "containers >=0.4.0 && <0.5.0.1", but you
>> don't.
> 
> Sorry, it must be "containers >=0.4.0 && <0.5.1", but it is
> "containers >=0.4.0 && <0.6".
> 
Henning Thielemann | 28 Feb 21:10 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I got distracted by writing a tool that checks consistency of package 
dependencies and import styles. It's not yet perfect, but usable:

https://hackage.haskell.org/package/check-pvp

Description:
   Check whether the version ranges used in the @Build-Depends@ field
   match the style of module imports
   according to the Package Versioning Policy (PVP).
   See <http://www.haskell.org/haskellwiki/Package_versioning_policy>.
   The tool essentially looks for any dependency
   like @containers >=0.5 && <0.6@
   that allows the addition of identifiers to modules
   within the version range.
   Then it checks whether all module imports from @containers@
   are protected against name clashes
   that could be caused by the addition of identifiers.
   .
   You must run the tool in a directory containing a Cabal package.
   .
   > $ check-pvp
   .
   This requires that the package is configured,
   since only then the association of packages to modules is known.
   If you want to run the tool on a non-configured package
   you may just check all imports for addition-proof style.
   .
   > $ check-pvp --include-all
   .
   A detailed description of the procedure
   and the rationale behind it follows.
   .
   First the program classifies all dependencies
   in the Cabal file of the package.
   You can show all classifications with the @--classify-dependencies@ option,
   otherwise only problematic dependencies are shown.
   .
   A dependency like @containers >=0.5.0.3 && <0.5.1@
   does not allow changes to the API of @containers@
   and thus the program does not check its imports.
   Clashing import abbreviations are an exception.
   .
   The dependency @containers >=0.5.1 && <0.6@
   requires more care when importing modules from @containers@,
   and this is what the program is going to check next.
   This is the main purpose of the program!
   I warmly recommend this kind of dependency range
   since it greatly reduces the work needed
   to keep your package going together with its imported packages.
   .
   Dependencies like @containers >=0.5@ or @containers >=0.5 && <1@
   are always problematic,
   since within the specified version ranges identifiers can disappear.
   There is no import style that protects against removed identifiers.
   .
   An inclusive upper bound as in @containers >=0.5 && <=0.6@
   will also cause a warning, because it is unnecessarily strict.
   If you know that @containers-0.6@ works for you,
   then @containers-0.6.0.1@ or @containers-0.6.1@ will also work,
   depending on your import style.
   A special case of inclusive upper bounds are specific versions
   like in @containers ==0.6@.
   The argument for the warning remains the same.
   .
   Please note that the check of ranges
   is performed entirely on the package description.
   The program will not inspect the imported module contents.
   E.g. if you depend on @containers >=0.5 && <0.6@
   but import in a way that risks name clashes,
   then you may just extend the dependency to @containers >=0.5 && <0.6.1@
   in order to let the checker fall silent.
   If you use the dependency @containers >=0.5 && <0.6.1@
   then the checker expects that you have verified
   that your package works with all versions of kind @0.5.x@
   and the version @0.6.0@.
   Other versions would then work, too,
   due to the constraints imposed by the package versioning policy.
   .
   Let us now look at imports
   that must be protected against identifier additions.
   .
   The program may complain about a lax import.
   This means you have imported like
   .
   > import Data.Map as Map
   .
   Additions to @Data.Map@ may clash with other identifiers,
   thus you must import either
   .
   > import qualified Data.Map as Map
   .
   or
   .
   > import Data.Map (Map)
   .
   The program may complain about an open list of constructors as in
   .
   > import Data.Sequence (ViewL(..))
   .
   Additions of constructors to @ViewL@ may also conflict with other identifiers.
   You must instead import like
   .
   > import Data.Sequence (ViewL(EmptyL, (:<)))
   .
   or
   .
   > import qualified Data.Sequence as Seq
   .
   The program emits an error on clashing module abbreviations like
   .
   > import qualified Data.Map.Lazy as Map
   > import qualified Data.Map.Strict as Map
   .
   This error is raised
   whenever multiple modules are imported with the same abbreviation,
   where at least one module is open for additions.
   Our test is overly strict in the sense that it also blames
   .
   > import qualified Data.Map as Map
   > import qualified Data.Map as Map
   .
   but I think it is a good idea to avoid redundant imports anyway.
   .
   Additionally you can enable a test for hiding imports
   with the @--pedantic@ option.
   The import
   .
   > import Data.Map hiding (insert)
   .
   is not bad in the sense of the PVP,
   but this way you depend on the existence of the identifier @insert@
   although you do not need it.
   If it is removed in a later version of @containers@,
   then your import breaks although you did not use the identifier.
   .
   Finally you can control which items are checked.
   First of all you can select the imports that are checked.
   Normally the imports are checked that belong to lax dependencies
   like @containers >=0.5 && <0.6@.
   However this requires the package to be configured,
   in order to know which import belongs to which dependency.
   E.g. @Data.Map@ belongs to @containers@.
   You can just check all imports for being addition-proof
   using the @--include-all@ option.
   Following that, you can write the options
   @--include-import@,
   @--exclude-import@,
   @--include-dependency@,
   @--exclude-dependency@,
   which allow you to additionally check or ignore imports
   from certain modules or packages.
   These modifiers are applied from left to right.
   E.g. @--exclude-import=Prelude@ will accept any import style for @Prelude@
   and @--exclude-dependency=foobar@ will ignore the package @foobar@,
   say, because it does not conform to the PVP.
   .
   Secondly, you may ignore certain modules or components of the package
   using the options
   @--exclude-module@,
   @--exclude-library@,
   @--exclude-executables@,
   @--exclude-testsuites@,
   @--exclude-benchmarks@.
   E.g. @--exclude-module=Paths_PKG@ will exclude the Paths module
   that is generated by Cabal.
   I assume that it will always be free of name clashes.
   .
   Known problems:
   .
   * The program cannot automatically filter out the @Paths@ module.
   .
   * The program cannot find and check preprocessed modules.
   .
   * The program may yield wrong results in the presence of Cabal conditions.
   .
   If this program proves to be useful
   it might eventually be integrated into the @check@ command of @cabal-install@.
   See <https://github.com/haskell/cabal/issues/1703>.
   .
   Alternative:
   If you want to allow exclusively large version ranges, i.e. @>=x.y && <x.y+1@,
   then you may also add the option @-fwarn-missing-import-lists@
   to the @GHC-Options@ fields of your Cabal file.
   See <https://ghc.haskell.org/trac/ghc/ticket/4977>.
   Unfortunately there is no GHC warning on clashing module abbreviations.
   See <https://ghc.haskell.org/trac/ghc/ticket/4980>.
Edward Kmett | 28 Feb 21:23 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I just want to say thank you for writing this tool.

I may not agree with your interpretation of the PVP on this particular issue, and will probably only use it on a couple of smaller packages that have almost no imports, but at least putting code out there to help those who do want to work in that style is a very helpful and valuable thing.

-Edward


Carter Schonwald | 28 Feb 22:22 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I second that sentiment of appreciation.  As much as we all disagree on some things, we all agree you make a tremendous amount of neat code available to the community. 

On Friday, February 28, 2014, Edward Kmett <ekmett <at> gmail.com> wrote:

I just want to say thank you for writing this tool.

I may not agree with your interpretation of the PVP on this particular issue, and will probably only use it on a couple of smaller packages that have almost no imports, but at least putting code out there to help those who do want to work in that style is a very helpful and valuable thing.

-Edward


On Fri, Feb 28, 2014 at 3:10 PM, Henning Thielemann <schlepptop <at> henning-thielemann.de> wrote:
I got distracted by writing a tool that checks consistency of package dependencies and import styles. It's not yet perfect, but usable:

https://hackage.haskell.org/package/check-pvp


Description:
  Check whether the version ranges used in the <at> Build-Depends <at> field
  matches the style of module imports
  according to the Package Versioning Policy (PVP).
  See <http://www.haskell.org/haskellwiki/Package_versioning_policy>.
  The tool essentially looks for any dependency
  like <at> containers >=0.5 && <0.6 <at>
  that allows the addition of identifiers to modules
  within the version range.
  Then it checks whether all module imports from <at> containers <at>
  are protected against name clashes
  that could be caused by addition of identifiers.
  .
  You must run the tool in a directory containing a Cabal package.
  .
  > $ check-pvp
  .
  This requires that the package is configured,
  since only then the association of packages to modules is known.
  If you want to run the tool on a non-configured package
  you may just check all imports for addition-proof style.
  .
  > $ check-pvp --include-all
  .
  It follows a detailed description of the procedure
  and the rationale behind it.
  .
  First the program classifies all dependencies
  in the Cabal file of the package.
  You can show all classifications with the @--classify-dependencies@ option;
  otherwise only problematic dependencies are shown.
  .
  A dependency like @containers >=0.5.0.3 && <0.5.1@
  does not allow changes to the API of @containers@,
  and thus the program does not check its imports.
  Clashing import abbreviations are an exception.
  .
  The dependency @containers >=0.5.1 && <0.6@
  requires more care when importing modules from @containers@,
  and this is what the program is going to check next.
  This is the main purpose of the program!
  I warmly recommend this kind of dependency range,
  since it greatly reduces the work needed
  to keep your package going together with its imported packages.
  .
  Dependencies like @containers >=0.5@ or @containers >=0.5 && <1@
  are always problematic,
  since within the specified version ranges identifiers can disappear.
  There is no import style that protects against removed identifiers.
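  .
  In Cabal syntax the three classes look like this
  (a sketch; each line shows one alternative):
  .
  > -- not checked: the API cannot change within this range
  > Build-Depends: containers >=0.5.0.3 && <0.5.1
  > -- checked: identifiers may be added within this range
  > Build-Depends: containers >=0.5.1 && <0.6
  > -- always problematic: identifiers may disappear
  > Build-Depends: containers >=0.5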
  .
  An inclusive upper bound as in @containers >=0.5 && <=0.6@
  will also cause a warning, because it is unnecessarily strict.
  If you know that @containers-0.6@ works for you,
  then @containers-0.6.0.1@ or @containers-0.6.1@ will also work,
  depending on your import style.
  A special case of inclusive upper bounds are specific versions,
  as in @containers ==0.6@.
  The argument for the warning remains the same.
  .
  Please note that the check of ranges
  is performed entirely on the package description.
  The program will not inspect the imported module contents.
  E.g. if you depend on @containers >=0.5 && <0.6@
  but import in a way that risks name clashes,
  then you may just extend the dependency to @containers >=0.5 && <0.6.1@
  in order to let the checker fall silent.
  If you use the dependency @containers >=0.5 && <0.6.1@,
  then the checker expects that you have verified
  that your package works with all versions of the kind @0.5.x@
  and with the version @0.6.0@.
  Other versions would then work, too,
  due to the constraints imposed by the package versioning policy.
  .
  Let us now look at imports
  that must be protected against identifier additions.
  .
  The program may complain about a lax import.
  This means you have imported like
  .
  > import Data.Map as Map
  .
  Additions to @Data.Map@ may clash with other identifiers,
  thus you must import either
  .
  > import qualified Data.Map as Map
  .
  or
  .
  > import Data.Map (Map)
  .
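  To see the hazard, consider this sketch
  (the name @merge@ is only an illustration, not a real @Data.Map@ export):
  .
  > import Data.Map as Map  -- also brings every Data.Map name into scope unqualified
  > merge :: [a] -> [a] -> [a]
  > merge = (++)
  .
  If a later @containers@ release added @Data.Map.merge@,
  every use of @merge@ in this module would become ambiguous.
  .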
  The program may complain about an open list of constructors as in
  .
  > import Data.Sequence (ViewL(..))
  .
  Additions of constructors to @ViewL@ may also conflict with other identifiers.
  You must instead import like
  .
  > import Data.Sequence (ViewL(EmptyL, (:<)))
Henning Thielemann | 28 Feb 21:59 2014

Re: qualified imports, PVP and so on -> check-pvp

On 28.02.2014 21:10, Henning Thielemann wrote:
> I got distracted by writing a tool that checks consistency of package
> dependencies and import styles. It's not yet perfect, but usable:
>
> https://hackage.haskell.org/package/check-pvp
>
>
> Description:
 > ...
>    The program may complain about an open list of constructors as in
>    .
>    > import Data.Sequence (ViewL(..))
>    .
>    Additions of constructors to  @ViewL@  may also conflict with other
> identifiers.

Maybe I am too strict here. The PVP considers additions of constructors
and class methods as changes, not additions. That is, we don't need to
fear additions of constructors and methods within the range >=x.y &&
<x.y+1. I may turn this test into a "pedantic" one. I still prefer explicit
constructor import lists, since they allow the reader to easily track
the origin of an identifier - especially if it is gone.
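
Spelled out for ViewL's current constructors, the explicit form looks
like this:

   import Data.Sequence (ViewL(EmptyL, (:<)))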
Michael Snoyman | 25 Feb 07:44 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Mon, Feb 24, 2014 at 10:30 PM, Henning Thielemann <schlepptop <at> henning-thielemann.de> wrote:
On 24.02.2014 20:56, Edward Kmett wrote:

As far as I know the only serious proponent of using qualified imports
for all imports /all/ the time is you.

First, there are some important packages like containers and bytestring that clearly are intended for qualified imports, and I really like that insertFM was replaced by Map.insert in the past. Second, your argument addresses a person, not the issue.

Name conflicts don't affect you. We get that. We got that loud and clear
virtually every time the naming of pretty much anything has arisen on
this mailing list for the last few years.

Sure, because there is no general discussion about the pros and cons of various naming styles. That's why it is discussed for every single identifier. What should a general discussion look like? What could be its outcome? That I am no longer allowed to propose the qualified import style?

That doesn't change the fact that your practice and common practice diverge.

And I challenge the common practice, because it prefers convenience for a few package authors over convenience for many package users (who may not even be Haskell programmers).

For an example, let me look at your lens package. You use unqualified and implicit imports. That is, according to the PVP you would need to use tight version bounds like "containers >=0.4.0 && <0.5.0.1", but you don't. That is, your package does not conform to the PVP. It implies that users may have to fix your package when one of the imported packages starts to export identifiers that clash with those from "lens". Of course, there is no need to conform to the PVP. But it works best if many people adhere to it.

I have not written the PVP, that is, there must be at least one other person who cares. I understand the need for it and try to comply with it. Maybe I am in a minority. Looking at Hackage it seems I am in a minority. But I believe I am in the minority who cares whether packages work together. Shall I capitulate to the majority which does not seem to care about package interoperability?

I am also ok if sloppy common practice happens in many Hackage packages. I do not need to use them. But I need to use 'base'.


This email seems to conflate a number of different issues together. I'd like to address them separately.

Firstly, regarding naming style. As I see it, there are essentially three camps on this one:

1. Short names that are intended to be imported qualified. Examples: Data.Map, Data.ByteString.
2. Longer names that can be imported unqualified. Examples: Data.IORef, Control.Concurrent.MVar.
3. Typeclass-based approaches that generalize across multiple libraries. Examples: Data.Foldable, Data.Traversable.

The initial discussion came down to an argument between (1) and (2). What I disagree with in your email, Henning, is the implication that this "sloppy practice" in base will somehow negatively affect you. As I see it, the sum total of the negative effect is that you'll be required to type a few extra characters (e.g., zeroBits instead of zero). Am I missing something? Given that (a) a huge amount of base already works this way, (b) having the longer name will allow the unqualified import approach, and (c) it has a small impact on those wanting unqualified imports, I'd come down with a strong vote in favor of option (2).

Next is the issue of PVP. I am someone who has stopped religiously following the PVP in the past few years. Your email seems to imply that only those following the PVP care about making sure that "packages work together." I disagree here; I don't use the PVP specifically because I care about package interoperability.

The point of the PVP is to ensure that code builds. It's a purely compile-time concept. The PVP solves the problem of an update to a dependency causing a downstream package to break. And assuming everyone adheres to it[1], it ensures that cabal will never try to perform a build which isn't guaranteed to work.

But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.
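
A sketch of the conflict case (foo, bar and the ranges are hypothetical):

   -- foo.cabal
   Build-Depends: aeson >=0.6 && <0.7
   -- bar.cabal
   Build-Depends: aeson >=0.7 && <0.8

A project depending on both foo and bar is now unbuildable, even if both upper bounds are needlessly conservative.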

My point here is: please don't try to frame this argument as "sloppy people who hate compatibility" versus "PVP-adhering people who make Hackage better." Some of us who have stopped following the PVP have done so for very principled reasons, even if you disagree with them. (And Edward's comments on maintenance effort is not to be ignored either.)

Three final points:

* I know that back in the base 3/4 transition there was good reason for upper bounds on base. Today, it makes almost no sense: it simply prevents cabal from even *trying* to perform a compilation. Same goes with libraries like array and template-haskell, which make up most of the issue with testing of GHC 7.8 Stackage builds[3]. Can any PVP proponent explain why these upper bounds still help us on corelibs?
* If you're concerned about your production code base breaking by changes on Hackage, you're doing it wrong. Basing your entire production build on the presumption that Hackage maintainers perfectly follow the PVP and never make any mistakes in new releases of their packages is a recipe for disaster. You should be pinning down the exact versions of packages you depend on. Greg Weber described a technique for this[4]; a sketch of such pinning follows this list.
* The PVP doesn't in any way solve all problems. You can perfectly adhere to the PVP and still experience breakage. I've seen a number of examples of this in the past, mostly to do with the fact that you don't lock down the versions of transitive dependencies, which can cause re-exports to include new functions or expose (or hide) typeclass instances.
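
One way to pin versions down (a sketch, not necessarily the technique from [4]; the versions are placeholders) is a cabal.config file next to your .cabal file:

   constraints: text ==1.1.0.0,
                aeson ==0.7.0.1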

Michael

[1] And never makes any mistakes of course.
[2] This just occurred in Stackage.
Daniel Trstenjak | 25 Feb 08:43 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


Hi Michael,

> (b) having the longer name will allow the unqualified import approach ...

The longer name just reduces the possibility of conflicts, which might
already be sufficient, and in the case of 'zeroBits' it might really be
the right thing.

I think the point of proponents of qualified imports is that by having
'Bits.zero' instead of 'zeroBits' you're typing almost the same number
of characters, but the former is safer.

I don't think that anybody would like to write 'Bits.zeroBits'. Sure, you
could still explicitly import 'zeroBits' and have the same safety,
but then you have more work with the imports, and in the end nobody
wants to do more work - this seems to be the common ground of users
and non-users of qualified imports ;).
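
For illustration, the two safe styles side by side (assuming the
proposed 'zeroBits' lands in Data.Bits):

   import qualified Data.Bits as Bits   -- use site: Bits.zeroBits
   import Data.Bits (Bits, zeroBits)    -- use site: zeroBits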

> But that's only one half of the "package interoperability" issue. I face this
> first hand on a daily basis with my Stackage maintenance. I spend far more time
> reporting issues of restrictive upper bounds than I do with broken builds from
> upstream changes. So I look at this as purely a game of statistics: are you
> more likely to have code break because version 1.2 of text changes the type of
> the map function and you didn't have an upper bound, or because two
> dependencies of yours have *conflicting* version bounds on a package like
> aeson[2]? In my experience, the latter occurs far more often than the former.

I mostly came to the conclusion that the PVP is perfectly fine for
binaries/executables, especially in conjunction with a cabal sandbox,
but in a lot of cases annoying for libraries.

Greetings,
Daniel
Herbert Valerio Riedel | 25 Feb 10:12 2014

Re: qualified imports, PVP and so on

On 2014-02-25 at 07:44:45 +0100, Michael Snoyman wrote:

[...]

> * I know that back in the base 3/4 transition there was good reason for
> upper bounds on base. Today, it makes almost no sense: it simply prevents
> cabal from even *trying* to perform a compilation. Same goes with libraries
> like array and template-haskell, which make up most of the issue with
> testing of GHC 7.8 Stackage builds[3]. Can any PVP proponent explain why
> these upper bounds still help us on corelibs?

I assume by 'corelibs' you mean the set of non-upgradeable libs,
i.e. those tied to the compiler version? (E.g. `bytestring` would be
upgradeable, as opposed to `base` or `template-haskell`)

Well, `base` (together with the other few non-upgradeable libs) is
indeed a special case; also, in `base` usually care is taken to avoid
semantic changes (not visible at the type-signature level), so an upper
bound doesn't gain that much in terms of protecting against semantic
breakages.

Otoh, the situation changes if you have a library where you have
different versions, which are tied to different version ranges of base,
where you want Cabal to select the matching version. Admittedly, this is
a special case for when use of MIN_VERSION_base() wouldn't suffice, but
I wanted to give an example exploiting upper-bounds on the `base` lib.
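
For contrast, a minimal sketch of the ordinary MIN_VERSION_base() case
(the removal of `catch` from the Prelude in base 4.6 is the classic
example):

   {-# LANGUAGE CPP #-}
   #if !MIN_VERSION_base(4,6,0)
   import Prelude hiding (catch)  -- Prelude still exports catch before base 4.6
   #endif
   import Control.Exception (catch)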

There's one other possible minor benefit I can think of that upper
bounds give over compile errors: a more user-friendly message that
points to the reason for the failure, instead of requiring you to guess
what the actual cause of the compile error was. But for non-upgradeable
packages such as `base`, which do big major version jumps for almost
every release (mostly due to changes in GHC modules exposing internals
or adding type-class instances[1]), erring on the
confusing-compile-error side seems to provide more value.

So, as for `base` I mostly agree, that there seems to be little benefit
for upper bounds, *unless* a base3/4 situation comes up again in the
future. So, I'd suggest (for those who don't want to follow PVP with
`base`) to keep using at least a "super"-major upper bound, such as
'base < 5' to leave a door open for such an eventuality.

Cheers,
  hvr

 [1]: I'd argue (but I'd need to research this, to back it up with
      numbers) that we're often suffering from the PVP, because it
      requires us to perform major-version jumps mostly due to
      typeclasses, in order to protect against conflicts with possible
      non-hideable orphan-instances; and that (as some have suggested in
      past already), we might want to reconsider requiring only a minor
      bump on instance-additions, and discourage the orphan-instance
      business by requiring those packages to have tighter-than-major
      upper-bounds
Michael Snoyman | 25 Feb 10:23 2014

Re: qualified imports, PVP and so on




On Tue, Feb 25, 2014 at 11:12 AM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
On 2014-02-25 at 07:44:45 +0100, Michael Snoyman wrote:

[...]

> * I know that back in the base 3/4 transition there was good reason for
> upper bounds on base. Today, it makes almost no sense: it simply prevents
> cabal from even *trying* to perform a compilation. Same goes with libraries
> like array and template-haskell, which make up most of the issue with
> testing of GHC 7.8 Stackage builds[3]. Can any PVP proponent explain why
> these upper bounds still help us on corelibs?

I assume by 'corelibs' you mean the set of non-upgradeable libs,
i.e. those tied to the compiler version? (E.g. `bytestring` would be
upgradeable, as opposed to `base` or `template-haskell`)


Yes, that's what I meant. I realize now corelibs wasn't the right term, but I don't think we *have* a correct term for this. I like your usage of upgradeable.
 
Well, `base` (together with the other few non-upgradeable libs) is
indeed a special case; also, in `base` usually care is taken to avoid
semantic changes (not visible at the type-signature level), so an upper
bound doesn't gain that much in terms of protecting against semantic
breakages.

Otoh, the situation changes if you have a library where you have
different versions, which are tied to different version ranges of base,
where you want Cabal to select the matching version. Admittedly, this is
a special case for when use of MIN_VERSION_base() wouldn't suffice, but
I wanted to give an example exploiting upper-bounds on the `base` lib.


That situation is technically possible, but highly unlikely to ever occur in practice. Consider what would have to happen:

foo-1 is released, which works with base 4.5 and 4.6. It has a version bound base >= 4.5 && < 4.7.
foo-2 is released, which only works with base 4.5. It changes its version bound to base >= 4.5 && < 4.6.

In other words, a later release of the package would have to drop support for newer GHCs. The far more likely scenario is that foo-1 simply didn't include upper bounds, and foo-2 adds them in. In that case, cabal will try to use foo-1, even though it won't build anyway.
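
Spelled out (foo and the ranges are the hypothetical ones from above):

   -- foo-1.cabal
   Build-Depends: base >=4.5 && <4.7
   -- foo-2.cabal
   Build-Depends: base >=4.5 && <4.6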

Does anyone have an actual example of base or template-haskell upper bounds that provided benefit?
 
There's one other possible minor benefit I can think of that upper
bounds give over compile errors: a more user-friendly message that
points to the reason for the failure, instead of requiring you to guess
what the actual cause of the compile error was. But for non-upgradeable
packages such as `base`, which do big major version jumps for almost
every release (mostly due to changes in GHC modules exposing internals
or adding type-class instances[1]), erring on the
confusing-compile-error side seems to provide more value.


I'd actually argue that this is a disadvantage. It's true that we want users to have a good experience, but the *best* experience would be to let upstream packages get fixed. Imagine a common build error caused by the removal of `catch` from Prelude in base 4.6. With upper bounds, a user gets the error message "doesn't work with base 4.6" and reports to the package maintainer. The package maintainer then needs to download GHC and try to compile his package before getting any idea what the problem is (if there even *is* a problem!).

With more verbose errors, a user could send a meaningful error report and, in many cases, a maintainer would be able to fix the problem without even needing to download a new version of the compiler.
 
So, as for `base` I mostly agree, that there seems to be little benefit
for upper bounds, *unless* a base3/4 situation comes up again in the
future. So, I'd suggest (for those who don't want to follow PVP with
`base`) to keep using at least a "super"-major upper bound, such as
'base < 5' to leave a door open for such an eventuality.


Cheers,
  hvr


 [1]: I'd argue (but I'd need to research this, to back it up with
      numbers) that we're often suffering from the PVP, because it
      requires us to perform major-version jumps mostly due to
      typeclasses, in order to protect against conflicts with possible
      non-hideable orphan-instances; and that (as some have suggested in
      past already), we might want to reconsider requiring only a minor
      bump on instance-additions, and discourage the orphan-instance
      business by requiring those packages to have tighter-than-major
      upper-bounds

+1, forcing major version bumps for each new instance just in case someone has an orphan instance is complete overkill.

Michael
Brandon Allbery | 25 Feb 16:12 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 1:44 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

I have a question for you.

Is it better to save a developer some work, or is it better to force that work onto end users?

Because we keep constantly seeing examples where saving the developer some upper bounds PVP work forces users to deal with unexpected errors, but since Haskell developers don't see that user pain it is considered irrelevant/nonexistent and certainly not any justification for saving developers some work.

Personally, I think any ecosystem which strongly prefers pushing versioning pain points onto end users instead of developers is doing itself a severe disservice.

Are there things that could be improved about versioning policy? Absolutely. But pushing problems onto end users is not an improvement.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Twan van Laarhoven | 25 Feb 16:28 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 25/02/14 16:12, Brandon Allbery wrote:
> On Tue, Feb 25, 2014 at 1:44 AM, Michael Snoyman <michael <at> snoyman.com
> <mailto:michael <at> snoyman.com>> wrote:
>
>     But that's only one half of the "package interoperability" issue. I face
>     this first hand on a daily basis with my Stackage maintenance. I spend far
>     more time reporting issues of restrictive upper bounds than I do with broken
>     builds from upstream changes. So I look at this as purely a game of
>     statistics: are you more likely to have code break because version 1.2 of
>     text changes the type of the map function and you didn't have an upper
>     bound, or because two dependencies of yours have *conflicting* version
>     bounds on a package like aeson[2]? In my experience, the latter occurs far
>     more often than the former.
>
>
> I have a question for you.
>
> Is it better to save a developer some work, or is it better to force that work
> onto end users?
>
> Because we keep constantly seeing examples where saving the developer some upper
> bounds PVP work forces users to deal with unexpected errors, but since Haskell
> developers don't see that user pain it is considered irrelevant/nonexistent and
> certainly not any justification for saving developers some work.
>
> Personally, I think any ecosystem which strongly prefers pushing versioning pain
> points onto end users instead of developers is doing itself a severe disservice.
>
> Are there things that could be improved about versioning policy? Absolutely. But
> pushing problems onto end users is not an improvement.

Strict upper bounds are horrible when a new version of, say, the base library
comes out. In reality 90% of the code will not break; it will just require a new
release with increased version bounds. These upper bounds actually *hurt* users,
because suddenly they can't use half of Hackage.

This reminds me of the situation of Firefox extensions. In earlier versions of 
the browser these came with strict upper bounds, saying "I work in Firefox 11 up 
to 13". But then every month or so when a new version came out, all extensions 
would stop working. Newer versions of the browser have switched to an 'assume it 
works' model, where problems are reported and only then will the extension be 
disabled.

So, violating upper-bounds should be a warning at most, perhaps for some kind of 
loose 'tested-with' upper bound. Additionally, we need a way to report build 
successes and failures to Hackage, and automatically update these 'tested-with' 
upper bounds.

In other words, make a distinction between upper bounds violations that mean 
"not known to work with versions >X" and "known not to work with versions >X".

Twan
Daniel Trstenjak | 25 Feb 16:38 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


Hi Brandon,

On Tue, Feb 25, 2014 at 10:12:29AM -0500, Brandon Allbery wrote:
> Is it better to save a developer some work, or is it better to force that work
> onto end users?

What is an end user? Someone installing a package containing an executable?
Then the package is an end point in the dependency graph and the PVP can
work pretty well for this case.

But if the package contains a library, then the end user is also the
developer, so you can only choose which kind of pain you prefer.

> Because we keep constantly seeing examples where saving the developer some
> upper bounds PVP work forces users to deal with unexpected errors, but since
> Haskell developers don't see that user pain it is considered irrelevant/
> nonexistent and certainly not any justification for saving developers some
> work.

I think that in most cases it doesn't really make much difference for the
end user whether they're seeing a package version mismatch or a compile
error.

Sure, the package version mismatch is more telling, but in most cases they
will be equally lost and have to ask for help.

Greetings,
Daniel
Brandon Allbery | 25 Feb 17:33 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 10:38 AM, Daniel Trstenjak <daniel.trstenjak <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 10:12:29AM -0500, Brandon Allbery wrote:
> Is it better to save a developer some work, or is it better to force that work
> onto end users?

What is an end user? Someone installing a package containing an executable?
Then the package is an end point in the dependency graph and the PVP can
work pretty well for this case.

But if the package contains a library, then the end user is also the
developer, so you can only choose which kind of pain you prefer.

*A* developer, but not the developer of the package with the loose upper bound or the package that refused to compile with incomprehensible errors because of it, and generally not in a position to recognize the reason for the errors because they don't know the internals of the package they're trying to use. And I am certain of this because I'm sitting in #haskell fielding questions from them multiple times a day when some package gets broken by an overly lax or missing upper bound.

Also note that overly strict versioning certainly also leads to breakage --- but it's reported clearly by cabal as a version issue, not as ghc vomiting up unexpected errors from something that is presented as a curated package that should build without problems.

-- 
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Michael Snoyman | 25 Feb 17:09 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Tue, Feb 25, 2014 at 5:12 PM, Brandon Allbery <allbery.b <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 1:44 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

I have a question for you.

Is it better to save a developer some work, or is it better to force that work onto end users?

Because we keep constantly seeing examples where saving the developer some upper bounds PVP work forces users to deal with unexpected errors, but since Haskell developers don't see that user pain it is considered irrelevant/nonexistent and certainly not any justification for saving developers some work.

Personally, I think any ecosystem which strongly prefers pushing versioning pain points onto end users instead of developers is doing itself a severe disservice.

Are there things that could be improved about versioning policy? Absolutely. But pushing problems onto end users is not an improvement.


I think it's a false dichotomy. I've received plenty of complaints from users about being unable to install newer versions of some dependency because a library that Yesod depends on has an unnecessarily strict upper bound. Are there situations where the PVP saves a user some pain? Yes. Are there situations where the PVP causes a user some pain? Yes.

It's disingenuous to frame this as a black and white "developer vs user" issue; it's far more complex than that. After a lot of experience, I believe the PVP - or at least strict adherence to it - is a net loss.

And I think the *real* solution is something like Stackage, where curators have taken care of the versioning pain points instead of either developers or end users. Linux distributions have been doing this for a long time. 

Michael
Carter Schonwald | 25 Feb 17:38 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

This thread is kinda missing an important point.  


Namely that on hackage now, the admins and trustees have the power to edit the cabal files to fix broken constraints. (As do maintainers of their own packages.)

Whether relaxing incorrectly conservative constraints or strengthening overly lax ones, this now doesn't require a rerelease to "fix". There are valid provenance reasons why that might not make sense in all cases.

Unless I'm missing the point, doesn't that solve most of the matter?
-carter

On Tuesday, February 25, 2014, Michael Snoyman <michael <at> snoyman.com> wrote:



[...]
Michael Snoyman | 25 Feb 17:58 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Tue, Feb 25, 2014 at 6:38 PM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
This thread is kinda missing an important point.  

Namely that on hackage now, the admins and trustees have the power to edit the cabal files to fix broken constraints. (As do maintainers of their own packages.)

Whether relaxing incorrectly conservative constraints or strengthening overly lax ones, this now doesn't require a rerelease to "fix". There are valid provenance reasons why that might not make sense in all cases.

Unless I'm missing the point, doesn't that solve most of the matter?
-carter



The question would still remain: who's responsible for making those changes, and what is the default position for the version bounds? We could default to leaving version bounds off, and add them after the fact as necessary. This would reduce developer and Hackage maintainer overhead, but some users may get the "scary" error messages[1]. Or we could default to the PVP approach, and then increase the work on developers/maintainers, with the flip side that (1) users will never get the "scary" error messages, and (2) until developers/maintainers make the change, users may be blocked from even attempting to compile packages together.

There's also the issue that, currently, Hackage2 has turned off developer abilities to change version bounds, so all of the version tweaking onus would fall to admins and trustees.

Overall, I don't see this as a big improvement over the PVP status quo. It's not any harder for me to upload version 1.0.2.1 of a package with a tweaked version bound than to go to the Hackage web interface and manually edit version 1.0.2's cabal file. What I see the editing feature as very useful for is if we want to add upper bounds after the fact.

Michael

[1] Which I still think have value, since they are far more informative to a package author.
Erik Hesselink | 25 Feb 18:05 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I believe this feature is currently turned off, because it comes with its own set of problems.

Erik

On Feb 25, 2014 5:38 PM, "Carter Schonwald" <carter.schonwald <at> gmail.com> wrote:
[...]

Edward Kmett | 25 Feb 18:27 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

This is currently disabled if you go to try it.


On Tue, Feb 25, 2014 at 11:38 AM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
[...]

Vincent Hanquez | 25 Feb 18:17 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 15:12, Brandon Allbery wrote:
> On Tue, Feb 25, 2014 at 1:44 AM, Michael Snoyman <michael <at> snoyman.com 
> <mailto:michael <at> snoyman.com>> wrote:
>
>     But that's only one half of the "package interoperability" issue.
>     I face this first hand on a daily basis with my Stackage
>     maintenance. I spend far more time reporting issues of restrictive
>     upper bounds than I do with broken builds from upstream changes.
>     So I look at this as purely a game of statistics: are you more
>     likely to have code break because version 1.2 of text changes the
>     type of the map function and you didn't have an upper bound, or
>     because two dependencies of yours have *conflicting* version
>     bounds on a package like aeson[2]? In my experience, the latter
>     occurs far more often than the former.
>
>
> I have a question for you.
>
> Is it better to save a developer some work, or is it better to force 
> that work onto end users?
>
As a *user* of many libraries, I've had more problems with libraries that
follow the PvP religiously than the other way around. I usually like to
have the latest and greatest libraries, especially text, aeson, and such,
and there I have to manually bump the dependencies of packages I depend on
until the developers get around to updating the package on hackage (which
sometimes takes many weeks).

As a *developer*, following the PvP would cost me a lot of my *free*
time. This is particularly true when the surface of contact with a
library is small; it's very unlikely that I will run into an API
change. When I do, I quickly release a new package that accounts for the
API change, or I can put an upper bound in if I can't make the necessary
changes quickly enough. Nowadays I usually find out quite quickly with
stackage, most of the time before any users get bitten.

Some other times, I'm testing some development ghc or some new unreleased
libraries, and I need to remove upper bounds from packages so that I can
test something.

Anyway, there are lots of reasons why the PvP doesn't work fully. It
solves some problems for sure, but sadly sweeps all the other problems
under the carpet. One problem is that a single set of numbers doesn't
properly account for API complexity and stability, which might differ
between different modules of the same package.

-- 
Vincent
Carter Schonwald | 25 Feb 18:26 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Indeed.

So let's think about how to add module types or some approximation thereof to GHC? (Seriously, that's the only sane "best solution" I can think of, but it's not something that can be done casually.) There's also the fact that any module system design will have to deal with type class instances in a more explicit fashion than we've done thus far.


On Tue, Feb 25, 2014 at 12:17 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
[...]

Vincent Hanquez | 25 Feb 22:23 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 17:26, Carter Schonwald wrote:
> indeed.
>
> So lets think about how to add module types or some approximation 
> thereof to GHC? (seriously, thats the only sane "best solution" i can 
> think of, but its not something that can be done casually). Theres 
> also the fact that any module system design will have to explicitly 
> deal with type class instances in a more explicit fashion than we've 
> done thus far.

Yes. I think that's the only way a PvP could actually work. I would
imagine that it could be quite fiddly, which is the reason why it hasn't
been done yet.
But clearly, for this scheme to work, it needs to remove the human part
of the equation as much as possible.

It still wouldn't be perfect, as there would still be things that can't
be accounted for (bugs, laziness, performance issues, ...), but clearly
it would work better than a simple flat sequence of numbers that is
supposed to represent many different aspects of a package and the
author's understanding of a policy.

-- 
Vincent
Bardur Arantsson | 26 Feb 07:10 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 18:26, Carter Schonwald wrote:
> indeed.
> 
> So lets think about how to add module types or some approximation thereof
> to GHC? (seriously, thats the only sane "best solution" i can think of, but
> its not something that can be done casually). Theres also the fact that any
> module system design will have to explicitly deal with type class instances
> in a more explicit fashion than we've done thus far.

This may be relevant:

   http://plv.mpi-sws.org/backpack/

Regards,
Bart Massey | 26 Feb 19:32 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Bardur: Yes! I want *this*! In addition to short-circuiting this epic
argument, it looks way better. I've always wished Haskell had the ML
module system, but this looks even better than that in some ways. So,
yes. Let's get backpack into GHC, require it on Hackage, and get on
with it. Note that backpack by itself isn't sufficient, since it only
guarantees type-compatibility, not semantic compatibility. We would
have to add additional rules requiring at least partial semantic
compatibility of any changes to the semantics at a given name. --Bart

On Tue, Feb 25, 2014 at 10:10 PM, Bardur Arantsson <spam <at> scientician.net> wrote:
> On 2014-02-25 18:26, Carter Schonwald wrote:
>> indeed.
>>
>> So lets think about how to add module types or some approximation thereof
>> to GHC? (seriously, thats the only sane "best solution" i can think of, but
>> its not something that can be done casually). Theres also the fact that any
>> module system design will have to explicitly deal with type class instances
>> in a more explicit fashion than we've done thus far.
>
> This may be relevant:
>
>    http://plv.mpi-sws.org/backpack/
>
> Regards,
>
Carter Schonwald | 26 Feb 19:37 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Bart,  backpack is just a first design iteration. We shouldn't regard it as the final answer, but as a starting point for understanding / exploring the design space. 

On Wednesday, February 26, 2014, Bart Massey <bart <at> cs.pdx.edu> wrote:

[...]
Gregory Collins | 25 Feb 20:23 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to zero, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I've long maintained that the solution to this issue should be tooling. The dependency graph that you stipulate in your cabal file should be a *warrant* that "this package is known to be compatible with these versions of these packages". If a new major version of package "foo" comes out, a bumper tool should be able to try relaxing the dependency and seeing if your package still builds, bumping your version number accordingly based on the PVP rules. Someone released a tool to attempt to do this a couple of days ago --- I haven't tried it yet, but surely with a bit of group effort we can improve these tools so that they are really fast and easy to use.

Of course, people who want to follow PVP are also going to need tooling to make sure their programs still build in the future because so many people have broken the policy in the past -- that's where proposed kludges like "cabal freeze" are going to come in.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Daniel Trstenjak | 25 Feb 21:17 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


On Tue, Feb 25, 2014 at 11:23:44AM -0800, Gregory Collins wrote:
> Someone released a tool to attempt to do this a couple of days ago ---
> I haven't tried it yet but surely with a bit of group effort we can
> improve these tools so that they really fast and easy to use.

That's an amazing tool ... ;)

> Of course, people who want to follow PVP are also going to need tooling to make
> sure their programs still build in the future because so many people have
> broken the policy in the past -- that's where proposed kludges like "cabal
> freeze" are going to come in.

If I understood it correctly, cabal >1.19 supports the option '--allow-newer'
to ignore upper bounds, which might solve several of the issues here:
upper bounds could be set but still ignored if desired.
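For example (a hedged sketch of the usage as I understand it from the cabal-install docs; the package name is made up):

    # ignore all upper bounds when solving dependencies
    cabal install --allow-newer
    # or relax only the bounds placed on one particular package
    cabal install --allow-newer=monad-logger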

Greetings,
Daniel
Edward Kmett | 25 Feb 21:26 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

It alleviates the common case, but it doesn't resolve the scenario where someone put a hard bound in for a reason: a known change in semantics or a known incompatibility.


Michael Snoyman | 25 Feb 21:38 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg <at> gregorycollins.net> wrote:

On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to zero, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.


I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works by assuming third parties to adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)
* People make mistakes. I've been bitten by people making breaking changes in point releases by mistake. If the only way your build will succeed is by assuming no one will ever mess up, you're in trouble.
* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I absolutely believe that, if you want to have code that builds reliably, you have to specify all of your deep dependencies. That's what I do for any production software, and it's what I recommend to anyone who will listen to me. Trying to push this off as a responsibility of every Hackage package author is (1) shifting the burden to the wrong place, and (2) irresponsible, since some maintainer out in the rest of the world has no obligation to make sure your code keeps working. That's your responsibility.
 
I've long maintained that the solution to this issue should be tooling. The dependency graph that you stipulate in your cabal file should be a *warrant* that "this package is known to be compatible with these versions of these packages". If a new major version of package "foo" comes out, a bumper tool should be able to try relaxing the dependency and seeing if your package still builds, bumping your version number accordingly based on the PVP rules. Someone released a tool to attempt to do this a couple of days ago --- I haven't tried it yet but surely with a bit of group effort we can improve these tools so that they're really fast and easy to use.

Of course, people who want to follow PVP are also going to need tooling to make sure their programs still build in the future because so many people have broken the policy in the past -- that's where proposed kludges like "cabal freeze" are going to come in.


This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.

Michael
Vincent Hanquez | 25 Feb 22:37 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 20:38, Michael Snoyman wrote:
>
>
>
> On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins 
> <greg <at> gregorycollins.net <mailto:greg <at> gregorycollins.net>> wrote:
>
> I really don't like this appeal to authority. I don't know who the 
> "royal we" is that you are referring to here, and I don't accept the 
> premise that the rest of us must simply adhere to a policy because "it 
> was decided." "My side" as you refer to it is giving concrete negative 
> consequences to the PVP. I'd expect "your side" to respond in kind, 
> not simply assert that we're "breaking Hackage" and other such hyperbole.
>
Strongly agreed.

>
>     Of course, people who want to follow PVP are also going to need
>     tooling to make sure their programs still build in the future
>     because so many people have broken the policy in the past --
>     that's where proposed kludges like "cabal freeze" are going to
>     come in.
>
>
> This is where we apparently fundamentally disagree. cabal freeze IMO 
> is not at all a kludge. It's the only sane approach to reliable 
> builds. If I ran my test suite against foo version 1.0.1, performed 
> manual testing on 1.0.1, did my load balancing against 1.0.1, I don't 
> want some hotfix build to automatically get upgraded to version 1.0.2, 
> based on the assumption that foo's author didn't break anything.
>

This is probably also the only sane approach at the moment for safe
builds. Considering the whole hackage infrastructure is quite insecure
at the moment (http download/upload, no package signing, etc.), freezing
your build packages after you have audited them is probably the only
sensible way to ship secure products.

In a production environment (at two different workplaces), I've seen two
approaches for proper builds:

* still using hackage directly, but pinning each package with a
cryptographic hash on your build site.
* a private hackage instance where packages are manually imported; the
build uses this exclusively.

Using hackage directly (+ depending on the PVP) is at the moment too much
like playing Russian roulette.

--

-- 
Vincent
Gregory Collins | 25 Feb 22:52 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg <at> gregorycollins.net> wrote:
Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

This is not an appeal to authority, it's an appeal to consensus. The community comes together to work on lots of different projects like Hackage and the platform and we have established procedures and policies (like the PVP and the Hackage platform process) to manage this. I think the following facts are uncontroversial:
  • a Hackage package versioning policy exists and has been published in a known location
  • we don't have another one
  • you're violating it
Now you're right to argue that the PVP as currently constituted causes problems, i.e. "I can't upgrade to new-shiny-2.0 quickly enough" and "I manage 200 packages and you're driving me insane". And new major base versions cause a month of churn before everything goes green again. Everyone understands this. But the solution is either to vote to change the policy or to write tooling to make your life less insane, not just to ignore it, because the situation this creates (programs bitrot and become unbuildable over time at 100% probability) is really disappointing.

Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works by assuming third parties to adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

There's a strawman in there -- in an ideal world PVP violations would be rare and would be considered bugs. Also, if it were up to me we'd be machine-checking PVP compliance. I don't know what you're talking about re: "irresponsible development". In the scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2" depends on any version of "bar", and then when "bar-2.0" is released "foo-1.2" stops building and there's no way to fix this besides trial and error because the solver doesn't have enough information to do its work (and it's been lied to!!!). The only practical solutions right now are to:
  • commit to maintaining every program you've ever written on the hackage upgrade treadmill forever, or
  • write down the exact versions of all of the libraries you need in the transitive closure of the dependency graph.
#2 is best practice for repeatable builds anyways and you're right that cabal freeze will help here, but it doesn't help much for all the programs written before "cabal freeze" comes out. 
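For concreteness, the kind of pinned-versions file I mean would look something like this (a sketch in the cabal.config constraints style that a freeze command could emit; the packages and versions are hypothetical):

    constraints: text ==1.1.0.0,
                 aeson ==0.7.0.2,
                 monad-logger ==0.2.3.2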

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)

Of course. But compute the probability of this occurring (rare) vs the probability of breakage given no upper bounds (100% as t -> ∞). Think about what you're saying semantically when you say you depend only on "foo > 3": "foo version 4.0 or any later version". You can't own up to this contract.

* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I think you're making my point for me -- given that this paragraph you wrote is 100% correct, it makes sense for cabal not to try to build against the new version of a dependency until the package maintainer has checked that things still work and given the solver the go-ahead by bumping the package upper bound.

This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.

This wouldn't be an assumption, Michael -- the tool should run the build and the test suites. We'd bump version on green tests.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Omari Norman | 25 Feb 23:15 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 4:52 PM, Gregory Collins
<greg <at> gregorycollins.net> wrote:

> write down the exact versions of all of the libraries you need in the
> transitive closure of the dependency graph.

I cobbled together a rudimentary tool that does just that:

https://hackage.haskell.org/package/sunlight

the idea being that, to my knowledge, there were no tools making it
easy to verify that a package builds with the *minimum* specified
versions.  Typical CI testing will eagerly pull the latest
dependencies.

sunlight builds in a sandbox, runs your tests, and snapshots the
resulting GHC package database.  It can do this for multiple GHC
versions, and will do one build with the minimum versions possible (it
does require that you specify a minimum version for each dependency,
but not a maximum).  At least then you can consult a record showing
the exact package graph that actually worked.
Michael Snoyman | 26 Feb 06:39 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Tue, Feb 25, 2014 at 11:52 PM, Gregory Collins <greg <at> gregorycollins.net> wrote:
On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg <at> gregorycollins.net> wrote:
Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

This is not an appeal to authority, it's an appeal to consensus. The community comes together to work on lots of different projects like Hackage and the platform and we have established procedures and policies (like the PVP and the Hackage platform process) to manage this. I think the following facts are uncontroversial:
  • a Hackage package versioning policy exists and has been published in a known location
  • we don't have another one
  • you're violating it
Now you're right to argue that the PVP as currently constituted causes problems, i.e. "I can't upgrade to new-shiny-2.0 quickly enough" and "I manage 200 packages and you're driving me insane". And new major base versions cause a month of churn before everything goes green again. Everyone understands this. But the solution is either to vote to change the policy or to write tooling to make your life less insane, not just to ignore it, because the situation this creates (programs bitrot and become unbuildable over time at 100% probability) is really disappointing.


You talk about voting on the policy as if that's the natural thing to do. When did we vote to accept the policy in the first place? I don't remember ever putting my name down as "I agree, this makes sense." Talking about voting, violating, complying, etc, in a completely open system like Hackage, makes no sense, and is why your comments come off as an appeal to authority.

If you want to have more rigid rules on what packages can be included, start a downstream, PVP-only Hackage, and don't allow in violating packages. If it takes off, and users have demonstrated that they care very much about PVP compliance, then us PVP naysayers will have hard evidence that our beliefs were mistaken. Right now, it's just a few people constantly accusing us of violations and insisting we spend a lot more work on a policy we believe to be flawed.
 
Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works by assuming third parties to adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

There's a strawman in there -- in an ideal world PVP violations would be rare and would be considered bugs.

Then you're missing my point completely. You're advocating making package management policy based on developer practices of not pinning down deep dependencies. My point is that *bugs happen*. And as I keep saying, it's not just build-time bugs: runtime bugs are possible and far worse. I see no reason that package authors should go through lots of effort to encourage bad practice.
 
Also, if it were up to me we'd be machine-checking PVP compliance. I don't know what you're talking about re: "irresponsible development". In the scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2" depends on any version of "bar", and then when "bar-2.0" is released "foo-1.2" stops building and there's no way to fix this besides trial and error because the solver doesn't have enough information to do its work (and it's been lied to!!!). The only practical solutions right now are to:
  • commit to maintaining every program you've ever written on the hackage upgrade treadmill forever, or
  • write down the exact versions of all of the libraries you need in the transitive closure of the dependency graph.
#2 is best practice for repeatable builds anyways and you're right that cabal freeze will help here, but it doesn't help much for all the programs written before "cabal freeze" comes out. 


Playing the time machine game is silly. Older programs are broken. End of story. If we all agree to start using the PVP now, it won't fix broken programs. If we release "cabal freeze" now, it won't fix broken programs. But releasing "cabal freeze" *will* prevent this problem from happening in the future.
 
But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)

Of course. But compute the probability of this occurring (rare) vs the probability of breakage given no upper bounds (100% as t -> ∞). Think about what you're saying semantically when you say you depend only on "foo > 3": "foo version 4.0 or any later version". You can't own up to this contract.


That's because you're defining the build-depends to mean "I guarantee this to be the case." I could just as easily argue that `foo < 4` is also a lie: how do you know that it *won't* build? This argument has been had many times, please stop trying to make it seem like a clear-cut argument.
 
* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I think you're making my point for me -- given that this paragraph you wrote is 100% correct, it makes sense for cabal not to try to build against the new version of a dependency until the package maintainer has checked that things still work and given the solver the go-ahead by bumping the package upper bound.


Again, you're missing it. If there's a point release, PVP-based code will automatically start using that new point release. That's simply not good practice for a production system.
 
This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.

This wouldn't be an assumption, Michael -- the tool should run the build and the test suites. We'd bump version on green tests.


Maybe you write perfect code every time. But I've seen this process many times in the past:

* Work on version 2 of an application.
* Create a staging build of version 2.
* Run automated tests on version 2.
* QA manually tests version 2.
* Release version 2.
* Three weeks later, discover a bug.
* Write a hotfix, deploy to staging, run automated tests, QA the changed code, and ship.

In these circumstances, it would be terrible if my build system automatically accepted a new point release of a package on Hackage because the PVP says it's OK. Yes, we should all have 100% test coverage, with automated testing that covers all functionality of the product, and every single release would have full test coverage. But we all know that's not the real world. Letting a build system throw variables into an equation is irresponsible.

Michael
Gregory Collins | 26 Feb 09:03 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


On Tue, Feb 25, 2014 at 9:39 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
You talk about voting on the policy as if that's the natural thing to do. When did we vote to accept the policy in the first place? I don't remember ever putting my name down as "I agree, this makes sense." Talking about voting, violating, complying, etc, in a completely open system like Hackage, makes no sense, and is why your comments come off as an appeal to authority.

Michael, where do I start. This policy was written and put into place in October 2007, based on an earlier proposal by Bulat Ziganshin from 2006. Simon Marlow wrote a draft on the wiki, the matter was discussed on haskell-cafe <at> , in #ghc, on the wiki, and presumably in person. Consensus was reached and the policy has been periodically updated by the community since. I don't want to "appeal to authority" here, but from the logs it's clear that the people involved were Simon, Duncan, Ian, Don, etc., i.e. the people actually responsible for building and running all the crap we're talking about in the first place, including Hackage itself.

I know it furthers your argument to question the policy's legitimacy, and I'm sorry consensus was reached without your agreement when it was drafted, but it wasn't dropped from the sky by unaccountable people just to inconvenience you.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Herbert Valerio Riedel | 25 Feb 23:52 2014

Re: qualified imports, PVP and so on

On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:

[...]

> * The PVP itself does *not* guarantee reliable builds in all cases. If a
> transitive dependency introduces new exports, or provides new typeclass
> instances, a fully PVP-compliant stack can be broken. (If anyone doubts
> this claim, let me know, I can spell out the details. This has come up in
> practice.)

...are you simply referring to the fact that in order to guarantee
PVP-semantics of a package version, one has to take care to restrict the
version bounds of that package's build-deps in such a way that any API
entities leaking from its (direct) build-deps (e.g. typeclass instances
or other re-exported entities) are not a function of the "internal"
degrees of freedom the build-dep version-ranges provide? Or is there
more to it?
Michael Snoyman | 26 Feb 06:45 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 12:52 AM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:

[...]

> * The PVP itself does *not* guarantee reliable builds in all cases. If a
> transitive dependency introduces new exports, or provides new typeclass
> instances, a fully PVP-compliant stack can be broken. (If anyone doubts
> this claim, let me know, I can spell out the details. This has come up in
> practice.)

...are you simply referring to the fact that in order to guarantee
PVP-semantics of a package version, one has to take care to restrict the
version bounds of that package's build-deps in such a way that any API
entities leaking from its (direct) build-deps (e.g. typeclass instances
or other re-exported entities) are not a function of the "internal"
degrees of freedom the build-dep version-ranges provide? Or is there
more to it?

That's essentially it. I'll give one of the examples I ran into. (Names omitted on purpose, if the involved party wants to identify himself, please do so, I just didn't feel comfortable doing so without your permission.) Version 0.2 of monad-logger included MonadLogger instances for IO and other base monads. For various reasons, these were removed, and the version bumped to 0.3. This is in full compliance with the PVP.

persistent depends on monad-logger. It can work with either version 0.2 or 0.3 of monad-logger, and the cabal file allows this via `monad-logger >= 0.2 && < 0.4` (or something like that). Again, full PVP compliance.

A user wrote code against persistent when monad-logger version 0.2 was available. He used a function that looked like:

runDatabase :: MonadLogger m => Persistent a -> m a

(highly simplified). In his application, he used this in the IO monad. He depended on persistent with proper lower and upper bounds. Once again, full PVP compliance.

Once I released version 0.3 of monad-logger, his next build automatically upgraded him to monad-logger 0.3, and suddenly his code broke, because there's no MonadLogger instance for IO.
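To make the failure mode concrete, here is a minimal, self-contained sketch; every name below is a stand-in for the real monad-logger/persistent API, not the actual code:

    -- stand-in for Control.Monad.Logger.MonadLogger
    class Monad m => MonadLogger m
    instance MonadLogger IO   -- present in "0.2", removed in "0.3"

    -- stand-in for the persistent-style function
    runDatabase :: MonadLogger m => Int -> m String
    runDatabase n = return ("row " ++ show n)

    -- the user's code, specialized to IO
    main :: IO ()
    main = runDatabase 42 >>= putStrLn

    -- Deleting the "instance MonadLogger IO" line simulates the 0.2 -> 0.3
    -- upgrade: main stops compiling with "No instance for (MonadLogger IO)".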

Now *if* the program had been using a system like "cabal freeze" or the like, this could have never happened: cabal wouldn't be trying to automatically upgrade to monad-logger 0.3.

Will this kind of bug happen all the time? No, I doubt it. But if the point of the PVP is to guarantee that builds will work (ignoring runtime concerns), and the PVP clearly fails at that job as well, we really need to reassess putting ourselves through this pain and suffering.

Michael
Herbert Valerio Riedel | 26 Feb 10:05 2014

Re: qualified imports, PVP and so on

On 2014-02-26 at 06:45:30 +0100, Michael Snoyman wrote:
> On Wed, Feb 26, 2014 at 12:52 AM, Herbert Valerio Riedel wrote:
>> On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:
>>
>> [...]
>>
>> > * The PVP itself does *not* guarantee reliable builds in all cases. If a
>> > transitive dependency introduces new exports, or provides new typeclass
>> > instances, a fully PVP-compliant stack can be broken. (If anyone doubts
>> > this claim, let me know, I can spell out the details. This has come up in
>> > practice.)
>>
>> ...are you simply referring to the fact that in order to guarantee
>> PVP-semantics of a package version, one has to take care to restrict the
>> version bounds of that package's build-deps in such a way that any API
>> entities leaking from its (direct) build-deps (e.g. typeclass instances
>> or other re-exported entities) are not a function of the "internal"
>> degrees of freedom the build-dep version-ranges provide? Or is there
>> more to it?
>
> That's essentially it. I'll give one of the examples I ran into. (Names
> omitted on purpose, if the involved party wants to identify himself, please
> do so, I just didn't feel comfortable doing so without your permission.)
> Version 0.2 of monad-logger included MonadLogger instances for IO and other
> base monads. For various reasons, these were removed, and the version
> bumped to 0.3. This is in full compliance with the PVP.
>
> persistent depends on monad-logger. It can work with either version 0.2 or
> 0.3 of monad-logger, and the cabal file allows this via `monad-logger >=
> 0.2 && < 0.4` (or something like that). Again, full PVP compliance.
>
> A user wrote code against persistent when monad-logger version 0.2 was
> available. He used a function that looked like:
>
> runDatabase :: MonadLogger m => Persistent a -> m a
>
> (highly simplified). In his application, he used this in the IO monad. He
> depended on persistent with proper lower and upper bounds. Once again, full
> PVP compliance.
>
> Once I released version 0.3 of monad-logger, his next build automatically
> upgraded him to monad-logger 0.3, and suddenly his code broke, because
> there's no MonadLogger instance for IO.
>
> Now *if* the program had been using a system like "cabal freeze" or the
> like, this could have never happened: cabal wouldn't be trying to
> automatically upgrade to monad-logger 0.3.
>
> Will this kind of bug happen all the time? No, I doubt it. But if the point
> of the PVP is to guarantee that builds will work (ignoring runtime
> concerns), and the PVP clearly fails at that job as well, we really need to
> reassess putting ourselves through this pain and suffering.

From my point of view, I'd argue that 

 a) 'persistent' failed to live up to the "spirit" of the PVP contract,
    i.e. to expose a "contact-surface" which satisfies certain
    invariants within specific package-version ranges.

 b) However, the PVP can be blamed as well, as in its current form it
    doesn't explicitly address the issue of API-leakage from transitive
    build-dependencies. [1]

The question for me now is whether the PVP is fixable in this respect,
and at what cost.

Moreover, it seems to me, it always comes down to type-class instances
causing most problems with the PVP (either by requiring version-bump
cascades throughout the PVP-adhering domain of Hackage, or by their
hard-to-constrain leakage through package/module boundaries); maybe we
need to address this issue at the language level and provide some facility
for limiting the propagation of type-class instances first.

 [1]: An alternative to what I'm suggesting in 'a)' (i.e. that it'd be
      `persistent`'s obligation), could be that the package you
      mentioned (which broke due to monad-logger having a non-monotonic
      API change), might become required to include packages supplying
      the instances it depends upon in its build-depends, thus
      turning a transitive dep into a direct dependency.

Cheers,
  hvr
Michael Snoyman | 26 Feb 10:45 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 11:05 AM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
On 2014-02-26 at 06:45:30 +0100, Michael Snoyman wrote:
> On Wed, Feb 26, 2014 at 12:52 AM, Herbert Valerio Riedel wrote:
>> On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:
>>
>> [...]
>>
>> > * The PVP itself does *not* guarantee reliable builds in all cases. If a
>> > transitive dependency introduces new exports, or provides new typeclass
>> > instances, a fully PVP-compliant stack can be broken. (If anyone doubts
>> > this claim, let me know, I can spell out the details. This has come up in
>> > practice.)
>>
>> ...are you simply referring to the fact that in order to guarantee
>> PVP-semantics of a package version, one has to take care to restrict the
>> version bounds of that package's build-deps in such a way that any API
>> entities leaking from its (direct) build-deps (e.g. typeclass instances
>> or other re-exported entities) are not a function of the "internal"
>> degrees of freedom the build-dep version-ranges provide? Or is there
>> more to it?
>
> That's essentially it. I'll give one of the examples I ran into. (Names
> omitted on purpose, if the involved party wants to identify himself, please
> do so, I just didn't feel comfortable doing so without your permission.)
> Version 0.2 of monad-logger included MonadLogger instances for IO and other
> base monads. For various reasons, these were removed, and the version
> bumped to 0.3. This is in full compliance with the PVP.
>
> persistent depends on monad-logger. It can work with either version 0.2 or
> 0.3 of monad-logger, and the cabal file allows this via `monad-logger >=
> 0.2 && < 0.4` (or something like that). Again, full PVP compliance.
>
> A user wrote code against persistent when monad-logger version 0.2 was
> available. He used a function that looked like:
>
> runDatabase :: MonadLogger m => Persistent a -> m a
>
> (highly simplified). In his application, he used this in the IO monad. He
> depended on persistent with proper lower and upper bounds. Once again, full
> PVP compliance.
>
> Once I released version 0.3 of monad-logger, his next build automatically
> upgraded him to monad-logger 0.3, and suddenly his code broke, because
> there's no MonadLogger instance for IO.
>
> Now *if* the program had been using a system like "cabal freeze" or the
> like, this could have never happened: cabal wouldn't be trying to
> automatically upgrade to monad-logger 0.3.
>
> Will this kind of bug happen all the time? No, I doubt it. But if the point
> of the PVP is to guarantee that builds will work (ignoring runtime
> concerns), and the PVP clearly fails at that job as well, we really need to
> reassess putting ourselves through this pain and suffering.

From my point of view, I'd argue that

 a) 'persistent' failed to live up to the "spirit" of the PVP contract,
    i.e. to expose a "contact-surface" which satisfies certain
    invariants within specific package-version ranges.


How would persistent have done better? AFAICT, the options are:

1. Do what I did: state a true version dependency on monad-logger, that it works with both version 0.2 and 0.3.
2. Constrain it to one or the other, which would be a falsehood that would restrict users' ability to use the package.

Let's try it a different way. If transformers removed a MonadIO instance between version 2 and 3 of the library, should that mean that every single package with type signatures involving MonadIO should be constrained to one specific version of transformers?
 
 b) However, the PVP can be blamed as well, as in its current form it
    doesn't explicitly address the issue of API-leakage from transitive
    build-dependencies. [1]

The question for me now is whether the PVP is fixable in this respect,
and at what cost.

Moreover, it seems to me, it always comes down to type-class instances
causing most problems with the PVP (either by requiring version-bump
cascades throughout the PVP-adhering domain of Hackage, or by their
hard-to-constrain leakage through package/module boundaries); maybe we
need to address this issue at the language level and provide some facility
for limiting the propagation of type-class instances first.


There's one other issue, which is reexports. As an extreme example, imagine:

* Version 1 of the foo package has the Foo module, and it exports foo1 and foo2.
* Version 2 of the foo package has the Foo module, and it exports foo1.
* Version 1 of the bar package has the Bar module, defined as:

module Bar (module Foo) where
import Foo
* According to the PVP, the bar package can have a version bound on foo of `foo > 1 && < 2.1`.
* User code that depends on foo2 being exported from Bar will be broken by the transitive update of foo.

The example's extreme, but it's the same basic problem as typeclass instances.
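Spelled out as code, with the three hypothetical files condensed into one listing:

    -- Foo.hs, in foo version 1 (version 2 drops foo2)
    module Foo (foo1, foo2) where
    foo1, foo2 :: Int
    foo1 = 1
    foo2 = 2

    -- Bar.hs, in bar version 1: re-exports whatever Foo exports
    module Bar (module Foo) where
    import Foo

    -- Main.hs, in the user's program, which depends only on bar
    module Main where
    import Bar
    main :: IO ()
    main = print foo2   -- breaks under foo version 2, though bar is unchanged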

Michael
 


  [1]: An alternative to what I'm suggesting in 'a)' (i.e. that it'd be
       `persistent`'s obligation), could be that the package you
       mentioned (which broke due to monad-logger having a non-monotonic
       API change), might become required to include packages supplying
       the instances it depends upon in its build-depends, thus
       turning a transitive dep into a direct dependency.


I don't think I follow this comment, sorry.
 
Cheers,
  hvr

Herbert Valerio Riedel | 26 Feb 11:22 2014

Re: qualified imports, PVP and so on

On 2014-02-26 at 10:45:40 +0100, Michael Snoyman wrote:

[...]

>> >> ...are you simply referring to the fact that in order to guarantee
>> >> PVP-semantics of a package version, one has to take care to restrict the
>> >> version bounds of that package's build-deps in such a way that any API
>> >> entities leaking from its (direct) build-deps (e.g. typeclass instances
>> >> or other re-exported entities) are not a function of the "internal"
>> >> degrees of freedom the build-dep version-ranges provide? Or is there
>> >> more to it?

[...]

>> From my point of view, I'd argue that
>>
>>  a) 'persistent' failed to live up to the "spirit" of the PVP contract,
>>     i.e. to expose a "contact-surface" which satisfies certain
>>     invariants within specific package-version ranges.

> How would persistent have done better? AFAICT, the options are:
>
> 1. Do what I did: state a true version dependency on monad-logger, that it
> works with both version 0.2 and 0.3.

Yes, "persistent" itself does in fact work with both major versions of
"monad-logger", but alas the API reachable through depending solely on
"persistent" leaks details of the underlying monad-logger version used.

...but the PVP's primary statement is to define how a package shall behave
from the point of view of its users (where by user I mean a package
build-depending on `persistent`). So...

> 2. Constrain it to one or the other, which would be a falsehood that would
> restrict users' ability to use the package.

...this would actually be what I'd interpret the PVP to
expect/require from "persistent" in order to satisfy its goal to shield
the package's users from incompatible changes.

> Let's try it a different way. If transformers removed a MonadIO instance
> between version 2 and 3 of the library, should that mean that every single
> package with type signatures involving MonadIO should be constrained to one
> specific version of transformers?

yes, that'd be what I'm suggesting here (the [1] footnote is a different
suggestion for the same problem though)

>>  b) However, the PVP can be blamed as well, as in its current form it
>>     doesn't explicitly address the issue of API-leakage from transitive
>>     build-dependencies. [1]
>>
>> The question for me now is whether the PVP is fixable in this respect,
>> and at what cost.
>>
>> Moreover, it seems to me, it always comes down to type-class instances
>> causing most problems with the PVP (either by requiring version-bump
>> cascades throughout the PVP-adhering domain of Hackage, or by their
>> hard-to-constrain leakage through package/module boundaries); maybe we
>> need to address this issue at the language level and provide some facility
>> for limiting the propagation of type-class instances first.
>>
>>
> There's one other issue, which is reexports. As an extreme example, imagine:
>
> * Version 1 of the foo package has the Foo module, and it exports foo1 and
> foo2.
> * Version 2 of the foo package has the Foo module, and it exports foo1.
> * Version 1 of the bar package has the Bar module, defined as:

Yes, I'm well aware of this problem, but that's easier to control, as
you can use explicit import/export lists to constrain what entities you
expose to direct users of your package. That's what I'm doing e.g. in

 http://hackage.haskell.org/package/deepseq-generics-0.1.1.1/docs/Control-DeepSeq-Generics.html

where I'm explicitly naming the entities I re-export from
Control.DeepSeq for convenience. (However, I'm lacking such a facility
for instances)

>
> module Bar (module Foo) where
> import Foo
> * According to the PVP, the bar package can have a version bound on foo of
> `foo > 1 && < 2.1`.
> * User code that depends on foo2 being exported from Bar will be broken by
> the transitive update of foo.
>
> The example's extreme, but it's the same basic problem as typeclass
> instances.

[...]

>>  [1]: An alternative to what I'm suggesting in 'a)' (i.e. that it'd be
>>       `persistent`'s obligation), could be that the package you
>>       mentioned (which broke due to monad-logger having a non-monotonic
>>       API change), might become required to include packages supplying
>>       the instances it depends upon in its build-depends, thus
>>       turning a transitive dep into a direct dependency.

> I don't think I follow this comment, sorry.

I'm basically just saying that the package which used "persistent"
ought to add "monad-logger ==0.2.*" to its direct build-dependencies, as
it depends on an instance provided by monad-logger. The huge down-side
is that you'd have to know about type-class instances leaked through
persistent, in order to know that you'd have to add some of
"persistent"'s transitive build-depends to your own package, in order to
save yourself from missing out on type-class instances.
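In .cabal terms, the suggested fix would amount to something like this in the user's package (a sketch; the persistent version range is made up):

    build-depends: persistent   >= 1.2 && < 1.3
                 , monad-logger ==0.2.*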
Michael Snoyman | 26 Feb 11:42 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 12:22 PM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
On 2014-02-26 at 10:45:40 +0100, Michael Snoyman wrote:

[...]

>> >> ...are you simply referring to the fact that in order to guarantee
>> >> PVP-semantics of a package version, one has to take care to restrict the
>> >> version bounds of that package's build-deps in such a way that any API
>> >> entities leaking from its (direct) build-deps (e.g. typeclass instances
>> >> or other re-exported entities) are not a function of the "internal"
>> >> degrees of freedom the build-dep version-ranges provide? Or is there
>> >> more to it?

[...]

>> From my point of view, I'd argue that
>>
>>  a) 'persistent' failed to live up to the "spirit" of the PVP contract,
>>     i.e. to expose a "contact-surface" which satisfies certain
>>     invariants within specific package-version ranges.

> How would persistent have done better? AFAICT, the options are:
>
> 1. Do what I did: state a true version dependency on monad-logger, that it
> works with both version 0.2 and 0.3.

Yes, "persistent" itself does in fact work with both major versions of
"monad-logger", but alas the API reachable through depending solely on
"persistent" leaks details of the underlying monad-logger version used.

...but the PVP's primary statement is to define how a package shall behave
from the point of view of its users (where by user I mean a package
build-depending on `persistent`). So...

> 2. Constrain it to one or the other, which would be a falsehood that would
> restrict users' ability to use the package.

...this would actually be what I'd interpret the PVP to
expect/require from "persistent" in order to satisfy its goal to shield
the package's users from incompatible changes.

> Let's try it a different way. If transformers removed a MonadIO instance
> between version 2 and 3 of the library, should that mean that every single
> package with type signatures involving MonadIO should be constrained to one
> specific version of transformers?

yes, that'd be what I'm suggesting here (the [1] footnote is a different
suggestion for the same problem though)


So let's analyze how far you want to go here. Imagine if version 0.3.0 of transformers did not have a MonadIO instance for StateT, and version 0.3.1 added it. Now some library has a function:

myFunc :: MonadIO m => Int -> m String

What versions of transformers is it allowed to work with? If it allows version 0.3.0 and 0.3.1, and a user depends on the presence of the MonadIO StateT instance, the build can be broken by moving back to version 0.3.0 (which may be demanded by some other package's dependencies). This is simply the reverse of the monad-logger situation, where an instance was added instead of being removed. I don't see a reasonable solution to this situation... well, besides everyone just trusting a curator to build all of these packages for them.
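Here is a self-contained sketch of that direction of breakage; the imports are the real transformers modules, while the version numbers are the hypothetical ones from above:

    import Control.Monad.IO.Class (MonadIO, liftIO)
    import Control.Monad.Trans.State (StateT, evalStateT)

    myFunc :: MonadIO m => Int -> m String
    myFunc n = liftIO (return (show n))

    -- This only compiles because a MonadIO (StateT s m) instance is in
    -- scope (the one "added in 0.3.1"):
    main :: IO ()
    main = evalStateT (myFunc 42 :: StateT () IO String) () >>= putStrLn

    -- Against the hypothetical 0.3.0 (no such instance), the same code
    -- fails, even though myFunc's own bounds still allow 0.3.0.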

And just to be clear: if persistent had bumped its lower version bound on monad-logger, then users still on the old version of monad-logger would be unable to upgrade, and for no real reason. persistent would have required a major version bump most likely[1], which would have caused all packages downstream from it to do version bumps as well.

Forgetting about my position as a library author, or as a Stackage maintainer, and speaking purely as a library *user*, this would be a terrible situation to be in.

[1] That's another hole in the PVP I think. It doesn't explicitly address the issue of an API change that eliminates compatibility with a previously accepted dependency, but I've seen huge breakages occur due to this.
 
>>  b) However, the PVP can be blamed as well, as in its current form it
>>     doesn't explicitly address the issue of API-leakage from transitive
>>     build-dependencies. [1]
>>
>> The question for me now is whether the PVP is fixable in this respect,
>> and at what cost.
>>
>> Moreover, it seems to me, it always comes down to type-class instances
>> causing most problems with the PVP (either by requiring version-bump
>> cascades throughout the PVP-adhering domain of Hackage, or by their
>> hard-to-constrain leakage through package/module boundaries); maybe we
>> need to address this issue at the language level and provide some facility
>> for limiting the propagation of type-class instances first.
>>
>>
> There's one other issue, which is reexports. As an extreme example, imagine:
>
> * Version 1 of the foo package has the Foo module, and it exports foo1 and
> foo2.
> * Version 2 of the foo package has the Foo module, and it exports foo1.
> * Version 1 of the bar package has the Bar module, defined as:

Yes, I'm well aware of this problem, but that's easier to control, as
you can use explicit import/export lists to constrain what entities you
expose to direct users of your package. That's what I'm doing e.g. in

 http://hackage.haskell.org/package/deepseq-generics-0.1.1.1/docs/Control-DeepSeq-Generics.html

where I'm explicitly naming the entities I re-export from
Control.DeepSeq for convenience. (However, I'm lacking such a facility
for instances)


Agreed, the reexport issue is something that can be dealt with, whereas typeclasses don't have such a facility right now. I just wanted to point it out to make sure we were considering all issues.
 
>
> module Bar (module Foo) where
> import Foo
> * According to the PVP, the bar package can have a version bound on foo of
> `foo > 1 && < 2.1`.
> * User code that depends on foo2 being exported from Bar will be broken by
> the transitive update of foo.
>
> The example's extreme, but it's the same basic problem as typeclass
> instances.

[...]

>>  [1]: An alternative to what I'm suggesting in 'a)' (i.e. that it'd be
>>       `persistent`'s obligation), could be that the package you
>>       mentioned (which broke due to monad-logger having a non-monotonic
>>       API change), might become required to include packages supplying
>>       the instances it depends upon in its build-depends, thus
>>       turning a transitive dep into a direct dependency.

> I don't think I follow this comment, sorry.

I'm basically just saying that the package which used "persistent"
ought to add "monad-logger ==0.2.*" to its direct build-dependencies, as
it depends on an instance provided by monad-logger. The huge down-side
is that you'd have to know about type-class instances leaked through
persistent, in order to know that you'd have to add some of
"persistent"'s transitive build-depends to your own package, in order to
save yourself from missing out on type-class instances.

And my approach is that the only sane way to create repeatable builds is to *always* list the exact versions of all packages you depend upon. And in my opinion, it's far more important to ensure that the code behaves the same way than that it simply builds. The only real way to do that is to always use the same versions of all dependencies.

Michael
Edward Kmett | 26 Feb 12:57 2014

Re: qualified imports, PVP and so on

Michael,

Technically it'd have to do a major version bump to add the instance.

Herbert,

That said, transitively bumping all the dependent packages whenever the guts of anything upstream change in ways you may never know about isn't a palatable option. It requires every user of every library to know about every instance or import statement that could transitively drag along an orphan, even in private modules. This just isn't a realistic model of user behavior.

I'm unwilling to accept the corollary that I cannot be compatible with both the current release of a package and the current platform if there have been any instances added in between.

If the user wants to lock down the instances they can fix the version of the upstream dependency that is supplying it.

The way I see it, if I don't supply the data type, and I don't supply the class, then using a class is fine in my API across major versions. Otherwise nobody can ship anything that crosses more than even one base version, let alone versions of other packages.

The only breakages that can be introduced are all due to orphan instances.

What you propose, when carried through to its logical conclusion, would basically kill all development between platform releases.

-Edward




On Wed, Feb 26, 2014 at 5:42 AM, Michael Snoyman <michael <at> snoyman.com> wrote:



On Wed, Feb 26, 2014 at 12:22 PM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
On 2014-02-26 at 10:45:40 +0100, Michael Snoyman wrote:

[...]

>> >> ...are you simply referring to the fact that in order to guarantee
>> >> PVP-semantics of a package version, one has to take care to restrict the
>> >> version bounds of that package's build-deps in such a way, that any API
>> >> entities leaking from its (direct) build-deps (e.g.  typeclass instances
>> >> or other re-exported entities) are not a function of the "internal"
>> >> degree of freedoms the build-dep version-ranges provide? Or is there
>> >> more to it?

[...]

>> From my point of view, I'd argue that
>>
>>  a) 'persistent' failed to live up to the "spirit" of the PVP contract,
>>     i.e. to expose a "contact-surface" which satisfies certain
>>     invariants within specific package-version ranges.

> How would persistent have done better? AFAICT, the options are:
>
> 1. Do what I did: state a true version dependency on monad-logger, that it
> works with both version 0.2 and 0.3.

Yes, "persistent" itself does in fact work with both major versions of
"monad-logger", but alas the API reachable through depending solely on
"persistent" leaks details of the underlying monad-logger version used.

...but the PVP's primary statement is define how a package shall behave
from the point of view of its users (where by user I mean package
build-depending on `persistent`). So...

> 2. Constrain it to one or the other, which would be a falsehood that would
> restrict users' ability to use the package.

...this is would actually be, what I'd interpret the PVP to
expect/require from "persistent" in order to satisfy its goal to shield
the package's users from incompatible changes.

> Let's try it a different way. If transformers removed a MonadIO instance
> between version 2 and 3 of the library, should that mean that every single
> package with type signatures involving MonadIO should be constrained to one
> specific version of transformers?

yes, that'd be what I'm suggesting here (the [1] footnote is a different
suggestion for the same problem though)


So let's analyze how far you want to go here. Imagine if version 0.3.0 of transformers did not have a MonadIO instance for StateT, and version 0.3.1 added it. Now some library has a function:

myFunc :: MonadIO m => Int -> m String

What versions of transformers is it allowed to work with? If it allows version 0.3.0 and 0.3.1, and a user depends on the presence of the MonadIO StateT instance, the build can be broken by moving back to version 0.3.0 (which may be demanded by some other packages dependencies). This is simply the reverse of the monad-logger situation, where an instance was added instead of being removed. I don't see a reasonable solution to this situation... well, besides everyone just trusting a curator to build all of these packages for them.

And just to be clear: if persistent had bumped its lower version bound on monad-logger, then users still on the old version of monad-logger would be unable to upgrade, and for no real reason. persistent would have required a major version bump most likely[1], which would have caused all packages downstream from it to do version bumps as well.

Forgetting about my position as a library author, or as a Stackage maintainer, and speaking purely as a library *user*, this would be a terrible situation to be in.

[1] That's another hole in the PVP, I think. It doesn't explicitly address the issue of an API change that eliminates compatibility with a previously accepted dependency, but I've seen huge breakages occur due to this.
 
>>  b) However, the PVP can be blamed as well, as in its current form it
>>     doesn't explicitly address the issue of API-leakage from transitive
>>     build-dependencies. [1]
>>
>> The question for me now is whether the PVP is fixable in this respect,
>> and at what cost.
>>
>> Moreover, it seems to me, it always comes down to type-class instances
>> causing most problems with the PVP (either by requiring version-bump
>> cascades throughout the PVP-adhering domain of Hackage, or by their
>> hard-to-constrain leakage through package/module boundaries); maybe we
>> need to address this issue at the language level and provide some
>> facility for limiting the propagation of type-class instances first.
>>
>>
> There's one other issue, which is reexports. As an extreme example, imagine:
>
> * Version 1 of the foo package has the Foo module, and it exports foo1 and
> foo2.
> * Version 2 of the foo package has the Foo module, and it exports foo1.
> * Version 1 of the bar package has the Bar module, defined as:

Yes, I'm well aware of this problem, but that's easier to control, as
you can use explicit import/export lists to constrain what entities you
expose to direct users of your package. That's what I'm doing e.g. in

 http://hackage.haskell.org/package/deepseq-generics-0.1.1.1/docs/Control-DeepSeq-Generics.html

where I'm explicitly naming the entities I re-export from
Control.DeepSeq for convenience. (However, I'm lacking such a facility
for instances)


Agreed, the reexport issue is something that can be dealt with, whereas typeclasses don't have such a facility right now. I just wanted to point it out to make sure we were considering all issues.
 
>
> module Bar (module Foo) where
> import Foo
> * According to the PVP, the bar package can have a version bound on foo of
> `foo > 1 && < 2.1`.
> * User code that depends on foo2 being exported from Bar will be broken by
> the transitive update of foo.
>
> The example's extreme, but it's the same basic problem as typeclass
> instances.

[...]

>>  [1]: An alternative to what I'm suggesting in 'a)' (i.e. that it'd be
>>       `persistent`'s obligation), could be that the package you
>>       mentioned (which broke due to monad-logger having a non-monotonic
>>       API change), might become required to include packages supplying
>>       the instances they depend upon in their build-depends, thus
>>       turning a transitive dep into a direct dependency.

> I don't think I follow this comment, sorry.

I'm basically just saying that the package which used "persistent"
ought to add "monad-logger ==0.2.*" to its direct build-dependencies, as
it depends on an instance provided by monad-logger. The huge down-side
is that you'd have to know about type-class instances leaked through
persistent, in order to know that you'd have to add some of
"persistent"'s transitive build-depends to your own package, to save
yourself from missing out on type-class instances.

And my approach is that the only sane way to create repeatable builds is to *always* list the exact versions of all packages you depend upon. And in my opinion, it's far more important to ensure that the code behaves the same way than that it simply builds. The only real way to do that is to always use the same versions of all dependencies.

Michael

Michael Snoyman | 26 Feb 12:59 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 1:57 PM, Edward Kmett <ekmett <at> gmail.com> wrote:
Michael,

Technically it'd have to do a major version bump to add the instance.


You're right, my mistake. I don't think that distinction affects the rest of my description, however.

Michael
Johan Tibell | 26 Feb 13:03 2014

Re: qualified imports, PVP and so on

I think we should relax the PVP requirement to bump the major version number when adding an instance: require a major version bump only for orphan instances, and otherwise only a minor version bump. Unless I missed some case, code that depends on a library that follows this rule should not break.

Here's my reasoning:

If you add a non-orphan instance, it must be because

 * you define the data type or the type class in your package and
 * depend on a package that declares the other entity.

Therefore, no package that depends on your package can declare a non-orphan instance that could collide with the instance you declare.
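
A minimal sketch of the distinction, with invented package and module names:

    -- package foo, module Foo:
    class Pretty a where
      pretty :: a -> String

    -- package bar, module Bar (bar depends on foo and defines the type,
    -- so the instance below is non-orphan):
    import Foo (Pretty (..))

    data Bar = Bar

    instance Pretty Bar where
      pretty _ = "Bar"

    -- An instance written in a third package, which defines neither the
    -- class nor the type, would be an orphan.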



On Wed, Feb 26, 2014 at 12:57 PM, Edward Kmett <ekmett <at> gmail.com> wrote:
Michael,

Technically it'd have to do a major version bump to add the instance.

Herbert,

That said, transitively bumping all the dependent packages whenever anyone upstream's guts change in ways you may never know about isn't a palatable option. It requires every user of every library to know about every instance or import statement that could transitively drag along an orphan even in private modules. This just isn't a realistic model of user behavior.

I'm unwilling to accept the corollary that I cannot be compatible with both the current release of a package and the current platform if there have been any instances added in between.

If the user wants to lock down the instances they can fix the version of the upstream dependency that is supplying it.

The way I see it, if I don't supply the data type, and I don't supply the class, then using a class is fine in my API across major versions. Otherwise nobody can ship anything that crosses more than even one base version let alone versions of other packages.

The only breakages that can occur be introduced are all due to orphan instances. 

What you propose when carried through to its logical conclusion would basically kill all development between platform releases.

-Edward




[...]

Michael Snoyman | 26 Feb 13:09 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 2:03 PM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
[...]



+1. If we're discussing PVP changes, the other one I'd like to propose is:

Don't include upper bounds on base, template-haskell, or other libraries which cannot be upgraded, unless you know with certainty that your package will not compile with those other versions. Motivation:

* The bounds will never help cabal choose a better build plan.
* The bounds may cause valid builds to never be attempted.
* The bounds make it very difficult to check and debug new versions of GHC.
* Including the bounds if you know the build will fail makes for more user-friendly messages.
* Leaving off the bounds if you're not certain will result in users getting more verbose error messages. While uglier, these error messages will be helpful for the package maintainer to adjust the package for the new version of GHC.

I'd also want to push this a little bit further and make a distinction between experimental and stable packages, but that's a bigger proposal and I'd rather start with something more modest.

Michael 
Herbert Valerio Riedel | 26 Feb 13:25 2014

Re: qualified imports, PVP and so on

On 2014-02-26 at 13:09:37 +0100, Michael Snoyman wrote:

[...]

> +1. If we're discussing PVP changes, the other one I'd like to propose is:
>
> Don't include upper bounds on base, template-haskell, or other libraries
> which cannot be upgraded, unless you know with certainty that your package
> will not compile with those other versions. Motivation:
>
> * The bounds will never help cabal choose a better build plan.

...this assumes (as I mentioned in an earlier post) that GHC is never
going to ship again with two versions of base (like in the past with
base3/4). For that case, we'd want at least something like `base < 5` as
upper bound in place (with the policy that `5.*` will only ever be
reached if something really disruptive is done to `base`)
Ivan Lazar Miljenovic | 26 Feb 13:39 2014

Re: qualified imports, PVP and so on

On 26 February 2014 23:25, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
> On 2014-02-26 at 13:09:37 +0100, Michael Snoyman wrote:
>
> [...]
>
>> +1. If we're discussing PVP changes, the other one I'd like to propose is:
>>
>> Don't include upper bounds on base, template-haskell, or other libraries
>> which cannot be upgraded, unless you know with certainty that your package
>> will not compile with those other versions. Motivation:
>>
>> * The bounds will never help cabal choose a better build plan.
>
> ...this assumes (as I mentioned in an earlier post) that GHC is never
> going to ship again with two versions of base (like in the past with
> base3/4). For that case, we'd want at least something like `base < 5` as
> upper bound in place (with the policy that `5.*` will only ever be
> reached if something really disruptive is done to `base`)

Agreed; there were a few packages that failed on Gentoo because the
author stated that it worked with "base < 5", even though they'd only
tested it with cabal-install and at the time it was defaulting to
base-3 (though using the runhaskell Setup.hs method used the "best
version").

That might be reasonable for the other such libraries though.  But to
be specific: are we including libraries such as bytestring,
containers, etc. as those that can be upgraded or cannot be upgraded?

--
Ivan Lazar Miljenovic
Ivan.Miljenovic <at> gmail.com
http://IvanMiljenovic.wordpress.com
Michael Snoyman | 26 Feb 14:16 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 2:39 PM, Ivan Lazar Miljenovic <ivan.miljenovic <at> gmail.com> wrote:
[...]

That might be reasonable for the other such libraries though.  But to
be specific: are we including libraries such as bytestring,
containers, etc. as those that can be upgraded or cannot be upgraded?

I'm making the more modest proposal, and just focusing on libraries which cannot at all be upgraded.

Michael
Michael Snoyman | 26 Feb 14:15 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 2:25 PM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
[...]

...this assumes (as I mentioned in an earlier post) that GHC is never
going to ship again with two versions of base (like in the past with
base3/4). For that case, we'd want at least something like `base < 5` as
upper bound in place (with the policy that `5.*` will only ever be
reached if something really disruptive is done to `base`)


You're right, that should be included. A further reason would be that Hackage demands an upper bound on base anyway, and it seems a pretty accepted practice to use < 5. If this were standardized in the PVP, I think we'd be better off.
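
Concretely, the practice being proposed would look something like this in a .cabal file (the package names and version numbers are invented for illustration):

    build-depends: base             >= 4.6 && < 5,   -- blanket bound only
                   template-haskell >= 2.8,          -- tied to GHC, no upper bound
                   text             >= 0.11 && < 1.2 -- ordinary PVP bounds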

Michael
Gregory Collins | 26 Feb 16:38 2014

Re: qualified imports, PVP and so on


On Wed, Feb 26, 2014 at 5:15 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
A further reason would be that Hackage demands an upper bound on base anyway

This is policy also and can be changed.


--
Gregory Collins <greg <at> gregorycollins.net>
Herbert Valerio Riedel | 26 Feb 16:42 2014

Re: qualified imports, PVP and so on

On 2014-02-26 at 16:38:41 +0100, Gregory Collins wrote:
> On Wed, Feb 26, 2014 at 5:15 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
>
>> A further reason would be that Hackage demands an upper bound on base anyway
>
> This is policy also and can be changed.

Btw, in case anyone is interested in the motivation for that policy at
the time it was enabled:

 http://www.haskell.org/pipermail/cabal-devel/2009-June/005313.html

Cheers,
  hvr
Johan Tibell | 26 Feb 13:45 2014

Re: qualified imports, PVP and so on

On Wed, Feb 26, 2014 at 1:09 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
* The bounds will never help cabal choose a better build plan.

It won't help cabal but it might inform the user what's wrong so he/she can do something about it. Dependency errors are more high-level than compilation errors.
 
* The bounds make it very difficult to check and debug new versions of GHC.

I believe we added a cabal flag to skip the upper bounds check. Mikhail, do you remember?
 
* Including the bounds if you know the build will fail makes for more user-friendly messages.

I think the friendlier message for base is: this package doesn't work with this (new) version of GHC. We sometimes have breaking base changes (like in the upcoming 7.8 release, which changes some primops). Having an error on the base version is better than a compilation error in that case.

Michael Snoyman | 26 Feb 14:18 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 2:45 PM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
On Wed, Feb 26, 2014 at 1:09 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
* The bounds will never help cabal choose a better build plan.

It won't help cabal but it might inform the user what's wrong so he/she can do something about it. Dependency errors are more high-level than compilation errors.
 
* The bounds make it very difficult to check and debug new versions of GHC.

I believe we added a cabal flag to skip the upper bounds check. Mikhail, do you remember?
 

Even with that flag, we'd still have a bit of a problem. It would be nice if cabal could ignore an upper bound on template-haskell, but respect an upper bound on some other package that *can* be installed with a newer GHC. Perhaps adding that flexibility to cabal would be possible.
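
For illustration, a per-package form of such a flag might look like this (the exact flag spelling is an assumption, not settled cabal UI):

    cabal install --allow-newer=template-haskell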
 
* Including the bounds if you know the build will fail makes for more user-friendly messages.

I think the more friendly message for base is: this package doesn't work with this (new) version of GHC. We sometimes have breaking base changes (like in the upcoming 7.8 release, which changes some primops.) Having an error on the base version is better than a compilation error in that case.


Daniel Trstenjak | 26 Feb 15:04 2014

Re: qualified imports, PVP and so on


On Wed, Feb 26, 2014 at 03:18:45PM +0200, Michael Snoyman wrote:
> Even with that flag, we'd still have a bit of a problem. It would be nice if
> cabal could ignore an upper bound on template-haskell, but respect an upper
> bound on some other package that *can* be installed with a newer GHC. Perhaps
> adding that flexibility to cabal would be possible.

I think that's a call for soft bounds (known to work with) and hard
bounds (known not to work with), where the flag '--allow-newer' would
only be allowed to override the soft bounds.

Greetings,
Daniel
Carter Schonwald | 26 Feb 15:11 2014

Re: qualified imports, PVP and so on

Yeah.  I think that idea was ok'd by all. 




On Wednesday, February 26, 2014, Daniel Trstenjak <daniel.trstenjak <at> gmail.com> wrote:

[...]
Edward Kmett | 26 Feb 21:04 2014

Re: qualified imports, PVP and so on

I very much agree with the sentiment that bounds on, say, template-haskell or ghc-prim never help build plans.

One of the most common complaints I get from users has to do with the garbage type errors they get out of the Template Haskell code that lens runs when some package somewhere has "upgraded" their `template-haskell` install to an incompatible version.

By the time they discover this they are often several packages further along in the install and only have the vaguest inkling of what caused the problem in the first place.

Regarding base, I still use the < 5 bound: as Herbert pointed out, if we ever needed to ship with support for two versions of base during some big cut-over, it'd be nice to have it work. That said, I'm somewhat leery of how well the community could deal with that in practice. Even the old monads-fd vs mtl split nearly rent the ecosystem asunder. ;)

-Edward


On Wed, Feb 26, 2014 at 7:09 AM, Michael Snoyman <michael <at> snoyman.com> wrote:



[...]

Johan Tibell | 26 Feb 10:17 2014

Re: qualified imports, PVP and so on

Hi Michael,

On Wed, Feb 26, 2014 at 6:45 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
That's essentially it. I'll give one of the examples I ran into. (Names omitted on purpose; if the involved party wants to identify himself, please do so. I just didn't feel comfortable doing so without your permission.) Version 0.2 of monad-logger included MonadLogger instances for IO and other base monads. For various reasons, these were removed, and the version bumped to 0.3. This is in full compliance with the PVP.

persistent depends on monad-logger. It can work with either version 0.2 or 0.3 of monad-logger, and the cabal file allows this via `monad-logger >= 0.2 && < 0.4` (or something like that). Again, full PVP compliance.

A user wrote code against persistent when monad-logger version 0.2 was available. He used a function that looked like:

runDatabase :: MonadLogger m => Persistent a -> m a

(highly simplified). In his application, he used this in the IO monad. He depended on persistent with proper lower and upper bounds. Once again, full PVP compliance.

Once I released version 0.3 of monad-logger, his next build automatically upgraded him to monad-logger 0.3, and suddenly his code broke, because there's no MonadLogger instance for IO.
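
A minimal sketch of the shape of that breakage (every name here except runDatabase is invented):

    main :: IO ()
    main = runDatabase myQuery >>= print
    -- m is instantiated to IO, so this needs monad-logger's MonadLogger IO
    -- instance. Under monad-logger 0.3 that instance is gone and the build
    -- fails, even though the code never imports monad-logger directly.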

Now *if* the program had been using a system like "cabal freeze" or the like, this could have never happened: cabal wouldn't be trying to automatically upgrade to monad-logger 0.3.

I'm trying to wrap my head around this.

 * Is runDatabase above a function in persistent or related packages or a function that the user wrote? 
 * What was the user's dependency range for monad-logger? If he is using the IO instance of MonadLogger from monad-logger, he ought to have a monad-logger == 0.2.* dependency (since removing instances requires a major version bump).
 * Does any involved package use orphan instances?

Cheers,
Johan

Michael Snoyman | 26 Feb 10:37 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 11:17 AM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
Hi Michael,

[...]

I'm trying to wrap my head around this.

 * Is runDatabase above a function in persistent or related packages or a function that the user wrote? 

It was in persistent. The actual function name is different, but the concept is the same.
 
 * What was the user's dependency range for monad-logger? If he is using the IO instance of MonadLogger from monad-logger, he ought to have a monad-logger == 0.2.* dependency (since removing instances requires a major version bump).

The user didn't directly depend on monad-logger at all, as there was no need with version 0.2 of monad-logger.
 
 * Does any involved package use orphan instances?


Nope, no orphans at all.

Michael
Johan Tibell | 26 Feb 12:58 2014

Re: qualified imports, PVP and so on

On Wed, Feb 26, 2014 at 10:37 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
On Wed, Feb 26, 2014 at 11:17 AM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
 * What was the user's dependency range for monad-logger? If he is using the IO instance of MonadLogger from monad-logger, he ought to have a monad-logger == 0.2.* dependency (since removing instances requires a major version bump).

The user didn't directly depend on monad-logger at all, as there was no need with version 0.2 of monad-logger.

I guess the reason he/she didn't have to depend directly on monad-logger was that he/she never mentioned any types from that package by name and thus didn't require an import of monad-logger modules.

I wonder if we should have the PVP require that if a type you export in your API loses a class instance, then you're required to bump the major version number. This would have helped in this case. 
Michael Snoyman | 26 Feb 13:02 2014

Re: qualified imports, PVP and so on




On Wed, Feb 26, 2014 at 1:58 PM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
On Wed, Feb 26, 2014 at 10:37 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
On Wed, Feb 26, 2014 at 11:17 AM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
 * What was the user's dependency range for monad-logger? If he is using the IO instance of MonadLogger from monad-logger, he ought to have a monad-logger == 0.2.* dependency (since removing instances requires a major version bump).

The user didn't directly depend on monad-logger at all, as there was no need with version 0.2 of monad-logger.

I guess the reason he/she didn't have to depend directly on monad-logger was that he/she never mentioned any types from that package by name and thus didn't require an import of monad-logger modules.

I wonder if we should have the PVP require that if a type you export in your API loses a class instance, then you're required to bump the major version number. This would have helped in this case. 

In this case, that change would not have helped at all: the type which lost an instance was IO, which was not exported by persistent (or monad-logger, for that matter). MonadLogger *also* wasn't exported by persistent.

But I don't like this approach to the problem anyway. The theory seems to be that, if we just keep "improving" the PVP and making library authors' lives more difficult, eventually we'll address all issues. Dependency freezing is a complete solution to the problem described here; I don't understand why there seems to be so much resistance to it.
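
For illustration, a freeze file is just a set of exact-version constraints, e.g. a cabal.config along these lines (version numbers invented):

    constraints: monad-logger ==0.2.4,
                 persistent ==1.2.3,
                 transformers ==0.3.0.0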

Michael
Vincent Hanquez | 25 Feb 22:16 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 19:23, Gregory Collins wrote:
>
> On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
>
>     But that's only one half of the "package interoperability" issue.
>     I face this first hand on a daily basis with my Stackage
>     maintenance. I spend far more time reporting issues of restrictive
>     upper bounds than I do with broken builds from upstream changes.
>     So I look at this as purely a game of statistics: are you more
>     likely to have code break because version 1.2 of text changes the
>     type of the map function and you didn't have an upper bound, or
>     because two dependencies of yours have *conflicting* versions
>     bounds on a package like aeson[2]? In my experience, the latter
>     occurs far more often than the former.
>
>
> That's because you maintain a lot of packages, and you're considering 
> buildability on short time frames (i.e. you mostly care about "does 
> all the latest stuff build right now?"). The consequences of violating 
> the PVP are that as a piece of code ages, the probability that it 
> still builds goes to *zero*, even if you go and dig out the old GHC 
> version that you were using at the time. I find this really 
> unacceptable, and believe that people who are choosing not to be 
> compliant with the policy are BREAKING HACKAGE and making life harder 
> for everyone by trading convenience now for guaranteed pain later. In 
> fact, in my opinion the server ought to be machine-checking PVP 
> compliance and refusing to accept packages that don't obey the policy.
If you're going to dig out an old ghc version, what's stopping you from
downloading old packages manually from hackage? I'm sure it can even be
automated (more or less).

However, I don't think we should optimise for this use case; I'd rather
use maintained packages that are regularly updated. And even if I wanted
to use an old package, provided it's not tied to something fairly
internal like GHC's api or such, in a language like haskell, porting to
recent versions of libraries should be easier than in most other languages.

Furthermore, some old libraries should not be used anymore. Consider old
libraries that have security issues, for example. Whilst it's not the
intent, it's probably a good thing that those old libraries don't build
anymore, and people are forced to move to the latest maintained version.

The PvP as it stands seems to be a refuge for fossilised packages.

> Like Ed said, this is pretty cut and dried: we have a policy, you're 
> choosing not to follow it, you're not in compliance, you're breaking 
> stuff. We can have a discussion about changing the policy (and this 
> has definitely been discussed to death before), but I don't think your 
> side has the required consensus/votes needed to change the policy. As 
> such, I really wish that you would reconsider your stance here.

"we have a policy".

*ouch*, I'm sorry, but I find those bigoted views damaging in a nice
inclusive haskell community (as I like to view it).

While we may have different opinions, I think we're all trying our best 
to contribute to the haskell ecosystem the way we see fit.

--
Vincent
MightyByte | 25 Feb 22:34 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 4:16 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
> If you're going to dig out an old ghc version, what's stopping you from
> downloading old packages manually from hackage? I'm sure it can even be
> automated (more or less).

It's much more difficult because the scale is much greater.  Also, if
people aren't putting in version bounds, then you have no clue what
versions to try.  Leaving out version bounds is throwing away
information.

> However, I don't think we should optimise for this use case; I'd rather use
> maintained packages that are regularly updated.

When I write code and get it working, I want it to work for all time.
There's absolutely no reason we shouldn't be able to make that happen.
 If we ignore this case, then Haskell will never be suitable for use
in serious production situations.  Large organizations want to know
that if they start using something it will continue to work.  (And
don't respond to this with the "avoid success at all costs" line.
Haskell is now mature enough that I and a growing number of other
people use Haskell on a daily basis for mission-critical
applications.)

> And even if I wanted to use an old package, provided it's not tied to something fairly internal like
> GHC's api or such, in a language like haskell, porting to recent versions of
> libraries should be easier than in most other languages.

It might be easier, but it can still require a LOT of effort...much
more than is justified in some situations.  And that doesn't mean that
in those situations getting old code working doesn't have significant
value.

> Furthermore, some old libraries should not be used anymore. Consider old
> libraries that have security issues, for example. Whilst it's not the intent,
> it's probably a good thing that those old libraries don't build anymore, and
> people are forced to move to the latest maintained version.

This argument does not hold water when getting a legacy piece of code
working has significant intrinsic value.  There are plenty of
situations where code can have great value to a person/organization
even if it doesn't touch the wild internet.
Vincent Hanquez | 25 Feb 22:51 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 2014-02-25 21:34, MightyByte wrote:
> On Tue, Feb 25, 2014 at 4:16 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
>> If you're going to dig out an old ghc version, what's stopping you from
>> downloading old packages manually from hackage? I'm sure it can even be
>> automated (more or less).
> It's much more difficult because the scale is much greater.  Also, if
> people aren't putting in version bounds, then you have no clue what
> versions to try.  Leaving out version bounds is throwing away
> information.
I'm not saying this is not painful, but I've done it in the past, and
using bisection and educated guesses (for example not using libraries
released after a certain date), you converge pretty quickly on a solution.

But the bottom line is that it's not the common use case. I rarely have
to dig up old unused code.
>> However, I don't think we should optimise for this use case; I'd rather use
>> maintained packages that are regularly updated.
> When I write code and get it working, I want it to work for all time.
> There's absolutely no reason we shouldn't be able to make that happen.
>   If we ignore this case, then Haskell will never be suitable for use
> in serious production situations.  Large organizations want to know
> that if they start using something it will continue to work.  (And
> don't respond to this with the "avoid success at all costs" line.
> Haskell is now mature enough that I and a growing number of other
> people use Haskell on a daily basis for mission-critical
> applications.)

This is moot IMHO. A large organisation would *not* rely on cabal, nor
the PvP, to actually download packages properly: not only is this
insecure but, as Michael mentioned, you would not get the guarantee you
need anyway.

Even if the above wasn't an issue, Haskell doesn't run in a bubble. I 
don't expect old ghc and old packages to work with newer operating 
systems and newer libraries forever.

>> Furthermore, some old libraries should not be used anymore. Consider old
>> libraries that have security issues, for example. Whilst it's not the intent,
>> it's probably a good thing that those old libraries don't build anymore, and
>> people are forced to move to the latest maintained version.
> This argument does not hold water when getting a legacy piece of code
> working has significant intrinsic value.  There are plenty of
> situations where code can have great value to a person/organization
> even if it doesn't touch the wild internet.
Sure, in this case it doesn't apply to my "security issue" example, does
it?

--
Vincent
MightyByte | 26 Feb 00:28 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
>
> I'm not saying this is not painful, but I've done it in the past, and using
> bisection and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig up old unused code.

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.

> This is moot IMHO. A large organisation would *not* rely on cabal, nor the
> PvP, to actually download packages properly:

Sorry, let me rephrase.  s/Large organizations/organizations/  Not
everyone is big enough to devote the kind of resources it would take
to set up their own system.  I've personally worked at two such
companies.  Building tools that can serve the needs of these
organizations will help the Haskell community as a whole.

> Not only is this insecure but, as Michael mentioned, you would not get the
> guarantee you need anyway.

In many cases security doesn't matter because code doesn't interact
with the outside world.  And we're not talking about guaranteeing that
building with a later version won't be buggy.  We're talking about
guaranteeing that the package will work the way it always worked.
It's a kind of package-level purity/immutability.

> Even if the above wasn't an issue, Haskell doesn't run in a bubble. I don't
> expect old ghc and old packages to work with newer operating systems and
> newer libraries forever.

I don't expect this either.  I expect old packages to work the way
they always worked with the packages they always worked with.
Michael Snoyman | 26 Feb 06:25 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <mightybyte <at> gmail.com> wrote:
[...]

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.



I think we need to make these cases more concrete to have a meaningful discussion. Between Doug and Gregory, I'm understanding two different use cases:

1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.
2. Someone starting a new project who wants to use an older version of a package on Hackage.

If I've missed a use case, please describe it.

For (1), let's start with the time machine game: *if* everyone had been using the PVP, then theoretically this wouldn't have happened. And *if* the developers had followed proper practice and documented their complete build environment, then PVP compliance would be irrelevant. So if we could go back in time and twist people's arms, no problems would exist. Hurray, we've established that 20/20 hindsight is very nice :).

But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisector in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?
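
A rough sketch of the core filtering step such a tool might perform, assuming the tar package and that entry timestamps in 00-index.tar track upload dates:

    import qualified Codec.Archive.Tar       as Tar
    import qualified Codec.Archive.Tar.Entry as Tar
    import qualified Data.ByteString.Lazy    as BL

    -- Drop every index entry whose timestamp is later than the cutoff
    -- (given in seconds since the epoch).
    filterIndex :: Tar.EpochTime -> BL.ByteString -> BL.ByteString
    filterIndex cutoff =
        Tar.write . Tar.foldEntries keep [] (const []) . Tar.read
      where
        keep e es
          | Tar.entryTime e <= cutoff = e : es
          | otherwise                 = es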

I still maintain that new codebases should be creating freeze files (or whatever we want to call them), and we need a community supported tool for it. After speaking with various Haskell-based companies, I'm fairly certain just about everyone's reinvented their own proprietary version of such a tool.

For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where- despite the valid security and bug concerns Vincent raised- it's likely not worth updating to the latest and greatest.

All of that said, I still think the only real solution is getting end users off of Hackage. We need an intermediate, stabilizing layer. That's why I started Stackage, and I believe that it's the only solution that will ultimately make library authors and end-users happy. Everything we're discussing now is window dressing.

My offer of cabal-timemachine was serious: I'll be happy to start that project, and I *do* think it will solve many people's issues. I'd just like it if it was released concurrently with cabal-freeze, so that once you figure out the right set of packages, you can freeze them in place and never run into these issues again.

Michael
John Lato | 26 Feb 07:03 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <michael <at> snoyman.com> wrote:


[...]

But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisector in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?

This scenario is never better than PVP compliance.  First of all, the user may want some packages that are newer than the timestamp, which this wouldn't support.  As people have already mentioned, it's entirely possible for valid install graphs to exist that cabal will fail to find if it doesn't have upper bound information available, because it finds other *invalid* graphs.

And even aside from that issue, this would push the work of making sure that a library is compatible with its dependencies onto the library *users*, instead of the developer, where it rightfully belongs (and your proposal ends up pushing even more work onto users!).

Why do you think it's acceptable for users to do the testing to make sure that your code works with other packages that your code requires?

For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where- despite the valid security and bug concerns Vincent raised- it's likely not worth updating to the latest and greatest.

Usually the case is not that somebody *wants* to use an older version of package 'foo', it's that they're using some package 'bar' which hasn't yet been updated to be compatible with the latest 'foo'.  There are all sorts of reasons this may happen, including big API shifts (e.g. parsec2/parsec3, openGL), poor timing in a maintenance cycle, and the usual worldly distractions.  But if packages have upper bounds, the user can 'cabal install', get a coherent package graph, and begin working.  At the very worst, cabal will give them a clear lead as to what needs to be updated/who to ping.  This is much better than the situation with no upper bounds, where a 'cabal install' may fail miserably or even put together code that produces garbage.

And again, it's the library *user* who ends up having to deal with these problems.  Upper bounds lead to a better user experience.
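
A small invented example of what that relies on: bar's cabal file declares

    build-depends: foo >= 1.0 && < 1.1

so even after foo-2.0 is released, a user's 'cabal install bar' keeps selecting a foo-1.0.x that is known to work, instead of attempting a build that cannot compile.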

All of that said, I still think the only real solution is getting end users off of Hackage. We need an intermediate, stabilizing layer. That's why I started Stackage, and I believe that it's the only solution that will ultimately make library authors and end-users happy. Everything we're discussing now is window dressing.

A curated ecosystem can certainly function, but it seems like a lot more work than just following the PVP and specifying upper bounds.  And upper bounds are likely to work better with packages that, for whatever reason, aren't in that curated ecosystem.
Michael Snoyman | 26 Feb 08:11 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 8:03 AM, John Lato <jwlato <at> gmail.com> wrote:
> [earlier discussion, quoted in full above, snipped]
> Why do you think it's acceptable for users to do the testing to make sure that your code works with other packages that your code requires?

You're not at all addressing the case I described. The case was a legacy project that someone is trying to rebuild. I'm not talking about any other case in this scenario. To repeat myself:

> 1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.

In *that specific case*, why wouldn't having a tool to go back in time and build against a historical version of Hackage be *exactly* what you'd need to rebuild the project?
 

> [...]

> And again, it's the library *user* who ends up having to deal with these problems.  Upper bounds lead to a better user experience.

I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.

Michael
John Lato | 26 Feb 09:36 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 11:11 PM, Michael Snoyman <michael <at> snoyman.com> wrote:
> [earlier discussion, quoted in full above, snipped]
> In *that specific case*, why wouldn't having a tool to go back in time and build against a historical version of Hackage be *exactly* what you'd need to rebuild the project?

I had understood people talking about "legacy projects" to mean something other than how you read it.  In which case, I would suggest that there is a third use case, which IMHO is more important than either of the use cases you have identified.  Here's an example:

1.  package foo-0.1 appears on hackage
2.  package bar-0.1 appears on hackage with a dependency on foo >= 0.1
3.  awesomeApp-0.1 appears on hackage, which depends on bar-0.1 and text>=1.0
4.  users install awesomeApp
5.  package foo-0.2 appears on hackage, with lots of breaking changes
6.  awesomeApp users notice that it sometimes breaks with Hungarian characters, and the problem is traced to an error in text
7.  text-1.0.0.1 is released with some bug fixes
8.  awesomeApp users attempt to do cabal update; cabal install, which fails inscrutably (because it tries to mix foo-0.2 with bar-0.1)

There's nothing in this situation that requires any of these packages be unmaintained.  The problem is that, rather than wanting to reproduce a fixed set of package versions (which cabal already allows for if that's really desired), sometimes it's desirable that updates be held back in active code bases.  Replace "foo" with "QuickCheck" for example (where for a long time users stayed with quickcheck2 because version 3 had major performance regressions in certain use cases).

This sort of conflict used to happen *all the time*, and it's very frustrating to users (because something worked before, now it's not working, and they're not generally in a good position to know why).  It's annoying to reproduce because the install graph cabal produces depends in part on the user's installed packages.  So just because something builds on a developer's box doesn't mean that it would build on the user's box, or it would work for some users but not others (sandboxing has at least helped with that problem).
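
(A toy rendering of the failure mode above; this is my own illustration, not cabal's actual solver, but it shows why a newest-first pick goes wrong when bar carries no upper bound on foo:)

    -- Versions as component lists; list Ord gives the usual version order.
    type Ver = [Int]

    -- bar's constraint on foo, without and with an upper bound:
    noUpper, withUpper :: Ver -> Bool
    noUpper   v = v >= [0,1]
    withUpper v = v >= [0,1] && v < [0,2]

    -- Newest-version-first choice from the index, as cabal prefers:
    pick :: (Ver -> Bool) -> Maybe Ver
    pick ok = case filter ok [[0,2], [0,1]] of
                (v:_) -> Just v
                []    -> Nothing

    -- pick noUpper   == Just [0,2]  -- mixes foo-0.2 with bar-0.1: breaks
    -- pick withUpper == Just [0,1]  -- the bound forces the working plan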

 

> [...]

> I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.

That's a straw man, I don't think anyone has argued that they make the user experience better in *all* cases.  The PVP helps significantly, it avoids especially problematic situations like the one above, and in particular it's quite easy for the developer to fix the simple cases.  Unlike the 2006 status quo, when problems required manually solving the dependency graph.
Michael Snoyman | 26 Feb 10:56 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 10:36 AM, John Lato <jwlato <at> gmail.com> wrote:
> [earlier discussion, quoted in full above, snipped]
> There's nothing in this situation that requires any of these packages be unmaintained.  The problem is that, rather than wanting to reproduce a fixed set of package versions (which cabal already allows for if that's really desired), sometimes it's desirable that updates be held back in active code bases.


IIUC, this is *exactly* the case of an unmaintained package. I'm not advocating leaving a package like bar-0.1 on Hackage without an upper bound on foo, if it's known that it breaks in that case. In order for the package to be properly maintained, the maintainer would have to (1) make bar work with foo-0.2, or (2) add an upper bound. So to me, this falls squarely into the category of unmaintained.

Let me relax my position just a bit. If package maintainers are not going to be responsive to updates in the Hackage ecosystem, then I agree that they should use the PVP. I also think they should advertise their packages as not being actively maintained, and people should try to avoid using them if possible. But if an author is giving quick updates to packages, I don't see a huge benefit to the PVP for users, and instead see some downsides (inability to test against newer dependencies), not to mention the much higher maintenance burden for library authors.

 

> [...]

> That's a straw man, I don't think anyone has argued that they make the user experience better in *all* cases.  The PVP helps significantly, it avoids especially problematic situations like the one above, and in particular it's quite easy for the developer to fix the simple cases.  Unlike the 2006 status quo, when problems required manually solving the dependency graph.

You said:

> Upper bounds lead to a better user experience.

That's what I'm disagreeing with. I do not believe that, overall, the PVP is giving users a better experience. I've had a huge downturn in reported errors with Yesod since I stopped strictly following the PVP. It's anecdotal, but everything in this thread is really anecdotal.

Michael
Erik Hesselink | 26 Feb 11:50 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 10:56 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
> On Wed, Feb 26, 2014 at 10:36 AM, John Lato <jwlato <at> gmail.com> wrote:
>> Upper bounds lead to a better user experience.
>
> That's what I'm disagreeing with. I do not believe that, overall, the PVP is
> giving users a better experience. I've had a huge downturn in reported
> errors with Yesod since I stopped strictly following the PVP. It's
> anecdotal, but everything in this thread is really anecdotal.

But you also tell people to use stackage and yesod-platform, which
fixes a lot of packages to a specific version, IIRC. That means that
not having upper bounds is kind of moot.

As a counter-anecdote, almost all build-related problems we've had in
the past year have been (http-)conduit or tls related, due to the lack
of upper bounds.

Erik
Michael Snoyman | 26 Feb 11:55 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 12:50 PM, Erik Hesselink <hesselink <at> gmail.com> wrote:
> [earlier quoted text snipped]
> But you also tell people to use stackage and yesod-platform, which
> fixes a lot of packages to a specific version, IIRC. That means that
> not having upper bounds is kind of moot.

That's a fair point, I left out X factors in this analysis. So I'll say something else instead: back when I followed strict PVP compliance, I still got a lot of reports of broken builds, and my maintenance overhead was very high. Since I dropped PVP compliance and implemented alternative solutions, the reports I've received have gone down dramatically, and I spend far less time on maintenance.

So color me unconvinced that the PVP really made a big difference in users' experiences.

Michael
MightyByte | 26 Feb 17:21 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 4:56 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
>
> On Wed, Feb 26, 2014 at 10:36 AM, John Lato <jwlato <at> gmail.com> wrote:
>> [John's third use case example, quoted in full above, snipped]

There's another simpler example here where foo-0.2 breaks something and
because bar has no upper bound, awesomeApp's build spontaneously fails.  I
don't think it's reasonable to require that package authors specify dependency
bounds for the transitive closure of things they depend on.  That's
unintuitive and it doesn't scale well as the number of packages and average
number of dependencies in a single package increases.  The solver needs to
know that foo-0.2 is not known to work with bar.  We should not throw away
that information.  I've already encountered several cases complex enough that
cabal could not find a solution even though some existed.  We need to bound
this problem with as many tools as we can.

>
> IIUC, this is *exactly* the case of an unmaintained package. I'm not
> advocating leaving a package like bar-0.1 on Hackage without an upper bound
> on foo, if it's known that it breaks in that case. In order for the package
> to be properly maintained, the maintainer would have to (1) make bar work
> with foo-0.2, or (2) add an upper bound. So to me, this falls squarely into
> the category of unmaintained.

Calling that an unmaintained package is intellectually dishonest.  Any
reasonable definition of "unmaintained" will acknowledge that there will
always exist some nonzero amount of time between the time foo is updated and
the time bar and awesomeApp can be fixed without claiming that they are
unmaintained.  This is a fact of the reality we live in.  I strongly believe
that packages should not break spontaneously because of the actions of others.

> If package maintainers are not going to be responsive to updates in the
> Hackage ecosystem, then I agree that they should use the PVP. I also think
> they should advertise their packages as not being actively maintained, and
> people should try to avoid using them if possible. But if an author is
> giving quick updates to packages, I don't see a huge benefit to the PVP for
> users, and instead see some downsides (inability to test against newer
> dependencies), not to mention the much higher maintenance burden for library
> authors.

I'm sorry, but this is completely ridiculous.  There are life situations that
happen that preclude package updates.  And that doesn't mean that the package
is unmaintained.

Here's the other side of the story.  A large majority of snap's breakages have
been caused by dependencies that don't supply upper bounds such as
clientsession, cipher-aes, http-conduit, etc.  In fact, we switched from
http-conduit to http-streams precisely because of this problem.

If package maintainers are not going to be responsible and put proper upper
bounds for all their dependencies, then I think they should advertise their
packages as chronically unstable and likely to break anyone else that uses
them at a moment's notice.  People should avoid using them if possible.

The downsides that you're claiming aren't the fault of the PVP.  They are a
deficiency in our tooling, and fully locking down the transitive closure is
not a good solution.  What we need is a way to differentiate known bad upper
bounds from unverified upper bounds.  It could be something as simple as
allowing <! and < in the cabal file.  Then making the solver aware of this
distinction, so it can search the dependency graph in a more informed manner.
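
(A rough sketch, in types, of the distinction being proposed; this is my own reading of the idea, not a worked-out design:)

    type Ver = [Int]

    -- The two flavours of upper bound a solver could be taught to track:
    data UpperBound
      = KnownBad   Ver  -- "<!" : this version is known to break the package
      | Unverified Ver  -- "<"  : merely untested beyond this point

    -- Unverified bounds could act as soft constraints, relaxed only when
    -- no plan exists otherwise; KnownBad bounds would never be crossed.
    admissible :: Bool -> UpperBound -> Ver -> Bool
    admissible relaxed bound v = case bound of
      KnownBad   limit -> v < limit
      Unverified limit -> relaxed || v < limit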

> That's what I'm disagreeing with. I do not believe that, overall, the PVP is
> giving users a better experience. I've had a huge downturn in reported
> errors with Yesod since I stopped strictly following the PVP. It's
> anecdotal, but everything in this thread is really anecdotal.

I suspect that the root cause for you holding this position is that you happen
to be operating in a relatively self-contained corner of the package space and
don't often depend on fast-moving packages that aren't under your control.
Michael Snoyman | 26 Feb 20:19 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 6:21 PM, MightyByte <mightybyte <at> gmail.com> wrote:
> I suspect that the root cause for you holding this position is that you happen
> to be operating in a relatively self-contained corner of the package space and
> don't often depend on fast-moving packages that aren't under your control.

Choosing this point in the thread to respond to a few points, for no particular reason.

I'm willing to reconsider my stance on the PVP, and a lot of the feedback here is making me rethink some decisions. Let me take a different stab at the problem.

I think the main problem with PVP right now is one of *scope*. I see it as trying to address two different issues:

1. Make sure that any code that anyone ever wrote will continue to build in the future.
2. Provide guidelines to package authors to make sure that `cabal install foo` works reliably, regardless of what's happened on the rest of Hackage.

It's (1) that I take huge issue with, because (as I've been trying to demonstrate) I don't see any way that a policy like PVP could ever fully solve this problem. And more to the point: I think "my code builds" isn't the goal that we should be striving for anyway, it's the goal of reproducible builds. I simply don't think it's worthwhile to push this burden onto package maintainers, as the only realistic solution is dependency freezing, which does in fact fully solve the problem.

But the feedback I'm reading on (2) is making me reconsider my stance. I have to say though, as much as people in this thread are saying the PVP will solve these problems, I'm highly skeptical, since Yesod *did* in fact follow the PVP very strictly for years, and the result was people complaining about broken builds on a regular basis. It's entirely possible though that this was the fault of cabal-install's older dependency solver, and if Yesod switched over to the PVP now, things would be better.

I still have some concerns about the PVP's stance on a few issues, namely:

1. The upper bounds on non-upgradable packages, which I've raised elsewhere in this thread.
2. The lack of flexibility in interpretation. I think there's a qualitative difference between depending on the text package by simply importing Data.Text (Text) and using some experimental features of a version 0.1 library.

So I suppose if those points were addressed, a lot of my PVP skepticism would disappear, and I'd be more inclined to come back into the fold, so to speak.

My question is: am I alone in thinking these are in fact issues with the PVP? Is anyone else willing to embark on a more significant overhaul of the policy?

Michael
Roman Cheplyaka | 26 Feb 20:38 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

* Michael Snoyman <michael <at> snoyman.com> [2014-02-26 21:19:25+0200]
> I still have some concerns about the PVP's stance on a few issues, namely:
> 
> 1. The upper bounds on non-upgradable packages, which I've raised elsewhere
> in this thread.
> 2. The lack of flexibility in interpretation. I think there's a qualitative
> difference between depending on the text package by simply importing
> Data.Text (Text), versus using some experimental features of a version 0.1
> library.
> 
> So I suppose if those points were addressed, a lot of my PVP skepticism
> would disappear, and I'd be more inclined to coming back into the fold, so
> to say.
> 
> My question is: am I alone in thinking these are in fact issues with the
> PVP?

No, you're not alone. And here are two other issues with PVP and upper
bounds:
http://ro-che.info/articles/2013-10-05-why-pvp-doesnt-work.html

Roman
Yitzchak Gale | 26 Feb 21:04 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman wrote:
>> I still have some concerns about the PVP's stance on a few issues, namely:
>>
>> 1. The upper bounds on non-upgradable packages, which I've raised elsewhere
>> in this thread.
>> 2. The lack of flexibility in interpretation. I think there's a qualitative
>> difference between depending on the text package by simply importing
>> Data.Text (Text), versus using some experimental features of a version 0.1
>> library.
>>
>> So I suppose if those points were addressed, a lot of my PVP skepticism
>> would disappear, and I'd be more inclined to coming back into the fold, so
>> to say.
>>
>> My question is: am I alone in thinking these are in fact issues with the
>> PVP?

Roman Cheplyaka wrote:
> No, you're not alone. And here are two other issues with PVP and upper
> bounds:
> http://ro-che.info/articles/2013-10-05-why-pvp-doesnt-work.html

I appreciate those concerns, but they are all concerns about
being able to build packages, not about the semantics of the PVP.

Trouble with building is being addressed by improvements in cabal,
combining better algorithms with better quick and easy ways
to intervene manually. I now find that even when things go very wrong -
complex dependency SATs, errors in package versions or dependency
bounds in other people's packages, etc. - I can usually get a successful
build quite quickly, and I hope this will only continue to improve.

I agree 100% with Michael's #2 - dependency bounds ranges are a way
of expressing the package author's best assessment of what is needed,
and what will likely be needed in the foreseeable future, for the package
to build. They don't need to follow any hard and fast rules. But yes,
an upper bound is called for in most cases.

Roman is right that it is important for package authors to do the
best they can to support the latest versions of their dependencies.
But even when they do that, upper bounds are not useless. They are
important for people using your packages who need to support
a product version over time. And they are important semantic
information that you can easily make available for use by
current and future build tools.

Thanks,
Yitz
Yitzchak Gale | 26 Feb 20:45 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman wrote:
> 1. Make sure that any code that anyone ever wrote will continue to build in
> the future.
> 2. Provide guidelines to package authors to make sure that `cabal install
> foo` works reliably, regardless of what's happened on the rest of Hackage.
>
> It's (1) that I take huge issue with, because (as I've been trying to
> demonstrate) I don't see any way that a policy like PVP could ever fully
> solve this problem.

As a commercial shop that must reproduce builds,
we have wasted many many hours in the past on
cabal hell. Almost all of it was caused by authors
of packages we depend on omitting upper bounds.

I am aware that some people have other needs,
and have wasted their cabal hell time on the opposite
problem.

However, now the amount of time we waste on cabal
hell, though non-zero, is much less. The reason is better
ways in cabal to tweak cabal's build plan manually and
reproduce winning build plans in the future, such as
local cabal.config files. Whatever time we still do waste
is still caused by people not being careful about
upper bounds. And I believe that people with the
opposite problem are wasting less and less time
on cabal hell, too.

As better tools continually become available, it
becomes less disastrous when dependency
version bounds are not exactly right. But on the
other hand, those bounds are critically important
semantic information about a package that make
a huge contribution to the potential quality that
build tools can achieve. And only the package author
can easily provide that information.

So in my opinion, the proven, working, way forward is:

1. Continue to improve cabal's build plan tools, such
as cabal freeze. And yes, cabal-timemachine would
be cool :)

2. Continue to adhere to PVP. Package authors should
do the best they can to guess the range of dependency
versions that their package is very likely to build with.

So for example:

For libraries like the tagged library, or the deepseq library,
which haven't changed in any essential way over any number
of major version bumps, go ahead and omit the upper
bound.

For most vanilla packages, try to be as accurate as you
can about the lower bound (without going crazy), and
use a two-component upper bound.

For base, if you are not using fancy new GHC features,
minor version bumps are unlikely to break your package,
but a major bump of base (if that will ever happen again)
is likely to break quite a few packages and maybe yours.

For maintainers like Edward with a huge burden - well,
I'm sure Edward will figure out the most reasonable
and helpful thing to do in his situation. Perhaps higher
powered tools, like some variation on cabal-bounds,
would be helpful for him.

Etc.

Thanks,
Yitz
Michael Snoyman | 26 Feb 21:00 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 9:45 PM, Yitzchak Gale <gale <at> sefer.org> wrote:
> [Yitz's message, quoted in full above, snipped]
> For libraries like the tagged library, or the deepseq library,
> which haven't changed in any essential way over any number
> of major version bumps, go ahead and omit the upper
> bound.


Actually, that's not going to help anyone much. If the library really is completely stable, then who cares if I have to change an upper bound when the package never changes?

The real issue is packages with a largely stable subset, and some other part that's still changing. The two prime examples of this are text and bytestring. Both of them expose an incredibly stable core API, which is what most people use. Occasionally, there is a new feature added, or some more obscure feature changes somehow. But likely 95% of packages depending on these two will never be affected by those changes.

Requiring every usage of text to have an upper bound stratifies Hackage into (1) packages that depend on the new features, and (2) PVP-adhering packages whose authors haven't updated their dependencies yet.

With my Stackage hat on, this is by far the most time-consuming issue I have to deal with. base squarely falls in this category, since the vast majority of it doesn't change between releases. It happens to *also* fall in the category of non-upgradeable packages.
 
> [...]

> For base, if you are not using fancy new GHC features,
> minor version bumps are unlikely to break your package,
> but a major bump of base (if that will ever happen again)
> is likely to break quite a few packages and maybe yours.


To be clear, base has a major version bump every release of GHC. According to the PVP, the first two components of the version number constitute the major version, while the third number is the minor version.
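
(For concreteness, the rule being described, as a toy function:)

    type Ver = [Int]

    majorOf :: Ver -> Ver
    majorOf = take 2     -- the A.B of A.B.C, per the PVP

    isMajorBump :: Ver -> Ver -> Bool
    isMajorBump old new = majorOf old /= majorOf new

    -- isMajorBump [4,6,0,1] [4,7,0]   == True   (e.g. base 4.6.0.1 -> 4.7.0)
    -- isMajorBump [1,0,0,0] [1,0,0,1] == False  (a minor bump only)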
 

John Lato | 26 Feb 21:01 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 1:56 AM, Michael Snoyman <michael <at> snoyman.com> wrote:

> [earlier discussion, quoted in full above, snipped]
> IIUC, this is *exactly* the case of an unmaintained package. I'm not advocating leaving a package like bar-0.1 on Hackage without an upper bound on foo, if it's known that it breaks in that case. In order for the package to be properly maintained, the maintainer would have to (1) make bar work with foo-0.2, or (2) add an upper bound. So to me, this falls squarely into the category of unmaintained.

I disagree.  I think it's unreasonable to expect that maintainers provide 24/7 availability and near-immediate maintenance releases when updated deps are released.  And in the meantime (which may be days, or even a couple weeks for a single-maintainer who might be on holiday), there's plenty of time for this to bite hard.  In the past, this meant that broken packages would remain available on hackage for a long time.  At least now the package maintainers can do so themselves, but it still means that broken packages have escaped into the wild, which is also bad.
 

> Let me relax my position just a bit. If package maintainers are not going to be responsive to updates in the Hackage ecosystem, then I agree that they should use the PVP. I also think they should advertise their packages as not being actively maintained, and people should try to avoid using them if possible. But if an author is giving quick updates to packages, I don't see a huge benefit to the PVP for users, and instead see some downsides (inability to test against newer dependencies), not to mention the much higher maintenance burden for library authors.

What do you consider "responsive"?  2 hours?  24?  1 week?  I suspect that you have IMO unrealistic expectations of maintainers, so I think it would be good to get into specifics.
Michael Snoyman | 26 Feb 21:13 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 10:01 PM, John Lato <jwlato <at> gmail.com> wrote:
On Wed, Feb 26, 2014 at 1:56 AM, Michael Snoyman <michael <at> snoyman.com> wrote:

On Wed, Feb 26, 2014 at 10:36 AM, John Lato <jwlato <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 11:11 PM, Michael Snoyman <michael <at> snoyman.com> wrote:

On Wed, Feb 26, 2014 at 8:03 AM, John Lato <jwlato <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <michael <at> snoyman.com> wrote:


On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <mightybyte <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
>
> I'm not saying this is not painful, but i've done it in the past, and using
> dichotomy and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig old unused code.

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.



I think we need to make these cases more concrete to have a meaningful discussion. Between Doug and Gregory, I'm understanding two different use cases:

1. Existing, legacy code, built again some historical version of Hackage, without information on the exact versions of all deep dependencies.
2. Someone starting a new project who wants to use an older version of a package on Hackage.

If I've missed a use case, please describe it.

For (1), let's start with the time machine game: *if* everyone had been using the PVP, then theoretically this wouldn't have happened. And *if* the developers had followed proper practice and documented their complete build environment, then PVP compliance would be irrelevant. So if we could go back in time and twist people's arms, no problems would exist. Hurray, we've established that 20/20 hindsight is very nice :).

But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisecter in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?

This scenario is never better than PVP compliance.  First of all, the user may want some packages that are newer than the timestamp, which this wouldn't support.  As people have already mentioned, it's entirely possible for valid install graphs to exist that cabal will fail to find if it doesn't have upper bound information available, because it finds other *invalid* graphs.

And even aside from that issue, this would push the work of making sure that a library is compatible with its dependencies onto the library *users*, instead of the developer, where it rightfully belongs (and your proposal ends up pushing even more work onto users!).

Why do you think it's acceptable for users to do the testing to make sure that your code works with other packages that your code requires?

You're not at all addressing the case I described. The case was a legacy project that someone is trying to rebuild. I'm not talking about any other case in this scenario. To repeat myself:

> 1. Existing, legacy code, built again some historical version of Hackage, without information on the exact versions of all deep dependencies.

In *that specific case*, why wouldn't having a tool to go back in time and build against a historical version of Hackage be *exactly* what you'd need to rebuild the project?

I had understood people talking about "legacy projects" to mean something other than how you read it.  In which case, I would suggest that there is a third use case, which IMHO is more important than either of the use cases you have identified.  Here's an example:

1.  package foo-0.1 appears on hackage
2.  package bar-0.1 appears on hackage with a dependency on foo >= 0.1
3.  awesomeApp-0.1 appears on hackage, which depends on bar-0.1 and text>=1.0
4.  users install awesomeApp
5.  package foo-0.2 appears on hackage, with lots of breaking changes
6.  awesomeApp users notice that it sometimes breaks with Hungarian characters, and the problem is traced to an error in text
7.  text-1.0.0.1 is released with some bug fixes
8.  awesomeApp users attempt to do cabal update; cabal install, which fails inscrutably (because it tries to mix foo-0.2 with bar-0.1)

There's nothing in this situation that requires any of these packages be unmaintained.  The problem is that, rather than wanting to reproduce a fixed set of package versions (which cabal already allows for if that's really desired), sometimes it's desirable that updates be held back in active code bases.  Replace "foo" with "QuickCheck" for example (where for a long time users stayed with quickcheck2 because version 3 had major performance regressions in certain use cases).
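
To make the failure in step 8 concrete, this is roughly what bar's cabal file would declare under the PVP (package names from the scenario above; the exact version numbers are illustrative):

    -- bar.cabal (illustrative)
    library
      exposed-modules: Bar
      -- the upper bound on foo is what keeps the solver from pairing
      -- bar-0.1 with the breaking foo-0.2 in step 8
      build-depends:
          base >= 4.6 && < 4.8
        , foo  >= 0.1 && < 0.2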

This sort of conflict used to happen *all the time*, and it's very frustrating to users (because something worked before, now it's not working, and they're not generally in a good position to know why).  It's annoying to reproduce because the install graph cabal produces depends in part on the user's installed packages.  So just because something builds on a developer's box doesn't mean that it would build on the user's box, or it would work for some users but not others (sandboxing has at least helped with that problem).


IIUC, this is *exactly* the case of an unmaintained package. I'm not advocating leaving a package like bar-0.1 on Hackage without an upper bound on foo, if it's known that it breaks in that case. In order for the package to be properly maintained, the maintainer would have to (1) make bar work with foo-0.2, or (2) add an upper bound. So to me, this falls squarely into the category of unmaintained.

I disagree.  I think it's unreasonable to expect that maintainers provide 24/7 availability and near-immediate maintenance releases when updated deps are released.  And in the meantime (which may be days, or even a couple of weeks for a single maintainer who might be on holiday), there's plenty of time for this to bite hard.  In the past, this meant that broken packages would remain available on hackage for a long time.  At least now the package maintainers can fix that themselves, but it still means that broken packages have escaped into the wild, which is also bad.
 

Let me relax my position just a bit. If package maintainers are not going to be responsive to updates in the Hackage ecosystem, then I agree that they should use the PVP. I also think they should advertise their packages as not being actively maintained, and people should try to avoid using them if possible. But if an author is giving quick updates to packages, I don't see a huge benefit to the PVP for users, and instead see some downsides (inability to test against newer dependencies), not to mention the much higher maintenance burden for library authors.

What do you consider "responsive"?  2 hours?  24?  1 week?  I suspect that you have IMO unrealistic expectations of maintainers, so I think it would be good to get into specifics.

I think I just realized the assumption I've been making which skews my view of things a bit here. I think it's a terrible idea that we're telling new users to go onto Hackage, type cabal install foo, and hope for the best. Not just because of build plan issues. Who knows if the newest version of a package actually works as advertised? There's been no testing period at all, no vetting. This is one of the primary reasons for yesod-platform: it gives users a starting point with a sane build plan and at least some degree of integration testing (though I'll admit that the level of testing isn't nearly to what I'd like it to be).

If the constraints on the system are:

1. Any user at any time must be able to go onto Hackage and cabal install foo.
2. There can be no point at which foo doesn't install.

Then requiring preemptive upper bounds is the only solution. I simply don't like those constraints, and think we're approaching the problem wrong as a community.

There's one other constraint that I'm trying to optimize for, which is "when a user runs `cabal install foo bar`, there's a high probability that a sane build plan exists." This is the constraint which the PVP hinders.

Michael
Gregory Collins | 26 Feb 16:43 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


On Wed, Feb 26, 2014 at 12:36 AM, John Lato <jwlato <at> gmail.com> wrote:
8.  awesomeApp users attempt to do cabal update; cabal install, which fails inscrutably (because it tries to mix foo-0.2 with bar-0.1)

There's nothing in this situation that requires any of these packages be unmaintained.  The problem is that, rather than wanting to reproduce a fixed set of package versions (which cabal already allows for if that's really desired), sometimes it's desirable that updates be held back in active code bases

Not to mention, if I maintain "bar", I can basically never go on vacation, because the dude who maintains "foo" can push a new update and break all my users any time.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Alain O'Dea | 26 Feb 12:47 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Feb 26, 2014, at 7:11, Michael Snoyman <michael <at> snoyman.com> wrote:




On Wed, Feb 26, 2014 at 8:03 AM, John Lato <jwlato <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <michael <at> snoyman.com> wrote:


On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <mightybyte <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
>
> I'm not saying this is not painful, but i've done it in the past, and using
> dichotomy and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig old unused code.

For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where- despite the valid security and bug concerns Vincent raised- it's likely not worth updating to the latest and greatest.

Usually the case is not that somebody *wants* to use an older version of package 'foo', it's that they're using some package 'bar' which hasn't yet been updated to be compatible with the latest 'foo'.  There are all sorts of reasons this may happen, including big API shifts (e.g. parsec2/parsec3, openGL), poor timing in a maintenance cycle, and the usual worldly distractions.  But if packages have upper bounds, the user can 'cabal install', get a coherent package graph, and begin working.  At the very worst, cabal will give them a clear lead as to what needs to be updated/who to ping.  This is much better than the situation with no upper bounds, where a 'cabal install' may fail miserably or even put together code that produces garbage.

And again, it's the library *user* who ends up having to deal with these problems.  Upper bounds lead to a better user experience.

I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.

Michael

This is not a new problem.

Java users faced it with Maven and it was solved by curation of Maven Central and the ability to add outside repositories as needed.

Node.js users faced it with NPM and solved it with dependency freezing.

Ruby users faced it with Gem and solved it with dependency freezing.

I imagine there is a world of different solutions to this problem.  The PVP isn't a complete solution, but I consider it to be a sensible baseline (like code style conventions and warning-free builds), and it appears to me to be in line with best practices from packaging systems of many other languages.

What follows is my opinion, and it comes from a position of relative inexperience with Haskell and considerable experience operating in other language communities.

I feel that the PVP should be encouraged and violations should be considered bugs. Users and concerned community members should report them to maintainers.  I support having a stable set of packages curated as it serves an immediate need, possibly with an alternate that does PVP-only/gated curation.  I believe Hackage should continue to exist as is (without gated curation) to facilitate availability and sharing of new libraries.  I think standard package dependency freezing metadata and tools should be defined and users encouraged to employ them.

One way or another -- as a user of Haskell -- I would benefit significantly from standard answers to these problems and good examples from the community leadership to follow.

Best,
Alain
Michael Snoyman | 26 Feb 13:05 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 1:47 PM, Alain O'Dea <alain.odea <at> gmail.com> wrote:
This is not a new problem.

Java users faced it with Maven and it was solved by curation of Maven Central and the ability to add outside repositories as needed.

Node.js users faced it with NPM and solved it with dependency freezing.

Ruby users faced it with Gem and solved it with dependency freezing.


You've presented three examples of other languages solving this problem using the two techniques I've been advocating throughout this thread: curation and dependency freezing. Is there an example of a language that took an approach like the PVP and succeeded in solving it?
 
I imagine there is a world of different solutions to this problem.  The PVP isn't a complete solution, but I consider it to be a sensible baseline (like code style conventions and warning-free builds), and it appears to me to be in line with best practices from packaging systems of many other languages.

What follows is my opinion, and it comes from a position of relative inexperience with Haskell and considerable experience operating in other language communities.

I feel that the PVP should be encouraged and violations should be considered bugs. Users and concerned community members should report them to maintainers.

Please, please, please don't actually encourage this. There are many things which I consider bad practice in Haskell code. I don't open up bug reports against each package that disagrees with me. If a package on Hackage in fact does *not* build with some dependency it claims to build against, that's a perfectly reasonable thing to report (and I do so on a regular basis via Stackage). But insisting that people add upper bounds when they've clearly stated they do not want to is crossing the line.

Michael
Carter Schonwald | 26 Feb 16:00 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Can we all zoom out for a moment?

Can we turn this into a technology problem we can solve, rather than an anthropological issue?

What about experimentally adding some sort of "module type signature" tooling to cabal/cabal-install (and perhaps eventually GHC)?

A lot of our package version problems stem from our untyped module system. We currently don't have any static way of reasoning about / checking whether two packages of modules are actually intercompatible without actually compiling them! But the information is there! Yes, we need to figure out how such a design would support type classes gracefully (in a way we can all be happy with), but that's just a bullet point, not a barrier.
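
To make the idea concrete, here is what such a "module type signature" might look like. The syntax below is invented purely for illustration (nothing like it shipped with GHC at the time); the point is that it is the kind of interface two packages could be checked against without compiling either:

    -- hypothetical signature syntax, for illustration only
    signature Data.Foo where
      data Foo                  -- abstract: any representation is fine
      mkFoo  :: Int -> Foo
      runFoo :: Foo -> Int

    -- a package could then declare that it *provides* or *requires*
    -- Data.Foo at this signature, and tooling could check, per version
    -- range, that provider and consumer still agree on the interface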

There are some smart folks who've started exploring the design space in their research (it's a huge design space), but perhaps we as a community should actually commit to a "k-year plan, for some finite k <= 5" to work out typeful tooling for this recurrent library tooling pain, which requires that WE EXPERIMENT :)

Seriously, let's stop focusing on the symptom and do what we do best: collaborate to build tools that systematically improve all of our respective approaches ('cause let's be honest, I don't think the camps here are going to change, and tribalism never helps anyone).

Basically everyone's correct and wrong, because it's such a darn high-dimensional problem that none of us have the time to correctly communicate the full nuances of the respective stances.

As always, sending an email to the libraries list when there have been this many people on the thread is a dangerous thing, but sometimes danger is my middle name.

I imagine several of you are writing up Haskell module type system papers for ICFP when not opining on this thread (not really, but I wish!)

Let me start this (hopefully useful) aside with some thoughts:


1) Pinning package deps to a fixed version is a social construct for encoding "I want this exact set of operations with these semantics".

2) Saying I want dependency P to satisfy version range ">= A && < B" is a way of saying "assuming my understanding of the code semantics and module types exposed over this version range is correct, my code can work correctly over this range" (a mechanical reading of the range check follows this list).

3) Even if we had decent module types and ways of saying "this code works over this set of interfaces over this version range", versions and things like the PVP would still have value in communicating when a package is likely to be the same or different.

4) cabal does need a proper SMT-solver tool for handling version constraints as is; that's a totally unrelated problem, but another fun one that's actually resolvable with technology.
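
As a sketch of what the range check in point 2 means mechanically (names are illustrative; versions are modeled as component lists, whose lexicographic Ord matches how Cabal orders versions):

    newtype Version = Version [Int] deriving (Eq, Ord, Show)

    -- ">= lo && < hi" is just two lexicographic comparisons
    inRange :: Version -> (Version, Version) -> Bool
    inRange v (lo, hi) = v >= lo && v < hi

    -- e.g. inRange (Version [0,1,3]) (Version [0,1], Version [0,2]) == True
    --      inRange (Version [0,2])   (Version [0,1], Version [0,2]) == False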

Seriously, let's all zoom out and ask "how can we come up with a roadmap for evolving Haskell tooling that solves the underlying need this whole thread is indirectly about?" And no, it needs to be a tool that doesn't pin everything to a single version; that's for APPs, not LIBRARIES :)

-Carter



Gregory Collins | 26 Feb 16:45 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


On Wed, Feb 26, 2014 at 3:47 AM, Alain O'Dea <alain.odea <at> gmail.com> wrote:
I imagine there is a world of different solutions to this problem.  The PVP isn't a complete solution, but I consider it to be a sensible baseline (like code style conventions and warning-free builds), and it appears to me to be in line with best practices from packaging systems of many other languages.

Like "semantic versioning" (http://semver.org/) which is very well regarded by people I talk to.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Brandon Allbery | 26 Feb 17:01 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 6:47 AM, Alain O'Dea <alain.odea <at> gmail.com> wrote:
Ruby users faced it with Gem and solved it with dependency freezing.

Ruby didn't solve it for anyone but developers. End users are required to build what amounts to a separate Ruby installation using RVM for every Ruby application, because every application works with only its curated set of gems.

This is, quite simply, nonsense. You've pushed the whole problem into the user's lap and required them to do a lot of extra work for every application they want to use. This "solution" is exactly the thing I want to *avoid*.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Brandon Allbery | 26 Feb 16:57 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 2:11 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.

And how much of this is because you use your own versioning rules instead of what everyone else uses, thereby almost guaranteeing conflicts with everyone else when anything is updated? Seriously, the easiest way to get in trouble with the current system is to insist you get to not follow it, because stuff breaks. Asserting it's everyone else's fault isn't a solution.

Seriously, if everyone follows your versioning system, hsenv becomes essential just as RVM is essential in the Rails community; everyone has to provide their own curated system (your Stackage isn't going to serve everyone's needs, and asking you to make it serve everyone's needs is unreasonable and quite likely an impossible task when everyone else is also doing their own thing) and users have to maintain separate ecosystems for everything they use. This isn't maintainable for much of anyone.

(I see someone else later on actually advocated this as a solution. It's not. It's an ongoing maintenance disaster that leads to unmanageable systems.)

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Michael Snoyman | 26 Feb 17:06 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)



On Wednesday, February 26, 2014, Brandon Allbery <allbery.b <at> gmail.com> wrote:
On Wed, Feb 26, 2014 at 2:11 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.

And how much of this is because you use your own versioning rules instead of what everyone else uses, thereby almost guaranteeing conflicts with everyone else when anything is updated? Seriously, the easiest way to get in trouble with the current system is to insist you get to not follow it, because stuff breaks. Asserting it's everyone else's fault isn't a solution.


Given that yesod followed the PVP strictly for a number of years and still faced a large number of dependency issues, I don't think the problem is in the lack of PVP support.

The most recent example of this issue was a user trying to use GHC 7.8 and getting an error message about an upper bound on base in the esqueleto package. I'm sure most of us on this mailing list would consider this an easy error message, but at least one of our mythical end users was seriously confused by this situation. It turned out that simply dropping the upper bound was sufficient.
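
Schematically, what that user ran into (the bound below is invented for illustration, not quoted from esqueleto's actual cabal file):

    -- the package's .cabal file, schematically:
    build-depends: base >= 4.5 && < 4.7
    -- GHC 7.8 ships base-4.7.0.0, so the solver rejects every install
    -- plan and prints a bounds failure; after testing, relaxing the
    -- bound to "< 4.8" was the entire fix.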

So I come back to this being a numbers game. End users don't necessarily find cabal version error messages any better than ghc build failures. The question is how can we minimize any kind of build failure.

Michael

Seriously, if everyone follows your versioning system, hsenv becomes essential just as RVM is essential in the Rails community; everyone has to provide their own curated system (your Stackage isn't going to serve everyone's needs, and asking you to make it serve everyone's needs is unreasonable and quite likely an impossible task when everyone else is also doing their own thing) and users have to maintain separate ecosystems for everything they use. This isn't maintainable for much of anyone.


Actually, why wouldn't Stackage as a curated system work for everyone? Stackage already has over 10% of Hackage covered, and I'd imagine if you look at Hackage download numbers, that covers the vast majority of the most downloaded packages. I really do think that a curated system could make most people happy, if it gets enough buy-in.
 
Carter Schonwald | 26 Feb 17:35 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael,
respectfully, I've had the experience of trying to build a yesod app (git-annex) in a cabal sandbox, and I had a version conflict between your own libraries. The only way I could build git-annex was by disabling the web-gui yesod piece. The key piece here is that your libraries conflicted with themselves. That's on you.

Likewise, 9 times out of 10, when someone tries Haskell for the first time and tells me "cabal didn't work", I can guess "you tried to install yesod". They then say "yes", and then I have to explain to them that "yesod has notoriously fragile build constraints; even people who use yesod find it maddening depending on their use case".

Michael, respectfully, most version-constraint build failures are human-friendly; yours are Cthulhian.


I spend a wee bit of time every major GHC release helping patch all the various libs I use (including helping get pandoc ready for each new GHC). It's usually pretty easy: often just a teeny bit of CPP for an API change plus adjusting the cabal version range to be wider. I'm totally OK with doing that 1-2 times a year in exchange for nice things. I'd rather do a test-and-patch cycle every year than have surprise build failures.



Michael Snoyman | 27 Feb 06:58 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Wed, Feb 26, 2014 at 6:35 PM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
Michael,
respectfully, I've had the experience of trying to build a yesod app (git-annex) in a cabal sandbox, and I had a version conflict between your own libraries. The only way I could build git-annex was by disabling the web-gui yesod piece. The key piece here is that your libraries conflicted with themselves. That's on you.


If you have the details on when that happened, I'd be very curious to hear them. I thought my release process had ironed out those possibilities, but if not, I'd like to know about it. Which packages conflicted?
 
Likewise, 9 times out of 10, when someone tries Haskell for the first time and tells me "cabal didn't work", I can guess "you tried to install yesod". They then say "yes", and then I have to explain to them that "yesod has notoriously fragile build constraints; even people who use yesod find it maddening depending on their use case".


Are these people usually following the actual Yesod installation instructions? Does anyone tell them to follow the Yesod installation instructions? I created the yesod-platform because the PVP *wasn't* fixing this issue. That's why I'm so skeptical that "start using the PVP" would actually fix any of that.
 
Carter Schonwald | 27 Feb 09:05 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I have no clue; I'm just there dealing with the fallout of a confused new person who is convinced that cabal just doesn't work. And no fallout shelters in sight, so I have to provide quick solutions that work™ to prevent frustration radiation poisoning.

Why not just put all the modules in the same package? :)
I jest, BUT I have a real point in saying that.

Why should I have to use a wrapper package which pins all the constraints?


Michael Snoyman | 27 Feb 09:12 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Thu, Feb 27, 2014 at 10:05 AM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
I have no clue; I'm just there dealing with the fallout of a confused new person who is convinced that cabal just doesn't work. And no fallout shelters in sight, so I have to provide quick solutions that work™ to prevent frustration radiation poisoning.

Why not just put all the modules in the same package? :)
I jest, BUT I have a real point in saying that.

Why should I have to use a wrapper package which pins all the constraints?


That's a great question. It seems to be the question no one's able to answer, myself included. As I keep saying, Yesod went through years of PVP compliance, and users were constantly having build problems. The culprit may have been the cabal-install dependency solver prior to 0.14, in which case that problem is now resolved. I'm not sure. I know that the new dependency solver used Yesod as a stress test case.

Releasing yesod-platform was pragmatic on two fronts:

1. It guaranteed a sane build plan when cabal (for whatever reason) couldn't determine one.
2. It prevents users from getting a completely untested set of packages, hopefully insulating them from some of the turbulence of Hackage.
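
Concretely, yesod-platform amounts to a meta-package of exact pins, schematically (the version numbers below are invented for illustration):

    -- yesod-platform.cabal, schematically:
    library
      build-depends:
          yesod      == 1.2.5
        , persistent == 1.3.0
        , wai        == 2.1.0
        -- ...and so on: one exact pin per transitive dependency,
        -- the whole set having been tested together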

I think you need to dial back your assumptions here. Your initial email made it sound like I'd broken the Yesod stack single-handedly a bunch of times recently. That worries me. If that actually happened, please explain how it happened. If users are getting build errors when they don't use yesod-platform, well, that's something we've known about in the Yesod community for a long time. The PVP didn't solve it, yesod-platform is a hack which fixes most of the end-user issue, and I'd love to get to a real solution.

Michael
Carter Schonwald | 27 Feb 09:49 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Ah, pardon the implication (one challenge when helping people new to Haskell is that I don't often get the console logs that were part of their "cabal ").

So we have a mix of concerns, some of which are a historical artifact of cabal having been terrible in the past; now it's not (quite) as bad, but far from perfect.

Things I think would help on all fronts, which I think we (as a community) actually need to figure out how to allocate manpower to:

a) improve the solver tooling. Get some sort of SMT solver ACTUALLY in cabal.

The constraint plans aren't that complicated compared with SMT solver benchmarks!
I know of several folks who've done their own internal hacked-up cabal to do this, and I've heard rumors that *someone* (don't know )

b) proactively work on tooling to improve how constraint failures are reported (though perhaps having the SMT solver tooling would address this).

It'd be very, very helpful to be able to identify the *minimal* set of constraints + deps that create a conflict, along with explanations in the error that exhibit how the conflict in constraints arises.

I think everyone agrees these are valuable; it's just that it requires someone to be able to commit the time to actually doing it. (Which is the tricky bit.)



Greg Weber | 27 Feb 17:30 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I actually think work on the a) cabal solver has been a distraction from more pressing issues: the need for sandboxes (that is done now) and reproducible builds (frozen dependencies). If you look at Ruby's Bundler, which has been extremely successful, it has historically been a dumb tool in terms of its solver (maybe they have a better one now), and it works extremely well. I think 90+% of this conversation is pretty wasteful, because once we have reproducible builds everything is going to change. If the energy could be re-directed to being able to create reproducible builds in Haskell, then we could figure out what the next most important priority is.

Of course, I agree that better error messages like b) are always valuable.

yesod-platform is essentially a reproducible build and it has been able to fix dependency issues that previously seemed unfixable.
At my work we add a cabal.config to our projects to create reproducible builds, and everyone in industry does something similar for their applications.
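
For anyone unfamiliar with that trick: cabal-install reads extra constraints from a cabal.config file next to the project's .cabal file, so exact pins there freeze the build (the package names and versions below are illustrative):

    -- cabal.config (illustrative): cabal-install will pick exactly
    -- these versions for every build of this project
    constraints: text == 1.1.0.0,
                 persistent == 1.3.0,
                 esqueleto == 1.3.4.5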


Carter Schonwald | 27 Feb 17:38 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

I don't care. I want a better solver.  And I'm willing to help make it happen. 

On Thursday, February 27, 2014, Greg Weber <greg <at> gregweber.info> wrote:

I actually think work on the a) cabal solver has been a distraction from more pressing issues: the need for sandboxes (that is done now) and reproducible builds (frozen dependencies). If you look at Ruby's Bundler, which has been extremely successful, it has historically (maybe they have a better solver now) been a dumb tool in terms of its solver that works extremely well. I think 90+% of this conversation is pretty wasteful, because once we have reproducible builds everything is going to change. If the energy could be re-directed to being able to create reproducible builds in Haskell, then we could figure out what the next most important priority is.

Of course, I agree that better error messages like b) are always valuable.

yesod-platform is essentially a reproducible build and it has been able to fix dependency issues that previously seemed unfixable.
At my work we add a cabal.config to our projects to create reproducible builds, and everyone in industry does something similar for their applications.


On Thu, Feb 27, 2014 at 12:49 AM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
ah, pardon the implication (one challenge when helping people new to haskell is i don't often get the console logs that were part of their "cabal ").

So we have a mix of concerns, some of which are a historical  artifact of cabal having been terrible in the past, and now its not (quite) as bad, but far from perfect.

things I think would help on all fronts, and I think we (as a community) actually need to figure out how to allocate manpower to the following

a) improve the solver tooling. Get some sort of SMT solver ACTUALLY in cabal.

The constraint plans aren't that complicated compared with SMT solver benchmarks!
 I know of several folks who've done their own internal hacked up cabal to do this, and i've heard rumors that *someone* (don't know )

b) proactively work on tooling to improve how constraint failures are reported (though perhaps having the SMT solver tooling would address this).

It'd be very very helpful to be able to identify the *minimal* set of constraints + deps that create a conflict, and explanations in that error the exhibit how the conflict in constraints is created.

I think everyone agrees these are valuable, its just it requires someone to be able to commit the time to actually doing it. (which is the tricky bit)



On Thu, Feb 27, 2014 at 3:12 AM, Michael Snoyman <michael <at> snoyman.com> wrote:



On Thu, Feb 27, 2014 at 10:05 AM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
i have no clue, i'm just there dealing with the fallout of a confused new person who is convinced that cabal just doesn't work. and no fallout shelters in sight so I have to provide quick solutions that work™ to prevent frustration radiation poisoning.

why not just put all the modules in the same package?  :) 
I jest, BUT i have a real point in saying that.

why should i have to use a wrapper package which pins all the constraints?


That's a great question. It seems to be the question no one's able to answer, myself included. As I keep saying, Yesod went through years of PVP compliance, and users were constantly having build problems. It may have been the cabal-install dependency solver prior to 0.14, and now that problem is resolved. I'm not sure. I know that the new dependency solver used Yesod as a stress test case.

Releasing yesod-platform was pragmatic on two fronts:

1. It guaranteed a sane build plan when cabal (for whatever reason) couldn't determine one.
2. It prevents users from getting a completely untested set of packages, hopefully insulating them from some of the turbulence of Hackage.

I think you need to dial back your assumptions here. Your initial email made it sound like I'd broken the Yesod stack single-handedly a bunch of times recently. That worries me. If that actually happened, please explain how it happened. If users are getting build errors when they don't use yesod-platform, well, that's something we've known about in the Yesod community for a long time. The PVP didn't solve it, yesod-platform is a hack which fixes most of the end-user issue, and I'd love to get to a real solution.

Michael


Austin Seipp | 27 Feb 18:35 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Hi Greg,

On Thu, Feb 27, 2014 at 10:30 AM, Greg Weber <greg <at> gregweber.info> wrote:
> I actually think work on the a) cabal solver has been a distraction from
> more pressing issues: the need for sandboxes (that is done now) and
> reproducible builds (frozen dependencies). If you look at Ruby's Bundler,
> which has been extremely successful, it has historically (maybe they have a
> better solver now) been a dumb tool in terms of its solver that works
> extremely well. I think 90+% of this conversation is pretty wasteful,
> because once we have reproducible builds everything is going to change. If
> the energy could be re-directed to being able to create reproducible builds
> in Haskell, then we could figure out what the next most important priority
> is.

I'd like to carefully point out however, that it is not a zero-sum
game - work dedicated to improving the constraint solver is not work
which is implicitly taken away any other set of tools - like a
'freeze' command. There is no 'distraction' IMO - it is a set of
individuals (or companies, even) each with their own priorities. I
think this is the sign of a healthy community, actually - one that
places importance on its tools and seeks to find optimal ways to
improve them in a variety of ways. A freeze command and an improved
solver are both excellent (and worthy) improvements.

In reality, bundler works precisely for the reason you said it did: it
avoids all the actually difficult problems. But that comes at a cost,
because Bundler for example can't actually tell me when things *are*
going to break. If I bump my dependencies, create a new Gemfile lock,
and test - it could all simply explode later on at runtime, even if it
could have been concluded from the constraints that it was all invalid
in the first place. The only thing bundler buys me is that this
explosion won't potentially extend to the rest of my global
environment when it happens. Which is a good thing, truth be told, and
why it is so popular - otherwise this happens constantly.

Despite the fact people complain when cabal bails about reinstalls
breaking things - this is infinitely preferable to things 'working
now' and 'exploding later'. Even in a sandbox, or when you unfreeze
your dependencies to upgrade them. Work on improving the solver here
is not wasted IMO - and even the bundler developers are looking into
things like SAT solvers amongst many options, precisely because the
basic resolver has many bugs. Even if it's all safe in a sandbox:
https://github.com/bundler/bundler/issues/2437

These two concerns are, as far as I can see, in no way opposed in
spirit or practice, and suggesting one is essentially wasted effort
that distracts people - when I see no evidence of that - strikes me as
odd.

> Of course, I agree that better error messages like b) are always valuable.

I think that the only way to do that is to actually stress the solver
and integrate build reports into things like the Hackage server. Going
by hearsay from users for these issues is, in my experience,
tremendously difficult. Freezing every dependency to its exact
necessary state so it always works does not necessarily tell us
anything about this behavior in the large (besides "it does work"),
how the solver fails, how to report that in a sensible manner, or what
kind of problems or usage patterns which will arise in conjunction
with these things.

'cabal install' might as well be Prolog and tell me 'Yes' or 'No' if
things will work, if we're not interested in that stuff.

--

-- 
Regards,

Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
Greg Weber | 27 Feb 21:36 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)




On Thu, Feb 27, 2014 at 9:35 AM, Austin Seipp <austin <at> well-typed.com> wrote:
Hi Greg,

On Thu, Feb 27, 2014 at 10:30 AM, Greg Weber <greg <at> gregweber.info> wrote:
> I actually think work on the a) cabal solver has been a distraction from
> more pressing issues: the need for sandboxes (that is done now) and
> reproducible builds (frozen dependencies). If you look at Ruby's Bundler,
> which has been extremely successful, it has historically (maybe they have a
> better solver now) been a dumb tool in terms of its solver that works
> extremely well. I think 90+% of this conversation is pretty wasteful,
> because once we have reproducible builds everything is going to change. If
> the energy could be re-directed to being able to create reproducible builds
> in Haskell, then we could figure out what the next most important priority
> is.

I'd like to carefully point out however, that it is not a zero-sum
game - work dedicated to improving the constraint solver is not work
which is implicitly taken away any other set of tools - like a
'freeze' command. There is no 'distraction' IMO - it is a set of
individuals (or companies, even) each with their own priorities. I
think this is the sign of a healthy community, actually - one that
places importance on its tools and seeks to find optimal ways to
improve them in a variety of ways. A freeze command and an improved
solver are both excellent (and worthy) improvements.

I agree that it is not zero sum, but I do think that at some point the wrong priorities must have been chosen since I have to go to special effort to produce a consistent build. Also this is all getting mixed up with a lot of talk about PVP and other things whose relevance changes if the underlying installation machinery supports what every application developer should be doing.


In reality, bundler works precisely for the reason you said it did: it
avoids all the actually difficult problems. But that comes at a cost,
because Bundler for example can't actually tell me when things *are*
going to break. If I bump my dependencies, create a new Gemfile lock,
and test - it could all simply explode later on at runtime, even if it
could have been concluded from the constraints that it was all invalid
in the first place. The only thing bundler buys me is that this
explosion won't potentially extend to the rest of my global
environment when it happens. Which is a good thing, truth be told, and
why it is so popular - otherwise this happens constantly.

This wasn't my experience using bundler. Bundler supports conservative upgrades that create consistent packages. So if you want to upgrade something you place a range on it and ask Bundler to upgrade it. I don't doubt though that it may let you manually subvert the system.


These two concerns are, as far as I can see, in no way opposed in
spirit or practice, and suggesting one is essentially wasted effort
that distracts people - when I see no evidence of that - strikes me as
odd.


I think the Industrial Haskell Group supported work on a better solver, which was definitely helpful, but I just think it would have been wiser to support work on consistent builds first. I agree that they can be worked on independently.
Carter Schonwald | 27 Feb 22:00 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

cool, so is docmunch going to allocate some money or manpower to help out? :)


On Thu, Feb 27, 2014 at 3:36 PM, Greg Weber <greg <at> gregweber.info> wrote:
[...]
Greg Weber | 28 Feb 04:17 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

That was the plan. I spend a large amount of company time contributing back to open source. Originally I was going to spend some of it on helping implement cabal freeze, but I left it in the hands of others that were more capable and I haven't checked back in a long time. 


On Thu, Feb 27, 2014 at 1:00 PM, Carter Schonwald <carter.schonwald <at> gmail.com> wrote:
cool, so is docmunch going to allocate some money or manpower to help out? :)


[...]
Carter Schonwald | 28 Feb 04:47 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

cool! :)



On Thu, Feb 27, 2014 at 10:17 PM, Greg Weber <greg <at> gregweber.info> wrote:
That was the plan. I spend a large amount of company time contributing back to open source. Originally I was going to spend some of it on helping implement cabal freeze, but I left it in the hands of others that were more capable and I haven't checked back in a long time. 


[...]
Brandon Allbery | 26 Feb 17:56 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Wed, Feb 26, 2014 at 11:06 AM, Michael Snoyman <michael <at> snoyman.com> wrote:
On Wednesday, February 26, 2014, Brandon Allbery <allbery.b <at> gmail.com> wrote:
Seriously, if everyone follows your versioning system, hsenv becomes essential just as RVM is essential in the Rails community; everyone has to provide their own curated system (your Stackage isn't going to serve everyone's needs, and asking you to make it serve everyone's needs is unreasonable and quite likely an impossible task when everyone else is also doing their own thing) and users have to maintain separate ecosystems for everything they use. This isn't maintainable for much of anyone.


Actually, why wouldn't stackage as a curated system work for everyone? Stackage already has over 10% of hackage covered, and I'd imagine if you look at hackage download numbers, that covers the vast majority of the most downloaded packages. I really do think that a curated system could make most people happy, if it gets enough buy-in.

Because you don't have any particular reason to curate packages you are not using yourself, and as Hackage grows you also won't have *time* to do so, even if you dropped Yesod and devoted your full time to Stackage curation.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Gregory Collins | 25 Feb 23:08 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 1:16 PM, Vincent Hanquez <tab <at> snarc.org> wrote:
On 2014-02-25 19:23, Gregory Collins wrote:

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to *zero*, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.
If you're going to dig out an old GHC version, what's stopping you from downloading old packages manually from Hackage? I'm sure it can even be automated (more or less).

The solver can't help you here. Like I wrote in my last message to Michael: if I depend on foo-1.2, and foo-1.2 depends on "bar", and "bar-2.0" comes out that breaks "foo-1.2", what can I do? I have to binary search the transitive closure of the dependency space because the solver cannot help.

However, I don't think we should optimise for this use case; I'd rather use maintained packages that are regularly updated. And even if I wanted to use an old package, provided it's not tied to something fairly internal like GHC's API, porting to recent versions of libraries should be easier in a language like Haskell than in most other languages.

Furthermore, some old libraries should not be used anymore. Consider old libraries that have security issues, for example. Whilst it's not the intent, it's probably a good thing that those old libraries don't build anymore, and people are forced to move to the latest maintained version.

The PVP as it stands seems to be a refuge for fossilised packages.

I care much more about programs than about libraries here. Most Haskell programs that were ever written never made it to Hackage. I don't understand the point about old libraries: people will stop using libraries that aren't updated by their maintainers, or someone else will take them over.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

"we have a policy".

*ouch*, I'm sorry, but I find those bigoted views damaging in a nice inclusive haskell community (as I like to view it).

I don't see what bigotry or inclusiveness has to do with this. This is a conversation between insiders anyways :)

While we may have different opinions, I think we're all trying our best to contribute to the haskell ecosystem the way we see fit.

Of course, nobody's saying otherwise. People arguing for the omission of upper bounds often point to breakage caused by the PVP -- I just want to make it clear that people who ignore PVP cause breakage too, and this breakage is worse (because it affects end users instead of Haskell nerds, who know how to fix it). See e.g. https://github.com/snapframework/cufp2011/issues/4 for an instance where one of your packages broke a program of mine for no reason. This program would have continued building fine basically forever if you'd followed the PVP.

G
--
Gregory Collins <greg <at> gregorycollins.net>
Ganesh Sittampalam | 25 Feb 23:40 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On 25/02/2014 06:44, Michael Snoyman wrote:

> Next is the issue of PVP. I am someone who has stopped religiously
> following the PVP in the past few years. Your email seems to imply that
> only those following the PVP care about making sure that "packages work
> together." I disagree here; I don't use the PVP specifically because I
> care about package interoperability.
> 
> The point of the PVP is to ensure that code builds. It's a purely
> compile-time concept. The PVP solves the problem of an update to a
> dependency causing a downstream package to break. And assuming everyone
> adheres to it[1], it ensures that cabal will never try to perform a
> build which isn't guaranteed to work.
> 
> But that's only one half of the "package interoperability" issue. I face
> this first hand on a daily basis with my Stackage maintenance. I spend
> far more time reporting issues of restrictive upper bounds than I do
> with broken builds from upstream changes. So I look at this as purely a
> game of statistics: are you more likely to have code break because
> version 1.2 of text changes the type of the map function and you didn't
> have an upper bound, or because two dependencies of yours have
> *conflicting* version bounds on a package like aeson[2]? In my
> experience, the latter occurs far more often than the former.

It's worth mentioning that cabal failing to find a solution is far less
costly for me to discover than cabal finding a solution and then having
a build of a large graph of packages fail, because by that point I've
wasted a lot of time waiting for the build and I now have a thoroughly
confused package database to recover from (whether using a sandbox or not).

Ganesh
Sven Panne | 25 Feb 08:21 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

2014-02-24 21:30 GMT+01:00 Henning Thielemann
<schlepptop <at> henning-thielemann.de>:
> [...] You use unqualified and
> implicit imports. That is according to the PVP you would need to use tight
> version bounds like "containers >=0.4.0 && <0.5.0.1", but you don't. That
> is, your package does not conform to the PVP. [...]

o_O Dumb question: Can somebody please explain why this doesn't
conform to the PVP? I have a very hard time reading that out of
http://www.haskell.org/haskellwiki/Package_versioning_policy. Perhaps
I'm looking at the wrong document or this interpretation is just
wishful thinking...

Regarding upper bounds: I never understood what their advantage should
be, IMHO they only lead to a version hell where you can't compile your
stuff anymore *only* because of the bounds, not because of anything
else.
John Lato | 25 Feb 08:45 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Mon, Feb 24, 2014 at 11:21 PM, Sven Panne <svenpanne <at> gmail.com> wrote:

Regarding upper bounds: I never understood what their advantage should
be, IMHO they only lead to a version hell where you can't compile your
stuff anymore *only* because of the bounds, not because of anything
else.

IMHO this is a bad enough outcome, but it can also allow you to compile code in a way that behaves incorrectly (if a function's behavior has changed but its type has not).  It also leads to a situation where cabal generates what it thinks is an acceptable dependency solution, but that solution fails, forcing the user to solve the dependency tree themselves and specify constraints on the cabal command line.
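A sketch of that first failure mode (package, function and versions invented):

    -- foo-1.0:  render :: Doc -> Text    -- escapes HTML metacharacters
    -- foo-1.1:  render :: Doc -> Text    -- same type, no longer escapes

A build against either version succeeds, so without an upper bound nothing flags that code relying on the 1.0 behaviour is now silently wrong.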

This is the reason the PVP specifies upper bounds on versions: it makes that work the responsibility of the developer rather than the user.  At the time the PVP was introduced, users often experienced serious hardships when installing various combinations of packages, and IIRC it was widely perceived that developers should shoulder the load of making sure their packages would work together as specified.  However, I think the PVP may have been a victim of its own success; user complaints about botched installs and invalid install plans seem quite rare these days, and some developers are pushing back against this extra workload.  (or maybe there are no Haskell users?)

John L.

Daniel Trstenjak | 25 Feb 08:51 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)


Hi Sven,

> o_O Dumb question: Can somebody please explain why this doesn't
> conform to the PVP? I have a very hard time reading that out of
> http://www.haskell.org/haskellwiki/Package_versioning_policy. Perhaps
> I'm looking at the wrong document or this interpretation is just
> wishful thinking...

If I'm getting it right, you don't have to increase a major version
number if you're e.g. just adding another function.

But if the user of your library imports it unqualified or implicitly,
then they will also get your added function, and this function might
conflict with functions in their own code base.
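A minimal sketch of that failure mode (module and names invented):

    import Data.Foo            -- foo-1.0 exports only 'render'

    pretty :: Int -> String    -- local helper
    pretty = show

    main :: IO ()
    main = putStrLn (pretty 42)

Against foo-1.1, which additionally exports 'pretty' (a legal minor bump under the PVP), the use of 'pretty' in main becomes an ambiguous occurrence and the module no longer compiles.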

Greetings,
Daniel
Edward Kmett | 25 Feb 16:19 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 2:51 AM, Daniel Trstenjak <daniel.trstenjak <at> gmail.com> wrote:

Hi Sven,

> o_O Dumb question: Can somebody please explain why this doesn't
> conform to the PVP? I have a very hard time reading that out of
> http://www.haskell.org/haskellwiki/Package_versioning_policy. Perhaps
> I'm looking at the wrong document or this interpretation is just
> wishful thinking...

If I'm getting it right, you don't have to increase a major version
number if you're e.g. just adding another function.

But if the user of your library imports it unqualified or implicitly,
then they will also get your added function, and this function might
conflict with functions in their own code base.
 
Note: this particular concern would be much lessened, at least for local definitions, had we done anything with Lennart's perfectly reasonable suggestion to change the scoping rules to let local definitions win over imports. Of the half dozen problems in five years that I mentioned above, four would have been resolved by that proposal.

-Edward
Brandon Allbery | 25 Feb 16:16 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 2:21 AM, Sven Panne <svenpanne <at> gmail.com> wrote:
Regarding upper bounds: I never understood what their advantage should
be, IMHO they only lead to a version hell where you can't compile your
stuff anymore *only* because of the bounds, not because of anything
else.

A couple months ago we had yet another example of "that will never happen" caused by people ignoring upper bounds. Developers never saw any problem, of course; and who cares about all the users who had compiles explode with unexpected errors? I think it took less than two weeks after someone patched up the most visibly affected packages before developers were shouting to remove upper bounds from the PVP again, because the affected users are just users and apparently not important enough to consider when setting versioning policy.

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Omari Norman | 25 Feb 16:26 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 10:16 AM, Brandon Allbery <allbery.b <at> gmail.com> wrote:
> because the affected users are just
> users and apparently not important enough to consider when setting
> versioning policy.

Users are important enough to consider, but their needs should not
trump all others.  In particular, (nearly?) all software on Hackage is
given to users at no charge.  Developers invest their time.  Their
needs are important too.  If policies make it too troublesome for
developers to maintain software or publicly post it on Hackage, they
will just stop posting it.

Obviously there is a balance to be struck, as if you make things too
hard for users then there will be no users.  The problem is that the
PVP is putting a considerable maintenance burden on developers but
it's not even clear there is commensurate benefit to users.  Often
it's hard to get different packages to work together because upper
bounds are too tight.
Edward Kmett | 25 Feb 16:33 2014

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

On Tue, Feb 25, 2014 at 10:16 AM, Brandon Allbery <allbery.b <at> gmail.com> wrote:
On Tue, Feb 25, 2014 at 2:21 AM, Sven Panne <svenpanne <at> gmail.com> wrote:
Regarding upper bounds: I never understood what their advantage should
be, IMHO they only lead to a version hell where you can't compile your
stuff anymore *only* because of the bounds, not because of anything
else.

A couple months ago we had yet another example of "that will never happen" caused by people ignoring upper bounds. Developers never saw any problem, of course; and who cares about all the users who had compiles explode with unexpected errors? I think it took less than two weeks after someone patched up the most visibly affected packages before developers were shouting to remove upper bounds from the PVP again, because the affected users are just users and apparently not important enough to consider when setting versioning policy.

I tried living without upper bounds. My attempt was not motivated by disdain for users, but by the fact that all of the complaints I had received had been about the opposite: constraints that were too tight. The change was motivated largely by a desire to improve the end-user experience.

However, after removing the bounds, the situations users wound up in were very hard to fix. From a POSIWID perspective, the purpose of removing upper bounds is to make Haskell nigh unusable without sandboxing or --force.

Consequently, I reverted to PVP compliance within a month. Yes, compliance. Despite Henning's attempt to grab the moral high ground there, the PVP does not require the use of qualified imports to depend on minor versions; it merely notes that doing so carries a slight risk.

To minimize breakage when new package versions are released, you can use dependencies that are insensitive to minor version changes (e.g. foo >= 1.2.1 && < 1.3). However, note that this approach is slightly risky: when a package exports more things than before, there is a chance that your code will fail to compile due to new name-clash errors. The risk from new name clashes may be small, but you are on the safe side if you import identifiers explicitly or using qualification. 

-Edward
Casey McCann | 24 Feb 20:56 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Mon, Feb 24, 2014 at 1:09 PM, Henning Thielemann
<schlepptop <at> henning-thielemann.de> wrote:
> On 24.02.2014 18:57, Brandon Allbery wrote:
>>
>> There is something vaguely smelly about specifically omitting the context
>>
>>> it is an annoyingly common name to take for
>>> such an often unqualified import.
>>
>>
>> in the original message. Yes, we're quite aware you do not consider it
>> legitimate. Distorting someone else's meaning to press your point is
>> also not legitimate.
>
>
> The phrase "reserve 'zero'" suggests that once we choose Bits.zero, the
> identifier 'zero' is reserved once and for all and cannot be used for
> something different anymore. That is, this phrasing removes the option of
> qualified imports from the scope and thus generates the wrong context.
>
> Can someone please, please tell me why we must avoid qualified imports at
> all costs? Why is this option repeatedly ignored when just saying zeroBits
> (+1) or zero (-1)?

Because it is a thoroughly irrelevant option, empirically speaking, on
account of approximately nobody actually using Data.Bits that way.

Based on a quick Google search of hackage, here are the use cases
wherein Data.Bits is imported qualified:

- Intentionally defining functions with clashing names which are then
exported with the intent of being imported unqualified elsewhere
- Machine-generated code too lazy to worry about namespace issues
- Commented-out import lines

Your own code, by the way, falls into the third category.

If your primary contention here is that core library APIs should be
re-designed based not on how they are actually used in practice, but
rather on pie-in-the-sky notions regarding how they ought to be used
in some uniquely ideal world, perhaps you should raise that point in
its own thread rather than endlessly hijacking discussions about
modules you may or may not even use. A debate over painting new doors
for a bikeshed is not the time or place to propose tearing down the
entire shed and building a gazebo in its place.

As for the real question, I'd prefer something along the lines of
"clearedBits". Only the haddock comments on the shift functions use
1/0 to talk about individual bits; the function names and the other
haddocks consistently use set/clear. It's not a big deal though.

I'm also wondering if anyone has examples of the name "zero" in code
that's actually used and doesn't expect to stomp all over the
namespace anyway by redefining arithmetic. "zero" sounds suspiciously
like one of those names that's so common nobody actually uses it
because they don't want to clash with all the other places it's being
conspicuously not used. (see: mzero, zeroArrow...)

I'm not at all sure Data.Bits is the flag we'd want to plant in it
regardless, but it would be nice to know if "zero" is actually in
common use.

- C.
Dan Doel | 24 Feb 21:24 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Mon, Feb 24, 2014 at 2:56 PM, Casey McCann <cam <at> uptoisomorphism.net> wrote:
Because it is a thoroughly irrelevant option, empirically speaking, on
account of approximately nobody actually using Data.Bits that way.

There's some reason for that, too. Bits has operators, which are especially ugly when qualified, and I suspect most people are even more annoyed by using two import statements to manage this than they are about using qualified imports in the first place.

In fact, most of the library has unique enough names that it needn't be imported qualified, and qualifying would make code read worse (to me, and I'm sure others; x `Bits.shiftR` n). So in this case, we'd be adding one function that encourages qualification to a module that otherwise doesn't.

-- Dan
Henning Thielemann | 24 Feb 21:36 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 24.02.2014 21:24, Dan Doel wrote:
> On Mon, Feb 24, 2014 at 2:56 PM, Casey McCann <cam <at> uptoisomorphism.net
> <mailto:cam <at> uptoisomorphism.net>> wrote:
>
>     Because it is a thoroughly irrelevant option, empirically speaking, on
>     account of approximately nobody actually using Data.Bits that way.
>
>
> There's some reason for that, too. Bits has operators, which are
> especially ugly when qualified, and I suspect most people are even more
> annoyed by using two import statements to manage this than they are
> about using qualified imports in the first place.

For Data.Map we are used to writing two import statements. It's not that
uncommon.
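That is, the standard pair:

    import Data.Map (Map)             -- the type, unqualified
    import qualified Data.Map as Map  -- everything else, qualified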

But I agree that qualification and infix operators don't work well 
together. That said, I am also not happy with 'rotate' and 'shift' being 
designed for infix use, since this way I cannot use (.) and ($) for 
composition of bit manipulations.
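To spell the complaint out: with shiftL :: Bits a => a -> Int -> a, composing bit manipulations forces operator sections (a small sketch; 'twiddle' is an invented name):

    import Data.Bits (shiftL, rotate)
    import Data.Word (Word8)

    twiddle :: Word8 -> Word8
    twiddle = (`shiftL` 3) . (`rotate` 2)

With the arguments flipped one could simply write 'shiftL 3 . rotate 2'.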
Henning Thielemann | 24 Feb 21:42 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 24.02.2014 20:56, Casey McCann wrote:

> As for the real question, I'd prefer something along the lines of
> "clearedBits".

Great, let's call it Bits.cleared: this would make sense with
qualification and would not risk name clashes for unqualified use. This
would serve both camps.
Anthony Cowley | 22 Feb 20:51 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is prime
naming real estate. I am +0.5 on the addition overall, as most uses of Bits are with types that also have Num
instances. If we are naming this thing, then I vote for zeroBits.

Anthony

> On Feb 22, 2014, at 5:03 AM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
> 
> Hello *,
> 
> Here's a mid-discussion summary of the proposal
> 
>>> Introduce a new class method
>>> 
>>>  class Bits a where
>>>      ...
>>>      -- | Value with all bits cleared
>>>      <0-value-method> :: a
>>>      ...
>>> 
>>> modulo naming of '<0-value-method>'
> 
> from my point of view:
> 
> - The idea came up already in 2011 when Num-superclass was to be removed from Bits
>   (but discussion derailed)
> 
> - So far there's general consensus (i.e. no "-1"s afaics) that it's desired to have
>   an all-bits-clear value introducing method in 'Bits'
> 
> - Use "clearBit (bit 0) 0" as default implementation for smooth upgrade-path
> 
> - Naming for '<0-value-method>' boils down to two candidates:
> 
>    a) 'Data.Bits.zero'
> 
>        - based on the idea that 'Data.Bits' ought to be imported
>          qualified (or with explicit import-list) anyway
>          (-> thus following PVP practice)
> 
>        - many existing Data.Bits.Bits methods such as 'rotate',
>          'complement', 'popCount', 'xor', or 'shift' don't have
>          the name 'bit' in it (and those few that have, operate
>          on single bits)
> 
>        - supporters (in no particular order):
> 
>           - ARJANEN Loïc Jean David
>           - Henning Thielemann
>           - Herbert Valerio Riedel (+0.99)
>           - Twan van Laarhoven
> 
>    b) 'Data.Bits.zeroBits'
> 
>        - more verbose name reduces risk of namespace conflicts with unqualified imports
> 
>        - supporters (in no particular order):
> 
>           - Edward Kmett
>           - Eric Mertens
>           - Herbert Valerio Riedel
>           - Twan van Laarhoven
>           - (maybe?) ARJANEN Loïc Jean David
> 
> 
>    So far there doesn't seem to be a very clear preference for
>    'zeroBits' over 'zero'. It might help if those who expressed some
>    kind of support for both variants could clarify if their preference
>    has any bias towards 'zeroBits' or 'zero'.
> 
> 
> Cheers,
>   hvr
Henning Thielemann | 24 Feb 10:54 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 22.02.2014 20:51, Anthony Cowley wrote:

> I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is prime
> naming real estate.

Can you promise that you will use tight version bounds on 'base' if you
import Data.Bits unqualified and without an explicit import list, in
order to conform to the PVP?
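For concreteness, 'tight' here means excluding even minor bumps, which may add exports (version numbers illustrative):

    build-depends: base >= 4.6.0 && < 4.6.1

rather than the customary major-only bound 'base >= 4.6 && < 4.7'.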
Ian Lynagh | 24 Feb 22:12 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Sat, Feb 22, 2014 at 02:51:24PM -0500, Anthony Cowley wrote:
> I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is prime
> naming real estate. I am +0.5 on the addition overall, as most uses of Bits are with types that also have Num instances.

For those that don't have a Num instance, "zero" may not make as much
sense.

Perhaps something like noBits would be better. And FiniteBits may also
want an allBits?

Thanks
Ian
Edward Kmett | 24 Feb 23:00 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

Note: at least for Integer, allBits / oneBits is also definable, despite not being Finite


On Mon, Feb 24, 2014 at 4:12 PM, Ian Lynagh <igloo <at> earth.li> wrote:
On Sat, Feb 22, 2014 at 02:51:24PM -0500, Anthony Cowley wrote:
> I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is prime naming real estate. I am +0.5 on the addition overall, as most uses of Bits are with types that also have Num instances.

For those that don't have a Num instance, "zero" may not make as much
sense.

Perhaps something like noBits would be better. And FiniteBits may also
want an allBits?


Thanks
Ian

Gershom Bazerman | 25 Feb 07:23 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

The issue isn't about qualified or unqualified names at all. It is about names which express intent clearly and evocatively, and names which are unacceptably ambiguous.

As such, I propose

zero --> whereDidTheBitsGo

and conversely,

allBits --> iHaveAllTheBits

It seems to me that these are expressive names with unmistakable meanings.

-G

On 2/24/14, 5:00 PM, Edward Kmett wrote:
Note: at least for Integer, allBits / oneBits is also definable, despite not being Finite


On Mon, Feb 24, 2014 at 4:12 PM, Ian Lynagh <igloo <at> earth.li> wrote:
On Sat, Feb 22, 2014 at 02:51:24PM -0500, Anthony Cowley wrote:
> I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is prime naming real estate. I am +0.5 on the addition overall, as most uses of Bits are with types that also have Num instances.

For those that don't have a Num instance, "zero" may not make as much
sense.

Perhaps something like noBits would be better. And FiniteBits may also
want an allBits?


Thanks
Ian

Casey McCann | 25 Feb 14:55 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On Tue, Feb 25, 2014 at 1:23 AM, Gershom Bazerman <gershomb <at> gmail.com> wrote:
> The issue isn't about qualified or unqualified names at all. It is about
> names which express intent clearly and evocatively, and names which are
> unacceptably ambiguous.
>
> As such, I propose
>
> zero --> whereDidTheBitsGo
>
> and conversely,
>
> allBits --> iHaveAllTheBits
>
> It seems to me that these are expressive names with unmistakable meanings.

Well, for names in that vein, I'd suggest notOneBit and everyLastBit.
This avoids reinventing the wheel by relying on standard English
idioms, and it's well-known that imitating the flawless logical
structure of the English language is the highest goal for any
programming language.

- C.
Mikhail Vorozhtsov | 26 Feb 21:39 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

Wanna paint this bikeshed too! I used `allZeroes` and `allOnes` here[1].

[1] http://hackage.haskell.org/package/data-dword-0.2.2/docs/Data-DoubleWord.html#t:BinaryWord

On 02/25/2014 01:12 AM, Ian Lynagh wrote:
> On Sat, Feb 22, 2014 at 02:51:24PM -0500, Anthony Cowley wrote:
>> I am -1 on the name zero. I don't think importing Data.Bits unqualified is uncommon at all, and zero is
>> prime naming real estate. I am +0.5 on the addition overall, as most uses of Bits are with types that also
>> have Num instances.
> For those that don't have a Num instance, "zero" may not make as much
> sense.
>
> Perhaps something like noBits would be better. And FiniteBits may also
> want an allBits?
>
>
> Thanks
> Ian
Henning Thielemann | 24 Feb 19:00 2014

Re: [Mid-discussion Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 22.02.2014 11:03, Herbert Valerio Riedel wrote:

>      So far there doesn't seem to be a very clear preference for
>      'zeroBits' over 'zero'. It might help if those who expressed some
>      kind of support for both variants could clarify if their preference
>      has any bias towards 'zeroBits' or 'zero'.

It turns out to be another round of the discussion of qualified imports vs.
unqualified imports. Many Haskell programmers seem to avoid qualified
imports at all costs. I can't explain that; maybe the proponents of
unqualified imports can.

But I suspect that what we are really discussing is something more critical:
it's about conformance to the PVP vs. non-conformance, and thus about
letting Hackage users fix the packages of lazy programmers. I guess what
the proponents of "zeroBits" really want is to import 'base' unqualified,
implicitly and without version bounds. If you want to conform to the PVP
and thus give the user a good experience, then you have to give up one of
these three conveniences. Strict version bounds on "base" require updating
Cabal descriptions frequently. I guess you don't want that. Explicit
imports mean that you have to maintain import lists. I guess you don't
want that either. The only convenient option is to import qualified. But
then Bits.zero is much better than Bits.zeroBits.
Herbert Valerio Riedel | 8 Mar 11:15 2014

[Final Summary] Proposal: add new Data.Bits.Bits(bitZero) method

On 2014-02-22 at 11:03:12 +0100, Herbert Valerio Riedel wrote:
> Here's the result of the proposal
>
>>> Introduce a new class method
>>>
>>>   class Bits a where
>>>       ...
>>>       -- | Value with all bits cleared
>>>       <0-value-method> :: a
>>>       ...
>>>
>>> modulo naming of '<0-value-method>'

Unless I missed any votes, the current tally stands at

for `Bits.zero`

 - ARJANEN Loïc Jean David
 - Henning Thielemann
 - Herbert Valerio Riedel (+0.99)

for 'Data.Bits.zeroBits'

 - Edward Kmett
 - Eric Mertens
 - Herbert Valerio Riedel
 - Twan van Laarhoven
 - (maybe?) ARJANEN Loïc Jean David
 - Anthony Cowley (+0.5)

This tilts the majority towards the name 'zeroBits'.

As the proposal deadline was close to the RC2 release, this made it into
GHC 7.8.1-RC2, and therefore GHC 7.8.1/base-4.7.0.0 will come with
`Data.Bits.zeroBits` [1].
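For reference, the shipped method boils down to the following sketch (see the linked commit for the authoritative definition):

    class Eq a => Bits a where
        ...
        -- | The value with all bits unset; the default lets existing
        -- instances work without modification.
        zeroBits :: a
        zeroBits = clearBit (bit 0) 0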

Fwiw, there were some alternative names suggested, but those didn't gain
significant support:

  cleared, noBits, clearedBits, whereDidTheBitsGo, notOneBit, allZeroes

Beyond this proposal, it was hinted that we might also want a name for
'complement zeroBits' (although that may be better placed in FiniteBits);
names suggested for that were

  allBits, allOnes, oneBits, iHaveAllTheBits, everyLastBit

Cheers,
  hvr

 [1]: http://git.haskell.org/packages/base.git/commitdiff/147bca65bfac216bddff3ccde89409ca6323bb62
Edward Kmett | 16 Feb 17:43 2014

Re: Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property

Nothing forbids you from allowing negative bit positions in a data type, for instance for fractional bits in a fixed-point numeric type.
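For instance (type and representation invented), a fixed-point value with eight fraction bits could map negative indices onto the fraction:

    import Data.Word (Word16)

    -- interpreted as payload / 2^8
    newtype Fixed8 = Fixed8 Word16

    half :: Fixed8
    half = Fixed8 0x0080   -- what 'bit (-1)' would sensibly denote: 0.5, not 0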

Consequently, I'm -1 on this proposal.

You can currently construct 0 in several ways; e.g. clearBit (bit 0) 0 could be used to supply a default for any such zeroBits member of the class.

-Edward


On Sun, Feb 16, 2014 at 5:14 AM, Herbert Valerio Riedel <hvr <at> gnu.org> wrote:
[...]