Adam Gundry | 24 Jun 09:44 2013

Overloaded record fields

Hi everyone,

I am implementing an overloaded record fields extension for GHC as a
GSoC project. Thanks to all those who gave their feedback on the
original proposal! I've started to document the plan on the GHC wiki:

http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan

If you have any comments on the proposed changes, or anything is unclear
about the design, I'd like to hear from you.

Thanks,

Adam Gundry
Simon Peyton-Jones | 24 Jun 10:30 2013

RE: Overloaded record fields


| I am implementing an overloaded record fields extension for GHC as a
| GSoC project. Thanks to all those who gave their feedback on the
| original proposal! I've started to document the plan on the GHC wiki:
| 
| http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan
| 
| If you have any comments on the proposed changes, or anything is unclear
| about the design, I'd like to hear from you

By way of context, there have been a succession of "adding-records-to-Haskell" debates which have failed
to reach consensus. That's not because people are awkward. Rather, it's a complex design space with no
global maximum, and a clean-slate design (even if we were sure of a good one, which we aren't) would
lack backward compatibility.

I have also had the sense of "I wish GHC HQ would just cut to the chase and decide *something*, even if not
everyone thinks it's ideal".

So that's what this proposal is intended to do:

 * It is the smallest increment I can come up with that
   meaningfully addresses the #1 pain point (the inability to
   re-use the same field name in different records).

 * It is backward-compatible.

It does not do everything -- far from it -- leaving the field open for experimentation with more
far-reaching designs.


Mateusz Kowalczyk | 24 Jun 11:47 2013

Re: Overloaded record fields


On 24/06/13 08:44, Adam Gundry wrote:
> Hi everyone,
> 
> I am implementing an overloaded record fields extension for GHC as
> a GSoC project. Thanks to all those who gave their feedback on the 
> original proposal! I've started to document the plan on the GHC
> wiki:
> 
> http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan
>
>  If you have any comments on the proposed changes, or anything is
> unclear about the design, I'd like to hear from you.
> 
> Thanks,
> 
> Adam Gundry
> 
> _______________________________________________ 
> Glasgow-haskell-users mailing list 
> Glasgow-haskell-users <at> haskell.org 
> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
> 
On the wiki, you say `It is critical to support dot-notation.' and
then you follow up with multiple reasons why this is troublesome
because of the conflict with function composition.

You say `There is an overlap with the function composition operator,
but that is already true with qualified names.'


Roman Cheplyaka | 24 Jun 12:04 2013

Re: Overloaded record fields

* Mateusz Kowalczyk <fuuzetsu <at> fuuzetsu.co.uk> [2013-06-24 10:47:09+0100]
> Restricting function composition to have spaces around it will require
> changing a large amount of existing code if one is willing to use it.

I assume this semantics will be triggered only by an extension, so
there'd be no need to change existing code.

> While I personally would like the restriction because I hate seeing
> people skimp out on whitespace around operators, there are a lot of
> people with a different opinion than mine and I imagine it'd be a
> great inconvenience to make them change their code if they want to
> start using SORF.

Well, if they *want* it, it's not unreasonable to require them to *pay*
for it (in the form of adjusting their coding style).

Roman
Mateusz Kowalczyk | 24 Jun 12:05 2013

Re: Overloaded record fields


On 24/06/13 11:04, Roman Cheplyaka wrote:
> * Mateusz Kowalczyk <fuuzetsu <at> fuuzetsu.co.uk> [2013-06-24
> 10:47:09+0100]
>> Restricting function composition to have spaces around it will
>> require changing a large amount of existing code if one is
>> willing to use it.
> 
> I assume this semantics will be triggered only by an extension, so 
> there'd be no need to change existing code.
> 
>> While I personally would like the restriction because I hate
>> seeing people skimp out on whitespace around operators, there are
>> a lot of people with a different opinion than mine and I imagine
>> it'd be a great inconvenience to make them change their code if
>> they want to start using SORF.
> 
> Well, if they *want* it, it's not unreasonable to require them to
> *pay* for it (in the form of adjusting their coding style).
> 
> Roman
> 
Sure, it's not unreasonable to have them change some of their code to
use new, cool features. I'm just questioning whether it's absolutely
necessary to do it by forcing restrictions on what is probably the
most commonly used operator in the whole language.

-- 
Mateusz K.

Mateusz Kowalczyk | 24 Jun 12:07 2013

Re: Overloaded record fields



AntC | 27 Jun 01:10 2013

Re: Overloaded record fields

> Mateusz Kowalczyk <fuuzetsu <at> fuuzetsu.co.uk> writes:
> 
> 
> On 24/06/13 11:04, Roman Cheplyaka wrote:
> > * Mateusz Kowalczyk <fuuzetsu <at> fuuzetsu.co.uk> [2013-06-24
> > 10:47:09+0100]
> >> Restricting function composition to have spaces around it will
> >> require changing a large amount of existing code if one is
> >> willing to use it.
> > 
> > 
> Sure that it's unreasonable to have them change some of their code to
> use new, cool features. I'm just questioning whether it's absolutely
> necessary to do it by forcing restrictions on what is probably the
> most commonly used operator in the whole language.
> 

For my 2-penn'orth, requiring spaces around composition won't break any of 
my code: I always do put spaces.

Re use of 'infix' dots, I grew up on Russell & Whitehead's Principia 
Mathematica, Wittgenstein's Tractatus, BCPL (dots inside names), LISP's 
dotted pairs (which IIRC always have surrounding spaces), SQL's 
table.field.

I find it is Haskell that is 'odd man out'.

OTOH, there is a high cost in breaking that existing code with non-spaced 
composition, and I'm not convinced that record-field access alone is a 
sufficient benefit. (This is what SPJ would call a low power to weight 

Oliver Charles | 24 Jun 16:40 2013

Re: Overloaded record fields

On 06/24/2013 08:44 AM, Adam Gundry wrote:
The wiki page says:

The base design has the following distinct components:

  * A library class

    class Has (r :: *) (f :: String) (t :: *) where
      get :: r -> t

  * A record declaration generates an instance declaration for each
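
A rough sketch of how the quoted pieces could fit together. This is an illustration rather than the wiki's actual code: a Proxy argument and a functional dependency are added so that a stock GHC can resolve the field name and result type (the real design handles this differently), and the record and field names are invented:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- The library class, with the field name at kind Symbol (the promoted
-- string kind) and a Proxy to pin the field down at call sites:
class Has r (f :: Symbol) t | r f -> t where
  get :: Proxy f -> r -> t

-- For a record declaration such as:
data Person = Person { personName :: String, personAge :: Int }

-- the extension would generate one instance per field, roughly:
instance Has Person "name" String where
  get _ = personName
instance Has Person "age" Int where
  get _ = personAge

main :: IO ()
main = do
  let p = Person "Ada" 36
  putStrLn (get (Proxy :: Proxy "name") p)
  print (get (Proxy :: Proxy "age") p)
```

The point of the `Symbol` index is that one field name can be shared by many record types, each contributing its own `Has` instance.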

Erik Hesselink | 24 Jun 16:50 2013

Re: Overloaded record fields

It looks like this instance is partial. Note that the record field 'y'
is also a partial function in plain Haskell. I've always considered
this a misfeature, but perhaps fixing that is outside the scope of
this proposal.
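
For readers following the thread, the partiality being referred to is ordinary Haskell 98 behaviour for multi-constructor records; a minimal illustration (the Shape type and its fields are invented for this example):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A multi-constructor record: 'radius' only exists for Circle.
data Shape = Circle { radius :: Double }
           | Rect   { width, height :: Double }

main :: IO ()
main = do
  print (radius (Circle 1.0))  -- fine
  -- The selector compiles for any Shape, but fails at run time
  -- when applied to the wrong constructor:
  r <- try (evaluate (radius (Rect 1 2))) :: IO (Either SomeException Double)
  putStrLn (either (const "radius (Rect 1 2) threw, as expected") show r)
```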

Erik

On Mon, Jun 24, 2013 at 4:40 PM, Oliver Charles <ollie <at> ocharles.org.uk> wrote:

Dominique Devriese | 26 Jun 14:15 2013

Re: Overloaded record fields

I think it's a good idea to push forward on the records design because
it seems futile to hope for an ideal consensus proposal.

The only thing I dislike though is that dot notation is special-cased to
record projections.  I would prefer to have dot notation for a
general, very tightly-binding reverse application, and the type of the record
selector for a field f changed to "forall r t. r { f :: t } => r -> t"
instead of
"SomeRecordType -> t".  Such a general reverse application dot would
allow things like "string.toUpper" and for me personally, it would
make a Haskell OO library that I'm working on more elegant...
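
For concreteness, the general reverse application described above can be simulated today with an ordinary operator; the name (|>) below is an illustrative stand-in for the proposed tight-binding dot, not part of any proposal:

```haskell
import Data.Char (toUpper)

-- A reverse-application operator standing in for the proposed dot:
infixl 9 |>
(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = do
  -- the proposal's string.toUpper, spelled with the stand-in operator:
  putStrLn ("string" |> map toUpper)
  -- chains associate to the left, like repeated field access:
  print ([3,1,2] |> maximum |> (+1))
```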

But I guess you've considered such a design and decided against it,
perhaps because of the stronger backward compatibility implications of
changing the selectors' types?

Dominique

2013/6/24 Adam Gundry <adam.gundry <at> strath.ac.uk>:

Simon Peyton-Jones | 26 Jun 22:39 2013

RE: Overloaded record fields

|  record projections.  I would prefer to have dot notation for a
|  general, very tightly-binding reverse application, and the type of the record
|  selector for a field f changed to "forall r t. r { f :: t } => r -> t"
|  instead of "SomeRecordType -> t".  Such a general reverse application dot would
|  allow things like "string.toUpper" and for me personally, it would
|  make a Haskell OO library that I'm working on more elegant...

Actually I *hadn't* considered that.   I'm sure it's been suggested before (there has been so much
discussion), but I had not really thought about it in the context of our very modest proposal.

We're proposing, in effect, that ".f" is a postfix function with type "forall r t. r { f :: t } => r -> t".   You
propose to decompose that idea further, into (a) reverse function application and (b) a first class
function f.

It is kind of weird that
	f . g  means    \x. f (g x)
but     f.g    means    g f

but perhaps it is not *more* weird than our proposal.

Your proposal also allows things like

	data T = MkT { f :: Int }

	foo :: [T] -> [Int]
	foo = map f xs

because the field selector 'f' has the very general type you give, but the type signature would be enough to
fix it.  Or, if foo lacks a type signature, I suppose we'd infer


Edward Kmett | 26 Jun 22:53 2013

Re: Overloaded record fields

Note: the lens solution already gives you 'reverse function application' with the existing (.) due to CPS in the lens type.


-Edward

On Wed, Jun 26, 2013 at 4:39 PM, Simon Peyton-Jones <simonpj <at> microsoft.com> wrote:
|  record projections.  I would prefer to have dot notation for a
|  general, very tightly-binding reverse application, and the type of the record
|  selector for a field f changed to "forall r t. r { f :: t } => r -> t"
|  instead of "SomeRecordType -> t".  Such a general reverse application dot would
|  allow things like "string.toUpper" and for me personally, it would
|  make a Haskell OO library that I'm working on more elegant...

Actually I *hadn't* considered that.   I'm sure it's been suggested before (there has been so much discussion), but I had not really thought about it in the context of our very modest proposal.

We're proposing, in effect, that ".f" is a postfix function with type "forall r t. r { f :: t } => r -> t".   You propose to decompose that idea further, into (a) reverse function application and (b) a first class function f.

It is kind of weird that
        f . g  means    \x. f (g x)
but     f.g    means    g f

but perhaps it is not *more* weird than our proposal.

Your proposal also allows things like

        data T = MkT { f :: Int }

        foo :: [T] -> [Int]
        foo = map f xs

because the field selector 'f' has the very general type you give, but the type signature would be enough to fix it.  Or, if foo lacks a type signature, I suppose we'd infer

        foo :: (r { f::a }) => [r] -> [a]

which is also fine.

It also allows you to use record field names in prefix position, just as now, which is a good thing.

In fact, your observation allows us to regard our proposal as consisting of two entirely orthogonal parts
  * Generalise the type of record field selectors
  * Introduce period as reverse function application

Both have merit.

Simon

AntC | 27 Jun 01:51 2013

Re: Overloaded record fields

> Simon Peyton-Jones <simonpj <at> microsoft.com> writes:
> 
> |  record projections.  I would prefer to have dot notation for a
> |  general, very tightly-binding reverse application, ...
> |  Such a general reverse application dot would
> |  allow things like "string.toUpper" and for me personally, it would
> |  make a Haskell OO library that I'm working on more elegant...
> 
> Actually I *hadn't* considered that.   I'm sure it's been suggested
> before (there has been so much discussion), but I had not really
> thought about it in the context of our very modest proposal.

Thanks Simon, 

I'd better start by saying that I'm very keen for Adam to get going on 
this and produce something/anything better than H98's record fields. So I 
fully understand you're trying to make this a minimal proposal.

At risk of "I told you so" dot as postfix apply is exactly what I had in 
mind for my record proposals (DORF and TPDORF on the wiki):

- Since H98 field selectors are just functions we could leave them as is
  (leave the selector as Mono/specific to a data type)
- make the new OverloadedRecordFields also just functions
  (via the Has instance
   -- in effect this is PolyRecordFields per the wiki Plan.)
- make Virtual record fields just functions
  (don't need a Has instance, and don't get into trouble with update)
- (then toUpper could seem like a record field kinda thing)

All of them could use the dot notation syntax. (As tight-binding reverse 
function apply.)

    person.lastName.toUpper    -- <==> toUpper (lastName person)

So, as you say:
> 
> It also allows you to use record field names in prefix position, just
> as now, which is a good thing.
> 
> In fact, your observation allows us to regard our proposal as
> consisting of two entirely orthogonal parts
>   * Generalise the type of record field selectors
>   * Introduce period as reverse function application
> 

Exactly! (I did tell you so:
http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/DotPostfix
-- billed as "optional syntactic sugar")
So make those two orthogonal extensions.

For people who really don't like breaking their existing code that uses
dot as composition in tight-binding contexts (and they were vociferous),
they simply don't switch on the -XDotPostfixFuncApply extension, and they
can still get the benefits of OverloadedRecordFields.

AntC | 27 Jun 02:12 2013

Re: Overloaded record fields

> 
>     person.lastName.toUpper    -- <==> toUpper (lastName person)
> 

Oops! that should be one of:

      person.lastName.head.toUpper

      person.lastName.(map toUpper)
Simon Peyton-Jones | 27 Jun 11:33 2013

RE: Overloaded record fields

|  Exactly! (I did tell you so:
| http://hackage.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields/DotPostfix
|  billed as "optional syntactic sugar")

I confess that I had not fully taken in this suggestion; thank you for reminding me.  The last round exceeded
my input bandwidth, and in any case I often need to be told things more than once.

Anyway, glad to hear what others think about the idea.

Simon
Edward Kmett | 27 Jun 05:11 2013

Re: Overloaded record fields

Let me take a couple of minutes to summarize how the lens approach tackles the composition problem today without requiring confusing changes in the lexical structure of the language. 


I'll digress a few times to showcase how this actually lets us make more powerful tools than are available in standard OOP programming frameworks as I go. 

The API for lens was loosely inspired once upon a time by Erik Meijer's old 'the power is in the dot' paper, but the bits and pieces have nicely become more orthogonal.

Lens unifies the notion of (.) from Haskell with the notion of (.) as a field accessor by choosing an interesting form for the domain and codomain of the functions it composes.

I did a far more coherent introduction at New York Haskell http://www.youtube.com/watch?v=cefnmjtAolY&hd=1&t=75s that may be worth sitting through if you have more time. 

In particular in that talk I spend a lot of time talking about all of the other lens-like constructions you can work with. More resources including several blog posts, announcements, a tutorial, etc. are available on http://lens.github.com/

A lens that knows how to get a part p out of a whole w looks like

type Lens' w p = forall f. Functor f => (p -> f p) -> w -> f w

In the talk I linked above, I show how this is equivalent to a getter/setter pair.

Interestingly because the function is already CPSd, this composition is the 'reverse' composition you expect.

You can check that:

(.) :: Lens' a b -> Lens' b c -> Lens' a c

The key here is that a lens is a function from a domain of (p -> f p)   to a codomain of (w -> f w) and therefore they compose with (.) from the Prelude.  

We can compose lenses that know how to access parts of a structure in a manner analogous to writing a Traversable instance.

Lets consider the lens that accesses the second half of a tuple:

_2 f (a,b) = (,) a <$> f b

We can write a combinator that uses these lenses to read their respective parts:

import Control.Applicative

infixl 8 ^.
s ^. l = getConst (l Const s)

With that combinator in hand:

("hello","world")^._2 = "world"

(1,(3,4))^._2._2 = 4 -- notice the use of (.) not (^.) when chaining these.

Again this is already in the order an "OOP programmer" expects when you go compose them!

_1 f (a,b) = (,b) <$> f a

(1,(3,4))^._2._1 = 3

The fixity of (^.) was chosen carefully so that the above parses as

(1,(3,4))^.(_2._1)

If you just write the definitions for the lenses I gave above and let type inference give you their types they turn out to be more general than the signature for Lens'  above.

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

With that type you could choose to write the signatures above as:

_1 :: Lens (a,c) (b,c) a b
_2 :: Lens (c,a) (c,b) a b
(^.) :: s -> ((a -> Const a b) -> s -> Const a t) -> a

But we don't need the rank-2 aliases for anything other than clarity. In particular the code above can be written and typechecked entirely in Haskell 98.

We can also generate a 'getter' from a normal Haskell function such that it can be composed with lenses and other getters:

to :: (s -> a) -> (a -> Const r b) -> s -> Const r t
to sa acr = Const . getConst . acr . sa

s^.to f = getConst (to f Const s) = getConst ((Const . getConst . Const . f) s) = f s

Then the examples where folks have asked to be able to just compose in an arbitrary Haskell function become:

(1,"hello")^._2.to length = 5
We can also write back through a lens. The writing combinators take on a more general pattern that actually allows type-changing assignment:

modify :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
modify l ab = runIdentity . l (Identity . ab)

set l b = modify l (const b)

These can be written entirely using 'base' rather than with Identity from transformers by replacing Identity with (->) ().

With that in hand we can state the 'Setter' laws:

modify l id = id
modify l f . modify l g = modify l (f . g)

These are just the Functor laws!

And we can of course make a 'Setter' for any Functor that you could pass to modify:

mapped :: Functor f => (a -> Identity b) -> f a -> Identity (f b)
mapped aib = Identity . fmap (runIdentity . aib)

Then you can verify that

modify mapped ab = runIdentity . Identity . fmap (runIdentity . Identity . ab) = fmap ab
modify (mapped.mapped) = fmap.fmap
'mapped' isn't a full lens. You can't read from 'mapped' with (^.). Try it. Similarly 'to' gives you merely a 'Getter', not something suitable to modify. You can't modify the output of 'to'; the types won't let you. (The lens type signatures are somewhat more complicated here because they want the errors to be in instance resolution rather than unification, for readability's sake.)
But we can still use modify on any lens, because Identity is a perfectly cromulent Functor.
modify _2 (+2) (1,2) = (1,4)
modify _2 length (1,"hello") = (1,5) -- notice the change of type!
modify (_2._1) (+1) (1,(2,3)) = (1,(3,3))
modify (_2.mapped) (+1) (1,[2,3,4]) = (1,[3,4,5])
We can also define something very lens-like that has multiple targets. In fact we already know the canonical example of this, 'traverse' from Data.Traversable. So we'll call them traversals.
We can use modify on any 'traversal' such as traverse:
modify traverse (+1) [1,2,3] = [2,3,4]
This permits us to modify multiple targets with a lens in a coherent, possibly type changing manner.
We can make new traversals that don't exactly match the types in Data.Traversable as well:
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
both :: Traversal (a,a) (b,b) a b
both f (a,b) = (,) <$> f a <*> f b

modify both (+1) (1,2) = (2,3)
The laws for a traversal are a generalization of the Traversable laws.
Compositions of traversals form valid traversals.
Lens goes farther and provides generalizations of Foldables as 'Folds', read-only getters, etc. just by changing the constraints on 'f' in the (a -> f b) -> s -> f t form.
The key observation here is that we don't need to make up magic syntax rules for (.) just to get reverse application. We already have it!
The only thing we needed was a slightly different (.)-like operator to start the chain ((^.) above).
This is nice because it allows us to talk about compositions of lenses as first class objects we can pass around.
Moreover they compose naturally with traversals, and the idioms we already know how to use with traverse apply. In fact if you squint you can recognize the code for modify and (^.) from the code for foldMapDefault and fmapDefault in Data.Traversable, except we just pass in the notion of 'traverse' as the extra lens-like argument.
Every Lens is a valid Traversal. 
modify (both._1) (+1) ((1,2),(3,4)) = ((2,2),(4,4))
If you have a lens foo and a lens bar then baz = foo.bar is also a lens.
We can make lenses that can access fairly complex structures, e.g. we can make lenses that let us both read and write whether or not something is in a Set:

contains :: Ord k => k -> Lens' (Set k) Bool
contains k f s = (\b -> if b then Set.insert k s else Set.delete k s) <$> f (Set.member k s)


singleton 4 ^. contains 4 = True

singleton 4 ^. contains 5 = False

set (contains 5) True (singleton 4) = fromList [4,5]
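
For convenience, the fragments above can be collected into one compilable module. The imports, the signature on set, and the lambda in _1 (avoiding a tuple section) are the only additions; modify traverse is included to match the traversal examples earlier in the message:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Applicative (Const(..))
import Data.Functor.Identity (Identity(..))
import Data.Set (Set)
import qualified Data.Set as Set

-- The type-changing lens type from the email:
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

_1 :: Lens (a,c) (b,c) a b
_1 f (a,b) = (\a' -> (a', b)) <$> f a

_2 :: Lens (c,a) (c,b) a b
_2 f (a,b) = (,) a <$> f b

-- Read through a lens (or getter):
infixl 8 ^.
(^.) :: s -> ((a -> Const a b) -> s -> Const a t) -> a
s ^. l = getConst (l Const s)

-- Turn an ordinary function into a getter:
to :: (s -> a) -> (a -> Const r b) -> s -> Const r t
to sa acr = Const . getConst . acr . sa

-- Write through a lens or traversal (possibly changing the type):
modify :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
modify l ab = runIdentity . l (Identity . ab)

set :: ((a -> Identity b) -> s -> Identity t) -> b -> s -> t
set l b = modify l (const b)

-- Set membership as a lens:
contains :: Ord k => k -> Lens (Set k) (Set k) Bool Bool
contains k f s =
  (\b -> if b then Set.insert k s else Set.delete k s) <$> f (Set.member k s)

main :: IO ()
main = do
  print ((1,(3,4)) ^. _2._1 :: Int)                -- read through a composition
  print ((1,"hello") ^. _2.to length)              -- compose in a plain function
  print (modify _2 (+2) (1,2) :: (Int,Int))        -- write through a lens
  print (modify traverse (+1) [1,2,3 :: Int])      -- every traversal is a setter
  print (set (contains 5) True (Set.singleton 4 :: Set Int))
```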

This sort of trick has been often used to idiomatically allow for sets of flags to be thrown in data types as a field.

data Flags = Foo | ...
data Bar a = Bar { barA :: a,  barFlags :: Set Flags }

flags f (Bar a flgs) = Bar a <$> f flgs

foo = flags.contains Foo



We can similarly access the membership of a map as a lens.

alterF :: (Functor f, Ord k) => k -> (Maybe a -> f (Maybe a)) -> Map k a -> f (Map k a)

This can be viewed as:

alterF :: Ord k => k -> Lens' (Map k a) (Maybe a)


or the lens that accesses a field out of a record type:

data Foo = Foo { _fooX, _fooY :: Int }

fooY f (Foo x y) = Foo x <$> f y

The latter use case is the only one that we've been considering in the record debate, but having a solution that extends to cover all of these strikes me as valuable.

Defining these lenses does not take us outside of Haskell 98. They do not require anything that isn't currently provided by base.

Just a couple more notes: 

I tried to keep the above more or less self-contained. It doesn't use very 'idiomatic' lens code. Normally most of the lens users would use code like:

(1,2) & _2 .~ "hello" = (1,"hello")
  where
    x & f = f x
    l .~ a = modify l (const a) -- with appropriate fixities, etc.

Also of concern to me is that it is already common practice among users of lens to elide spaces around (.) when composing lenses, so such a syntactic change is going to break a lot of code or at least break a lot of habits.

The relevance to the discussion at hand I think is that (^.) is a rather simple combinator that can be defined in the language today. It is one that has been defined in multiple libraries (lens, lens-family, etc.) It doesn't require weird changes to the syntax of the language and notably once you 'start' accessing into a structure with it, the subsequent dots are just Prelude dots and the result is more powerful in that it generalizes in more directions.

This approach already has hundreds of users (we have 90+ users in #haskell-lens 24 hours a day on freenode, packdeps shows ~80 reverse dependencies http://packdeps.haskellers.com/reverse/lens, etc.) and it doesn't break any existing code.

Simon, the 'makeLenses', 'makeClassy' and 'makeFields' template-haskell functions for lens try to tackle the SORF/DORF-like aspects. These are what Greg Weber was referring to in that earlier email. Kickstarting that discussion probably belongs in another email as this one is far too long, and there are a lot of points in the design space there that can be explored.

-Edward

On Wed, Jun 26, 2013 at 4:39 PM, Simon Peyton-Jones <simonpj <at> microsoft.com> wrote:
|  record projections.  I would prefer to have dot notation for a
|  general, very tightly-binding reverse application, and the type of the record
|  selector for a field f changed to "forall r t. r { f :: t } => r -> t"
|  instead of "SomeRecordType -> t".  Such a general reverse application dot would
|  allow things like "string.toUpper" and for me personally, it would
|  make a Haskell OO library that I'm working on more elegant...

Actually I *hadn't* considered that.   I'm sure it's been suggested before (there has been so much discussion), but I had not really thought about it in the context of our very modest proposal.

We're proposing, in effect, that ".f" is a postfix function with type "forall r t. r { f :: t } => r -> t".   You propose to decompose that idea further, into (a) reverse function application and (b) a first class function f.

It is kind of weird that
        f . g  means    \x. f (g x)
but     f.g    means    g f

but perhaps it is not *more* weird than our proposal.

Your proposal also allows things like

        data T = MkT { f :: Int }

        foo :: [T] -> [Int]
        foo = map f xs

because the field selector 'f' has the very general type you give, but the type signature would be enough to fix it.  Or, if foo lacks a type signature, I suppose we'd infer

        foo :: (r { f::a }) => [r] -> [a]

which is also fine.

It also allows you to use record field names in prefix position, just as now, which is a good thing.

In fact, your observation allows us to regard our proposal as consisting of two entirely orthogonal parts
  * Generalise the type of record field selectors
  * Introduce period as reverse function application

Both have merit.
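To make the second part concrete, here is a sketch of reverse application as an ordinary operator. The names (|>) and upper are illustrative inventions; postfix dot itself would be new syntax, but its semantics reduce to exactly this.

```haskell
import Data.Char (toUpper)

-- Reverse function application: the argument comes first and the
-- function second, mirroring the proposed tight-binding dot.
(|>) :: a -> (a -> b) -> b
x |> f = f x
infixl 1 |>

upper :: String -> String
upper = map toUpper

-- Dominique's "string.toUpper" example, spelled with the sketch operator:
example :: String
example = "hello" |> upper
```

With the field selectors generalised as proposed, `r |> foo` would then select the foo field of any record r that has one.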

Simon

|  -----Original Message-----
|  From: glasgow-haskell-users-bounces <at> haskell.org [mailto:glasgow-haskell-users-
|  bounces <at> haskell.org] On Behalf Of Dominique Devriese
|  Sent: 26 June 2013 13:16
|  To: Adam Gundry
|  Cc: glasgow-haskell-users <at> haskell.org
|  Subject: Re: Overloaded record fields
|
|  I think it's a good idea to push forward on the records design because
|  it seems futile to hope for an ideal consensus proposal.
|
|  The only thing I dislike though is that dot notation is special-cased to
|  record projections.  I would prefer to have dot notation for a
|  general, very tightly-binding reverse application, and the type of the record
|  selector for a field f changed to "forall r t. r { f :: t } => r -> t"
|  instead of
|  "SomeRecordType -> t".  Such a general reverse application dot would
|  allow things like "string.toUpper" and for me personally, it would
|  make a Haskell OO library that I'm working on more elegant...
|
|  But I guess you've considered such a design and decided against it,
|  perhaps because of the stronger backward compatibility implications of
|  changing the selectors' types?
|
|  Dominique
|
|  2013/6/24 Adam Gundry <adam.gundry <at> strath.ac.uk>:
|  > Hi everyone,
|  >
|  > I am implementing an overloaded record fields extension for GHC as a
|  > GSoC project. Thanks to all those who gave their feedback on the
|  > original proposal! I've started to document the plan on the GHC wiki:
|  >
|  > http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan
|  >
|  > If you have any comments on the proposed changes, or anything is unclear
|  > about the design, I'd like to hear from you.
|  >
|  > Thanks,
|  >
|  > Adam Gundry
|  >
|  > _______________________________________________
|  > Glasgow-haskell-users mailing list
|  > Glasgow-haskell-users <at> haskell.org
|  > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
|

AntC | 27 Jun 08:14 2013

Re: Overloaded record fields

> Edward Kmett <ekmett <at> gmail.com> writes:
> 
> Let me take a couple of minutes to summarize how the lens approach 
tackles the composition problem today without requiring confusing changes 
in the lexical structure of the language. 

Thank you Edward, I do find the lens approach absolutely formidable. And I 
have tried to read the (plentiful) documentation. But I haven't seen a 
really, really simple example that shows the correspondence with H98 
records and fields -- as simple as Adam's example in the wiki. (And this 
message from you doesn't achieve that either. Sorry, but tl;dr, and there 
isn't even a record decl in it.)

Does the lens approach meet SPJ's criteria of:
 * It is the smallest increment I can come up with that
   meaningfully addresses the #1 pain point (the inability to
   re-use the same field name in different records).

 * It is backward-compatible.

[I note BTW that as the "Plan" currently stands, the '.field' postfix 
pseudo-operator doesn't rate too high on backward compatibility.]

I do think that freeing up the name space by not auto-generating a record-
type-bound field selector will help some of the naming workarounds in the 
lens TH.

> ...

You say:
> 
>  template-haskell functions for lens try to tackle the SORF/DORF-like 
aspects. These are what Greg Weber was referring to in that earlier email. 
> 

errm I didn't see an email from Greg(?)

AntC

Edward Kmett | 27 Jun 10:38 2013

Re: Overloaded record fields


On Thu, Jun 27, 2013 at 2:14 AM, AntC <anthony_clayden <at> clear.net.nz> wrote:
> Edward Kmett <ekmett <at> gmail.com> writes:
>
> Let me take a couple of minutes to summarize how the lens approach
tackles the composition problem today without requiring confusing changes
in the lexical structure of the language. 

Thank you Edward, I do find the lens approach absolutely formidable. And I
have tried to read the (plentiful) documentation. But I haven't seen a
really, really simple example that shows the correspondence with H98
records and fields -- as simple as Adam's example in the wiki. (And this
message from you doesn't achieve that either. Sorry, but tl;dr, and there
isn't even a record decl in it.)

There was this one buried down near the bottom.

data Foo = Foo { _fooX, _fooY :: Int }

fooY f (Foo x y) = Foo x <$> f y
 
We could implement that lens more like:

fooY :: Lens' Foo Int
fooY f s = (\a -> s { _fooY = a }) <$> f (_fooY s) 

if you really want to see more record sugar in there, but the code means the same thing.

So let me show you exactly what you just asked for. The correspondence with the getter and setter for the field:

The correspondence with the getter comes from choosing to use the appropriate functor. With some thought it becomes obvious that it should be Const. I won't explain why as that apparently triggers tl;dr. ;)

s ^. l = getConst (l Const s)

Recall that fmap f (Const a) = Const a, so

s ^. fooY = getConst ((\a -> s { _fooY = a }) <$> Const (_fooY s)) = getConst (Const (_fooY s)) = _fooY s

and we can recover the setter by choosing the Functor to be Identity.

modify l f s = runIdentity (l (Identity . f) s)

modify fooY f s = runIdentity (fooY (Identity . f) s) = runIdentity ((\a -> s { _fooY = a }) <$> (Identity . f) (_fooY s))

If you remove the newtype noise that's the same as

modify fooY f s = s { _fooY = f (_fooY s) }

Similarly, after expansion:

set fooY a s = s { _fooY = a }

I sought to give a feel for the derivation in the previous email rather than specific examples, but working through that and the laws takes a fair bit of text. There isn't any getting around it.
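The whole derivation can be checked end to end in one short self-contained module. This is a sketch with hand-rolled view/modify/set helpers (the names are mine), not the lens library itself:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A van Laarhoven lens: one function usable as getter and setter.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Foo = Foo { _fooX, _fooY :: Int } deriving (Eq, Show)

fooY :: Lens' Foo Int
fooY f s = (\a -> s { _fooY = a }) <$> f (_fooY s)

-- Getter: instantiate the Functor at Const, which ignores fmap.
view :: Lens' s a -> s -> a
view l s = getConst (l Const s)

-- Modifier: instantiate the Functor at Identity.
modify :: Lens' s a -> (a -> a) -> s -> s
modify l f s = runIdentity (l (Identity . f) s)

-- Setter: modification with a constant function.
set :: Lens' s a -> a -> s -> s
set l a = modify l (const a)
```

Expanding `view fooY` and `set fooY` by hand recovers exactly `_fooY s` and `s { _fooY = a }`, matching the record selector and record update.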



With language support one could envision an option where record declarations cause the generation of lenses using whatever scheme one was going to use for the 'magic (.)' in the first place. 

The only difference is you get something that can already be used as both the getter and setter and which can be composed with other known constructions as well, isomorphisms, getters, setters, traversals, prisms, and indexed variants all fit this same mold and have a consistent theoretical framework.

Does the lens approach meet SPJ's criteria of:
 * It is the smallest increment I can come up with that
   meaningfully addresses the #1 pain point (the inability to
   re-use the same field name in different records).

The lens approach is orthogonal to the SORF/DORF design issue. It simply provides a way to make the field accessors compose together in a more coherent way, and helps alleviate the need to concoct confusing semantics around (.), by showing that the existing ones are enough. 

 * It is backward-compatible.

Lens already works today. So I'd dare say that the thing that works today is compatible with what already works today, yes. ;) 

[I note BTW that as the "Plan" currently stands, the '.field' postfix
pseudo-operator doesn't rate too high on backward-compatible.]

I do think that freeing up the name space by not auto-generating a record-
type-bound field selector will help some of the naming work-rounds in the
lens TH.

I'm going to risk going back into tl;dr territory in response to the comment about lens TH:

Currently lens is pretty much non-committal about which strategy to use for field naming / namespace management.

We do have three template-haskell combinators that provide lenses for record types in lens, but they are more or less just 'what we can do in the existing ecosystem'.

I am _not_ advocating any of these, merely describing what we already can do today with no changes required to the language at all.

makeLenses - does the bare minimum to allow for type changing assignment
makeClassy - allows for easy 'nested record types' 
makeFields - allows for highly ad hoc per field-name reuse

Consider

data Foo a = Foo { _fooBar :: Int, _fooBaz :: a }

and we can see what is generated by each.

makeLenses ''Foo

generates the minimum possible lens support

fooBar :: Lens' (Foo a) Int
fooBar f s = (\a -> s { _fooBar = a }) <$> f (_fooBar s)

fooBaz :: Lens (Foo a) (Foo b) a b
fooBaz f s = (\a -> s { _fooBaz = a }) <$> f (_fooBaz a)

makeClassy ''Foo generates

class HasFoo t a | t -> a where
   foo :: Lens' t (Foo a)
   fooBar :: Lens' t Int
   fooBaz :: Lens' t a
   -- with default definitions of fooBar and fooBaz in terms of the simpler definitions above precomposed with foo

It then provides

instance HasFoo (Foo a) a where
  foo = id

This form is particularly nice when you want to be able to build up composite states that have 'Foo' as part of a larger state.

data MyState = MyState { _myStateFoo :: Foo Double, _myStateX :: (Int, Double) }
makeClassy ''MyState

instance HasFoo MyState Double where
  foo = myStateFoo

This lets us write some pretty sexy code using HasFoo constraints and MonadState.

blah :: (MonadState s m, HasFoo s a) => m a 
blah = do 
  fooBar += 1
  use fooBaz

and that code can run in State Foo or State MyState or other transformer towers that offer a state that subsumes them transparently.

This doesn't give the holy grail of having perfect field name reuse, but it does give a weaker notion of reuse in that you can access fields in part of a larger whole.

I said above that I don't wholly endorse any one of these options, but I do view 'makeClassy' as having effectively removed all pressure for a better record system from the language for me personally. It doesn't permit some of the wilder ad hoc overloadings, but the constraints on it feel very "Haskelly".
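For concreteness, here is a hand-written sketch of roughly what 'makeClassy' generates for Foo, and how a larger state picks it up. The helper names (fooBarL, view, over) and the Lens' plumbing are mine, not the TH output verbatim:

```haskell
{-# LANGUAGE RankNTypes, MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens' s a -> s -> a
view l s = getConst (l Const s)

over :: Lens' s a -> (a -> a) -> s -> s
over l f s = runIdentity (l (Identity . f) s)

data Foo a = Foo { _fooBar :: Int, _fooBaz :: a } deriving (Eq, Show)

-- Base lens for Foo's first field.
fooBarL :: Lens' (Foo a) Int
fooBarL f s = (\x -> s { _fooBar = x }) <$> f (_fooBar s)

-- Roughly what 'makeClassy ''Foo' produces: a class of things that
-- contain a Foo, with field lenses defaulted to go via 'foo'.
class HasFoo t a | t -> a where
  foo    :: Lens' t (Foo a)
  fooBar :: Lens' t Int
  fooBar = foo . fooBarL

instance HasFoo (Foo a) a where
  foo = id

-- A larger state that contains a Foo as one part.
data MyState = MyState { _myStateFoo :: Foo Double, _myStateX :: Int }
  deriving (Eq, Show)

instance HasFoo MyState Double where
  foo f s = (\a -> s { _myStateFoo = a }) <$> f (_myStateFoo s)
```

The same `fooBar` lens then works on a bare `Foo` or on any `MyState`-like composite, which is the "access fields in part of a larger whole" reuse described above.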

Finally,

To provide full field name reuse, we currently use 

makeFields ''Foo, which is perhaps a bit closer to one of the existing record proposals.
  
It takes the member names and uses rules to split each one apart into a data type name and a field part, and then makes an instance of Has<FieldName> for each one.

There are issues with all 3 of these approaches. I personally prefer the middle existing option, because I get complete control over naming even if I have to be more explicit.

I wasn't purporting to solve this portion of the record debate, however. 

I was claiming that lenses offered a superior option to giving back 'r { f :: t } => r -> t'.

-Edward

> ...

You say:
>
>  template-haskell functions for lens try to tackle the SORF/DORF-like
aspects. These are what Greg Weber was referring to in that earlier email.
>

errm I didn't see an email from Greg(?)

Sorry, I was dragged into this thread by Simon forwarding me an email -- apparently it was in another chain. 

-Edward
Brandon Allbery | 27 Jun 12:54 2013

Re: Overloaded record fields

On Thu, Jun 27, 2013 at 2:14 AM, AntC <anthony_clayden <at> clear.net.nz> wrote:
Does the lens approach meet SPJ's criteria of:
 * It is the smallest increment I can come up with that
   meaningfully addresses the #1 pain point (the inability to
   re-use the same field name in different records).

 * It is backward-compatible.

It's difficult to get more backward compatible than "is already working, without any changes to the compiler or standard libraries at all". Note in particular that (.) is not redefined or changed. I think the only pain point is code that itself defines (^.) or functions beginning with an underscore.

As for reusing the same field in different records, the point of lens is it's a generic accessor/mutator mechanism. It doesn't just support different records, it supports different pretty much everything — no significant difference between a record, a tuple, a list, a Map, .... And it composes very well, so it's absurdly easy to drill down into a complex nested structure.
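A small illustration of that composability, with hand-rolled helpers rather than the lens library itself (the `_1`/`_2` definitions below are my own minimal versions of the lens combinators of the same names): plain function composition does the drilling.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens' s a -> s -> a
view l = getConst . l Const

over :: Lens' s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

-- Lenses into the components of a pair.
_1 :: Lens' (a, b) a
_1 f (a, b) = (\a' -> (a', b)) <$> f a

_2 :: Lens' (a, b) b
_2 f (a, b) = (\b' -> (a, b')) <$> f b

-- Ordinary (.) composes lenses into a path through nested structure.
inner :: Lens' ((Int, String), Bool) String
inner = _1 . _2
```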

--
brandon s allbery kf8nh                               sine nomine associates
allbery.b <at> gmail.com                                  ballbery <at> sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
Dominique Devriese | 27 Jun 09:54 2013

Re: Overloaded record fields

Simon,

Yes, your summary is exactly what I meant.

2013/6/26 Simon Peyton-Jones <simonpj <at> microsoft.com>:
> In fact, your observation allows us to regard our proposal as consisting of two entirely orthogonal parts
>   * Generalise the type of record field selectors
>   * Introduce period as reverse function application

As Anthony points out below, I think the orthogonality is also an
important benefit.  It could allow people like Edward and others who
dislike DotAsPostFixApply to still use OverloadedRecordFields.  I
expect just the OverloadedRecordFields extension would fit reasonably
well into the existing lens libraries somehow.

Regards
Dominique
AntC | 27 Jun 14:37 2013

Re: Overloaded record fields

> 
> ... the orthogonality is also an important benefit.
>  It could allow people like Edward and others who dislike ... 
>  to still use ...
> 

Folks, I'm keenly aware that GSoC has a limited timespan; and that there 
has already been much heat generated on the records debate.

Perhaps we could concentrate on giving Adam a 'plan of attack', and help 
resolving any difficulties he runs into. I suggest:

1. We postpone trying to use postfix dot:
   It's controversial.
   The syntax looks weird whichever way you cut it.
   It's sugar, whereas we'd rather get going on functionality.
   (This does mean I'm suggesting 'parking' Adam's/Simon's syntax, too.)

2. Implement class Has with method getFld, as per Plan.

3. Implement the Record field constraints new syntax, per Plan.

4. Implicitly generate Has instances for record decls, per Plan.
   Including generating for imported records, 
   even if they weren't declared with the extension.
   (Option (2) on-the-fly.)

5. Implement Record update, per Plan.

6. Support an extension to suppress generating field selector functions.
   This frees the namespace.
   (This is -XNoMonoRecordFields in the Plan,
    but Simon M said he didn't like the 'Mono' in that name.)
   Then lenses could do stuff (via TH?) with the name.

   [Those who've followed so far, will notice that
    I've not yet offered a way to select fields.
    Except with explicit getFld method.
    So this 'extension' is actually 'do nothing'.]

7. Implement -XPolyRecordFields, not quite per Plan.
   This generates a poly-record field selector function:

       x :: r {x :: t} => r -> t    -- Has r "x" t => ...
       x = getFld

    And means that H98 syntax still works:

       x e     -- we must know e's type to pick which instance

    But note that it must generate only one definition
    for the whole module, even if x is declared in multiple data types.
    (Or in both a declared and an imported.)

    But not per the Plan:
    Do _not_ export the generated field selector functions.
    (If an importing module wants field selectors,
     it must set the extension, and generate them for imported data types.
     Otherwise we risk name clash on the import.
     This effectively blocks H98-style modules
     from using the 'new' record selectors, I fear.)
    Or perhaps I mean that the importing module could choose
    whether to bring in the field selector function??
    Or perhaps we export/import-control the selector function
    separately to the record and field name???

    Taking 6. and 7. together means that for the same record decl:
    * one importing module could access it as a lens
    * another could use field selector functions

8. (If GSoC hasn't expired yet!)
   Implement -XDotPostfixFuncApply as an orthogonal extension ;-).
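Steps 2, 4 and 7 can already be prototyped in today's GHC. A sketch follows; the Proxy argument to getFld is my own plumbing standing in for whatever the compiler would do internally, and the record types are invented for illustration:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FlexibleInstances, FlexibleContexts, TypeFamilies #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (Symbol)

-- Step 2: the Has class with method getFld.
class Has r (f :: Symbol) t where
  getFld :: Proxy f -> r -> t

data Person  = Person  { personName  :: String }
data Company = Company { companyName :: String }

-- Step 4: instances the compiler would generate from the record decls.
-- The (t ~ String) context improves inference, as in the Plan.
instance t ~ String => Has Person "name" t where
  getFld _ = personName

instance t ~ String => Has Company "name" t where
  getFld _ = companyName

-- Step 7: one generated selector shared by both record types, with the
-- constraint the Plan writes as  r { name :: t } => r -> t
name :: Has r "name" t => r -> t
name = getFld (Proxy :: Proxy "name")
```

With this in place, plain H98-style application `name e` picks the instance from e's type, which is exactly the behaviour step 7 asks for.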

AntC

Gershom Bazerman | 27 Jun 14:59 2013

Re: Overloaded record fields

On 6/27/13 8:37 AM, AntC wrote:
> Folks, I'm keenly aware that GSoC has a limited timespan; and that there
> has already been much heat generated on the records debate.
>
> Perhaps we could concentrate on giving Adam a 'plan of attack', and help
> resolving any difficulties he runs into. I suggest:
>
Adam already has a plan of attack. It is in his proposal, and he appears 
to be proceeding with it. The great strength of his original GSoC 
proposal is that it recognized that we had spent some years debating 
bikeshed colors, but nobody had actually gone off and started to build 
the bikeshed to begin with. I imagine that if all goes well, Adam will 
complete the shed, and it will barely be painted at all.

Have no fear, at that point, there will still be plenty of time for 
debates to grind progress to a screeching halt :-)

--Gershom
Barney Hilken | 27 Jun 15:10 2013

Re: Overloaded record fields

This (AntC's points 1-8) is the best plan yet. By getting rid of dot notation, things
become more compatible with existing code. The only dodgy bit is import/export in point 7:

> 7. Implement -XPolyRecordFields, not quite per Plan.
>   This generates a poly-record field selector function:
> 
>       x :: r {x :: t} => r -> t    -- Has r "x" t => ...
>       x = getFld
> 
>    And means that H98 syntax still works:
> 
>       x e     -- we must know e's type to pick which instance
> 
>    But note that it must generate only one definition
>    for the whole module, even if x is declared in multiple data types.
>    (Or in both a declared and an imported.)
> 
>    But not per the Plan:
>    Do _not_ export the generated field selector functions.
>    (If an importing module wants field selectors,
>     it must set the extension, and generate them for imported data types.
>     Otherwise we risk name clash on the import.
>     This effectively blocks H98-style modules
>     from using the 'new' record selectors, I fear.)
>    Or perhaps I mean that the importing module could choose
>    whether to bring in the field selector function??
>    Or perhaps we export/import-control the selector function
>    separately to the record and field name???

I don't see the problem with H98 name clash. A field declared in a -XPolyRecordFields
module is just a polymorphic function; of course you can't use it in record syntax in a
-XNoPolyRecordFields module, but you can still use it.

I think a -XPolyRecordFields module should automatically hide all imported H98 field names and
generate one Has instance per name on import. That way you could import two clashing H98
modules and the clash would be resolved automatically.

Barney.
Adam Gundry | 27 Jun 16:41 2013

Re: Overloaded record fields

Thanks everyone for the illuminating discussion, and for your awareness
of the dangers of bikeshedding. ;-) I think we are making progress though.

I like the idea of making -XFunnyDotSyntax or whatever a separate
extension. It's simple, resolves something of a love-hate issue, and
reduces backwards incompatibility for people who want overloaded record
fields in their existing code. Perhaps we can leave the arguments over
dot syntax for another thread?

There are a bunch of options for translating record fields into selector
functions:
 * monomorphically, as in Haskell 98, which is simple and robust but
doesn't allow overloading;
 * polymorphically, with Has, which permits overloading and is often the
'right' thing (but not always: it isn't great for higher-rank fields,
and can result in too much ambiguity);
 * do nothing in GHC itself, so the namespace is left open for lens or
another library to do wonderful things.

Rather than committing to one of these options, let's allow all of them.
If we start thinking of modules as importing/exporting *field names*,
rather than *selector functions*, perhaps we can allow each module to
decide for itself (via appropriate extensions) how it wants to bring
them in to scope.

I'll see what Simon thinks, draft an updated Plan, and continue trying
to understand what this will mean for GHC's Glorious Renamer...

Adam

On 27/06/13 14:10, Barney Hilken wrote:
> This (AntC's points 1-8) is the best plan yet. By getting rid of dot notation, things
> become more compatible with existing code. The only dodgy bit is import/export in point 7:
> 
>> 7. Implement -XPolyRecordFields, not quite per Plan.
>>   This generates a poly-record field selector function:
>>
>>       x :: r {x :: t} => r -> t    -- Has r "x" t => ...
>>       x = getFld
>>
>>    And means that H98 syntax still works:
>>
>>       x e     -- we must know e's type to pick which instance
>>
>>    But note that it must generate only one definition
>>    for the whole module, even if x is declared in multiple data types.
>>    (Or in both a declared and an imported.)
>>
>>    But not per the Plan:
>>    Do _not_ export the generated field selector functions.
>>    (If an importing module wants field selectors,
>>     it must set the extension, and generate them for imported data types.
>>     Otherwise we risk name clash on the import.
>>     This effectively blocks H98-style modules
>>     from using the 'new' record selectors, I fear.)
>>    Or perhaps I mean that the importing module could choose
>>    whether to bring in the field selector function??
>>    Or perhaps we export/import-control the selector function
>>    separately to the record and field name???
> 
> I don't see the problem with H98 name clash. A field declared in a -XPolyRecordFields
> module is just a polymorphic function; of course you can't use it in record syntax in a
> -XNoPolyRecordFields module, but you can still use it.
> 
> I think a -XPolyRecordFields module should automatically hide all imported H98 field names and
> generate one Has instance per name on import. That way you could import two clashing H98
> modules and the clash would be resolved automatically.
> 
> Barney.
> 
> 
> 
Evan Laforge | 27 Jun 23:08 2013

Re: Overloaded record fields

I'm reluctant to add yet another opinion, but, oh what the heck:

For me, lenses basically already solve the record problem.  The only
missing thing is to integrate them better with record declaration
syntax.  Having to rely on TH and then write a makeLenses splice is
just too much friction to have lenses added everywhere automatically.
TH makes compiles more fragile and slower, and worst of all introduces
a declaration order constraint in the source file.  So I declare by
hand, but even at one line per lens, it's still too much friction,
because after all they don't start saving you lines until you want to
update a nested field.

In addition, I can't use unqualified names because I would want to
import only the record lenses unqualified, not the rest of the module.
 So I'd have to move all the data types into individual modules,
because giant long import lists is definitely way too much friction.
But the separate module is no good, because that in turns doesn't let
you preserve invariants by restricting exports.

So if I were doing the GSoC thing, here's what I would do:

1 - Add an option to add a 'deriving (Lens)' to record declarations.
That makes the record declare lenses instead of functions.  That's
already most of the problem solved, for me, because it removes the TH
friction.

2 - The next step is to allow 'deriving (ClassyLens)', which declares
lenses plus the typeclasses to allow shared names.  Then when you
write 'import M (Record(..))', the (..) will import the stuff
generated by 'deriving (ClassyLens)', i.e. the class methods.  Now you
can drop the record name prefixing and import unqualified to drop the
module name qualification as well.

It's still not ideal because you would have to add the unqualified
'import M (Record(..))', but is better than having to write out the
list of fields every single time.  And actually I'm not sure I could
use even that, because record field names are usually the same as what
you want to name the variable, e.g.: "name = parent.name .^ person".
A hardcoded record field thing like SORF has the edge here because you
can write "name = person.parent.name" without 'name' clashing.  But in
every other respect, lenses fit much better into the rest of the
language and are much more powerful for much less (i.e. none!) ad-hoc
language level complexity to support them, so to me they clearly win
at power to weight ratio.

But I don't mind always qualifying, e.g. 'name = Person.parent .
Person.name .^ person' and avoiding the whole classes and unqualified
import hassle, so just step 1 is basically problem solved (in fact,
this is what I already do, just without the automatic lens
generation).

Alas, not everyone shares my attitude towards qualification.  In fact,
I'm probably in a small minority (hi Henning!).  Which is sad, because
just doing #1 would be so easy!  Maybe I should just go do it myself,
it's not like it would conflict with any of the other proposed
extensions.
Daniel Trstenjak | 28 Jun 10:55 2013

Re: Overloaded record fields


Hi Evan,

> 1 - Add an option to add a 'deriving (Lens)' to record declarations.
> That makes the record declare lenses instead of functions.

Well, no, that's exactly the kind of magic programming-language hackery
that Haskell shouldn't be part of.

Deriving should only add something, but not change the behaviour of the underived case.

I'm really for convenience, but it shouldn't be added willy-nilly,
because in the long term this creates more harm.

Greetings,
Daniel
Simon Peyton-Jones | 28 Jun 09:27 2013

RE: Overloaded record fields

| Folks, I'm keenly aware that GSoC has a limited timespan; and that there
| has already been much heat generated on the records debate.

I am also keenly aware of this.  I think the plan Ant outlines below makes sense; I'll work on it with Adam.

I have, however, realised why I liked the dot idea.  Consider

	f r b = r.foo && b

With dot-notation baked in (non-orthogonally), f would get the type

	f :: (r { foo::Bool }) => r -> Bool -> Bool

With the orthogonal proposal, f is equivalent to
	f r b = foo r && b

Now it depends. 

* If there is at least one record in scope with a field "foo" 
  and no other foo's, then you get the above type

* If there are no records in scope with field "foo"
  and no other foo's, the program is rejected

* If there are no records in scope with field "foo"
  but there is a function "foo", then the usual thing happens.

This raises the funny possibility that you might have to define a local type
	data Unused = U { foo :: Int }
simply so that there *is* at least on "foo" field in scope.

I wanted to jot this point down, but I think it's a lesser evil than falling into the dot-notation swamp. 
After all, it must be vanishingly rare to write a function manipulating "foo" fields when there are no such
records around. It's just a point to note (NB Adam: design document).

Simon

| -----Original Message-----
| From: glasgow-haskell-users-bounces <at> haskell.org [mailto:glasgow-haskell-
| users-bounces <at> haskell.org] On Behalf Of AntC
| Sent: 27 June 2013 13:37
| To: glasgow-haskell-users <at> haskell.org
| Subject: Re: Overloaded record fields
| 
| >
| > ... the orthogonality is also an important benefit.
| >  It could allow people like Edward and others who dislike ...
| >  to still use ...
| >
| 
| Folks, I'm keenly aware that GSoC has a limited timespan; and that there
| has already been much heat generated on the records debate.
| 
| Perhaps we could concentrate on giving Adam a 'plan of attack', and help
| resolving any difficulties he runs into. I suggest:
| 
| 1. We postpone trying to use postfix dot:
|    It's controversial.
|    The syntax looks weird whichever way you cut it.
|    It's sugar, whereas we'd rather get going on functionality.
|    (This does mean I'm suggesting 'parking' Adam's/Simon's syntax, too.)
| 
| 2. Implement class Has with method getFld, as per Plan.
| 
| 3. Implement the Record field constraints new syntax, per Plan.
| 
| 4. Implicitly generate Has instances for record decls, per Plan.
|    Including generating for imported records,
|    even if they weren't declared with the extension.
|    (Option (2) on-the-fly.)
| 
| 5. Implement Record update, per Plan.
| 
| 6. Support an extension to suppress generating field selector functions.
|    This frees the namespace.
|    (This is -XNoMonoRecordFields in the Plan,
|     but Simon M said he didn't like the 'Mono' in that name.)
|    Then lenses could do stuff (via TH?) with the name.
| 
|    [Those who've followed so far, will notice that
|     I've not yet offered a way to select fields.
|     Except with explicit getFld method.
|     So this 'extension' is actually 'do nothing'.]
| 
| 7. Implement -XPolyRecordFields, not quite per Plan.
|    This generates a poly-record field selector function:
| 
|        x :: r {x :: t} => r -> t    -- Has r "x" t => ...
|        x = getFld
| 
|     And means that H98 syntax still works:
| 
|        x e     -- we must know e's type to pick which instance
| 
|     But note that it must generate only one definition
|     for the whole module, even if x is declared in multiple data types.
|     (Or in both a declared and an imported.)
| 
|     But not per the Plan:
|     Do _not_ export the generated field selector functions.
|     (If an importing module wants field selectors,
|      it must set the extension, and generate them for imported data
| types.
|      Otherwise we risk name clash on the import.
|      This effectively blocks H98-style modules
|      from using the 'new' record selectors, I fear.)
|     Or perhaps I mean that the importing module could choose
|     whether to bring in the field selector function??
|     Or perhaps we export/import-control the selector function
|     separately to the record and field name???
| 
|     Taking 6. and 7. together means that for the same record decl:
|     * one importing module could access it as a lens
|     * another could use field selector functions
| 
| 8. (If GSoC hasn't expired yet!)
|    Implement -XDotPostfixFuncApply as an orthogonal extension ;-).
| 
| AntC
| 
| 
| 
| 

AntC | 28 Jun 13:16 2013

Re: Overloaded record fields

> Simon Peyton-Jones <simonpj <at> microsoft.com> writes:
> 
> I have, however, realised why I liked the dot idea.  Consider
> 
> 	f r b = r.foo && b
> 

Thanks Simon, I'm a little puzzled what your worry is.

> With dot-notation baked in (non-orthogonally), f would get the type
>
>	f :: (r { foo::Bool }) => r -> Bool -> Bool
> 
> With the orthogonal proposal, f is equivalent to
> 	f r b = foo r && b
> 
> Now it depends. 
> 
> * If there is at least one record in scope with a field "foo" 
>   and no other foo's, then you get the above type
> 

I don't think the compiler has to go hunting for 'records in scope'.
There is one of two situations in force:

Step 6. -XNoMonoRecordFields  
        Then function foo is not defined.
        (Or at least not by the record fields mechanism.)
        This is exactly so that the program can define
        its own access method (perhaps lenses,
         perhaps a function foo with a different type,
         the namespace is free for experiments).

Step 7. -XPolyRecordFields
        Then function foo is defined with the same type
        as would be for (.foo) in the baked-in approach. IOW

        f r b = (.foo) r && b     -- baked-in
        f r b = foo r && b        -- non-baked-in, as you put it

        foo = getFld :: (r { foo :: Bool } ) => r -> Bool

        So the type you give would be inferred for function f.

        At the use site for f (say, applied to record type Bar) we need:

            instance (t ~ Bool) => Has Bar "foo" t where ...

        So generate that on-the-fly.
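
The whole of Step 7 can be sketched in today's GHC (all names here are illustrative, and the Proxy argument stands in for the Plan's implicit field-label parameter to getFld):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FlexibleInstances, FlexibleContexts, TypeFamilies #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (Symbol)

-- The Has class, with the field name as a type-level Symbol.
class Has r (f :: Symbol) t where
  getFld :: Proxy f -> r -> t

data Bar = MkBar { barFoo :: Bool }

-- The instance the compiler would generate on-the-fly for Bar.
-- The (t ~ Bool) context aids inference: the instance matches any
-- Has Bar "foo" t constraint, and only then forces t to Bool.
instance (t ~ Bool) => Has Bar "foo" t where
  getFld _ (MkBar { barFoo = x }) = x

-- The selector -XPolyRecordFields would generate, once per module.
foo :: Has r "foo" t => r -> t
foo = getFld (Proxy :: Proxy "foo")

-- Simon's f then infers the constraint exactly as described:
f :: Has r "foo" Bool => r -> Bool -> Bool
f r b = foo r && b
```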

If the program declares a separate function foo,
then we have 'vanilla' name clash, just like double-declaring any name.
(Just like a H98 record with field foo, then declaring a function foo.)

Or is the potential difficulty something like this:

+ function f is defined as above in a module with -XPolyRecordFields.
+ function f is exported/imported.
+ the importing module also uses -XPolyRecordFields.
+ now in the importing module we try to apply f to a record.
  (say type Baz, not having field foo)
+ the compiler sees the (r { foo :: Bool }) constraint from f.

The compiler tries to generate on-the-fly:

    instance (t ~ Bool) => Has Baz "foo" t where
        getFld (MkBaz { foo = foo }) = foo  -- no such field

    But this could happen within a single module.
    At this point, we need Adam to issue a really clear error message.

Or perhaps the importing module uses H98 records.
And it applies f to a record type Baz.
And there is a field foo type Bool in data type Baz.
Then there's a function:

    foo :: Baz -> Bool       -- H98 field selector

Now we _could_ generate an instance `Has Baz "foo" t`.
And it wouldn't clash with Mono field selector foo.

But the extension is switched off. So we'll get:

    No instance `Has Baz "foo" t` arising from the use of `f` ...

(It's this scenario that led me to suggest in step 7
that when exporting field foo,
_don't_ export field selector function foo.)

> 
> This raises the funny possibility that you might have to define a local type
> 	data Unused = U { foo :: Int }
> simply so that there *is* at least one "foo" field in scope.
> 

No, I don't see that funny decls are needed.

AntC

> 
> | -----Original Message-----
> | From: glasgow-haskell-users On Behalf Of AntC
> | Sent: 27 June 2013 13:37
> | 
> | 7. Implement -XPolyRecordFields, not quite per Plan.
> |    This generates a poly-record field selector function:
> | 
> |        x :: r {x :: t} => r -> t    -- Has r "x" t => ...
> |        x = getFld
> | 
> |     And means that H98 syntax still works:
> | 
> |        x e     -- we must know e's type to pick which instance
> | 
> |     But note that it must generate only one definition
> |     for the whole module, even if x is declared in multiple data types.
> |     (Or in both a declared and an imported.)
> | 
> |     But not per the Plan:
> |     Do _not_ export the generated field selector functions.
> |     (If an importing module wants field selectors,
> |      it must set the extension, and generate them for imported data
> | types.
> |      Otherwise we risk name clash on the import.
> |      This effectively blocks H98-style modules
> |      from using the 'new' record selectors, I fear.)
> |     Or perhaps I mean that the importing module could choose
> |     whether to bring in the field selector function??
> |     Or perhaps we export/import-control the selector function
> |     separately to the record and field name???
> | 
Malcolm Wallace | 28 Jun 14:32 2013

Re: Overloaded record fields


On 28 Jun 2013, at 12:16, AntC wrote:

> Thanks Simon, I'm a little puzzled what your worry is.
> 
>> 	f r b = r.foo && b

>> With dot-notation baked in (non-orthogonally), f would get the type
>> 
>> 	f :: (r { foo::Bool }) => r -> Bool -> Bool
>> 
>> With the orthogonal proposal, f is equivalent to
>> 	f r b = foo r && b

I believe Simon's point is that, if dot is special, we can infer the "Has" type above, even if the compiler is
not currently aware of any actual record types that contain a "foo" field.  If dot is not special, then there
*must* be some record containing "foo" already in scope, otherwise you cannot infer that type - you would
get a "name not in scope" error instead.

The former case, where you can use a selector for a record that is not even defined yet, leads to good library
separation.  The latter case couples somewhat-polymorphic record selectors to actual definitions.

Unless you require the type signature to be explicit, instead of inferred.

(For the record, I deeply dislike making dot special, so I would personally go for requiring the explicit
type signature in this situation.)

Regards,
    Malcolm
AntC | 28 Jun 15:46 2013

Re: Overloaded record fields

> Malcolm Wallace <malcolm.wallace <at> me.com> writes:
> 
> >> 
> >> With the orthogonal proposal, f is equivalent to
> >> 	f r b = foo r && b
> 
> I believe Simon's point is that, if dot is special, we can infer the
> "Has" type above, even if the compiler is not currently aware of any
> actual record types that contain a "foo" field.

Thanks Malcolm, yes I think I do understand what Simon had in mind.
In effect .foo is a kind of literal.
It 'stands for' the String type "foo" :: Symbol parameter to Has.
(And that's "very odd", as SPJ's SORF write-up points out, because that 
isn't an explicit parameter to getFld.)

But contrast H98 field selector functions. They're regular functions, 
nothing about them to show they're specific to a record decl. And they 
work (apart from the non-overloadability).

So all we're doing is moving to foo being an overloaded field selection 
function. And it's a regular overloaded function, which resolves through 
instance matching.

>  If dot is not special, then there
> *must* be some record containing "foo" already in scope, ...

I think you have it the wrong way round.
Field selector function foo must be in scope.
(Or rather what I mean is that name foo must be in scope,
and its in-scope binding must be to a field selector.)

And function foo must be in scope because there's a record in scope with 
field foo, that generated the function via -XPolyRecordFields.

> 
> ..., where you can use a selector for a record that is not
> even defined yet, leads to good library separation.

You can't do that currently. So I think you're asking for something beyond 
Simon's "smallest increment".

> 
> Unless you require the type signature to be explicit, instead of
> inferred.

Well, I think that's reasonable to require a signature if you "use a 
selector for a record that is not even defined yet". I'm not convinced 
there's a strong enough use case to try to support auto type inference. 
Simon said "vanishingly rare".

> 
> (For the record, I deeply dislike making dot special, so I would
> personally go for requiring the explicit type signature in this
> situation.)
> 
> Regards,
>     Malcolm
> 
AntC | 30 Jun 13:36 2013

Re: Overloaded record fields

> Malcolm Wallace <malcolm.wallace <at> me.com> writes:
> 
> I believe Simon's point is that, if dot is special, we can infer the
> "Has" type above, even if the compiler is not currently aware of any
> actual record types that contain a "foo" field. ...
> 
> (For the record, I deeply dislike making dot special, ...

Simon, Malcolm, here's a solution (at risk of more bikeshedding on syntax).

    e { foo }

  * The braces say 'here comes a record'.
  * Also say 'expect funny business with names'.
  * The absence of `=` says this is getFld, not update.
  * This is not currently valid syntax [**], so we don't break code.
  * It's postfix. (And only a couple more chars than infix dot.)
  * So perhaps an IDE can see the opening brace and prompt for fields?
    (Perhaps some IDE's do this already for record update?)
  * If we need to disambiguate the record type:

    e :: T Int { foo }       -- as the Plan suggests for record update

Development would fit into the 'sequence of attack' as 5b., with record 
update syntax.

[**] ghc 7.6.1 rejects the syntax, and suggests you need NamedFieldPuns.
     But if you set that, you get a weird type error,
     which suggests ghc is de-sugaring to { foo = foo }.
     We'd have to fix that.

The syntax is valid in a pattern (with NamedFieldPuns).
Indeed the proposed syntax echoes pattern match:

    e .$ (\ (MkFoo { foo }) -> foo )       -- (.$) = flip ($)

We'd better insist NamedFieldPuns is on to allow the proposal.
Otherwise the syntax would have to be:

    e { foo = foo }     -- ambiguous with update

In fact the proposal is an enhancement to NamedFieldPuns,
'repurposed' for OverloadedRecordFields.

Possible future development:

    e { foo, bar, baz }  -- produces a tuple ( _, _, _ )
                         -- with fields in order given
                         -- _not_ 'canonical' order in the data type

  * By coincidence, that syntax is per one of the dialects for
    relational algebra projection over a tuple.

  * Possibly useful for overloaded comprehensions?:

    [ x { foo, bar, baz } | x <- xs ]

    [ { foo, bar } | { foo, bar, baz } <- xs, baz == 27 ]

AntC
Malcolm Wallace | 1 Jul 09:18 2013

Re: Overloaded record fields

> 
> Simon, Malcolm, here's a solution (at risk of more bikeshedding on syntax).
> 
>    e { foo }
> 
>  * The braces say 'here comes a record'.
>  * Also say 'expect funny business with names'.
>  * The absence of `=` says this is getFld, not update.
>  * This is not currently valid syntax [**], so we don't break code.
>  * It's postfix. (And only a couple more chars than infix dot.)
>  * So perhaps an IDE can see the opening brace and prompt for fields?
>    (Perhaps some IDE's do this already for record update?)

I like it.  It fits with the existing syntax.  Nested records are chained:

    foo{bar}{subbar}{zed}
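
Without the new braces, the same chain is just iterated reverse application with today's selectors; a sketch (the record types are illustrative, and later versions of base export this same (&) from Data.Function):

```haskell
data Inner = Inner { zed :: Int }
data Outer = Outer { bar :: Inner }

-- reverse application, so selectors read left-to-right like foo{bar}{zed}
infixl 1 &
(&) :: a -> (a -> b) -> b
x & f = f x

example :: Int
example = Outer (Inner 7) & bar & zed   -- corresponds to foo{bar}{zed}
```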

> Possible future development:
> 
>    e { foo, bar, baz }  -- produces a tuple ( _, _, _ )
>                         -- with fields in order given
>                         -- _not_ 'canonical' order in the data type
> 
>  * By coincidence, that syntax is per one of the dialects for
>    relational algebra projection over a tuple.

Not quite so keen on this.  I would argue that in relational algebra (which I use a lot, although with a
dynamically-typed API rather than a strongly-typed one), the ordering of columns and rows is never
significant, and should never be exposed to the user directly.  I long for the day when we can offer
strong typing for Relations, but it would be worse to pretend that something is kind-of relation-like
without the underlying properties that make it powerful.

Regards,
    Malcolm
Barney Hilken | 1 Jul 11:50 2013

Re: Overloaded record fields

(sorry, accidentally failed to send this to the list)

All this extra syntax, whether it's ., #, or {} seems very heavy for a problem described as very rare.
Why not simply use a declaration

	field name

whose effect is to declare 

	name :: r {name ::t} => r -> t
	name = getFld

unless name is already in scope as a field name, in which case the declaration does nothing?
Then we could continue to use standard functional notation for projection, and still deal with the
case of unused projections.

Barney.
Adam Gundry | 1 Jul 14:53 2013

Re: Overloaded record fields

Hi all,

I have amended the plan [1] as a result of the ongoing discussion,
including leaving the syntax alone for the time being, so record
projections are written prefix.

Regarding Barney's suggestion of field declarations:

On 01/07/13 10:50, Barney Hilken wrote:
> All this extra syntax, whether it's ., #, or {} seems very heavy for a problem described as very rare.
> Why not simply use a declaration
> 
> 	field name
> 
> whose effect is to declare 
> 
> 	name :: r {name ::t} => r -> t
> 	name = getFld
> 
> unless name is already in scope as a field name, in which case the declaration does nothing?

This makes sense. I guess the question is whether a new declaration form
is justified. The implementation is slightly more subtle than you
suggest, because we don't know whether `name` will be brought into scope
as a field later, in which case the definition would clash with the
actual field. It should be equivalent to defining

data Unused = Unused { name :: () }
data Unused2 = Unused2 { name :: () }

(twice so that there is always ambiguity about a use of `name`).
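
Today the same effect can be approximated with -XDuplicateRecordFields (itself a later descendant of this proposal); `getName` below is an illustrative helper showing that, with both dummy types in scope, `name` can only be reached through something type-directed such as a pattern match:

```haskell
{-# LANGUAGE DuplicateRecordFields #-}

data Unused  = Unused  { name :: () }
data Unused2 = Unused2 { name :: () }

-- A bare use of the selector `name` is now ambiguous between the two
-- types; a constructor pattern disambiguates it:
getName :: Unused -> ()
getName (Unused { name = n }) = n
```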

Adam

[1]
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan
Ryan Newton | 2 Jul 23:33 2013

Re: Overloaded record fields

+1 for orthogonal -XDotPostfixApply.

I'm in the absolutely-love-using-dot-for-records camp, but I also like "foo s1.toUpper s2.toUpper".  

We're already so lazy about parentheses -- e.g. the non-haskellers I show code to are always appalled at $ ;-) -- and this helps scrap more parens!



On Mon, Jul 1, 2013 at 8:53 AM, Adam Gundry <adam.gundry <at> strath.ac.uk> wrote:
[snip: Adam's message, quoted in full above]



harry | 18 Jul 18:20 2013

Re: Overloaded record fields

+1 for the -XDotPostfixApply proposal

--
View this message in context: http://haskell.1045720.n5.nabble.com/Overloaded-record-fields-tp5731998p5733121.html
Sent from the Haskell - Glasgow-haskell-users mailing list archive at Nabble.com.
Dominique Devriese | 28 Jun 17:48 2013

Re: Overloaded record fields

Simon,

I see your point.  Essentially, the original proposal keeps the
namespace for field names syntactically distinguishable from that of
functions, so that the type given to "r.foo" doesn't depend on what is
in scope.  (.foo) is always defined and it is always a function of
type "(r { foo::t }) => r -> t". With the "orthogonal proposal", it
would only be defined if there is a record with a foo field in scope,
although its definition or type does not actually depend on the
record.   One would then need to define an Unused record with a field
foo, or declare the following
  foo :: r { foo ::t} => r -> t
  foo = getFld
to essentially declare that foo should be treated as a field selector
and I'm not even sure if type inference would work for this
definition... Maybe we could provide syntax like a declaration "field
foo;" as equivalent to the latter, but I have to acknowledge that this
is a downside for the "orthogonal proposal".

Regards,
Dominique

2013/6/28 Simon Peyton-Jones <simonpj <at> microsoft.com>:
> | Folks, I'm keenly aware that GSoC has a limited timespan; and that there
> | has already been much heat generated on the records debate.
>
> I am also keenly aware of this.  I think the plan Ant outlines below makes sense; I'll work on it with Adam.
>
> I have, however, realised why I liked the dot idea.  Consider
>
>         f r b = r.foo && b
>
> With dot-notation baked in (non-orthogonally), f would get the type
>
>         f :: (r { foo::Bool }) => r -> Bool -> Bool
>
> With the orthogonal proposal, f is equivalent to
>         f r b = foo r && b
>
> Now it depends.
>
> * If there is at least one record in scope with a field "foo"
>   and no other foo's, then you get the above type
>
> * If there are no records in scope with field "foo"
>   and no other foo's, the program is rejected
>
> * If there are no records in scope with field "foo"
>   but there is a function "foo", then the usual thing happens.
>
> This raises the funny possibility that you might have to define a local type
>         data Unused = U { foo :: Int }
> simply so that there *is* at least one "foo" field in scope.
>
> I wanted to jot this point down, but I think it's a lesser evil than falling into the dot-notation swamp.
> After all, it must be vanishingly rare to write a function manipulating "foo" fields when there are no such
> records around. It's just a point to note (NB Adam: design document).
>
> Simon
>
> | -----Original Message-----
> | From: glasgow-haskell-users-bounces <at> haskell.org [mailto:glasgow-haskell-
> | users-bounces <at> haskell.org] On Behalf Of AntC
> | Sent: 27 June 2013 13:37
> | To: glasgow-haskell-users <at> haskell.org
> | Subject: Re: Overloaded record fields
> |
> | >
> | > ... the orthogonality is also an important benefit.
> | >  It could allow people like Edward and others who dislike ...
> | >  to still use ...
> | >
> |
> | Folks, I'm keenly aware that GSoC has a limited timespan; and that there
> | has already been much heat generated on the records debate.
> |
> | Perhaps we could concentrate on giving Adam a 'plan of attack', and help
> | resolving any difficulties he runs into. I suggest:
> |
> | 1. We postpone trying to use postfix dot:
> |    It's controversial.
> |    The syntax looks weird whichever way you cut it.
> |    It's sugar, whereas we'd rather get going on functionality.
> |    (This does mean I'm suggesting 'parking' Adam's/Simon's syntax, too.)
> |
> | 2. Implement class Has with method getFld, as per Plan.
> |
> | 3. Implement the Record field constraints new syntax, per Plan.
> |
> | 4. Implicitly generate Has instances for record decls, per Plan.
> |    Including generating for imported records,
> |    even if they weren't declared with the extension.
> |    (Option (2) on-the-fly.)
> |
> | 5. Implement Record update, per Plan.
> |
> | 6. Support an extension to suppress generating field selector functions.
> |    This frees the namespace.
> |    (This is -XNoMonoRecordFields in the Plan,
> |     but Simon M said he didn't like the 'Mono' in that name.)
> |    Then lenses could do stuff (via TH?) with the name.
> |
> |    [Those who've followed so far, will notice that
> |     I've not yet offered a way to select fields.
> |     Except with explicit getFld method.
> |     So this 'extension' is actually 'do nothing'.]
> |
> | 7. Implement -XPolyRecordFields, not quite per Plan.
> |    This generates a poly-record field selector function:
> |
> |        x :: r {x :: t} => r -> t    -- Has r "x" t => ...
> |        x = getFld
> |
> |     And means that H98 syntax still works:
> |
> |        x e     -- we must know e's type to pick which instance
> |
> |     But note that it must generate only one definition
> |     for the whole module, even if x is declared in multiple data types.
> |     (Or in both a declared and an imported.)
> |
> |     But not per the Plan:
> |     Do _not_ export the generated field selector functions.
> |     (If an importing module wants field selectors,
> |      it must set the extension, and generate them for imported data
> | types.
> |      Otherwise we risk name clash on the import.
> |      This effectively blocks H98-style modules
> |      from using the 'new' record selectors, I fear.)
> |     Or perhaps I mean that the importing module could choose
> |     whether to bring in the field selector function??
> |     Or perhaps we export/import-control the selector function
> |     separately to the record and field name???
> |
> |     Taking 6. and 7. together means that for the same record decl:
> |     * one importing module could access it as a lens
> |     * another could use field selector functions
> |
> | 8. (If GSoC hasn't expired yet!)
> |    Implement -XDotPostfixFuncApply as an orthogonal extension ;-).
> |
> | AntC
> |
> |
> |
> |
Carter Schonwald | 30 Jun 08:06 2013

Re: Overloaded record fields

at the risk of contributing to this bike shedding discussion, I'd like to chime in:

Let's not break compose. 

Also: why not something nearly as simple that ISN'T used already, e.g. a (.$) operator or something? (.) has enough overloading of what it can mean already!  Now we have that (.) means 3 *completely* different things when we write "A.f", "A . f", and "a.f"!  So we have an unprincipled conflation of *different* syntactic and semantic things, and no deep reason for this aside from "'cause it's what everyone else does". 

Also unaddressed is how error messages for type and syntax will need to be changed to handle any ambiguities that arise! Such engineering is beyond the scope of what's feasible in a single summer I think... would it not be better to choose an operator that *does not* create a potential conflation with extant standard infix operators and qualified names?

Consider a strawman of (.$), which also doesn't result in any syntactic ambiguity, AND reuses familiar notational conventions of "compose apply" AND resembles the conventional field accessor, AND to the best of my knowledge isn't used in any current major libraries on hackage. (A quick search with hayoo/holumbus indicates only one package on hackage, which hasn't been touched in 5+ years, uses that infix operator.) 

Let's just give the darn field application its own function! "$" reads as apply, why not ".$" for "field apply"?  Let's just make this a first-class operation that has highest precedence! 

eg 
(.$) :: r -> (r -> b) -> b    -- selector argument: (r { fieldName :: b }) => r -> b
(.$) rec fld = fld rec
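
A minimal runnable sketch of that operator with today's monomorphic selectors (the Person record is illustrative; under the Plan the selector argument would instead get the overloaded type (r { fieldName :: b }) => r -> b):

```haskell
-- field apply: reverse application at high precedence
infixl 9 .$
(.$) :: r -> (r -> b) -> b
rec .$ fld = fld rec

data Person = Person { name :: String, age :: Int }

-- Person "Ada" 36 .$ name  ==  "Ada"
-- Function application binds tighter, so no parentheses are needed.
```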

Summary:
Let's not make a widespread syntactic element MORE confusing. Please. Also, as explained by others, it will break lens, which is now a *very* widely used library in the community. There's no good reason. At all. 

I welcome an explanation that motivates the dot syntax and all the extra ghc flag hoops people are bikeshedding around that are necessitated by the syntactic tomfoolery, but I cannot imagine any good reason aside from people's feelings and the inertia of opinions already developed.


cheers; either way, I'm excited about the prospect of being able to write Symbol singletons more easily as a result of the more *important* elements of this work.

-Carter




On Fri, Jun 28, 2013 at 11:48 AM, Dominique Devriese <dominique.devriese <at> cs.kuleuven.be> wrote:
[snip: Dominique's message, Simon's reply, and the quoted plan all appear in full above]
Stephen Paul Weber | 27 Jun 16:35 2013

Re: Overloaded record fields

Somebody claiming to be Simon Peyton-Jones wrote:
>It is kind of weird that
>	f . g  means    \x. f (g x)
>but     f.g    means    g f

Anything that makes f.g mean something different from f . g just makes the 
code soup.

F . g being different from F.g is already *very* unfortunate.  The 
capital letter normally keeps it from being too confusing, but sometimes you 
want to compose data constructors, and then it's a big issue.

Making this issue worse in order to solve something else does not seem like 
a good trade-off.

Why not use a different character?  There are lots of them :)

-- 
Stephen Paul Weber,  <at> singpolyma
See <http://singpolyma.net> for how I prefer to be contacted
edition right joseph
Stephen Paul Weber | 27 Jun 16:39 2013

Re: Overloaded record fields

Somebody claiming to be Dominique Devriese wrote:
>I would prefer to have dot notation for a
>general, very tightly-binding reverse application, and the type of the record
>selector for a field f changed to "forall r t. r { f :: t } => r -> t"
>instead of
>"SomeRecordType -> t".  Such a general reverse application dot would
>allow things like "string.toUpper"

If that's even possible, then we do not need the `.` at all, and can just 
use perfectly normal function application.

If people want to create YetAnotherFunctionApplicationOperator, we can't 
stop them, but there's no reason to include one (especially one that 
overlaps with an existing, more useful operator).

-- 
Stephen Paul Weber,  <at> singpolyma
See <http://singpolyma.net> for how I prefer to be contacted
edition right joseph
AntC | 27 Jun 13:43 2013

Re: Overloaded record fields

> Adam Gundry <adam.gundry <at> strath.ac.uk> writes:
>
> I've started to document the plan on the GHC wiki:
> 
http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Plan
> 

Thank you Adam, (Simon)

I like the approach for Representation hiding. (That was something I was 
particularly trying to preserve out of H98 records.)

At first I was worried this would involve 'scoped' instances -- which 
seems a wild departure.

But I see in the 'Multiple modules' section that you're generating 
instances on-the-fly. So I agree that option (2) looks best.

As I understand it, this means that the extensions switched on in the 
importing module are all we have to worry about, not what extensions 
apply where the data type is declared.

So for backward compatibility, I can import historic Library L ( R(x) ) 
which knows nothing about the new stuff. And in my module (with the 
extension on) declare data type T with field x, and not have a clash of 
field names.

Sweet!
