Go 2 Draft Designs (googlesource.com)
909 points by conroy on Aug 28, 2018 | 473 comments


Looks good. The control flow of "handle" looks a bit confusing, but overall I definitely like the direction this is going.

The decision to go with concepts is interesting. It's more moving parts than I would have expected from a language like Go, which places a high value on minimalism. I would have expected concepts/typeclasses to be left out entirely (at least at first), with common functions like hashing, equality, etc. just defined as universals (i.e. something like "func Equal<T>(a T, b T) bool"), as OCaml does. This design would give programmers the ability to write common generic collections (trees, hash tables) and array functions like map and reduce, with minimal added complexity. I'm not saying that the decision to introduce typeclasses is bad—concepts/typeclasses may be inevitable anyway—it's just a bit surprising.
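
To make the universals idea concrete, here is a rough sketch using the draft's (type T) notation; it is hypothetical, since the actual draft attaches contracts to type parameters:

    // A generic tree needs no constraint on T at all; it only stores values.
    type Tree(type T) struct {
        value       T
        left, right *Tree(T)
    }

    // Equality as a compiler-provided universal, the way OCaml does it,
    // rather than a user-definable typeclass (hypothetical built-in):
    func Equal(type T)(a, b T) bool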


I don't like the possibility of nested handle() blocks.

This could make grasping the control flow in a function very cumbersome.

Everything else looks like a very good change to me and would draw me back into Go.


I've thought about that before, as I participated in one of the bugs where we chewed on these various matters, and I came around to the conclusion that in general, understanding what is in your current scope is just the cost of doing business. Being scope-based is the minimal cognitive load you can hope for.

I also strongly suspect that in the general case, you're not going to see more than one handler at the top of the function. It will be a rare function that has a dozen of them in there. Either A: it needs to be refactored (and while I'm sorta sympathetic to the idea that we shouldn't hand programmers too much rope to hang themselves with, that argument has the problem that you end up removing pretty much everything from the language), or B: it's the core inner complicated loop of a program and it just needs that much complexity, which still doesn't mean it's going to show up in every function.

It's not that dissimilar from one function getting nested four layers deep in try/catch statements; yes, it would be a bit confusing to disentangle what that would actually do (more so, since the catch statements are conditional but the handlers are not), but the answer is simply not to do that unless you really need to, and thwack any coworker who tries it on the wrist during code review. It is impossible to make a Turing-complete language in which I can't express something incomprehensible.


I actually like the explicit error handling idiom a lot.

(especially in Rust now that it has been made convenient with the ? operator and the failure crate, which provides backtraces and wrapping).

I would prefer an extension of the `check` syntax instead of the standalone handle blocks.

For example:

    check someFn() 
    |> if err := someFn(); err != nil { return err; }

    check someFn() handle {
      panic(err)
    } 
    |> if err := someFn(); err != nil { panic(err) }
So handle blocks are only for an individual expression.

This way the control flow is still linear and explicit (ignoring defer...) but the syntax is much more convenient.


Yeah, this is the obvious approach. Just make "check EXPR handle BLOCK" syntactic sugar for the "if err != nil" idiom. It's totally clear to the reader what's going on.


It's obvious because it's essentially just try/catch (as I'm sure you know) but misses the whole point of why they propose anything different.

The point of 'handle' is that it applies to ALL FUTURE calls to check in the function body. If we're going to consider alternatives we need to at least address the authors' perspective (even if just to deny it).

I guess the proposal's central idea is that once you see an error, error handling is going to be focused on rolling back various stages of partial progress. It's the complement to how 'defer' incrementally accumulates guaranteed execution in the face of either a return or unwind.
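
The draft's CopyFile example shows that accumulation at work; a condensed sketch, adapted rather than quoted verbatim:

    func CopyFile(src, dst string) error {
        handle err {
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }

        r := check os.Open(src)
        defer r.Close()

        w := check os.Create(dst)
        handle err {
            w.Close()
            os.Remove(dst) // roll back the partially written file
        }

        check io.Copy(w, r)
        check w.Close()
        return nil
    }

A failing check after os.Create runs the second handler and then falls through to the first, so the rollback accumulates just as defer does for cleanup.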

That said, how common this issue is, and whether it's worth optimizing for, is more up for debate. It would be nice to see a lexicon of well-handled errors; for 90% of what I do, log and unwind is sufficient.

What do I know -- I actually like Java style checked/unchecked exceptions and think they'd work well for Go as the parent said. For what I'm up to it's often sufficient to just not leak partial progress via side effects. All the partial progress just evaporates into the GC. 'finally' or 'defer' takes care of most of the .close() or .unlock() calls.

The change I'd make to Java-style exceptions if I had a time machine is to make all exception types unchecked. Instead of looking at exception types at all, enforce checks only if a function has 'throws' declared.

Calls to 'void close() throws IOException' would still need to be checked, but a function containing 'throw new IOException()' isn't forced to declare throws or catch it. This lets APIs be explicit and force due diligence, but lets user code intelligently allow unwinding until it's caught at the appropriate level. Some sugar for code to acknowledge the checked error and unwind could be nice, I suppose.

I think fattening out Go's err's to full exceptions with stacktraces is more pressing personally.


As someone who also likes them, I would point out that the Java design team actually took their inspiration from CLU, Modula-3 and C++ regarding checked exceptions.


I'm only familiar with Java and Python; do CLU or Modula-3 do much differently? I've heard C++ engineers generally hate exceptions, partially because they're unchecked.


The point being that CLU, Modula-3 and C++ had checked exceptions before Java was even a thing.

CLU never went beyond university projects, and Modula-3 research died after the labs went through several acquisitions, DEC => Compaq => HP.

As for C++, checked exceptions never gained much support, because their semantics are different from what Java adopted. Throwing an exception that isn't part of the list terminates the application and there were some other issues with them, so now as of C++17 they have been dropped from the standard after being deprecated in the previous ones.

In general there are two big communities in C++, the ones that turn everything on (RTTI, exceptions, STL, Boost...) and those that rather use C but get to use a C++ compiler instead.

So no we don't hate exceptions, it just depends on which side of the fence one is.


...you have proposed try/finally/except I guess (just put some closures in there where needed).

I think just using try/finally/except would be a step up over what has been proposed.


good idea. please add it to the feedback they're asking for :)


Yeah, the control flow of "handle" is the most questionable thing, from my standpoint. (Incidentally, I find the function-scoped nature of "defer" unfortunate too.)


handle/check looks like try/catch in reverse order with check/try limited to statements instead of code blocks. Not sure if that's an improvement or just different for the sake of being Go.


I have noticed that my Python code increasingly works like this via the optional else clause in its try/except mechanism.


I don't think the Go 2 generics proposal amounts to type classes.

From what I inferred from the proposal, contracts are structurally typed rather than nominal, such that you don't have a location at which you explicitly declare that you are implementing something. Rather, the fact that a type coincidentally has the right methods and operations becomes the implementation / type class instance.

Also, I didn't check this in a detailed way, but do contracts as proposed have the coherence property / global uniqueness of instances? I would consider that a requirement for calling a scheme for ad-hoc polymorphism on types "type classes". In other words, Idris does not have type classes, but Haskell and Rust do.
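
To make the structural point concrete, a small sketch in the draft's contract syntax (the example names are mine, and imports are elided):

    // The contract describes what the generic body needs, by example.
    contract stringer(x T) {
        var s string = x.String()
    }

    func Join(type T stringer)(xs []T, sep string) string {
        parts := make([]string, len(xs))
        for i, x := range xs {
            parts[i] = x.String()
        }
        return strings.Join(parts, sep)
    }

    // MyID never declares that it satisfies stringer; merely having the
    // right method is enough.
    type MyID int

    func (id MyID) String() string { return strconv.Itoa(int(id)) }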


I'm also surprised, manual dictionary passing ("scrap your typeclasses"-style) seems way simpler.


I dislike concepts because they seem to be a half measure, and one that is already causing pains in the design.

For example, the issues with implied constraints, infinite types, and parametric methods could be solved by using something more properly founded in type theory. Concepts are effectively refinement types[1], and a properly founded implementation from type theory would mitigate these issues.

[1] https://en.wikipedia.org/wiki/Refinement_type


What "properly founded implementation" would you suggest?

Concepts are not "effectively" refinement types.


Combative a bit are we?

I mean something properly founded in type theory, like say _refinement types_ as implemented in Liquid Haskell: https://www.microsoft.com/en-us/research/wp-content/uploads/... . Constraints of course are _not_ refinement types, but very close to a _subset_ of refinement types. Specifically, they have a deep similarity to the Refined Type Classes from the above paper.

Having used Idris, LiquidHaskell, and F* to implement provable algorithms for some distributed systems, I _don't_ think Go should go down that route. The techniques are simply not simple or clean enough for more general purpose languages. But having a type system which has a subset of those features, and which stands on firmer mathematical footing, lets you reason about stuff like which of these two programs is okay:

    // OK
    type List(type T) struct {
        elem T
        next *List(T)
    }

    // NOT OK - implies an infinite sequence of types as you follow .next pointers.
    type Infinite(type T) struct {
        next *Infinite(Infinite(T))
    }

Which the go authors see as problematic:

> "It is unclear what the algorithm is for deciding which programs to accept and which to reject."

Idris certainly has no problem choosing which ones should and should not be used.


I don't think there is any lack of clarity about your List example; that is clearly forbidden. The example that is less clear is the one that generates a million types and then stops.

A similar issue arises in C++; the C++ standard says that there is an implementation defined limit on the total depth of recursive instantiations. Perhaps Go should do something similar.


C++'s method is certainly a practical way to limit expansion on inductively defined data types. I think a better way would be to construct a type system _opposite_ of most constructed today (which focus on the ability to express problems) and instead intentionally limit the types we are able to construct to those with "easy to use" and computationally efficient properties.

Consider this sample. We want to be able to define inductive types such as `List(T)`, but not `InfiniteList(T)` or `BigArray(T)`. While `BigArray(T)` is an interesting construct (it's effectively a dependent type)*, it jumps past the "can I keep it in my head" smell test for me. As soon as I have to reason deeply about what a type constructor does, it just doesn't feel like Go to me.

So we want to be able to construct types which are inductive but can only calculate a single type. List(T) calculates one type, BigArray calculates _n_ types, and InfiniteList calculates an infinite number of types.

* In Idris one would write something like:

    data BigArray : (n : Nat) -> Type -> Type where
        Nil  : BigArray Z a
        Cons : a -> BigArray n a -> BigArray (S n) a


Although I don't think they called out refinement types as a possibility (I only skimmed near the end), I wouldn't be surprised if they quickly determined that tackling undecidability is not a goal of the Go type system. Especially since every attempt at refinement types (or even nontrivial constraints) ends up with a compiler that gets really slow, and fast compilations are one of the tenets of the language.

Go contracts as-written, or "accept a T that can do a thing and have a default implementation", is orthogonal to refinement types.

N.B. I'd really love to see refinement types break past the acceptable threshold of compilation time, since they're really neat and are easy to explain to people in terms of their usefulness.


My thoughts here are based on this section of the generics draft:

> We would like to understand better if it is feasible to allow any valid function body as a contract body.

A generic function to compute a type is _very much_ in the realm of both refinement and dependent types. The BigArray example (as mentioned in a sister thread) is just a dependent type.

> The hard part is defining precisely which generic function bodies are allowed by a given contract body. .. We are most uncertain about exactly what to allow in contract bodies, to make them as easy to read and write for users while still being sure the compiler can enforce them as limits on the implementation. That is, we are unsure about the exact algorithm to deduce the properties required for type-checking a generic function from a corresponding contract.

My proposal/goal here is to define a type system, or set of constructors which has nice, formally provable limits on what can be constructed. Those constructs should be both easy to grasp _and_ computationally simple. As you say _general_ refinement types are not suited for a compiler focused on speed; however, a subset can very well be.


Yes, I would already be more than happy with what CLU allowed for.


In some dependently typed languages they can be easily implemented via refinement types, where the “refinements” are the implementations of the typeclass requirements. Idris is a good example of this approach.

It has the advantage of making typeclasses first-class, and of enabling a lot of additional functionality without additional fundamental constructs. It is however very antithetical to what someone means when they say Go is minimalistic.


Overall, it's really great that Go is addressing its current biggest issues. I do think the argument for the check keyword instead of a ? operator like in Rust is quite weak, mainly because a big advantage of the ? operator (apart from the brevity) is that it can be used in method call chains. With the check keyword this is not possible. AFAICT the check keyword could be replaced by the ? operator in the current proposal to get this advantage, even while keeping the handle keyword.

Furthermore, the following statement about Rust error handling is simply not true:

> But Rust has no equivalent of handle: the convenience of the ? operator comes with the likely omission of proper handling.

That's because the ? operator does a bit more than the code snippet in the draft design shows:

    if result.err != nil {
        return result.err
    }
    use(result.value)

Instead it does the following:

    if result.err != nil {
        return ResultErrorType.from(result.err)
    }
    use(result.value)
This means that a very common way to handle errors in Rust is to define your own Error type that consumes other errors and adds context to them.
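
The plain-Go analog of that pattern is a wrapper error type that records context as fields; a minimal sketch with illustrative names:

    // ParseError consumes an underlying error and adds context to it.
    type ParseError struct {
        Input string
        Err   error
    }

    func (e *ParseError) Error() string {
        return fmt.Sprintf("parsing %q: %v", e.Input, e.Err)
    }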


> I do think the argument for the check keyword instead of a ? operator like in Rust is quite weak, mainly because a big advantage of the ? operator (apart from the brevity) is that it can be used in method call chains.

In Dart (as in a couple of other languages) we have an `await` keyword for asynchrony. It has the same problem you describe and it really is very painful. Code like this is not that uncommon:

    await (await (await foo).bar).baz;
It sucks. I really wish the language team had gone with something postfix. For error-handling, I wouldn't be surprised if chaining like this was even more common.


Maybe I'm crazy but I'd much prefer that as:

   const temp = await foo;
   const bar = await temp.bar;
   const baz = await bar.baz;
But then again it looks like in this example we are returning promises from getters on objects which is something I would avoid.


Yeah, the guidance is usually to hoist the subexpressions out to variables like you do here. But I encounter it often enough that it feels like we are trying to paper over a poor language syntax.

> we are returning promises from getters on objects which is something I would avoid.

Getters are very common in Dart and it's idiomatic to use them even for properties that are asynchronous. (It's not like making it a method with an extra `()` really solves anything.) It's not as common to have nested chains of asynchronous properties, but they do happen, especially in tests.


Would it be better to use statements more? After all, this is a sequence of steps and it's doing a context switch between each step. That seems better written vertically, one per line.


Deeply chained method calls are a code smell on their own. But would it really be that much better if you wrote it as something like:

    foo..bar..baz;


I'm not sure about Dart, but in my experience it's rare that you await a function which returns an object with another awaitable in it.

Never mind something that nests these 2 levels deep.


fetch() + response.text() is usually two. Add a user-defined third and you're there.


You're right, I oversimplified that. But having a "ResultErrorType" is not really context, not by itself. The interesting context would be additional fields recorded in that type.


That being the case, you should change the doc, because it's an important distinction. Speaking personally, I have found in my own Rust that the presence of the propagation operator does not, in fact, come with the "likely omission of proper handling"; to the contrary, it has allowed me to (properly) propagate errors that I can't meaningfully add context to, and to (properly) handle those errors that I can handle -- all with very readable code that doesn't involve error-prone boilerplate.


Yeah, the underhanded comment "the convenience of the ? operator comes with the likely omission of proper handling" is not only unnecessary but completely wrong in my experience. It strikes me as if the author hasn't actually ever used or investigated the language in any kind of depth, and took a wild guess that "making propagating errors easy means people don't actually deal with errors" or something.

In practice, writing `?` is no more or less automatic than `if foo, err := someFunc(bar); err != nil { return nil, err }`, but the difference is that it's immediately obvious when special error-handling logic has been added, by the simple fact of its existence.


This is something I don't enjoy about Rust's error handling. The context is added on a package level (or at least, the level at which the Error type is defined, which seems to be usually package level), and often quite far away from where the error was emitted. For all its ills, `errors.Wrap(err, "...")` puts the adding of context incredibly close to the place where the error is relevant.
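
For reference, a minimal sketch of that style using pkg/errors; the readConfig function is illustrative:

    import (
        "io/ioutil"

        "github.com/pkg/errors"
    )

    func readConfig(path string) ([]byte, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            // The context is attached right where the error occurs.
            return nil, errors.Wrap(err, "reading config")
        }
        return data, nil
    }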


x? will convert the error type for x, E1, to the error type for the calling function, E2, only when a conversion from E1 to E2 is defined. If it's not possible to do this conversion without some contextual information, then this bare conversion can't be defined and x? will not compile. In that case, callers will have to do something like

    x.context(...)?
    x.map_err(|e| ...)?
    ...etc...
so they have to provide contextual information.


You can use map_err or the failure crate's with_context to add context.


What do you think are the ills of errors.Wrap?


> But Rust has no equivalent of handle: the convenience of the ? operator comes with the likely omission of proper handling.

Additionally, Rust programs heavily use "RAII", avoiding the need for `handle`.

But there is talk of adding support for `catch`.

See https://internals.rust-lang.org/t/pre-rfc-catching-functions...


Go doesn't chain things much. I like that it doesn't.

Verbosity can be a pain. But needlessly dense code is (imho) less readable. Having a single character that changes the entire context of a statement is not fun (I have the same problem with !). Reading a set of a dozen chained functions, some with ? and some without... that's not fun for anyone.


I get the feeling that the core Go team is rather against 1-character operators. Also, the Go community doesn't tend to do a lot of call chaining from a general stylistic standpoint.


I've written a lot of Go in the last few years. I stopped chaining calls after I realized how much of a pain it is to debug.

When you need to print / log a value from the middle of your call chain you have to take the chain apart. So just write it out to start with. It's not any less efficient.


I'd note that's only an issue if you don't have a good IDE.

In IntelliJ you can breakpoint on an expression that uses chaining, hold down the alt key and then click on the expression you want to evaluate. It turns into a hyperlink and can be viewed easily.


I think needing a heavy IDE to comfortably work in a language/paradigm is a negative


Given what Lisp, Smalltalk, Mesa/Cedar, Oberon, Delphi, and VB introduced as productivity tooling, I see it as a negative to program as if my computer were still stuck in an 80x25 green phosphor terminal.


That's a silly caricature. I could retort "I see as a negative if my development environment takes 40GB of RAM and half an hour to boot up" and it would be on the same level.

Debuggers are great but they're not always practical, for instance for debugging remote targets with limited connectivity. Actually in some situations you may end up having to buy expensive licenses for closed source software to run a debugger on certain hardware (although that might not be a concern for Go).

I'm also of the opinion that firing the debugger as soon as something unexpected happens instead of taking 10 seconds to reason about your code might lead you to miss a more general problem with your architecture as you focus solely on the symptoms, but that's of course a lot more subjective and your experience may vary. Personally I generally tend to use debuggers as a last recourse when I really can't make sense of what's going on through "static" analysis.


Also, if you write a ton of log statements because that's how you debug, then when your production system blows up and you can't fire up the debugger, you have something to work on.


Or use modern instrumentation data that debuggers can work with, like IntelliTrace, JMX, Flight Recorder, JTAG, ...


I don't understand why it has to be an exclusive OR. Both printf-style logging and debuggers have their use. As I mentioned in my previous comment, deploying some tracing solutions can be a complicated and expensive process; writing to a UART is, however, very cheap. Breaking or slowing down the execution of tasks interacting with other threads and resources can be impractical, or can simply hide the bug altogether if it's race-condition related; sometimes toggling a GPIO and putting an oscilloscope on the pin beats any trace framework in terms of practicality. It will also work regardless of the programming language, development environment, software, and hardware used.

Saying that debuggers are for noobs and shouldn't be used is idiotic, but arguing that not using them is doing it wrong and being stuck in the age of 80x25 terminals isn't much better. Wisdom is using the right tool for the right job.


Amen


You don't need an IDE but it solves problems like "I don't want to chain my methods because my weak debugger can't handle it well".


Chained method calls being difficult to debug does not necessarily mean using a debugger. It can mean difficult to insert clarifying code in the middle of the chain: print statements, or additional value checks, and so on.


Chained methods also don't diff as clearly (the one liners anyway).


They do on GUI diff tools by using different colors.


It's still not as clear as just splitting across lines, plus in most tools you need to explicitly indicate you want to highlight word diffs.

But more abstractly speaking: are chained calls, from the POV of a human that didn't write it, or wrote it two months ago, more readable than a number of statements on multiple lines?


Generally speaking, Gophers don't interactively debug. There is some support for it, but it's historically been difficult and we have tended to adjust to logging everything and working out the problem from the logs.


That's OK, but finding a complex bug takes more time and more tries with logs than with an interactive debug tool.


I don't know about Go, but if I faced such an issue with Rust, I would make a trait with one method:

    fn print(self) -> Self { println!("{:?}", self); self }

You can do it nicely, so all you need is to add one `derive` to a type declaration, and then you will be able to insert .print() into the chain.

It is like print in Lisps; it might be really handy for debug printing.


I'm not sure why you would prefer to (1) create a new trait, (2) create a custom derive for that trait, (3) derive that trait for all the types you want to debug over assigning to a variable and printing. It sounds like an over-engineered solution to a non-problem to me.


Because that's a tiny amount of work that saves you from making bloated junk variables all over.


The compiler removes them, and the naming of intermediates often makes the code more understandable.


By bloat I mean in the source code.

Naming intermediates can sometimes make things clearer, but also it can make things less clear. Often there is no good name, or the good name is just as long as the code that made it.


(1), (2) and (3) need to be done once per lifetime. Or even less, if it was published on crates.io. It is not an issue. And it doesn't seem like over-engineering to me; it would be like 10 lines of library code with a clear purpose. If it were 50 lines of code, with several methods for different cases and a lot of corner cases, then it might be called over-engineering. But 10 lines of straight self-describing code with a two-line comment about suggested uses is not.

As to derive for each type, I already use `#[derive(Debug)]` for most of my types, to be able to print them with {:?} in format!/println!. Though you are right in the sense that it would be great to make such a method a part of the Debug trait. But I personally have never needed it, so maybe complicating the Debug trait would be the over-engineering you speak about.

On the other hand, I do not like variables. Variables are complications of code, and complications like this make it impossible to grasp code at first sight. There are a lot of cases where you need a variable, and it is not apparent at first sight which case it is. You need to stop and think. I do not like to think just to find that there was no reason to think. Intelligence is a constrained resource; you need to use it wisely.


In Scala, I use an implicit class to add a `tee` function. I kind of wish it were in the standard library so I wouldn't have to import it.


> Also, the Go community doesn't tend to do a lot of call chaining from a general stylistic standpoint.

There's also the small matter of multiple returns breaking the chain.


> chaining

Could you do this?

    check (check func1()).func2()

Looks ugly but maybe they can clean it up further. In fact maybe they could just introduce something that looks like ? but works like check.


It looks like check can be used in calling chains.

That is, if F takes a string and returns (string, error), and G takes no arguments and returns (string, error), then

  s := check F(check G())
either assigns a string to s, or calls the handler.


I hope this doesn't make it in. I'd hate to see the inevitable

    check (check A(check B(check C( ... ))))


I wouldn't. It's very explicitly saying, this is a series of calls that can fail.


These are all great starts, I'm glad the Go team is finally able to get some purchase around these slippery topics!

Unless implied constraints are severely limited, I don't think they're worth it. The constraints are part of the public interface of your generic type; I'm worried that we could end up with an accidental constraint, or accidentally missing a constraint and having the downstream consumer rely on something that wasn't an intended behavior.

For Go 2, I would really like to see something done with `context`. For every other 'weird' aspect of the language, I've gone through the typical stages of grief that many users go through ("that's weird/ugly" to "eh it's not that bad" to "actually I like this/it's worth it for these other effects"). But not context. From the first blog post it felt kinda gross and the feeling has only grown, which hasn't been helped by seeing it spread like an infection through libraries, polluting & doubling their API surface. (Please excuse the hyperbole.) I get it as a necessary evil in Go 1, but I really hope that it's effectively gone in Go 2.


How would a replacement for `context` look?

Also, I don't share your sentiment. I actually find Context to be a quite nice, minimal interface. Having a ctx argument in a function makes it very clear that this function is interruptible. I certainly prefer this over introducing a bunch of new keywords to express the same.


The problem with context isn't necessarily the interface, it is that it is "viral".

If you need context somewhere along a call chain, it infects more than just the place you need it — you almost always have to add it upwards (so the needed site gets the right context) and downwards (if you want to support cancellation/timeout, which is usually the point of introducing a context).

Cancellation/timeout is important, so of course there have been discussions of adding context to io.Reader and io.Writer. But there's no elegant way to retrofit them without creating new interfaces that support a context argument.

Cancellation/timeout is arguably so core to the language that it should be an implicit part of the runtime, just like goroutines are. It would be trivial for the runtime to associate a context with a goroutine, and have functions for getting the "current" context at any given time. Erlang got this right, by allowing processes to be outright killed, but it might be too late to redesign Go to allow that.

I'm ignoring the key/value system that comes with the Context interface, because I think it's less core. It certainly seems less used than the other mechanisms. For example, Kubernetes, one of the largest Go codebases, doesn't use it.
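
For illustration, a sketch of what an implicit, goroutine-associated context might look like; the runtime.GoroutineContext accessor is entirely hypothetical:

    // Hypothetical: the runtime tracks a context per goroutine, so no ctx
    // parameter has to be threaded through the call chain.
    func doWork() error {
        ctx := runtime.GoroutineContext() // hypothetical accessor
        select {
        case <-ctx.Done():
            return ctx.Err() // cancellable from the outside
        default:
        }
        // ... actual work ...
        return nil
    }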


> it infects more than just the place you need it — you almost always have to add it upwards and downwards

> Cancellation/timeout is arguably so core to the language that it should be an implicit part of the runtime

This is exactly what I mean, thank you for putting it so succinctly! I would go even further to claim that Go 1 goroutines are fundamentally incomplete due to missing this ability to control their behavior from the outside, and context is an attempt to paper over the missing functionality.

The key-value store built into them is also an attempt to paper over a large gap in the language: the absence of GLS (goroutine-local storage :P).

That context combines two ugly coverups of major deficiencies, and that using it means doubling almost all APIs and infecting the whole ecosystem, is why I dislike it so much.


Agreed. Thread-local storage has rightly been criticized as a bad idea, but Go's contexts are arguably worse. It'd be easier if Go had something like Scala's implicits.


As a type theorist and programming language developer, I’ll admit that’s a fairly reasonable design for generics.

I’m still a bit disappointed by the restrictions: “contracts” (structural typeclasses?) are specified in a strange “declaration follows use” style when they could be declared much like interfaces; there’s no “higher-level abstraction”—specifically, higher-kinded polymorphism (for more code reuse) and higher-rank quantification (for more information hiding and safer APIs); and methods can’t take type parameters, which I’d need to think about, but I’m fairly certain implies that you can’t even safely encode higher-rank and existential quantification, which you can in various other OOP languages like C#.

Some of these restrictions can be lifted in the future, but my intuition is that some features are going to be harder to add later than considering them up front. I am happy that they’re not including variance now, but I feel like it’ll be much-requested and then complicate the whole system for little benefit when it’s finally added.


They are truly looking for feedback on these proposals. If you have thoughts on how the design could change to be more compatible with future additions, consider doing a blog post write-up and submitting it to them.


Thanks for the comments. We tried for some time to declare contracts like interfaces, but there are a lot of operations in Go that cannot be expressed in interfaces, such as operators, use in the range statement, etc. It didn't seem that we could omit those, so we tried to design interface-like approaches for how to describe them. The complexity steadily increased, so we bailed out to what we have now. We don't know whether this is OK. We'll need more experience working with it.

I'm not sure that Go is a language in which higher-level abstraction is appropriate. After all, one of the goals of the language is simplicity, even if it means in some cases having to write more code. There are people right here in this discussion arguing that contracts add too much complexity; higher-order polymorphism would face even more pushback.


Yeah, I’m not sure higher-level abstraction is right for Go either—like I said, I do think the tradeoffs in the generics design are ultimately reasonable, given the design constraints.

I guess it’s those very constraints that get to me. This is going to sound harsh, but I believe the goal of “simplicity” is already very nearly a lost cause for Go.

The language is large and lacks any underlying core theory, with many features baked in or special-cased in the name of simplicity that would have been obviated by using more general constructs in the first place; the language is not simple, it’s easy for a certain subset of programmers. Adding convenience features here and there and punting on anything that seems complex by making it an intrinsic has led to a situation where it’s increasingly difficult to add new features, leading to more ad-hoc solutions that are Go-specific. This leads to the same problem as languages like C++: by using Go, developers must memorise details that are not transferable to other languages—it’s artificial complexity.

A major source of this complexity is inconsistency. There are few rules, but many exceptions—when the opposite is more desirable. There are tuple assignments but no tuples. There are generic types but no generics (yet). There are protocols for allocation, length, capacity, copying, iteration, and operators for built-in containers, but no way to implement them for user-defined types—the very problem you describe with contracts. Adding a package dependency is dead-easy, but managing third-party dependency versions is ignored. Error handling is repetitive and error-prone, which could be solved with sum types and tools for abstracting over them, but the draft design for error handling just adds yet another syntactic special case (check/handle) without solving the semantic problem. Failing to reserve space in the grammar and semantics for features like generics up-front leads to further proliferation of special cases.

Virtually all of the design decisions are very conservative—there’s relatively little about the language that wasn’t available in programming language research 40 years ago.

I don’t mind writing a bit of extra code when in exchange I get more of something like performance or safety wrt memory, types, race conditions, side effects, and so on. But Go’s lack of expressiveness, on the whole, means it doesn’t provide commensurate value for greater volume of code. But it could! None of these issues is insurmountable, and the quality of the implementation and toolchain is an excellent starting point for a great language. That’s why I care about it and want to see it grow—it has the immense opportunity to help usher in a new era of programming by bringing new(er) ideas to the mainstream.


Old and busted:

    func printSum(a, b string) error {
        x, err := strconv.Atoi(a)
        if err != nil {
            return err
        }
        y, err := strconv.Atoi(b)
        if err != nil {
            return err
        }
        fmt.Println("result:", x + y)
        return nil
    }
New hotness:

    func printSum(a, b string) error {
        x := check strconv.Atoi(a)
        y := check strconv.Atoi(b)
        fmt.Println("result:", x + y)
        return nil
    }


Why the handler function over something like Rust's propagation operator? [0]

I see that it adds significant flexibility, but at the cost of verbosity and a new keyword that largely conflicts with one of Go's niches right now, web servers. And that flexibility seems likely to go unused in my experience. I would be shocked to see anything other than `return err` inside that block.

Sure errors are values and all that, and maybe I'm just working on the wrong codebases or following worst practices. But generally I see three approaches to errors in Go code, in order of frequency:

1. Blindly propagate down the stack as seen here. "It's not my problem- er, I mean, calling code will have a better idea of what to do here!"

2. Handle a specific type of error and silently recover, which does not typically need a construct like this in the first place.

3. Silently swallow errors, driving future maintainers nuts.

This seems to only really help #1, but `return thingThatCanError()?.OtherThing()?` can handle that just as well.

[0]: https://doc.rust-lang.org/book/second-edition/ch09-02-recove...


I believe this is explicitly discussed in the doc: https://go.googlesource.com/proposal/+/master/design/go2draf...


Their discussion of Rust's error handling is woefully incomplete. They seem to think that the only things that exist are the ? operator, which will return your error early, and a match statement, which is verbose. It misses some important aspects of Rust's error handling.

1. There is a typed conversion between one type of error and another, which can keep context.

2. There are traits which can extend error types.

3. Backtraces can optionally be kept on (at a performance hit during errors, of course).

I have a lot of code written that looks like this, using the excellent "failure" crate, which optionally allows adding context to any error:

    file.seek(SeekFrom::Start(offset))
        .context(format!("failed to seek to {} in file", offset))?;

You get explicit errors, minimal code, optional error context. It's pretty perfect.


I got turned off of failure when a reddit user reported that it slowed down his program by 300x and the author of failure's response was that the original poster had misused failure.

https://www.reddit.com/r/rust/comments/7te8si/personal_exper...


They said the OP misused it largely because the documentation was insufficient, which is a reasonable response. There are different types of error object, and the one used was a poor fit for common, expected errors.

What would you have wanted them to do instead?


That's unfortunate, as I think you're reading into it to see an attitude that is not there. In fact that response makes me more interested in using it.

The author starts by identifying the use of the wrong type. They then immediately blame it on their own documentation in the very next sentence.


That snippet looks like it'd be very helpful, I'm going to make a note to check out that library. Thanks!


"Their discussion of Rust's error handling is woefully incomplete."

These are clearly not intended to be dissertations on the topic.

However, I don't think anything you said changes their discussion, actually.


> However, I don't think anything you said changes their discussion, actually.

How does it not? It seems to me that OP revealed an alternative method of error handling (which is very cool I might add), exactly in response to Google's discussion on Rust's error handling.

If the discussion doesn't address this, then it seems reasonable to say it's incomplete, as its omission makes it appear they're not debating Rust at its strongest.

edit: paper->discussion


Because none of the things said actually refute any of the points in the Google discussion? At all?

I also definitely don't feel a failure to go off and survey every random library that exists for Rust makes a discussion "woefully incomplete", or that it isn't debating Rust at its strongest. The same treatment is given to all languages, including Rust, Go, etc.

The discussion is also about language features, and it discusses the language features.


In your new hotness, I believe that handler is already implicit, so you can just leave it off.

What I am still curious about, and can't see whether or not is possible from the current draft (possibly I've just skimmed too hard), is whether printSum can become:

    func printSum(a, b string) error {
        fmt.Println("result:",
            check strconv.Atoi(a) +
            check strconv.Atoi(b))
        return nil
    }
Possibly with an additional paren around the check.

(I'm just using this as an example; in this case I probably would assign x & y on separate lines since in this case this saves no lines, but there are other cases where I've wanted to be able to call a function that has an error but avoid the 4-line dance to unpack it.)


It can, and this is mentioned in the error chain design doc[1].

[1]: https://go.googlesource.com/proposal/+/master/design/go2draf...


Yes, what you write should work with the current draft.


Cool. It isn't something I'd want to abuse, but there are places in my code where it would clean things up nicely, too.


That should be possible because check is an expression


Serious question: why is the error return from Println not handled?


In general, catching errors when there is simply nothing useful to do with them isn't that useful. If the world is so broken that printing isn't working, it probably isn't the print failing that you care about. Even the errcheck linter, which I use in all my serious code, doesn't make you check the error from basic print statements like that. (Fprint it makes you check. But not just Print.)


Then why does Print return an error code at all?


So if the printing doesn't work, you can see what's going on?


But all the comments in response to the question are saying there's no point in checking the return code of print, because if it's broken, you can't print the error code you got anyway (or it's never broken in practice).

I think there's a deeper point the question asker was making here - it's easy to say "everyone should just check error codes" but in practice it never works out that way. People always ignore/swallow some of them, or propagate them in such a way that they lose detail e.g. a detailed error code from a subsystem is mapped to a generic error code higher up the abstraction stack. Print failing is a classic example of something most wouldn't care to check, but perhaps there's a good reason for it. With exceptions in the common case, where nobody checked, an exception would propagate up the stack until it was either caught and handled in a generic way ("subsystem X failed") or simply caught by the runtime.


"But all the comments in response to the question are saying there's no point in checking the return code of print"

That's not quite what I said, and the difference is important. I said there's no point catching an error when you have nothing useful to do with it.

It so happens that the basic "print to standard out" is the extreme outlier of a function where you have nothing very useful to do with it in the general case. What are you going to do, print an error? Are you going to log it? Probably not, because if you have a log in the first place you probably aren't seriously using printing.

There are exceptions where you might care, such as a command-line program where printing is the major case you care about, but it so happens that the printing functions often have nothing useful to do with errors.

It is not the only case. I often don't log things terribly strongly during the time a socket is being shut down at some point in a protocol where both sides have agreed to shut the connection down, for instance. Or if I have a socket connection and the underlying connection fails, I don't need an individual log message for every concrete manifestation of that fact necessarily. If logging fails... uhh... what exactly is one supposed to do with that information in the general case?

So even though I use errcheck like I said on all my code, there are places where I throw the error away on purpose. It's clearly indicated in the code, though, rather than being implicit, since errcheck requires a "_ = erroringThing()" line of code. Except for printing to stdout, because as a special case, that's just too common a case to worry about, even when one is careful enough to use a linter for everything else.


There's no reason to care about print being broken, unless there's a reason to care about print being broken. Both cases need to be supported. The developer ignoring the error value if they don't care is perfectly rational.


I'm having a hard time making it fail. Even with Printf, in the output it tells you what you did wrong instead of returning an error.

In fact, the only _print function I could get to fail was Fprintf by sending an already closed file. I suppose if it was somehow not allowed to write to stdout the other functions might fail. But I'm not sure.

    _, err := fmt.Printf("Hello %d", "world")
    fmt.Println(err)
    
    >> Hello %!d(string=world)
    >> <nil>


Yes, the various fmt.Print functions only return an error if Write on the underlying Writer returns an error.


Here you go: https://play.golang.org/p/0vGdejrjm5H

To reproduce the key lines here:

    os.Stdout.Close()
    _, err := fmt.Println("foo")
You can also close stdout from outside the program, use ptrace to cause the write call to fail, or a number of other things.


There are two main reasons:

1. Many developers learn the print funcs before they really start paying attention to errors as return values, and so never think to check what they return.

2. If println is failing, things are going very wrong. It would probably be more ergonomic to just have it panic in most cases, because if it errors out then what are you honestly going to do to recover gracefully? Just let your surrounding infrastructure handle it, whether that's restarting the process, starting a new container somewhere else on the cluster, or whatever.


If stdout is unavailable, should that really be fatal? Perhaps it should be considered an unusual equivalent to piping to /dev/null.


Which is the current practice of ignoring errors from `fmt.Println()`.


In this particular example there is no good reason; it'd be better to end with:

    _, err := fmt.Println("result:", x + y)
    return err


Only seems useful when you have a function and want every error handled the exact same way, and don't have any requirement to add context the way https://github.com/pkg/errors does.


Read the design draft (https://go.googlesource.com/proposal/+/master/design/go2draf...). The issues you raise are addressed.


Not quite. You'd need to write new `handle err { ... }` blocks for every new context, like how they have one in the for loop. In effect you're just moving the handling of the error away from the place it occurred, which is not optimal. As someone who writes Go every day, I don't see this strategy effectively cutting down on boilerplate code or adding clarity.


While this is true, more often than not, in lower-level functions you want errors to bubble up. For example, in most Node.js code, the first line in each callback propagates the error to its own callback, or likewise rethrows in a promise/async chain.

Beyond this, there's no reason you can't add additional context and/or wrap the original error before returning your own error.

With this additional syntax and the addition of generics, I'm far more likely to take up go. Up to this point, I'd only toyed with it a little, because it just felt so cumbersome to actually get stuff done compared to node. Beyond that, dealing with web-ui, if I'm going to take on a cognitive disconnect for a back-end, I want as little friction as possible.


I'm in the same boat as you. Error handling and code generation (to support the lack of generics) in Go always left a poor impression on me. I ended up going back to Node for work, and F# or TypeScript for personal projects.

These two changes alone are enough to get me to pick Go back up.


My point is that using this to wrap context around errors is even more verbose than what's available/common today, since you either settle for one wrap for the whole function or you have to define multiple `handle` blocks.


Does the "old" way no longer work?


To be frank, I was for a long time in the camp that generics is a much-needed feature of Go. Then this post happened: https://news.ycombinator.com/item?id=17294548. The author made a proof-of-concept language "fo" on top of Go with generics support. I was immediately thrilled. Credit to the author, it was a great effort.

But then, after seeing the examples, e.g. https://github.com/albrow/fo/tree/master/examples/, I could see the picture of what the current codebase would become after it's introduced. Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

A few months back, I tried to raise a PR to Docker. It was a big codebase and I was new to Golang, but I was up and writing code in an hour. Compared to a half-as-useful Java codebase where I have to carry a dictionary first to learn 200 new English words for complicated abstractions and spend a month getting "assimilated", this was absolute bliss. But yes, inside the codebase, having to write a copy-pasted function to filter an array was definitely annoying. One cannot have both, I guess?

I believe Golang has two different kinds of productivity. Productivity within the project goes up quite a lot with generics. And then, everyone tends to immediately treat every problem as "let's create a mini language to solve this problem". The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

I don't know whether generics is a good idea anymore. If there was an option to undo my vote in the survey...


All I want is typesafe efficient higher order list operations like map, filter, reduce, flatmap, zip, takeUntil, etc. I don't care if Go puts them into the stdlib and only allows these operators to use special internal generics to operate. If Go had a robust higher order std library like this, I would stop asking for generics. Again, must be typesafe and zero overhead.
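
For what it's worth, such operations need no contract at all under the draft, since they only move values of the element type around; a sketch in the draft's syntax:

    // Map and Filter place no constraints on their type parameters.
    func Map(type T, U)(s []T, f func(T) U) []U {
        r := make([]U, 0, len(s))
        for _, v := range s {
            r = append(r, f(v))
        }
        return r
    }

    func Filter(type T)(s []T, keep func(T) bool) []T {
        var r []T
        for _, v := range s {
            if keep(v) {
                r = append(r, v)
            }
        }
        return r
    }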


One of the main motivations for adding generics is that we could then put typesafe algorithms and data structures in the standard library.


And make the standard library unmagical? One of the things I've always disliked about Go is how stdlib provides magic facilities unavailable to user code.


I agree, but there's not much, and it's usually for decent reasons:

* the "time" package hooks into the Go scheduler.

* the "reflect" package needs type information out of the runtime.

* the "net" package hooks into the Go scheduler a bit. But way less than it used to. I think you might even be able to do this all yourself outside of std nowadays.

What else?

Our rough plan going forward is to actually move a bunch of the stdlib elsewhere but ship copies/caches of tagged versions with Go releases. So maybe "encoding/json" is really an alias for "golang.org/std/encoding/json" and Go 1.13.0 ships with version 1.13.0 of golang.org/std/encoding/json. But if you need a fix, feature, or optimization sooner than 6 months when 1.14.0 comes out, you can update earlier.


My guess is that quotemstr meant builtins: append, copy, the range "protocol", etc.


There are many examples of things builtins can do which user-code can't:

1. The constructs like "x := <-ch" and "x, ok := <-ch" (and map indexing and so on) are only available to builtin types.

It's impossible to write a thing like "x, ok := <-ch" where it either panics or doesn't depending on whether the caller wanted two return values or one.

2. generic types, like 'map' and 'chan'.

3. 'len' can't be implemented for a user type, e.g. "len(myCustomWorkQueue)" doesn't work.

4. range working without going through weird hoops (like returning a channel and keeping a goroutine around to feed it)

5. Reserving large blocks of memory up front (e.g. as 'make' can do for slices and maps) in an efficient manner (e.g. make(myCustomWorkQueue, 0, 20) doesn't work)
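
A small illustration of points 3 and 5, with a hypothetical WorkQueue type:

    type WorkQueue struct {
        items []func()
    }

    // len(q) and make(WorkQueue, 0, 20) do not compile for a user-defined
    // type; ordinary methods and constructors have to stand in for them.
    func (q *WorkQueue) Len() int { return len(q.items) }

    func NewWorkQueue(capacity int) *WorkQueue {
        return &WorkQueue{items: make([]func(), 0, capacity)}
    }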


You're talking about the language. He's talking about the standard library.


We're both talking about both. The core of my complaint is that the language privileges parts of the standard library.


> privileges parts of the standard library

It really doesn't. Maps and channels are not part of the stdlib. They're part of the language, but not the library.

Similarly, range is part of the language, not the stdlib.

If the stdlib was privileged, "container/*" in it would be generic, but it's not.


All right, but my point stands. Maps and channels should be part of stdlib, and they should rely on primitives available to user code too.


Maps and channels are not part of the standard library. How you know that is, you can replace the entire standard library, using none of it, and still be programming in Go with maps and channels.


Parent wrote "should be" not "are".


Such primitives would bring a lot of complexity which is something Go tried to avoid.

Consider that make, map, chan, etc. are just language features/specs, like int64, string, func, and main.


Why should string be fundamental?


Nobody said it "should" be. It simply is, in Go, the language we're discussing. If you nerdsnipe an HN thread with an argument that there's no difference between a language, its runtime, and its standard library, people are going to pile on to point out that's not true.


Could, sure. Should? I think you are asserting facts that are not in evidence.


I'm a bit confused, and I'd like to understand your position better. I can see why arrays, structs/tuples, or [tagged] unions are usually primitives – they sort of express different fundamental layouts of data in memory, so they need support from the runtime/compiler. In the case of Go, channels also make sense as a primitive – as I understand it, `select` requires scheduler support.

But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood. So, especially in a "systems language", I would sort of expect them to just be libraries in the stdlib, not language features[1]. But if you think otherwise (or see a gap in my reasoning) I'd love to hear why!

[1] Except maybe some syntactic sugar for list/dict/set literals.


"Should" implies some normative assignment of value.

> But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood. So, especially in a "systems language", I would sort of expect them to just be libraries in the stdlib, not language features[1].

Maybe! But if those higher order structures are closer to language features than libraries -- and, specifically to this conversation, if they leverage parts of the runtime that aren't available to mere mortals -- is it a strictly worse outcome? I don't know. Some evidence might suggest yes. But I think it's at best arguable.


I think I understand, thanks for explaining.


> But higher level data structures like lists, sets, maps etc. are all implemented using those primitives under the hood

In Go you don't have generics as a primitive with which to build those things.

As a result, you don't have the building blocks for anyone to build a usable data structure unless it's baked into the core of the language.


I'm aware of that, and I read the generics proposal linked. I commented here because I wanted to respond to this:

> Should [be just libraries]? I think you are asserting facts that are not in evidence.

I tried to describe why I feel that go's collections should be libraries and not builtins, and hopefully understand why GP seemed like they were questioning that idea.


What's magical about the stdlib?


Maybe they meant the language, which has some generic functions like append or delete - instead of those functions being attached to a data type or accepting any type, they only act on built-in types. I doubt any of that would change though given the stated goal of remaining compatible in most cases.


There are a handful of packages (time and some stuff in os for example) that are tied into the runtime and can't be rewritten outside the standard library. It's not a huge amount, and some of it can probably be fixed in the future.


What if generics were just implemented internally in Go? IMO, there are good use cases, like generic numbers (int32, int64, etc.), list processing, etc. There's already precedent for this with map[A]B, make(), etc.


What changed? Why is the Go team finally considering this now, when it has been requested for these exact reasons by many credible developers since Go's beginning?

EDIT: Yes, something has changed. They always said "no, but maybe later" and now it's "ok how about now". That is a big change and I am just asking what's the reason for it.


A year ago we said it was time to move toward Go 2: https://blog.golang.org/toward-go2 . This is another step in that direction.


Then I'm a year late in asking why the Go team is now deciding to say yes to something they said no to for over half a decade.


The blog post explains that.


Nothing changed. It wasn't off the table to begin with. See: https://golang.org/doc/faq#generics


The question of the parent boils down to "what put it on the menu".

So, yeah, it was always "on the table" in some hazy, we'll-see-later way, but not like this.


I think a good faith reading of the circumstance would lead one to conclude that the passage of time, and other, higher-priority and lower-cost-of-implementation issues being addressed, have put these issues on the menu.


Likely because they have now figured out implementations of these features which they feel work well with the rest of the language design.


Because now is later than when they said later?


The reason for saying "no, but maybe later" originally was that it was always a desired piece of functionality. I believe the word "inevitable" was even tossed around. The problem was that they didn't like the existing implementations of generics in other languages and also didn't know how they would improve it. I haven't had time to look at the details, so I'm not sure what they actually did, but I think they basically came up with a strategy that they are mostly happy with.

You might ask, why not pick a solution earlier and then improve it over time? However, with a popular language it's not that easy. People are writing lots of code. If you start early, you can paint yourself into a corner and end up with a system that you really can't improve very much, because improving it would introduce incompatible changes. Note the extremely painful (and drawn out!) incompatible changes with Perl 6 and Python 3. If you make the wrong choice early, you might still take over a decade to find an opportunity to replace it. And since the state of the art in language design moves very slowly, it took a long time before they could see something that they felt they wouldn't regret choosing.


I highly recommend reading https://go.googlesource.com/proposal/+/master/design/go2draf...

This is part of the material linked to from this thread. It answers all your questions. You will see that some of the people who are directly replying to you are very much involved in this effort.

Edit: I tried editing the previous message but HN actually automatically made a reply instead....


Adding generics is a major change (e.g. it would restructure the stdlib), and so it would only belong in a major-version bump of the language.


Though the authors have clarified that the "2" in "Go 2" is just for marketing purposes, and that the actual release would still be "Go 1.x", and shouldn't break any existing code (so it won't restructure the stdlib in any major way, unless they want to do a huge mass-deprecation).


So Go is following the exact same path as Java.


Inevitably.

Also inevitably, Java will get value types (structs) at some point.

Just like with Go Generics, people will complain the feature is bolted on. After some time it is just one of the many quirks of a complex language.

I think it was Stroustrup who said: there are two kinds of programming languages, the ones people hate and the ones nobody uses.


And Java had quite a few good examples to take on like Modula-3 and Eiffel, but alas they wanted something for average Joe, kind of sounds familiar.


The difference is that Go could have learned from Java.


I think because it would break backward compatibility in v1, and not doing that was the Most Important Thing in all 1.xx releases.

v2 is allowed to Break Stuff. So obviously they're going to consider the thing that people have shouted for most in Go... generics and error handling (and package management but they found a way of doing that earlier).


> What changed? Why is the Go team finally considering this now, when it has been requested for these exact reasons by many credible developers since Go's beginning?

Simply, Go has probably accumulated all the developers it is going to accumulate without something drastically changing.

At this point, I can't think of any programmers around me who now want to learn Go.


I basically agree. I still think, while we're at it, making Generics a general feature of the language would probably be easier than adding all kinds of special cases.

But yeah, if Go just added a generic map/filter mechanism (Python's list comprehensions (in turn inherited from Haskell) look very nice), that would cover about 75% of the use cases I want generics for. Add a generic mechanism to use the range operator with user-defined types, and we're at about 95%.
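Under the draft design, such a mechanism could be an ordinary library function. A minimal sketch in the draft's type-parameter syntax (explicitly not final):

    // Map needs no contract, since it calls no methods on T or U.
    func Map(type T, U)(xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }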


> Python's list comprehensions (in turn inherited from Haskell) look very nice

Do they? I find them hard to read and hard to chain. I tend to prefer basic map, filter, reduce methods


Mmmh, that is a good point. I never used nested list comprehensions, to be honest, and I have not used Python in a couple of years. For non-nested cases, they are very nice, though.

OTOH, I never used nested map/remove-if constructs in Lisp, because that, too, can get rather annoying to read.


I agree with this sentiment completely. Of course, if the only way to get these wonderful list operations is generics, then I guess I want generics, too.


You might be interested in my project: https://github.com/lukechampine/ply

The idea is that when people say "generics", they usually just mean map/filter/reduce. So ply just bakes those into the language directly instead of adding new generic programming facilities.


Those are the things that I don't want to see in Go, honestly.


I'm curious: why not?


Generally, chaining things in the style that encourages that makes it harder to debug the code, harder to step through it in my head or on paper, and harder to compile to performant code. I'd rather see loops.


On the other hand, you're less likely to need to debug those things because they've been implemented once, correctly. A manually-written filter, map, etc. will pretty much always have a higher probability of containing a bug. And in languages that have these features, I've never found it difficult to debug. What issues have you had in the past?


I've almost never had issues with manually written filters/maps -- at least, not with the iteration itself. On the other hand, having to untangle stack traces of closures of closures that have been passed through closures does slow me down when the function that's being mapped is buggy.


I guess I've never run into issues where a map or filter slowed me down for debugging, most languages will pretty easily show you where the error occurred. And if you're `map`ping and `filter`ing with pure functions, debugging is pretty simple because you just need the input that went into the buggy function. A good compiler will also optimize the closures into something you'd write by hand in Go. Overall I'm glad to see Go get these nice tools for more easily working with collections.


I've just never run into places where that style improved readability, performance, or debuggability. Outside of the Haskell world, I've usually found myself preferring loops for ease of following the code.


Thanks for your answer. I find it very interesting how different people come to different conclusions about these basic approaches to writing programs. Imho, as always, there's no one true answer and understanding where everyone comes from is very important in understanding this divergence of thought.

In my work I don't think I could live without these higher-level abstractions anymore. They enable me to write clear and concise code that helps me keep accidental complexity under control, so that I can focus on the essence of what I'm trying to codify. Performance, on the other hand, is not a primary concern to me, given it stays above some acceptable threshold.

I appreciate how different tasks would require different approaches. I'm interested in which direction Go goes (no pun intended).


At that point why not just add generics...


Lisp tends to build abstractions, too, and it also does that via macros, which are even harder to mentally parse unless you have a habit.

I'd say that to attain simplicity and clarity, you have to limit the use of abstraction. OTOH there's no way to avoid abstraction at all; programming is all about abstraction. If the means of abstraction are not powerful enough, copy-paste programming and boilerplate proliferate, making code less maintainable and harder to understand.


In my opinion, implementing these higher order functions with the help of one powerful abstraction (generics) is much simpler than having to deal with a bunch of special cased weaker abstractions (magic map, magic filter, magic reduce, magic...).


So you want a Lisp with a different syntax, right?


That's about right. And gradual types, multi methods, generics and tight integration with C++: [0].

Dylan was too long winded for my taste, as are most modern attempts.

[0] https://github.com/codr4life/snabl


You say it like it's a bad thing.


I was just curious, because the functionality you want is in nearly every list processor, no matter what syntax is used. And if it isn't, you macro it away, right?


Some abstractions have stood the test of time - map, reduce, sort, those are worth having and various other collection types would be nice to see.

I agree the introduction of generics is a big worry not because of what good programmers will do with it, but because of what people more enamoured with pretty abstractions than code that does work will do with it. I don’t particularly want to live inside someone else’s creation for weeks just to understand it. The existing culture of no nonsense and minimal abstraction should help police that though.

I’m not that keen on the syntax of contracts either, but we’ll see what comes out at the end, that’s not so important - the idea of contracts looks like a good one in theory at least, but it feels like they could have used interfaces instead somehow (I’m probably missing something there).

The errors proposal is interesting as it looks like try/catch at first glance but importantly doesn’t leak that error across function/lib boundaries as exceptions do, so it’s a natural extension of the existing approach.


> I’m not that keen on the syntax of contracts either, but we’ll see what comes out at the end, that’s not so important - the idea of contracts looks like a good one in theory at least, but it feels like they could have used interfaces instead somehow

I'm not sure I understand the need for contracts, really. If a function with a type argument is compiled, the caller has to use specific types to call it, so the compiler can determine whether said type has the required methods used in the generic function. Therefore the compiler knows everything it needs to fail with a compile-time error without any extra ugly "contract" syntax. What am I missing? Are some types only resolved at runtime?


That's exactly what C++ does for its templates. The problem is that if you want to use a generic function, you need to look into its implementation to see what requirements it places on its type. And if you fail to meet a requirement, you'll very likely get several hundred lines of cryptic compiler error messages; C++ compilers have been trying very hard to make the error messages readable, but it's still an awful experience. Also, an implementer now needs to be extra careful when they want to update the function, since a single seemingly innocuous change may break downstream users. Of course this can be prevented by carefully written exhaustive tests, but then what's the point of not having a contract?

So the lessons from a number of PL implementations show that this kind of information needs to be explicitly encoded in code, and that's the whole point of having a "contract", "type class", "concept" or whatever else it's called. Compilers may be able to infer the technical details of a type, but at least for now they're not sophisticated enough to infer the original intention of the writer.


Sorry, still don't get it. Extra contract code needs to be written just to prevent "cryptic" compile-time errors? It's just a type mismatch/error like many others to me. Cryptic error messages can be made less confusing by improving the output.

Since Go doesn't have standalone object files / binary libraries, it's not going to produce confusing link-time errors for this.


The problem is you don't know if the error is in the type parameter or in the generic class. Take this very simple C++ program as an example; it has 3 potential sources of error, and it is not clear from the contract of the foo function which one is the actual error.

    template <typename T>
    void foo(T bar)
    {
        bar.hardToSpell();        // 1. Error here?
    }

    struct Bar {
        void ahrdotspell();       // 2. Error here? (misspelling intended)
    };

    int main() {
        Bar buzz;
        foo(buzz);                // 3. Error here?
    }

In this particular case it's easy to deduce that something is misspelled and that error 2 is the real error, but the compiler is probably going to tell you it is error 1. And once you get into more complex programs, the problem might be that the Bar class is simply not supported by the generic function, or that the generic class is implemented incorrectly.

If you instead could add an interface constraint on T (like for example C#/Java/Typescript allows) it would be clear from the contract that this interface defines all the required functions on T if Bar doesn't implement those it is unambiguous that the implementation of the interface is not there.
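The Go draft's contracts aim at exactly that property. A hedged sketch in the draft syntax (the names are illustrative and the syntax is not final):

    // The contract states the requirement once, publicly.
    contract hasHardToSpell(t T) {
        t.hardToSpell()
    }

    // foo's body is checked against the contract, and each call site is
    // checked against it too, so the three error sites above can each be
    // diagnosed precisely.
    func foo(type T hasHardToSpell)(bar T) {
        bar.hardToSpell()
    }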


> If you instead could add an interface constraint on T (like for example C#/Java/Typescript allows) it would be clear from the contract that this interface defines all the required functions on

Then you just have a 4th possible spot for an error. I don't see an essential difference between treating the function definition as the reference and having a contract (where typos are possible as well), except that the latter is cumbersome and superfluous.


Assuming the interface definition is correct, it gives you one degree less of freedom. Error #1 is always evaluated vs the interface. Error #2 is also always evaluated vs the interface. Error #3 is evaluated vs the type constraint.

So each of the 3 potential errors can always be pinpointed exactly, as opposed to my example where it could be any 3 (or all 3 simultaneously) of them because "T having a hardToSpell-method" is not part of any public interface description, it's something deep in the implementation of the foo-function (in my example it's not very deep but in real code it usually is).


Typically you would have the template instantiated for multiple classes (…or there wouldn't be a need for templates). Each class will have its own definition of the functions used on the template argument. If the function definition is the reference, which one?


In this case the mismatch is between a type argument and the way that a type parameter is used, so it's not a type mismatch/error, it's a meta-type mismatch error. You are suggesting that the meta-type be inferred from the function (and any functions that it calls) rather than being explicitly stated. That is doable--C++ does it--but it means that the user has to understand the requirements of an implicitly inferred meta-type. The Go design draft suggests instead that the user be required to understand an explicitly stated contract.


As I mentioned, it's not just for avoiding cryptic error messages, though I am not sure we're even referring to the same thing, as this is completely different from linker errors. Believe me, improving the output doesn't help that much. Hundreds of C++ compiler developers have been trying that for tens of years. None of them has succeeded to the level of making a novice understand what's going on when they feed sort(list) to the compiler.

To reiterate my point, it serves as a "contract" between users and the code author. Without it, any user of a generic function will implicitly depend on its implementation, which creates tight coupling. Cryptic error messages and unintentional breakage are just undesired side effects of not having this clear boundary. So the interface of generic functions needs to be explained to its users anyway; then why not make it understandable to the compiler and prevent users from making such mistakes?


Go does have binary packages, it just happens most devs always build from source.


I NEED generics to do any quantitative code. I evaluated Go for a project, and completely ruled it out because of this. You're unable to write one algorithm for float32s and float64s, scalars vs vectors vs matrices, etc.

There might be a bias in the commentary because the membership of the Go community has self selected to just be those for whom the current constraints are feasible.


If you need a generic algorithm where the possible types are very finite (e.g. just float32 and float64, or just NxM matrices for N,M=1,...,4) you might be able to `go generate` the concrete implementations from a text template.
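A minimal sketch of that approach, assuming a hypothetical gen.go driven by text/template (the Sum function, file name and package name are all illustrative):

    // gen.go: writes concrete Sum implementations for a fixed set of types.
    // Invoked via a directive such as: //go:generate go run gen.go
    package main

    import (
        "os"
        "text/template"
    )

    var tmpl = template.Must(template.New("sum").Parse(`
    func Sum{{.Name}}(xs []{{.Type}}) (total {{.Type}}) {
        for _, x := range xs {
            total += x
        }
        return
    }
    `))

    func main() {
        f, err := os.Create("sum_gen.go")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        f.WriteString("package mathutil\n")
        for _, t := range []struct{ Name, Type string }{
            {"Float32", "float32"},
            {"Float64", "float64"},
        } {
            if err := tmpl.Execute(f, t); err != nil {
                panic(err)
            }
        }
    }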


Text generation is fragile, error prone, does not play well with other type-based tooling, is difficult to upgrade, and adds unnecessary busywork to the programmer's day.

When Java's limited, half-crippled version of generics showed up, source code generation died, pretty much overnight. Why doesn't anyone do it? Because it's a terrible experience compared to having types.


You won't be able to do numeric stuff with these proposals either. You would still be missing operator overloading and integer type parameters (C++'s non-type template parameters), which are the basis of much generic numeric code.


True about no operator overloading, but that's just syntactical sugar for functions.

I believe you can get integer parameter types by using something like the `convertible` contract seen here https://go.googlesource.com/proposal/+/master/design/go2draf... (unless I'm mistaking what you are saying)


I believe you are mistaken. C++ allows things like std::array<int, 3>, where 3 is a parameter of integer type (a non-type template parameter).


And after that evaluation what did you use?


Fortran does ok without generics.


So does C but you don't expect the same things from a language from the 70's. I don't think C or Fortran would be very popular if they were released today.


I won't argue with that, but I'd just like to say that the newest Fortran versions actually make a quite modern language.

Most of its quirks are related with compatibility with older versions. If a new version without that handicap was released today, it may look nicer than most people think.


Sure, I'm not denying that generics are useful sometimes. It's just that people sometimes exaggerate how big of a problem it is not to have them.



> In Fortran 90, user-defined procedures can be placed in generic interface blocks. This allows the procedures to be referenced using the generic name of the block.

I don't know Fortran, but I'm fairly sure that you just googled "fortran generics" and pasted the first result without checking it :)


Fortran got simplified generics in 2003, I pasted the wrong link by mistake.

https://cug.org/5-publications/proceedings_attendee_lists/20...

https://software.intel.com/en-us/node/692037


Fair enough, I didn't know about that feature. However, people were using Fortran successfully for writing huge numeric libraries waaaaayyy before that.


I was successfully using Z80, 68000 and 80x86 Assembly 30+ years ago as well.


Can't you just use different names, algF32, algF64, really? Is this the end of the world? No wonder we have npm packages isEven and isOdd; sometimes you need to get your hands dirty.


You can't legislate against poor taste. Community norms help but there's always exceptions.

It's interesting to consider Python which - generally - has strong community norms against excessive abstractions and too much magic.

There are some notable exceptions. I believe Twisted was regarded as "UnPythonic" and I've heard complaints about asyncio. Interestingly both are related to the same problem space.

I'm sure there are other examples of inscrutable pyramids of abstraction but Python also has many examples of good, readable abstractions.

Maybe it's not abstraction that's the problem but poor judgement? What makes one language sprout AbstractSingletonProxyFactoryBeans and another not? I'm not sure it correlates directly to "ability to create abstractions" - I think there's a human element.


> What makes one language sprout AbstractSingletonProxyFactoryBeans and another not?

Missing features that enable higher levels of composition and abstraction.

Such as, for example, generics.

If you look at the usual punching bag in these discussions, Spring, you will see that as Java added features, the implementations became simpler.

Java's design hypothesis was: if we take away these features, median programmers won't blow their feet off accidentally. But it turns out that to get things done you need well-above-median programmers to provide tools, APIs, frameworks, leverage and that in turn, they will need to find a way to do it. In the case of Spring that's been marked by a slow retreat from heroics into progressively simpler approaches as and when Java itself allowed for them.

Spring 5, for instance, sets Java 8 as the minimum, and they went through the whole framework refactoring, cleaning up, cutting down and so on wherever that baseline allowed them to.

Disclosure: I work for Pivotal, which sponsors Spring, though on unrelated software. But I've spent a lot of time with Spring folks. They are smart.


> What makes one language sprout AbstractSingletonProxyFactoryBeans and another not?

In this case, it's the lack or availability of alternatives. You can't take away every feature that isn't the strictest possible OOP and expect people to create expressive and simple code. Remember that those things were created by the best professionals around, and won the selection for best practices and standard open source tools.

People do not write singletons in Python because it has actual static variables. Any moderately complex Python code is full of factories, but people don't even notice because they are actually just a function declaration around the same code that would be there anyway. There is a lot of "writing to the interface" that Java people do with abstract classes, but it comes nearly automatically with duck typing. And for the "bean" part people just use dictionaries.

Go people seem to be taking the wrong lesson from those things.


> I believe Golang has two different kinds of productivity. Productivity within the project goes up quite a lot with generics. And then, everyone tends to immediately treat every problem as "let's create a mini language to solve this problem". The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

This is a very valuable observation, relevant outside of Golang, which underlines the pitfalls of abstractions. Design patterns aim to standardize abstractions so that other readers of the same code can easily understand.

I used to capture the same related principles in what I called "IKEA code" - simple, elegant code that is cheap to write, easy to understand and use and not meant to last forever. I.e. code and design that is meant to be replaced. I think the concept of "local productivity vs global productivity" is very related and completes the "IKEA code" principles. Rather than suggest one is better than the other, I think it's important to understand the motivations when making a choice.


>The second type of productivity - which is getting more and more important - is across projects and codebases. And that gets thrown out of the window with generics.

Generics increase type safety (no more interface{} casts) and reduce code duplication. I don't see how this could possibly throw productivity "out the window". If anything, the effect will be the opposite. The tradeoff with generics has always been programmer time versus compile time and/or execution time. Assuming that programmers don't unnecessarily complicate things for themselves, programmer time will be significantly lessened with the introduction of generics. And there's nothing preventing programmers from burying themselves in a sea of abstraction even in a language without generics.


> One cannot have both I guess?

Not to be snarky, but...

The whole point of LISP-style macros is that you very much can have both. Ultimately, generics will compile down, either via type erasure or a generated specialization, to a "copy-pasted" function (in concept, at least). With a macro, you have ultimate control of how this is achieved.

The problem, of course, is that the people who care how generics are implemented all think their way is the best (leading to a proliferation of slightly-different-and-incompatible implementations), and everyone else just wants something that works fast and reliably.

Bad code can be written in any language. Good code can be written in most languages (yes, even C++...the jury is still out on PHP). Every feature that grants power comes at the cost of potential complexity. What ultimately determines success or failure is the community and best practices that develop around these features.

In this regard, I think Go will be "o.k." even with generics.


> But then, after seeing the examples, e.g. https://github.com/albrow/fo/tree/master/examples/, I could see the picture of what the current codebase would become after its introduced. Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning

> Compared to a half-as-useful Java codebase where I have to carry a dictionary first to learn 200 new english words for complicated abstractions and spend a month to get "assimilated", this was absolute bliss

You're making a good argument for standard generics and their machinery included as part of the standard library as opposed to bolted on.

Java incrementally baked generics in relatively late in its life along with a culture of a lot of ad hoc machinery to cover up the lack of useful lambdas.

The wisdom of Go putting these in the stdlib is you get their benefit universally without having to wade through a learning process with a uniquely flawed and ad hoc hierarchical ontology for each project.

Having such universal tooling is a sign that your language is powerful enough to lift out and abstract real problems. That's a good thing! Don't mistake Java's legacy issues for issues Golang might have. It probably won't end up with the same problems because it starts with a better feature set.


I am not quite fond of the idea of having C++/Java Style Generics (I simply don't like the syntax that comes with it), but I see that the current system is broken.

From my current perspective I think fixing some things related to Go Interfaces and Container types (Maps/Slices) should make them even more usable than they are now and making them pretty much the thing people want to use Generics for. Currently, it looks like hell to build something with the empty interface and using a lib which is built around the empty interface doesn't look nice either.

Using Go maps is sometimes awkward too, as they require some weird attribute 'comparable' which some types possess and others don't. Why??? I mean, why can't the Go authors use their own language construct and use an interface like:

  type Comparable interface {
    Comparable(other interface{}) bool
  }
Similarly, I find it awkward that we still can't use interface slices[1]. I mean, I understand it isn't simple to implement, but it's certainly not impossible.

One thing on my todo list is writing an experience report with better examples. Its priority just got up-voted ;-)

[1]: https://github.com/golang/go/wiki/InterfaceSlice
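To make the interface-slice point concrete, this is today's behavior (valid Go 1; the element-by-element copy is the only option):

    vals := []int{1, 2, 3}
    // var anys []interface{} = vals // compile error: cannot use vals (type []int)
    anys := make([]interface{}, len(vals))
    for i, v := range vals {
        anys[i] = v
    }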


Counterpoint: coming from a Java background, I find totally freely-definable interfaces for some features used by core container types to be a little worrysome.

In Java, I can't count the number of times I've debugged either correctness bugs or 'merely' horrible performance bugs based on incorrect implementations of Equals or Hashcode methods. Implementing a non-commutative Equals method is a colossal footgun, and incredibly irritating to debug, and yet somehow it happens time and again.

And there are other, subtler, arguably "correct" and yet extremely unwise things one can do given Java-like maps implemented on the Equals and Hashcode interface methods. For example, one can define two types T1 and T2 which both claim to be (commutatively) equal, and define some interface TX which is implemented by T1 and T2. Using a Map<TX,Object> and inserting members by T1 and T2 now has an interesting form of semi-defined behavior: when inserting "equal" keys, whichever type was used first will be persisted. Imagine doing this in some concurrent map, and imagine that T1 is 8 bytes and T2 is 400 bytes; enjoy debugging those occasional OOMs.

I've been very happy not to ever have this particular problem when defining the key types in my Go maps.


Java does equals and hashCode horribly. Granted, they did not have Java to learn from but it's what we have.

Kotlin and Scala solve the problem pretty decently. Equals and hashCode are automatically generated for data/case classes. I've never encountered a messed up equals in Scala and only very rarely felt the need to override the generated equals, and then you make a very conscious choice to do it.

C# does it even better, there is a IEquatable<T> interface that defines the `bool Equals(T other)` method. It partially solves the irritating universal equality.

If I were designing Java today, Object would have no methods, there would be a stdlib interface defining `equals(other: T)`, and data classes would automatically implement that interface.


> If I would design Java today Object would have no methods and there would be a stdlib interface defining `equals(other: T)`, and data classes would automatically implement that interface

This polymorphic equality is the default in some functional languages, like OCaml. It's fine for simple cases, but you really do need overloadable equality. Equality as a method in OO is a problem because of the asymmetry though, and often an even bigger problem because such languages typically permit pervasive null.


There definitely are cases where that's not enough, but it's a 95-99% solution.

The problems you mentioned:

Classes could change equals to accept subclasses. Then subclasses could not alter equality themselves.

T implementing `equals(other: U)` for some U solves a lot of other use-cases, but could make equality asymmetrical.

Reference equality would still exist, but would not be the default, e.g. Java's `equals` is Scala's `==`, and Java's `==` is Scala's `eq` operator (which you almost never use)



I recently had to write my own equals and hashcode functions in a Java project (a language relatively new to me) just so I could store my custom object as a key in a HashMap. The experience of doing this seemed so silly and unnecessarily low-level to me that I almost abandoned the whole codebase and looked for another language to start over with.


Well, there is another problem with my suggestion: at the moment, maps have no problem using custom structs, as they derive the comparable attribute from the contained fields. If you used a normal interface, you would have to define how to name that kind of relation (all fields of X implement interface Y, so X implements Y? Sounds pretty similar to how the equality operator works in Go). Otherwise, you would have to implement Comparable for every custom type, which would in fact be even worse than the current state (it is bad enough we have to do that for sort.Interface twice per week ;-).

So I completely agree, that having to implement basic attributes like 'equals' or 'comparable' is no good idea. Instead I would favor some logic which assumes some natural situation and lets you implement a custom logic if you want to do it.

Nevertheless, I think those problems just show some of the inconsistencies Go comes with. Don't get me wrong, I love Go and I like the way the Go devs work. It is just that I am worried that we will end up with Go 2.0 adding Generics without solving the issues within the otherwise pretty nice Interface system.


Go could automatically create hidden Test methods for methods used by core types like maps. If Equals and Hashcode have to be correct, then these tests could ensure it.


> C++/Java Style Generics

C++ and Java generics are nothing alike, so there's no such thing as C++/Java-style generics.


In fact, I am not completely sure about the state of generics in C++. AFAIK the term 'generics' was coined in the context of Java, but C++ has a similar concept they call templates. Maybe you could elaborate on the exact difference between those two concepts?


Templates can do a lot more than generics can, but come with corresponding downsides.

Templates are basically a form of copy/paste style substitution with fancy bits. When you instantiate std::vector<char> in C++, the compiler creates an entirely new class with "char" substituted in everywhere the type variable appears. This class has nothing in common with std::vector<MyStruct> - there's no common superclass, the compiled code may or may not be shared depending on low level details of your toolchain and the types involved, etc. That in turn means if you want to write a function that works for any std::vector you must also use templates, and your function will also be substituted, and your final binary will also have many similar-but-different copies of the compiled code, etc. However because of how C++ templates work you can achieve some pretty astonishing things with them. In particular you're getting code compiled and tuned to the exact data type it was instantiated with, so you can go pretty fast with certain data layouts.

Java generics seem superficially similar but in fact are different. In (erased) generics, a List<Foo> and List<Bar> are exactly the same class at runtime. They're both just List and both just contain objects. The compiled code is the same, you can cast between them if you know what you're doing, etc. Likewise if you write a generic function that works for any list, there's only one copy of that function. Generics have different tradeoffs: they're less powerful (you can't write a regex library with them for instance), and they don't auto-tune themselves to particular data types ... at least not until Project Valhalla comes along ... but they're backwards compatible with pre-generic libraries at a binary level and they avoid an explosion of compiled code bloat, which is a common problem with templates.


Thanks for clarifying, sounds like pretty deep stuff though.


Can you point me to a resource with details about this? I'm relearning C++ and a comparative evaluation of a language feature like that would be very useful.


Check Tour of C++, 2nd edition is updated for C++20.


I think they were just referring to the angle-bracket syntax.


I see that the current system is broken.

Just where is the line between "tedious but workable" and "broken?"


> "Processor", "Context", "Box", "Request"

There are a lot of interfaces in the existing library that one could consider quite abstract as well. Reader? Writer? error? Another thing worth noting is that many/most C++ codebases manage to avoid the type of generic proliferation you're talking about.


I'm not sure C++ is a great example of a language that avoids generics abuse. The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers.

Java is actually a much better example of where generics are used rarely and well. Containers are parameterised by exactly one thing - the type of what they contain. You don't parameterise them with different allocation strategies or comparators. Type inference means generics are often automatic, but still catch mistakes for you. And whilst nothing stops people making bad abstractions in any language, the standard library sets a pretty good example by and large.

That said, Kotlin's generics are better still and probably the best implementation I know of.


> The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers

The main reasons template errors suck so much in C++ are 1) lack of a proper mechanism to constrain instantiations (hopefully addressed by Concepts) and 2) The general mess that is function overloading and automatic type conversions.

For example, the classic "template hell" error message people usually encounter when starting to use C++ is failing to properly overload operator<< (example here[1]). That error message could easily be as simple as "No `operator<<` for class C", but instead the compiler needs to inform the user of every possible overload match, every possible type conversions that could cause a match, and templates that could have matched but can't be instantiated along with a reason for the failure. Of course it doesn't help that std::ostream is absurdly abstract, but it's really just making a bad problem worse rather than the fundamental root of the problem

[1] https://godbolt.org/z/XveitP


I think it's 100% ok for a big utility library like boost or std to have complicated internals to support a variety of use cases. The go standard library likewise is big and complicated (although perhaps not quite as). My main concern is that user code can be written simply. And to be clear, I'm not holding up C++ as an example of a great language. In fact, I'm kind of looking at it as a worst-case scenario. I'm only saying that most C++ programs I've been exposed to are relatively free of creeping genericity.


> The reason STL errors are so convoluted is largely due to the unintuitive and complicated way they use generics to support obscure features like changing the allocator used by containers.

The reason STL errors are so convoluted is because the C++ template language is Turing complete.


no, it's because every function is implemented with a dozen helper functions, so compiler error traces have to show all the intermediate, generally return-only, helper functions


True if we are speaking about C++11 and earlier.

With C++14 onwards, it is a matter of developers actually making use of static_assert, constexpr if and enable_if.

Not as comfortable as just using concepts, but they get the job done.


> no, it's because every function is implemented with a dozen helper function

Which are needed and/or allowed for reuse because the template language is Turing complete. More restrictive type-level languages have better error messages because the compiler either understands what you're intending to do or you're limited in the things you can express, full stop.


I'd say generics aren't just about avoiding copy-pasting. It's also added type safety. At this point the closest thing is using an interface as a function parameter. You get a dynamically typed value (albeit limited to types that implement the interface).


That, and efficiency, as the generated code can be inlined in many cases.


The closest thing is auto-generating code, not using interface{}.


Yes, but there's still a depressingly large amount of code in the wild that just casts interface{} back and forth.
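What that looks like in practice (an illustrative sketch, valid Go 1):

    package main

    import "fmt"

    // The compiler accepts any element type, so mistakes surface at runtime.
    func Sum(xs []interface{}) int {
        total := 0
        for _, x := range xs {
            total += x.(int) // panics if an element isn't an int
        }
        return total
    }

    func main() {
        vals := []interface{}{1, 2, "three"} // compiles fine
        fmt.Println(Sum(vals))               // panics when it reaches "three"
    }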


Common with ORMs. The gorm API is horrible - it's all interfaces.

The only reasonable ORM library I've seen is sqlboiler, which uses code generation.

https://github.com/volatiletech/sqlboiler


I don't necessarily mean interface{}. This:

type Thing interface { ... }

is better, but Thing is still a dynamic type.


I think it's hard to get a feel for how it would look in practice from those short examples.

Library code might be a little ugly and abstract, but as an end-developer, you usually don't need to worry about it. The code you yourself would see and write is going to have a lot more meaning and be more understandable.

I've rarely had to do anything fancy with generics in my own projects. Where I've seen it be most useful is when I want to offer a flexible public API -- in which case, the choice of writing a bit of generic library code is much less smelly than copy pasta.


> Library code might be a little ugly and abstract, but as an end-developer, you usually don't need to worry about it.

But the functions you provide for the code you're writing are a library for someone else.


> Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

This is a completely false dichotomy. Just because a feature can be misused doesn't mean it's not legitimately useful. Generics make the language easier for the user, but are harder for the implementors. Most language designers / implementors recognize that this trade-off is worthwhile, and therefore nearly every other popular typed language has them. The Go implementors are the only ones against doing it, because, well, they would have to implement it, and they're concerned that it will make the codebase too complex for them to maintain. (Their sympathizers are against it too, which is still a mystery to me.)


> Just because a feature can be misused doesn't mean it's not legitimately useful.

The point isn't that it can't be useful.

In the real world, the amount of code I'm forced to understand that uses overly complicated abstractions is the issue. I'm glad one language finally took the opposite direction.


> In the real world, the amount of code I'm forced to understand that uses overly complicated abstractions is the issue.

This may say more about the specific codebases you work in than it does the software world at large. I spend most of my time in a couple of languages that all have generics and don't experience the problem you describe.


There are good and bad abstractions. In my experience good ones are usually grounded in math. Used the way they are intended generics/parametric polymorphism should reduce your cognitive burden. Eg. having a single, well defined and tested abstraction like Set that is type safe and its operators can work on any type it can contain is surely a good and worthwhile abstraction. The same goes for parametrically polymorphic functions that need not know about the concrete types they operate on and with the proposed contracts they can optionally require that its argument types possess a given constraint/capability like equality.
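For concreteness, a hedged sketch of such a Set under the draft design; the contract body below follows the draft's example-code style but is my assumption, and none of this syntax is final:

    // T must support ==, expressed as example code per the draft's approach.
    contract comparable(x T) {
        x == x
    }

    type Set(type T comparable) struct {
        m map[T]struct{}
    }

    func NewSet(type T comparable)() *Set(T) {
        return &Set(T){m: make(map[T]struct{})}
    }

    func (s *Set(T)) Add(v T) { s.m[v] = struct{}{} }

    func (s *Set(T)) Contains(v T) bool {
        _, ok := s.m[v]
        return ok
    }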


I'd argue that Generics make the language easier for the writer, not the reader which is what he was getting at. What I like about Go is the language is small, so at the expense of some verbosity, that in most cases you write once, its very easy to understand. One thing I dislike about some Java & Scala codebases is the "manual" unwinding you have to do to debug some functionality.


This is FUD.

Parametrically polymorphic code is not hard to read at all - in fact, it is easier to reason about, since it is universally quantified over all types.


Most people don't even know what parametric polymorphism is, let alone have ever used a language that supports it. I find it's basically impossible to explain just how powerful of a tool it is to someone who lacks experience using it.


"append","delete","copy" in Go is parametric polymorphism hardcoded in the compiler. append can be called with many different array types. There is nothing complicated with that.

go can iterate over arrays irregardless of the element type the same way with the same construct using "range", therefore "range" is polymorphic.

The real debate here is should the compiler be "privileged" or should there be API in userland to implement these behavior.
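Concretely, in Go 1 today:

    func builtinsArePolymorphic() {
        // append works for every slice type only because the compiler
        // special-cases it; user code cannot declare such a function.
        ints := append([]int{1, 2}, 3)
        words := append([]string{"a"}, "b")

        // range is likewise polymorphic over arrays, slices, maps and channels.
        for _, v := range ints {
            _ = v
        }
        for _, w := range words {
            _ = w
        }
    }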


> Today it exists in Standard ML, OCaml, F#, Ada, Haskell, Mercury, Visual Prolog, Scala, Julia, and others.

> Java, C#, Visual Basic .NET and Delphi have each introduced "generics" for parametric polymorphism.

https://en.wikipedia.org/wiki/Parametric_polymorphism


A bunch of low-reach languages have true parametric polymorphism. A bunch of common languages have polymorphism that may be parametric or may be bounded by runtime type information checks, and you can't tell the difference from the types.

Neither of those contradict my assertion that most programmers have never done work in a language where the type system can make guarantees that they can trust.


This is a very pedantic point


The only programmers who haven't used a language with parametric polymorphism are those who have only programmed in C, Go, or dynamic languages at this point.

Outside of Java syntax noise, I really have my doubts that Java's parametric polymorphism is too complicated for the average programmer to understand.

  public static <A> A identity(A a) { return a; }


Java does not have parametric polymorphism because instanceof exists in the language. That reduces what you can say about something from the type alone to "maybe parametric, maybe bounded by runtime checks that aren't visible in the type."

How many languages provide you a way to guarantee at the type level that the behavior of a function cannot be influenced by instantiation of type variables? I don't believe that to be true of anything that runs in the JVM or in .NET, for instance, as runtime type information is always available. C++ has dynamic_cast. What common language actually has parametricity expressed in the type system?


That’s true..but Haskell has bottom and Scala has bottom + instanceOf checks and JVM reflection as well. And they clearly have usable parametric polymorphism despite things being “broken.”

Saying Java doesn’t have parametric polymorphism because you can cheat is like saying Haskell isn’t pure because of unsafePerformIO.


Agreed. There is always a point where it's the developer's responsibility to write good code, even in languages where it's very difficult to shoot yourself in the foot.


> Lots of abstract classes such as "Processor", "Context", "Box", "Request" and so on, with no real meaning.

A programmer can write Java in any language if they're willing to work hard enough at it.


Actually Java adopted those ideas from 90's C++ code style.


Complete bullshit. Generics have zero relationship or co-dependency of any kind with the abstractions slash design patterns you mention. These are entirely possible without generics, and generics do not in any way encourage them.


There are less common design patterns particular to generics. The presence of generics does tend to encourage those. I have seen subsystems where coders used C++ templates where they could have used polymorphism instead.


... I'm sorry, that's a bold claim when a large chunk of these patterns are literally used to allow you to operate over a variety of different objects in a loosely coupled fashion.


My gut feeling is that a significant part of the people who voted for them don't really intend to use Go that much.

They might try to please different crowds with Go 2. At the end of the day, they might just lose everyone.


Poor design has nothing to do with generics. If anything they make API cleaner and safer to work with.


Having generics isn't related to excessive abstractions. One is a language feature and the other is program architecture. It's definitely not an exclusive trade-off.

I'm sure there are plenty of Go apps with bad architecture, where a tiny dose of generics would greatly help.


Having generics isn't related to excessive abstractions.

I've seen the situation in C++ where templates led to excessive templates, simply because they were "neat." As a result, I found myself debugging code that had no source code whatsoever.


A bunch of pioneering projects like Docker, Kubernetes, etcd, Prometheus, etc. have been built with Go, and I don't believe the maintainers suffered from the lack of generics and error handlers. On the other hand, as a new Go programmer, I can really dive into their code bases and understand each line of code without thinking twice. That comes from simplicity.

But these possibly nested error handlers and generics will lead developers to think more than twice while writing or reading a code base. These ideas don't belong to the Go era, but to Java, C++, etc., which Go didn't want to be like.

Someone here mentioned internal support of generics for slices, maps and other primitives. I think that could be the best solution for generics in Go. For error handling, I think a more elegant way could be found.

Please do not rush.


> A bunch of pioneer project like docker, kubernetes, etcd, prometheus etc. has been built with go and I don't believe that the maintainers suffered lack of generics and error handlers.

Here's an experience report from k8s: https://medium.com/@arschles/go-experience-report-generics-i...

They've been using a code generator as a work around: https://github.com/google/gvisor/tree/master/tools/go_generi...


I think code generators are a better path than having the compiler generate the code for you. It's nice to look at and see what is about to be built, instead of waiting for it to be built and then trying to debug with breakpoints.


It sounds like nested error handling will be the abnormal case and the "default" error handling will often be normal. That is, all you will notice in common use is that the boilerplate "if" is replaced with a "check".

It also sounds like the nesting does not escape the function itself. It always either returns error, or doesn't.


Well, how about multiple return values? How will the "check" keyword handle return values other than the error?

  func ReadFile(path string) (string, error) {
    b, err := ioutil.ReadFile(path)
    s := string(b)
    return s, err
  }
How would this function be written under the drafts? I am really confused.


The draft doesn't include any actual examples with multiple returns on a function that uses check internally, but the behavior is fully specified:

> A check returns from the enclosing function by returning the result of invoking the handler chain with the error value.

> A return statement in a handler causes the enclosing function to return immediately with the given return values. A return without values is only allowed if the enclosing function has no results or uses named results. In the latter case, the function returns with the current values of those results.

> A handler chain function takes an argument of type error and has the same result signature as the function for which it is defined.

> If the enclosing function has result parameters, it is a compile-time error if the handler chain for any check is not guaranteed to execute a return statement.

So any handler defined in ReadFile would be required to return (string, error) or not include a return at all (such that other handlers got their chance to interact with the error).


check returns all non-error return values, so the function would be written as:

  func ReadFile(path string) (string, error) {
    b := check ioutil.ReadFile(path)
    return string(b), nil
  }


A handle block has access to all variables that are in scope. So if you have partial results that you want to return if a check fails, you can return them in the handle block, provided they are already declared/assigned. The easier way to handle this is using named return variables, which are always in scope. If no partial results have been assigned, they will be returned as zero values; otherwise the partial results can be set.
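A hedged sketch of that pattern in the draft syntax (not final; CountLines and its body are illustrative):

    func CountLines(path string) (n int, err error) {
        handle err {
            return // returns the current (partial) n together with the error
        }
        f := check os.Open(path)
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            n++
        }
        check s.Err() // on failure, n keeps the lines counted so far
        return n, nil
    }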


What you're thinking of is the default handler (because if you wrote a handler, it would be your code returning things).

The default handler will populate the last element of the return list with the error, and return.

The other elements get zeroes, if not named, or they are unchanged from how they have already been set, if named.


I really thought they were going to allow their idealism to win over pragmatism. Versioned modules, generics and exception handlers all going in is really going to put the defenders of them being missing in a tough spot.


Lack of generics was never part of "Go idealism". It has remained an open question for 9 years.

It was added to the FAQ in 2010: https://web.archive.org/web/20101123010932/http://golang.org...

Ian has been thinking about the problem for the past 7+ years, since even before Go1 was out, when the language was frozen.

I think this is about the seventh generics design doc that Ian has written. (This time he wrote it with Robert.) They keep getting better. Hopefully this one holds up.


> Lack of generics was never part of "Go idealism". It has remained an open question for 9 years.

You should read "Less Is Exponentially More" by Pike. He made it clear back then that he didn't see a need for generics, and Go was explicitly written to be a replacement for C++ without all the unnecessary features. Not having generics was, at least initially, a core part of justifying the existence of Go, just like not having any form of exception handling (they had to add panic/recover quite early).


You realize you are trying to explain Go's philosophy to one of the Go developers, right?


Are you claiming that Pike wasn't a core developer and driving force behind Go? Because my point is about the "never" being false when early statements quite clearly said something else.


> Are you claiming that [...]

No, I'm saying that the person you are replying to has no need of you reminding him of Pike's words, given that he actually knows Pike, unlike you.


So, just so I follow: Go guy claims historically never A.

I point to Pike's blog, which clearly said A.

So given these two points, I should implicitly discard evidence (in the form of a blog entry by one of the Go creators) in favour of a statement that is clearly at odds with it? His connection to Pike or Go does not change the text on the blog; unless you disavow Pike's role in the creation of Go, his distaste for generics and his clear classification of them as unnecessary remain facts.


Agreed! Go is the only language I’ve found that combines static typing, lightning fast compilation, great concurrency support, a lightweight runtime, a community that emphasizes simplicity over purity, a strong ecosystem, and nice tooling.

However, ever churning dependency management, cumbersome error handling and the lack of generics were all huge drawbacks, that kept me reaching for Scala and Python for backends of personal projects. When Go 2 comes out, I think it’ll be my clear #1. It’ll bring the language to the level of power I like, while maintaining Go’s unique set of strengths (listed above).


I've always been of the opinion that generics would be nice to have. But people were acting ridiculous about their exclusion. You can write plenty of useful programs without them.


You can write any useful program in assembly, but compile time is dirt cheap so it's far more reasonable to automate anything that doesn't require thought and creativity. E.g., a large set of identical function bodies with similar but different types, where if I write them by hand the best I can do is not make any mistakes.


I'm generally for generics, but there are tradeoffs. As soon as they debut, expect a flood of overly abstracted libraries. I'm actually pretty apprehensive because I could easily see the costs outweighing the gains.


Agreed that “excess abstraction entering the Go ecosystem” is probably the main danger, but even still, I’m not too worried. The Go community is all about simplicity, and I don’t see that changing. It’s not going to suddenly become the Enterprise Java community. Libraries with crazy abstract APIs won’t become popular.

I expect it’ll be similar to Python, which supports generics, very powerful reflection, many programming paradigms, etc., but the popular libraries tend to have simple, easy to understand APIs, because that’s just the culture of the community.


This last point isn’t true at all. Lots of popular libraries do weird things. Lots of popular library functions do completely different things based on the types and combinations of provided parameters (Pandas being one of the most egregious offenders, but the stdlib does this all over too), or they overload operators to make a clever, magical DSL (e.g., SQLAlchemy), or they use metaclasses to programmatically generate APIs (e.g., the AWS SDK, SQLAlchemy, etc.). I take notice because I use Python professionally and spend so much more time poring over docs and source code trying to grok APIs than I do with Go.


Yeah, the scientific Python world is a different story, but I’d say most Python libraries are simple to use, including the AWS SDK. Agreed that SQLAlchemy is much tougher to use if you opt in to their DSL, but to be fair that’s true of every SQL lib with a DSL, in every language I’ve used.

Basically, I find that though it’s possible to do all sorts of crazy shit in Python, most popular libraries have simple APIs, because simplicity is part of Python’s culture. There are certainly exceptions though.


Agreed that the scientific community are the worst offenders, but I don't think it's just them. There are examples all over, but I don't think this conversation will benefit from their enumeration. I will point out that you're letting SQLAlchemy off the hook because their DSL is "opt in" and it's par for the course for DSLs; however, their DSL is not really "opt in", it's the standard way for building queries. Further, "DSLs will be DSLs" is kind of missing the point, which is that the Go community rejects these kinds of hacky DSLs while the Python community embraces them.

At the end of the day, I notice a significant difference in wanton magic between the Go and Python communities, and it's big enough to make me prefer Go over Python when given the choice (despite that I've used Python far longer and more regularly).


Those overly abstracted libraries already exist, they are just using interface {} everywhere or go generate.


Maybe, although I suspect that these libraries are but a shadow of what will follow the release of generics. Hopefully I’m wrong! :)


Of course we can; after all, we were writing code before generics became mainstream around the mid-2000s.

The point was creating a language without them after they had become a common language feature across all major static languages.


C doesn't have generics. Objective-C didn't get generics until 2015, which is well after Go was released. Not all major static languages have generics and not all major static languages had generics when Go was being planned.

Not that Go ever planned to not have generics. It was always simply a question of how to implement them in a non-ridiculous way. They were warned early on by members of the Java team to tread carefully and not make the same mistakes they did.

The team has crafted many generics proposals over the years, but they all fell short one way or another.


C has had basic support for generics since 2011 (the _Generic selection added in C11).

Objective-C merges C's type system with the dynamism of Smalltalk. Generics are only meant as a mechanism to ease interop with Swift.

Dynamic languages naturally don't require generics due to their semantics.

The team always handwaved away the remarks many of us made, as they now themselves recognise:

"In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."


To be fair, we had macros, and they covered many of the uses of generics people are arguing for.


Yeah, I can't count the times I heard some variation of "It's literally impossible to ship software without generics" from /r/programming alone. The OP's idealism/pragmatism comment is particularly misguided, since Go consistently favors pragmatism over idealism and in fact that was why Go has held off for so long on adding generics (idealism would have added them from day 1, consequences be damned).


Edit: To clarify, the handle() system does indeed resemble an exception propagation system and is also the part I like least.

Just "check" would be preferable, with an implicit handler of "if err != nil { return err }"

Although I still would prefer something leaner like the "?" operator from Rust.


One of the key characteristics of exception handling is the ability to set a handler in one stack frame, but receive exceptions from deeper in the stack, causing error-handling-at-a-distance. Anything that does not have that is not an exception system.

Proof: Any exception system that lacked the ability to catch exceptions from lower frames would be called a defective exception system.

This check keyword does not implement an exception handling system.


This is the default behavior of the new check system. If you don't handle it at the bottom of the stack it is propagated up the stack to the caller automatically. Not sure how easy this will be to debug since it doesn't capture a stack trace at the bottom and it won't be obvious where the error started.


But isn’t it only to the immediate caller using the explicitly declared error, not all the way up the stack?

That’s very different behaviour, and along with using the error interface it avoids the main objections to exceptions (unexpected throws handled far from the error location, unexpected error types).

This is just a small change to make it easier to annotate and easier to return without comment.


Assuming the caller uses check as well it will continue on. Seems the same as checked exceptions to me.


That’s an invalid assumption, as most functions will not use check (looking at my code there are just a few places I’d use it). It’s an addition to current error handling, not a replacement.


Ok, we won't call them exception handlers. Instead, call them "if anything goes wrong in this function do this" handlers.


Those are very different things, whatever you want to call them. check/handle doesn't involve unwinding the stack, and doesn't default to crashing your program if you forget to add a handle block.


and doesn't default to crashing your program if you forget to add a handle block

Is that a good thing?


Is that a good thing?

Yes, because instead your program won’t compile if you forget to handle an error (except in the case of a function which only returns an error, which you can still ignore by mistake; I would like to see that fixed in Go 2 as well).
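Concretely, in today's Go (a tiny sketch; assumes the usual "os" import, and `show` is just an illustrative name):

    func show(path string) error {
        f, err := os.Open(path) // delete the check below and `err` becomes
        if err != nil {         // "declared and not used": a compile error
            return err
        }
        defer f.Close()

        os.Remove(path) // error-only result: silently ignorable today
        return nil
    }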


Expect to see a lot of `_, _ = fmt.Println(...)`, then.


Since they’re calling it Go 2, perhaps they could just make an exception for println by having an alternative with no error (like map access with k, ok or just k).


> The draft design introduces two new syntactic forms. First, it introduces a checked expression check f(x, y, z) or check err, marking an explicit error check. Second, it introduces a handle statement defining an error handler. When an error check fails, it transfers control to the innermost handler, which transfers control to the next handler above it, and so on, until a handler executes a return statement.

Sure. It's explicit error handling that just so happens to look exactly like a form of exception handling; cf. the zero-overhead exceptions proposal in C++-land.

This is how you say, in language-design-ese, "I want to do what my critics have been saying for years is the right thing without admitting that they've been right all along".


I really don't know where people get on this high horse and imagine that there's been some kind of fighting attitude here.

I saw a talk by James Gosling (of Java fame) a few months ago in which he said one of the most important things he's glad he did throughout the development of the language is "Decide... not to decide".

The purpose of this "deciding not to decide" should sound familiar by now: once you commit to something, it's hard to backtrack. And once it's hard to backtrack, you have lost your ability to look for more optimal solutions than the one you've fixated on.

This is why generics in Java took so long. And to this day, the developers of those generics do not regret taking their time to do so. I think that's interesting to note, don't you?

It's okay to make decisions after deliberation.

Deliberation does not imply a fight.

---

Tone police aside, did you... read... the proposals? I don't think it's correct to dismissively say that the proposed handle/check syntax is identical to other checked exception syntaxes; it doesn't compose in the same way as traditional try-catch blocks; and it doesn't have the power to create arbitrary longjumps across multiple stack frames as exceptions do. In short, it's very little like traditional exceptions at all.


Actually they do, hence the ongoing work to add value types and reified generics to Java, which is again a multi-year project with no defined end date.


Sure, but just like there's a difference between "deliberation" and "a fight", there's also a difference between "hmm, there's more to do" and "we regret the thinking we invested so far".


If you are interested in learning why check-handle differs from traditional exception handling, you can read https://go.googlesource.com/proposal/+/master/design/go2draf....

If you just feel like snarking, then sorry for spoiling.


Are there any write-ups on how your error handling proposal interacts with higher-order functions and generic wrappers? That has always been a problem with the "explicit everything" approach to error handling, in my experience - like, if you map() over a collection that does disk I/O as part of its implementation, then map() needs to be able to flow I/O errors through - but it can't be aware of them specifically, because it's a generic algorithm. In Java, it pretty much meant that you had to use the RuntimeException escape hatch, defeating the purpose of checked exceptions. For another example, generic collection interfaces don't reflect the fact that e.g. fetching an item can fail, making them that much less generic.

And to work around that problem, you need some kind of generics that lets you define error signatures in terms of other signatures - e.g. for map(), you want to say that it can return an error if and only if the function it maps can return an error (and ditto for error types).
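To make it concrete, here's a sketch in the draft's paren syntax of the fallible variant (Map and its signature are my invention, not from the doc):

    func Map(type T, U)(xs []T, f func(T) (U, error)) ([]U, error) {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            u, err := f(x) // an I/O error inside f flows out through here
            if err != nil {
                return nil, err
            }
            out = append(out, u)
        }
        return out, nil
    }

The infallible variant needs a second, incompatible signature; there's no way to say "returns an error iff f does", which is the parametricity over error signatures I'm describing.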


It looks like the only difference is that catch blocks precede try blocks and aren't indented to show how they nest. One could mechanically convert

  handle err {
          cleanup1()
          return 42, nil
  }
  handle err {
          cleanup2()
  }
  check dostuff()
into

  try {
      try {
          dostuff();
      } catch (Exception e) {
          cleanup2();
          throw e;
      }
  } catch (Exception e) {
      cleanup1();
      return 42;
  }


That is not the only difference. Exceptions travel through calling functions if there is no catch clause. The check construct does not.


When every function call uses "check", we end up with the same control flow as exceptions, which is almost always what I want. You only get different behavior where you don't use it.


The default handler just propagates the error up to the caller.


And then the caller has to either check or ignore the error. The error never propagates to the caller's caller without an explicit action in the caller. That is unlike exceptions.


What is an "unmarked check call"?


But it will propagate through a chain of unmarked check calls. If the requirement to mark potentially-throwing call sites is what disqualifies an error handling scheme as belonging to the class of exception systems, then we need a new term that includes both exceptions and this proposal and excludes everything else, because the proposed Go scheme very closely resembles exceptions.


An unmarked check call would mean explicit error handling like if err != nil {}, which isn't unchecked; it's just checked without using `check`. If you don't have this, then your error variable will be unused and you'll get a compile error.
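Roughly, the contrast (the first form in draft syntax; strconv.Atoi just as a stand-in):

    // marked: the call site flags that an error can exit the function here
    n := check strconv.Atoi(s)

    // "unmarked" but still checked: today's explicit idiom
    n, err := strconv.Atoi(s)
    if err != nil {
        return 0, err
    }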


To some extent I almost wish they had stuck to their guns. The current brokenness actually serves as a much stronger argument for why exceptions are needed. There's a visceral understanding that comes from forcing people to handle every error immediately that is actually very hard to teach.


I'm disappointed that Java has given exceptions such a bad rap that all new programming languages dance around them like crazy. I understand Rust going with error returns due to its low-level nature, but Go is pretty high level.

This semi-explicit error handling optimizes for fairly trivial cases, as very few real-world functions lack some kind of exceptional condition. The difference between prefixing nearly every function with "check" vs. having that implicit is very small.

Proper use of exceptions puts the focus on where you can handle the error rather than where the error occurs. Java screws this up with checked exceptions, which put all the focus on the error site, and programmers have subsequently determined (correctly) that checked exceptions are hardly better than error returns.

However, that is fundamentally the wrong way to look at error management. If a method 17 levels deep on the call stack throws a network error, I shouldn't have to care about those 17 levels because I'm going to restart the whole operation at the point I can do that. The important part is not where the error is raised/returned.


Exceptions are second only to proper return types for error handling such as Result<T, E> or Option<T>.

And in order to have those you have to have generics. Once you have generics, result types can be used for 99% of error handling.
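For the curious, a very rough sketch of what that could look like under the generics draft's paren syntax (this is entirely my speculation; the draft proposes nothing like it, and without pattern matching it stays clunky):

    type Result(type T) struct {
        val T
        err error
    }

    func Ok(type T)(v T) Result(T)       { return Result(T){val: v} }
    func Fail(type T)(e error) Result(T) { return Result(T){err: e} }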


No, because you are still manually bubbling the error up through those 17 stack frames, as the OP was (rightfully) complaining about.

The only downside of exceptions is the loss of equational reasoning, really.

But for everything else, they are a superior approach to result values.


It's not really about Java giving exceptions a bad rap. After all exceptions are a feature of almost any OOP language designed after 1990 and they work nearly the same way in all of them.

I've argued in the past that the reason the current crop of languages that compile to native code tend to avoid exceptions has to do primarily with implementation cost, the backgrounds of the language designers, and a mistaken attempt at over-generalisation in an attempt to find competitive advantage.

https://blog.plan99.net/what-s-wrong-with-exceptions-nothing...

One problem is that the sort of people who write native toolchains that compile to LLVM or direct to machine code like Go, tend to be the sort of people who spent their careers working in C++ at companies that ban C++ exceptions. So they don't really care or miss them much.

Another is complexity. One of the most useful features of exceptions is the captured stack trace. But implementing this well is hard, because generating a good stack trace in the presence of optimised code with heavy use of inlining requires complex deoptimisation metadata and many table lookups to convert the optimised stack back into something the programmer will recognise. Deopt metadata is something the compiler needs to produce as it works, at large cost to the compiler internals, but most compiler toolchains that aren't optimising JIT compilers (like the JVM) don't produce such metadata at all. So stack traces in natively compiled programs are frequently useless unless you compiled in "debug mode" where all optimisations are disabled. VMs like the JVM have custom compilers that generate sufficient metadata, partly because they have more use for it - they speculate as they compile and use the metadata to fall back to slow paths if their speculations turn out to be wrong. But a language like Go or Rust doesn't have a runtime that does this.

Finally, I find many of the arguments cited against exceptions to be very poor.

For example the Go designers cite Raymond Chen's blog posts as evidence that exceptions are bad because it's easy to forget to handle errors and the checks are "implicit". This seems totally backwards to me.

You cannot forget to handle an exception in most languages that use them. If you forget a method can throw and don't catch it, the runtime will catch it for you and print out a stack trace and error message that tells you exactly what went wrong and where. The thread won't continue past the error point and the runtime will stop the thread or entire app for you. But if you forget to check an error code, the error is silently swallowed and no diagnostics are available at all. The program will continue blindly in what is quite possibly a corrupted state.

The Go designers argue that with exceptions you can't see "implicit checks". This is an odd argument because, beyond not being able to see if you are ignoring a return code, all programs must have implicit checks of correctness. For example checking for null pointer dereferences or divide by zero conditions, yet it makes no sense for e.g. divide to be a function that returns an error code you must check every time. In C, such checks turn into signals that can be caught or on Windows, the OS delivers them as exceptions! Yes, even for C programs, it's called SEH and is a part of the OS API - all programs can have exceptions delivered to them at any point. If you don't catch them then the OS will catch it for you and display the familiar crash window. UNIX uses signals which are not as flexible but have similar semantics; if you don't catch the signal you'll get the OS default crash handler.

So this is no knock against exceptions. Moreover, very often you don't want to handle an error. Not all code can or should attempt to handle every single thing that can go wrong at the call site, or even at all. So in practice errors are almost always propagated back up the stack, often quite far up. For many errors there's nothing you can do beyond the default exception handling behaviour anyway of printing an error or displaying a crash dialog and quitting, e.g. many programs can't handle out of memory or out of disk conditions and nor would it make sense to invest the time in making them handle those conditions.

Exceptions were developed for good reasons; they were designed in response to the many enormous problems C-style error handling created and codified common patterns, like propagation of an error to the point where it makes sense to capture it, changing the abstraction level of errors and so on.

The Go designers have realised what many people told them from the start - that their error handling approach is bad. Unfortunately they don't seem to have taken the hint and reflected on why they made these bad decisions in the first place. Instead their language comparison ignores all the other languages that went a different direction, and only looks at Swift and Rust. Neither language is especially popular. Other modern languages like Kotlin which do use exceptions are completely ignored.

In conclusion, nothing in this document makes me think that Go is likely to fix its self-admitted design errors. They're probably just going to make variants of the same mistakes.


> It's not really about Java giving exceptions a bad rap. After all exceptions are a feature of almost any OOP language designed after 1990 and they work nearly the same way in all of them.

Java has checked exceptions, which have been universally agreed upon as a bad idea. No subsequent languages use them, and exception specifications have even been removed from C++. Unfortunately the most common first language for programmers is Java, so they learn the worst possible implementation of exceptions. All your pro-exception arguments (and all of mine) pretty much fall apart when dealing with checked exceptions. So it's easy to see why anyone whose only exposure to exceptions was from Java would consider error returns the superior option.

> But implementing this well is hard, because generating a good stack trace in the presence of optimised code with heavy use of inlining requires complex deoptimisation metadata and many table lookups to convert the optimised stack back into something the programmer will recognise.

Of course, the only reason any of this is necessary is that compilers don't implement exceptions in the naive way of simply returning and propagating the exception the same way you would propagate an error return. Instead they go through a convoluted mess to ensure that when the exception doesn't occur, you pay no run-time performance penalty. So, in fact, this is still an advantage of exceptions over error returns: it's an optimization that is only possible when error handling is not explicit.


That's all true. I've become more sympathetic to checked exceptions over time, though. I think the issues Java had with them are more to do with poor choices of what to make checked in the standard library. For example, IOException should not have been made checked. Also, it's too difficult to change an exception from checked to unchecked, and it should have created suppressible warnings rather than compiler errors.

It's a pattern - early versions of Java erred too much in favour of making things compile errors rather than warnings, like unreachable code being an error, even though it's common whilst debugging or writing code (Go made the same mistake). Not catching checked exceptions is something very common whilst making prototype code or simple command-line tools, where all you can do with most errors is print the message and quit anyway. It isn't worth forcing the developer's hand via an error.

But I won't be surprised to see some variant of checked exceptions be explored again in the coming years, probably with IDE support. Being able to know how a method can fail in the type system, beyond just checking the docs, is a very useful thing if the information is used in a reasonable way.


Checked exceptions fundamentally violate most of the principles of object-oriented programming by forcing you to expose implementation details into the method type signature. Encapsulation, abstraction, and even polymorphism are violated by this.

Java programmers typically get around this by creating ClassNameException classes and then stuff the real exception (untyped) into the innerException property.

Checked exceptions are conceptually and fundamentally flawed.


People are often lazy and do that, but it's not fundamental. You can and should hide exceptions behind more abstract types if you're using layers of libraries, but that happens with error codes too - a curl error code meaning "server returned 500" might get turned into a different error code meaning "could not download data file", losing the exact nature of why it failed because the library doesn't want to expose curl's constants in its own API.

The big win of Java-style exceptions in this situation is there's a common protocol for chaining, so you can make more abstract errors but without losing any detail about what the original cause was. I find this helps me almost every day.


> Checked exceptions fundamentally violate most of the principles of object-oriented programming by forcing you to expose implementation details into the method type signature

No, they don't. They can be abused in a way which does this (and can be implemented in a way which unnecessarily creates avenues for it), but checked exceptions in general are isomorphic to static return types, and don't violate OOP any more than static return types do.


Static return types would violate OOP as well if they exposed implementation details unrelated to the result of the operation.

If you have a method called GetPrice() that uses a database, you know it, because you have to expose all the possible exceptions. If you change that to use a file or a network service, you have to change every caller and every caller of that. Is that abstraction? No. Is it polymorphism? Also no.

You can't have two different implementations of GetPrice() that are compatible unless they have exactly the same exception result. What's the value of that?


> Static return types would violate OOP as well if they exposed implementation details unrelated to the result of the operation.

You're arguing against a particular usage of checked exceptions. You don't have to use checked exceptions like that so the argument is not convincing.

> If you have a method called GetPrice() that uses a database, you know it, because you have to expose all the possible exceptions.

And if you had a method called getPrice() that should always fail if a product is part of a bundle and cannot be purchased separately, then you would potentially want every caller to be aware of that possibility. One error condition is an implementation detail and the other is a fundamental invariant of the domain. Guess which one checked exceptions should be used for?

> You can't have two different implementations of GetPrice() that are compatible unless they have exactly the same exception result. What's the value of that?

That's the whole point. There are invariants that should be communicated in a type-safe manner. The confusion here isn't new and is actually well understood. All error signals fall into two buckets -- does my caller care and can she respond sensibly to it or will she just pass on the error and/or quit. The benefit of checked exceptions is maximum flexibility: you can guess (and it's always a guess) that your caller doesn't care and throw an unchecked exception or, if you feel really strongly that the caller should care because some key invariant has been broken, you can throw a checked exception. It's not an exact science but this question -- does my caller care? -- goes to the very heart of encapsulation and abstraction.


I disagree. If you have a polymorphic getPrice() as part of an interface, then you can't possibly list all the invariants as individual well-typed exceptions. That would assume perfect knowledge of all possible implementations of that method, now and in the future. That's not OOP.

In the non-polymorphic case of a bundled item, you'd simply not have a getPrice() method on the item at all. It might simply not implement the PricedItem interface, for example. There are plenty of ways of ensuring all invariants are met, in a type-safe way, without using exceptions. If it's possible to reach that situation in normal, correct operation (such as an item in a bundle), then it's not exceptional.


> Static return types would violate OOP as well if they exposed implementation details unrelated to the result of the operation.

Sure, but neither checked exceptions nor static return types inherently do that.

Bad practices around either can, and bad language design around either can make that more likely. (I honestly think unchecked exceptions are a bigger problem, because they make poor choices less obvious.)

> If you have a method called GetPrice() that uses a database, you know it, because you have to expose all the possible exceptions.

Only if you are letting them bubble up as-is; you only have to expose exceptions you don't handle. Which is equivalent to: if you shouldn't expose an exception to a caller, you must handle it, perhaps with a handler which simply throws an exception that is appropriate to expose.


> you only have to expose exceptions you don't handle

Which is the vast majority of exceptions, because by definition exceptional conditions can't be handled within the method. That's why they are exceptions.

The presumption that most exceptions are handled locally is wrong. Exceptions, if they are handled at all, are almost always handled far away from the site they were thrown. Further it doesn't even matter which method it is! I can handle all network errors by restarting the operation from the start; I don't need to know which of the 3,000 possible methods up from my loop actually triggered the network exception. If any of those methods change from throwing to not throwing a network error, it literally doesn't matter. But I do need to know it's a network error and not some abstracted LibraryNameException.


> But I do need to know it's a network error and not some abstracted LibraryNameException.

How is that any different from using return values like Rust does? It seems that libraries have their own `LibraryError` type which you have to decode anyway.


> That's all true. I've become more sympathetic to checked exceptions over time though. I think the issues Java had with it are more to do with poor choice of what to make checked in the standard library.

Exactly this. Working on large codebases has made me a fan of checked exceptions. It's certainly the most efficient and elegant mechanism available to communicate real domain violations. Java's mistake was that the vast majority of its checked exceptions (IOException, SocketException, etc.) should clearly have been RuntimeExceptions.

> But I won't be surprised to see some variant of checked exceptions be explored again in the coming years, probably with IDE support.

The logical conclusion of checked exceptions is Design By Contract. It would be great to see checked exceptions expressed as real invariants. The syntax might be something along the lines of:

    cancel(Order o) throws TooLateToCancelException if "!o.isWorking()" { ... }

Because checked exceptions are required to express real domain invariants you might as well go all the way and capture those invariants too.


I think they had a valid point that exceptions and their implicit error checks do make it hard to look at the program text and see the error-handling program flow.

However, for languages with exceptions, it makes me wonder if perhaps IDEs and other tools could take up the slack here. If I call a function that was declared to throw an exception, why not visually annotate that as a place where an error may bubble up from? The information is there in the definition, but it'd be very handy to see it at the call site.


Just assume that all methods throw, could potentially throw, or will in the future throw an exception, and you'll have no problem looking at a program and understanding the error handling flow.

There is a consistent focus on the site the error is generated but that's the least important piece of information for handling errors.


Isn't Kotlin limited to supporting Java's exception model, though?


There's a Kotlin/Native that uses LLVM as its backend and differs from Kotlin/JVM in several key ways. I guess it could do something different but as far as I know it has exceptions.

Also Kotlin doesn't have checked exceptions at all.


Java's explicit exceptions design is based on Modula-3, CLU, C++.


Well, shoot. I don't like/don't want to like Go, but they keep getting better.

My general annoyance is that they keep things ~just different enough to be irksome, e.g. in this proposal generics use parens:

type List(type T) []T

Instead of using <> like every other language (okay, I meant Java and C++) with generics:

type List<type T> []T

(Another example of being ~just different enough to be annoying was switching from C style `int i` to `i int` but then not using colons like `i: int`. And then using name case for public/private (wtf), so I read variable decls of `foo foo`, which happens all the time in methods that are private implementation details, and my polyglot brain interrupts: ...damn, which one is the type again?)

As usual, I'm sure `(type T)` vs `<type T>` will be due to "reasons" (...oh, maybe it's that crazy `<-chan` syntax that I dyslexically can never keep straight and wish it was just real type names like `ReadChan[T]` and `WriteChan[T]`), but really I hop languages all day and this makes a big difference to my eyes going back/forth.

They should also go back and fix `map[K]V` to be `Map[K, V]`; iirc one of the touted benefits of `gofmt` was to "make automated source code analysis/fixing easy". Great, take advantage of that and get rid of that inconsistency. Same thing with `Chan[string]` instead of `chan string` (both are public types so should be upper case for consistency). It's a major release, just do it.

Anyway, disclaimer yeah I'm nit picking and this is what made Guido quit Python. To their credit, Go has great tooling, the go mod in 1.11 finally looks sane, etc. Their continual improvement is impressive.


> Instead of using <> like every other language (okay, I meant Java and C++) with generics

Using angle brackets for generics is really annoying when it comes to the grammar of the language. There are nasty edge cases around things like interpreting `>>` as a shift operator in some places but closing two type argument lists in others. There are occasional ambiguities like:

    foo ( a < b , c > - d)
Is this "foo(a < b, c > -d)" or "foo(a<b,c> - d)"?

Angle brackets are (alas) more familiar, but they aren't simple. Go seems to place a premium on the latter over the former.


I disagree with a lot of Go's design decisions, but avoiding `<>` for generics seems like a very good idea. They're actually learning from other languages' mistakes this time.

https://github.com/rust-lang/rust/commit/3d5fef6f3001beeffde...


The proposed contracts design is quite different to Java and C++'s generics. Why should they use the same syntax?

If you want Java and C++ then they are there for you to use. Go is a different language and its differences are not gratuitous. Use it for a while and you'll find out why "i int" works better than "int i" (compare Go's pointer and function type specs to C's).
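The classic illustration (the C line is a comment, for contrast; `op` is just an example name):

    // C:  int (*op)(int, int);   read inside-out: pointer to function returning int
    // Go:
    var op func(a, b int) int    // read left to right: op is a func of two ints
    var ops []func(a, b int) int // and composition stays readable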


> Why should they use the same syntax? ... Go is a different language

True; I harp on Go primarily due to their IMO idiosyncratic approach: AFAIU they claim to be a "purposefully boring", "good for in-the-large" language. (Which is great, I want/like languages like that.)

But then they do cute things like the upper/lower case for private, the "method params can be defined `a, b int` because it saves an `int`", the `:=` assignment vs. a standalone `var`/`val` for "hey this is a new variable", not throwing in a ":" for `i: int`, etc., that are all, IMO, very tangential/distracting from their core mission of being a "boring language".

So, if they were purposefully trying to be a wildly different language, sure, that's great, I would care a lot less.

But it's that they want to be a boring language, and yet are "off" by just enough, that it admittedly frustrates me more than it should.


Those things aren't being "cute". For example, upper/lower case for exported/unexported is a huge readability benefit.

You would be less frustrated if you took the time to understand why these decisions were made, instead of pronouncing them arbitrary.

Go's designers were long-time C users (longer than anyone else in the world, in fact); they had every reason to stick with the way things worked in C, but they deliberately chose to make Go different.


> Go's designers were long-time C users (longer than anyone else in the world, in fact); they had every reason to stick with the way things worked in C, but they deliberately chose to make Go different.

Most criticisms of Go don't come from C perspective, though.


The ones I'm responding to in this thread do, though.


Uppercase vs. lowercase is not a readability benefit. For someone reading the code, an explicit 'pub' or other marker is much more readable, as in "I understand the implications of what I am reading". Many people can skim over it without noticing the case of the variable name.


I thought it was weird for 2 months. Hasn't bothered me in the 4 years since. Except [de]serialization at first, and then I got over it when I realized it's worth the verbosity just to be able to flexibly rename things internally while the external interface remains stable.


You don't see "pub" where a name is used, only where it is declared. Once you've been using Go for a little while the benefits become obvious.


why would you ever not want to like something? liking something is universally better than disliking something.


I dunno, arguably, liking things that are bad for you can be worse than disliking them.


I am disappointed that Go seems to follow Java and C++ with generics support instead of taking a more Lisp-like turn like e.g. Julia.

I would much rather have powerful hygienic compile-time macros to generate code. Then I would have the power to do all that generics does, but also a lot more.

Just having those powers at compile-time evaluation would be rather good; no need to go all the way to Julia/Lisp.

We know what a mess C++ became, with people abusing the template system to do compile-time metaprogramming. Much better to simply provide a good macro system.


Generics are immune to much of the mess that templates cause; they are a somewhat different thing. They are also less powerful, but that fits with Go's desire to avoid language constructs which let people write really confusing code, I think.

If you want cool meta programming abilities, there is Rust, Nim, Lisp, Julia, F# and others to choose from. No need for Go.


By that logic, then currently, if you want cool generics abilities, there is Java for you. No need for Go.

But then there's this spec for generics, so obviously one wants to develop Go further. And I think it's a valid question why generics in particular is chosen instead of other ways of solving the same problem.


>By that logic, then currently, if you want cool generics abilities, there is Java for you. No need for Go.

Java has one of the least good generics implementations of modern languages. I suggest C#, or even better, F# if you want to make heavy use of them =)

Generics compared to templates keep the language simpler, solve the most common cases of code repetition, and can easily provide good error messages and runtime performance.


> Java has one of the least good generics implementations of modern languages. I suggest C# ...

What are the particular differences that make you suggest C# over Java generics? As far as I am aware they are nearly identical for practical use (as language constructs). Things like Java using type erasure don't actually affect the language much, only the particular compiler implementations.


They affect the language a lot in non-trivial applications. For example, you can't implement the same generic interface with different type parameters, even when it is perfectly meaningful: suppose we have an interface Convertible<T> that objects that can be converted to T should implement - why can't an object implement Convertible<int> and Convertible<string> at the same time?

Ditto with overloads - suppose you have a family of functions like frob() that apply to ints and doubles - so there's frob(int) and there's frob(double). So far, so good. Now, you want to add helpers to do the same for collections - and so there's frob(Iterable<int>) and frob(Iterable<double>)... except that doesn't work, because you can't overload on a generic interface, since the method signature has that information erased entirely.

And then, of course, it breaks Reflection. Which is not so common in application code, but has many valid uses in the libraries.


Templates are generics.


They're in that weird middle ground between macros and generics, but arguably still closer to macros. The original pre-Standard C++ templates were pure macros. Then two-phase lookup was added in C++98, which muddied the water.


They are sort of a superset of generics.


What is the type safety story of the macro-based Lisp-style code generation?

Macro-like code generation is what the compiler does when it processes type-parametrized functions. It's more restricted that way, but it also prevents type-unsafe macro expansions due to mistake.


I am not sure I understand your question. If a macro expansion violates a typing rule then you get a compile-time error, of course... is this a question about the quality of error messages?


Crafting good macro error messages is really hard. Naively, they point into the expansion which is user-hostile. It's a fair bit easier (though still difficult) with certain implementations of generics.


> I would much rather have powerful hygienic compile-time macros to generate code

There's still a place for this, even in a Go with the proposed generics.

There's 3 main styles of generics, each with different speed/space trade-offs:

* use interface{}, which runs slower and has no compile-time checking, but can be faster to code

* use the proposed type-safe generics compilation, which generates compact code, but requires more detailed coding

* use the hygienic compile-time macros, which run fast, but can generate lots of bloat when compiling

There's a place for all three being available. So perhaps Go 2 will get macros one day!
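To make the trade-off concrete, here's a sketch of the first two styles for a trivial Sum (assumes `import "fmt"` and an Addable contract like the one discussed elsewhere in this thread; the generic half uses the draft's paren syntax, so treat it as a sketch, not settled syntax):

    // Style 1: interface{} compiles today, but fails at runtime on bad input.
    func SumInts(xs []interface{}) (int, error) {
        total := 0
        for _, x := range xs {
            n, ok := x.(int)
            if !ok {
                return 0, fmt.Errorf("not an int: %v", x)
            }
            total += n
        }
        return total, nil
    }

    // Style 2: draft generics; checked at compile time, one body for many types.
    func Sum(type T Addable)(xs []T) T {
        var total T
        for _, v := range xs {
            total += v
        }
        return total
    }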


I don't really understand your example. Julia has both macros and built-in parametric polymorphism. Unless you expect user code to implement its own type system (technically possible; Racket does it), macros aren't a solution to the same problem as generics.


If I could do

    SortSliceOfIntFunction := MakeIntSorter!(int)

...then I would not need generics to sort a slice of ints. I just generate the code that I would have written without generics to implement the case for ints.

You need to instantiate all the cases you need manually... but today we are using Go without generics entirely, and it sort of works.

I agree it is not the same.


What about sum types? In the 4 years I've been programming Go professionally, I missed sum types more times than I missed generics. Writing yet another

  type SumType interface {
      isSumType()
  }
won't kill me, but it does get annoying.
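Spelled out, the idiom and its weakness look something like this (Expr/Num/Add are just example names):

    type Expr interface{ isExpr() }

    type Num struct{ Value int }
    type Add struct{ L, R Expr }

    func (Num) isExpr() {}
    func (Add) isExpr() {}

    func eval(e Expr) int {
        switch v := e.(type) {
        case Num:
            return v.Value
        case Add:
            return eval(v.L) + eval(v.R)
        }
        panic("unknown variant") // the compiler can't check exhaustiveness
    }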


for sure. It would also be lovely to add sum types to Protobufs too, while we're at it. Many message protocols are very neatly described by sum types, in my opinion.

Also, errors would be way more elegant as a sum type.

Instead of:

    a, err := x
    if err != nil { ... }

we can do:

    switch a {
    Err(err) -> { ... }
    Ok(a)    -> { ... }
    }

and avoid nil pointers :) And then "check" is literally just "bind / try! / flatMap / =<<" or whatever your language du jour calls it :)


Regarding Protobufs, did you mean something other than `oneof`?

https://developers.google.com/protocol-buffers/docs/proto#on...


Annoying and the performance is terrible, since pretty much every instance ends up being an allocation, and allocations in Go are quite a lot more expensive than in other GC languages. Further, it doesn't quite have the same semantics--if you return one of these interfaces, your caller still doesn't know what all of the permutations they have to check for.


Plus a short video in the Go blog: https://blog.golang.org/go2draft


> The call with type arguments can be omitted, leaving only the call with values, when the necessary type arguments can be inferred from the value

I think if too many different ways are offered in the generics proposal, it will work against the simplicity of the language for newcomers.


Yea, I really like Go because it isn't C++2049. I was able to get writing it quickly, and I don't have to think too hard about what the right (read: most clever) way to implement something is given all the language features. I like to just write code and I think a post here said it best, Go gets out of the way and lets you build.


weeks of work can save an hour of planning


Not having to provide the type argument at the call site is by far the most intuitive way to call a generic function.


I am very excited to see the draft designs. They point to some good progress with some open design aspects with Go.

The error handling concept looks to me as if it is ready for implementation in one of the next Go revisions. The fundamental difference between Go error handling and an exception mechanism has always been that Go errors are returned as part of function results. The calling function gets the result and the error and can then decide how to deal with it: handle the error, or return from itself. Exceptions not only mean automatic returns, but also automatic stack unwinding across all functions that do not implement an exception handler. I have always preferred the principle of Go error handling to exceptions.

What is great about the error handling proposal is that it is fully contained inside each function. It does not change the signature of functions; they still return an error amongst the return values, or not. You cannot see, when looking at a function, whether it uses the proposal or not. But like the defer statement, check and handle offer an elegant way to schedule effects to happen when a function returns. A plain return will trigger all deferred function calls; a check "catching" an error will trigger all handlers and then all deferred functions. Furthermore, check works as a filter, so chaining functions which return values and errors becomes much easier. I also like the choice of "check" over "try" or "?": a short word reads better than an operator character, and "try" is too associated with exception mechanisms, which check/handle is not.

At first glance, the handlers looked a little confusing with respect to logic flow. But if one considers check/handle the twins of return/defer, they become very obvious and fit well into the existing framework. A handler can be considered a deferred function for the return in the error case, and check the trigger of that error-case return.
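The draft's CopyFile example shows the twins nicely (reproduced approximately from memory; see the doc for the exact text):

    func CopyFile(src, dst string) error {
        handle err {
            return fmt.Errorf("copy %s %s: %v", src, dst, err)
        }

        r := check os.Open(src)
        defer r.Close()

        w := check os.Create(dst)
        handle err {
            w.Close()
            os.Remove(dst) // only runs if a later check fails
        }

        check io.Copy(w, r)
        check w.Close()
        return nil
    }

The first handle plays the role of a deferred "wrap and return"; the second stacks on top of it exactly the way defers stack.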

On the generics side, the concept is not finished yet, but what is there so far looks to be on a good path to adding generic types to Go without making it a much more complex language or accepting too many sacrifices like the type erasure in Java. On the syntax side, I like that they managed to find a syntax which reuses the normal parens already used for function and method syntax, as this makes the code a bit nicer to read. I am very curious to see how it develops.


One nit-pick in the generics overview: it claims Swift introduced generics in Swift 4, but parametric polymorphism has been in the language since Swift 1, though it has been refined with each subsequent release.


This is also incorrect:

> Equatable appears to be a built-in in Swift, not possible to define otherwise.

Equatable is a standard library type.


They may be trying to refer to the automatic synthesis of Equatable conformance.


FWIW, these changes make it much more likely that I would voluntarily choose to write a Go program.


I agree; honestly, the error handling alone has always been the main reason I never used Go. Yes, I knew there were good reasons for that design, but I prefer to have options in how to handle errors and to keep the code clean and elegant.


Really excited to see things start to take shape. I'd imagine there will be a lot of rounds in that feedback loop to get it right, but I'm glad to see the Go team has this stuff on the roadmap.


That error handling looks very much FP inspired, intended or not.

I'm not going to say the M-word, someone will maybe say it in ... either case.


   I'm not going to say the M-word, someone will *maybe* say it in ... *either* case.
Just wanted to say I see what you did there. Nothing insightful to add.


Had a bit of a rough day, and insightful or not, your comment still did brighten my day.


monad transformer stacks amiriiiiiiiite


I didn't have a very precise pattern in mind, but yes that would smell about right!


It seems to be lacking a mechanism for accumulating multiple errors, which can be done using a package like hashicorp/go-multierror.

The usage pattern is that you have a complex function that does something like parse a document and return the data from it. But the file format is so complex, that a lot of minor problems can occur, but they don't necessarily prevent you getting some data out. So the return value is the data and a set of errors (or warnings, if you like).
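Today that pattern looks roughly like this with go-multierror (Doc, parseHeader, and parseBody are placeholders):

    import "github.com/hashicorp/go-multierror"

    func Parse(raw []byte) (*Doc, error) {
        var errs *multierror.Error
        doc := &Doc{}
        if err := parseHeader(raw, doc); err != nil {
            errs = multierror.Append(errs, err) // record it, but keep going
        }
        if err := parseBody(raw, doc); err != nil {
            errs = multierror.Append(errs, err)
        }
        return doc, errs.ErrorOrNil() // nil when nothing went wrong
    }

check/handle as drafted has no affordance for this accumulate-and-continue shape, since check bails out at the first error.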


This might actually be addressed, albeit very briefly, in one of the drafts: https://go.googlesource.com/proposal/+/master/design/go2draf....

>To implement formatting a tree of errors, an error list type could print itself as a new error chain, returning this single error with the entire chain as detail. Error list types occur fairly frequently, so it may be beneficial to standardize on an error list type to ensure consistency.


Heh, it's hard to get someone more qualified to look at this issue than Ian Lance Taylor, a member of the Go team unknown to most, whom more people ought to know about.


(I usually avoid ranting, but just couldn't resist it this time around. Go on, bring on your downvotes, you still know deep inside this is the truth.)

Oblig. generic laughter [1]

Are Go's problems over, then? I have read the error handling draft so far, and girl, was I in for a big surprise. Imagine a few nested try-catch blocks, where all the try{} and catch{} are written... imperatively, one after the other, with both try{} bodies and catch{} blocks interspersed in the same scope. E.g., line 1: "catch" block, lines 2-10: "call IO" statements, line 11: another "catch" block, etc. The errors are always passed to the previous "catch" block, then the one before that, etc, until the error has been handled.

Haven't we seen something like this before? Ah yes, the goto statements!

This was designed by a person with a severe case of scope-o-phobia. If that person has designed the rest of Go (and I think she has), the language might be unsalvageable. Just start from scratch, and do it right this time around. Oh wait, no need - we already have Kotlin.

/rant

[1] https://www.youtube.com/watch?v=iYVO5bUFww0


It'll be interesting to see how Go handles a new major version. Will it be a Python 3 situation? Or more like, I dunno, Ruby 2 and onwards.


It is an explicit design requirement that we never invalidate any Go 1 code or bifurcate the community. Any language changes must keep that in mind. Also, "Go 2" won't be a release so much as a series of releases. Go 2 might be called "Go 1.15.0". Even if it's named "Go 2.0.0" for marketing/excitement reasons, it'll still effectively be Go 1.15.0 and compile all Go 1 code and be able to call back (and forth!) between Go 1 and Go 2 seamlessly.

We're not doing a Python 2/3 or Perl 5/6.


Presumably that first "doing" in the last sentence was supposed to be a "not."


Whoops, thanks. Fixed.


Now you have two "doing" :)


I think the two big problems with the Python 3 rollout were a lack of compelling reasons to upgrade (at the time; many big features have been implemented in Python 2 in the years since), and upgrades like this being inherently scarier in dynamically typed languages.

If Go 2 brings improved error handling and generics, those are HUGE reasons for the community to get on board and upgrade quickly. Plus with compiled languages, the upgrade is way less scary: so many errors will be caught at compile time that can sneak through as subtle bugs in dynamically typed languages (unless there’s GREAT test coverage). Also, Go 2 may not even bring breaking changes! Though I’d imagine some things will still break no matter what (code analysis tools, programs that use variable names that become keywords, etc.).

Maybe I’m being naive, but I think Go 2 will be adopted quickly with relatively little pain, as long as it brings super-desired features like generics and error handling improvements.


> I think the two big problems with the Python 3 rollout

When Python 3 was announced I thought "this is a great way to roll out". After a few years I thought "what a terrible rollout; even I haven't switched to version 3 100% yet because some libraries I rely on are still version-2 only". Now I'm back on the positive side of things. It turns out, in the grand scheme of things, 5+ years for a new version to be adopted didn't really affect me in a negative way. I'm back on the "slow and steady wins the race" bandwagon.


There's a bit of survivorship bias there. There was a very real chance that Python would get replaced by another language in the same way that Ruby and Python replaced Perl.

If that had happened, you'd probably still be in the "what a horrible rollout" frame of mind.


I seem to remember that Ruby was already replacing Python pretty heavily 10 years ago. Rails was gaining momentum at a huge rate and Python web frameworks were dying, tools like Puppet were on the rise etc. The full history of Perl 6 still lay ahead too.

If it hadn't worked out in the end, it would be just as relevant to claim that "what a horrible rollout" was just the benefit of hindsight talking.

I think the Python devs and community need to take some credit for managing to pull off that transition successfully (even if it took longer than most expected). And to claw their way back into relevance again.

And every other language community now has a case study they reference during discussions about language upgrades :)


None of those really replaced Perl...


> upgrades like this being inherently scarier in dynamically typed languages.

So true.


I don't know how Ruby 2 does things, but Go will definitely not be like Python 2/3. Old Go 1 programs will continue to compile under Go 2, but Go 2 programs will not compile under Go 1.

It's also not certain that we will need a Go 2. It may be that all features can be written in a backwards compatible manner and the major version won't need to be bumped.


With these proposals so far, nothing breaks existing code. Any code you write today that runs on Go 1.11 would compile and run with these proposals. They are just feature additions at this point (mainly some new keywords, like "contract", "handle", and "check").


I have nothing more substantial than a gut feeling to back this up, but it all feels half baked and too conservative to me. Like they cranked out some special syntax and new concepts that sort of solve their problems and called it a day. It may look like the path of least resistance initially, but sooner or later the features will start stepping on each others toes.


> it all feels half baked and too conservative to me

Your gut is wrong. These proposals have been in development for years. The author of the generics proposal has been thinking about the problem for nearly a decade.


Sorry about the toes, but identifying with a programming language will always get you into trouble. My gut is fine, thank you very much.

They're C heads, trying to build a simplified C; at least they were, they sort of messed that up with garbage collection, wild west threading and using reflection for all the wrong reasons.

Sure, now the tone is different. But back in the days when the language was first released, they would make fun of anyone mentioning generics or exceptions.


To the Go authors:

* I've never seen a language rise in popularity as fast as Go.

* Go is doing something right.

* Please refrain from changing the language.


You could almost write a `check!` macro for Rust that would have the same semantics. You can obviously already write one when no `handle` exists (it'd just be `try!` or `?`), but I can't think of how you'd build up the handles like they describe without a procedural macro.

You likely could do something semantically equivalent with procedural macros.

`handle!{ block }` would define a block to inline later when needed and then `check!` would do the normal thing but also inline that block, just like Go's.

Also, I feel like it is a little disingenuous to say that Rust has `no equivalent of handle` when it has something so similar in `Result::or_else(self, Op)`. You don't get the handle accumulation and you have to pass in a closure at the site, but they're remarkably similar.


A question on generics from the docs

if the declaration is

func Keys(type K, V)(m map[K]V) []K

why do we need to call it like

keys := Keys(int, string)(map[int]string{1:"one", 2: "two"})

can't the types int and string be inferred from

Keys(map[int]string{1:"one", 2:"two"})


They are. Further in the draft there is an example:

  var x []int
  total := Sum(x) // shorthand for Sum(int)(x)


ah crap. how did I miss that! Thanks!


> In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier.

Yep!

The overall design looks quite good, though.


I do like the idea of contracts for generics, seems to fit in well with Go's interfaces and whole design.

What I do not like at all is that type arguments are put in parens, same as normal arguments. Something like Java's <T> would be more readable IMO. I have to do a double-take to realize that the first paren is for the type arguments. Combine that with the parens that are used for receivers and multiple return values, and function signatures become paren hell.


<T> has issues with ambiguous parsing, as < and > are also the less-than and greater-than operators. Rust recently had a small issue with this:

https://github.com/rust-lang/rust/pull/53562
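For illustration, with hypothetical angle brackets bolted onto Go, a statement like

    a, b = w < x, y > (z)

could be either a pair of comparisons or a call to a generic w instantiated with types x and y; the parser can't decide without already knowing what w is.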


[T] has precedents (Eiffel comes to mind).


Unfortunately [T] is also ambiguous with the current Go syntax. The parser can't distinguish an array declaration from a generic type.
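For example, with a made-up bracket syntax:

    var x [T]Vector

If T names a constant, this is an array of T Vectors; if [T]Vector meant a generic instantiation, it would be a Vector of Ts. The parser can't tell which without resolving T first.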


How about the following:

  contract Addable(t T) {
  	t + t
  }

  func Sum.<T Addable>(x []T) T {
  	var total T
  	for _, v := range x {
  		total += v
  	}
  	return total
  }
It has the nice property that it's somewhat analogous to type conversions, the grammar should be unambiguous, and it's much less visually confusing than re-using parentheses IMO.


I'm always so happy to see Eiffel mentioned. I read Object Oriented Software Construction immediately before I read the book on (then recently-released) Java 1.2, and it made it so clear how much Java got wrong :(


This is very exciting!

One challenge with introducing generics after the language has shipped is that the language was already designed around their absence. In particular, a number of "generic" or "polymorphic" operations are built into the language. Because they are handled specially, they have special-purpose syntax. Examples (note: I'm not very familiar with Go, so I might have some of this wrong):

* Addition, which is overloaded for various number types, uses "+".

* Accessing array and map elements uses "[...]".

* Equality, which is overloaded for various types, uses "==".

Having dedicated syntax for these makes sense in Go 1 because they are special. They do things user code cannot do. Prohibiting operator overloading reinforces this. If you see a "+" in code, you know it's the magical built-in "+" operator.

But with generics (and potentially overloading), these operations no longer need special powers. It's possible to write your own collection type with a type-parametric "subscript operator". You just don't get to use the "[...]" syntax.

This is particularly problematic because it interacts poorly with contracts, which specify syntactically which operations are used. Most of the contract examples in the doc here use "==", "+", and "[...]".

So even though you have generics and constraints (contracts), it's very hard to define a generic function that works with both built-in generic types (slice, map, etc.) and user-defined ones.
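A sketch of the mismatch, using the draft's contract syntax (the contract and function names here are invented):

    contract indexable(c C, k K) {
        c[k]
    }

    // satisfied by map[K]V, but never by a user-defined
    // container that exposes a Get method instead
    func Lookup(type C, K indexable)(c C, k K) {
        _ = c[k]
    }

Because the contract names the syntax rather than a method, built-in maps qualify and user-defined containers don't; a contract written against Get() has exactly the opposite problem.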

The end result is probably going to be something similar to Java where the "best practice" is to use named methods for all operations ("Add()", "Equal()", etc.) and then provide wrappers that lift the built-in types into this more flexible world like you do with ArrayList instead of raw arrays in Java.

That sucks because the built-in syntax is better. It's inane that Java has array literals, a nice subscript operator, and a terse ".length" property but 99% of Java code doesn't get to use any of them when working with "arrays". Instead of:

    array[0] = array[array.length - 1];
It's all:

    array.set(0, array.get(array.size() - 1));
I could certainly be wrong, but it seems Go is on that same path.

Aside from this, the generics proposal looks really neat to me. The limitations on methods will also likely be very painful, but it's a hard problem with no easy solution given Go's approach to structural typing, methods, and the desire to not require runtime specialization. Language design is hard. Language evolution is 10x harder.


I was hoping the type parameters would be part of the struct/interface/func declaration itself instead of part of the name in the declaration. I wonder how that will work with anon ifaces/structs/funcs? I assume just a type-parameter list before the anon decl? (seems easy with funcs, less so with types).

Also, I assume I can make named aliases of types with concrete type-parameter values?


There are examples of aliases in the draft, e.g.

  type VectorInt = Vector(int)


The case against exceptions is unfair to Java, whose model is CHECKED exceptions. Of course, people hate them, but they are semantically equivalent to the new proposed solution, which is itself much better than current error handling in Go, which is frankly horrid.

I must admit the check/handle mechanism is syntactically more pleasant though, avoiding surrounding code with try/catch.


I'd like to see sum types (often called tagged unions, variant types, or enums) introduced. I think they would fit Go well, and get us away from the current awkward "interface hell".

For example, imagine you're implementing an AST. You might have something like:

  type Identifier struct {
    Name string
  }
  type StringLiteral struct {
    Value string
  }
and so on. Currently the only way to unify them is with a dummy interface:

  type Expression interface {
    isExpression()
  }
and then you have:

  func (Identifier) isExpression()    {}
  func (StringLiteral) isExpression() {}
  // etc.
Then in your compiler you'll have a lot of switch statements:

  switch e := expr.(type) {
  case Identifier:
    // ...
  case StringLiteral:
    // ...
  default:
    panic("unexpected expression")
  }
There are multiple problems here.

One, unifying pure data structs with an interface is not what Go interfaces were intended for, and is pure type system abuse. Fine, so maybe in this case Expression also gets things like GetPos() and what not, which sort of justifies an interface. But there are plenty of use cases where there are no methods.

The other problem is that there's no way to prove at compile-time that a switch is exhaustive. Add another expression struct, and all your switches still compile. Someone made a post-processing tool (go-sumtype), but it's an awkward and buggy hack that has to be run as a "go generate" task next to your development workflow.

Thirdly, performance-wise, this forces every struct to be boxed in an interface.

I'd rather like to be able to declare something like:

  type Expression (Identifier || StringLiteral)
It can, I suspect, be simpler than Rust's full-blown sum type support, because you're essentially declaring a new type that holds the possible union of types.
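Under that (purely speculative) declaration, the switch from above could be checked for exhaustiveness at compile time:

    switch e := expr.(type) {
    case Identifier:
        // ...
    case StringLiteral:
        // ...
    }
    // no default needed: the compiler knows Expression
    // has exactly these two members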

Of course, this could be elegantly extended to errors:

  func Open(fn string) (*File || error)
The downside to this, as opposed to something like Rust's Result, is that you can't add methods, so you can't chain these calls without something more.

On the other hand, it would work together with the proposed check/handle mechanism.


I don't recommend any attempt to unify check/handle and panic/defer/recover. They don't behave the same because they aren't doing the same thing: one is controlled handling of normal errors with reduced boilerplate, the other is "crashing with cleanup".


> A more robust version with more helpful errors would be....

My issue with that example is that real code isn't like that. People don't use fmt.Errorf() with exactly the same string on every line; they like to add per-line context.

Then you're back to one handle block for every line and it's even worse.


This made my day. I'm so happy. Going to spend the week going over the draft proposals!


We need a variant of Greenspun's tenth rule for systems programming languages.

Any sufficiently old systems programming language that aims for simplicity and clean code will eventually devolve into a variant of C++, except ad-hoc and poorly specified.


What the Go team is doing seems diametrically opposed to "ad-hoc" to me.


Really wish they'd consider looking at some of the more fun things TypeScript types can do. Like specifying a return type as being Pick<SomeType, 'x' | 'y'>. Imagine this in combination with Go interfaces.


I've never understood why Go tries so hard to reinvent exceptions. I've read their manifesto, their books, and this new exception syntax, but I'm not convinced the problem they're describing exists; and, even if it does exist, that their implementation solves it.

Exceptions in Ruby, Python, Java, C#, etc. are a tried-and-true paradigm for handling unexpected code paths, with a clear syntax. The only problems with exceptions are that developers don't use them, don't use them properly, or aren't forced to use them (e.g., checked exceptions, à la Java's throws), all of which can be fixed with compilation rules.

As someone who writes defensive code and has made heavy use of exception throwing/handling for 14+ years, I find Go's lack of a rational exception framework a big turn-off.


> a tried-and-true paradigm for handling unexpected code paths, with a clear syntax

Citation needed here. Who seriously has this opinion? There was just a Java/JVM conference on the very topic of how to make exception handling better.

Tried? Yes.

True with a clear syntax? Definitely not.


> the only problem with exceptions is that developers don't use them, or don't use them properly, or aren't forced to use them

Well, that's enough of a rationale for trying some other mechanism, isn't it?

> Exceptions in Ruby, Python, Java, C#, etc. are a tried-and-true paradigm for handling unexpected code paths

The problem is, most errors aren't "unexpected code paths". Trying to open a file whose name is provided by the user, for instance, can fail for several reasons: the file does not exist, permission is denied, the file is busy, the file content is invalid, etc. None of these situations is "unexpected"; they are all part of the task being solved.

Now, accessing the first element of an empty list, dereferencing a nil pointer, dealing with a value that has an illegal state, facing a bus error or a stack overflow? That sucks, that is unexpected, and it should be dealt with by some special mechanism. That's where exceptions really shine (and Go has panic/recover for those, ahem, exceptional circumstances).
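To make the distinction concrete, a minimal Go 1 sketch (the fallback path is hypothetical):

    f, err := os.Open(path)
    if os.IsNotExist(err) {
        f, err = os.Open(fallbackPath) // hypothetical fallback location
    }
    if err != nil {
        return err // expected failure, reported in-band
    }
    defer f.Close()

A nil dereference or an out-of-range index, by contrast, panics: the special mechanism reserved for genuinely unexpected states.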


In the Error Values Draft Overview[1], rsc provides another argument against traditional exceptions:

>As a cautionary tale, years ago at Google a program written in an exception-based language was found to be spending all its time generating exceptions. It turned out that a function on a deeply-nested stack was attempting to open each of a fixed list of file paths, to find a configuration file. Each failed open operation threw an exception; the generation of that exception spent a lot of time recording the very deep execution stack; and then the caller discarded all that work and continued around its loop.

[1] https://go.googlesource.com/proposal/+/master/design/go2draf...


There are serious problems with exceptions.

1. It's often not clear which exceptions can be thrown from a function. I don't think many languages force you to list them all. In most languages it is a case of "this function can throw any exception it wants - read its source code (and that of all the functions it calls) if you want to know which".

2. Adding context to exceptions is even more verbose than Go's error handling. In the video for this post, for example, they use fmt.Errorf() for each line that might fail, so you can easily add context like "opening file failed: ..." or "parsing file failed: ...". The equivalent with exceptions is a separate try/catch for every line! I've never seen anyone bother with that.

This seems like a bit of a flaw with the "handler" approach by the way. Rust's map_err() is a bit nicer.
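For reference, the per-line context from point 2 in today's Go (parseConfig is a placeholder):

    f, err := os.Open(path)
    if err != nil {
        return fmt.Errorf("opening file failed: %v", err)
    }
    cfg, err := parseConfig(f)
    if err != nil {
        return fmt.Errorf("parsing file failed: %v", err)
    }
    // ... use cfg

With a single handle block you'd get one shared message for both failure points; recovering the per-line granularity means one handler per check.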


> It's often not clear which exceptions can be thrown from a function

Go 1 failed to solve that as well. "error" as an interface tells me nothing about what could go wrong with a function call. Go 2 is not looking like it will solve it either. I am happy with the proposals, but I know I'm still going to be looking at godocs and be forced to dig into source code to find out what error types actually come back, and hope they don't change from version to version.


I agree with this. It would be great if we didn't have to give up any type safety with error returns. Even still, this is relatively minor when you remember that people write code in JS, Python, Ruby, etc with zero type safety.


This is true. However, it is better in one big way: you at least know whether or not it will return an error at all.


This is just syntax, but I'm curious about the decision to go with regular parens for supplying type parameters, rather than the familiar angle brackets, or perhaps Julia's choice of curly brackets?



Angle brackets may be harder to parse; remember that compile times are an important Go feature.


I don't understand why promoting Go to be the next Java is a good thing. Let's just use it to do what it's good at, and if you need generics, go use a language that has generics. We already know what a language that can do everything looks like, and it's a mess.

Plus, are they going to rewrite the entire STL? Are we going to get a library of algorithms and data structures once we have generics? Generics pretty much change the entire look and feel of the language. Might as well make it a fork rather than an iteration.

So much for simplicity and copying. :) RIP Go 2009-2019


The Go team is pretty set on maintaining backwards compatibility—rewriting the standard library is the exact opposite of that. Just because the promise is Go 1.X won't break your code doesn't mean they don't remember Python 3.

Besides, which parts of the existing standard library would even benefit from generics themselves? Of course the obvious thing is to /add/ a set of standard container templates but that doesn't really step on anyone's toes.


What a strawman. Adding a basic PL feature means we're gonna rewrite the STL? Come on.


If you prefer Go as is, stay on 1.x? Besides, isn't v2 of a language like a fork anyway? There's a considerable investment in v1 already so it's not like it's going to disappear overnight.


What sorts of things can you do in the Haskell type system that you would not be able to do with the Go generics proposal?


Higher-kinded types? Monads, etc.
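Concretely (a sketch in the draft's syntax): you can write Map for slices, but not one Functor abstraction that also covers channels, maps, and so on, because that would need a type parameter that is itself a type constructor.

    // Expressible under the draft: map over a concrete container.
    func Map(type T, U)(xs []T, f func(T) U) []U {
        ys := make([]U, 0, len(xs))
        for _, x := range xs {
            ys = append(ys, f(x))
        }
        return ys
    }

    // Not expressible: abstracting over the container F itself,
    // i.e. a single Map for every F(T) -> F(U).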


Couldn't one define Monads, etc. with interfaces, once they allow generics?

(there's nothing in Haskell which enforces the category laws in the type system, AFAIK)


What about literal parameters for functions or types (i.e., value parameters, like C++'s non-type template parameters)? I'm just asking because C++ and D support them.


One step at a time. It's certainly a natural extension, and I don't think anything in the current design draft precludes it.


Go 2 considered harmful


After a year with Go I have to say that I am quite comfortable with it. The beginning was absolute hell, but then you open up your mind a bit and accept that it is what it is, and that if you want to use it YOU have to accept its concepts. And time heals all wounds :D

Of those three points, the error handling seems a waste of time to even bother implementing. The way it works now is fine.

The error values - yes, something should be done there. As mentioned in the draft, you often need information on where the error occurred (func, line, file), since the error can travel between libraries; you have dynamic messages, so you cannot declare a global variable to compare the error against based on string content; and lastly, an error code would be nice too (severity/HTTP status code).

Generics - I have to say this is a pain for most people. I got used to working around it via wrappers, and so far I'm good. I mean, if your argument is an interface{} and you accept bool or int, then why not just have a Value{b bool, i int} struct, check which one is set, and proceed with the logic? (See the sketch below.)
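i.e., something like this (field names invented):

    type Value struct {
        B     bool
        I     int
        IsInt bool // records which field is meaningful
    }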

I have typed data with 23 different types, and a proxy that implements all the interfaces with GetType, SetType, and AsType methods. Yes, it is a lot of code to write for something so trivial, but in the end it works and I can get on with my life. Internally, gRPC does the same when it generates a Go client.

To me, one of the things I miss the most is a ternary operator; that would save a ton of code. And I would also welcome the removal of nil maps: var foo []string can be used with append, but var foo map[string]string cannot be used as foo["one"] = "two", because you have to foo := map[string]string{} first. That means adding unnecessary, imho, lines everywhere you work with maps, just to avoid a panic due to the map being nil rather than empty.
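The asymmetry being described:

    var s []string
    s = append(s, "x") // fine: append allocates for a nil slice

    var m map[string]string
    m["one"] = "two" // panics: assignment to entry in nil map

    m = map[string]string{} // must initialize first
    m["one"] = "two" // now fine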


Also, they could implement a ``` delimiter to allow ` inside raw strings.


This is tragedy. We'll no longer be able to say "lol, no generics" :(



Yay, more tech churn...


When can Go get rid of the address reference (*) shit?

Check out Java and Python.


Generics are just syntax sugar around `interface{}` [1]; well, hopefully the runtime optimizes things for me.

I'm rather impressed that when `go` actually implements something I wanted, they still manage to do it in a way that is hacky and has massive trade-offs, while avoiding doing even the bare minimum to improve the compiler. It's disappointing.

[1] https://go.googlesource.com/proposal/+/master/design/go2draf... (see the efficiency section)



