> People laughed at Seymour Papert in the 1960s, more than half a century ago, when he vividly talked about children using computers as instruments for learning and for enhancing creativity, innovation, and "concretizing" computational thinking.[1]
> ...our intelligence resides not in individual brains but in the collective mind. To function, individuals rely not only on knowledge that is stored within our skulls but also on knowledge stored elsewhere, be it in our bodies, in the environment or especially in other people. Put together, human thought is incredibly impressive, but at its deepest level it never belongs to any individual alone. [1]
This has been circulated around HN and Reddit several times, and it's disappointing that someone of Norvig's stature would present the data in such a misleading way.
Here's a good explanation posted by "tedsanders" the last time this came up on HN:
"""
All of these claims from Google that say competition performance hurts or that GPA doesn't matter are missing one huge thing: selection bias.
Google only sees the performance of the employees that it hires, not the performance of the employees that it doesn't hire. Because of this, the data they analyze is statistically biased: all data is conditioned on being employed by Google. So when Google says things like "GPA is not correlated with job performance" what you should hear is "Given that you were hired by Google, GPA is not correlated with job performance."
In general, when you have some thresholding selection, it will cause artificial negative correlations to show up. Here's a very simple example that I hope illustrates the point:
Imagine a world where high school students take only two classes, English and Math, and they receive one of two grades, A or B. Now imagine a college that admits students with at least one A (AB, BA, or AA) and that rejects everyone without an A (BB). Now imagine that there is absolutely zero correlation between Math and English - performance on one is totally independent of the other. However, when the college looks at their data, they will nonetheless see a stark anticorrelation between Math and English grades (because everyone who has a B in one subject always has an A in the other subject, simply because all the BBs are missing from their dataset).
When Google says that programming competitions are negatively correlated with performance and GPA is uncorrelated with performance, what that likely means is that Google's hiring overvalues programming competitions and fairly values GPA.
"""
I've also heard people involved in Google's Code Jam competition say that Norvig's study was done a long time ago, and no longer really applies.
I think what you said is true. But the main point, implied here though not stated, is that the mindset and competencies involved in programming competitions are quite different from real work. After all, being good on the job depends more on reflection, going slowly, and getting things right. ;-)
This is relative to other people who had been hired at Google - there's probably still a positive correlation between those two variables amongst the general population.
> monads aren’t actually all that complicated. In fact, most of the experienced functional programmers I’ve met consider them downright simple. It’s just that newcomers often have a really hard time trying to figure out what exactly monads even are... A lot of intermediate-to-advanced functional programmers have taken it upon themselves to write monad tutorials... But for the most part, these tutorials never seem to work.
The article is spot on. The problem with monads is not the what, it's the why. Tutorials often explain the what, which is easy to understand (except where people get creative with similes... "monads are like tuna sausages, but you can build castles with them").
Not having used them, I'm still puzzled at _why_ I would need monads at all.
At this point, tutorials use examples like "like flatMap and Maybe in other languages!" which is even more confusing. Why do I need a monad then, if there are similar constructs in other languages that don't need the understanding of monad? Why the complexity? What do I get from monads?
Monad, like any interface, is useful because we can abstract over it. Let's take an example from Java: both ArrayList and LinkedList implement List. This means I can write code that is agnostic to the implementation of List, and later I can drop in any implementation I want. Seen from the other direction: if I write something that resembles a list and I implement the List interface, all of the code that's compatible with List will also be compatible with my new implementation.
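To make "agnostic to the implementation" concrete, here's the same idea in C# terms (a small sketch; Java's List/ArrayList/LinkedList correspond roughly to ICollection<T>/List<T>/LinkedList<T>):

    using System;
    using System.Collections.Generic;

    class InterfaceReuseDemo
    {
        // Written against the interface, so any implementation can be dropped in.
        static int TotalLength(ICollection<string> words)
        {
            int total = 0;
            foreach (var w in words) total += w.Length;
            return total;
        }

        static void Main()
        {
            var arrayBacked = new List<string> { "foo", "bar" };
            var linked = new LinkedList<string>(new[] { "foo", "bar" });

            Console.WriteLine(TotalLength(arrayBacked)); // 6
            Console.WriteLine(TotalLength(linked));      // 6
        }
    }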
Similarly, if I define a new data type, and realize that it can implement Monad (or more precisely: that it has a monad instance), then I'll be rewarded with a giant library of code that's already compatible with my new data type [1]. Monad is an especially interesting interface because (1) it turns out many things conform to it [2] and (2) it comes with a set of algebraic laws. There is some controversy over how strict we need to be about the algebraic laws, but in some sense the algebraic laws are part of why such a general interface can be meaningful at all.
So yes, it's true that Monad allows us to sequence effects in a lazy pure language, and that's important, but I think a more down to earth reason to be interested is that it allows for more code reuse [3].
[3] It's also worth mentioning that Monad gets all the attention, but Haskell is flush with other mathematically inspired interfaces that are just as general.
> At this point, tutorials use examples like "like flatMap and Maybe in other languages!" which is even more confusing. Why do I need a monad then, if there are similar constructs in other languages that don't need the understanding of monad? Why the complexity? What do I get from monads?
Monads are a mathematical concept, like a ring. "Maybe" is a monad in whatever language you use it in, just like integers are a ring in whatever language you use them in. Pointing out that complex numbers, rational numbers, and n×n matrices are all examples of rings and wrapping everything up into a type class doesn't add to the complexity of the language.
The 'why' question is my pet peeve too - often because the answer to
> Why do I need a monad then, if there are similar constructs in other languages that don't need the understanding of monad?
is that you usually don't.
Where I work, people use a handful of monads daily, but most people don't know the monad laws, nor do they need to.
But now that they know a few specific monads, the "why" of monads is really, well, we needed certain convenience methods (namely bind aka flatMap) to make using e.g. Maybe or promises not a huge pain. And we chose a uniform way to do it across different wrapper types, so that we minimize the cognitive overhead.
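For the curious, the laws themselves are short. Here's a sketch that spot-checks them, using int? as a stand-in Maybe (the Bind helper is my own illustration, not from this thread):

    using System;

    static class MaybeLaws
    {
        // bind/flatMap for a nullable int: stop on null, otherwise apply f.
        static int? Bind(this int? m, Func<int, int?> f) => m.HasValue ? f(m.Value) : null;

        static void Main()
        {
            Func<int, int?> f = x => x + 1;
            Func<int, int?> g = x => x % 2 == 0 ? (int?)x : null;
            int? m = 41;

            Console.WriteLine(((int?)5).Bind(f) == f(5));                       // left identity
            Console.WriteLine(m.Bind(x => (int?)x) == m);                       // right identity
            Console.WriteLine(m.Bind(f).Bind(g) == m.Bind(x => f(x).Bind(g)));  // associativity
        }
    }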
It lets you model sequential operations and side effects in a pure functional language. You can usually write your code "manually" without monads, and see what it's like. For example, the State monad is basically a function that takes in some state and returns a value along with a new state. That's a pure function, but composing/chaining such functions together is pretty messy. If you do that manually once, you'll get some intuition for why the State monad is useful. Then you can expand to other types of monads and eventually have an intuition for their general value.
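To see it concretely, here's a minimal sketch of the State idea in C# (hypothetical names, analogous to Haskell's State, not any particular library):

    using System;

    // A stateful computation is a function from a state to a (value, new state) pair.
    public delegate (A Value, S State) State<S, A>(S state);

    public static class StateExt
    {
        public static State<S, A> Return<S, A>(A a) => s => (a, s);

        public static State<S, B> Bind<S, A, B>(this State<S, A> m, Func<A, State<S, B>> f) =>
            s =>
            {
                var (a, s1) = m(s);   // run the first computation
                return f(a)(s1);      // feed its value and updated state to the next
            };
    }

    class StateDemo
    {
        static void Main()
        {
            // A counter: returns the current count and increments the state.
            State<int, int> next = s => (s, s + 1);

            var two = next.Bind(a => next.Bind(b => StateExt.Return<int, int>(a + b)));

            var (value, final) = two(0);
            Console.WriteLine($"value = {value}, final state = {final}"); // value = 1, final state = 2
        }
    }

Without Bind, every call site has to thread the state through by hand; Bind is exactly that plumbing, written once.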
Edit: You don't really need monads any more than you need classes, interfaces, functions, procedures (just use goto!), etc. They just help bring a bunch of seemingly disparate functionality together into one standard (which is helpful in, e.g., Scala or Haskell, where there's some syntactic sugar for dealing with monads). People complained for the longest time (amongst many, many other things) that JavaScript lacked classes, even though you could totally hack it together with `new`, functions and prototypes. Similarly, FP people complain that everything else lacks monad support, even though they can hack it together with largely language-independent features (typically without compile-time checking).
I don't have an intuitive grasp of the formal definition of monads, but some examples of things that I _think_ are monads (in Java ... sorry :/ it's all I work with these days):
* jOOQ [1]: you use it to build up a sequence of SQL statements with a pleasant chaining API, then execute the whole shebang
* Promise/future chaining: you build up a sequence of promises that should apply in-order, then defer their execution until later (unless you use a language that performs a transformation at compile time which effectively does this for you).
* Streams/optional mapping: you build up a sequence of functions that should apply in-order to every element in a potentially empty sequence (optional: a sequence of 0 or 1); see the C# sketch after this list.
* The builder pattern: you build up a sequence of property values, then (potentially) construct the entire object.
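The streams/optional bullet is the easiest one to see in C# terms, where LINQ's SelectMany is exactly the list monad's bind (a small sketch):

    using System;
    using System.Linq;

    class ListMonadDemo
    {
        static void Main()
        {
            var xs = new[] { 1, 2, 3 };
            var ys = new[] { 10, 20 };

            // Each 'from' binds over a sequence; an empty sequence
            // short-circuits the rest, much like null does in the
            // NoNull example further down the thread.
            var pairs = from x in xs
                        from y in ys
                        select x * y;

            Console.WriteLine(string.Join(", ", pairs)); // 10, 20, 20, 40, 30, 60
        }
    }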
> Why do I need a monad then, if there are similar constructs in other languages that don't need the understanding of monad? Why the complexity? What do I get from monads?
They remove the need to type the same shit over and over again. With a list monad you avoid having to write the loop constructs, for the Option/Maybe monad (optional value) you avoid having to write if/then/else, for the Either/Choice monad you avoid having to write if/then/else and manually propagating an error value on failure, for the State monad you avoid having to pass a context value through as an argument to every function, for the Try monad you get exception handling and error propagation like the Either/Choice monad, with the Writer monad you are able to do logging without a global logging system or having to manually pass through a logging context, etc.
Ultimately they're there to reduce boilerplate and to help write more composable and reliable code. There are other benefits, but this encapsulation and abstraction of common patterns is what most programmers strive for, no matter what language they use, so I feel it's important to put them in that context.
Let me give you a very simple example. I'm going to use C# because it's been a long time since I've done any Haskell and I'll probably get it wrong.
I'm going to create a monad called NoNull. If at any point it sees a value that is null then it will stop the computation and return without completing. It's a slightly pointless example because C# has the null propagating operator, but conceptually it should be easy for any programmer to grasp that using null is bad, so being able to 'early out' of a computation when you get a null value is desirable.
First I'll define the type:
public class NoNull<A> where A : class
{
public readonly A Value;
public NoNull(A value) =>
Value = value;
public static NoNull<A> Return(A value) =>
new NoNull<A>(value);
public NoNull<B> Bind<B>(Func<A, NoNull<B>> bind) where B : class =>
Value is null
? NoNull<B>.Return(null)
: bind(Value);
public NoNull<B> Select<B>(Func<A, B> map) where B : class =>
Bind(a => NoNull<B>.Return(map(a)));
public NoNull<C> SelectMany<B, C>(Func<A, NoNull<B>> bind, Func<A, B, C> project)
where B : class
where C : class =>
Bind(bind).Select(b => project(Value, b));
}
It's a class that has a single field called Value. The two functions to care about are:
Return - Which constructs the NoNull monad
Bind - Which is the guts of the monad, bind does the work
Select and SelectMany are there to make it work with LINQ. I have implemented them in terms of Bind.
As you can probably see, Bind encapsulates the test for null: if null is found, the result is Return(null) and the `bind` delegate is never run.
Next we'll create some NoNull monadic strings:
var w = NoNull<string>.Return("hello");
var x = NoNull<string>.Return(", ");
var y = NoNull<string>.Return("world");
var z = NoNull<string>.Return(null);
The last one contains null, the bad value.
I can now use those values like so:
var result = from a in w
from b in x
from c in y
select a + b + c;
Notice there are no checks for null; the monad does that work for us.
In this instance, result.Value is equal to "hello, world".
Now if I inject a bad value into our monadic computation:
var result = from a in w
from b in x
from c in y
from d in z // <--- this is null
select a + b + c + d;
Then result.Value is equal to null and `a + b + c + d` never ran at all.
The alternative is this:
string w = "hello";
string x = ", ";
string y = "world";
string z = null;
string result = null;
if (w != null)
{
if (x != null)
{
if (y != null)
{
if (z != null)
{
result = w + x + y + z;
}
}
}
}
Here the programmer constantly has to do the work that the monad does for free. I've nested those if statements to try and give you an impression of what the series of 'from' expressions are doing in the example above (in Haskell this is known as 'do notation'). Each 'from' creates a context over the whole of the rest of the expression.
For the avoidance of doubt about the nesting, I could have written it thus:
var result = w.Bind(a =>
x.Bind(b =>
y.Bind(c =>
z.Bind(d =>
NoNull<string>.Return(a + b + c + d)))));
Here [1] is an example of a Parser monad in action in C# (it's part of a parser that parses C# source that I use to generate API reference docs for my github projects [2]). This runs code 'in the gaps' between each 'from' statement. What it does in those gaps is check if the parser above it failed, and if so it bails out with an appropriate error message based on what it expected. But you don't see any if(result.IsFaulted) ... code anywhere, or maintenance of the position of the parser in the input stream, because that is encapsulated in the Parser monad. It makes the code very clear (I think) in that it's not cluttered with the regular scaffolding of control structures that you usually see in imperative languages.
What's really quite beautiful about monads is the way they compose, and I think this is especially beautiful with parser combinators. A Parser<Char> which parses a single character has the same interface as a Parser<SourceFile> which parses an entire source file. Being able to build simple parsers and then combine them into more complex ones is just a joy to do. Clearly parsing isn't unique to monads, but there's an elegance to it (and monads in general) which I think is hard to resist.
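To give a flavour of that interface, here's a toy parser monad in the same Bind/Return style as NoNull above (my own sketch with hypothetical types, not the parser from [1]):

    using System;

    // A Parser<A> consumes a string and either fails or yields a value
    // plus the remaining unconsumed input.
    public delegate (bool Ok, A Value, string Rest) Parser<A>(string input);

    public static class Parse
    {
        public static Parser<A> Return<A>(A a) => input => (true, a, input);

        public static Parser<B> Bind<A, B>(this Parser<A> p, Func<A, Parser<B>> f) =>
            input =>
            {
                var (ok, a, rest) = p(input);
                // Failure short-circuits, exactly like null in NoNull.
                return ok ? f(a)(rest) : (false, default(B), rest);
            };

        // Parses one character satisfying a predicate.
        public static Parser<char> Satisfy(Func<char, bool> pred) =>
            input => input.Length > 0 && pred(input[0])
                ? (true, input[0], input.Substring(1))
                : (false, default(char), input);
    }

    class ParserDemo
    {
        static void Main()
        {
            // A Parser<char> and a Parser<string> compose through the same interface.
            var digit = Parse.Satisfy(char.IsDigit);
            var twoDigits = digit.Bind(a => digit.Bind(b => Parse.Return($"{a}{b}")));

            var (ok, value, rest) = twoDigits("42x");
            Console.WriteLine($"ok = {ok}, value = {value}, rest = {rest}"); // ok = True, value = 42, rest = x
        }
    }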
“Any error may vitiate the entire output of the device. For the recognition and correction of such malfunctions intelligent human intervention will in general be necessary.”
— John von Neumann, First Draft of a Report on the EDVAC, 1945
Perhaps no advantage, but it's a good thing. As in the CPU market: there's Intel, AMD, and even alternative architectures, e.g. ARM. It's just an analogy. :-)
The best reason to start a company is that you are obsessed with solving a problem-- there is some pressing issue that you need to fix, or some product that you need to exist. Don't chase hot new technologies or perceived market opportunities (especially not as a 21-year-old newbie to the real-world market). The road to a successful startup is so long and hard that, without insane levels of conviction, most will fail. Don't start a company for the sake of starting a company-- start one because you honestly believe that you have to.
Paul's essay "How You Know" [1] has a great analogy about the subconscious mind:
“Reading and experience train your model of the world. And even if you forget the experience or what you read, its effect on your model of the world persists. Your mind is like a compiled program you've lost the source of. It works, but you don't know why.”
It reminds me of a book I read last year: Bored and Brilliant[1].
[1] https://www.amazon.com/Bored-Brilliant-Spacing-Productive-Cr...