
It's pretty simple.

   x = 1
   addSomething(y) = y + x
The above is not a combinator. addSomething relies on the line x = 1 and is forever tied to it. You cannot reuse addSomething without moving x = 1 with it. Therefore addSomething is not modular. This is the root of organizational technical debt: when logic depends on external factors it cannot be reorganized or moved.

This is also a big argument against OOP because OOP is basically the same exact thing with a bit of scope around it:

  class B()
     x = 1
     addSomething(y) = y + x
     divideSomething ...
     b = 3
Now addSomething can never be used without taking x = 1 and everything inside B with it. It is less modular as a result.

A combinator is like this:

  addSomething(x,y) = x + y
fully modular and not tied to state.
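
For comparison, here is a minimal Python sketch of the same three cases (the names are purely illustrative):

    x = 1

    def add_something_free(y):
        # Depends on the surrounding x; cannot be moved without bringing x along.
        return y + x

    class B:
        def __init__(self):
            self.x = 1

        def add_something_method(self, y):
            # Depends on self.x, so it cannot be reused outside an instance of B.
            return y + self.x

    def add_something_combinator(x, y):
        # Everything it needs arrives as a parameter; it can be moved anywhere.
        return x + y

    print(add_something_free(2))            # 3
    print(B().add_something_method(2))      # 3
    print(add_something_combinator(1, 2))   # 3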

The point-free style is a bit long to explain. Another poster brought up readability, and now I think I went too far in recommending it; employing that style is basically like going vegan. Just stick to combinators and you get 99% of the same benefits.

Suffice it to say, the point-free style eliminates the use of all variables in your code. You are building pipelines of pure logic without any state.
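
As a rough approximation in Python (a sketch only; the lambdas still name their arguments, since Python isn't built for this), a point-free pipeline composes functions instead of naming intermediate values:

    from functools import reduce

    def compose(*fns):
        # Right-to-left composition: compose(f, g)(x) == f(g(x)).
        return reduce(lambda f, g: lambda x: f(g(x)), fns)

    # The pipeline itself never names the data it operates on.
    count_odds = compose(len, list, lambda xs: filter(lambda n: n % 2 == 1, xs))

    print(count_odds([1, 2, 3, 4, 5]))   # 3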

When you get rid of as much state as possible, your logic will not have dependencies on that state, and thus will generally be free of the technical debt caused by logic being tied to its dependencies.



The downside to taking that combinator approach too dogmatically is that passing all state as parameters can get extra unwieldy, because now a simple change in data schema can result in you refactoring every single function call.

This dilemma has a name: The Expression Problem. A decent summary can be found here.

https://wiki.c2.com/?ExpressionProblem

Functional programming is an amazing paradigm for most domains. However, some domains will take a seasoned functional programmer and make them want to jump off a cliff. UI programming, game programming, and simulation programming are some examples where pure functional approaches have never made a dent, and for good reason.


One more thing I should mention: UI programming and game programming are currently the areas where functional programming techniques are sort of in vogue.

If you want to do FP in your job, becoming a front-end developer is your best bet, as React + Redux currently follow a paradigm called functional reactive programming (FRP), with React moving further in the direction of FP and trying to separate out all side effects from pure functions.

A popular pattern in game programming is called ECS (entity component system), which isn't strictly FP but is similar in the sense that functions are kept separate from data rather than attached to data as they are in OOP. The game industry is definitely heading in this direction over OOP-style techniques. It's actually rather similar to FRP.
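
A toy sketch of the ECS idea in Python (illustrative only, not how a real engine lays out its data): components are plain data, and systems are free functions over whatever entities carry the right components.

    from dataclasses import dataclass

    @dataclass
    class Position:
        x: float
        y: float

    @dataclass
    class Velocity:
        dx: float
        dy: float

    # Entities are just ids mapped to plain component data; no methods attached.
    positions  = {1: Position(0.0, 0.0), 2: Position(5.0, 5.0)}
    velocities = {1: Velocity(1.0, 0.0)}

    def movement_system(positions, velocities, dt):
        # A "system": logic kept separate from the data it transforms.
        for entity, vel in velocities.items():
            if entity in positions:
                positions[entity].x += vel.dx * dt
                positions[entity].y += vel.dy * dt

    movement_system(positions, velocities, dt=1.0)
    print(positions[1])   # Position(x=1.0, y=0.0)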


Generally the solution that functional programmers arrive at when facing UI / game / simulation work is to have a single consistent, persistent data structure. Incidentally, this is also basically what SQL is: a functional language for transactionally querying the global state.
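
One hedged sketch of what that can look like (Elm/Redux-style, not necessarily what the parent has in mind): all state lives in one value, and it is only ever replaced by a pure update function, never mutated in place.

    def update(state, event):
        # Pure (state, event) -> new state; returns a fresh dict instead of mutating.
        if event["type"] == "click":
            return {**state, "clicks": state.get("clicks", 0) + 1}
        return state

    state = {}
    for event in [{"type": "click"}, {"type": "click"}]:
        state = update(state, event)
    print(state)   # {'clicks': 2}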


>The downside to taking that combinator approach too dogmatically is that passing all state as parameters can get extra unwieldy, because now a simple change in data schema can result in you refactoring every single function call.

This should happen with methods too. Whether a variable is free or a parameter doesn't change anything.

   x = {b: 1, c: 2}
   f(x) = x.b
   g()  = x.b
A change in x, say deleting b, will require a refactor for both the combinator and the method.

I'm not saying combinators are the solution to everything. Of course not. I'm saying combinators are the solution to technical debt caused by organizational issues. Of course there are trade offs, I never said otherwise.

Both of the issues above are separate from the expression problem, though. Personally I don't think the expression problem is much of a problem. Whether you add a new function or a new shape to either paradigm in the example link you gave, the number of logical operations you have to add is equal in both cases. The difference is where you put those operations: in one case they can be placed close together, in the other they have to be placed in separate scopes, but the total amount of logic written to achieve a given goal is the same.

For example, adding perimeter to either paradigm necessitates defining the perimeter of every shape no matter what. Neither paradigm actually offers a shortcut when new information is introduced into the system.
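
A small Python sketch of that claim (shapes chosen arbitrarily): adding perimeter means writing one case per shape either way; only the placement differs.

    # "Functional" organization: one function, one branch per shape.
    def perimeter(shape):
        if shape["kind"] == "circle":
            return 2 * 3.14159 * shape["r"]
        if shape["kind"] == "rect":
            return 2 * (shape["w"] + shape["h"])
        raise ValueError(shape["kind"])

    # "OOP" organization: the same two bodies, placed in separate classes.
    class Circle:
        def __init__(self, r): self.r = r
        def perimeter(self): return 2 * 3.14159 * self.r

    class Rect:
        def __init__(self, w, h): self.w, self.h = w, h
        def perimeter(self): return 2 * (self.w + self.h)

    print(perimeter({"kind": "rect", "w": 2, "h": 3}))   # 10
    print(Rect(2, 3).perimeter())                        # 10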


Classes and methods are just sugar around namespaces, functions with an implicit "this" param, and some extra markup around design ownership (i.e. private members).
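
A rough Python sketch of that desugaring (names are made up): a method is close to a free function that takes the instance as an explicit first parameter.

    class Counter:
        def __init__(self):
            self.n = 0

        def bump(self):
            self.n += 1

    def bump(counter):
        # The "desugared" method: the instance is an explicit parameter.
        counter.n += 1

    c = Counter()
    Counter.bump(c)   # calling the method through the class, "self" passed explicitly
    bump(c)
    print(c.n)        # 2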

You don't gain or lose state with classes alone. Your examples didn't remove any state: x is still there, it's just not B.x.

What you're fighting against is side effects and reducing what is in scope at any given time. One could argue that the goal of classes is the same!

Sadly one can write terrible, leaky code in either style.


I am not talking about leaky code. I am talking about code that is not modular.

Rest assured, I know you’re talking about a perceived isomorphism between a function with a struct as a parameter and the same struct with a method. There are some flaws with this direction of thought.

It is the use of an implicit 'this' that breaks modularity. When a method is pulled outside its class there is nothing for the 'this' to implicitly refer to, which prevents the method from ever being moved outside the context of the class. This breaks modularity. Python, where 'self' is explicit, does not suffer from this issue.

Couple this with mutation. Methods often rely on temporal phenomena (i.e. mutations) to work, meaning that a method cannot be used until after a constructor or setter has been called. This ties the method to the constructor or setter, rendering it less modular, because the method cannot be moved or used anywhere without moving or using the constructor with it.
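
An illustrative Python sketch of that temporal coupling (names hypothetical): send only works after open has mutated the object.

    class Connection:
        def __init__(self, host):
            self.host = host
            self.socket = None

        def open(self):
            self.socket = f"socket-to-{self.host}"   # stand-in for real setup

        def send(self, msg):
            # Silently depends on open() having been called first.
            return f"{self.socket}: {msg}"

    conn = Connection("example.com")
    # conn.send("hi")   # broken until open() runs: "None: hi"
    conn.open()
    print(conn.send("hi"))   # "socket-to-example.com: hi"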

My claim is that combinators can be reorganized without dragging context around, thereby eliminating the technical debt related to organizing, repurposing, and reusing logic.

Note that when I say combinator, I am not referring to a pure function.


So basically, use functions but limit your use of closures? As in define your functions to be dependent only on parameters and not surrounding scope (even if the surrounding scope is immutable/pure)? If that’s the lesson, I’m all for it, with the exception of fully local closures that are used more for expressiveness than standalone functionality.


>So basically, use functions but limit your use of closures?

Not limit: terminate their use altogether, along with classes, because methods in classes are basically closures.
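
A small Python sketch of why I say that (names illustrative): a bound method captures its instance in much the same way a closure captures its environment.

    class Adder:
        def __init__(self, x):
            self.x = x

        def add(self, y):
            return y + self.x      # depends on the captured self.x

    def make_adder(x):
        def add(y):
            return y + x           # depends on the captured x
        return add

    print(Adder(1).add(2))     # 3
    print(make_adder(1)(2))    # 3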

>I’m all for it, with the exception of fully local closures that are used more for expressiveness than standalone functionality.

Sure, I can agree with this, though formally, when you write a local closure you are preventing it from ever being reused. The philosophy of this style is to assume an unknown future where anything has the possibility of being reused.

When too much of your logic gets inserted into "local closures" your program will be more likely to hit the type of organizational technical debt I am talking about above.

It's not a huge deal-breaker though; you can always duplicate your code to deal with it. I'm not against shortcuts, but programmers need to know when they are actually taking a shortcut, and be aware of the formal rules that would eliminate organizational technical debt.

Many functional programmers are unaware of the origins of organizational technical debt and mistakenly build it into their programs with closures even when that wasn't their intention, which is separate from your case, since you are doing it intentionally.


I think we’re mostly in agreement. I’m a little looser than your absolute in practice, but I apply the same principles. Where I’m looser is basically an allowance for closures as a simple expression (and where languages with more expressiveness may not require a closure). If any local logic becomes more complex than that, I’m quick to parameterize it and move it out to its own function.


Yeah, I'm already on board with modular functions and functional programming in general. I was wondering about the point-free thing. I agree that's like going vegan.


Functional programming still allows functions that are not combinators, so I'm referring to that specifically, not functional programming in general. The OP is recommending functional programming; I'm taking it a step further.


It sounds like a combinator is roughly the same thing as a pure function. I'm more familiar with the term pure function, and OP does specifically advocate for pure functions.


No, they refer to different things but can intersect. Not all pure functions are combinators. Just look it up. Haskell is purely functional but it also promotes many patterns that are not combinatorial.


Hah, I was taught that style is "pointless"...

It comes from topology, no?


Yeah that's a fun name for it. It is often a useful style:

    map (not . (`elem` [2,3])) [1,2,3,4] ===> [True,False,False,True]
or

    grep foo bar.txt | wc -l
Also chaining in OOP is a bit "pointless", in that it doesn't mention the "points":

    foo.bar().baz()
But, like most things, it's best _in moderation_.


No idea. Does it? Never studied topology.



