All programming philosophies are about state (worldofbs.com)
377 points by signa11 on Feb 6, 2023 | 238 comments



The article is grouping things together that don't belong in the same categories.

OO, functional, imperative, declarative: these are ways of controlling dispatch.

Monoliths and microservices are both ways to organize codebases and teams of programmers and control whether dispatch is intermediated by the network or not. Either way, both of these options are implemented by some kind of language in the previous category (OO, functional, imperative, or declarative).

Service-oriented architecture applies to both monoliths and microservices, and very few programmers still working in the industry have really seen what an alternative to service-oriented architecture actually looks like.


All those things do actually fit in the broad category of “ideas pertaining to programming”. The author is illustrating a general notion. Far from being a problem, the divergent examples support the basic thesis.


I vehemently disagree.

It really is about state; dispatch is an implementation detail and can be simulated in many languages.

The main difference between monoliths and microservices lies in making coherent and, where possible, atomic changes to the state of different subsystems. Monoliths allow the use of blocking primitives or atomic transactions in software transactional memory to achieve that; doing so with microservices is much harder.
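
To make that concrete, here's a minimal in-process Python sketch (the names are mine): two subsystems are updated under one blocking primitive, so any other thread that takes the same lock never sees the order without the matching inventory change. Across two separate services there is no such primitive to reach for; you'd need distributed transactions or sagas instead.

    import threading

    # Two "subsystems" living in the same process.
    inventory = {"widgets": 10}
    orders = []
    lock = threading.Lock()

    def place_order(quantity: int) -> None:
        # One blocking primitive makes the change to both subsystems atomic
        # with respect to other threads using the same lock.
        with lock:
            inventory["widgets"] -= quantity
            orders.append({"widgets": quantity})

    place_order(2)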

The very existence of so many database system implementations attests that many, if not most, software systems require consistent and atomic transactional processing of state.

State is central to all software.

Which dispatch method is used to achieve coherency in the processing of state changes is an implementation detail.


The real difference between monoliths and microservices is decoupling teams to enable them to move independently in their software development lifecycles.

I've seen people set up a microservice architecture that was really a distributed monolith because multiple services were reading/writing from the same database table at the same time. I've also seen them use a central locking service as part of that.

That just shows that it's possible to carry forward the state management practices of a monolith into microservices. State management is not the differentiator.


The same is true for monoliths, like the Linux kernel.

You definitely can decouple teams to enable them to move independently in their software development lifecycles with monoliths too. Our team definitely does that.

And in the realm of programming languages, state management is a differentiator. For what it's worth, the very difference between the Turing machine and the lambda calculus is the ability of the computer (a human computer, at the time) to rewrite part of the state when following the rules of a Turing machine.

Again, for what it's worth, when you have arbitrarily modifiable state, not all things are possible: https://www.mail-archive.com/haskell-cafe@haskell.org/msg797...


I agree this is mixing apples and oranges. To the degree that server architecture has to preserve state in various situations, though, coming from running my own dedicated monoliths and federated systems to service-oriented architecture about a decade ago, I have to grudgingly admit that the service model really is just cleaner and easier to wrangle. I'm thinking right now of a major version DB upgrade I have to pull this week where I'm going to have a go at the new blue/green provisioning provided by RDS. The old way would have required major downtime, rewiring web services, lots of risk of data collision, not to mention time on the phone with a datacenter just to get up and running. Ultimately the old ways always felt duct taped in some regard, like trying to switch the scenery of a stage production in the middle of an act. I hate that I'm so reliant on Amazon now, but it's just wildly more efficient.


Would you say that this distinction between SOA and microservices, which I found in an O'Reilly report [1], is correct:

> One of the fundamental concepts to remember is that microservices architecture is a share-as-little-as-possible architecture pattern that places a heavy emphasis on the concept of a bounded context, whereas SOA is a share-as-much-as-possible architecture pattern that places heavy emphasis on abstraction and business functionality reuse. By understanding this fundamental concept—as well as the other characteristics, capabilities, and shortcomings of both microservices and SOA that I discussed in this report—you can make a more informed decision about which architecture pattern is right for your situation.

Or is this a distinction you would not make (or were not even aware of)?

[1] Mark Richards - Microservices vs. Service-Oriented Architecture


To be honest, I've never really thought of the architectures I work on as being divided along those lines. I think it may be a more important distinction for larger teams. In general I think in terms of separation of duties and authority; my preference is for loose coupling and lazy loading, separate data stores and applets for separate purposes, what might be called microservices; but referring to SOA I really just mean the sharding of different parts of the stack to independent services (several RDSs, S3 buckets, EC2 instances that run cron tasks to sync disparate data systems, beanstalks for some app frontends, etc). I guess some applications within that paradigm are more "micro" than others. Some have custom APIs and others are bound more tightly to central software functions. But perhaps I don't really understand what others are talking about when they stress this distinction.


>OO, functional, imperative, declarative: these are ways of controlling dispatch.

Not really. They are orthogonal to dispatch, which is why you can have different dispatch strategies with all (or at least most) of them. Dispatch is an implementation detail.


There is a bit of categorical mixing going on and the author fails to identify actual hierarchies within the programming styles. But he does see, correctly, that it all has to do with state management. I'll point out what he got mixed up:

First OO programming is a specific style of imperative programming. OO is simply imperative programming with state and functions scoped into instances. If you are doing OO, you are also still doing imperative programming.

You can't really compare OO to functional because it would be like comparing a very specific concept to a very general concept. Like comparing cars and planes, but instead you're comparing a 2020 Tesla with all planes in general. No... either compare specific cars with specific planes or planes in general with cars in general.

Declarative programming, on the other hand, can be stateless or stateful so it doesn't really apply here. Declarative programming is sort of left field to all these programming styles because technically ChatGPT is declarative. Declarative programming is more about linguistics, AI and natural language processing. It's completely orthogonal to state management.

Functional programming and Imperative Programming are the two correct categories to compare here. They are siblings in the hierarchy and they are distinct and in essence simply two different ways of handling state.

Case in point: if you change one thing in your imperative program, just one thing, then it immediately becomes functional.

That one thing is making state immutable. Imperative programming with immutable state IS functional programming. Change the way you manage state, and you essentially change the name of your programming style.
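
A tiny Python sketch of that claim (my own illustration): the loop on top rebinds its accumulator on every iteration; expressing the same computation without rebinding anything pushes you straight into a fold.

    from functools import reduce

    # Imperative: the accumulator pair (a, b) is rebound on every iteration.
    def fib_loop(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Same computation with nothing rebound: the loop becomes a fold (reduce)
    # over immutable pairs, i.e. the usual functional shape.
    def fib_fold(n):
        return reduce(lambda acc, _: (acc[1], acc[0] + acc[1]), range(n), (0, 1))[0]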

The two paradigms are in essence two different styles of state management. All the other stuff with OO and Declarative is sort of fluff and distracts from the true essence of the isomorphism the author noticed here.


I think your mistake is that it's not about the "abstract"/academic/philosophical or etymological meaning of those terms (functional, imperative, etc).

It's about their meaning (as used for decades) within the developer community at large, as established and commonly understood.

And in that, there's absolutely a functional vs OO dichotomy, the first meaning Lisp, Haskell, etc, and the latter meaning Java/C++/Smalltalk and such (doesn't even matter if Smalltalk for example has a different conceptual model for its OO or different dispatch mechanism, etc).

And sure, "well, actually Lisp has CLOS" -- but OO and functional as commonly used (and as the author uses it) means the part of functional that's about first class functions and immutable data and purity, and OO means Java/C++ style classes and coding style.

Ditto for "imperative", which in TFA just means "C style more direct manipulation of state", even if OO in say C++ is still imperative in the academic sense of the term (and, heck, even that is not that clear cut. "Procedural" programming for example is still imperative in its manipulation of data, but the terms have been used in academia and industry to describe different things. So it's not about a naive application of the definition, but rather about the intention behind the term).

>Declarative programming, on the other hand, can be stateless or stateful so it doesn't really apply here.

Again, for the purposes of TFA, it doesn't matter if declarative can be "stateless or stateful".

The author doesn't say that the programming language philosophies are about "different approaches to state across a single axis" (e.g. stateless vs stateful). He just says that they are about "different approaches to state" period.

In this case, regarding declarative programming, the difference is not "keeping state or not", but "the programmer managing whether state is kept or not (and how)" vs "the language managing it and the programmer just declaring their intentions".

>This thing is immutable state. Imperative programming with immutable state IS functional programming.

Not in any colloquial use of the term. SSA form might be "functional programming", but it's not what 99% of devs (and the author) means by functional programming, which includes the trappings and idioms offered by traditional languages called functional programming languages.


"SSA form"?


Static single assignment is a property of an intermediate representation: the language the front-end of the compiler (the part that handles a specific source language, parsing, semantics, and whatnot) uses to hand the program to the back-end (the part that handles a specific architecture, emitting assembler instructions as well as optimizations).

One of the tricks of making compilers is transforming the code into whatever form makes the thing you're trying to do easier. SSA is one of those forms, and it means that each variable has exactly one assignment. There is some "cheating" (phi nodes) involved to handle things like "if (...) { a = 0 } else { a = 1 }".

Notably, LLVM IR is SSA.

Since there's no mutation, it's technically a functional language.
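
A rough Python rendering of that phi-node example (my own illustration, not LLVM syntax): every name is assigned exactly once, and the phi node at the merge point becomes a pure selection between the two versions.

    # Original, with mutation:
    #     if cond: a = 0
    #     else:    a = 1
    #     use(a)
    #
    # SSA-style rewrite: each name is assigned exactly once; the "phi" at the
    # merge point just selects which version flows onward.
    def branch(cond):
        a0 = 0                   # value produced by the "then" arm
        a1 = 1                   # value produced by the "else" arm
        a2 = a0 if cond else a1  # phi(a0, a1): pure selection, no mutation
        return a2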


Thanks! TIL.


>I think your mistake is that it's not about the "abstract"/academic/philosophical or etymological meaning of those terms (functional, imperative, etc).

No mistake made by me. The popular or even academic usage of the terms isn't relevant to the topic here. The reason is that the way these terms are used is highly inconsistent even in academia. They're not formally defined; they're just used for informal, fuzzy communication.

I simply define a hierarchy here that's in line with our intuition to point out things the author and you completely missed.

I'm totally with you in that the popular usage is relevant in MOST cases. But in this specific context it doesn't work, because the author attempts a very broad isomorphism across terms with fuzzy definitions, stating that all of these styles are different forms of managing state, and he misses what's actually going on.

Let's not get into language specifics like CLOS for lisp. Let's just focus on the core programming style without considering a specific language. All of these programming styles, more or less can imitate each other when managing state. None of them have a crystal clear way of handling state that's specific to their style. OOP can imitate Imperative and vice versa. In fact I'll list all the connections here:

   OOP -> FP
   OOP -> Imperative
   OOP -> Declarative
   Imperative -> OOP
   Imperative -> FP
   Imperative -> Declarative
   Declarative -> OOP
   Declarative -> FP
   Declarative -> Imperative
An arrow represents "can manage state in the same way as".

Basically almost every programming style listed here is so similar they can practically imitate each other. An isomorphism is obvious and the differences in state management are trivial. However, there is one exception above. You will note FP is not at the left side of the table.

Functional programming cannot imitate ANY of the other paradigms. Both the author and you completely missed this.

What I am saying is this: there are really only two basic ways of handling state, and these two ways are fully exemplified by the functional and imperative styles.

All the other styles including the SOA stuff is just fluff. Different ways of doing the same thing. It's like replacing every letter in the alphabet with a different symbol and calling it a new language even though it's still more or less the alphabet.

>Not in any colloquial use of the term. SSA form might be "functional programming", but it's not what 99% of devs (and the author) means by functional programming, which includes the trappings and idioms offered by traditional languages called functional programming languages.

This statement here is categorically wrong even when considering colloquial understanding of THIS concept. It is just factually completely incorrect. Have you tried doing this? map, reduce, recursion, and all FP patterns become your ONLY tools when things become immutable.

Try writing fibonacci with everything immutable in say something simple like JS or python. Good luck doing it without using recursion or reduce.

The isomorphism between FP and imperative programs that use immutable variables is something well known, but apparently, not by you.


> Try writing fibonacci with everything immutable in say something simple like JS or python. Good luck doing it without using recursion or reduce.

SSA, no recursion, no reduce ;)

    def factorial(X: int) -> int:
        # cond, vaut0, un, mult, pred are Church-encoded helpers defined in the full form linked below
        encode = lambda n: ((lambda N: lambda g: lambda f: lambda x: x if N == 0 else f(g(N-1)(g)(f)(x)))(n)
                            (lambda N: lambda g: lambda f: lambda x: x if N == 0 else f(g(N-1)(g)(f)(x))))
        decode = lambda f: f(lambda x: x + 1)(0)
        fact = (lambda f: lambda n: cond(vaut0(n))(lambda _: un)(lambda _: mult(n)(f(f)(pred(n)))))(
               (lambda f: lambda n: cond(vaut0(n))(lambda _: un)(lambda _: mult(n)(f(f)(pred(n))))))
        return decode(fact(encode(X)))

Of course, I'm joking, as the Y combinator is clearly visible here.

Full form for those interested : https://termbin.com/cd3q


Clever. Very cool. Of course, the Y combinator. Even so, my point stands.

This snippet was clearly functional.


>All of these programming styles, more or less can imitate each other when managing state. None of them have a crystal clear way of handling state that's specific to their style. OOP can imitate Imperative and vice versa.

And the point is that this doesn't matter. It's not about C++ OOP being able to "imitate imperative" or not, it's about what C++ (or OOP in general) adds to the table regarding how programmers should think and hide state.

>All the other styles including the SOA stuff is just fluff. Different ways of doing the same thing.

Like above, you're still reasoning from lower levels of abstraction to point out the similarities and underlying unity, whereas TFA is all about the level of working with each language and its "programming paradigm" as a programmer.

It's like you're arguing that "in the end, chemistry is just physics". Or worse, "in the end, cooking is just physics". Sure, but that is pedantically irrelevant information in a regular cookbook or when talking about national cuisines.

>Try writing fibonacci with everything immutable in say something simple like JS or python. Good luck doing it without using recursion or reduce. The isomorphism between FP and imperative programs that use immutable variables is something well known, but apparently, not by you.

My whole point was that it's not about the isomorphism, but the higher-level trappings. Funny how you've managed to miss the whole argument; it's not even like I didn't spell it out (or like I haven't already said that SSA is nominally "functional programming"), but it doesn't matter.


>And the point is that this doesn't matter. It's not about C++ OOP being able to "imitate imperative" or not, it's about what C++ (or OOP in general) adds to the table regarding how programmers should think and hide state.

That's your point. And MY POINT is that your point is missing the true relationship between these programming styles. C++ is not a topic here. That is a specific implementation of a programming style. We are talking about programming paradigms, not specific languages.

Additionally, how is anything added to the table if every language can more or less imitate every other? Obviously you're referring to a bias that a language sort of pushes you toward. This is what YOU are saying. What I am saying is that YOUR perspective, again, is the one missing MY point.

Let me put it to you this way. When you notice everything is more or less isomorphic, then you can notice a true difference. FP is fundamentally different. Imperative and FP styles are MORE different than all the other styles of state management compared here. From this you can see that there's a missing hierarchy within all these programming language paradigms. This is what I am pointing out. I'm presenting flaws of the original point, and making a new point.

>It's like you're arguing that "in the end, chemistry is just physics". Or worse, "in the end, cooking is just physics". Sure, but that is pedantically irrelevant information in a regular cookbook or when talking about national cuisines.

No I'm not. I'm saying that YOUR and the OP's point is wrong because the "everything is the same" result is what you get when you compare programming paradigms this way. You need a hierarchy and only sibling nodes in the hierarchy can be compared.

>My whole point was that it's not about the isomorphism, but the higher level trappings. Funny how you've managed to miss the whole argument, it's not even like I didn't spell it out (or like I haven't already said that SSA is nominally "functional programming") but it doesn't matter

In logic and in programming, as you go higher and higher level, things become more and more isomorphic, until finally at the highest level of abstraction everything is the same thing. This follows from logic.

In programming and in nature, as you go lower and lower level, things ALSO tend to become the same. But this does not follow from logic. From our observations everything looks to be made out of atoms, and by design all programming styles compile down to assembly instructions. But it doesn't HAVE to be this way by logic; it is simply this way by observation or by design.

That being said, your comment about higher-level trappings makes no sense. Going in EITHER direction things should become more and more samey. Additionally, I'm not even going to a higher level or a lower level. I am simply saying these programming paradigms have a hierarchy. I am also saying that within each paradigm, things are so flexible that there is no specific way of managing state. You can manage state the EXACT same way in all paradigms, except ONE style. By seeing it this way a clear hierarchy emerges. That is it.

SSA has nothing to do with this. It's a term from compiler design, and you're accusing me of going to a lower level? Still, I get your point, and SSA, the way you use it, is not "nominally" functional programming; IT IS functional programming. Literally look at the other person's post with the implementation above. That's totally functional.


> OO, functional, imperative, declarative: these are ways of controlling dispatch.

Two problems with this statement:

1. imperative & declarative are about dispatch, OO & functional are about much more (dispatch being one of the most negligible components)

2. dispatch is extremely concerned with state: e.g. for declarative, state is handled by the "dispatcher", whereas for imperative, state is an extra responsibility of the core logic - the approaches to state handling are one of the most important differentiators between these paradigms.

As for your 2nd & 3rd paragraphs, 100% agree but they don't seem to contradict the article so I'm not sure what point you're making.


Younger developer here. What does a non-SOA-style CRUD app look like?


Anything monolithic, where a single binary executable is run at the start, maybe spawning subprocesses to handle parallel tasks, possibly sharing memory through synching protocols to avoid interlocking.

SOA uses message passing as the only synchronization mechanism, whereas shared memory allows other strategies built on synchronization primitives such as semaphores or mutexes (with the assumption that collaborating processes run on the same physical machine, or communicate through Remote Procedure Calls).

http://www.composingprograms.com/pages/48-parallel-computing...

https://en.wikipedia.org/wiki/Remote_procedure_call
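
For illustration, a minimal Python sketch of the shared-memory style described above (names are mine): several processes on one machine update a shared counter guarded by a mutex, rather than sending each other messages.

    from multiprocessing import Process, Value

    def worker(counter):
        # Processes on the same machine share the counter directly and coordinate
        # with a mutex instead of passing messages to each other.
        for _ in range(1000):
            with counter.get_lock():
                counter.value += 1

    if __name__ == "__main__":
        counter = Value("i", 0)   # shared memory, guarded by its own lock
        procs = [Process(target=worker, args=(counter,)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(counter.value)      # 4000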



XML-RPC based apps might count, but they're generally all under SOA in my experience. A real-time desktop application with some networked features to manipulate documents, e.g. via WebSockets, may count as CRUD but not follow any SOA-style architectural or interface conventions.


A p2p serverless app with storage replicated across peers.


Ethereum / bitcoin / some other crypto


> Service-oriented architecture applies to both monoliths and microservices, and very few programmers still working in the industry have really seen what an alternative to service-oriented architecture actually looks like.

In what industry? I'd agree that anyone making anything web-facing is using some form of SOA, but there are other things, too. Desktop apps (and to some extent mobile apps that aren't just a thin interface over a web API) still exist.

Unless you're arguing the maximalist approach, i.e., that anything with an API that tries to hide any form of implementation details is an example of SOA. In which case, that's not very interesting...


Can you explain more what you mean by controlling dispatch?


Imperative: modifying state is the point of a bit-flipping machine; get out of my way so I can have fun!

OOP: OK, I mostly had enough fun, can we try to tame the bit-flipping chaos with real-world analogies, without deflating all the fun?

Functional: Any Monad is by definition an Endofunctor, which also means it's an object in the category of Endofunctors, where the monadic μ(flatMap) and η(unit) operators satisfy the definition of a Monoid in that particular Monoidal Category. You got that?

Declarative: I'm gonna need a corner office with a view, and $200K/y.


I wish people would quit propagating this hostile view of functional programmers. The vast majority of them — even in academia — are happy to never mention or even fully learn about the scary math words, and most of those who _do_ learn about them work really hard to not make the community unwelcoming to those who don't know or don't want to know. But the reputation precedes, which causes people to avoid it altogether, which is really a shame.

The whole "a monad is a monoid in the category of endofunctors, what's the big deal?"-in-a-condescending-tone bit comes from "A Brief, Incomplete, and Mostly Wrong History of Programming Languages": a satirical blog post from 2009 (http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...). It is not indicative of the general community, but the joke has been reproduced enough that people who don't know anything will encounter it, think the practitioners are serious about it, and write off the whole thing as unwelcoming. It's a bummer.


It would help if functional didn’t collect all the people with their heads all the way up their own asses. Functional has more shibboleths than anybody except maybe professional consulting groups like IBM GS.

You can’t have exclusive language and then be surprised when nobody wants to worship at the altar of your obfuscation.

ETA: Look. I was reading 25 year old complaints about why Lisp isn’t popular 25 years ago. You can go on scratching your heads for another 25 years or you can start taking criticism instead of deflecting.


> ETA: Look. I was reading 25 year old complaints about why Lisp isn’t popular 25 years ago. You can go on scratching your heads for another 25 years or you can start taking criticism instead of deflecting.

Is this... directed at me? Are you saying that I've been guilty of this behavior? Could you point me to an example? Because I put a lot of effort into helping the functional programming community be more welcoming, so it'd be important to me to know if I've gone astray somewhere.

> It would help if functional didn’t collect all the people with their heads all the way up their own asses.

I don't disagree that the kind of people who like to be particularly condescending tend to also end up in this community.

That said, I will point out: (a) there are still plenty of assholes in non-functional (or even anti-functional) communities, and (b) the community has grown a lot over the years, and has mostly shifted to being much more positive and inclusive. Unfortunately, the reputation from decades ago is still strong, and I honestly believe part of that is due to historically-justified propagation from people like you in this comment and the other person in the comment to which I originally replied, to name two examples. There's a lot of residual hostility against the FP community that I don't think is truthfully justifiable anymore.


There’s another variation to ‘functional’:

Let’s treat data as what it is so I can sleep at night again.


I feel I should point out that that ‘functional’ quote is (a) only applicable to pure, strongly-typed functional languages, and (b) pretty much irrelevant unless you happen to be interested in category theory.


Yep, the FP scene is split into the pragmatic and the academic sides.

I think it is not precisely delineated along static vs dynamic lines: in addition to Elixir/Erlang and Clojure, we have e.g. F# and Elm from the static side on the pragmatic team.


I think the distinction you’re trying to make is that of ‘impure’ vs ‘pure’; ‘pragmatic’ vs ‘academic’ strikes me as a rather loaded choice of terminology, especially since languages like Haskell are rather widely used practically.


I don't think that's the distinction fulafel tried to make.

Elm is very much pure, yet it's on the pragmatic side: it forgoes certain powerful abstractions (higher kinded polymorphism, type classes), which enables the compiler to give very helpful error messages, and makes the language easier to learn.


> Elm is very much pure

Huh… you’re quite right. I always thought it was impure. In that case I retract my previous comment (pity I can’t edit it any more!).


You might want to look a bit more closely before you write functional programming off like this.

Eg. Clojure is a modern functional language that is used for a lot of things & happens to be dynamic.


Since when was I writing off FP? My main language for personal projects is Haskell. I was just pointing out that the category theory stuff isn’t really relevant at all.


...and it's a real shame that Haskell has to be like that. I feel like much of the, sorry, mathematical wankery, is well separable from the stuff that actually makes program behavior more predictable in the functional style. I hear that F# has some success at doing exactly that.


> mathematical wankery

Some blog posts are wankery. Haskell in practice is as separate and separable from it as you want.

I learned what a mathematical monad is out of interest (after using them in practice for years), and my conclusion is that it was a huge waste of time with close to zero use for my programming in Haskell.


> I learned what a mathematical monad is out of interest (after using them in practice for years)

Agreed. I regret that I tried to learn what a mathematical monad was before using them, out of a belief that it would help me get started quicker with Haskell. It didn't. It made something confusing that's actually pretty simple.


Can confirm. I've been working with F# for 2 months, and I really miss the strong type system, DUs, and match statements when I go back to Python.


This made me smile. I'm not sure I get the declarative one though, can someone expand on this? Is it because the sentence is very declarative? Or because declarative programming lets you declare expensive-to-compute stuff (by mistake) too easily?


You don't tell the computer how to do the task; you tell it what result you want. E.g.: SQL.
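
A toy Python/SQLite sketch of the contrast (table and column names are made up): the query states the result wanted, the loop spells out how to compute it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(10.0,), (25.5,)])

    # Declarative: describe the result you want, not the steps.
    total_declarative = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

    # Imperative: spell out how to walk the rows and accumulate.
    total_imperative = 0.0
    for (amount,) in conn.execute("SELECT amount FROM orders"):
        total_imperative += amount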


I think it's the first.


Functional languages are what happens when you let a bunch of mathematicians get hold of a compiler.

At its core, it's about treating data like data and not having to muck about with "objects".


This made me laugh. Especially because I've read that exact monad snippet before and had several wtf moments in rapid succession.


Isn’t OOP a subset of imperative programming and functional a subset of declarative programming?


Not really. For example, OCaml allows OOP in a functional programming language.

OOP is about encapsulating data and the algorithms that deal with that data in logical units (objects), and about allowing certain operations, e.g. inheritance, polymorphism (sometimes), etc., over those objects. It can be declarative or imperative.


You can be very declarative with OOP.


Applause!


this one made me laugh!! thank you :)


I’ve been a programmer all my life; I got my first software job around 2000 in a business-oriented area (consulting + programming + “ops”). Before that were 5-7 years of toy programming as a kid. I experimented with tech a lot, got into paradigms early, and never restricted myself to a single language/env/os/hw. I haven’t created anything big, stellar, or rocket-science, but a couple of my projects have lived for 10-17 years and counting.

Mind you, this “state management” thing and the fuss around it, which has popped up more actively over the last ten years, was never a concern I found particularly useful to have a name for. 20+ years of a mediocre career and I still don’t get it, neither the why nor what the problem is. Maybe that’s why it is mediocre? Living in a bs country doesn’t help either. Otoh, I can make things work and ship MVPs next week once there’s a plan and determination. (No, it’s not PHP.)

I believe that the fuss comes from the fact that software becomes more and more low-level, uncontrollably, so there’s a lot of self-imposed state that the business isn’t even aware of and which becomes per-LoC routine that is easy to stumble upon but hard to document or explain to an outsider. State belongs to the business and is not in your control; all other state is parasitic. Mapping business state 1:1 in your program keeps everything simple to do, to change, to grow. It’s usually imperative, sometimes declarative (that’s where programming emerges) and never functional. Parasitic state’s place is under a rug: library, syntactic sugar, framework, db/service, platform, whatever.

To conclude, well, I have nothing to say really. Still confusing.


Maybe you simply gravitated towards sensible ways of dealing with it?

There are quite a bunch of things that are _hard_ (IMO) and related to state management. A non-exhaustive list of examples:

- GUIs in general, the more interactive they are, the more state you need to manage.

- Caching (at any level) is a form of state. You need to be aware of how and when data changes and who changes it and which parts of the caching are affected, when you care about it etc.

- Memory and disk allocation are state. You need to be aware of what your resources are and when to free them up. You might use locks to make sure your data doesn't get corrupted. At a higher layer you might have authorization models to restrict access.

- TCP is state. What do you do when connections fail or get interrupted? Do you store/buffer messages to be processed at a later time? What does that mean for the receiver?

- SQL databases are stateful. Do you ever need to know what happened when? How do you restore previous state? How do you make that efficient? Are backups/snapshots good enough or can you leverage something more granular?


It’s hard to tell. My gut feeling, especially the one I get from popular state management libraries, says I actually gravitate away from it, in a sense. I see what they are for, but it feels like they are made with a sugar-coated landing/tutorial page in mind, while at the same time making things more complex, basically trading more for less. They turn languages and paradigms inside out to get cool effects, but make your way of thinking diverge from how business requirements speak, creating a gap between programmers and consultants.

As others noted, the size of my projects may skew the perspective. But I’m not sure how big something should be (and how tightly coupled) to start worrying about it, unless my primitives aren’t too great to use without a second thought. What bugs me even more is that popular libraries don’t even provide them out of the box, suggesting instead to write code and spread structures in a special, convoluted way.


The text of a program does not show what is in memory, or what the devices attached to the machine are doing. You hope that if you run the program, in your head, you can reliably guess -- but often, you make mistakes.

Controlling "state" amounts to trying to force the world to match the text of your program as closely as possible, so you don't go wrong.


Thinking about state helps me write better code. If the codebase for a project is small enough, and I can enumerate all states and all ways that states can change into other states, I can make code that is so stable it can run indefinitely. Very satisfying. The next logical step from this point, I guess, is TLA+ (I still haven't tried it).


It's more of an issue the more network-y and multi-cpu your ecosystem is. I find the post rather banal and obvious. The state is literally the how and what.


For a lot of small to medium scale business software, state tends to be managed mostly by one central source-of-truth database that business logic gets applied to; there it does not matter much with what paradigm you handle the "request" state (be that a network request or a GUI/TUI request interface).


An idea that sounds convincing at first, but falls apart on closer examination. I found myself less and less convinced with each example, some of which really seem to be grasping at straws.

Programming philosophies encompass much more than just state management, and narrowing the focus to state doesn't seem particularly enlightening to me, at least with these summaries. I'd love to be proved wrong, though.


I think they are right about the first few: OO and FP are both most definitely about state management.

However, I definitely agree that it's a stretch to say that architectural patterns like monolith vs microservices are about state. Those have far more to do with structuring deployments and organizing work.


It seems that the author is trying to make a kind of grand statement but the epiphany isn't felt. A program without any state seems uninteresting, state without logic to manipulate it seems uninteresting as well - no computer ships with only registers or only an ALU after all. It seems that we must include logic in the discussion as well but if we do, the title needs to be updated to "all programming philosophies are about programming".

This is not a deliberately reductive take, I just can't think of any other meaning.


I feel this, but I think it's just a thing that programmers do. It was 10 years ago when I realized that all programming methods are about dealing with (conceptualizing/architecting) state. It made a lot of stuff "simple" to me that had previously been "complex" and I got a lot of value out of that change in paradigm. Not so much, though, that I assumed my peers didn't already know it (I find I'm usually "behind the curve" when it comes to piecing obvious things together), so I didn't evangelize it to anyone, but I definitely understand why someone would.

It feels secret, because no one really talks directly about how all programming paradigms are about state. But once you realize it, you can start to see how a lot of high-level programmers reference it when they talk, all the time. Just something that "everybody knows", once you spend enough time working in code.

Personally, it wasn't the realization that it's all about state that was particularly exciting. It was using that new interpretation to understand how I could mix and match different types within a single project, to make each separable/modular piece of it, rather than dogmatically sticking to a single paradigm throughout. I think there's a definite mental breakthrough when a programmer can confidently use multiple different state paradigms to handle what would otherwise be a clunky or unwieldy implementation in a single paradigm.


I found that a big jump in my quality of design came when I started thinking about what state was necessary, what transformation needed to be done, and by what entity. Before that, I used to code for the task at hand, and create data structures mostly for the task at hand. Thinking about what each program might need to know, and in what format relative to its role, just made things smoother.

It wasn't a large leap conceptually, but the mental model change came from a long lost hacker news comment along similar lines.


"all programming philosophies are about programming"

I don't think it is a reductive take. The second instance of the term here instead involves the more traditional concept

program: "a regular plan of action in any undertaking" https://www.dictionary.net/program

Programming (in the sense of software construction) paradigms, philosophies, indeed "languages" etc., are about regularizing plans of action for complex computational undertakings.. usually.


As someone coming from a functional perspective I would humbly characterize my position as:

> Functional - Modifying state is hard to get correct; so let's focus on the rest of the problem (the part we can get right!)

Pushing state to the edges is actually just a consequence of this impulse.

In the end it isn't a satisfying solution, because you end up with every bit of low-level, inconsequential state percolating up to the top and polluting every single data structure along the way.

I feel there must be a better approach ... in which the "impertinent" state can be abstracted away and managed orthogonally to the functional description of the program.


Almost certainly there’s no meaningful insight to my random thought, but your comment reminded me of a conversation I had with Joe Armstrong about 10 years ago, discussing control vs data plane for Riak.

Originally Riak shipped its data around as part of Erlang’s standard messaging mechanism, the process mailboxes. Over time that was recognized as a bottleneck and data started being handled differently (honestly it’s been such a long time I don’t remember the details, or how far down that road we went).

I have no conclusion to this ill-formed rumination, I’m afraid, just that your point about managing state independently reminded me of that conversation.


The trick of bubbling to the top is to keep the top nearby. Like Erlang: a functional process in an imperative message-passing shell.


It's been a while but I used to enjoy telling folks in the early years of their career that software engineers and anarchists have a lot in common because both view "the state" as the main cause of problems.


Former long time anarchist, current long time FP proponent, presently having a laugh with you on this.


A classless and stateless society! Sign me up!


we must build a Functional Economy.


Have a laugh at this then, too: http://wiki.c2.com/?AnarchyProgramming


Out of curiosity, why former?


Basically… I accepted the premise that the state probably has a necessary role in any process that could lead towards a stateless and classless society, or even in any process which could intervene in regressions from that goal.

I don’t want to get more into that here because there be many dragons. But I’m easy to contact if you want :)


“He who is not a républicain at twenty compels one to doubt the generosity of his heart; but he who, after thirty, persists, compels one to doubt the soundness of his mind.”


I really dislike this quote. It always comes across like it's designed to ease greedy people of their guilt for moving to the right as they earn more money. To me, being sound of mind is wanting everyone to have what you have, or at least an equal shot of getting there from wherever they had the roll of the dice of being born.


Well sure, it’s “sound of mind” to you because that’s your political view.

To people on the right it’s just completely naive. The first half is mathematically impossible and the second half is so wishy washy that it leaves room for everyone to claim they were born further back in one aspect or another on the proverbial board.


> it leaves room for everyone to claim they were born further back in one aspect or another on the proverbial board.

Because generosity is exploitable, it's better to exploit.


Because generosity is exploitable, it deserves examination for integrity. That's a different way to phrase it that avoids your strawman.


The quote is not nice and I don't blame anyone for taking issue with it. However, I don't see people feeling any guilt as they move right. Nor do I see a need for greed to correlate with right-leaning politics. The same teachings that lead me to lean right also lead me to donate more than 10% of my income. I think all people who can should donate to charity, but I don't think anyone should be forced to. I take issue with the idea that forcing others to give money is generosity.


I think that’s possibly a fundamental misunderstanding of left ideas.

The point is not to force generosity on others. The point is to fight back against unjustified value extraction by the powerful and to democratize the workplace.

Socialists see charity as trying to patch over symptoms of an inherently deeper problem. It’s not a philanthropic movement. It’s a worker movement.


To interpret the quote charitably, I think it's more that age can come with an appreciation of the fragility of society, the recognition that it's a miracle it works at all, and a humility about how much and how rapidly a system can be changed without being destroyed.


> humility about how much and how rapidly a system can be changed without being destroyed

I guess everyone familiar with US history should find the quote to be utter bullshit, then.

The American Revolution, Civil War, Emancipation Proclamation, 19th Amendment, Great Depression, WWII, and Civil Rights Act were all major upheavals in society that brought on rapid change.

A lot of people died prematurely, but a lot of lives were later extended or saved to balance that out somewhat, and the system surely didn't collapse.


The system did collapse, that's why so many people died prematurely and why the system that came after was radically different than what came before. Keeping the same name just creates an illusion of continuity.


Well, there are plenty of examples outside of the US's fairly young history where revolution has not always gone so well. Plus, I think one could argue that the US is still to prove that it can survive the aftermath of the Civil War. Sometimes collapse takes a while.


Well, since the quote comes from a Frenchman in 1875 I don't think we should read present day US politics into it. But I'm not sure what a républicain does believe.


Reminds me of a great 30 rock joke, being fiscally liberal and socially conservative.

Politics is really just dividing up the shared resources, which is the root of that joke. Who cares about politics and doesn’t care about the resources part?

I guess after 30 people forget how to share per that quote. Never happened to me though. Personally I don’t even think it’s accurate, in my life I’ve never seen people swing from generous to… let’s just say “not”

The people who were always like that a little got more so.


> Reminds me of a great 30 rock joke, being fiscally liberal and socially conservative.

You can tell online people aren't representative, because they think it's a joke when this is actually pretty common among average people. (Who aren't at a single position on the political compass but are instead "cross-pressured".)


> fiscally liberal and socially conservative

That's top left on the political compass... Is that not a thing in the US?


Not in conception, no. One could argue it exists in practice, though, in the "deficit-funded tax cuts for the rich" crowd.


Is it a thing anywhere? Who chooses not to care about the resources, the things you're there to divvy up.

Not caring at all is certainly a choice I can see individuals making, but someone who is politically engaged and doesn't care about the resources, I can't even fathom it.


Not to agree or disagree with the quote itself, I’m simply giving a possible explanation for why the commenter may have changed their mind.


God, this is such condescending nonsense. The older I get, the more left I lean, because the more of the world you see the more you see what chaos our current ideology has wreaked upon it. The only people I see going the way this quote does are unthinking idiots who were fashionably "liberal" in their youth without understanding any political theory, and are now fashionably "conservative" with even lesser understanding.


Yeah, it starts condescending and gets worse once you understand the argument: "as an elite, you will learn in your 20s how the policies and goals of conservatism are self-serving, and you will therefore adopt them."

In my 20s I bought a house, started filling a brokerage account, started managing people -- so I certainly started to feel those incentives, but I am both principled enough to not act entirely out of self interest and wise enough to realize that I would have to be considerably wealthier before the capital-side incentives actually overtook the labor-side incentives in their importance to my bottom line.

Unfortunately, even these meager principles and this sliver of wisdom are rare enough that "temporarily embarrassed millionaires" abound.


Solidarity forever, rare as it may be!


Hard same. In some sense I don't think my values have changed much. I've always been dispositionally moderate and compassionate. Indeed, in a lot of ways I'm dispositionally conservative; for example, I'm a proud member of the Boring Technology Club. [1] (As the Christians say, "Test everything; hold fast to what is good.") But the older I get, the more my compassion and my curiosity drive me to see through the bullshit and to question the people who talk about being conservative but act to conserve the worst of our history, not the best of it. So although I remain a dedicated incrementalist, many would see me as increasingly radical.

[1] https://boringtechnology.club/


I personally find it more and more difficult to subscribe to any ideology.

I think societies, states and economies are emergent properties of people living together. But the more I know people, the less I believe overarching systems, models and grand narratives.

They are all wrong. Not only in respect of what should be. We can’t even agree on what is or has been.


Some are less wrong than others.


I'm not sure if I can escape from your accusations of idiocy or of fashion-following, but I am someone who's moved further right as I've gotten older.

When I was younger I was more confident that the world and institutions could easily be remade or replaced just by reasoning about them intelligently, and I was also more confident that everyone's interests and morality roughly converged, so not very much had to be done to allow us to get along and live together in harmony. (Nonetheless, I think it's likely that you'd have considered my adolescent beliefs to already be right-wing and to be lacking a consciousness of political theory.) Now I'm more apt to think there are things that we can't redesign, things we depend on that evolved with no one designing them, and lots of immutable constraints from human nature. Also that a great deal of the harmony we experience is fragile, and we might be much worse off without it.

(For what it's worth, I would also try to avoid making generalizations about people that imply what they're supposed to believe based on their age -- or making fun of people on the basis that their beliefs are supposedly out of step with their age.)


You have to admit that the State causes many of those problems. Police brutality would not be a problem if there were no police, for instance.

One response is to say we need to empower the State with more authority to regulate those problems, with the hope that it doesn't just cause even more problems.

Another response is to say we need less State authority, and find other ways of dealing with certain problems typically given to the State. Not every problem is a nail to hammer with the State.

Neither response is correct in all circumstances.


> The older I get, the more left I lean, because the more of the world you see the more you see what chaos our current ideology has wreaked upon it.

That sword cuts both ways.

The more you see of the world, the more you treasure what the current ideology has brought.


I bet 1 $ you work in crypto.


Nope! You can actually find out what I do for work pretty easily tho because it’s all open source and I use the same handle.

Edit: please donate the proceeds of your bet loss to victims of the earthquakes in Türkiye.


Wow, that's great, thanks! As a dad and Certified Card-Carrying member of the International League of Punists I feel this is really profound. Not at all in my political direction, but who cares it's the pun that matters.

Also, it would make an epic t-shirt in the hands of someone with some actual artistic skills. Is there such a thing as missing a cool t-shirt you haven't seen? :/

"Down with the State", "Shrink the State", "Less State -> More Good", "No State, No Problems", "Spread the State", "No State/Stay Pure"? Oh my kingdom for a programming-literate copywriter.


"State -> Less State -> Stateless"


Love it, thanks. Subvert the dominant paradigm! But only if it's justified and well documented for future maintenance :)


Oh this one was good. I'm going to have to put that in my back pocket. The punchline is that obviously that's not the only thing they have in common.


Funny, I arrived at this "all software is.." piece last week.

Hypothesis: All of software *is* about change management ----

It dawned on me that all the software design efforts - design patterns, programming styles, management practices - are geared towards one thing at their core: managing change. For example:

* Command pattern: Figure out all the things that can be called and, instead of a giant if/switch, look them up by name and call them. That way, when you need to *change* the list of things you want to call, nothing else is affected. (See the sketch after this list.)

* Structured programming: Keep the common code in one place and call them from many places. That way when the common code *changes*, we can contain the change to one place

* Object Oriented Programming: Keep the data and behavior common to one actor in the system contained in an object/class so that when it *changes* we have to change only that thing.

* Functional programming: *Changing* data in place is bad. Instead, create changed copies of the data.

* Scrum: It's hard to predict too far into the future because things *change*. So let's try to plan for just the next n weeks.
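
As a rough sketch of the command-pattern point above (all names are hypothetical): adding a command changes only the table, never the dispatch code.

    # Hypothetical command registry: adding a command changes only this table,
    # not the code that dispatches on it.
    COMMANDS = {
        "start": lambda: print("starting"),
        "stop":  lambda: print("stopping"),
    }

    def dispatch(name: str) -> None:
        # Look the command up by name instead of growing an if/elif chain.
        handler = COMMANDS.get(name)
        if handler is None:
            raise ValueError(f"unknown command: {name}")
        handler()

    dispatch("start")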


If you haven’t already, you might benefit from learning about the idea of OODA loops.


Thanks. Also, nice blog post about MMM. Good analogy to finance.


That doesn't really seem like a good summary of functional programming. It has state all over the place. But it's true that functional programming is about state management. I'd argue that the functional approach to state management is to make state explicit and visible.


FP is about evaluating expressions and it doesn't have state all over the place.

I would also add that statefulness is incredibly hard and complex to get right in complex applications, and pure FP languages like Haskell lack enough emphasis on runtimes to handle this gracefully.


> FP is about evaluating expressions and it doesn't have state all over the place.

All the state is in the function stack and the instruction pointer.


While you can’t read the state, it’s not really “state”; it’s just a chaotic mess of data in memory.


> FP is about evaluating expressions and it doesn't have state all over the place.

FP is all about side effects. Every time you declare something you're effecting a compile time side effect. And then there's monads, which are literally reinventing state with a worse API. I find it odd that no mainstream functional language has an explicit call stack monad and an explicit symbol table monad. Maybe because even the "purists" realize how asinine that would be.

> I would also add that statefulness is incredibly hard and complex to get right in complex applications and pure fp languages like haskell lack enough emphasis on runtimes to handle this gracefully.

Mutable state is a powerful tool. And it's true that if carpenters used table saws with the same level of discipline that programmers use compilers, there wouldn't be a carpenter left with any fingers.

Nevertheless, powerful tools can be used in a responsible and disciplined way and if you do you can be highly confident no unexpected dismemberment, literal or metaphorical, will occur.

Edit: to be more explicit, the mathematicians who invented Algol were well aware of all the theory for so-called functional programming and rejected it in favor of treating programs as transformations of a multidimensional space with good reason.


> FP is all about side effects.

Every programming language is about side effects.

Computations that don't lead to side effects are useless.

That being said, FP splits out a core functional part (pure, no side effects of any kind) and an interpreter part where the instructions are run.

The way one should visualize pure functional programming is that functional programs don't _do_ anything. They are equivalent to producing a description of the operation. The actual execution happens when those descriptions (generally encoded in effects or data types such as IO/Task) are interpreted by an external runtime.
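
A tiny Python sketch of that mental model (my own illustration, not any particular library's API): the pure part only builds a description of the effects, and a separate interpreter is what actually performs them.

    # The pure part builds a description of what should happen...
    def program():
        return [("print", "hello"), ("print", "world")]

    # ...and an external interpreter/runtime is what actually performs the effects.
    def run(description):
        for op, arg in description:
            if op == "print":
                print(arg)

    run(program())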


> Every programming language is about side effects.

> Computations that don't lead to side effects are useless.

Right.

> The way one should visualize pure functional programming is that functional programs don't _do_ anything. They are equivalent to producing a description of the operation. The actual execution happens when those descriptions (generally encoded in effects or data types such as IO/Task) are interpreted by an external runtime.

This is the right way to understand imperative programming too. The program really doesn't do anything; it just describes a set of possible processes. Then the runtime converts the program into one of those processes (or maybe more than one). Each of those processes is a path through a program state space graph, but supposing your process is to have visible effects at various nodes in that graph, the runtime will do externally visible things. It's really not any different.

In the end, FP is basically just a way to force programming in an SSA style, plus whatever silliness the designers go for to deal with the reality that mathematical functions can't really capture input and output in any sort of useful way. Interestingly, it appears to me that in pure math the vernacular text around the equations fulfills that role.


1. Yes, software development is about managing state change. If the state has not changed, the software is idle or not running.

2. The mix of concepts between programming paradigms, architectural design, etc. doesn't bode well for supporting the conclusion about "every programming philosophy".

It feels to me like much ADO about nothing


Declarative/logic is a bit of meta-state IMO: the only state is the exploration space; the actual information has no state per se.

Also I wonder if algebraic thinking is not different from state, since you map/combine subsets of the domain[0] with more interesting operations. You don't go from 2->3->4, you can (+ 2 2).

[0] which some would say, is state, but here it's reified as a standalone value


I used to think this until I read Whitehead and Hickey and realized it’s all about time


I thought maybe you were referring to a known text or paper.

"whitehead and Hickey" search links back to this comment and only one other thing that can't be relevant.

So, WTF are you talking about that we should know it and search engines don't?


Rich Hickey's "Simple Made Easy" talk discusses value and time - how mutability ties these together such that value is always time-dependent - and the complexity that results.

https://github.com/matthiasn/talk-transcripts/blob/master/Hi...


Google turns up a Clojure library called Avout [0].

The library itself isn't too important here, but its website cites the philosophy of Rich Hickey and Alfred North Whitehead on state being an illusion:

Rich Hickey has spoken eloquently on mutable state in his talk "Are We There Yet?" [1]. To summarize, Rich and Alfred North Whitehead [2] don't believe in mutable state, it's an illusion. Rather, there are only successions of causally-linked immutable values, and time is derived from the perception of these successions. Causally-linked means the future is a function of the past; processes apply pure functions to immutable values to derive new immutable values, and we assign identity to these chains of values, and perceive change where there is none.

Hickey's talk itself is quite interesting and goes into detail on the relevance of Whitehead's ideas in his book Process and Reality to concurrent programming.

[0] https://avout.io/

[1] https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hi...

[2] https://archive.org/details/AlfredNorthWhiteheadProcessAndRe...


Whitehead as in Alfred N Whitehead?


The original author was smart in calling it programming philosophies as it avoids a lot of puristic discussion.

As mentioned elsewhere: Pure functional programming (as in lambda calculus and its friends) does not consider state. It considers expressions that need to be reduced – the user can, on top of that, build something that mimics state, which is entirely up to the programmer.

However, I think the most productive way of assessing programming languages for real world software applications is by looking at state management, as it usually turns out to be the most complex part of that application.


The so maligned "state" in FP resides in the call stack:

    f(g(h(x)))
It's a mutable and invisible data structure that drives the execution of the program.


It's not mutable and it's not a stack in the model, that's just an implementation artifact used for efficiency. You could equally well allocate every function context on the heap.


It feels like the art of choosing strategic abstractions is an underlying theme in SICP, for example, and has “state”, as argued in the blog post, as just one aspect.


There's a great discussion of state in one of the original lambda papers: "The Art of the Interpreter or, the Modularity Complex" by Gerald Sussman and Guy Steele from 1978: https://dspace.mit.edu/bitstream/handle/1721.1/6094/AIM-453.... (Part 2).


No it is not.

Lisp stands in stark contrast to this. The lisp philosophy is this: as with every science, you mould your language to your problem, not the other way around.


Yes, it goes without saying that Lisp and mold go hand in hand. I find that Lisp is very organic, kind of like fungus.


Sure, because they haven't heard of the rule of least power, which is why it's sometimes good for languages to be about stopping you from doing the wrong thing vs. easily doing things.


This article is bad at managing the state of its examples. Just kidding. But deep down, it's turtles all the way down. All of programming, working and else is about managing state. Especially if you think of state as entropy, then all the meaning of life is about managing state.


I agree with the imperative, OOP and functional parts. The declarative part is just stretching it.

It's like saying using chatGPT to write your code is just a form of state management. Technically yes, but this viewpoint is overly pedantic.

Declarative programming is therefore more than just state management.


I thought of adding external relational databases. The state of the program at runtime isn't really the most important thing. The state of the whole system over the span of days / weeks / months / years is what's really important to get right.


If you trivialize everything, everything looks trivial.


And, as states, all programming philosophies are going to need some structure in order to function consistently: laws and enforcement, representation, and an executive to lead things and maintain consistent direction.


Has anyone or any PL theory in CS made a useful distinction between data and state?


Turing Machines are defined in a way which makes a distinction between data and state - a machine consists of a state machine (which controls what happens when each symbol is read) and a tape (which records symbols - data - which will be manipulated). The state machine is part of the Turing Machine's definition - so one Turing Machine can only have one particular state machine configuration, but it can operate on any tape with any content.

However, the existence of Universal Turing Machine (a Turing Machine that can simulate behavior of any other Turing Machine) demonstrates that data can also represent state - i.e. you can encode an arbitrary Turing Machine's state machine in the tape, without changing the Universal Turing Machine's state machine at all. That's how we got stored program computers.

In a more practical sense, a useful distinction between data and state is within the boundaries of the program - any data that is internal to your program can be considered state, while any data that you read from an external source/write to an external sink can be considered just plain data. Separating the two is a helpful technique in writing reliable software.


They're different levels of abstraction. Data is used to describe state.


State is made of data. The difference is that state is data together with a persistent identity (morally, a location), and the data associated with that identity may change over time.


Side effects: can't live with 'em, can't live without 'em.


"Functional - Modifying state is hard to get correct; keep it at the boundaries and keep logic pure so that it is easier to verify the logic is correct."

My main beef with functional programming is that you essentially put the state in the "instruction pointer" and its history of walking the call stack, instead of binding it to symbols. It does not fit my way of thought. I personally prefer, more or less, state machines.


I think what you're missing is that functional programming is about referential transparency. There should be no need for an instruction pointer in your mental model.


Not really. Try to interact with DB in FP. Yes, there is a referential transparency ... when combining functions which call DB (e.g. using monads). As a result you get a composite function, which is the same every time you run that composition. Who cares really? When you execute that composite function though you will potentially get different results and your "referential transparency" goes out the window. Of course one can write a "pure" FP program without effects, but any side effect reduces referential transparency to combining functions, is it really easier to reason about computations this way? Nope, it is not.

Basically how FP sells:

  def a
    1
  end

  def b
    2
  end

  def c
    a + b
  end
You see? c = 3! Now try to apply this logic to the real world (with effects):

  def a
    (conn: DBConn) => { .... some complex code  ... }
  end

  def b
    (conn: DBConn) => { .. another complex code ... }
  end

  def c
   (conn: DBConn) => a(conn) + b(conn)
  end
So c = ? Still transparent?


> So c = ? Still transparent?

No, it's not, and neither are a and b in your second example, otherwise they could not produce any result that could be combined with +. Unless `{ .... some complex code ... }` is just `return 42`.

Referential transparency means that we can evaluate the same thing over and over again and get exactly the same value back. Obviously that does not work with what you have constructed.

It helps when you annotate your examples with return types. That will make things clear very quickly.


DB operations are not what is being described in the conversation about referential transparency.

We’re talking about something that starts with things like reduce vs. a for loop with mutable variables to accumulate values. Once the language forces you to think in terms of stateful operations to perform even basic tasks, the reflexive tendency is to use mutable state to perform any task. At that point, nobody really knows what a function does, because you have to understand the entire system to know what the output of a function will be.
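
To make that contrast concrete, here's a hedged Haskell sketch (illustrative only): the same sum written as a fold over the input versus as a loop-style mutation of an accumulator cell.

    import Data.IORef

    -- "reduce" style: the result depends only on the input list.
    total :: [Int] -> Int
    total = foldl (+) 0

    -- Loop-with-mutable-accumulator style: the same computation,
    -- expressed as repeated mutation of a cell.
    totalLoop :: [Int] -> IO Int
    totalLoop xs = do
      acc <- newIORef 0
      mapM_ (\x -> modifyIORef' acc (+ x)) xs
      readIORef acc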


But we do have loops in the real world, don't we? So in true FP we are talking about recursion as an alternative to for loops. Can you easily reduce a recursive function? If you can, I can just as well "reduce" a for loop. Otherwise we are talking about a paradigm without recursion and side effects.



Recursion, what was the point of the link?


Your problem is that you’re trying too hard to argue and your time would be better spent trying to understand.


No, but this is why we have things like linear types or encapsulating side effects in a state monad to describe these kinds of strict sequencings. I personally think pure functional programming is still very unergonomic but referential transparency in the face of side effects is not an unsolved problem.

Besides, it's good practice to limit how widespread side effects are in any program, anyway.


Referential transparency is something you can reason about. While technically monads are "reasonable", you are not going to reduce them by substitution. Yes, you can if you have a good imagination, but how does it help? While combining effects is thread safe etc., executing the resulting function is not necessarily. So we just kicked the can down the road, adding another layer on top of the existing compiler.

Monads are a nice way to hide state, I have to admit, but not because of referential transparency and "reasoning".


If we're talking about Haskell monads, they are referentially transparent, and they are very very easy to substitute. For a specific example, see: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/M...

Each Haskell monad might have a different implementation of the >>= (bind) operator, which is referentially transparent and which you can easily substitute. The IO monad would be an exception: I believe that one is not referentially transparent and thus cannot be substituted, because it's not implemented in Haskell. You can however imagine that it is a sort of state monad for the external world, but that's a fiction in practice.

(Note: Haskell monads are not "real monads" from category theory, and nor are monads in any other programming language AFAIK... So it's helpful to be very specific about what we're talking about when we use the word "monad" without context. You can implement a monad-like thing using classes in Python or many other languages, but those aren't referentially transparent and aren't typically the monads people are talking about in the context of pure functional programming.)
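
For a concrete (hedged) illustration in the spirit of that wikibook page: Maybe's bind can be substituted step by step like any other pure definition (safeDiv is an invented example function).

    -- Maybe's bind, roughly as defined in the standard library:
    --   Nothing >>= _ = Nothing
    --   Just x  >>= f = f x

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- Substituting by hand:
    --   Just 10 >>= \x -> safeDiv x 2
    -- = safeDiv 10 2
    -- = Just 5
    example :: Maybe Int
    example = Just 10 >>= \x -> safeDiv x 2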


I'd argue that the difference between a monad and any old type with operations of bind and pure's types is precisely reasonableness -- I can expect Control.Monad.forever to work because the monad laws allow me to reason about monads as an abstraction rather than caring about the precise way the Writer monad might be implemented.


Is this another way of putting what you're saying?

In functional programming, we think of computation as reducing an expression; a computation is complete when it can't be reduced anymore. In imperative programming, we think of computation as updating a store of data; a computation is complete when we run out of steps to perform.

When you're partway through execution, the "rest of the program" in an imperative system is captured by the current state of the store (including the stack). In a functional program, the "rest of the program" is just the program expression itself, reduced as it has been up to this point.

I tend to like the functional model, myself, but I can see the mental appeal of keeping the program text fixed, and isolating all changes to a separate store of data.


You don’t need history. Embrace one-way data flow and your life will be simpler and easier to reason about.


Modern FP (effect systems) says the opposite - having separated the pure from the impure, we can now bring them back together and orchestrate fine-grained fabrics of effects at greater scale, with deterministic lifecycles and strong resource cleanup guarantees; for example, signals-based reactive DOM rendering.


The author is onto something. What is needed to flesh out the "map of state philosophies" (and counter the criticism of apples and oranges) is, I believe, more resolution about where state is physically expressed (memory, disk, remote machine, etc.).

Having a good representation / classification of alternative choices and tradeoffs could help people with more informed design choices and selection of tools


Data (state, context, whatever) is more important to structure than code. Make sure your programming language lets you structure data the way you feel is natural. The code comes later, and its main function is to massage the data. At least this is how I usually think about it, but everyone's mind works differently... :-)


Well, programming is about managing state.

Also, you forgot Data-Oriented Design: state should be modelled to match how the hardware works.


Do you have any recommended reading on DOD? There seems to be some discussion around it, but I'd rather something more practical from someone who has practiced it.


There are definitely ones that aren’t:

1. Language Oriented

2. Relational (the main thing there is largely about reversibility)

3. Procedural: Control flow is way more important here than state. Strict procedural doesn't allow early return, since you then have a block with more than one exit.

Therefore, procedural and language oriented seem more about control flow.


I mean ... all programming is, is manipulating state/data, so yeah I guess this tracks.


Exactly. The Turing machine is a state machine. The universe does a pretty good job of appearing stateful (amusing side note, Greg Bear's Moving Mars has a fundamentally object-oriented universe, as opposed to Wolfram's bent). All of our programming paradigms are built to, in the end, manipulate state. Imperative programming does this in a very direct manner, others are more indirect. Ultimately, this implies a tradeoff of some kind of overhead, plus some ... paradigm impedance mismatch, if I may coin a clumsy phrase ... at the edges.

You can try to ignore state but at the end of the day, you still want to do something with it at the start and at the end, at least one of the two.


OP just discovered that computer science is 99% about data manipulation, duh.


Since in a reductive sense function theory and Turing machines are equivalent (Church-Turing thesis), and a Turing machine is rules plus a tape (state), then if a language is Turing-complete, by definition ...


Since we are programming Von Neumann machines, at their core all programs are about state. Therefore it seems intuitive that all programming philosophies would reflect that.


Political and economic philosophies are also about state. Religions as well. Drinking alcohol also. Colors. Mental states (hey, this one even has it in the name). And kind of potatoes.


No, all programming philosophies are about composition? What units one expresses a program in, and how they can be combined together.


All state philosophies are about programming, man!


Don't have state.

Try to derive it from the data, aka the environment.

Rather, prefer adding state to the environment (timestamps, recipe steps of a product) to keeping separate track of it, because separation is additional state.

And state is a curse. So don't.

If it can't be avoided, don't duplicate state; add a machine that resets it upon sensors detecting a changed environment (data), and revert wherever you can back to deriving it from data.

If it is unavoidable, make it concise, so that at least the maintainers can follow the state and see where it goes wrong.


Well, a program is moving around bits in computer memory, and state is a fancy name for these bits. Makes sense that the philosophy of programming has something to do with this state of affairs.

But the philosophy of programming is also dealing with ways of creating abstractions, because you can't view everything exclusively on a very low level of abstraction without losing your mind.


Functional and declarative are the same to me


I think what the article intended by “declarative” is roughly occupied by the “logic programming” paradigm and DSLs like SQL. If my intuition is right, they have a lot of conceptual overlap. But they differ mostly in how much they realize a goal of “what, not how”.


They're related, but "declarative" is a broader category that includes relations. Functions are a special case of relations (see also: prolog, datalog, sql...).


You can write declarative code in an imperative language, and you can write non-declarative code in a functional language (eg IO monad). So IMO they can’t be related.

That’s assuming that „declarative“ means „describing outcomes rather than describing process“.


You can write functional code in an imperative language and you can write imperative code in a language like Haskell, but that’s irrelevant.


They are orthogonal. React (FC) is a declarative _and_ functional approach. The declarative part is where React code describes what should be rendered, instead of adding/removing DOM elements manually.

The old React is not 100% functional (partially OO), but it still is declarative.


They're not orthogonal. I believe one is a subset of the other, but at a minimum they overlap.


Using extremes for both, that would mean there is overlap between languages like Haskell and languages like SQL?

(Edited for clarity)


Well… I’m not sure how helpful that example is, but yes you could probably find something in common between SQL and Haskell, and on the other hand, no, what I said did not mean that.

Look up Graham Hutton’s writings on relations. He has published quite a bit about this, and he knows a thing or two about functional programming.


I think relations are not a precondition for something to be called declarative programming. Most definitions I‘ve found are way broader and generic.


I never claimed they were, and I don't know if they are. But relations are in the category of declarative, and are a generalization of functions.


I'm a biologist that doesn't understand what's being talked about on this thread at all (a definition of "state", and its relation to other computational parameters would be nice). But I think I can pull an analogy from ecology that might be useful here.

When a bird population starts speciating along, say, the arctic circle, you have a line of a bunch of closely related sub-species that can interbreed with the sub-species nearest them.

Sub-species 1 ... Sub-species 2 ... Sub-species 3 ... Sub-species 4 ...

1 can breed with 2. 2 can breed with 1 and 3. 2 can't breed with 4 because they are too far apart, but if a 4 came to the territory of a 2 they could breed. Etcetera.

Eventually you get a whole circle of interbreedable sub-species as they spread across a circle of arctic latitude around the world:

1 ... 2 ... 3 ... 4 ... 5 ... 6 ... 7 ... 8 ... 9 ... 1

But once you get to that point you find that sub-species 9 and sub-species 1 can't, or won't, interbreed anymore. They are now two entirely separate species despite coming from the same origin and living right next to each other.

However none of them can interbreed with a complete separate lineage, such as wolves.

TLDR: The overlap is before the extremes, not between them.


React is neither reactive nor declarative nor functional.


Instead of stonewalling, please explain what React is in your opinion.

Edit: I mean it’s called „Functional components“, are they lying?


No, the React team means that they use JavaScript functions as the preferred authoring experience for React components. React components often have side effects & internal state and that's not necessarily a bad thing, notwithstanding strict mode.


Functional programming doesn’t mean no side effects though. At some point, every language/platform has to have them, or it can’t be used for dynamic inputs.

React (can) have referential transparency and, up to a degree, immutability (yes, hooks are not side-effect free, but neither is the IO monad).


"All Programming is about State". All code is state. All state is code. It's turtles all the way down.


Code is logic. Data is state. To say that code is state is like saying that electrical circuits are electricity.

Code carries data in the same way that electrical circuits carry electricity.


State is data over time!


This article makes an error: All programming philosophies are about state, but they're not _just_ about state.


Disagreeing with someone's thesis is not the same as them making an error. Like maybe you don't find their argument compelling, but that's not the same as them making a factual error.


Since everyone already clearly understands what he means, let's avoid playing semantics and instead focus on being clear and straightforward. It seems that your questions are only leading to more confusion and not offering any valuable perspectives.


My objection was that I thought the original comment was being rude, not that it was hard to understand.


Politeness is an orthogonal concept to truth. The article clearly states that programming philosophies "can be boiled down into a simple statement", and the comment you were referring to takes issue with this.


What happened to good ol' saving state in variables and modifying it using simple operations as needed?


From my experience, most internal business logic is coded using simple imperative style, even if the language used offers object-oriented solution.

Frequently, there are 30+ local variables inside a procedure or a function. It is very rare to even see a class definition. I am talking about mostly financial logic.


That's covered by Imperative programming in the post.


When state is stored simply in variables, especially public fields in a structure, then any code can access them -- even if doing so will violate invariants that the type is expected to uphold.

For example, if I have a `String` whose byte sequences must be valid UTF8 (as in the case of Rust's `String` type), or if I have a `UnitVector3` (which is a `Vector3` that must be of unit length), then allowing direct manipulation of structure fields (X, Y, Z) creates the possibility of instances existing that break the invariant. In the case of Rust `String` that leads to memory unsafety.

Abstractions make it possible to enforce invariants using the type system. If `UnitVector3` provides functions such as a constructor that either (1) takes a valid unit vector as input, or else fails or (2) takes any vector as input, and changes its length to become a unit vector (etc.); and also all of the type's functions return `UnitVector3` (where appropriate, e.g. transformations such as rotation), then it is impossible for an invalid `UnitVector3` to exist!

To continue the example, if I add two vectors together, then their sum will be a `Vector3`, not a `UnitVector3` -- so that operation should have the appropriate return type (`Vector3`). Meanwhile rotating a `UnitVector3` will always return a `UnitVector3`. But translating a `UnitVector3` yields a `Vector3`. And so on. Any code written using these operations can determine from their return type what scenarios it needs to handle (is the return value guaranteed to be another unit vector, or could it be any vector?).

If any code can access and manipulate the X, Y, Z coordinates of `UnitVector3`, then any code that relies upon instances being actual unit vectors has the potential to misbehave. That code is correct -- it's the code that created an invalid `UnitVector3` that's wrong! But the error will show up somewhere else at runtime.

Encapsulating these fields within a type that provides getter methods, along with a constructor that validates the input (and fails on invalid input), makes this entire class of error impossible.

It is still possible for code to attempt to construct an invalid `UnitVector3`, but the call to the constructor (which validates its input) will fail, thus causing the program to fail as soon as possible, and in the most relevant place!: In the code that's creating an invalid vector -- not some obscure other part of the program that is processing the invalid `UnitVector3`.

Using the type system to enforce soundness properties is impractical to achieve without encapsulation, and the ability to hide data and provide interfaces to it. Structures with public fields, where any code can modify their value, is very high risk by comparison.
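
A hedged Haskell sketch of the same pattern (the UnitVector3 module and its function names are made up for illustration): the module exports the type abstractly, so the only way to obtain a value is through the validating constructor.

    -- Only the type (without its data constructor) and the smart
    -- constructor/accessor are exported, so no outside code can
    -- build or mutate an invalid value.
    module UnitVector3 (UnitVector3, mkUnitVector3, components) where

    newtype UnitVector3 = UnitVector3 (Double, Double, Double)

    -- Validating constructor: normalizes its input, fails on the zero vector.
    mkUnitVector3 :: Double -> Double -> Double -> Maybe UnitVector3
    mkUnitVector3 x y z
      | len == 0  = Nothing
      | otherwise = Just (UnitVector3 (x / len, y / len, z / len))
      where
        len = sqrt (x * x + y * y + z * z)

    -- Read-only access to the components.
    components :: UnitVector3 -> (Double, Double, Double)
    components (UnitVector3 xyz) = xyz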


To me, they're more about minimizing if-then-else statements.


Not exactly. How do you connect generic programming and state?


I don't think this applies to Jonathan Blow's philosophy about programming (in short: that everything in the application stack should be visible, understandable, and debuggable by its developers). So it doesn't apply to all philosophies.


I mean sure, if you ignore all the non-state related issues...


I think paradigms rather than philosophies is a better term.


OO / FP are paradigms. Monolith / microservices are architectures. Paradigm and architecture are orthogonal.

I guess the author used "philosophies" because they are mixing pears and apples, so the most specific word that can be stretched enough to encompass those concepts is "philosophy".


They forgot Immutable! The best way to handle state.


Is that not covered by functional programming?


State laws are different from the central laws


State, computation, IO … that's all there is!


All programming philosophies are about 0-1


surely there must be some focused on naming things and cache invalidation?

on edit: yeah I guess cache invalidation is state too.


i want to see someone elaborate a state-maximizing, instead of state-minimizing, philosophy. lean in


Forth in its simpler implementations seems like it goes about as far as one could conceivably go in terms of maximizing state, or minimizing non-state. No parameters or perhaps even local variables, everything is just pushed onto a stack (or one of many stacks) that can be freely manipulated at any point. No major opaque control flow mechanisms, ifs and loops are implemented as functions operating on an exposed "return stack" which can be freely manipulated. No opaque function definitions compiled into a binary, function implementations are placed in a data structure that can be freely accessed at any time. No separate compile stage that manipulates state otherwise inaccessible to the program, compilation happens at run-time by manipulating freely accessible data structures, and the "syntax" for compiling things is essentially just function calls that manipulate globally accessible state based on what's in the input stream and you can write your own alternatives if you want.

If the functional paradigm tends towards avoiding state, then it would seem to me that forth does the opposite: making almost everything a part of the state of a program that a developer can access and manipulate, even things that most structured programming languages do not expose to the developer.


The MOV-only CPU [1, 2] is probably about as close to this as you can get. If you think about it, every structured programming affordance is sugar for writing a state machine. MOV-only pushes all control flow into a state machine, and does so without using any branches itself. It's all MOVs all the time.

[1] https://github.com/xoreaxeaxeax/movfuscator

[2] https://harrisonwl.github.io/assets/courses/malware/spring20...


BASIC, as in the original line-numbered version. Every variable is global. Every line is an entry point. The philosophy is: It works just fine, so long as you're careful.

Also: Any assembly language that predates memory protection.


... Recurrent neural networks?


Assembly language, where most opcodes in a given application are concerned with moving data around and every opcode (including the no-op) mutates at least one register, in that the no-op causes the program counter to advance even if it does absolutely nothing else.


Spaghetti architecture. There is so much state that organizational controls outside the codebase limit which parts of the code may be executed at any given time.


Befunge is a bit like that, with its 2D grid of state/program/wtf being the entirety of the program.


Maybe OOP counts? Every bit of code gets its own state?


lisp - code is data (meaning state).


I've been thinking about this for a while. My basic question is: languages give or don't give features mostly to avoid spaghetti code; what if what's really needed is tools to make dealing with spaghetti code easy?


I’m right there with you. It feels much more interesting than the 3 strategies we have collectively formulated for avoiding spaghetti code. Those strategies have plateaued. Functional programming isn’t good enough. We need something different.


The problem with your desire is that the most cutting-edge OOP paradigm is event sourcing, which is essentially functional. Event sourcing isn't a clearly defined pattern, but it kinda blends functional ideas, "given a state and a change, I'll give you a new state", such that your system will give a deterministic new state based on a change. That's de facto functional programming.

Having said that, the leanest system will be reference types under the hood; for example, the C# stack is limited to 1 MB by default and can't contain your app's entire state.

So we arrive at today's cutting edge which is F#, Clojure, etc. that are functional wrappers around heap-based lookups.
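
A hedged sketch of that "given a state and a change, I'll give you a new state" shape, in Haskell with an invented account-balance domain: replaying the event log is just a fold of a pure transition function.

    -- Hypothetical domain: an account balance driven by events.
    data Event = Deposited Int | Withdrew Int

    type Balance = Int

    -- Pure transition: old state + one change -> new state.
    apply :: Balance -> Event -> Balance
    apply balance (Deposited n) = balance + n
    apply balance (Withdrew n)  = balance - n

    -- Event sourcing's replay is a left fold over the event log.
    replay :: [Event] -> Balance
    replay = foldl apply 0

    -- replay [Deposited 100, Withdrew 30] == 70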


State machines?


This is dumb. That’s like saying that every food in the world is about carbs because they happen to all contain some amounts of carbs.


You can't really minimize state. Function calls are stored on the stack, which is... state.


The problem isn't that there is any state involved, it's that we as programmers must reason about that state.

Evaluating "(1 + 2) * 3" requires some state in the CPU, but I can treat it mentally as just an expression that be reduced down to 9 without me feeling like there's state involved.

The only thing that matters, in my opinion, is whether it affects our ability to _reason_ about it.


isn't it minimised if the compiler/OS manages it instead of you?


Good idea, bad execution


State = context. In a pure functional language the state would be converted into extra function parameters.

Many bugs arise from unexpected context: the programmer writes code expecting one context but gets an oddly-shaped context they weren't planning for. (The other bugs are typos, where the context is correct but the code is not what they planned it to be, or cases where downstream code has different behavior than they expected. Ultimately, the code is not doing what they expected it to.)

But context is inevitable. Even "stateless" programs like compilers, which take an input and return an output, have a lot of context (e.g. scopes, prior analyses, virtual pc). In pure languages like Haskell this gets abstracted by monads (and not just the `State` monad - all of them); if you do not abstract the context, you get a function with an obnoxious number of parameters and return values, which is even more unwieldy.

When people say "managing state", what they really mean is managing context, so that you guarantee that when a function is called it has the context (state and parameters) it was written to handle. The way to do this is to make the context reliable, whether that be state invariants, strongly-typed parameters, less state and fewer parameters, exhaustive case analysis, etc.
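
As a hedged illustration of abstracting the context: a counter threaded by hand as an extra parameter and return value, versus the same thing written with the State monad (assuming the mtl library's Control.Monad.State; the fresh-name example is invented).

    import Control.Monad.State

    -- Context threaded by hand: callers must pass the counter in
    -- and plumb the updated counter back out.
    freshNameExplicit :: Int -> (String, Int)
    freshNameExplicit counter = ("tmp" ++ show counter, counter + 1)

    -- The same context hidden behind the State monad.
    freshName :: State Int String
    freshName = do
      counter <- get
      put (counter + 1)
      return ("tmp" ++ show counter)

    -- evalState (replicateM 3 freshName) 0 == ["tmp0","tmp1","tmp2"]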


State is not the same as context, because (nonpure) functions also perform state changes, and concurrent execution can cause shared state to change at any time during the execution of a function, including for objects the function created itself (weren’t existing context).

Bugs arise from incorrect assumptions. These can be assumptions about state or context, but really it can be any kind of assumption. Assumptions are a kind of dependency. Minimizing dependencies also helps to minimize assumptions. Similarly, statically checked type systems help by having the compiler double-check at least some of the developer’s assumptions. The more powerful the type system, the more assumptions can be expressed and hence checked.

OOP tries to minimize the assumptions and dependencies between code by encapsulation/data hiding and interface contracts. FP tries to minimize them by having the result of a function only depend on its parameters. Declarative programming tries to minimize them by having the program only describe relationships between inputs and outputs.

How to minimize the assumptions a given piece of code has to make, and minimizing its dependencies, are the bedrock of all software engineering advice, IMO.


> How to minimize the assumptions a given piece of code has to make, and minimizing its dependencies, are the bedrock of all software engineering advice

Would this be a case of managing the complexity of the developer's systems model?

Abstractions in effect clamp the complexity presented to the programmer & may include assumptions about the given piece of code. The question being: are the cognitive assumptions presented to the programmer able to result in unintended consequences, and if so, how often & how severe in systemic terms?



