
If you have a large engineering team (hundreds+ devs) with a large codebase, then having a monolith can slow down developer velocity.

There’s massive scope within the monolith, build tools start to strain, etc.




Interesting. I found working on FB’s monolith to have, on average, higher velocity and fewer blockers than working on microservicified systems at other places 2+ orders of magnitude smaller in both engineering and codebase size.


It is not fair to compare Facebook's monolith and the monolith at the average company, as they are not really the same thing. The tooling available at Facebook is built and maintained by a team larger than the engineering departments at most companies.

There comes a point where regular off-the-shelf tooling does not scale sufficiently well for a monolith. Test suites and builds start to take too long. Deployments get increasingly complicated. Developers start to get in each other's way, even when working on unrelated features. Additionally, if you are using an untyped, interpreted language, keeping a large app well organized can also be a problem.

Microservices are a tool for dealing with complexity, and certainly not the only one. However, building the tooling and infra for a large and sophisticated monolith is not simple, and it is not guaranteed to be an easier solution to the problems listed above.


How is this relevant? My comment is in response to an observation about "large engineering teams," not "the monolith at the average company."

At the average company, standard tools will work fine, while companies with large engineering teams have the resources to maintain custom tooling.


You are assuming that the observed tool strain scales with the number of developers. In my experience it scales with the number of coupled concerns inside the same repo. That may be somewhat correlated with the number of developers, but not entirely. So, again in my experience, you can end up with a moderately sized company running into tool limits with a monorepo. FB doesn't have those problems because they use different tools.


> FB doesn't have those problems because they use different tools.

Exactly - instead of using microservice-oriented tools, they use tools organized around monoliths. And that decision serves them well. That's the whole point.


Microservices move the complexity rather than solve it.

The dependency boundaries between portions of the data model can never be cleanly demarcated -- because that's not how information works, especially in a growing business -- so there's always going to be either some loss of flexibility or addition of complexity over time.

Individual developers getting out of each other's way just ends up getting pushed to getting in each other's way at release time as the matrix of dependencies between services explodes.

Engineers' jobs become more about the lives and dramas of the services they work on than about the business domain. You end up building your organization and reporting structure around these services, rather than the business needs of the customer. And then you end up indirectly or directly shipping that org chart to the world in delays or bugs caused by your fragmentation.

Instead of modeling facts about data and their relationships, and constructing the relational model which can capture this, the developer in the microservice model becomes bogged down in service roles and activities instead, again taking them away from the actual problem: which is organizing information and making it accessible to users/customers.

It's a shell game.

The Facebook monolith works because engineers there invested time in building the tooling you're complaining is not available to others. Same with Google: Google invested in F1, etc. because it evaluated the cost to do otherwise and it made sense to invest in infrastructure.

Yes, small companies often can't afford this. Luckily they have two things on their side:

First: most small companies don't have a fraction of the scale issues that an FB or a Google has. So they can afford to monolith away for a lot longer than they seem to think they can, while they put in infrastructure to scale the monolith.

Second: the industry as a whole has invested a lot in making existing things scale. E.g. you can do things with a single Postgres instance that we never would have dreamed about 10 years ago. And when that falls over, there's replication, etc. And when that falls over, guess what? There are now high-performance distributed ACID SQL databases available for use.

Microservices is surely one of the longest-lived, biggest cargo cults in our industry. I've seen others come and go, but microservices really seems to cling. I think it's because it has the perception of breaking business problems down into very small, elegant, independent, atomic pieces, so it has a very... industrial revolution, automation, factory floor, economies-of-scale vibe. But that's not what it is.

There are places for it, I'm sure. But systems with highly interrelated data and quickly changing requirements are not well suited to it.

IMHO.


Yeah, I've seen stuff carved into tiny, fragile microservices when the number of nodes was under ten. Stupid, IMO, and it took a stable service and made it a flaky mess. It was done because of dogmatic "It must be in containers in the cloud with microservices, because that is The Way(TM)". Literally there was an initiative to move everything possible to the cloud in containers with lots of microservices because one of the place's gurus got that religion. It increased complexity, decreased reliability and cost a lot of money for not much benefit.

Until you have well over 20 systems doing one thing/application, trying to treat bespoke services like cattle instead of pets is silly. It will also drive your ops people to drink, especially if it's done by "DevOps" that never get paged, and refer to your group in meetings with other companies as "just ops". (Yes, I'm still salty about it.)

Often I think it's "resume driven development", especially if the people pushing for it want to abandon all your existing tools and languages for whatever is "hot" currently.


I suspect a lot of companies are pushing for MS architecture because it's trendy, not because it makes sense for their own use case, which is what is causing such a strong reaction on HN. Moreover, I suspect that the services end up being very small and, as such, rather poorly self-contained. All I wanted to say with my comment is that microservices are a tool, and there are certain scenarios where they could be a good solution to a problem (although perhaps not the only one).

I do want to provide a counterpoint example against "everything must be a monolith." Years ago I worked at a medium-sized company that worshiped at the altar of the monolith, and the mantra of "this is how Google does it" was often repeated. Unfortunately, what they were doing was far from what Google was doing. Their monolith incorporated solutions for multiple, largely unrelated business lines. All of the code was deployed across all of the hundreds of servers, and data was shared without any sense of boundaries across the entire app. The company didn't want to invest significantly in the appropriate tooling to make such a large app function well (multiple gigabytes of source code written in PHP). The result was a terrible dev experience where deployments took 4 to 6 hours on a good day and the blast radius of a given change was sometimes hard to verify. It's akin to sharing servers between Gmail and Google Docs, and mixing the code together for good measure (so it can be reused). This created a culture of slow-moving, large development cycles as well as a lot of defensiveness and fear within the software engineering group. Suffice it to say, it was not a pleasant experience.

Before I get downvoted a bunch, I should say I also tend to prefer monoliths as much as possible. They are much simpler to work with, much simpler to test and easier to analyze. Also, if using a good compiled language, the compiler can help eliminate a lot of common silly regressions. That being said, I would consider making a new service in certain cases. For example, if I was building a new, unrelated product, or if there was a part of the app that was functionally very different from the rest of the app.

I also tend to distinguish between a monorepo (single repo, can have more than one service in it) and a monolith (single app). If you are willing to set up some more sophisticated CI/CD tooling, I think a monorepo is the way to go, even when it makes sense to have more than one service in the design of the app.


Agreed.


Surely there should be something between a gigantic monolith and microservices. I would call it a service.


Typically, "monolith" implies services - ie, the "backing services" from a traditional 12-factor monolith:

- https://12factor.net/backing-services

Monolith vs. Microservices comes about because microservices proponents specifically set it up as an alternative to a traditional monolith + services stack:

- https://www.nginx.com/learn/microservices-architecture/#:~:t...


> Typically, "monolith" implies services - ie, the "backing services"

I don't know if that's a common interpretation - in the monolith-loving companies I've worked with it very much meant a single ball of code (e.g. huge Rails app), not separate services.


Yes, not sure why we have so many brainwashed fanatics who see the world as only hotdog and not-hotdog - microservices and monoliths.


Seriously! I think there is a good space for the concept of "mid-services" - cluster similar and interdependent services and service fragments together, so they split into logical groups for updating.

It would be sort of like "Module A is authentication and session management, module B is the data-handling layer, and module C is the presentation and processing layer." Each of those under a microservices dogma would be two to four microservices struggling to interoperate.

I read the book written by the dev that advocated for microservices. I wanted to throw it across the room, but it was an ebook. He literally went for over half the book before he even addressed operability. Everything was about developer convenience, not operating it with an eye toward user satisfaction. The guy was clueless.


That’s what Google does. Seems to work fine for them.


I think services became really popular because of Amazon, and that quickly turned into microservices, IMHO. It's perhaps analogous to say: "That's what Amazon does. Seems to work fine for them". It's very interesting that there are big techs now with well-known monoliths vs Amazon's services.

Though, Amazon's services are so that any internally useful tool can be externally exposed and then rented out to generate more revenue. The reasons/benefits are very different compared to say decoupling or the other reasons given for service orientated architectures.


One awesome and often overlooked benefit of microservices is how they simplify security/dependency updates.

With a monolith, dependency updates, especially breaking ones, often mean either all development stops for a "code freeze" so the update can happen, or you have a team responsible for doing the update and they are trying to update code faster than other devs add new code.

The result of this is that updates get pushed back to the last minute, or just never get done. I've seen old (ancient) versions of OpenSSL checked into codebases way too often.

With microservices, you can have a team that isn't as busy take a sprint to update their codebase, carefully document best practices for fixing breaking changes, document best practices for testing the changes, and then spread the learning out to other teams, who can then update as they have time or based on importance / exposure of their maintained services.

It is a much better way of doing things.

It also means some teams can experiment with different technologies or tool chains and see how things work out. The cost of failure is low and there isn't an impact to other teams, and build systems for microservices tend to be much simpler than for monoliths (understatement...)


Microservices are a heavy handed way to draw boundaries around your software so that bad technical decisions don't bleed across different teams. Obviously there is some benefit to that but there is also a massive tradeoff - especially for certain types of software like complex UIs.

> With a monolith, dependency updates, especially breaking ones, often mean either all development stops for a "code freeze" so the update can happen, or you have a team responsible for doing the update and they are trying to update code faster than other devs add new code.

In all my years I've never seen a code freeze due to a dependency update. Maybe the project you were working on was poorly engineered?

> The result of this is that updates get pushed back to the last minute, or just never get done. I've seen old (ancient) versions of OpenSSL checked into codebases way too often.

There should be nothing stopping you from running multiple versions of a dependency within a single monolithic project.

> With microservices, you can have a team that isn't as busy take a sprint to update their codebase, carefully document best practices for fixing breaking changes, document best practices for testing the changes, and then spread the learning out to other teams, who can then update as they have time or based on importance / exposure of their maintained services.

Gradual adoption of new dependencies has nothing to do with microservices.


> In all my years I've never seen a code freeze due to a dependency update. Maybe the project you were working on was poorly engineered?

I spent a decade at Microsoft; I started before cloud was a thing. All code lived in monoliths[1]. I once had the displeasure of looking at the source tree for Xbox Live circa 2008 or so. Nasty stuff.

"Don't check anything in today, we're trying to finish up this merge" was not an uncommon refrain.

But you are right, often there weren't code freezes; instead, system-wide changes involved obscene engineering efforts so developers could keep the change branch up to date with mainline while dependencies were being updated.

I'll confess my experience with large monolithic code bases is all around non-networked code, but IMHO the engineering maintenance challenges are the same.

> There should be nothing stopping you from running multiple versions of a dependency within a single monolithic project.

Build systems. They are complicated. I spent most of my life pre JS in native C/C++ land. Adopting a library at all was an undertaking. Trying to add 2 versions of a library to a code base? Bad idea.

Heck even with JS, Yarn and NPM are not fun. And once a build system for a monolith is in place, well the entire idea is that a monolith is one code base, compiled into one executable, so you don't really swap out parts of the build system.

Hope none of your code is dependent upon a compiler extension that got dropped 2 years back. And if it is, better find time in the schedule to have developers rewrite code that "still works just fine".

Contrast that with my current role, where each microservice can have its own build tools, and its own versions of those tools. When my team needed to update to the latest version of TypeScript to support the new AWS SDK (which gave us an insane double-digit % perf improvement), we were able to, even though the organization as a whole was not yet moving over.

Meanwhile in Monolith land you have a build system that is so complicated that the dedicated team in charge of maintaining it is the only team who has even the slightest grasp on how it works, and even then the build systems I've seen are literally decades old and no one person, or even group of people, have a complete understanding of it.

Another benefit is that microservices force well defined API boundaries. They force developers to consider, up front, what API consumers are going to want. They force teams to make a choice between engineering around versioning APIs or accepting breaking changes.
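
To make that concrete, here's a rough sketch (names and URL invented for illustration) of the kind of versioned contract a consumer ends up coding against:

    // orders-contract.ts -- sketch: the versioned contract is the only thing
    // a consumer shares with the orders service (all names here are made up)
    export interface GetOrderResponseV1 {
      orderId: string;
      status: "pending" | "shipped" | "cancelled";
      totalCents: number;
    }

    // consumers depend on the contract and the URL, never on the service's internals
    export async function getOrder(orderId: string): Promise<GetOrderResponseV1> {
      const res = await fetch(`https://orders.internal/v1/orders/${orderId}`);
      if (!res.ok) throw new Error(`orders service returned ${res.status}`);
      return (await res.json()) as GetOrderResponseV1;
    }

Once a V2 shape is needed, the team has to decide explicitly: new versioned endpoint, or a breaking change its consumers must absorb.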

Finally, having a REST API for everything is just a nice way to do things. I've found myself able to build tools on top of various microservices that would otherwise not have been possible if those services were locked up behind a monolith instead of having an exposed API.

In fact I just got done designing/launching an internal tool that was only possible because my entire organization uses microservices. Another team already had made an internal web tool, and as part of it they made a separate internal auth microservice (because everything is a microservice). I was able to wire up my team's microservices with their auth service and throw a web UI on top of it all. That website runs in its own microservice with a customized version of the org's build system, something that was possible because as an organization we have scripts that allow for the easy creation of new services in just a matter of minutes.

Back when I was at Microsoft, none of the projects I worked on would have allowed for that sort of absurd code velocity.

Another cool feature of microservices is you can choose what parts are exposed to the public internet, vs internal to your network. Holy cow, so nice! Could you do that with a monolith? Sure, I guess. Is it as simple as a command line option when creating a new service? If you have an absurdly well defined monolith, maybe.

Scaling, different parts of a system need to scale based on different criteria. If you have a monolith that is running on some # of VMs, how do you determine when to scale it up, and by how much? For microservices, you get insane granularity. The microservice pulling data from a queue can auto-scale when the queue gets too big, the microservice doing video transcoding can pull in some more GPUs when its pool of tasks grows too large. With a monolith you have to scale the entire thing up at once, and choose if you want vertical or horizontal scaling.
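
To make the queue example concrete, here's a rough sketch of the scaling decision against SQS (the scaleWorkersTo helper is hypothetical - it stands in for whatever autoscaler/ECS/K8s API you actually use):

    // scale-check.ts -- sketch: scale queue workers based on queue depth
    import { SQSClient, GetQueueAttributesCommand } from "@aws-sdk/client-sqs";

    const sqs = new SQSClient({});

    // hypothetical helper wrapping your real autoscaler API
    async function scaleWorkersTo(count: number): Promise<void> { /* ... */ }

    async function checkAndScale(queueUrl: string): Promise<void> {
      const out = await sqs.send(new GetQueueAttributesCommand({
        QueueUrl: queueUrl,
        AttributeNames: ["ApproximateNumberOfMessages"],
      }));
      const depth = Number(out.Attributes?.ApproximateNumberOfMessages ?? 0);

      // e.g. one worker per 100 queued messages, capped at 20
      await scaleWorkersTo(Math.min(20, Math.ceil(depth / 100)));
    }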

You can also architect each microservice in a way that is appropriate for the task at hand. Maybe pure functions and complete statelessness make sense for one service, whereas a complex OO object hierarchy makes sense someplace else. With microservices, impedance mismatches are hidden behind network call boundaries. Yes, you can architect monoliths in vastly different fashions throughout (and I've done such), but there is a limit to that.

E.g. with microservices you can have one running bare metal written in C++ on a hard real-time OS, and another written in Python.

Oh, and well-defined builds and deployments are another thing I like about microservices. I've encountered monoliths where literally no one knew how to completely rebuild the production environment (I overheard from another engineer that Xbox Live services existed in that state for a while...)

And again, my bias is that I've only ever worked on large systems. Outside my startup, I've never worked on a project that didn't end up with at least a couple hundred software engineers writing code all towards one goal.

Is k8s and microservices a good idea for a 5 person startup? Hell no. I ran my startup off a couple VMs that I SCP'd deployments to alongside some Firebase Functions. Worked great.

[1] This is not completely true, Office is divided up pretty well and you can pull in bits and pieces of code pretty independently, so if you want a rich text editor, that is its own module. IMHO they've done as good of a job as is possible for native.


> Heck even with JS, Yarn and NPM are not fun.

    $ mkdir hello && cd hello
    $ npm init -y
    $ npm install react17@npm:react@17
    $ npm install react18@npm:react@18
    $ echo "var react17 = require('react17'); var react18 = require('react18'); console.log(react17.version, react18.version);" > index.js
    $ node index.js
    17.0.2 18.2.0
That's the problem with a lot of these discussions. Conclusions are often based on layers of other conclusions that could be wrong.

> That website runs in its own microservice with a customized version of the org's build system, something that was possible because as an organization we have scripts that allow for the easy creation of new services in just a matter of minutes.

I don't see what this story has to do with microservices. That kind of velocity can easily be achieved with a single codebase too.

> Scaling, different parts of a system need to scale based on different criteria.

That's not at all unique to microservices.

A monolithic application can run in different modes. For example, if you run `./my-app -api` then it'll start an API server. If you run `./my-app -queue` then it'll run the message queue processor. And so on.

This way you can run 10 API servers and 50 queue processors and scale them independently. How an application is deployed isn't necessarily tied to how it's built.
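
A minimal sketch of that pattern in TypeScript (the mode names and handler functions are placeholders, not anything from a real codebase):

    // main.ts -- sketch: one build artifact, several run modes
    function startApiServer(): void { /* listen on a port and serve the API */ }
    function startQueueWorker(): void { /* poll the queue and process messages */ }

    const mode = process.argv[2]; // e.g. "./my-app -api"
    switch (mode) {
      case "-api":
        startApiServer();
        break;
      case "-queue":
        startQueueWorker();
        break;
      default:
        console.error(`unknown mode: ${mode}`);
        process.exit(1);
    }

Your deployment config then decides how many instances of each mode to run; the code itself doesn't change.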

> Another cool feature of microservices is you can choose what parts are exposed to the public internet, vs internal to your network. Holy cow, so nice! Could you do that with a monolith? Sure, I guess. Is it as simple as a command line option when creating a new service? If you have an absurdly well defined monolith, maybe.

I'm confused.

Is there some magical program called "microservice" that takes command line options? What does a systems architecture approach have to do with ease of deployment?

The whole public/private networking thing is an infrastructure decision. Your monolithic application could easily have internal endpoints that are only exposed when certain flags are set.
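
For instance, a minimal sketch (using Express here; the flag name and routes are invented):

    // server.ts -- sketch: same binary, internal routes gated behind a flag
    import express from "express";

    const app = express();

    app.get("/api/orders", (_req, res) => { res.json({ ok: true }); }); // public

    // only registered on instances deployed inside the internal network
    if (process.env.EXPOSE_INTERNAL === "true") {
      app.get("/internal/admin/metrics", (_req, res) => { res.json({ ok: true }); });
    }

    app.listen(Number(process.env.PORT ?? 8080));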

> With microservices, impedance mismatches are hidden behind network call boundaries. Yes you can architect monoliths in vastly different fashions throughout (and I've done such), but there is a limit to that.

What are those limits praytell?

> E.g. with microservices you can have one running bare metal written in C++ on a hard real time OS, and other written in Python.

That has nothing to do with microservices. You can thank the UNIX architecture for that. Most computers run more than one program written in more than one programming language.

> Oh and well defined builds and deployments is another thing I like about microservices. I've encountered monoliths where literally no one knew how to completely rebuild the production environment (I overheard from another engineer that Xbox live services existed in that state for awhile...)

So it was architected poorly. Why is that a strike against monoliths? Are you saying that messy builds are impossible with a microservice architecture? One of the top arguments against microservices is how much of a rat's nest they are in production - in practice.

Some companies are able to do it right with discipline and good engineering. The same can be said for monoliths.

By the way, one of the big problems with microservice architectures is that you don't get atomic builds. Very difficult problem to work around. Usually with lots of tooling.


> That's the problem with a lot of these discussions. Conclusions are often based on layers of other conclusions that could be wrong.

Great, now do multiple versions of typescript. and jest. and ts-jest. And while you are at it, how would you change over from using a legacy test system, such as sinon, to jest? With microservices it is easy, newly generated services use jest, old ones can update when they want to.

Could you engineer a code base so different folders use different testing frameworks and different versions of different testing frameworks? Sure. At the cost of added complexity.
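
For reference, here's roughly what that looks like inside one repo with npm workspaces (package names and versions are just examples):

    repo/
      package.json                 <- "workspaces": ["packages/*"]
      packages/
        legacy-service/
          package.json             <- devDependencies: jest ^27, sinon ^15
        new-service/
          package.json             <- devDependencies: jest ^29

    # each package resolves its own locally installed test runner
    $ npm test --workspace=packages/legacy-service
    $ npm test --workspace=packages/new-service

It works, but now your CI has to know which runner and which version each folder uses - that's the added complexity I mean.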

> I don't see what this story has to do with microservices. That kind of velocity can easily be achieved with a single codebase too.

For how long? Until it gets to what size?

I've spent my life working on large code bases, and the velocity always falls off. Typically it is best in the first 6 months, steady state for another year or 2, and then it undergoes a steady decline for another 2 or 3 years after that until it levels off at "well, it's legacy, what do you expect?"

> A monolithic application can run in different modes. For example, if you run `./my-app -api` then it'll start an API server. If you run `./my-app -queue` then it'll run the message queue processor. And so on.

So, generating multiple executables from one repo? That is a monorepo. If you have a bunch of processes that communicate with each other through some channel, be it pipes or sockets or whatever, then I'm going to argue you have microservices. They may be running on one machine, but again, you have multiple processes talking to each other.

Now I'll also argue that such code bases are more likely to have strong coupling between components. It doesn't have to be that way, but it becomes harder to resist (or to stop junior devs from inadvertently doing it).

> The whole public/private networking thing is an infrastructure decision. Your monolithic application could easily have internal endpoints that are only exposed when certain flags are set.

So, one executable with multiple network endpoints exposed? Sure. But you have lost some security boundaries in the process. If the only system that can ever have customer PII lives inside a firewall with no public IP address, you've gained security vs PII floating around in the same process address space as your public endpoints.

Also, now teams have to maintain two versions of their API: one internal for binary callers, another for the exposed network endpoint. Odds are that the exposed network API is just never going to be written, or if it is, it will quickly become out of date unless it is very heavily used. (Which means if in a few years someone has a great idea for an underutilized API, it will likely be missing major version updates, and be old and crufty and barely functioning.)

> That has nothing to do with microservices. You can thank the UNIX architecture for that. Most computers run more than one program written in more than one programming language.

The characteristics of hardware that does DB queries are different from the hardware that does video transcoding. With microservices you can shove your video transcoding service onto some machines with powerful GPUs, and the part of your code that talks to the DB onto another machine, and when your S3 bucket fills up with files you need to transcode, you can spin up more transcoding machines.

Vs a monolith where you have everything running on one machine under one process. Sure you can just scale up your entire app, but that is absurd and I'm sure literally no one does that, people who need to transcode lots of video have dedicated machines that do the transcoding, because having the same machine that serves up your website also transcode video is beyond inefficient.

Also, the Unix philosophy is small dedicated programs that do one thing really well. Kind of like... microservices.

> What are those limits praytell?

Different problem domains are best solved with different programming paradigms. Having dramatic tonal shifts within one application is confusing really fast. Having part of an app use mobx and another part be completely stateless and a third part use complex OO domain modeling is going to give maintainers whiplash.

And then you have to have all those different systems talk to each other. Which means you need translation layers for "get shit out of mobx and put it into this other system".

At which point you either have well defined boundaries, or a ball of spaghetti, but once an app has grown to the point of absurdly different problem domains, it may be time to start thinking about breaking it up into multiple apps and figuring out how they should talk to each other.

> By the way the big problems with microservice architectures is you don't get atomic builds. Very difficult problem to work around. Usually with lots of tooling.

If you have well defined interfaces you don't need atomic builds. That is kind of the entire point! Unless there are some legal requirements which mandate "blessed builds".

Microservices of course have a huge performance penalty, but so does every other programming paradigm. IMHO paradigms exist to restrict what programmers can do. OO people say you need to shove your nouns and verbs together and only expose a limited amount of state. Those are some serious limitations, and as the Java ecosystem has demonstrated to us, they aren't enough to prevent complex balls of mud from being developed.

Pure functional peeps say no state at all, and arguably the functional paradigm often works, but the performance hit is insane at times.

Microservices say you need to decouple components and that components can only talk through preestablished and agreed upon messages. It enforces this decoupling by isolating different bits of code on the other sides of boundaries so that network connections of some type have to be used to communicate between modules.

You get some benefits with this, I still maintain it is easier to update to new breaking changes of dependencies on smaller chunks of code rather than adopt a breaking change to a global dependency across possibly hundreds of thousands of lines of code.

Also, you can deploy a service that Does A Thing to the appropriate hardware. I'm still not sure what you are talking about in reference to a monolith that spits out multiple executables, that sounds like a monorepo that generates multiple services, each of which I presume has to talk to each other somehow.

Because you only need to maintain the API contract, microservices can be deployed in a true CI/CD fashion, with teams constantly releasing changes to their services throughout the day, every workday.

There are downsides. Microservices have a huge performance overhead. Refactoring across microservices sucks, which is why some teams adopt a monorepo approach, but that brings along its own set of complications: refactors across multiple services mean you have to deploy all those services at the same time. For an organization that may be used to managing a hundred-plus microservices independently of each other, all-at-once deployments can be a huge productivity loss.

Microservices also have build and deployment complexity. It is largely an up front tooling cost, but it is a huge cost to pay.


> Great, now do multiple versions of typescript. and jest. and ts-jest.

Why would I do that?

There is absolutely no reason a single application should be using two versions of Typescript at the same time. What you're talking about is a combinatorial explosion of packages and versions - it's bad, bad, bad for software quality.

Upgrades can be done gradually. I've done it, and seen it done, more times than I can count.

> For how long? Until it gets to what size?

That depends a great deal on the type of software and how it was engineered.

If you inherit a mess and your engineering team is incapable of not creating a mess then, sure, drawing heavy handed microservice boundaries might make sense. But in that case you're solving an organizational problem not a technical problem. All of the technical benefits you've been claiming are moot.

> So, generating multiple executables from one repo? That is a monorepo. If you have a bunch of processes that communicate with each other through some channel, be it pipes or sockets or whatever, then I'm going to argue you have microservices. They may be running on one machine, but again, you have multiple processes talking to each other.

You could generate multiple executables. Or you can generate a single executable that's able to run in different "modes" (so to speak).

What you've shown throughout your writing is you don't quite understand what microservice architecture is. Multiple applications communicating over the network (or whatever) is NOT a "microservice". Why wouldn't you just call that a "service"?

A microservice is a very small application that encapsulates some "unit" of your larger application. Where that boundary is drawn is obviously up for debate.

I don't wanna digress but I can assure that two applications on the same server talking over a network is NOT a microservice. That's just... like... software, lol.

> Now I'll also argue that such code bases are more likely to have strong coupling between components. It doesn't have to be that way, but it becomes harder to resist (or to stop junior devs from inadvertently doing it).

Creating a network boundary (a HUGE performance cost) just to prevent people from doing stupid stuff is not good software architecture.

> So, one executable with multiple network endpoints exposed? Sure. But you have lost some security boundaries in the process. If the only system that can ever have customer PII lives inside a firewall with no public IP address, you've gained security vs PII floating around in the same process address space as your public endpoints.

Your PII lives in a database. Restrict read access to that database (or subset of that database) to applications running on the internal network. The most straightforward way to do that would be through configuration management.

Applications accessible to the outside world will never even have read access.
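
Roughly what I have in mind, as a sketch (env var and role names invented; the real enforcement is the DB grant plus the network ACL, the code just reflects it):

    // db.ts -- sketch: only internal deployments are even handed PII credentials
    import { Pool } from "pg";

    export const piiDb =
      process.env.DEPLOY_ROLE === "internal" && process.env.PII_DATABASE_URL
        ? new Pool({ connectionString: process.env.PII_DATABASE_URL })
        : null; // public-facing instances never get a connection at all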

With microservices, there is nothing stopping some tiny internal service from exposing data to a public service. Maybe the internal service wrote their ACLs wrong and the data leaked out.

What you're describing, again, has nothing to do with microservices and is a problem you need to deal with either way.

PII is a data problem not a code problem. Microservice/monolith is a code problem not a data problem.

> Also, the Unix philosophy is small dedicated programs that do one thing really well. Kind of like... microservices.

So now all programs that communicate over the network are microservices. Do you not see how silly that sounds?

> Vs a monolith where you have everything running on one machine under one process.

That's not how anyone deploys monolithic applications. You're now trying to claim that monolithic applications only run on one server. Confusing.

> Different problem domains are best solved with different programming paradigms.

That's why we create multiple applications that do different things. Microservices are something totally different.

> If you have well defined interfaces you don't need atomic builds. That is kind of the entire point! Unless there are some legal requirements which mandate "blessed builds".

The benefit of atomic builds is it's very easy to reproduce the behavior of your application at any given moment.

So, for example, a user reports a difficult to find bug and you're only investigating it 2 weeks later. You'll need to rewind your application's "code state" to whatever it was at the time that the user was using your application. For a monolithic application this is as easy as pointing the monolithic repo to some commit SHA.

With microservices this is much harder to do without tooling. This can be for many reasons. Sometimes every microservice is a separate repo. Sometimes your development environment isn't running every microservice.

> Pure functional peeps say no state at all, and arguably the functional paradigm often works, but the performance hit is insane at times.

I'm not even that into FP and this is painful to read. Such a gross misrepresentation of what FP is about.

> Microservices say you need to decouple components and that components can only talk through preestablished and agreed upon messages.

You mean like function calls?

> It enforces this decoupling by isolating different bits of code on the other sides of boundaries

You mean like... visibility? As in public/private interfaces?

> so that network connections of some type have to be used to communicate between modules

Enforcing correctness by placing a network boundary is too heavy handed. There are other ways to achieve the same thing that doesn't involve adding orders of magnitude latency for no reason.

> You get some benefits with this, I still maintain it is easier to update to new breaking changes of dependencies on smaller chunks of code rather than adopt a breaking change to a global dependency across possibly hundreds of thousands of lines of code.

You do realize that large software existed before microservices became a thing, right? And it was maintainable, right? There are so many ways to solve this problem without affecting thousands of lines of code blindly.

There's also just as much risk in having 10 services that are running slightly different versions of the same dependency. In fact that's a freaking nightmare.

> Because you only need to maintain the API contract, microservices can be deployed in a true CI/CD fashion, with teams constantly releasing changes to their services throughout the day, every workday.

Sir, I'm afraid you have a bad case of buzzworditis.



