How to Improve Your Monolith Before Transitioning to Microservices (semaphoreci.com)
235 points by ahamez on July 6, 2022 | 168 comments



Here's a quote from https://grugbrain.dev/ (discussed here on HN a while ago) which seems very appropriate:

> Microservices: grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too. seem very confusing to grug


The fundamental problem is cargo-cult developers trying to copy massive companies' architectures as a startup. They fail to realize those companies only moved to microservices because they had no other option. Lots of startups are hurting themselves by blindly following these practices without knowing why they were created in the first place. Some of it is also done by people who know it isn't good for the company, but they want to pad their resume and leave before the consequences are seen.

The same thing applies to Leetcode-style interviews; honestly, no startup should be using them. They are for established companies that have so many quality applicants that they can afford to filter out great candidates.


re: "copy massive companies"; I don't recall seeing a single "microservice" at Google. Granted, in the 10 years I was there I mostly worked on non-cloud type stuff. But still.

Google has perfected the art of the giant, distributed, horizontally scalable mostly-relational database. They have services, yes. But in large part... they generally all talk to F1 or similar.

Microservices with each thing having its own schema and/or DB seems to me to be a phenomenally stupid idea which will simply lead to abusing your relational model and doing dumb things like your own custom bespoke server-side joins with your own custom bespoke two phase commit or other transaction logic.

Before Google I worked in one shop that did 'microservices' and the release process was a complicated nightmare that would have been better solved by a complicated combinatorial graph optimization library. There were cross service RPCish calls made all over the place to piece data together that in a more monolithic system would be resolved by a simple fast relational join. I shudder to remember it.

But I'm just an old man. Pay no heed.


Meanwhile, I come from the perspective that a shared database that everyone talks to is a shared infrastructure point. And... I have seen those cause more problems than makes sense.

My bet is I'm also just an old man in this discussion. Which is to say, I really think we can't emphasize enough how particular to each organization this discussion will be.


If you have shared data with relationships in the data, that is shared infrastructure whether it is physically manifested in one "place" or not.

If you choose to parcel that information out into multiple pieces of infrastructure to avoid a single point of failure, you've now moved the nature of your problem from maintaining the shared database to maintaining the process of managing that data in multiple places.

You'll be doing client-side or service-side joins all over in order to produce a unified piece of data. But note you still haven't removed the dependency on the shared state. You've simply moved the dependency. And in the process made it more complicated.
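For concreteness, here is a sketch (TypeScript, hypothetical service URLs) of what a one-line relational join turns into once the data is parceled out across services:

    // Before: SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id
    // After: a hand-rolled service-side join across two hypothetical services.
    type Order = { id: string; customerId: string };
    type Customer = { id: string; name: string };

    async function ordersWithCustomerNames(): Promise<Array<Order & { customerName: string }>> {
      const orders: Order[] = await (await fetch("http://order-service/orders")).json();
      // N+1 calls; partial failures, retries, and consistency are now application code.
      return Promise.all(
        orders.map(async (o) => {
          const c: Customer = await (await fetch(`http://customer-service/customers/${o.customerId}`)).json();
          return { ...o, customerName: c.name };
        })
      );
    }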

This might have the advantage of making it "not the problem" of the ops/sysadmin/SRE. But it has the problem of making it "the problem" for either the customer (in the case of shitty data in the client) or the developer (in the case of having to fix bugs.) And I can guarantee you that the developers working on "mycoolmicroservice" are way shittier at understanding data serialization and coherence and transaction management than the people who wrote your database engine.

The ACID relational model was developed for a reason. The set of compromises and problems we are dealing with in infrastructure has been known since the 1970s, when Codd, Date, and others proposed it. The right thing to do at the engineering and infrastructure level is to make the pieces of shared database super reliable and performant at scale, not to try to handwave them away by pushing the problem elsewhere.


I mean, you aren't necessarily wrong. But, I will again lean on "depends on fundamental organization factors of where you work."

By and large, there is no reason for people in your HR department to have access to data in your sales department. Even if there are relationships between them. (For payroll/commission/whatever.)

If you are small enough that having wide access to all of the data is fine, then it is fine.

And, ACID isn't some magic bullet that will prevent things from needing an audit process to make sure data is accurate. In particular, if you have an inventory system, your actual physical inventory will trump whatever your database says.


I am also old. And tend to agree with you on this.


I’m older than I was when people started telling me as a tech founder that we needed microservices to be cool. At 25 I felt it didn’t deliver any real customer value and made my life harder; at 35 it seems like the people that do this are not people I want to hire anyway.


I started sorta the opposite and thought that nano(!) services (think one AWS Lambda per API call) were the best approach, and now I look at that younger wisdom with a parental smile..


I agree that we need to at least name nanoservices -- microservices that are "too small". Like surely we can all agree that if your current microservices were 100x smaller, so that each handled exactly one property of whatever they're meant to track, it'd be a nightmare. So there must be a lower limit, "we want to go this small and no smaller."

I think we also need to name something about coupling. "Coupling" is a really fluid term as used in the microservice world. "Our microservices are not strongly coupled." "Really, so could I take this one, roll it back by 6 months in response to an emergency, and that one would still work?" "err... no, that one is a frontend, it consumes an API provided by the former, if you roll back the API by 6 months the frontend will break." Well, I am sorry, my definition of "strong coupling" is "can I make a change over here without something over there breaking", for example rolling back something by 6 months. (Maybe we found out that this service's codebase had unauthorized entries from some developer 5 months ago and we want to step through every single damn thing that developer wrote, one by one, to make sure it's not leaking everyone's data. IDK. Make up your own scenario.)


I'm just surprised no one mentioned https://martinfowler.com/bliki/MonolithFirst.html yet :-)

Nano actually did (and does) make sense from access control perspective - if a service has permissions to do one thing and one thing only, it is much harder to escalate from. But I'm not sure if these benefits outweigh the potential complexity.


Your definition of coupling seems a bit too strong to be useful. By that definition, just about nobody has an uncoupled API, because if anyone uses it (even outside your company) then you can’t really just revert six months of changes without creating a lot of problems for your dependents. If those changes just happen to be not user-facing (e.g. an internal migration or optimizations) then you might be ok, but that is a characteristic of the changes, not of the coupling of a service and its dependents.

IMO it’s more valuable to have strong contracts that allow for changes and backwards compatible usage, so that services that take a dependency can incrementally adopt new features.


That definition of strong coupling is in fact standard. Like if you ask people why they don't want strong coupling they tell you exactly that when you change a strongly coupled thing you induce bugs far away from the thing you changed, and that sucks.

Now you might want this strong coupling between front-end and back-end and that's OK—just version them together! (Always version strongly coupled things together. You should not be guessing about what versions are compatible with what other versions based on some sort of timestamp, instead just have a hash and invest a half-week of DevOps work into detecting whether you need to deploy it or not. Indeed, the idea of versioning a front-end separately from a back-end is somewhat of an abdication of domain-driven design, you are splitting one bounded context into two parts over what programming language they are written in—literally an implementation detail rather than a domain concern.)

Other patterns which give flexibility in this sense include:

- Subscription to events. An event is carefully defined as saying “This happened over here,” and receivers have to decide what that means to them. There's no reason the front-end can't send these to a back-end component, indeed that was the MVC definition of a controller.

- Netflix's “I’ll take anything you got” requests. The key here is saying, “I will display whatever I can display, but I'm not expecting I can display everything.”

- HATEOAS, which can function as a sort of dynamic capability discovery. “Tell me how to query you” and when the back-end downgrades the front-end automatically stops asking for the new functionality because it can see it no longer knows how.

- HTTP headers. I think people will probably think that I am being facetious here, what do HTTP headers have to do with anything. But actually the core of this protocol that we use, the true heart and soul of it, was always about content negotiation. Consumers are always supposed to state their capabilities upfront, they are allowed a preflight OPTIONS request to interrogate the server’s capabilities before they reveal their own, and servers always try to respond with something within those capabilities or else there are standard error codes to indicate that they can't. We literally live on top of a content negotiation protocol and most folks don't do content negotiation with it. But you can.

The key to most of these is encapsulation: the actual API request, whatever it is, does not change its form over that 6-month period. In 12 months we will still be requesting all of these messages from these pub/sub topics; in 12 months’ time our HATEOAS entry point will still be such-and-so. Almost any agreed starting point can work as a basis; the problem is purely that folks want to be able to revise the protocol with each backend release, which is fine but it forces coupling.
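To make the content-negotiation point concrete, here is a minimal sketch (TypeScript; the media type is a made-up example) of a consumer stating what it can handle and degrading gracefully when the server no longer can:

    // The client states its capabilities up front; if the server was rolled back and
    // answers 406 Not Acceptable, the client falls back to the older representation.
    async function fetchReport(url: string): Promise<unknown> {
      const res = await fetch(url, {
        headers: { Accept: "application/vnd.example.report.v2+json, application/json;q=0.5" },
      });
      if (res.status === 406) {
        const fallback = await fetch(url, { headers: { Accept: "application/json" } });
        return fallback.json();
      }
      return res.json();
    }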

There's nothing wrong with strong coupling, if you are honest about it and version the things together so that you can test them together, and understand that if you are going to split the responsibilities between different teams then they will need to have regular meetings to communicate. That's fine, it's a valid choice. I don't see why people who are committing to microservices think that making these choices is okay, as long as you lie about what they are. That's not me saying that the choices are not okay, it's me saying that the self-deception is not okay.


I think strong versioning in event-driven arch is a must, to avoid strong coupling. Otherwise, it becomes even worse than "normal" call-driven service arch, because it's already plenty hard to find all of the receivers, and if they don't use strong versioning then it's so easy to break them all with one change in the sender.


Yeah I would tend to agree! I think there is freedom to do something like semver where you have breaking.nonbreaking.bugfix, which might be less “strong”... But in a world with time travel you tend to get surprised by rollbacks which is why they are always my go-to example. “I only added a field, that's non-breaking” well maybe, but the time reversed thing is deleting a field, are you going to make sure that's safe too?

And I think there's a case to be made for cleaning up handlers for old versions after a certain time in prod of course.
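A sketch of what that tolerance looks like on the consumer side (TypeScript; the event shape is hypothetical): dispatch on the version field, ignore fields you don't recognize, and park versions you can't handle instead of guessing.

    // Version 2 only added an optional field, so a handler written against version 1 still works.
    type InvoicePaid = {
      eventName: "billing.invoice-paid";
      version: number;
      params: { invoiceId: string; amountCents: number; currency?: string }; // currency added in v2
    };

    function handleInvoicePaid(event: InvoicePaid): void {
      if (event.version > 2) {
        console.warn("unknown invoice-paid version, parking for later:", event.version);
        return; // dead-letter it rather than misinterpret it
      }
      console.log(`invoice ${event.params.invoiceId} paid: ${event.params.amountCents}`);
    }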


I'm not sure how rolling back 6 months of updates could ever be possible unless you never ship new features.


As fuel for thought, consider that when python was being upgraded from 2 to 3, shipping lots and lots and lots of new features, and breaking several old ones, there were many libraries which supported both versions.

Some of them may have functioned by only programming in the mutually agreed-upon subset, but given that that subset did not really include strings, that was clearly the exception rather than the norm. Instead people must have found a way to use the new features if they were available, but fall back to old ways of doing things if they weren't.

Some of those mechanisms were internal to the language, e.g. “from __future__ import with_statement”. So how can my JSON-over-HTTP API, within that JSON, tell you what else is possible with the data you fetched?


Ugh sorry for weird autocorrect errors, I had to abandon for a mealtime haha... I was going to add that there are also mechanisms outside the language, for instance that you might just detect if you can do something by catching the exception if it doesn't work... If it's syntax like JS arrow functions this might require eval() or so. The API version of this is just to expect the failure codes that would have been present before you added the new feature, usually 404, and handle them elegantly. If it's a follow-up request that RFC-“must” be done for consistency, maybe your app needs a big shared event bucket where those can always go, so that when “POST /foo-service/foos/:fooId/bars/:barId/fix” fails, you can fall back to posting to the deferred request buffer something like

    {
      eventName: "foo-service.fix-bar-requested", 
      version: 1,
      params: {fooId: 5, barId: 7}
    }
Just catch the exception, record the problem for later resolution, use a very generic template for storage that doesn't have to change every month...

Think creatively and you can come up with lots of ways to build robust software which has new features. Remember the time reversal of adding a feature is removing it, so you have to fix this anyway if you want to be able to clean up your codebase and remove old API endpoints.
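A sketch of that catch-and-defer idea in TypeScript (the endpoint paths and the deferred-request buffer are hypothetical, matching the JSON shape above):

    // Try the fine-grained endpoint; if it isn't there (yet, or any more), park the request
    // in the generic deferred-request buffer for later resolution.
    async function requestBarFix(fooId: number, barId: number): Promise<void> {
      const res = await fetch(`/foo-service/foos/${fooId}/bars/${barId}/fix`, { method: "POST" });
      if (res.status === 404) {
        await fetch("/deferred-requests", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            eventName: "foo-service.fix-bar-requested",
            version: 1,
            params: { fooId, barId },
          }),
        });
      }
    }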


I almost went down that road once, and got very good advice similar to the first couple of steps (plan, do your homework) in the original post here: "before you start making implementation choices to try to force certain things, try to really understand the architecture you want and why you want it, and ask yourself if it's a technological problem you're hitting or just a system design one"


Yep, have seen this firsthand. Beware the evangelist who shows up bearing fancy new architectural patterns and who can't be bothered to understand or discuss the details of the existing system.


Same here.


I have seen the resume padder on many occasions, and have had to clean up after they left. It's amazing to see a team become visibly happier with their day job when you move to sensible foundations based on sound reasoning.

The teams that were the most efficient, and happiest with their day to day were the ones which picked frameworks with good documentation, a plethora of online resources, and ignored the latest shiny toy on the market.


What's amazing is how often people have to see it to believe it.

One of my running jokes/not jokes is that one day I'm going to hire a personal trainer, but then instead of having them train me, I'm just going to ask them a million questions about how they convince people who have never been 'in shape' how good it's going to feel when they are.

Because that's what it often feels like at work.


Wouldn’t it be an engineer’s hell working at a place that hires based on resumes from resume-driven developers?

You learn the latest shiny tech, then change jobs to a new company using your sparkling resume, self-selecting for companies that like the beautiful sparkles in your resume, then you get to work with other like-minded sparkle tech lovers.

What companies do resume padders end up at?

Quote from elsewhere in thread: “Whatever is the new thing, jump on it hard and fast like your life depends on it.”


This; I've repeated it dozens of times at my co. to the same people. It was funny, then weird, now it's becoming depressing. Not sure what's next.


Next, you find a sensible company with reasonable people.


It's hard to find a non-FAANG company not filled with CV padders (I mean the people who always vote for unnecessary but shiny frameworks and additional architectural complexity).

People are motivated by money more than anything else, and a CV which looks good opens access to that money. So the incentive to pad a CV is very high, even at the expense of the system they build and their teams and companies.


>> The fundamental problem is cargo-cult developers trying to copy massive companies' architectures as a startup. They fail to realize those companies only moved to microservices because they had no other option.

This.

Microservices add complexity; therefore, they should be avoided unless necessary. That's Engineering 101 folks.


I once worked with a system where all local function calls had parameters serialized to XML, sent over, then deserialized by the calling functions.

The framework was meant to be network transparent, remote calls looked the same as local calls, everything was async, and since everything used service discovery you could easily refactor so that a locally provided service was spun off to a remote computer somewhere and none of the code that called it had to change.

So anyway 50% of the CPU was being used on XML serialization/deserialization...


> remote calls looked the same as local calls

To some degree this is actually nice! I mean, one major reason local API calls (in the same programming language) are nicer than network calls – besides avoiding latency – is that network APIs rarely come with strong type safety guarantees (or at least you have to bolt them on afterwards, think OpenAPI/Swagger). So I actually wish we were in a world where network calls were more similar to local API calls in that regard.

But of course, in the concrete situation you're describing, the fact that

> all local function calls had parameters serialized to XML

sounds like a very bad idea.


IBM has hardware for that, and it is called the "IBM WebSphere DataPower SOA Appliance". It has special hardware-accelerated XML processing...


There was also a version that would automatically turn JSON into XML and vice versa. When REST started pushing out SOAPy stuff, we actually had a meeting where the sales guy (at the behest of our XML loving CTO) showed us a diagram that was roughly this:

Web ------------- Premises

JSON ---> || ---> XML

JSON <--- || <--- XML

Price was $9,600 for the 2U appliance that makes all your non-enterprisy JSON into something fitting of today's modern enterprise, circa 2001. To this day, I laugh when I think about it.


How does such a thing work? Parsing XML is full of branches and dynamic allocations, which general-purpose CPUs are already good at.


I would assume it works by encoding primarily a strict, predictable format. And when decoding, primarily accelerating the stricter, more predictable parts, and including some regular CPUs for the difficult bits.


In 2013 I was working on this stack: The frontend server talks in XML to the backend server, so you need 6 massive physical servers to handle the load. The load being XML serialization.

People laugh at my SQL in REST classes, aka no layer at all, but it grew with the company and we now have 4 layers, 4 USEFUL layers. And with this ultra lean architecture, we were profitable from Day 90, so we are here 7 years later.


Is there ever a good reason to change from monolith to microservices unless you have already solved the factoring problem in the monolith? The factoring problem doesn't get easier because you add network calls into the mix, but being well factored first makes adding the network calls a lot easier. If you're lucky your framework or platform already has that part figured out.

Maybe one exception is if you absolutely must change the programming language at the same time, but even then I would suggest doing the rewrite as a monolith first for the same reason, so you again don't have to deal with network calls at the same time as solving an already insanely hard problem.

There's the argument that microservices lets you gradually replace the old monolith piece by piece, but there's nothing special about microservices for that, you can do that with two monoliths and a reverse proxy.

And at the end of either you might find that you didn't need microservices in the first place, you just needed to factor the application better.


The benefit of microservices is that you can divide areas of responsibility among teams and (theoretically) uncouple the change processes for each.


Dividing areas amongst teams is only a theoretical benefit if you haven't solved the factoring problem. Otherwise teams are still going to be cross-dependent on other teams constantly.

But if you've solved the factoring problem, your team and another team can happily work next to each other in the same codebase with less back-and-forth anyway. And the question reduces largely to if you deploy it all at once or not. Sometimes all at once is much easier anyway.

I've seen microservices used by management as a way to force engineering into factoring their shit correctly far more often than I've seen it done because things were already factored well (because cross-team coordination in different services is much harder than just jumping around in the monolith's single repo in a single PR). It's an easy lever for management to pull, too, because most devs don't see it as something they're being forced to do, they think they're getting their wish at rewriting and modernizing things. ;)


> factoring their shit correctly

When is shit ever factored "correctly"? Next week, your product owner will come over with a new idea for a feature spanning multiple modules and suddenly you will realize you should have factored your shit differently and maybe not split things up into multiple microservices prematurely.

So what I'm trying to say is: Grug is right. Factoring a piece of software correctly is one of the hardest issues. The assumption that you can do it correctly is the reason why so many microservice architectures are such a PITA to work with.


It’s true that factoring is something you can spend an unbounded amount of effort on, but it’s too unwieldy to always say factored “so well that there is no point working more on it because of diminishing returns”. So when you see “correctly”, read that as “so well that there is no point working more on it because of diminishing returns”.

Arguably that is what correct factoring is anyway, because it’s simply not correct in any way to fiddle with a piece of software without generating any real value for either the business objectives or the developers that have to work with it down the road.


I don't understand how this isn't mitigated by a CODEOWNERS file[0]. Give the directories to the teams that should own them, and now they have absolute control over the PR review process accordingly (and responsibility of incoming code for the directories that they own too).

[0] https://docs.github.com/en/repositories/managing-your-reposi...
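For example (directory and team names are made up):

    # .github/CODEOWNERS: each directory is owned and reviewed by one team
    /billing/   @acme/billing-team
    /auth/      @acme/identity-team
    /docs/      @acme/tech-writers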


That just solves one problem, and not the most significant. Other issues, not usually inherent to the monolith pattern but very common:

* Release management coupling

* CI coupling

* Deployment coupling

* Binary bloat

* Monitoring/alerting/logging noise

* Lack of security policy granularity

* Language and library choice coupling

Of course, people mess these things up with the service model as well, but the failure modes there don't propagate as much (unless they're organizationally mandated...)


… which often turns everything into siloes not talking to each other, using different languages/tools/processes/release plans. No thanks.


The world is not so black and white, and silos have legitimate business purposes. The reality is that any one contributor or team can only handle so much. When duties must be split between teams for effective ownership of responsibilities, separation of concerns becomes paramount to the continued progress in any one domain.

If all you have is one team, microservices make no sense.


I've tried searching this and couldn't find anything related to microservices - what is the factoring problem or system that grugbrain is referring to?

Is it something to do with calculating the cost of splitting off a monolith into a microservice vs. some other method?


They are referring to software structure and decomposition.

This is a hard problem that spans architectures; you can have a poorly factored microservice architecture, just as you can have a poorly factored monolith.

These two posts from Shopify are good:

https://shopify.engineering/deconstructing-monolith-designin...

https://shopify.engineering/shopify-monolith


I think they are talking about modularisation, i.e. breaking the code into somewhat independent units.


At first, you're ignorant. Then, dogmatic. And then you learn what to apply and when.

Due to the average age of programmers, a lot of people fall into the first two categories.


An analogy that seems to connect with some people at least:

You know the Martial Arts movie trope where 20 minutes into the movie, the master tells the student never to do X, and then in the final battle the student only triumphs by doing X? That's because what you can do and what you should do changes as you figure out what you're doing. The rule applies to you until you understand why the rule doesn't apply to you/this situation.

The subtlety here is in knowing the difference between "the rules don't apply to me" and "I understand the situation where the rule is reasonable, and safe, and either this is not one of those situations, or 'safe' doesn't address an even bigger danger." The former is the origin story of the villain, the latter the origin story of the unlikely hero. Or less dramatically, it's Chesterton's Fence.


I translated this to plain english!

https://github.com/reidjs/grug-dev-translation


grug is the peak of SWE wisdom to me


I didn't know about this website. It's hilarious. Thank you for sharing.


> Pick something easy to start with, like edge services that have little overlap with the rest of the monolith. You could, for instance, build an authentication microservice and route login requests as a first step.
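In practice that first step is often just a routing rule at the edge. A sketch, assuming an Express gateway with http-proxy-middleware in front of the monolith (the service URLs are hypothetical):

    import express from "express";
    import { createProxyMiddleware } from "http-proxy-middleware";

    const gateway = express();

    // Only login traffic goes to the new auth microservice; everything else still hits the monolith.
    gateway.use("/login", createProxyMiddleware({ target: "http://auth-service:3000", changeOrigin: true }));
    gateway.use("/", createProxyMiddleware({ target: "http://monolith:8080", changeOrigin: true }));

    gateway.listen(8000, () => console.log("gateway listening on :8000"));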


If you have a large engineering team (hundreds+ devs) with a large codebase then having a monolith can slow down developer velocity.

There’s massive scope with the monolith, build tools start to strain, etc


Interesting. I found working on FB’s monolith to have, on average, higher velocity and fewer blockers, than working on microservicified systems at other places 2+ orders of magnitude smaller in both engineering and codebase size.


It is not fair to compare Facebook's monolith and the monolith at the average company, as they are not really the same thing. The tooling available at Facebook is built and maintained by a team larger than the engineering departments at most companies.

There comes a point where regular off-the-shelf tooling does not scale sufficiently well for a monolith. Test suites and builds start to take too long. Deployments get increasingly complicated. Developers start to get in each other's way, even when working on unrelated features. Additionally, if you are using an untyped, interpreted language, keeping a large app well organized can also be a problem.

Microservices is a tool for dealing with complexity and certainly not the only one. However, building the tooling and infra for a large and sophisticated monolith is not simple and not guaranteed to be an easier solution to the problems listed above.


How is this relevant? My comment is in response to an observation about "large engineering teams," not "the monolith at the average company."

At the average company, standard tools will work fine, while companies with large engineering teams have the resources to maintain custom tooling.


You are assuming that the observed tool strain scales with the number of developers. In my experience it scales with the number of coupled concerns inside the same repo. Now, this may be somewhat correlated with the number of developers, but not entirely. So, again in my experience, you can end up with a moderately sized company running into tool limits with a monorepo. FB doesn't have those problems because they use different tools.


> FB doesn't have those problems because they use different tools.

Exactly - instead of using microservice-oriented tools, they use tools organized around monoliths. And that decision serves them well. That's the whole point.


Microservices move the complexity rather than solve it.

The dependency boundaries between portions of the data model can never be cleanly demarcated -- because that's not how information works, especially in a growing business -- so there's always going to be either some loss of flexibility or addition of complexity over time.

Individual developers getting out of each other's way just ends up getting pushed to getting in each other's way at release time as the matrix of dependencies between services explodes.

Engineers' jobs become more about the lives and dramas of the services they work on than about the business domain. You end up building your organization and reporting structure around these services, rather than the business needs of the customer. And then you end up indirectly or directly shipping that org chart to the world in delays or bugs caused by your fragmentation.

Instead of modeling facts about data and their relationships, and constructing the relational model which can capture this, the developer in the microservice model becomes bogged down in service roles and activities instead, again taking them away from the actual problem: which is organizing information and making it accessible to users/customers.

It's a shell game.

The Facebook monolith works because engineers there invested time in building the tooling you're complaining is not available to others. Same with Google: Google invested in F1, etc. because it evaluated the cost to do otherwise and it made sense to invest in infrastructure.

Yes, small companies can't often afford this. Luckily they have two things on their side:

Most small companies don't have a fraction of the scale issues that a FB or a Google have. So they can afford to monolith away for a lot longer than they seem to think they can, while they put in infrastructure to scale the monolith.

The industry as a whole has invested a lot in making existing things scale. e.g. you can do things with a single Postgres instance that we never would have dreamed about 10 years ago. And when that falls over, there's replication, etc. And when that falls over, guess what? There's now high performance distributed ACID SQL databases available for $use.

Microservices is surely one of the longest lived, biggest cargo cults in our industry. I've seen others come and go, but microservices really seems to cling. I think because it has the perception of breaking business problems down into very small elegant independent atomic pieces, so it has a very.. industrial revolution, automation, factory floor, economies of scale vibe. But that's not what it is.

There are places for it, I'm sure. But systems with highly interrelated data and quickly changing requirements are not well suited.

IMHO.


Yeah, I've seen stuff carved into tiny, fragile microservices when the number of nodes was under ten. Stupid, IMO, and it took a stable service and made it a flaky mess. It was done because of dogmatic "It must be in containers in the cloud with microservices, because that is The Way(TM)". Literally there was an initiative to move everything possible to the cloud in containers with lots of microservices because one of the place's gurus got that religion. It increased complexity, decreased reliability and cost a lot of money for not much benefit.

Until you have well over 20 systems doing one thing/application, trying to treat bespoke services like cattle instead of pets is silly. It will also drive your ops people to drink, especially if it's done by "DevOps" that never get paged, and refer to your group in meetings with other companies as "just ops". (Yes, I'm still salty about it.)

Often I think it's "resume driven development", especially if the people pushing for it want to abandon all your existing tools and languages for whatever is "hot" currently.


I suspect a lot of companies are pushing for MS architecture because it's trendy, not because it makes sense for their own use case, which is what is causing such a strong reaction on HN. Moreover, I suspect that the services end up being very small and, as such, rather poorly self-contained. All I wanted to say with my comment is that microservices are a tool, and there are certain scenarios where they could be a good solution to a problem (although perhaps not the only one).

I do want to provide a counterpoint to the idea that everything must be a monolith. Years ago I worked at a medium-sized company that worshiped at the altar of the monolith, and the mantra of "this is how Google does it" was often repeated. Unfortunately, what they were doing was far from what Google was doing. Their monolith incorporated solutions for multiple, largely unrelated business lines. All of the code was deployed across all of the hundreds of servers, and data was shared without any sense of boundaries across the entire app. The company didn't want to invest significantly in the appropriate tooling to make such a large app function well (multiple gigabytes of source code written in PHP). The result was a terrible dev experience where deployments took 4 to 6 hours on a good day and the blast radius of a given change was sometimes hard to verify. It's akin to sharing servers between Gmail and Google Docs, and mixing up the code together for good measure (so it can be reused). This created a culture of slow-moving, large development cycles as well as a lot of defensiveness and fear within the software engineering group. Suffice it to say, it was not a pleasant experience.

Before I get downvoted a bunch, I should say I also tend to prefer monoliths as much as possible. They are much simpler to work with, much simpler to test, and easier to analyze. Also, if using a good compiled language, the compiler can help eliminate a lot of common silly regressions. However, that being said, I would consider making a new service in certain cases. For example, if I was building a new, unrelated product, or if there was a part of the app that was functionally very different from the rest of the app.

I also tend to distinguish between a monorepo (a single repo, which can have more than one service in it) and a monolith (a single app). If you are willing to set up some more sophisticated CI/CD tooling, I think a monorepo is the way to go, even when it makes sense to have more than one service in the design of the app.


Agreed.


Surely there should be something between a gigantic monolith and microservices. I would call it a service.


Typically, "monolith" implies services - ie, the "backing services" from a traditional 12-factor monolith:

- https://12factor.net/backing-services

Monolith vs. Microservices comes about because microservices proponents specifically set it up as an alternative to a traditional monolith + services stack:

- https://www.nginx.com/learn/microservices-architecture/#:~:t...


> Typically, "monolith" implies services - ie, the "backing services"

I don't know if that's a common interpretation - in the monolith-loving companies I've worked with it very much meant a single ball of code (e.g. huge Rails app), not separate services.


Yes, not sure why we have so many brainwashed fanatics who see the world only as hotdog and not-hotdog: microservices and monoliths.


Seriously! I think there is a good space for the concept of "mid-services": cluster similar and interdependent services and service fragments together, so they split into logical groups for updating.

It would be sort of like "Module A is authentication and session management, module B is the data handling layer, and module C is the presentation and processing layer." Each of those under a microservices dogma would be two to four microservices struggling to interoperate.

I read the book written by the dev that advocated for microservices. I wanted to throw it across the room, but it was an ebook. He literally went for over half the book before he even addressed operability. Everything was about developer convenience, not operating it with an eye toward user satisfaction. The guy was clueless.


That’s what Google does. Seems to work fine for them.


I think services became really popular because of Amazon, and that quickly went to microservices IMHO. It's perhaps analogous to say: "That's what Amazon does. Seems to work fine for them." It's very interesting that there are big tech companies now with well-known monoliths vs. Amazon's services.

Though, Amazon's services exist so that any internally useful tool can be externally exposed and then rented out to generate more revenue. The reasons/benefits are very different compared to, say, decoupling or the other reasons given for service-oriented architectures.


One awesome and often overlooked benefit of microservices is how they simplify security/dependency updates.

With a monolith, dependency updates, especially breaking ones, often mean either all development stops for a "code freeze" so the update can happen, or you have a team responsible for doing the update and they are trying to update code faster than other devs add new code.

The result of this is that updates get pushed back to the last minute, or are just never done. I've seen old (ancient) versions of OpenSSL checked into codebases way too often.

With microservices, you can have a team that isn't as busy take a sprint to update their codebase, carefully document best practices for fixing breaking changes, document best practices for testing the changes, and then spread the learning out to other teams, who can then update as they have time or based on importance / exposure of their maintained services.

It is a much better way of doing things.

It also means some teams can experiment with different technologies or tool chains and see how things work out. The cost of failure is low and there isn't an impact to other teams, and build systems for microservices tend to be much simpler than for monoliths (understatement...)


Microservices are a heavy handed way to draw boundaries around your software so that bad technical decisions don't bleed across different teams. Obviously there is some benefit to that but there is also a massive tradeoff - especially for certain types of software like complex UIs.

> With a monolith, dependency updates, especially breaking ones, often mean either all development stops for a "code freeze" so the update can happen, or you have a team responsible for doing the update and they are trying to update code faster than other devs add new code.

In all my years I've never seen a code freeze due to a dependency update. Maybe the project you were working on was poorly engineered?

> The result of this is that updates get pushed back to the last minute, or are never just done. I've seen old (ancient) versions of OpenSSL checked into codebases way too often.

There should be nothing stopping you from running multiple versions of a dependency within a single monolithic project.

> With microservices, you can have a team that isn't as busy take a sprint to update their codebase, carefully document best practices for fixing breaking changes, document best practices for testing the changes, and then spread the learning out to other teams, who can then update as they have time or based on importance / exposure of their maintained services.

Gradual adoption of new dependencies has nothing to do with microservices.


> In all my years I've never seen a code freeze due to a dependency update. Maybe the project you were working on was poorly engineered?

I spent a decade at Microsoft, I started before cloud was a thing. All code lived in monoliths[1]. I once had the displeasure of looking at the source tree for XBox Live circa 2008 or so. Nasty stuff.

"Don't check anything in today, we're trying to finish up this merge" was not an uncommon refrain.

But you are right, oftentimes there weren't code freezes; instead, system-wide changes involved obscene engineering efforts so developers could keep the change branch up to date with mainline while dependencies were being updated.

I'll confess my experience with large monolithic code bases is all around non-networked code, but IMHO the engineering maintenance challenges are the same.

> There should be nothing stopping you from running multiple versions of a dependency within a single monolothic project.

Build systems. They are complicated. I spent most of my life pre JS in native C/C++ land. Adopting a library at all was an undertaking. Trying to add 2 versions of a library to a code base? Bad idea.

Heck even with JS, Yarn and NPM are not fun. And once a build system for a monolith is in place, well the entire idea is that a monolith is one code base, compiled into one executable, so you don't really swap out parts of the build system.

Hope none of your code is dependent upon a compiler extension that got dropped 2 years back. And if it is, better find time in the schedule to have developers rewrite code that "still works just fine".

Contrast that, in my current role each microservice can have its own build tools, and version of build tools. When my team needed to update to the latest version of Typescript to support the new AWS SDK (which gave us an insane double digit % perf improvement), we were able to even though the organization as a whole was not yet moving over.

Meanwhile in Monolith land you have a build system that is so complicated that the dedicated team in charge of maintaining it is the only team who has even the slightest grasp on how it works, and even then the build systems I've seen are literally decades old and no one person, or even group of people, have a complete understanding of it.

Another benefit is that microservices force well defined API boundaries. They force developers to consider, up front, what API consumers are going to want. They force teams to make a choice between engineering around versioning APIs or accepting breaking changes.

Finally, having a REST API for everything is just a nice way to do things. I've found myself able to build tools on top of various microservices that would otherwise not have been possible if those services were locked up behind a monolith instead of having an exposed API.

In fact I just got done designing/launching an internal tool that was only possible because my entire organization uses microservices. Another team already had made an internal web tool, and as part of it they made a separate internal auth microservice (because everything is a microservice). I was able to wire up my team's microservices with their auth service and throw a web UI on top of it all. That website runs in its own microservice with a customized version of the org's build system, something that was possible because as an organization we have scripts that allow for the easy creation of new services in just a matter of minutes.

Back when I was at Microsoft, none of the projects I worked on would have allowed for that sort of absurd code velocity.

Another cool feature of microservices is you can choose what parts are exposed to the public internet, vs internal to your network. Holy cow, so nice! Could you do that with a monolith? Sure, I guess. Is it as simple as a command line option when creating a new service? If you have an absurdly well defined monolith, maybe.

Scaling, different parts of a system need to scale based on different criteria. If you have a monolith that is running on some # of VMs, how do you determine when to scale it up, and by how much? For microservices, you get insane granularity. The microservice pulling data from a queue can auto-scale when the queue gets too big, the microservice doing video transcoding can pull in some more GPUs when its pool of tasks grows too large. With a monolith you have to scale the entire thing up at once, and choose if you want vertical or horizontal scaling.

You can also architect each microservice in a way that is appropriate for the task at hand. Maybe pure functions and completely stateless makes sense for one service, whereas a complex OO object hierarchy makes sense someplace else. With microservices, impedance mismatches are hidden behind network call boundaries. Yes you can architect monoliths in vastly different fashions throughout (and I've done such), but there is a limit to that.

E.g. with microservices you can have one running bare metal written in C++ on a hard real time OS, and other written in Python.

Oh and well defined builds and deployments is another thing I like about microservices. I've encountered monoliths where literally no one knew how to completely rebuild the production environment (I overheard from another engineer that Xbox live services existed in that state for awhile...)

And again, my bias is that I've only ever worked on large systems. Outside my startup, I've never worked on a project that didn't end up with at least a couple hundred software engineers writing code all towards one goal.

Is k8s and microservices a good idea for a 5 person startup? Hell no. I ran my startup off a couple VMs that I SCP'd deployments to along side some Firebase Functions. Worked great.

[1] This is not completely true, Office is divided up pretty well and you can pull in bits and pieces of code pretty independently, so if you want a rich text editor, that is its own module. IMHO they've done as good of a job as is possible for native.


> Heck even with JS, Yarn and NPM are not fun.

    $ mkdir hello && cd hello
    $ npm init -y
    $ npm install react17@npm:react@17
    $ npm install react18@npm:react@18
    $ echo "var react17 = require('react17'); var react18 = require('react18'); console.log(react17.version, react18.version);" > index.js
    $ node index.js
    17.0.2 18.2.0
That's the problem with a lot of these discussions. Conclusions are often based on layers of other conclusions that could be wrong.

> That website runs in its own microservice with a customized version of the org's build system, something that was possible because as an organization we have scripts that allow for the easy creation of new services in just a matter of minutes.

I don't see what this story has to do with microservices. That kind of velocity can easily be achieved with a single codebase too.

> Scaling, different parts of a system need to scale based on different criteria.

That's not at all unique to microservices.

A monolithic application can run in different modes. For example, if you run `./my-app -api` then it'll start an API server. If you run `./my-app -queue` then it'll run the message queue processor. And so on.

This way you can run 10 API servers and 50 queue processors and scale them independently. How an application is deployed isn't necessarily tied to how it's built.
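A minimal sketch of that single-artifact, multiple-modes idea in TypeScript (the imported modules are hypothetical):

    // main.ts: one build artifact, started in different modes per deployment.
    import { startApiServer } from "./api";     // hypothetical module
    import { startQueueWorker } from "./queue"; // hypothetical module

    const mode = process.argv[2] ?? "-api";

    if (mode === "-api") {
      startApiServer({ port: 8080 });        // run 10 of these behind the load balancer
    } else if (mode === "-queue") {
      startQueueWorker({ concurrency: 4 });  // run 50 of these, scaled on queue depth
    } else {
      console.error(`unknown mode: ${mode}`);
      process.exit(1);
    }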

> Another cool feature of microservices is you can choose what parts are exposed to the public internet, vs internal to your network. Holy cow, so nice! Could you do that with a monolith? Sure, I guess. Is it as simple as a command line option when creating a new service? If you have an absurdly well defined monolith, maybe.

I'm confused.

Is there some magical program called "microservice" that takes command line options? What does a systems architecture approach have to do with ease of deployment?

The whole public/private networking thing is an infrastructure decision. Your monolithic application could easily have internal endpoints that are only exposed when certain flags are set.

> With microservices, impedance mismatches are hidden behind network call boundaries. Yes you can architect monoliths in vastly different fashions throughout (and I've done such), but there is a limit to that.

What are those limits praytell?

> E.g. with microservices you can have one running bare metal written in C++ on a hard real time OS, and other written in Python.

That has nothing to do with microservices. You can thank the UNIX architecture for that. Most computers run more than one program written in more than one programming language.

> Oh and well defined builds and deployments is another thing I like about microservices. I've encountered monoliths where literally no one knew how to completely rebuild the production environment (I overheard from another engineer that Xbox live services existed in that state for awhile...)

So it was architected poorly. Why is that a strike against monoliths? Are you saying that messy builds are impossible with a microservice architecture? One of the top arguments against microservices is how much of a rats nest they are in production - in practice.

Some companies are able to do it right with discipline and good engineering. The same can be said for monoliths.

By the way, one of the big problems with microservice architectures is that you don't get atomic builds. Very difficult problem to work around. Usually with lots of tooling.


> That's the problem with a lot of these discussions. Conclusions are often based on layers of other conclusions that could be wrong.

Great, now do multiple versions of typescript. and jest. and ts-jest. And while you are at it, how would you change over from using a legacy test system, such as sinon, to jest? With microservices it is easy, newly generated services use jest, old ones can update when they want to.

Could you engineer a code base so different folders use different testing frameworks and different versions of different testing frameworks? Sure. At the cost of added complexity.

> I don't see what this story has to do with microservices. That kind of velocity can easily be achieved with a single codebase too.

For how long? Until it gets to what size?

I've spent my life working on large code bases, the velocity always falls off, typically it is best in the first 6 months, steady state for another year or 2, and then undergoes a steady decline for another 2 or 3 years after that until it levels off at "well, it's legacy what you do expect?"

> A monolithic application can run in different modes. For example, if you run `./my-app -api` then it'll start an API server. If you run `./my-app -queue` then it'll run the message queue processor. And so on.

So generating multiple executables from one repo? That is a monorepo; if you have a bunch of processes that communicate with each other through some channel, be it pipes or sockets or whatever, then I'm going to argue you have microservices. They may be running on one machine, but again, you have multiple processes talking to each other.

Now I'll also argue that such code bases are more likely to have strong coupling between components. It doesn't have to be that way, but it becomes harder to resist (or to just stop junior devs from inadvertently doing).

> The whole public/private networking thing is an infrastructure decision. Your monolithic application could easily have internal endpoints that are only exposed when certain flags are set.

So one executable with multiple network endpoints exposed? Sure. But you have lost some security boundaries in the process. If the only system that can ever have customer PII lives inside a firewall with no public IP address, you've gained security VS PII floating around in the same process memory address as your public endpoints.

Also now teams have to maintain two versions of their API, one internal for binary callers, another for the exposed network endpoint. Odds are that exposed network API is just not going to ever be written, or if it is, it will quickly become out of date unless it is very highly often used. (Which means if in a few years someone has a great idea for an underutilized API, it will likely be missing major version updates, and be old and crufty and barely functioning.)

> That has nothing to do with microservices. You can thank the UNIX architecture for that. Most computers run more than one program written in more than one programming language.

The characteristics of hardware that do DB queries are different than the HW that does video transcoding. With microservices you can shove your video encoding service on some machines with powerful GPUs, and the part of your code that talks to the DB on another machine, and when your s3 bucket gets full of files you need to transcode, you can spin up more transcoding machines.

Vs a monolith where you have everything running on one machine under one process. Sure you can just scale up your entire app, but that is absurd and I'm sure literally no one does that, people who need to transcode lots of video have dedicated machines that do the transcoding, because having the same machine that serves up your website also transcode video is beyond inefficient.

Also, the Unix philosophy is small dedicated programs that do one thing really well. Kind of like... microservices.

> What are those limits praytell?

Different problem domains are best solved with different programming paradigms. Having dramatic tonal shifts within one application is confusing really fast. Having part of an app use mobx and another part be completely stateless and a third part use complex OO domain modeling is going to give maintainers whiplash.

And then you have to have all those different systems talk to each other. Which means you need translation layers for "get shit out of mobx and put it into this other system".

At which point you either have well defined boundaries, or a ball of spaghetti, but once an app has grown to the point of absurdly different problem domains, it may be time to start thinking about breaking it up into multiple apps and figuring out how they should talk to each other.

> By the way the big problems with microservice architectures is you don't get atomic builds. Very difficult problem to work around. Usually with lots of tooling.

If you have well defined interfaces you don't need atomic builds. That is kind of the entire point! Unless there are some legal requirements which mandate "blessed builds".

Microservices of course have a huge performance penalty, but so does every other programming paradigm. IMHO paradigms exist to restrict what programmers can do. OO people say you need to shove your nouns and verbs together and only expose a limited amount of state. Those are some serious limitations, and as the Java ecosystem has demonstrated to us, they aren't enough to prevent complex balls of mud from being developed.

Pure functional peeps say no state at all, and arguably the functional paradigm often works, but the performance hit is insane at times.

Microservices say you need to decouple components and that components can only talk through preestablished and agreed upon messages. It enforces this decoupling by isolating different bits of code on the other sides of boundaries so that network connections of some type have to be used to communicate between modules.

You get some benefits with this, I still maintain it is easier to update to new breaking changes of dependencies on smaller chunks of code rather than adopt a breaking change to a global dependency across possibly hundreds of thousands of lines of code.

Also, you can deploy a service that Does A Thing to the appropriate hardware. I'm still not sure what you are talking about in reference to a monolith that spits out multiple executables, that sounds like a monorepo that generates multiple services, each of which I presume has to talk to each other somehow.

Because you only need to maintain the API contract, microservices can be deployed in a true CI/CD fashion, with teams constantly releasing changes to their services throughout the day, every workday.

There are downsides. Microservices have a huge performance overhead. Refactoring across microservices sucks, thus why some teams adopt a monorepo approach, but that brings along its own set of complications, refactors across multiple services means you have to deploy all those services at the same time. For an organization that may be used to managing a hundred+ microservices independently of each other, all at once deployments can be a huge productivity loss.

Microservices also have build and deployment complexity. It is largely an up front tooling cost, but it is a huge cost to pay.


> Great, now do multiple versions of typescript. and jest. and ts-jest.

Why would I do that?

There is absolutely no reason a single application should be using two versions of Typescript at the same time. What you're talking about is a combinatorial explosion of packages and versions - it's bad, bad, bad for software quality.

Upgrade paths can be done gradually. I've done and seen it done more times than I can count.

> For how long? Until it gets to what size?

That depends a great deal on the type of software and how it was engineered.

If you inherit a mess and your engineering team is incapable of not creating a mess then, sure, drawing heavy handed microservice boundaries might make sense. But in that case you're solving an organizational problem not a technical problem. All of the technical benefits you've been claiming are moot.

> So generating multiple executables from one repo? That is a monorepo. If you have a bunch of processes that communicate with each other through some channel, be it pipes or sockets or whatever, then I'm going to argue you have microservices. They may be running on one machine, but again, you have multiple processes talking to each other.

You could generate multiple executables. Or you can generate a single executable that's able to run in different "modes" (so to speak).

What you've shown throughout your writing is you don't quite understand what microservice architecture is. Multiple applications communicating over the network (or whatever) is NOT a "microservice". Why wouldn't you just call that a "service"?

A microservice is a very small application that encapsulates some "unit" of your larger application. Where that boundary is drawn is obviously up for debate.

I don't wanna digress but I can assure that two applications on the same server talking over a network is NOT a microservice. That's just... like... software, lol.

> Now I'll also argue that such code bases are more likely to have strong coupling between components. It doesn't have to be that way, but it becomes harder to resist (or to just stop junior devs from inadvertently doing).

Creating a network boundary (a HUGE performance cost) just to prevent people from doing stupid stuff is not good software architecture.

> So one executable with multiple network endpoints exposed? Sure. But you have lost some security boundaries in the process. If the only system that can ever have customer PII lives inside a firewall with no public IP address, you've gained security VS PII floating around in the same process memory address as your public endpoints.

Your PII lives in a database. Restrict read access to that database (or subset of that database) to applications running on the internal network. The most straightforward way to do that would be through configuration management.

Applications accessible to the outside world will never even have read access.
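
A rough sketch of what I mean, assuming Postgres and node-postgres (role and table names are made up). The public-facing app connects with a role that simply has no grant on the PII tables, so the restriction is enforced by the database rather than by a network boundary:

    import { Pool } from "pg";

    // Roles provisioned by configuration management, e.g.:
    //   CREATE ROLE web_public LOGIN;          -- no grants on customers_pii
    //   CREATE ROLE internal_support LOGIN;
    //   GRANT SELECT ON customers_pii TO internal_support;

    // Public-facing app: any query touching customers_pii fails with a
    // permission error at the database, no matter what the code does.
    const publicDb = new Pool({
      host: process.env.DB_HOST,
      database: "app",
      user: "web_public",
      password: process.env.WEB_PUBLIC_DB_PASSWORD,
    });

    // Internal app (no public IP): connects as the privileged role.
    const internalDb = new Pool({
      host: process.env.DB_HOST,
      database: "app",
      user: "internal_support",
      password: process.env.INTERNAL_DB_PASSWORD,
    });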

With microservices, there is nothing stopping some tiny internal service from exposing data to a public service. Maybe the internal service wrote their ACLs wrong and the data leaked out.

What you're describing, again, has nothing to do with microservices and is a problem you need to deal with either way.

PII is a data problem not a code problem. Microservice/monolith is a code problem not a data problem.

> Also, the Unix philosophy is small dedicated programs that do one thing really well. Kind of like... microservices.

So now all programs that communicate over the network are microservices. Do you not see how silly that sounds?

> Vs a monolith where you have everything running on one machine under one process.

That's not how anyone deploys monolithic applications. You're now trying to claim that monolithic applications only run on one server. Confusing.

> Different problem domains are best solved with different programming paradigms.

That's why we create multiple applications that do different things. Microservices are something totally different.

> If you have well defined interfaces you don't need atomic builds. That is kind of the entire point! Unless there are some legal requirements which mandate "blessed builds".

The benefit of atomic builds is it's very easy to reproduce the behavior of your application at any given moment.

So, for example, a user reports a difficult to find bug and you're only investigating it 2 weeks later. You'll need to rewind your application's "code state" to whatever it was at the time that the user was using your application. For a monolithic application this is as easy as pointing the monolithic repo to some commit SHA.

With microservices this is much harder to do without tooling. This can be for many reasons. Sometimes every microservice is a separate repo. Sometimes your development environment isn't running every microservice.

> Pure functional peeps say no state at all, and arguably the functional paradigm often works, but the performance hit is insane at times.

I'm not even that into FP and this is painful to read. Such a gross misrepresentation of what FP is about.

> Microservices say you need to decouple components and that components can only talk through preestablished and agreed upon messages.

You mean like function calls?

> It enforces this decoupling by isolating different bits of code on the other sides of boundaries

You mean like... visibility? As in public/private interfaces?

> so that network connections of some type have to be used to communicate between modules

Enforcing correctness by placing a network boundary is too heavy handed. There are other ways to achieve the same thing that doesn't involve adding orders of magnitude latency for no reason.
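
For example, a rough in-process version of the same discipline (module names invented): expose one narrow interface, keep the internals private, and the cost of crossing the boundary is a function call rather than a network hop:

    // billing/index.ts -- the ONLY file other modules are allowed to import.
    export interface Invoice {
      id: string;
      totalCents: number;
    }

    export interface BillingApi {
      createInvoice(customerId: string, totalCents: number): Invoice;
    }

    export function makeBillingApi(): BillingApi {
      // Internal state stays behind the boundary; callers never touch it.
      const invoices = new Map<string, Invoice>();
      let nextId = 1;
      return {
        createInvoice(customerId, totalCents) {
          const invoice = { id: `inv-${nextId++}`, totalCents };
          invoices.set(customerId, invoice);
          return invoice;
        },
      };
    }

An import-lint rule (or plain code review) that forbids reaching past billing/index.ts gets you the "agreed upon messages" property without paying for serialization and latency.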

> You get some benefits with this, I still maintain it is easier to update to new breaking changes of dependencies on smaller chunks of code rather than adopt a breaking change to a global dependency across possibly hundreds of thousands of lines of code.

You do realize that large software existed before microservices became a thing right? And it was maintainable, right? There are so many ways to solve this problem without affecting thousands of lines of code blindly.

There's also just as much risk in having 10 services that are running slightly different versions of the same dependency. In fact that's a freaking nightmare.

> Because you only need to maintain the API contract, microservices can be deployed in a true CI/CD fashion, with teams constantly releasing changes to their services throughout the day, every workday.

Sir, I'm afraid you have a bad case of buzzworditis.


Off topic, how is the .dev domain hosted? Does anyone know? I bought a .dev to host my own website but found out Bluehost doesn't support hosting .dev websites.


There's nothing special about .dev domains, other than they're automatically included on Chrome's HSTS preload list (so Chrome won't let you connect to a .dev website unless it has a valid HTTPS cert). If Bluehost says they "don't support" them, they probably don't know what they're doing.

This is what you'd typically do to point a custom domain (eg example.dev) that you've registered through one DNS-hosting company to a website that you've set up with a different web-hosting company:

https://docs.netlify.com/domains-https/custom-domains/config...


I have a .dev domain that is hosted through Google Domains. I _think_ that Google owns that TLD, or something. They run https://get.dev too.


There's nothing special about hosting a .dev domain except for the HTTPS requirement. .dev domains require you to enable TLS on your site.


Seems like grug only works on tiny teams and on undifferentiated problems


Well yeah, there's cases where it makes sense.

I was once on a team dedicated to orchestrating machine learning models and their publishing in a prediction service. The training-to-publishing process was best kept atomic, and its Scala codebases kept separate and distinct from the Java monolith that called it, despite that monolith being the only caller and not benefiting from asynchronicity in the call. The design, implementation and operation of this very profitable part of the business was way more convenient than it would have been had we tried to shove the prediction logic inside the caller.

But there are teams that will be splitting every damned endpoint of a service into their own process, that's just unnecessary overhead from whichever way you look at it; network, deployment, infrastructure, monitoring, testing, and deployment again because it bears repeating that it complicates rollouts of new features.


You miss the point. Fucking up your infrastructure isn't a solution to organizational problems.


Bingo. In the long tradition of COM/CORBA, XML World, capital-A Agile, micro-services are hard to argue against because people fucking freak out if you push hard, because it’s a career path.


Five years ago this was very much true, but serverless services have drastically lowered the overhead cost.

It is more work, but the goal is always to move away from the monolith and to reap the benefits.

We are past the Backbone.js phase of microservices; we've moved from the jQuery phase into the React phase. An established standard with dividends that pay off later is being realized.

I just no longer follow these microservice-bashing tropes in 2022; a lot has changed since that Netflix diagram went viral.


You still pay that cost with your wallet, it's just hidden from you when you look at your code.

The main monetary benefit of serverless is that you can truly scale down to 0 when your service is unused. Of course, writing a nice self-contained library that fits into your monolith has the same benefit.




I fully understand serverless billing, which is why I told you its advantage: scaling to 0 for functions you almost never use. But if you are running literally ANYTHING else, you can get that advantage yourself: run your "serverless microservice" as a *library* inside its caller. You don't need the overhead of an RPC to enforce separation of concerns.
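
As a sketch (names invented, assuming Node 18+ for the global fetch), the difference is literally one import versus one HTTP round trip:

    // Hypothetical internal module used by Option B below.
    import { resizeImage } from "./lib/resize";

    // Option A: the "serverless microservice" -- separate deployment, cold
    // starts, and a network hop on every call.
    async function resizeViaService(imageUrl: string): Promise<string> {
      const res = await fetch("https://resizer.internal.example/resize", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ imageUrl, width: 256 }),
      });
      const data = (await res.json()) as { resizedUrl: string };
      return data.resizedUrl;
    }

    // Option B: the same logic shipped as a library inside the caller.
    // It "scales to zero" trivially, because unused code costs nothing.
    async function resizeInProcess(imageUrl: string): Promise<string> {
      return resizeImage(imageUrl, { width: 256 });
    }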

A small startup can pay $5/month to run a monolith on a tiny server commensurate with its use. It can scale that monolith up with use, from a $5 VM offering 1 core and a tiny bit of RAM all the way to a two-socket server VM offering 110+ cores and 512 GB of RAM. Alternatively, a large company can scale nearly infinitely with horizontal scaling. When I worked on a service with a trillion QPS and 50% usage swings at a mega-Corp, that's what we did. All our customers, even the ones with a measly 1 million QPS did the same. And their customers, and their customers, and so on.

"Serverless" was something sold to fashionable tech startups and non-technical people who didn't have the expertise to maintain the VMs/containers they needed. The serverless systems carried a huge price premium too. Everyone with true scale understood that there are servers there, and you are going to pay to manage them whether you want to or not.


Serverless is a huge boon to large enterprises who want to be more agile and not dependent on a monolith architecture. The cost to them is a rounding error at best. A startup is a poor yardstick to measure serverless's benefits; if you can run on a $5 DO VPS, by all means, you are not its target market.


They already run cloud VMs sharing CPU; the sharing already happens, so why would it be somehow magically cheaper?

And how do you think they make money? Every second of a serverless architecture is probably 100s or 1000s times more expensive than a second of a traditional server.

Use your brain for 10 seconds, it's obviously going to be ridiculously overpriced for the compute you actually get. That's how they make money. And on top of that they have to do so much more orchestration and overhead to run it.

And bonus points, you're now locked into their architecture too!

If you have enough load to keep a dedicated machine running at 10% CPU load on average, it'll be cheaper running a dedicated machine than anything else. You're probably looking at old-school VMs costing 2x more, cloud servers being 10x more expensive, and serverless at a minimum of 20x more expensive.

We're not luddites, you're just a sucker.


The premium is about 10x over cloud VMs, unless you are running very specific kinds of functions that are long-running and take very little memory.


Some thoughts:

- #12 add observability needs to be #1. If you can't observe your service, for all you know it's not even running. Less hyperbolically, good instrumentation will make every other step go faster by lowering the time to resolution for any issues that come up (and there will be some).

- #11 is incredibly oversimplified, and potentially dangerous. How to do double writes like that and not create temporal soup is... complicated. Very complicated. It's important to remember that the database the app is sitting on top of is (probably) taking care of a great many synchronization needs.

If you can do one-way replication, that drastically simplifies things. Otherwise, either do it in the monolith before breaking it up into services, or do it after you've broken up the services and have them share the database layer in the interim.

(I'm not debating that it needs to be done -- just advocating for sane approaches)

- #10 - I've had great results with the strangler pattern. Intercepting data at I/O gives a lot of tools for being able to gradually change internal interfaces while keeping public/external interfaces constant. (There's a rough proxy sketch at the end of this list.)

- #5 - As you introduce more processes, integration and end-to-end testing become more and more vital. It becomes harder and harder to run the service locally, and it becomes harder to tell where a problem is occurring. Cross-service debugging can be a nightmare. In general it's just important to keep an eye on what the system is doing from an outside perspective and whether any contracts have inadvertently changed behaviors.
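
Here's the rough shape of the #10 strangler setup I mean, as a Node/TypeScript sketch (hostnames and paths are made up): a thin proxy keeps the public interface constant while routing a growing allowlist of paths to the new service and everything else to the legacy monolith.

    import http from "node:http";

    const LEGACY = { host: "legacy-monolith.internal", port: 8080 };
    const MODERN = { host: "new-orders-svc.internal", port: 8081 };

    // Paths that have already been "strangled" out of the monolith.
    const migrated = [/^\/api\/orders\b/, /^\/api\/invoices\b/];

    http.createServer((req, res) => {
      const target = migrated.some((re) => re.test(req.url ?? "")) ? MODERN : LEGACY;
      const upstream = http.request(
        {
          host: target.host,
          port: target.port,
          path: req.url,
          method: req.method,
          headers: req.headers,
        },
        (upstreamRes) => {
          res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
          upstreamRes.pipe(res);
        }
      );
      upstream.on("error", () => {
        res.writeHead(502);
        res.end("bad gateway");
      });
      req.pipe(upstream); // stream the request body through unchanged
    }).listen(8000);

External callers only ever see the proxy, so the internals can keep changing behind it.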


> add observability needs to be #1

Very much this. The importance of this is overlooked so often and then when there is a problem it's far more difficult to solve than it should've been.


Author here. I agree that observability is really high priority.

I've never meant the numbers to represent steps or say that one point has a higher priority than others (maybe I didn't make that clear enough in the article). In theory, you would be following all/most of the points while you prepare the monolith for migration.


I sympathize with both positions — however, spend any time working somewhere like a bank, or hardware/electronics company founded before 1980, and you’ll develop a distinct distaste for monoliths.

I’ve seen monoliths create teams of career software bureaucrats who exist to choke out companies to their last breath. Companies will go underserved for decades while underfunded projects fail to replace a monolith one after another. All crossroads lead to the monolith that was written in some language that died 30 years ago, performs like it did 30 years ago and is maintained by people hired 30 years ago — creating imminent crises with their looming retirements.

I think a lot of people here read this article as writing monoliths vs microservices — not so. There are many very old, inherited monoliths that are actively creating maintenance crises at many businesses. These need to be modernized in chunks, because full rewrites take years to fail (and often do, eg. the mythical data warehouse). Microservices offer some incremental path out of the quagmire.


>teams of career software bureaucrats who exist to choke out companies to their last breath. Companies will go underserved for decades while underfunded projects fail to replace a monolith one after another. All crossroads lead to the monolith that was written in some language that died 30 years ago, performs like it did 30 years ago and is maintained by people hired 30 years ago — creating imminent crises with their looming retirements.

None of this seems like a monolith problem, it seems like a very-legacy codebase at a non-tech company problem.


I think their point is that trying to continue to work inside such a monolith is soul-crushing, but attempts to replace the monolith in one step fail because it's too big.

I do think there's some confusion, though, because microservices are not the only solution to this problem. You can solve it with two services: the big legacy service hosts all the logic you haven't cleaned up yet, the small-but-growing modern service has the logic that you've cleaned up and made livable.

This way you don't have to replace the old monolith in one step, but you also don't need to go to microservices—the target is still one well-factored monolith.

(Edit: there's a risk here, of course, that your new code becomes another legacy ball of mud that everyone hates and the cycle repeats, but this time with two services...)


> there's a risk here, of course, that your new code becomes another legacy ball of mud that everyone hates and the cycle repeats, but this time with two services

I would call that risk a guarantee. And as long as you haven't turned the old one off, the net result is negative. I say, make the journey as valuable as the goal, and make incremental improvements that offer value from day 1.


If you don’t have the skills to write modular maintainable monoliths then you most definitely don’t have the skills to implement modular maintainable microservices. It’s the same skills required to do both: the ability to break down a complex system/problem into a number of simpler systems/solutions/modules/libraries/services/… so that the complex system/problem can be solved by composing those simpler systems/solutions/modules/libraries/… into a single system/monolith/…


A lot of those steps just seem like good engineering. (I personally prefer modular monoliths over microservices though, in all but very few cases.)


"A lot of those steps just seem like good engineering. "

Agreed. I always wonder why people think that their inability to write libraries with good modularization will be solved by introducing microservices.


It takes experience and guts to know when to use what, and most people just go with the latest and fanciest. Well-tested, focused and self-contained libraries are good architecture even when microservices are a must.


"I personally prefer modular monoliths over microservices though, in all but very few cases."

Couldn't agree more. Often times folks are using microservices to achieve good modularity at a cost.


> Often times folks are using microservices to achieve good modularity at a cost.

And even more often, folks use microservices, but make them so coupled that you can't really run/test/hack one without running all the others... Basically creating a distributed monolith.


Strong agree. I worked on a "distributed monolith" once, and now I loudly make it a requirement that all my team's microservices can easily be run locally. Running one stack locally required starting up 12 different microservices, all in a particular order, before you could do anything with any of them. Insanity.


Kinda reminds me of how you "need to have a horizontally scaling database setup because 1 million rows a day are impossible to handle via a normal database server"

people really underestimate the power that vertical scaling can achieve, along with the long tail that microservices can bring. (The more calls between services you need to handle x, the more likely it is that you get into a case where one call takes exceptionally long)

https://www.youtube.com/watch?v=SjC9bFjeR1k


I’ve faced this a number of times: “we think we’ll have scaling issues”, when they are running on the lowest possible database tier on offer. I think people just don’t understand how much power they actually have at their fingertips without needing anything esoteric.


(second comment - after I had some time to think about it)

Actually I think microservices serve as a tool for enforcing modularity. When pressure is high, corners are cut, and when unrelated code is easy to reach (as in case of a monolithic codebase), it's easy to fall into temptation - a small dirty hack is faster than refactoring. And when you consider different maintainers, it's easy to lose track of the original monolith architecture idea in all the dirty hacks.

Microservices enforce some kind of strict separation, so in theory nobody can do anything that they're not supposed to. In practice, a lot of coupling can happen at the microservices level - the most common symptom being some weird isolated APIs whose only purpose is to do that one thing that another service needs for some reason. Those kind of ad-hoc dependencies tend to make services implementation-specific and non-interchangeable, therefore breaking modularity.

So, in conclusion, some approaches are easier to keep modular than others, but there seems to be no silver bullet for replacing care and effort.


Having a modular monolith myself I couldn't agree more.


Right. I feel like it's a better article without any reference to microservices.


Advice. Don’t bother if you have a monolith. Just keep trucking on with it and partition your stuff vertically with shared auth and marketing site.

Fuck microservices unless you have a tiny little product with a few million users. Which is almost none of us.


90%+ of transitions to microservices occur because the developers involved need to give the impression they are doing something in the absence of being allowed to build something new like they would prefer, and want to put "microservices" on their resume for when they try to leave their current shithole of a company for a slightly better paying, trendier shithole


Biggest mistake of my life was to say let's keep maintaining and building on the old thing rather than transitioning to the newest greatest (language, microservice, whatever). The old thing on your resume does nothing for you. Whatever is the new thing, jump on it hard and fast like your life depends on it.


I was once working with a guy that was hell bent on microservices.

When our banking application became incompatible with logging, due to his microservice design, he really argued, fought and sulked that logging wasn't important and we didn't need it.


I'm skeptical of over using microservices but I don't quite understand how they make an app "incompatible with logging".


Microservices are incompatible with monolithic text-based logging - maybe they meant that?


I suppose tracing becomes more important in that new architecture, assuming of course that each service is logging the unique identifier for the trace (for instance the request ID or some system-initiated event ID), but of course that presupposes logging to begin with, so I am not sure what "incompatible with logging" means.


Does each microservice really need its own database? I have recently proposed my team initially not do this, and I'm wondering if I am creating a huge problem.


Isolated datastores is really the thing that differentiates microservice architecture (datastores meant in the most broad sense possible - queues, caches, RDBMSs, nosql catalogs, S3 buckets, whatever).

If you share a datastore across multiple services, you have a service-oriented architecture, but it is not a microservice architecture.

Note that I'm saying this without any judgement as to the validity of either architectural choice, just making a definitional point. A non-microservice architecture might be valid for your usecase, but there is no such thing as 'microservices with a shared database'.

It's like, if you're making a cupcake recipe, saying 'but does each cake actually need its own tin? I was planning on just putting all the batter in one large cake tin'.

It's fine, that's a perfectly valid way to make a cake, but... you're not making cupcakes any more.


If they won't have it then they're not microservices.

The main premise is independent deployability. You need to be able to work on a microservice independently of the rest, deploy it independently, and it has to support partial rollouts (ie. half of the replicas on version X and half on version Y), rollbacks including partial rollbacks, etc.

You could stretch it into some kind of quasimodo where you have separate schemas within a single database for each microservice, where each service would be responsible for managing migrations of its own schema and you'd employ some kind of isolation policy. You pretty much wouldn't be able to use anything from the other schemas, as that would almost always violate those principles, making the whole thing just unnecessary complexity at best. Overall it would be a stretch, and a weird one.
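
For the curious, here's roughly what that stretch looks like in practice (hedged sketch, Postgres plus node-postgres, names invented): one physical database, one schema and one role per service, each service migrating only its own schema.

    import { Pool } from "pg";

    // Provisioned once, outside the services, e.g.:
    //   CREATE SCHEMA orders   AUTHORIZATION orders_svc;
    //   CREATE SCHEMA payments AUTHORIZATION payments_svc;
    // Neither role gets privileges on the other schema, so cross-schema
    // joins are impossible by construction.

    const ordersDb = new Pool({
      host: process.env.DB_HOST,
      database: "platform",
      user: "orders_svc",
      password: process.env.ORDERS_DB_PASSWORD,
      // Unqualified table names resolve only inside this service's schema.
      options: "-c search_path=orders",
    });

    async function migrateOrdersSchema(): Promise<void> {
      // The orders service owns and migrates only what lives in `orders`.
      await ordersDb.query(
        "CREATE TABLE IF NOT EXISTS order_events (id bigserial PRIMARY KEY, payload jsonb)"
      );
    }

That's the mechanical part; the conceptual costs are the bigger issue.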

Of course it implies that what used to be simple few-liners in SQL with transaction isolation/atomicity now become PhD-level, complex, distributed problems to solve with sagas, two-phase commits, do+undo actions, and complex error handling because comms can break at arbitrary places. Performance can be a problem, ordering of events matters, you don't have immediate consistency anymore, you have to switch to eventual consistency, you very likely have to do some form of event sourcing, duplicate data in multiple places, think about forward and backward compatibility a lot (ie. on event schemas), take care of APIs and their compatibility contracts, choose well between orchestration and choreography, etc.

You want to employ those kind of techniques not for fun but because you simply have to, you have no other choice - ie. you have hundreds or thousands of developers, scale at hundreds or thousands of servers etc.

It's also worth mentioning that you can have independent deployability with services/platforms as well - if they're conceptually distinct and have relatively low api surface, they are potentially extractable, you can form dedicated team around them etc.


Independent deployability, independent scalability, ease of refactoring, reduced blast radius, code ownership and maintenance, rapid iteration, language diversity (ie. an ML service in Python and a REST API in Node.js), clear domains (payments, user management, data repository and search), just to name a few. If two or more services need none of the above, must communicate through the same database, or are too complex to communicate with each other when a DB is not used (ie. queue nightmare, shared cache or files), those are usually signs that the two should be merged, as they probably belong to the same domain. At least that's some of the logic I follow when architecting them.


Yes, you need it. Imagine having to make a change to the DB for one service. You'll have to coordinate between all microservices using that DB.


Agree and disagree - it really depends on why you are going to microservices. Is it because you have too many people trying to work on the same thing and your architecture is just a reflection of your organisation? Or is it to decouple some services that need to scale in different ways but still need to sit on top of the same data? Or is it some other reason?

I think the dogmatic “you always need a separate database for each micro service” ignores a lot of subtleties - and cost…


> Or is it to decouple some services that need to scale in different ways

This is really oversold. You can allocate another instance to a specific service to provide more CPU to it, or you can allocate another instance to your whole monolith to provide more CPU.

Maybe if the services use disproportionately different types of resources - such as GPU vs CPU vs memory vs disk. But if your resources are fungible across services, it generally doesn't matter if you can independently scale them.

Compute for most projects is the easiest thing to scale out. The database is the hard part.


> Compute for most projects is the easiest thing to scale out. The database is the hard part.

If you did things the "right" way to begin with. You have to keep in mind that many people in the industry need to solve problems of their own making, and this then doesn't translate or make sense to other people.

Scaling is a great example.

It's common to see web applications written in horrendously inefficient front-end languages. Developers often forget to turn "debug" builds off, or spend 90% of the CPU cycles logging to text files. Single-threaded web servers were actually fairly common until recently.

Then of course the web tier will have performance issues, which developers can paper over by scaling out to rack after rack of hardware. Single threaded web server? No worries! Just spin up 500 tiny virtual machines behind a load balancer.

Meanwhile, the database engine itself is probably some COTS product. It was probably written in C++, is likely well-optimised, scalable, etc...

So in that scenario, the database is the "easy" part that developers can just ignore, and scaling the web servers is the "hard" part.

Meanwhile, if you write your front-end properly, then its CPU usage relative to the database engine will be roughly one-to-one. Then, scaling means scaling both the front-end and database tiers.


This is a particularly painful experience if you've got business logic at the database layer.

For example, stored procedures that get "shared" between microservices because the DB was never split.


Yes, but when doing so seems silly it's a good sign that they should not be separate services. Keep things that change at the same time in the same place. When your schema changes, the code that relies on it changes.


Need depends on your needs. You can share the DB but you lose the isolation. The tradeoff is up to you.

There are also different ways to share. Are we talking about different DBs on the same hardware? Different schemas, different users, different tables?

If you want to be so integrated that services are joining across everything and there is no concept of ownership between service and data, then you're going to have a very tough time untangling that.

If it's just reusing hardware at lower scale but the data is isolated then it won't be so bad.


I'm agreeing with your other replies, but with one caveat. Each service needs its own isolated place to store data. This programming and integration layer concern is very important. What's less important is having those data stores physically isolated from each other, which is a performance and cost concern. If your database has the ability to isolate schemas / namespaces then you can share the physical DB, as long as the data is only used by a single service. I've seen a lot of microservices laid out along separate write-side/read-side concerns. These are often due to scaling concerns, as read-side and write-side often have very different scaling needs. This causes data coupling between the two services, but together they present the facade of a single-purpose service, just like any other microservice, to outside parties.

Additionally, you can probably get by having low criticality reports fed through direct DB access as well. If you can afford to have them broken after an update for a time, it's probably easier than needing to run queries through the API.


There are two ways to interpret this question, and I'm not sure which you're asking. You should not have two microservices sharing a single database (there lie race conditions and schema nightmares), but it is totally fine for some microservices to not have any database at all.


I like microservices owning their databases. It allows you to choose the correct database for the job and for the team. Sharing state across these microservices is often a bad sign for how you’ve split your services. Often a simple orchestrator can aggregate the relevant data that it needs.


Are you talking about different DBs, or just different tables? If it's just different tables, they can operate sufficiently independently if you design them that way, so you can change the schema on one table without messing up the others.


To initially not do this is fine. Otherwise now you have two hard problems to solve concurrently.


Enjoyed the article, and found #11 and #12 to be almost a requirement for most teams.

I've worked for an org that adopted microservices when its small size didn't justify it, but it eventually found its footing with good containerization and K8s from a growing DevOps team. Individual teams were eventually able to deploy and release to various environments independently and to test on QA environments easily.

And I now work for an org with a monorepo with 100+ developers that probably should have been broken up into microservices a while ago. Everything just feels broken and we're constantly running around wondering who broke what build when. We have like 6 SREs for a team of 100+ devs? I think a lot depends on how well CI/CD is developed and the DevOps/SRE team.


> And I now work for an org with a monorepo with 100+ developers that probably should have been broken up into microservices a while ago.

If it's a monolith, then I think it's not a monorepo?


People forget, micro services commonly serve massive tech companies. With 100 developers working on the same product, you need it broken up and separated. If you’re in a small company, the value proposition is not as great. It’s contextual, and a tech stack that solves organizational problems, not always technical ones


Agreed, but I'll go a step further and say microservices are really valuable when you have products that are powered by reasonable DevOps and CI/CD support within an organization. If you're a company that only releases quarterly, a monolith probably makes sense. If you're a company releasing changes daily / hourly, monoliths make progressively less sense and become progressively harder to work with. When we release software (a lot), our downtime SLO is generally zero minutes. If you're a well put together outfit with strict discipline, this can be achieved with microservices.

Inversely, monoliths almost never have to deal with multiple components at different release levels, so they don't do a particularly good job of supporting it, which is why you often see hours-long upgrade windows for monoliths. Shut everything down, deploy updates, start everything back up, and hope everything still works with the changes in place.


> If you're a company releasing changes daily / hourly, monoliths make progressively less sense and become progressively harder to work with.

Counter argument: Your overall CI/CD infrastructure landscape will be much simpler and much less error-prone.

As will be your release and test pipelines. For instance, if you have a few dozen microservices, each being released every other hour, how do you run E2E tests (across all services) before you release anything? Let's say you take your microservice A in version v1.2.3 (soon to be released) and test it against microservice B's current prod version v2.0.1. Meanwhile, team B is also working on releasing the new version v2.1.0 of microservice B and testing it against A's current prod version v1.2.2. Both test runs work fine. But no one ever bothered to test A v1.2.3 and B v2.1.0 against each other…


Another issue is backwards compatibility. With microservices, you have to keep your old endpoints open until everyone has migrated, and make sure the new endpoint is always available when upgrading.

Much simpler to verify with tests with a monolith.


At FB we introduced service boundaries for technical reasons, like needing a different SKU.

Everything that could go into the giant, well-maintained repo/monolith did, because distributed systems problems start at “the hardest fucking thing ever” and go up from there.


Apparently Google is doing fine without Microservices.


Looks like microservices became a goal in themselves. However, the author can be praised for giving the (implicit) context of a huge team which should be split.


Microservices have one big advantage over monoliths: when you have a very large number of employees developing software it means that you can keep your teams out of each others' hair. If you don't have a very large team (as in 50+ developers) you are probably better off with a monolith, or at best a couple of much larger services that can be developed, tested and released independently of each other. That will get you very far, further than most start-ups will ever go.


> when you have a very large number of employees developing software it means that you can keep your teams out of each others' hair

Two things here. One is that it often looks like the tail wagging the dog. Microservices are often introduced to manage and support development teams that grew too large. Unfortunately the correct course of action in such cases is layoffs not microservices.

Two. In my experience the places I worked at where microservices were extensively used had even more issues of employees stepping on each other as compatibility issues and upgrade sequence between services were a frequent concern and the independent testing was a total myth as most teams did not provide good mocks for their API surfaces. Meanwhile troubleshooting such distributed systems was _very_ real and very hard.


Agreed on the first if the company was hiring armies of juniors, which isn't going to work either way.

As for the second: yes, you can mess it up, a solution like that requires both discipline and someone in charge of architecture with the power to enforce the rules. If everybody runs off to create their own little kingdom and APIs then it's just going to make matters worse. Sometimes much worse.

But here is an idea: if you run a typical web company you already have a couple of ways in which you can slice up your monolith:

The hot path, all that which is visible to the unwashed masses and the internal software that your various employees use to deal with order flow, support requests and so on (assuming that you built those rather than bought them, as you probably should have).

The part that is visible to everybody on the web can in turn be split up into content delivery, pages that are visible to non-logged in users and pages that are visible to users. Possibly you have a mobile phone application. Maybe your back-end has certain features that can be easily parted out and made to operate stand alone. Before long you'll have somewhere between 5 and 10 cleanly separated chunks, each of which can be worked on in isolation because the only thing all these have in common is the persistence layer, which can serve as a single source of truth. Everything else can be made as stateless as possible.

Such a solution would serve the vast bulk of the e-commerce and SPA solutions out there and would scale to very large size without breaking a sweat. Then, if it becomes necessary to break things up further this should be done with the realization that those parts will need to continue to function as a whole and that architecture will have to take a front seat if it is to work at all, which means the company will have to hire a capable systems architect who can balance the extra complexity with the flexibility required. This is not a trivial thing to do and messing it up will come with a hefty price tag.


Not my experience at all. It takes the same skills to manage a monolith as it does managing microservices. However microservices are inherently more complex than monoliths. So they are harder to keep clean/maintainable. All complex problems/systems are solved by breaking them down into smaller problems/systems that can be solved. Using microservices doesn’t make that easier. It actually makes it harder because you now also have to deal with the complexity of a distributed system.


The advantages of services and the complexity of distributed systems are obviously at odds with each other and proper architecture is going to be a requirement to be able to use a services oriented setup to its advantage.

If your team was bad at systems architecture before attempting such a project then they will do much worse than before. But this is akin to saying that cooks should not use sharp knives because they can cut themselves. Yes they can. But expert cooks would much rather use sharp knives than blunt ones even if they run the risk of cutting themselves.

A team of also-rans that go for a services oriented solution in a cargo cult fashion is going to get bogged down by complexity. That's no change from them getting bogged down by using a monolith that turned into a bowl of spaghetti.

I've seen microservices applied in ways that made my hair stand on end, every shitty little function was its own service and had its own API. Utter madness. But I've also seen a monolith with several major functional blocks cut up on sensible lines and then dealt with by a much larger number of teams than before with success. Like everything else: if you are doing it wrong no amount of magic is going to help you, you first need to know what it is that you are doing.

My best advice if this sounds familiar is that you should up the hiring bar, considerably so. But that will also require paying more per person and companies that have a problem with the former usually also have a problem with the latter.


Smart experienced developers use the least complex tool to solve a problem. And monoliths are inherently less complex than microservices. So it’s quite simple really.

Unfortunately “beginner experts” tend to pick the most complex tool possible to solve a problem, because they think it somehow demonstrates how skilled/smart they are. Which always amuses the more experienced developers. It’s like watching a train wreck in slow motion :)

I am looking forward to reading the future “we picked microservices because we thought we were clever and it turned into a cluster f*” stories.


I've honestly never understood this argument. There's some changes that are intractable. Just being in a separate codebase doesn't mean you don't need to deal with it. Team A just modified their service to require a new field in a request? Now you just have Teams B-G having to play a card to update their corresponding code if they consume that service. And unless you have everything feature flagged across all teams in a consistent way, or are very careful about API versioning, you'll all need to release at the same time anyways. Nothing magical about microservices solves this, no?


If you suck at architecture, then yes, microservices won't solve that problem. But used properly, like with every other tool, this is one problem that can be solved this way.

The whole idea that you take a monolith and then explode it into 50 or even 100 pieces with all the added complexity that involves without thinking about it beforehand and aiming for specific goals is obviously broken and that's not what I'm advocating for. But when properly used you can design your services in such a way that those problems do not arise. Some eco-systems, notably Erlang have elevated this concept to first class status and from personal experience and looking at some companies that did it properly I can tell you that it works, and that it works very well.


Why are microservices called "micro" when it's almost implied that each one requires a dedicated team to develop and maintain? Looks like it's SOA but with JSON and k8s.


In the context of microservices, 'micro' means that the service itself is small. Thus, each service performs a 'micro' (and preferably independent) task. It is the opposite of a monolith, which you could call a 'macro' service.

The system as a whole (whether it be a monolith or microservices) still requires a dedicated team to maintain. Switching to microservices will not magically remove the team requirement. In fact, splitting a monolith into smaller services creates overhead, so you'll probably end up with a larger team than what you'd need to maintain a monolith.


In reality many of the micro services end up as tightly coupled macro services. I’ve rarely seen teams with the discipline or need for creating truly self contained separate services.


Maybe because there are very few real-world cases where they're better than the alternatives?


> 10 Modularize the monolith

A couple paragraphs on a couple of tools for the middle to late stages of such an effort is tantamount to "and then draw the rest of the fucking owl".

Decomposition is hard. It's doubly hard when you have coworkers who are addicted to the vast namespace of possibilities in the code and are reaching halfway across the codebase to grab data to do something and now those things can never be separated.

One of the best tools I know for breaking up a monolith is to start writing little command line debugging tools for these concerns. This exposes the decision making process, the dependency tree, and how they complicate each other. CLIs are much easier to run in a debugger than trying to spin up the application and step through some integration tests.

If you can't get feature parity between the running system and the CLI, then you aren't going to be able to reach feature parity running it in a microservice either. It's an easier milestone to reach, and it has applications to more immediate problems like trying to figure out what change just broke preprod.
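
A minimal sketch of the kind of CLI I mean (TypeScript, module names invented): it imports the same internal code the application uses and runs one decision end to end, which makes it trivial to attach a debugger or diff behavior between commits.

    #!/usr/bin/env node
    // debug-pricing.ts -- run a single pricing decision from the command line.
    // Usage (roughly): ts-node debug-pricing.ts <customerId> <sku>
    import { loadCustomer } from "./src/customers"; // hypothetical internal modules
    import { priceQuote } from "./src/pricing";

    async function main(): Promise<void> {
      const [customerId, sku] = process.argv.slice(2);
      if (!customerId || !sku) {
        console.error("usage: debug-pricing <customerId> <sku>");
        process.exit(1);
      }
      const customer = await loadCustomer(customerId);
      // Printing every input the decision depends on exposes the hidden
      // dependency tree of the monolith one command at a time.
      console.log("inputs:", { customerId, sku, segment: customer.segment });
      const quote = await priceQuote(customer, sku);
      console.log("decision:", quote);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });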

I have things that will never be microservices that benefit from this. I have things that used to be microservices that didn't need to be, especially once you had a straightforward way to run the code without it.


You can actually use microservices to reduce the complexity of a monolith.

One example which I built just for this purpose is Barricade https://github.com/purton-tech/barricade which extracts authentication out of your app and into a container.


Authentication is a very simple part and doesn't add complexity. Every platform has a plethora of libraries to deal with it. Authorization is the real deal.


And best of luck to anyone attempting to extract authorization to a microservice in a way that doesn't create new problems.


There are libraries but you still have to integrate them into a front end whilst being aware of any security issues.

Those libraries generally come with lots of dependencies so splitting out the front end and back end auth code into a container can reduce the complexity of your main app and is a nice separation of concerns in my opinion.


But your solution requires architectural decisions, like a fixed DB. For example, the main app uses Mongo, and now it must be integrated with PG. I'd choose an orthogonal set of primitives as a library any day. Frameworks, and even more so 3rd-party services, are too rigid and can carry hidden maintenance costs when it comes to flexibility. For example, it seems Barricade doesn't support TOTP or SMS. What should I do as a backend developer?


Agreed Barricade is a Postgres solution so may not fit if you are already using Mongo.

Barricade supports OTP via email. TOTP is on the road map.


Authentication/SSO, logging, metrics, persisted data stores (ie. SQL Server), caches (ie. Redis), events, workflows and a few others are all well-known, understood, isolated behaviors with good implementations that don't need to be embedded/reinvented - they can, and in many cases should, run separately from your core service/s. It doesn't imply microservices in any way.


I'd argue the exact opposite and would say it's the perfect example of microservices architecture.

You'd have one instance for your auth token generator, one for the gateway, one for the dashboard, and so on.


Nope. Microservices are guaranteed to be more complex than an equivalent monolith. It constantly blows my mind that it isn’t obvious to some developers. A distributed system (microservices) is always more complex than a non-distributed system (monolith).


I have a strong feeling that the author never in their career transitioned from a monolith to microservices. Not even to the point of "we are getting somewhere", let alone "we are successful at this". The text reads like self-complacency.


hold on, if we have half a dozen web apps that serve different needs, but all share similar "guts" — a monolith doesn't really make sense because App B will have all the code of App A but only use like 10% of it...

So if I split App A from App B (and App C, D, etc.), but create a single "back-end monolith" for both apps, that makes sense right? Is that still microservices?

And if half of them need authentication, but I want to keep my auth data separate from my other backend data, shouldn't I create a "regular db backend" and an "auth backend", for my many web apps?

Essentially this is what I've done. I don't start with "microservices" but at the same time I don't want to glom ALL my random sites and apps' front-end and back-ends into the same code base... that makes upkeep WAY too difficult.

Is this a monolith, a microservices, or none of those? I'm not really a "real" engineer and haven't drank any koolaid so I honestly don't know if this is the right way to do it. Most of my sites and web apps are Sveltekit / Vercel, with CF Workers, Supabase, Deta, and Railway doing some other stuff Vercel can't do.

(I don't have a hosted machine or anything lol)


This sounds like "multiple frontend apps talking to a single API service."

Frontend apps are a different beast here, the general "monolith -> microservice transition" like discussed here is talking just about the backend services. For standalone app code, that "I only share 10% of the stuff" problem can be fairly easily solved with libraries.

For backend services, libraries most likely aren't what you want, because you don't want 5 different services all using the same library to talk directly to the same SQL database or whatever. You'd rather have a single service responsible for that database communication.

In a microservices world, you tend towards "one logical domain object in the system? one service." In a more monolithic world, you have much larger groupings, where a single service might be related to, say, user accounts and user text content and references to photo libraries, or whatever. It's probably not actually everything (some stuff scales at wildly different rates than others) but you aren't nearly as aggressive at looking for ways to split stuff up, and your default for new functionality tends to be to find the service to add it to.


If different apps share code, why is that code not in a linked-against library, like has been normal for 40 years or more?


Is there a guide on doing the opposite?



