If you're considering splitting your monolithic application into an SOA, you should think seriously about the testing and debugging implications.
If your call graph goes more than one level deep, integration/functional testing becomes much more complicated. You have to bring up all of the downstream services in order to test functionality that crosses that boundary. You also have to worry a lot more about different versions of services talking to each other, and how to test and manage that. The flip side is that the services will be much smaller, so leaf nodes in the call graph can reach a higher level of test coverage than a monolithic service could.
Debugging and performance testing become more complicated because when something is wrong you now have to look at multiple services (upstream and downstream) to figure out where the cause of a bug or performance issue lies. You also run into the versioning issue from above: a new class of bug caused by mismatched versions, where interfaces have been tweaked or underlying assumptions have changed in one service but not the other (because the other hasn't been deployed and those assumptions live in shared code). The bright side for debugging and performance is that once you know which service is causing the issue, it's far easier to find what inside the service is responsible. There's a lot less going on, so it's easier to reason about the state of the servers.
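To make the testing trade-off concrete, here's a minimal sketch (all service and method names are hypothetical): a leaf service with no downstream calls can be tested exhaustively on its own, while anything one level up needs the downstream service, or a stand-in for it, brought up as well.

```python
# A leaf service is easy to cover in isolation; anything that crosses a
# service boundary needs the downstream service (or a fake that must be
# kept in sync with the real interface across versions).

class UserService:
    """Leaf service: no downstream calls, easy to test exhaustively."""
    def __init__(self, store):
        self.store = store

    def display_name(self, user_id):
        user = self.store.get(user_id)
        return user["name"] if user else "anonymous"

class OrderService:
    """One level up: correctness depends on UserService's behavior."""
    def __init__(self, user_service):
        self.user_service = user_service

    def receipt_header(self, user_id):
        return f"Receipt for {self.user_service.display_name(user_id)}"

# Testing the leaf needs only its own state...
users = UserService({42: {"name": "Ada"}})
assert users.display_name(42) == "Ada"
assert users.display_name(7) == "anonymous"

# ...but testing OrderService means bringing up (or faking) UserService.
orders = OrderService(users)
assert orders.receipt_header(42) == "Receipt for Ada"
```

With real network boundaries the fake also has to track wire formats and deployed versions, which is where the versioning bugs above creep in.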
It depends how you do SOA. We try to publish "events" rather than to "call" another service and expect responses. We try to decide as much as possible in the service that publishes an event, so that information doesn't have to be returned. Other services act on that information.
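This event-first style can be sketched in a few lines. The bus here is just an in-process dict of subscriber lists, and all event names and fields are illustrative; the point is that the publisher decides everything up front and puts it on the event, so consumers never have to call back for information.

```python
# Sketch of "publish events, don't call and wait": the publisher
# resolves what it can and includes it in the payload; other services
# act on the event without returning anything.

subscribers = {}

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    for handler in subscribers.get(event_type, []):
        handler(payload)

# A hypothetical shipping service reacts to order events.
shipped = []
subscribe("order.placed", lambda event: shipped.append(event["address"]))

# The order service decided the total and address itself, so the
# shipping service needs no response and no callback.
publish("order.placed", {
    "order_id": 123,
    "total": 19.99,
    "address": "42 Main St",
})
assert shipped == ["42 Main St"]
```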
Your kind of SOA sounds more like distributed RPC, which indeed is complicated.
Yeah, if you can get away with that model, things are simpler. The best first step to take into SOA is offloading work that doesn't need a user response to a pool of workers (often by publishing to a message bus, as mentioned elsewhere in the thread). I've implemented systems like that using Rabbit and Redis, and it worked fairly well.
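The shape of that offloading step looks roughly like this. In production the queue would be RabbitMQ or Redis; this sketch uses the stdlib queue and threads so the structure is visible, and the "work" is a stand-in.

```python
# Request path enqueues a job and returns immediately; a worker pool
# drains the queue in the background. No user-visible wait.

import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            break
        results.append(f"emailed {job}")  # stand-in for the real work
        jobs.task_done()

pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

# The request handler's only cost is the enqueue.
for address in ["a@example.com", "b@example.com"]:
    jobs.put(address)

jobs.join()                      # wait for outstanding work (tests only)
for _ in pool:
    jobs.put(None)
for t in pool:
    t.join()
assert len(results) == 2
```

Swapping the stdlib queue for a broker buys you durability and cross-process workers, at the cost of the versioning and debugging concerns upthread.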
However, some kinds of requests are fundamentally about integrating the results of a bunch of different services into a response to send to the user. In that case you somehow need to gather the results of your rpcs/events in one place to integrate them. An example is Google search where the normal results, ads, and various specialized results/knowledge graph data need to be integrated to present to the user.
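That gather step is essentially scatter-gather. A minimal sketch with asyncio (the downstream calls are faked with sleeps; service names are illustrative): fire the calls concurrently, then integrate everything in one place for the user.

```python
# Scatter-gather: fan out to several services concurrently, then
# combine the results into a single response.

import asyncio

async def fetch_results(query):
    await asyncio.sleep(0.01)   # stand-in for an RPC to the search backend
    return ["result1", "result2"]

async def fetch_ads(query):
    await asyncio.sleep(0.01)   # stand-in for the ads service
    return ["ad1"]

async def handle_search(query):
    # Scatter: calls run concurrently, so latency is roughly the max of
    # the downstream latencies, not their sum.
    results, ads = await asyncio.gather(fetch_results(query), fetch_ads(query))
    # Gather: one place integrates everything for the user.
    return {"results": results, "ads": ads}

page = asyncio.run(handle_search("example"))
assert page == {"results": ["result1", "result2"], "ads": ["ad1"]}
```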
Another consideration is how much you want to be able to isolate services. If you have a user/auth service, as in the article, which completely encapsulates the database and other resources needed for data about users, then you'll end up with a lot of calls into that service. It's a disadvantage for all the reasons in my original comment, but it's great from the perspective of being able to isolate failures and build resilient systems.
Ok, yes, in the case where you have to have all the information on one page. Another way, of course, is to get that information in an AJAX call, or to open an SSE/WebSocket connection to listen for events from the event bus. But there are certainly cases where that's not feasible.
And in the case of auth systems, what we typically do is have a separate app for logins/authentication, then do simple SSO or domain cookie sharing and let each subsystem handle authorization.
My point is that not all SOA has to be as complicated as the article's. But if you go that way, yes, then all your points apply.
I'm using MailTab Pro, which does notifications as well as presenting a mobile web view of the gmail website. I actually end up doing a lot of my email writing and processing directly from the web view because it's so much faster than the full Gmail site.
You're better off going to a hotel in terms of cleanliness, and they are located all over midtown. I've never been turned away from a hotel when I looked for or asked where the restroom was.
If merchants were doing sales and returns using Bitcoin it seems like you could make money on the volatility by returning items when the value of Bitcoins went up within the return window. You'd have to be willing to keep items you bought if the value of Bitcoin went down, but if you systematically made all your regular purchases that way you ought to come out ahead. Is this a real risk for merchants, or am I missing something?
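The scheme can be made concrete with a tiny calculation (all numbers are made up for illustration, and it assumes the merchant refunds the original BTC amount with no fees):

```python
# Illustrative numbers only: the "return when BTC rises" strategy,
# assuming the refund is the original BTC amount.

price_at_purchase = 500.0    # USD per BTC when the item was bought
item_cost_usd = 100.0
btc_paid = item_cost_usd / price_at_purchase   # 0.2 BTC

# Case 1: BTC rises 20% within the return window -> return the item.
# The refunded 0.2 BTC is now worth $120, a $20 gain in USD terms.
refund_value_usd = btc_paid * price_at_purchase * 1.20
gain_if_up = refund_value_usd - item_cost_usd

# Case 2: BTC falls -> keep the item you were going to buy anyway,
# so you've paid the USD price you would have paid regardless.
assert round(gain_if_up, 2) == 20.00
```

So the buyer gets a free option on the BTC price for the length of the return window, which is exactly the asymmetry the replies below discuss.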
No, it's one of the things you have to consider before accepting Bitcoin. Overstock, for example, gives store credit and explicitly says up front that that is what they're doing. Anyone who refunds in USD at the current exchange rate instead of returning the original payment, without stating up front that that's what they're going to do, is being hugely dishonest. Think about it: what if you tried to return your laptop to Sony and they said, "Fine, but we're going to refund your money in JPY instead of USD, and oh, by the way, the exchange rate has changed 10% since you bought this, so here's a refund for 10% less"? No way that would fly, so I'm surprised people would think doing the same with Bitcoin would be OK.
Generally multiple currencies aren't accepted and the volatility is much lower. You probably couldn't make a profit on normal consumer goods that have a return policy. For this to work you'd need potentially extreme currency changes over the course of a few weeks.
I don't think accepting multiple currencies is a requirement for what you're describing. But I think the real answer is that you could do this with other currencies, it just isn't profitable. The only reason it looks like a reasonable scheme with BTC is the large price movements, which will lessen as it grows, making this a non-issue (if it even is one right now).
It does. File system, databases, requests, whatever; at least I'm not aware of any edge cases currently. Just be aware that C++ node.js modules are incompatible with node-webkit, so you have to build those modules with nw-gyp. A little inconvenient, but it should work fine.
Neat. An Objective C bridge to access OS X's frameworks was specifically what I was thinking of, and it looks like there is one: https://github.com/TooTallNate/NodObjC
Fun fact: if you're taking one vowel and five consonants, the Wheel of Fortune letters RSTLNE (not in that order) are the letters most likely to occur.
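You can sanity-check a claim like this yourself by counting letter frequencies over some English text. The sample below is tiny and illustrative; a real ranking needs a large corpus.

```python
# Count letter frequencies in a small text sample, then pull out the
# most common vowel and the five most common consonants.

from collections import Counter

sample = (
    "the quick brown fox jumps over the lazy dog and then rests near "
    "the tall trees while the sun sets in the east"
)
letters = Counter(c for c in sample if c.isalpha())
vowels = {"a", "e", "i", "o", "u"}

top_vowel = max(vowels, key=lambda c: letters[c])
top_consonants = sorted(
    (c for c in letters if c not in vowels),
    key=lambda c: -letters[c],
)[:5]
print(top_vowel, top_consonants)
```

On a big enough corpus this tends to converge toward E for the vowel and letters like T, N, S, R among the consonants, which is consistent with the RSTLNE choice.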
Impressive. I think these sorts of articles are hard to come by because many (if not most) super-productive programmers are doing their work for a company, where the outside world won't find out about what they've done. Fabrice is notable because he has a lot of high-profile work.
For anyone who isn't ready to completely turn off the news spigot, consider switching to a weekly (or monthly) source of news.
I get all of my (non-HN) news from The Economist's audio edition. It's released weekly, and they have a section right at the start about the big things happening in business/politics around the world in the last week. It's no more than a couple of minutes to scan, and 10-20 minutes in normal-speed audio.
The rest of the articles are at least one step back (since they summarize a week of what's happened). Many others are looking at some larger event or trend, sometimes with a recent event/anecdote as a lead in.
I like the audio edition in particular since I can put it on while I'm doing chores or commuting and I'll pick up bits and pieces even if I'm not fully paying attention. I can also have only the sections I care about included, which lets me skip the ones I really don't care about.
I read The Economist a lot while I was studying for my undergrad. I've never felt more informed, and have never found a more balanced presentation of things, imho.
Nowadays I read The Guardian's website and watch RT sometimes. But I think I'll go back to The Economist soon.
Yes. The Audio Edition is a word-for-word reading of the print edition. Economist Radio looks like it has the random other audio things they do (there's also a podcast on iTunes with those sorts of things).
I also love the Economist audio edition. I listen to it during my commute, and mostly get through it each week. It kept me sane on a horrible commute with my previous job.
They have a separate digital subscription, if you just want the website and audio edition. It is included in a print subscription as well. You can buy a single week, if you want to try it out.
Highly recommended if you want to ignore the advice of the OP.
Sorry, it changed since I last looked. They used to offer a regular subscription with both digital and print, or a cheaper digital only. As you say, now they offer the regular option with both, or a cheaper option with either digital or print.
News media, such as The Economist, is also VERY different from what they talk about in the article:
>The media feeds us small bites of trivial matter, tidbits that don't really concern our lives and don't require thinking.
Most news stories aren't really what I would call news anyway. The type of news that is apparently bad for me is the same type that embodies everything wrong with most news media. The topics of your evening news broadcast are about as relevant as the sports scores.
That section is really great. If you were going to read nothing else at all and add just one thing, that would be the first thing to add. What I miss is something similar at a more local level: both local-local, and for my country. I even thought of doing a start-up around this (producing a one-page weekly PDF to complement that spread).
Is there a list of languages which use separate heaps for each thread? I'm really interested in learning more about that concurrency model, but I haven't had much luck compiling a comprehensive list of which languages actually use it.