I have the opposite feeling. I understand very well the reasons some people are pushed away from relational data, normalization, and set-theory formalism. Some of it is the need for change. Some of it is fear of over-engineering. Some of it is that the trend is toward Twitter-like blobs. A big reason is that doing things right is a pain in the...
And then comes the only valid reason, from my POV: we sometimes have to denormalize and give up the power of relational data because we do not yet know how to store (read and write) huge amounts of relational data conveniently. Notice the "yet"? I think this is a technical problem, and its solution may not be far off.
One example of a solution poking its nose out recently: "Google Plus with Your World". It does strike me that any query I make to the closest Google server responds /instantly/ to any random word with what amounts to a join against a possibly monstrous matrix of all the likes of all the users in my circles.
I don't know how they store this, or where and how they denormalize, but in any case it seems to me to be just "relational data as usual".
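To make concrete what that "relational data as usual" view would look like, here is a toy sketch using sqlite3. The schema and table names (circles, plus_ones, pages) are my own invention for illustration, not anything Google has described:

```python
# Toy sketch of personalized search as a plain relational join.
# Schema and data are invented; the point is only the shape of the query.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE circles   (owner_id INTEGER, member_id INTEGER);
CREATE TABLE plus_ones (user_id  INTEGER, page_id   INTEGER);
CREATE TABLE pages     (page_id  INTEGER, title     TEXT);
""")
db.executemany("INSERT INTO circles VALUES (?, ?)", [(1, 2), (1, 3)])
db.executemany("INSERT INTO plus_ones VALUES (?, ?)", [(2, 10), (3, 11)])
db.executemany("INSERT INTO pages VALUES (?, ?)",
               [(10, "python packaging"), (11, "haskell monads")])

# "A join on the matrix of all the likes of all my circled users":
rows = db.execute("""
    SELECT p.title, c.member_id
    FROM circles   AS c
    JOIN plus_ones AS po ON po.user_id = c.member_id
    JOIN pages     AS p  ON p.page_id  = po.page_id
    WHERE c.owner_id = ?                      -- the searching user
      AND p.title LIKE '%' || ? || '%'        -- the "random word"
""", (1, "python")).fetchall()
print(rows)   # [('python packaging', 2)]
```

Trivial at this scale, of course; the whole question is whether that query stays instant over billions of rows.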
So if, in two years, there is some "bigreldata" software that lets a Postgres sit happily on 1000 TB of relational data with instant reads and writes, I would certainly rather use that, with a layer of Python glue on the server feeding a slim client, than a blob datastore with NoSQL handcuffs and a fat client with 20 libraries of third-party JavaScript code.
I may be wrong, however, and would love more insights on this.
Google is not using an RDBMS to get Google Plus content on SERPs. What makes you think they are? It works just like the rest of Google, on their leading-edge kit: BigTable, GFS, etc. Amazon is able to personalize its site for each customer quite a bit more than Google does, and it relies on a similarly horizontally scalable architecture.
Google talked a little about how Personalized Search works in the BigTable paper; it's worth a review:
> Personalized Search stores each user's data in Bigtable. Each user has a unique userid and is assigned a row named by that userid. All user actions are stored in a table. A separate column family is reserved for each type of action (for example, there is a column family that stores all web queries). Each data element uses as its Bigtable timestamp the time at which the corresponding user action occurred. Personalized Search generates user profiles using a MapReduce over Bigtable. These user profiles are used to personalize live search results.
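In other words, the layout is a sparse, timestamped map rather than a set of normalized tables. A rough Python sketch of that shape (purely illustrative, not an actual Bigtable client; the row keys, families, and helper are mine):

```python
# One row per userid, a column family per action type,
# and cells versioned by the timestamp of the user action.
from collections import defaultdict

def make_table():
    # table[row_key][column_family][column][timestamp] = value
    return defaultdict(lambda: defaultdict(lambda: defaultdict(dict)))

def record_action(table, userid, family, column, value, ts):
    table[userid][family][column][ts] = value   # Bigtable timestamp = action time

table = make_table()
record_action(table, "user:1234", "web_query", "q", "postgres sharding", 1326000000.0)
record_action(table, "user:1234", "web_query", "q", "bigtable paper",    1326000042.0)
record_action(table, "user:1234", "click",     "url", "http://example.com", 1326000050.0)

# The MapReduce that builds profiles would scan each user's row;
# here that is just iterating one row's column families.
profile = {family: sum(len(cells) for cells in columns.values())
           for family, columns in table["user:1234"].items()}
print(profile)   # {'web_query': 2, 'click': 1}
```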
Regardless, even in your scenario with the perfect RDBMS, the future web stack wouldn't change much. You still have the same issues with blocking and different languages for client and server. As a developer myself, it doesn't matter at all to me if my call to a method is backed by a relational, document or key/value database. It's all an abstraction somewhere. It just needs to come back quickly and be easy to scale up.
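As a minimal sketch of what I mean by "it's all an abstraction somewhere" (the interface and backend names here are made up for illustration):

```python
# The calling code asks for a profile and never sees whether the backend
# is relational, document, or key/value. Swap the store, keep the caller.
from typing import Protocol

class ProfileStore(Protocol):
    def get_profile(self, user_id: str) -> dict: ...

class InMemoryKVStore:
    """Stand-in backend; could just as well wrap Postgres or a document DB."""
    def __init__(self):
        self._data = {}
    def put_profile(self, user_id: str, profile: dict) -> None:
        self._data[user_id] = profile
    def get_profile(self, user_id: str) -> dict:
        return self._data.get(user_id, {})

def render_homepage(store: ProfileStore, user_id: str) -> str:
    # Application code only cares that this comes back quickly.
    profile = store.get_profile(user_id)
    return f"Hello {profile.get('name', 'stranger')}"

store = InMemoryKVStore()
store.put_profile("u1", {"name": "Ada"})
print(render_homepage(store, "u1"))   # Hello Ada
```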
The big change we're seeing is the client becoming primarily JS driven and the server more or less relegated to sending/receiving JSON. It's a much richer experience, but a pain when the toolsets on either end are completely different.
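In the extreme, the server side collapses to something like this sketch (stdlib only, endpoint and payload invented for illustration), with all rendering left to the JS client:

```python
# Minimal sketch of a server "relegated to sending/receiving JSON":
# one endpoint, no templates, the browser does the rest.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JsonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = json.dumps({"items": [{"id": 1, "title": "hello"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # A JS-driven client would fetch this and render it entirely in the browser.
    HTTPServer(("localhost", 8000), JsonHandler).serve_forever()
```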