Many ideas from the Semantic Web are currently in use. The main advance right now is in what is called the "Semantic Enterprise", where everyone in an organization knows what everyone else is working on via knowledge modeling.
Every time you do a Google/Bing search you can see metadata returned. Projects like DBpedia and OpenCyc are huge and will become more important as more devices are networked.
It is unlikely that AI will be able to disambiguate concepts from a raw text corpus like a book (indexing) at the quality of a human specialist any time soon. Perhaps with some recent advances in quantum computing the raw horsepower will arrive.
I'm not sure that lack of adoption of sem-web tech (even the loosely defined kind) in sites serving cat pictures is a big deal.
For some, the semantic web means RDF and linked data. The "full promise", of course, is queries that can draw inferences from indirect relationships in the data. Admittedly, the few examples I've seen, whilst remarkable, perform best when there's only one homogeneous underlying dataset. Where you have disparate datasets from many organizations/institutions (and the data spans decades), these things struggle outside of demos because of the huge work required to normalize/map everything onto something common that can be sensibly queried against - and sometimes that's the case even when the same ontologies are in use! The underlying data just doesn't necessarily map very well onto the sem-web representations, so duplicates occur and the possible values explode into vast numbers of valid permutations even though they all mean the same handful of things. And since it's the read-only semantic web, you can't just clean the data; you have to map it.
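To make the mapping problem concrete, here's a toy sketch (mine, not from any real dataset) using rdflib: two hypothetical sources describe the same organisation with different vocabularies, and a query only sees both once you add explicit glue triples - and even then the "mapping" leaks into the query itself.

```python
# Toy sketch of the normalization/mapping problem: two hypothetical datasets
# (namespaces EX_A and EX_B are made up) describe the same organisation with
# different classes and label predicates.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, FOAF

EX_A = Namespace("http://example.org/datasetA/")   # hypothetical dataset A
EX_B = Namespace("http://example.org/datasetB/")   # hypothetical dataset B

g = Graph()

# Dataset A models it as a foaf:Organization with a foaf:name
g.add((EX_A["org/123"], RDF.type, FOAF.Organization))
g.add((EX_A["org/123"], FOAF.name, Literal("Dept. of Primary Industries")))

# Dataset B models the same organisation with its own class and label predicate
g.add((EX_B["agency/DPI"], RDF.type, EX_B.Agency))
g.add((EX_B["agency/DPI"], EX_B.title, Literal("Department of Primary Industries")))

# Without this glue triple, a query for foaf:Organization names misses dataset B
g.add((EX_A["org/123"], OWL.sameAs, EX_B["agency/DPI"]))

# Plain SPARQL won't follow owl:sameAs unless the store does inference,
# so in practice the mapping ends up baked into the query as well:
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX exb:  <http://example.org/datasetB/>
SELECT ?name WHERE {
  { ?org a foaf:Organization ; foaf:name ?name }
  UNION
  { ?org owl:sameAs ?other . ?other exb:title ?name }
}
"""
for row in g.query(q):
    print(row.name)   # prints both spellings: the duplicates don't go away either
```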
Which is why I'm always amazed that http://www.wolframalpha.com/ works at all. And hopefully one day https://www.freebase.com/ will be a thing. I remember being excited about http://openrefine.org/ for "liberating" messy data into clean linked data... but it turns out that you really don't want to curate your information "in the graph"; it seems obvious, but traditional relational datasets are infinitely more manageable than arbitrarily connected nodes in a graph.
So, most CMS platforms are doing somewhat useful things in marking up their content in machine-readable ways (RDFa, schema.org [as evil as that debacle was], HTTP content-type negotiation and so on) either out-of-the-box or with trivially installed plugins.
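For anyone who hasn't poked at this, here's a small sketch (mine, with a placeholder URL) of what that CMS markup looks like from the consuming side: fetch a page and pull out any embedded schema.org JSON-LD blocks, which is what most of those plugins emit. Whether a given site uses JSON-LD rather than RDFa or microdata is an assumption you'd have to check.

```python
# Sketch: extract schema.org JSON-LD blocks from a CMS-rendered page.
# The URL is a placeholder, not a real site from the post.
import json
import re

import requests

url = "https://example.org/some-cms-article/"   # placeholder
html = requests.get(url).text

# Structured data is commonly embedded in <script type="application/ld+json"> blocks
pattern = re.compile(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

for block in pattern.findall(html):
    try:
        data = json.loads(block)
    except json.JSONDecodeError:
        continue  # malformed markup is depressingly common
    print(json.dumps(data, indent=2)[:400])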
If you look around, most publishing flows are extremely rich in metadata, for all sorts of things: news articles [1], journal articles [2] (DOIs weren't built to serve the semantic web, but they certainly carry rich metadata), movie/book/audio titles and their content...
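On the DOI point specifically, doi.org supports content negotiation, so the same DOI URL can hand back citation metadata as JSON instead of redirecting to the publisher's landing page. A quick sketch (mine; the DOI below is a placeholder, not a real reference):

```python
# Ask doi.org for machine-readable metadata instead of the landing page.
import requests

doi = "10.xxxx/placeholder"   # placeholder DOI, substitute a real one
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
print(resp.status_code)
print(resp.text[:400])   # CSL-JSON metadata for Crossref/DataCite-registered DOIs
```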
Beyond that, we just had GovHack [3] here in Australia a few months ago where groups were encouraged to do what they could with public government datasets (which themselves, again, aren't necessarily "semantic web" but are increasingly using "linked-data" formats/standards for, if not interoperability, then at least dataset discovery). There are RDF representations of everything from parliamentary archives [4] to land use [5].
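As a rough sketch of what that "dataset discovery" step looks like in practice (placeholder URL, mine, not one of the datasets linked above): load an RDF dump with rdflib and count the classes and predicates it actually uses, which is usually the first thing you do before deciding whether two published datasets can be mapped onto each other at all.

```python
# Sketch: profile an RDF dump to see which classes and predicates it uses.
from collections import Counter

from rdflib import Graph
from rdflib.namespace import RDF

g = Graph()
# Placeholder: substitute the URL or path of an actual RDF/Turtle dump
g.parse("https://example.org/some-government-dataset.ttl", format="turtle")

classes = Counter(g.objects(None, RDF.type))
predicates = Counter(p for _, p, _ in g)

print("Classes used:")
for cls, n in classes.most_common(10):
    print(f"  {cls}  ({n} instances)")

print("Predicates used:")
for pred, n in predicates.most_common(10):
    print(f"  {pred}  ({n} triples)")
```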
I've personally seen some great applications of inter-organizational data mashing/sharing/discovery in materials science, and a few years ago I really enjoyed working with bioinformatics services such as [6], which allow some fun SPARQL queries to answer interesting questions.
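Since [6] is just a reference here, below is a rough sketch of the kind of query I mean, using SPARQLWrapper against UniProt's public endpoint purely as a stand-in; the endpoint URL and the up: vocabulary are my assumptions, not the service from the post.

```python
# Sketch: run a trivial SPARQL query against a public bioinformatics endpoint.
# UniProt is used only as a stand-in for the unnamed service [6].
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")  # assumed public endpoint
endpoint.setReturnFormat(JSON)
endpoint.setQuery("""
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?protein
    WHERE { ?protein a up:Protein }
    LIMIT 5
""")

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["protein"]["value"])
```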