There’s been a ton of conversation in social media circles about LinkedIn’s new marketing campaign. LinkedIn has been notifying users that they’re among the top 5% (and now even the top 10%) of viewed profiles on the social network. Some mentions of the campaign bemoan it. Others attempt a “humble brag” about it. And still others seem to genuinely think it’s pretty cool. But what I love about the campaign is that these e-mail notifications aren’t aimed at their recipients at all. These congratulatory messages are targeted at the LinkedIn users who aren’t getting them.
One of LinkedIn’s most valued assets is the data it possesses about each of its registered users. As of January 2013, LinkedIn has 200 million registered users. Most of them have at least indicated their current career situations, and probably their work histories. But there’s so much more a registered user can do with their profile, account, and activities. And as LinkedIn constantly likes to imply while encouraging those activities, doing these things could help a user’s career.
So when 10 million LinkedIn users start talking about the congratulatory e-mails they’ve received about the interest in their profiles (which, presumably, carries some notion of career benefit), it leaves the other 95% who never received such an e-mail to wonder: what can I do to enhance my profile on LinkedIn and help my own career? After all, even after people realize that 5% of 200 million is 10 million, the other 95% must ponder what they’re failing to do to be among that massive pool of “better” profiles.
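The back-of-envelope math is worth making explicit, since it’s the engine of the campaign. A minimal sketch, using only the figures from the post (the variable names and breakdown are mine, not LinkedIn’s):

```python
# The numbers behind LinkedIn's "top viewed profiles" campaign.
# Figures come from the post itself; this is an illustration, not LinkedIn data.
registered_users = 200_000_000  # reported user base, January 2013

top_5_percent = int(registered_users * 0.05)   # users who get the flattering e-mail
left_out = registered_users - top_5_percent    # users left wondering what they lack

print(top_5_percent)  # 10000000  - a huge group bragging about the e-mail
print(left_out)       # 190000000 - a far larger group nudged to add more data
```

Ten million people talking about the e-mail is a big megaphone; 190 million people not receiving it is the actual audience.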
LinkedIn’s answer: Add more data about yourself and get more active on their social network. (Read: Give us more data.)
It’s a brilliant move. It’s Judo Marketing. And yet I can’t help but recall this Onion piece on “personal branding” even as I enjoy the gambit.
This movement has been gaining momentum for more than a decade. Human beings who make investment decisions based on their assessment of the economy and on the prospects for individual companies are retreating. Computers—acting on computer-generated market trend data and even newsfeeds, communicating only with one another—have taken up the slack. — Raging Bulls: How Wall Street Got Addicted to Light-Speed Trading | Wired Business | Wired.com
Why Are We Still Consuming News Like It’s 1899? | benhuh!com
The limited amount of space on news homepages and their outmoded method of presentation pose big problems for the distribution of news, as well as its consumption by the public. Even though it’s been more than 15 years since the Internet became a news destination, journalists and editors are still trapped in the print and TV world of message delivery.
Traditional methods of news-writing, such as the inverted pyramid and the various “editions” of news, pose big limitations on how news is reported and consumed. Unfortunately, Internet-era changes, such as reverse-chronological news blogging, the inability to archive yesterday’s news, poor commenting quality, and live-blogging, have made news consumption an even more frustrating experience.
When the country elected Barack Obama just four years ago, Twitter was a fledgling startup. During the campaign, Obama overtook Kevin Rose as the most followed person on Twitter, passing him at 56,482 followers.
Five years ago, according to Pew, less than half of Americans used email daily; less than a third used a search engine.
YouTube was founded in 2005 and Facebook in 2004 — and it would be a while after that until they became such integral parts of our day-to-day Internet experience.
Today nearly half of Americans own a smartphone. The iPhone is five years old. — Technology - Rebecca J. Rosen - 59% of Young People Say the Internet Is Shaping Who They Are - The Atlantic (via infoneer-pulse)
Sometimes you have to stop and look back to appreciate the rush.
Last week, during Google’s big I/O event, Google co-founder Sergey Brin “interrupted” the first day’s keynote to show off his personal pet project: Google Glass. Glass is a technology integrated into an eyeglass frame to allow “wearable computing.” Brin showed off Glass by way of an impressive multi-person, multi-modal stunt involving skydivers, stunt bikers, and people abseiling down the side of a building, all demonstrating the sharing capabilities of Google Glass. The stunt impressed a great deal of folks, and it was followed by an opportunity for members of the media to briefly experience wearing Google Glass for themselves. Quite a few of them left the experience impressed, going so far as to exclaim that they’d seen the future of computing. And, conveniently for Google, they’re buying the humanistic marketing pitch for Glass: that the project moves technology out of the way of communicating and experiencing life. But that’s not really why Google is racing towards a dominant position in wearable technology. The real reason is that if Google can get “technology out of the way,” then Google can marginalize Apple’s primary competitive advantage.
Last month, Apple unveiled the newest version of its iOS operating system, and much was made of the fact that Apple has dropped Google Maps for its own proprietary mapping technology, built in partnership with a smaller online map purveyor. That step, along with the earlier introduction of Apple’s “Siri” technology (a voice-activated digital assistant that can search for answers to your questions on the iPhone and iPad), is a move designed to negate Google’s influence over iOS. And given Apple’s dominance of mobile computing, those moves also provide it with opportunities for market dominance in the search and mapping industries. You can imagine how Google must feel about being cut out of the dominant mobile computing platform.
Read any single review of any smartphone or tablet from the last two years, and the benchmark against which all other machines are judged is Apple technology - the iPad and the iPhone. Very rarely do any non-iOS devices make par. Rarer still are the devices that make a reviewer gush that they’re better than the iOS options without any caveats. The reason is the massive competitive advantage Apple enjoys in the realms of design and human-computer interaction. With the slick, tightly controlled iOS environment, and only two form factors across all iOS devices, no other technology maker can build a device as pain-free and enjoyable (not to mention sexy) to use as the iPad or iPhone.
In the smartphone and tablet computing world, the winner of the game is the competitor who makes the most beautiful, elegant device. That’s why Google wants to end the beauty pageant. Sure, the current iterations of Google Glass are “geeky looking.” No beauty contests will be won by them. But people have been wearing glasses for centuries, and over time Google will be able to miniaturize the technology behind the Glass product until it looks like any other pair of fashionable frames. Who knows, maybe a pair of Google Glass contact lenses isn’t out of the question. At any rate, eventually nobody will notice the glasses. Which means nobody will notice the device. Which means nobody will care about the form of the device any longer. All anyone will care about is the service the device provides. Given its history, that’s a game Google can look forward to playing.
Google Glass and its successors won’t eliminate tablets and smartphones. Brin has already conceded that point in his discussions of the project. But there are still only 24 hours in any day, and any day involves only a finite number of moments when a person actually needs or wants to seek information, exposition, or entertainment. By providing Glass, Google will be offering people a way to get all of that without having to pick up a tablet or a phone. From Google’s point of view, if Glass takes off, Apple can hang on to its market dominance only for as long as people care about how the tech they’re using looks and feels.
Right, sorry for the link-baity title. But the thesis holds true. Long term, anyway.
Today the keynote of Apple’s WWDC was presented, and a slew of new Apple products, from hardware to software, were unveiled. Despite the fact that the hotly anticipated Apple television set (not to be confused with the Apple TV box unit) never did make an appearance, anyone who followed the presentation, or at least read up on the results, would find it hard to argue that Apple didn’t come out with guns blazing. The pricey new hardware is as beautiful as it is expensive. The slick new OS X Mountain Lion features are as pretty as the Retina display on the new MacBook Pro. And the new integrated apps in the soon-to-arrive iOS 6 look brilliant… and really familiar. In fact, those new integrated apps are so familiar because they mimic a great deal of the functionality of some of the most popular third-party apps in the iOS ecosystem. And by taking the ideas that were germinated and then perfected in those third-party apps and integrating them tightly with iOS 6, Apple killed those apps - and the entire ecosystem too.
First, let’s just accept that there is no finish line in this game. There’s no eventual winner. Apple’s the king of the tech and business world in so many ways right now that it’s not worth trying to argue that what’s coming is Apple’s demise. But this matter is bigger than Apple anyway, and Apple will do just fine finding its way in the post-app era. It probably just won’t be too eager to see that era arrive.
Today, by eating the best apps cultivated in its own ecosystem, Apple made a few declarations: 1) Even though Apple creates beautiful objects with beautiful interfaces, it’s pretty unclear about what people actually want to do with its pretty little things. 2) Apple feels no qualms about snatching success from its ecosystem developers, even when those developers show extreme loyalty to iOS. And 3) Apple sees no reason not to anoint winners in software-as-a-service categories and then integrate them tightly with its own applications as partners, the rest of the ecosystem be damned. iOS is not an open playing field, or a level one. It’s a playing field where the rules are up to a hopefully benevolent dictator, and all the players are at the mercy of that dictator’s market analysis and App Store rankings.
For some time now, a common meme among web development professionals has been that the web is the most hostile development environment in the history of computing, thanks to the various takes on “standards” across the array of browsers on the market, the seemingly endless niche scenarios exposed by the seemingly endless tool choices, and the exposure to the population at large with only rudimentary access controls provided by the web platform itself. But for all its flaws, the web has never been hostile in the way that iOS is today. With the declarations Apple just made, the iOS platform (and really any other proprietary OS) is hostile to innovation. And that hostility will kill the ecosystem.
The iOS ecosystem won’t turn into a pumpkin at midnight tonight. It will continue to thrive for some time, and extrapolating near-term data on iOS development and usage will make for easy arguments that the app era is only just beginning. But the signs of strain are easy to find. There’s already a dedicated piece of lingo for having Apple take your software idea and ship it with iOS out of the box: getting “Sherlocked.” The term has been around for a couple of years now. And today there was a seemingly endless fount of the “s-word” springing from the Twitter accounts of various well-known iOS developers who had, for a time, found a happy little niche market that made for a nice living - until Apple came in and ate it. The cracks are starting to develop. No doubt some of these highly capable iOS developers will just look to create another clever iOS app, but it’s just as possible that folks with such skills might now choose to apply themselves to an ecosystem not wholly owned and dictated by the monster that just ate them.
One admirable quality of many of the best iOS app developers is that they treat their work like a craft. They seek elegant, beautiful, and innovative solutions to common problems. Many of them write prolifically about the problems they’re solving and their efforts to do so. And while the monetary payoff is a prime motivation for these people, it’s clear that the creativity and innovation involved in the work is what keeps them going. That innovation has value to Apple too, and Apple’s been keen to let them go right on innovating until it sees fit to take all that innovation and use it for itself. And all it cost Apple was the 70% share of those apps’ App Store purchase prices that it paid out to their developers. That’s a heck of an ROI on market research and development.
And when such people, with their clever ideas, meet better tools for developing advanced applications on the web, that’s when those cracks in the app ecosystem will give way. Ideas are malleable on the web in a way they can’t be on a proprietary OS. The ever-popular “pivot” can be executed at relatively small cost on the web versus on an OS. There’s no need to conform to a beautiful but restrictive set of UI guidelines on the web. Nobody worries about a web app needing to meet any form of “approval” beyond whether or not users find value in it. And on the web, as long as you put in the work, your app will always be discovered by someone.
Oh. Yeah. Discoverability - the gaping hole in the proprietary OS ecosystem’s polished armor. Funny how everyone waited to hear whether Apple was getting into televisions when it hasn’t yet solved its biggest problem - letting users easily, intelligently, and at times serendipitously discover great apps in the ecosystem. Apple has now reached six full iterations of its iOS platform and still has nothing better than a few small improvements to its base App Store over the years. And Google Play? From the company that provides the de facto standard in web-based search? Yeah, forget it.
The gold standard for application discoverability was invented years ago by Tim Berners-Lee, before apps were even really a consideration. That standard is the web, and it will be the benefactor to generations of web-based applications that learn to harness its power. No proprietary OS ecosystem will ever match it, because doing so would mean implementing an internal, proprietary version of the web that third-party app creators could never be convinced to adopt. And so discoverability will not only remain a weakness for app platforms, but a major competitive disadvantage.
Proprietary app ecosystems won’t fade because developers explicitly leave them. The getting, for so many app developers, is still too good - for now. But the ecosystem’s best developers are being chased away. The web app ecosystem will simply begin to thrive as the tools improve, the skills advance, and the opportunity to be discovered remains open. Eventually, developers looking to build something will seek open spaces where their innovations can grow without their having to agree to the indentured servitude of the digital age.
With relatively rare exception, I turn on my television and my DirecTV box every night to see “what’s on.” Note that I’m rarely turning it on with “appointment TV” in mind. I’m almost always disappointed. Who could blame me? For about $90 a month, I pay DirecTV for the privilege of filling about 85% of my DVR space with Sesame Street, Super Why!, and Dinosaur Train for the kids (all from PBS). And as an added bonus, the best thing I’ll see on TV in an entire week might be something like The Killing - a show that (ask any of its regular viewers) clearly hates its audience. This sort of behavior is unhealthy, but more importantly to me, it’s bad economics. Near as I can tell, I’m paying about $90 a month for about 300 channels that almost never interest me, so I’m ending that now.
There’s been a lot of discussion about consumers “cutting the cord” from cable and dish television providers. Yesterday, an entire site cropped up dedicated to a sort of social-media petition asking HBO for a stand-alone HBOGo. And recently, during the All Things Digital 10 conference, quite a kerfuffle was made over comments by Hollywood super-agent Ari Emanuel that essentially boiled down to “à la carte models won’t work” and “people don’t want to pay for anything.” Along with that, he made a lot of misinformed comments about magical anti-piracy technology that the likes of Google could and should implement if they really wanted to help protect his industry. (Yep, his biggest angle was protectionism.)
I’ve seen a lot of numbers on the subject lately, and the ones I tend to believe more tend to support the argument that the “old models” of Hollywood are dying, but I’ll grant that those holding the opposite opinion have their own data too. Instead, I’d rather just speak to my own logic for deciding to cut the cord from these old models of televised entertainment. When it comes to money I spend on entertainment, I’m seeking value. When I’m connected to the old model via a cable or dish subscription, I’m paying money for entertainment that’s ostensibly available 24/7. If I turn the TV on at any hour, there’s something I’m paying for displaying on the television. Of course, whether or not it’s actually entertaining or informative is an entirely different question - one that’s often answered with a resounding “no.” So in essence, I’m paying $90 a month to have by-and-large non-entertaining, non-informative noise pushed to me, whether I’m trying to receive it or not. What’s worse, when I do just give up on trying to find something compelling to watch, but choose to watch “something dumb” on television using the old model, I’m paying for it both with my subscription money and my time. And time’s my most valuable resource in this equation by far. Cable and dish network television make it entirely too easy to waste my resource of time.
So instead, I’ll save that $90 a month and use services like Netflix to provide children’s entertainment when I (God help me) want to have the kids watch TV. Luckily we’ll still have PBS available as well - Sesame Street is actually better today than it was when I was a kid. And for my own entertainment, I’ll use things like Apple TV to occasionally purchase things I really want to watch, like the Game of Thrones series. At a pure per-unit rate, such viewing might cost me more, but at least I’ll feel like I’m getting some sort of value for both my money and my time, and I’m not subsidizing things like Operation Repo on truTV. I don’t even really know what either of those things is, but I know I don’t want to help pay for them.
So here’s a warning to all the moving picture content companies seeking to protect their old business models. It comes with no malice or ill intent: I am not the consumer type you want to dismiss. I am not an early adopter. I’ve never pirated a single movie. Throughout my life, from AOL to broadband, from Napster to Spotify, from wine appreciation to craft brews, I’ve proven to be part of the “early majority.” If I’m making the move, it might not yet be too late to change your models, but it will be before too long.
This evening, while scanning Twitter and not watching TV, I saw this tweet re-tweeted many times over…
HBO faces the same issue that studios do with premium VOD. At what point do you set yourself up for the future while you shiv the past?— Jason Hirschhorn (@JasonHirschhorn) June 6, 2012
It’s a tricky question, but there’s a deceptively simple answer, provided by the music industry: You shiv the past before the future shivs you.
Let’s put it this way: if you can build a $100 billion company by using the Internet to replace the college yearbook—imagine what you can do if you use the Internet to replace college.
— RealClearMarkets - Bigger Than Facebook (via pieratt)
That’s what is just beginning to happen. It all became official when the Massachusetts Institute of Technology appointed as its new president the guy who is responsible for MITx, the school’s free online education program.
A textbook example of what is disruptive, and what isn’t:
- Google Wallet installs new credit card readers at Walgreens
- Square sells new credit card readers at Walgreens
Which could change the world?
To disrupt, you’ve got to democratize your space.
This week, Mary Meeker of Kleiner Perkins unveiled her annual “Internet Trends” presentation to the world. To web trend geeks, this presentation has become a sort of mini web holiday, as it usually provides a fantastic mix of easily digestible information, great high-level insights, and occasional ephemera that make for good copy on blogs and media sites covering the business of the web. Yesterday, Derek Thompson of The Atlantic explored a particularly key piece of information regarding consumers’ time spent versus ad dollars spent across types of media, and came away with three key takeaways…
— Takeaway #1: We still love TV.
— Takeaway #2: Advertisers still love print.
— Takeaway #3: Audiences move faster than advertisers.
That last takeaway is the most important one for any brands or marketers to consider, but not quite for the reasons stated in Mr. Thompson’s piece.
According to Meeker’s data, we spend 42% more time consuming media on mobile than on print while advertisers spend 25x more dollars on print than on mobile. As Thompson reasonably surmises, there’s quite likely some lag between the growth of popularity of mobile media for consumers and the movement of ad dollars towards that medium and away from print. But that’s not the really important way in which audiences move faster than advertisers today.
In traditional print media, advertisers are forced to stand still. They pick their publisher platform, buy ad space, and expect their customers to come to them. On the web, customers may still come to find your brand, but whether or not you’re advertising or engaging in social marketing, they’re already out there talking about you. And they move so fast that, like it or not, your brand isn’t just competing with the other members of your industry, but with every other brand on the web.
Case in point: yesterday I saw this rather amusing tweet in my timeline, sent in response to one of the users I follow…
@cwilk ORD experiencing Air Traffic Arrival delay over 2hours. Can’t leave till we get clearance. Silly weather control machine NOT working— JetBlue Airways (@JetBlue) May 31, 2012
Personal, responsive, engaging, and funny. It was so good that even though the customer was annoyed about the delay, he retweeted JetBlue’s reply, which allowed everyone who follows him to see the engagement as well. Honestly, at a brand-building level, it worked on me.
Apparently I wasn’t the only one to follow @cwilk and notice. Because a bit later, this happened…
So @cwilk asks @JetBlue ONE question and gets immediate reply. I ask @Nikon_USA the same question 20+ times…ignored. Who wants a Canon?— Matt Cashore (@mattcashore) May 31, 2012
Not only are your audiences - your customers - not standing still on the web, waiting for you to push messages to them, but the speed of their communication and the level of their interconnectedness are such that every brand is competing with every other brand on the web in real time.
Whether you like it or not.
This dynamic creates a demand on brands to own their own presence. People are on the web, both on mobile and on the desktop. They’re already there far more than they are in the unidirectional medium of print, and that trend will only continue to grow. Whether or not brands decide to start spending proportionate ad dollars on the desktop and mobile web, they’ve got to be in the web space, representing themselves in a tight-knit enough way to be responsive to current and potential clients’ needs at web speed.
That doesn’t necessitate more ad dollars be spent on the mobile and desktop web, however. It just necessitates that brands re-think the way they understand and communicate with their audiences and do so very quickly.