Archive for the ‘Google’ Category
A lot of people constantly harp on the fact that Google is not the market leader in any segment other than search and contextual advertising, and hold it up as proof that Google is a one-trick pony. While it is desirable for Google to lead the market in everything it gets into, that is not the only thing Google is looking for when it kicks off a new product.
To understand why Google does things differently, you first need to understand how Google works differently as a company.
At its core, Google is one massive computing infrastructure. What the company excels at is building and maintaining applications on top of this infrastructure, only parts of which are known to us as Bigtable, Google File System and MapReduce. Almost every application (yes, this is speculation, sue me) is built atop this infrastructure, giving Google consistency across the storage, classification and categorization of any data that comes into its system. Other companies, like Yahoo! and Microsoft, have years and years of legacy sitting on different frameworks and infrastructure, giving Google amazing leverage over them.
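To make the MapReduce idea mentioned above concrete, here is the model in miniature as a toy, single-process Python sketch (this is an illustration of the programming model, not Google's actual code; the point of the real infrastructure is that the same two functions scale across thousands of machines):

```python
# Toy MapReduce-style word count. The "map" step emits key/value pairs
# from each input document; the "reduce" step combines all values for a
# given key. A real MapReduce runtime would shard both steps across a
# cluster; this illustration runs in one process.
from collections import defaultdict

def map_phase(documents):
    # Emit (word, 1) for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Sum the values for each key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["Google builds infrastructure", "infrastructure scales"]
print(reduce_phase(map_phase(docs)))
```

Writing an application as a pure map function plus a pure reduce function is what lets the infrastructure, rather than the application author, worry about distribution and fault tolerance.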
For Google, the only real product is user experience and the value the user derives from using Google’s products. This, in turn, helps refine and improve offerings across the board, creating an endlessly iterative and self-improving product ecosystem. The products by themselves are a means to bettering the end product of user experience.
For instance, not many would have much to say about Google’s “web history”, but few know that it is used to drive recommendations in Google Reader. Those recommendations also use geographical data (I was recommended feeds related to Trivandrum after being there for a week), which Google collects in conjunction with ISPs (driven by Google Analytics), to refine them further.
In a similar manner, Google already tracks the clicks that originate from Gmail, and I would not be surprised if they are already tracking and indexing the thousands of billions of messages that flow across Google Talk to better predict which link you are more likely to click on Tuesdays and Wednesdays (matching data from the messages against your web history) than on Fridays or Sundays. That is very much in line with their mission statement of being more useful to you, in a manner that quite often borders on the eerie.
And that is where the greatest challenge lies for companies that aim to compete with Google. Learning systems that improve themselves iteratively with time and usage are the hardest to beat, because they improve by using you against yourself (something like racing against your own best time in a racing game rather than against a pre-programmed computer run). And since Google has been around for so long, the amount of data it has about you is something the competition can’t match unless a vast majority of Google’s users switch overnight to the competing services.
Which brings us back to the non-search problem. Google really does not need to be number one in other areas (silly acquisitions like Jaiku aside). It does not cost Google much to create new products (many Google projects, like Reader and News, were started as 20% time projects), and it does not cost them much to run those either (they are written on the same framework that is maintained for their core offerings). So even if all of them were to fail, it would not make a dent in Google, while the fact is that a lot of them don’t.
Now, add Google App Engine to the mix, which opens up the same infrastructure (leveraging the same Google Accounts identity system) to the wider web. With App Engine, for the tiny cost of supporting the bootstrap process for free, Google gets even more focused and specific usage data. In the hierarchy of usage quality, context is king: apps have a locked-down context that takes the guesswork out for Google, and the data stored in such contexts is in a format that Google natively understands.
It would be really stupid to assume that all these processes and all this data collection are not already being used to improve the advertising business, which is where they earn their bread.
p.s: This post has been edited for clarity and a couple of grammatical snafus from its first version.
There are events, and then there are not-so-ordinary events that give us hints, even in their disassociation, about the direction in which technological developments (or any other type, for that matter) will head.
In the past week we have seen three such events – Microsoft’s formal overture towards Yahoo!, Facebook’s less-than-stellar numbers and Twitter’s ongoing saga in trying to keep a web-scale messaging framework up and running – that give us tasty hints as to where we may be headed.
The simpler, shorter version of the Microsoft – Yahoo! story is that companies that do business in the old-school way – clumsy and ugly in gait, like a behemoth – are history on the internet. Locking the user and his or her data into platforms or products is a strategy that is history. Only a stellar product will keep companies alive in the future, and neither Microsoft nor Yahoo! has built an in-house hit web-scale product in recent times.
The feeling that keeps coming back to me is that Microsoft and Yahoo! will be one of those weddings that look perfect as a mental image (for the shareholders and business wonks) but end up being an absolute nightmare in practice. There is a staggering amount of redundancy (for every Yahoo! product you can think of, there is almost always a competing one from MSN/Live.com), and the integration will be rotten in terms of both platforms and cultures.
Even if you set aside the strong stench of desperation in the move, the fact remains that these are two companies struggling to catch the imagination of the younger, upcoming generation. By the time the dust settles on this one, much confusion will have ensued, ticking off the loyal users who make up the vast majority of the numbers that make the deal look exciting.
That said, it is indeed sad to see an internet icon like Yahoo! in the position it finds itself in now. And in that state of distress lies a lesson for everyone who makes a living off the internet: don’t take anything for granted. Earlier, a company’s lifecycle – from inception to success to demise – used to take decades; now the same is being compressed into ten years.
It is a theme I will never tire of repeating to everyone I know: being nimble is a priceless asset in doing business now – nurture it, grow it and guard it with as much care as you guard your bottom line.
- Do this only when you have a bit of time to spare; don’t rush through it.
- Mark all individual feeds that have more than 100 unread items as “all read.” You are likely to spend a lot of time working through them while getting very little real value out of it.
- Reduce top level clutter: Keep as few folders on your top level as you can. My total boils down to nine, ordered in terms of reading frequency (daily, india-blogs, links, misc, music, news, private, technology, testing).
- If you need to organize your feeds in a more granular manner, use sub-folders. Do this only if you are an organisation freak. What has worked best for me is the following method.
|-Top Level Category (By frequency first and by usage type later)
|-Sub Category (By theme/topic: Technology, Business, Blogs)
- Always read using the “River of News” view (Folder Level view) on Top Level folders
- Find your comfort level in terms of the number of items you can read in a fixed period of time, and switch to List View whenever the item count exceeds a fixed number (I keep it at 100).
- Star lengthy items that need more reading time, catch up on them later.
- Frequently prune your subscription list: check your reading trends regularly. Unsubscribe from feeds that are below a certain read percentage in the subscription trends, then give the reading trends the same treatment.
- My average reading percentage is 30% for my top 40. If you have similar numbers, it is a good idea to let go of the feeds with less than a 30% reading percentage. Chances are you won’t miss them, because you don’t read them much anyway.
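The pruning rule in the last two points can be sketched as a tiny script. The feed names, percentages and the 30% threshold below are all made up for illustration; Google Reader only shows these numbers on its Trends page, so in practice you would apply the rule by hand:

```python
# Sketch of the pruning rule: given per-feed reading trends
# (feed name -> percentage of items actually read), list every feed
# that falls below a chosen threshold as a candidate for unsubscribing.
# The data and threshold here are illustrative, not real Reader output.

def feeds_to_unsubscribe(read_percentages, threshold=30.0):
    """Return feeds whose read percentage is below the threshold, sorted."""
    return sorted(
        feed for feed, pct in read_percentages.items() if pct < threshold
    )

trends = {
    "Daring Fireball": 85.0,
    "Slashdot": 12.5,
    "TechCrunch": 28.0,
    "Ars Technica": 64.0,
}

print(feeds_to_unsubscribe(trends))  # → ['Slashdot', 'TechCrunch']
```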
Just a quick note before the day starts in full flow, chock-a-block with meetings. All those really loud people who have dismissed outsourcing and given it so much grief should go and take a look at IBM’s Q4 2007 earnings call. A lot of IBMers in the US are getting to keep their jobs because of their company’s strong performance, and they have outsourcing to thank for it. Services, which is mostly driven by outsourcing, has been their rock-star performer:
Looking at our results by segment, Services continued the momentum we’ve seen over the year. Global Technology Services revenue was up 16%, profit up 26%. Global Business Services revenue was up 17%, profit up 9%. And we signed $15.4 billion of new business, and importantly short term signings were up 8%.
They have also benefited from having a truly global operation, which enables them to focus on emerging markets, which drew in a third of their revenue for the quarter. You can expect a similar set of results from Google (growth, but at a much slower rate) and from other geographically diverse companies that can limit their exposure to the carnage we will see this year in the US market. Google itself crossed the crucial landmark a while ago when, in Q1 2007, its international revenues pretty much hit 50% of total revenue.
Once again, the companies that will take a huge hit are ones like Apple, which depend extensively on retail spending in the US market, which is why they have been gradually moving into segments (like the iPhone) where you can look at recurring revenue per customer rather than a one-time engagement. 2008 will be critical for Apple, and if they are to escape the carnage, they have no choice but to forge ahead with the launch of the iPhone in other markets.
Then again, the iPhone is vastly overpriced for a market like India, even if you were to assume something like INR 16,000 as the price point. If they need a winner, they’ll need something in the sub-INR 10,000 price range to set the market alight and I don’t see something like that coming from Apple.
On an unrelated parting note, I think I’ve linked to David Manners’ blog on the semiconductor trade before; it is an absolutely lovely read even if you are not a semiconductor wonk. Highly recommended.
RSS readers have over time become pretty full-featured software in their own right. Most now provide the standard set of features – OPML import/export, categories, river of news and search – irrespective of their avatar, online or offline, and I have pretty much grown used to depending on my reader of choice, Google Reader, to satisfy my feed-reading needs.
That said, there is one feature I’d really love to have in my RSS reader: clustering on feeds as an additional way to categorise data, beyond the current methods of categories and tags. Think of it as a cross between your RSS reader and Google News/Techmeme. Would it not be nice to have your own little personal Google News or Techmeme, built from the sources you have picked, rather than be led by what Gabe or the kind folks at Google News may have seeded their websites with?
There are, though, a couple of problems that could make this impossible:
Processing: Any algorithm that finds similarities in text is computationally intensive, even when the data set is limited. Scaling is usually feasible when the size of the data set is reasonably fixed; with the variance in size across different RSS subscription lists, it would be a royal pain to find the right algorithm that scales effectively and efficiently.
Entropy: Traditional similarity-match approaches work best when the documents cover a similar domain, so that an apple means the fruit rather than Apple the company. The entropy in the data set needs to be reasonable for the algorithm to function reasonably well, and learning systems also need to be trained on training data, which may not be possible in this case.
Link Match: What we are left with, then, is to attack the problem purely by tracking outgoing links. This would thankfully be far less computationally intensive than the pure text-analysis approach. The accuracy and utility may not be stunning, but it would be good enough for the immediate purpose: a reasonable way of classifying what my subscription list is talking about.
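A rough sketch of that link-match idea, assuming each feed item is reduced to the set of URLs it links out to (the item names and URLs below are invented for illustration):

```python
# Group feed items into clusters whenever they share at least one
# outgoing link. This greedy pass is far cheaper than text similarity;
# a real implementation would also need to merge clusters transitively
# when a new item bridges two existing ones.

def cluster_by_links(items):
    """items: dict of item name -> set of outgoing URLs.

    Returns a list of clusters, each a sorted list of item names.
    """
    clusters = []  # each cluster: [set of item names, set of links]
    for name, links in items.items():
        for cluster in clusters:
            if cluster[1] & links:  # shares at least one outgoing link
                cluster[0].add(name)
                cluster[1].update(links)
                break
        else:
            clusters.append([{name}, set(links)])
    return [sorted(names) for names, _ in clusters]

items = {
    "post-a": {"http://example.com/story1", "http://example.com/story2"},
    "post-b": {"http://example.com/story2"},
    "post-c": {"http://example.com/other"},
}
print(cluster_by_links(items))  # → [['post-a', 'post-b'], ['post-c']]
```

Even this crude grouping would surface the “story of the day” across a subscription list, which is all the immediate purpose demands.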
Ars Technica has a nice piece that separates the wheat from the chaff on Windows 7, the successor to Windows Vista. As much as I dislike Microsoft’s Windows Vista, Windows 7 is a product I am deeply interested in, since the man heading the effort is Steven Sinofsky, who was responsible for the revolutionary changes seen in Microsoft Office 2007.
Another reason why Windows 7 would be an interesting one to watch is because it will be the first major operating system release that will have the full weight of the new architect in town — Ray Ozzie — bearing down on it. Both Sinofsky and Ozzie represent a change in style and direction for the company, but what is more interesting is that the expected street date for Windows 7 — either late 2009 or 2010 — would represent a technological landscape that would be drastically different from what we see today.
Some of the points to ponder:
Google: The company has eaten Microsoft’s breakfast, lunch, dinner and the entire banquet. They are also making minor inroads into areas that Microsoft defends with all its might, like the enterprise. Admittedly, a couple of thousand tiny firms defecting to Google Apps is not enough to make even the tiniest scratch on Microsoft’s bottom line, but what it does do is create yet another distraction for a company that could really do with none.
But to dismiss these distractions as low-impact would be amazingly silly. Not every firm is born a member of the Fortune 500 club. Some grow over the years to become behemoths, others end up medium-sized, while the rest stay where they are. All of them represent sales of differing volumes, and as Microsoft already knows, it is hard to get the enterprise segment to switch once it has made a choice. Most of these firms also deepen their involvement with Microsoft over time, adding more license seats and upgrades. In short, Microsoft is fibbing when it says that players who move over to Google don’t worry it.
That said, Microsoft too is working on the “cloud”, in terms of storage and processing, to add that facet to its enterprise suite. But it is a hard one to pull off when you are playing both sides: claiming Google’s cloud-bound strategy is flawed while trying to upsell your own cloudiness at the same time.
Mobility: I will refrain from writing about Google’s Android here, since it remains to be seen how it will play out. Meddling in consumer-owned hardware is not Google’s strength, and even if the platform is good, it will have to contend with operators and other variables (like Google’s bid for spectrum) to become a success. Microsoft has had limited success here, but the stars of the show are Nokia (Symbian, Maemo) and Research In Motion (BlackBerry). Between those two, the entire spectrum of individual users and the enterprise has been covered, leaving Redmond not much wriggle room.
The possibility of disruption here is incredibly huge, because it unbundles data and processing from the desktop – which is what keeps the kitchen going at Microsoft – and puts them in an environment that Microsoft does not dominate. Even at this stage, someone like me, who used to open up the laptop every night at home, no longer does so, because my BlackBerry gives me access to practically every service I need, either directly or through Opera Mini. Two years down the line, all of these services will be better by at least twofold.
Decreasing cost of technology: With the overwhelming influence of open source, outsourcing and the commoditization of technology, it has become easier and cheaper to start up almost anything these days. Almost every successful recent web venture has taken advantage of this, and we now have players like Amazon (with the combination of S3, EC2 and SimpleDB) who allow you to focus more on creating products than on understanding how the individual parts work. In two years’ time, it would not be hard to imagine browser-based desktops and computing environments that function pretty close to how a regular desktop environment does.
What this means, eventually, is that the base level of quality for newer products will increase with time, while the cost to create them decreases dramatically. Microsoft, by comparison, will be spending more to release every new product and operating system, while fighting ever more products that are increasingly cheaper to build and better to use.
The next generation: The coming generations won’t be the same as mine, the one before mine, or the one that follows. We were not born into a world where computing surrounds us in various forms, and most of us who design the digital experience do not natively think in terms of such a world. It would be wrong to assume that people who do think natively in those terms will be a vast majority by the time Windows 7 is pushed out the door, but a fair number of them will have the OS as the gateway to their digital experience. Not designing the OS to meet their expectations would in all likelihood alienate them, while designing it with them in mind would alienate the current crop of users. It is not a fun space to be in.
To conclude, it would be totally over-the-top to assume that by the time Windows 7 ships, the world will have walked away from Microsoft. They will be challenged very hard on many aspects of their business, and there will be considerable erosion of their customer base. But it would be foolish to take even 2010 as a benchmark year for gauging the impact new technologies will have on Microsoft. The enterprise is a slow-moving creature that takes time to change, and even Google can’t do much about that.
These are thoughts that have been lingering in my mind for a while now regarding the future of the news publication business. I have been putting off publishing them, waiting to write that perfect, long, winding essay, but spare time being such a luxury these days, numbered points will have to do for the time being.
- The newspaper as we know it, printed on paper and published once or twice a night, won’t survive to the time when the generation after the next joins the workforce. It is an outdated endeavor supported mostly by a dying habit, one that won’t find any resonance in the generations to come. Additionally, we’ll get to save a great many trees and help better content get published through cheaper publishing costs, compared to the awful and painful experience that procuring newsprint is these days.
- The concept of a newspaper, though, won’t die. While the Kindle is not the future, it points to the direction we’ll head in. We have historically been happy to carry more than one portable device (mobile phones, MP3 players, pagers, walkmans and so on), but the trend is definitely pushing towards converged devices (phone plus PIM, phone plus MP3 player, phone plus GPS). Something really has to give here. The gazillion-dollar question is how to marry the existing form factor with the ease of distribution that newspapers enjoy today.
- Publications will do something dramatic to survive. In all likelihood, this will mean dropping their agency deals and pooling resources to cut costs. Think about it: newspapers and television channels push out some fifty reporters to a Prime Minister’s press conference to report the same boring bits and bytes. There used to be a love for language, and the way it was used in the old days, that could have justified this earlier. But the new zest is for sharper, crisper language, not 900-word works of art that nobody has the time to read. The unions and the current generation of journalists won’t take to this lightly, but they can do it the painless way and restructure in peace, or do it the painful way: fight it and make a mess of things. The news publication of the future will have most of its senior reporters covering unique stories, while a pool of common reporters does what an agency does now, at a considerably lower cost.
- Distribution will also undergo a major change. Instead of newspaper stalls, we could have subscription points: kiosks from which you transfer content to your device over Bluetooth (imagine the security nightmare! Or, if you are smart, you would start investing in a company that provides secure, reliable Bluetooth connections through some form of fingerprinting) or wifi. The displays will show leading pages, much like a normal newspaper vendor’s stall today, but you won’t be buying the edition, you’ll be transferring it. Once transferred, you can read, share and recommend the content, making you a subconscious editor of a virtual publication.