Blue Screen Of Duds

Where the alter ego of codelust plays


Short Notes: Quantitative Hedge Funds, Google App Engine, DTH, itimes


I know the Google App Engine just took over the interwebs. We will get to that, but later. Alpha Magazine has a nice write-up (a bit all over the place in terms of direction, but very rich in terms of content) on quantitative hedge funds that marry cutting-edge research from various faculties of science with the normal hedge fund business.

It is quite a long article, but it is well worth the time you spend on it if numbers, markets, arbitrage, learning (artificial and natural), behaviour and systems design are things that make you salivate more than blondes and brunettes. On a somewhat related topic, we have a fascinating entry on search algorithms that also mentions Pareto in the same breath. Most of the math in it flies like a supersonic above my head (confession time: I absolutely suck at math, go figure!), but do stick with him till the point where he explains why there is no “best” search algorithm.

On to Google App Engine (finally!). If Google can bless this with the levels of reliability that they are known for, it will have the same effect that Ruby on Rails has had on start-ups, by making bootstrapping of products so easy that it becomes absolutely irritating. David Recordon believes that the App Engine will provide apps that use it with a shared sense of a user, which addresses one of the major problems facing every socially-enabled product these days. Krzysztof Kowalczyk adds to the existing commentary and says it is the first true internet operating system. But I do wonder about one thing. Everyone is very bullish on EC2 and GAE from the entry-barrier point of view; what remains to be seen is the exit barrier, and how difficult it would be to leave such a framework in terms of both cost and effort.

The last word on the GAE launch has to go to Michael Arrington, who can’t ever be expected to sit out a slugfest, especially one that draws traffic to Techcrunch on what is not really its strong ground – technology. He makes a post on the website about how Google has pulled down one of the first apps, HuddleChat, built by Google employees to showcase the technology from a product perspective. He calls it “censorship” and the easily-inflamed community sets itself alight (rather predictably) over it, while the simple reason behind the move is probably nothing more than avoiding bad PR karma during such a major product launch. The move, by itself, does not make or break the world. Get over it (and yourselves, too), guys.

Meanwhile, all is not well in DTH land in India. The two leading players — Dish TV and Tata Sky — are said to be racking up combined losses to the tune of Rs 1,400 crore in their quest to do a market land grab first and aim for profitability later. The current acquisition cost is Rs 1,600 – Rs 2,300 for each new subscription, and the newer MPEG-4 set-top boxes that will hit the market soon are expected to push costs and losses up even further.

Interestingly, Dish TV seems to think the tipping point where ARPU will start going up, instead of down, is at the 7-8 million subscription mark. Which would mean that, with 3 million subscribers, Dish TV itself has to more than double its market penetration before margins start working in its favour. That could easily see the company doubling its current losses in the coming years, and that, alongside other costs, could see its current Rs 300 crore loss going up to Rs 700 crore. In short, this won’t be a fun competition to be in if you wind up being second-best.

In one of the last links for the day, we have David Manners deviating from his usual domain of semiconductors to post a note on how bad T5 at Heathrow is, which should be sent to every person who is fond of doing the customary India-bashing bits under the pretext of “this just would not happen in the west.” The prize quote from the post is the captain announcing: “We’ve landed at Heathrow, which is in chaos.”

Lastly, today’s silly Twitter apps update: Grouptweet is a Twitter application that allows you to send tweets to a group of people. Even better is the blog Twitterholics, which will allow you to track such inane products without having to leave the comfort of your browser tab.

p.s: Oh yes, Indiatimes has launched their social networking website (finally!). Looks like TIL now has two schools of thought: the IIS/.Net-based in-house products and the LAMP-stack-based outsourced products. Then there is the Java stack that powers the e-commerce offering, and the entirely outsourced email offering. Oh well, this is Indiatimes after all.

That said, it is a very clean implementation, and if you want to make friends with half of the staff at iWorld Gurgaon, this is the place to be! Product-wise, this looks like something that was cobbled together from everything they could find on other products. And for those who are wondering about the email part of it, it looks like a re-implementation of the current whitebox email solution provided by Indiatimes.

I guess the thinking is that since there are way too many inactive/spam accounts on the main Indiatimes email framework, this could be a clean, fresh start towards having a better user base that can be sold to advertisers for more. Let us file this one away in the “social media will buy me lunch (dinner and next day’s breakfast too!)” department. (hat tip: Contentsutra).

Written by shyam

April 9, 2008 at 12:38 pm

Decentralized Social Data Framework: A Modest Proposal


Twitter being down is no longer funny, nor is it even news anymore, and the same is the case with Twitter-angst, where loyal users fret and fume about how often it is down. One of the interesting suggestions that has come out of this is to create a decentralized version of Twitter – much along the lines of IRC – to bring about much better uptime for the beleaguered child of Obvious Inc.

I would take the idea a lot further and argue that all social communication products should gradually turn into aggregation points. What I am proposing is a new social data framework, let us call it HyperID (since it would borrow heavily from the ideas and concepts behind OpenID), which social media websites would subscribe to, push data into and pull data from.

Essentially, this would involve the publication of the user’s social graph as the universal starting point for services and websites to subscribe to, rather than the current approach where everyone is struggling to aggregate disparate social graphs as the end point of all activities. Ergo, we are addressing the wrong problem at the wrong place.

The current crop of problems will only be addressed when we stop pulling data into aggregators and start pushing data into service and messaging buses. Additionally, since this data is replicated across all subscriber nodes, it should also provide us with much better redundancy.

Problem Domain 

Identity: Joe User on Twitter may not always be the same as Joe User on Facebook. This is a known problem that makes discovery of content, context and connections tricky and often downright inaccurate. Google’s Social Graph API is a brave attempt at addressing this issue using XFN and FOAF, but it won’t find much success because it is initiated at the wrong end, and also because it is, at best, an educated guess, and you don’t make guesses with your personal data or connections.
 
Disparate services: Joe User may only want to blog and not use photo sharing on the same platform, unlike Jane User, who uses the entire gamut of services. In an even worse scenario, if Jane User wants to blog on one service provider (say, Windows Live Spaces) and share photos on another (Flickr, for instance), she will have to build and nurture different trust systems, contacts and reputation levels on each.

Data retention: Yes, service providers are now warming up to the possibility of allowing users to pull their data out, but it is often provided without metadata or the data that accrues over time (comments, tags, categories etc.). Switching providers often leaves you having to do the same work all over again.

Security: Social information aggregators now collect and save information by asking you for passwords and usernames on other services. This is not a sane way to work (extremely high risk of phishing) and is downright illegal at times when it involves HTML scraping and unauthorized access.

Proposed solution

[Figure: HyperID layout]

Identity, identity, identity: Start using OpenID as the base of HyperID. Users will be uniquely addressable by means of URLs. Joe User can always be associated with his URL (http://www.joeuser.com/id/), independent of the services he has subscribed to. Connections made by Joe User will also resolve to other OpenIDs. In one swipe you no longer have to scrape or crawl or guess to figure out your connections.
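To make this concrete, here is a minimal sketch of what a self-hosted HyperID graph could look like once connections resolve to OpenID URLs. The field names, endpoints and structure are entirely hypothetical, just to illustrate the idea:

```python
# A minimal, hypothetical sketch of a HyperID social graph document.
# Field names and endpoint URLs are illustrative only, not a defined spec.

SOCIAL_GRAPH = {
    "owner": "http://www.joeuser.com/id/",          # Joe's OpenID URL
    "connections": [
        "http://www.janeuser.com/id/",              # connections resolve to OpenIDs too
        "http://openid.example.org/users/bob/",
    ],
    "subscribers": [                                # services that push/pull this graph
        "https://service-a.example.com/hyperid/endpoint",
        "https://service-b.example.com/hyperid/endpoint",
    ],
}

def is_connected(graph: dict, openid_url: str) -> bool:
    """Resolve a connection by OpenID URL: no scraping, crawling or guessing needed."""
    return openid_url in graph["connections"]

print(is_connected(SOCIAL_GRAPH, "http://www.janeuser.com/id/"))   # True
```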
 
Formalize a social (meta)data vocabulary: Existing syndication formats like RSS and Atom are usually used to publish text content. There are extensions of these formats, like Media RSS from Yahoo!, but none of them addresses the social data domain.

Of the existing candidates, the Atom Publishing Protocol seems to be the most amenable to an extension like this, covering the most common social data requirements. Additional, site-specific extensions can be added by means of custom namespaces that define them.
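As a rough sketch of how such an extension could sit on top of Atom, the snippet below builds an entry that carries a connection in a custom namespace. The namespace URI and element names are invented for illustration; they are not part of AtomPub or any existing spec:

```python
# Sketch: an Atom entry carrying social data via a hypothetical "hyperid" namespace.
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"
HYPERID_NS = "http://example.org/ns/hyperid"   # hypothetical extension namespace

ET.register_namespace("", ATOM_NS)
ET.register_namespace("hid", HYPERID_NS)

entry = ET.Element(f"{{{ATOM_NS}}}entry")
ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = "Joe added a connection"
ET.SubElement(entry, f"{{{ATOM_NS}}}id").text = "http://www.joeuser.com/id/"

# The social payload lives entirely in the extension namespace.
conn = ET.SubElement(entry, f"{{{HYPERID_NS}}}connection")
conn.set("href", "http://www.janeuser.com/id/")
conn.set("rel", "friend")

print(ET.tostring(entry, encoding="unicode"))
```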

You host your own social graph: With a common vocabulary, pushing, pulling and subscribing to data across different providers and subscribers should become effortless. This would also mean that you can, if you want to, host your own social graph (http://www.janeuser.com/social) or leave it up to service providers who will do it for you. I know that SixApart already does this in part with the Action Streams plugin, but it is still a pull rather than a push service.

Moreover, we could extend the autodiscovery protocol used for RSS and point it to the location of the social graph, which is a considerably better and easier solution than the one the Social Graph API proposes.
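A hedged sketch of what that autodiscovery could look like: the page advertises the graph location through a link element, and a subscriber picks it up the same way feed readers discover RSS today. The rel value "hyperid-graph" is made up purely for illustration:

```python
# Sketch: feed-style autodiscovery extended to advertise a social graph location.
from html.parser import HTMLParser

HOMEPAGE = """
<html><head>
  <link rel="alternate" type="application/rss+xml" href="http://www.janeuser.com/feed" />
  <link rel="hyperid-graph" type="application/atom+xml" href="http://www.janeuser.com/social" />
</head><body></body></html>
"""

class GraphDiscoverer(HTMLParser):
    """Collects hrefs of <link> tags whose rel advertises a social graph."""
    def __init__(self):
        super().__init__()
        self.graph_urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "hyperid-graph":
            self.graph_urls.append(attrs.get("href"))

parser = GraphDiscoverer()
parser.feed(HOMEPAGE)
print(parser.graph_urls)   # ['http://www.janeuser.com/social']
```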

Extend and embrace existing tech: Extend and leverage existing technologies like OpenID and Atom to authenticate and advertise available services to users depending on their access levels.
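One way to picture the "advertise available services" part: after the provider verifies a user's OpenID, it returns only the collections that user's access level allows. The access levels, collection names and layout below are invented for the sake of the example:

```python
# Sketch: advertising available service collections based on access level.
SERVICES = {
    "blog":   {"href": "/collections/blog",   "min_level": "member"},
    "photos": {"href": "/collections/photos", "min_level": "member"},
    "admin":  {"href": "/collections/admin",  "min_level": "owner"},
}

LEVELS = ["visitor", "member", "owner"]   # ordered, lowest to highest

def advertise(openid_url, access_level):
    """Return the collections visible to this OpenID at this access level."""
    rank = LEVELS.index(access_level)
    visible = {name: svc["href"] for name, svc in SERVICES.items()
               if LEVELS.index(svc["min_level"]) <= rank}
    return {"workspace": openid_url, "collections": visible}

print(advertise("http://www.joeuser.com/id/", "member"))
# {'workspace': 'http://www.joeuser.com/id/',
#  'collections': {'blog': '/collections/blog', 'photos': '/collections/photos'}}
```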

What this could mean

For companies: They will have to change the way they look at usage, data and their own business models. Throwing away locked-in logins would be a scary thing to do, but in return you get better-quality and better-profiled usage.

In the short run you are looking at existing companies changing themselves into data buses. In the longer run, it should be business as usual.

Redundancy: Since your data is replicated across different subscribers, you can push updates out to different services and assign fallbacks (primary subscriber: Twitter, secondary: Pownce, and so on).

Subscriber applications can cache advertised fallback options and try known options if the primary ones are unavailable. 
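A minimal sketch of that fallback behaviour, assuming subscribers expose an inbox endpoint that accepts pushed Atom payloads (the URLs here are placeholders, not real endpoints):

```python
# Sketch: push an update to subscriber endpoints in order of preference.
import urllib.request
import urllib.error

SUBSCRIBERS = [
    "https://primary.example.com/hyperid/inbox",     # e.g. Twitter as primary
    "https://secondary.example.com/hyperid/inbox",   # e.g. Pownce as fallback
]

def push_update(payload: bytes):
    """Try each subscriber in order; return the first endpoint that accepts the update."""
    for endpoint in SUBSCRIBERS:
        req = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/atom+xml"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return endpoint
        except (urllib.error.URLError, OSError):
            continue   # endpoint down or unreachable, try the next subscriber
    return None        # no subscriber accepted the update
```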

For users: They will need to sign up with a HyperID provider or host one on their own if they are savvy enough to do that. On the surface, though, it should all be business as usual, since a well-executed API and vocabulary should do the heavy lifting behind the scenes.
 
The Opportunity

For someone like WordPress.com, diversifying into the HyperID space would be a natural extension. They could even call it Socialpress. The hypothetical service would have a dashboard-like interface to control your settings, subscriptions and trusted users, and an API endpoint specific to each user.

Risks

Complexity: Since data is replicated and pushed out to different subscribers, controls will have to be granular by default, and managing them across different providers could prove to be very cumbersome.

Security: Even though attacks against OpenID have not been a matter of concern so far, extending it would bring with it the risk of opening up new fronts in what is essentially a simple identity verification mechanism.

Synchronization: Since there is data replication involved (bi-directional, as any decent framework should support), there is the possibility of lag. Improperly implemented HyperID-compliant websites could, in theory, retain data that should have been deleted across all subscribed nodes.

Traction: Without widespread support from the major players the initiative just won’t go anywhere. This is even more troublesome because it involves bi-directional syncing and all the parties involved are expected to play nice. If they don’t, it just won’t work. We could probably get into certification, compliance and all that jazz, but that would make it insanely complicated.

Exceptions: We are assuming here that users would want to aggregate all of their things under a single identity. I am well aware of the fact that there are valid use cases where users may not want to do that. HyperID does not prevent them from doing so. In fact, you could use different HyperIDs, or even specify which services you don’t want published at all.

Feedback

The comment space awaits you!
 
p.s: Apologies for the crappy graphic to go with the post. I am an absolute newbie with OmniGraffle and it shows!

Written by shyam

February 4, 2008 at 1:46 pm