New Forms of Internet and Media Experience

Yesterday, Amazon announced its much anticipated Android-tablet / next-generation-ebook-reader. Nick Carr raised an interesting issue on his blog:

…we are seeing the e-book begin to assume its true aesthetic, which would seem to be far closer to the aesthetic of the web than to that of the printed page: text embedded in a welter of functions and features, a symphony of intrusive beeps.

Yet the “aesthetic of the web” is also changing. Granted, most websites today still have their 1990s blizzard-of-content-festooned-with-links aesthetic, but mobile apps are starting to have a real impact. Even some of the most stuck-in-the-’90s of successful websites are making mobile-influenced changes.

With features such as full-screen browser mode in Apple’s OS X Lion, and in Microsoft’s Windows 8 to come, web pages and web apps can now take on the immersive aesthetic characteristic of a mobile or tablet app – or indeed of a physical book – with distractions excluded.

For instance, here is an example from a traditional website – a tool bar whose large buttons and simplified graphics could have come straight from an iPad app:

Relatedly, there are real signs that HTML5 web apps can now deliver a user experience as fluid as native mobile apps. The Financial Times replaced its native iPad app with a web app – and the web app is just as good. Amazon’s web app is essentially as good as the native Kindle reading app on iPad, delivering a powerful example of a “modal” immersive media experience. And so on.

Certainly, developing mobile web apps remains much harder than native development – for the iPad, for instance. But this is something that can be solved over time, with startups providing development kits and environments, or the growth of open-source community efforts, or development support from platform providers.

Web apps carry with them all the promises of the web – freedom to innovate; infinite cross-reference-ability; mashup solutions that combine different data sources; ability to share ideas and material immediately and freely; and so on. I think we can see now that, in terms of technology platform, web technologies will eventually overwhelm proprietary app development for most applications.

The direction of user experiences is less clear, however. Facebook, in particular, has succeeded in capturing a huge user base and getting them to experience a great deal of the web, and media, and even life in general, through the prism of the Facebook page. Analogously to the way some media-consumption experiences might keep a central focus on one piece of content while providing related and interlinked material, Facebook is now a different kind of aggregation and cross-referencing system to the traditional web, centered on your social network and the people you’re interested in, displacing the traditional unfocussed “web” of HTML links.

Just as experiences like reading and watching a movie call for immersion and distraction-exclusion, we see that web browsing is evolving to provide immersion. Yet at the same time the web wants to keep desired “distractions” just a click or a swipe away – where desired distractions might be Wikipedia information about the media you’re experiencing, finding the dictionary definition of an unfamiliar word, looking at your friends’ opinions about a TV series, sharing your own ideas, or taking a break from immersion to check news or messages.

So it’s not (simply) that ebooks are becoming more like the web. There are multiple evolutions going on simultaneously, with mobile apps shifting to web-technology foundations, and web experiences developing the immersive distraction-excluding characteristics of books but with a wealth of context and interaction always kept close by.

The opportunities for startups? The web-technology platforms, and (even more so) the development of applications that find the right user-experience aesthetic are both wide open today.

Filed under Tech

“Different” Businesses

Netflix announced today that they are going to run their video-streaming-over-the-Internet and mail-a-DVD-to-your-house businesses separately, and brand them separately, as “Netflix” and “Qwikster”. CEO Reed Hastings says that it is becoming “increasingly clear” that they really are separate businesses, and (hence) need to be operated separately.

What makes businesses “the same”, or “separate”?

  • Customer base
  • Customer benefits
  • Suppliers and supply agreements
  • Teams / skills required
  • Technologies / products to be used or developed
  • Equity market and/or acquirer perspective

I can recall how hard it is, especially in a startup, to serve two different customer bases. As a business-focussed startup, when we tried focussing on large enterprises, we were pulled into the special services that those customers required; whereas mid-sized businesses needed a standardized, low-cost-to-implement solution. The two kinds of customer needed different sales teams, different support organizations, even different products. We just had to focus on one or the other, or we would never have been able to succeed at either. Eventually we did most of our sales to mid-sized businesses, mainly due to their shorter sales cycle.

In a larger company such as Netflix, the opportunity is there to set up and run two operating groups.

How convincing is Reed Hastings’s argument that the DVD-by-mail and streaming-video businesses are separate?

They do offer different benefits. DVD-by-mail provides a massive catalog and high-quality (Blu-ray) video, whereas streaming offers instant delivery from a smaller catalog with low-to-moderate video quality. DVD-by-mail targets living-room viewing, whereas streaming can address the living room and is a natural fit with other more portable viewing methods, such as tablet computers.

From a content-supplier perspective, the suppliers are the same, but might be willing to offer different terms to a video-streaming business vs. a DVD-by-mail business. Given the challenges Netflix has been having with licensing content, the opportunity to present streaming and mail as “separate” in Hollywood negotiations may be part of the motivation.

Technology and financial-model wise, Netflix may want to be able to invest in the streaming business without having to show the immediate returns that would be demanded of the less-exciting DVD-by-mail business.

Product-focus-wise, having two separate teams can help each focus on producing the best experience they can, without having to be concerned about compromises with the other group.

Having two separate operating groups also provides the opportunity to sell / spin out one of the businesses if desired. I do wonder if this is really the primary motivation – the two businesses are going to bill separately, which does suggest that ultimately they will split. Relatedly, it makes it easier for Wall St. to see the value of the two businesses separately.

The problem with all this is that it doesn’t include enough customer perspective. Many customers – especially those who are heavy consumers of video – will want both DVD-by-mail / high-quality / big selection and streaming / instant / mid-sized catalog / medium quality. They want to manage their video selections as one, and receive one bill.

What’s worse, the DVD-by-mail business is the perfect onramp to streaming video as streaming goes mass market. How much of that advantage will Netflix be sacrificing by separating the businesses?

Surely it would be possible to organize internal teams, and internal management accounting, so that both businesses could develop appropriately, without having to degrade the customer offer and streaming-transition strategy?

If Reed Hastings is planning to sell one or both of the businesses in the next 18 months, then separating them could make sense. An acquirer might be willing to pay a high price for the video-streaming business, but not want to take on DVD-by-mail. Beyond that, the negative effect on customers of full business-unit-level separation just doesn’t seem worth it.


Steve Jobs and the Importance of “Unacceptable”

After a long summer with a lot of travel (Big Sur, Tuscany, Rome, and England), I find myself at home, kids back at school today, with a lot of tech-world headlines having gone by. The two biggest have been the resignation of Steve Jobs as Apple CEO, and HP’s announcements that it was giving up on WebOS and planning to spin out its PC division.

Steve Jobs’s resignation has stirred a great deal of emotion online, and especially here in Silicon Valley. It’s a truism that when people strongly mourn the departure of someone they don’t know, they are in large part mourning for themselves. Given the extent to which Silicon Valley entrepreneurs identify with “Steve”, the Valley’s inhabitants feel like they are losing not only an icon but also a proof-in-living-form of the superiority of the entrepreneurial model – of “product excellence” (Apple) over “marketing” (Microsoft-as-was); of entrepreneurs over company boards; of disrupters over incumbent companies; even of vision over execution. And naturally the challenges of Steve Jobs’s medical condition stir all kinds of sympathetic emotions, too.

Some of the more outré commentary has claimed that Steve Jobs is indeed a “unique visionary” without whom the tech-industry will be lost.

Of course, this is mostly bunk. The primary world-changing tech development of the last 15 years – the growth of the Internet with all its ramifications – is very much not a Steve Jobs endeavor. Especially since his return to Apple in 1996, Steve Jobs’s extraordinary success has been less about broad-brush vision than about the specifics of focus, about following through uncompromisingly on the implications of his goals, and about getting Apple to do what they choose to do extraordinarily well.

Others sought to serve the consumer. Apple made it a true focus, closing products that didn’t fit.

Others sought well-designed easy-to-use products (Nokia, for instance). But those other companies were not so uncompromising. For example, when Apple was looking for telephone-company partners for the iPhone, they were unwilling to back down on their control of the user experience. They had their idea of how the iPhone would be, and they kept talking to the carriers until they found a carrier (AT&T) willing to go along with it. A company that was more “reasonable”, more willing to compromise with the established order, more “business minded”, would have given carrier-partners all kinds of control over the iPhone for the sake of broader and earlier distribution, and broken the simplicity and ease of use that powered its success. Apple followed through on their focus on user/consumer – that end user’s ease-of-use mattered more than any issues of distribution, partner relationships, time to market, or anything else.

Similarly, many other companies had targeted the tablet “market” before Apple – Microsoft famously announced that the tablet was the future in 2000 – but the earlier products just weren’t simple enough, light enough, coherent enough, targeted enough at real consumers.

In most of our working lives, there are tremendous pressures not to be focussed, not to be uncompromising, and not to maintain excellence. The desire to be liked is strong for most. Collegiality is rewarded and welcomed by bosses, colleagues, even business partners. Conversely, motivation and morale can easily be damaged by “unreasonable” leaders. Many people are measured on achieving time-based goals, but uncompromising approaches can take more time, or unpredictable amounts of time. Focus is easily lost in a world of “Can’t we just…”, “Wouldn’t it be nice if…”, and all the other enticing phrases by which defocussing is achieved.

How did Apple avoid all that? Really, it was because of Steve Jobs. He knew the power of saying “This is unacceptable” – or “That sucks!” – and demanding people do better. Equally important, as a returning founder-CEO, he enjoyed untrammeled authority to be focussed, uncompromising, and obsessive about excellence. No-one was going to step in and say “This is too much stress for the team to bear.” And it probably helped that however obnoxious he was after his 1996 return, he was likely less obnoxious than people might have feared given the legend of his first period at Apple. Finally, his famed charisma and persuasiveness helped people identify with him and keep working for him even when he yelled at them.

Contrast that with the changes at HP. In February and March, HP CEO Leo Apotheker declared WebOS central to their strategy, and said the PC business gave them “an immense competitive advantage”. In August, he abandoned both. The justifications were classic business-school-style strategy – HP is getting out of low-margin consumer products, and seeking to build a high-margin enterprise service-and-software business.

Those may – or may not – be wise moves strategically. But whether they are or not, the success of HP is more likely to be driven by whether they can do what they choose to do well, rather than by what they choose to do. Here the signs are not good. Indeed, the very way the PC-division spin-out has been fumbled, in contrast to IBM’s PC-division sale in 2004 through 2005, is strongly suggestive of a lack of focus on excellence. Similarly, observers can’t help but get a sinking feeling over HP’s purchase of Autonomy – HP is getting into a business (software) it doesn’t know much about, it is doing it in a peculiar way, and it will most likely make a mess of it over time – won’t it?

HP CEO Leo Apotheker needs to find the leadership authority, and inner will, to say “That’s unacceptable” – he could even try a “That sucks!” – and to really mean it by following through on demanding something better, or no strategy is likely to prove successful.

Startup CEOs usually have the authority (exceptions being when there are other entrenched founders, or overly invasive investors, or unusually footloose employees) to declare “That’s unacceptable”, and follow through on it. For all the reasons discussed, it can be hard to do. Yet, like it or not, doing it is one of the keys to building a great company.


Security In the Enterprise Public Cloud

We listed the following areas that need to be “solved” to enable enterprise public cloud:

  • Management and monitoring
  • Flexibility
  • Security / privacy
  • Data separability
  • Service separability
  • Redundancy, backup, reliability

Let’s drill into security and privacy.

“Traditional” security within an enterprise is border-and-access-based. The edge of the corporate network is a heavily fortified firewall. Within the enterprise, much data flows over the network unencrypted, “in the clear”. Access to individual servers, data repositories, network devices, or applications and services is provided via a username/password scheme; typically, credentials are centralized in a single sign-on system such as Microsoft’s Active Directory, with group membership and/or roles in the central repository defining each user’s access. Data is typically not encrypted when stored. Physical access to the data is guarded by locked server rooms or locked server cabinets.
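
The traditional model is easy to sketch. In the toy Python below (the users, groups, and resource names are all invented for illustration), a central directory maps users to groups, and each resource’s access-control list names the groups allowed in – essentially what a directory service like Active Directory provides at enterprise scale:

```python
# Toy sketch of directory-based access control (all names hypothetical):
# a central repository maps users to groups, and each resource lists
# the groups permitted to access it.
GROUPS = {
    "finance": {"alice", "carol"},
    "it-admins": {"bob"},
}

ACLS = {
    "payroll-db": {"finance"},        # only the finance group
    "build-server": {"it-admins"},    # only IT administrators
}

def can_access(user: str, resource: str) -> bool:
    # Access is granted if the user belongs to any group on the resource's ACL.
    allowed_groups = ACLS.get(resource, set())
    return any(user in GROUPS.get(g, set()) for g in allowed_groups)
```

The point of the sketch is what is absent: there is no encryption anywhere – the model relies entirely on the network border and on this central lookup.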

Very little of this traditional model can survive the move to the cloud.

While the “border” concept can be emulated by using a virtualized private area in the cloud with virtualized private network tunnel between enterprise network and the cloud, this is quite unsatisfactory – i) it requires users to login to the VPN before accessing services; ii) it prevents access to services on devices that do not have VPN support built-in; and iii) it requires a leap of faith to believe that the virtualized privacy can really be as secure as the physically-based privacy of a conventional corporate network.

Instead, in a public cloud, service will typically be provided from publicly accessible (not hidden behind a firewall) front-end servers; back-end servers, which contain the actual data, will be hidden from public access. There are opportunities for providing fortified-front-servers (fortified against hacking); and for providing the “hidden” backend servers, and seamless configuration of both and of the connections between front-end and back-end.

Client-server access will always be encrypted – unlike within today’s enterprise – usually with SSL. The need for handling massive numbers of enterprise SSL connections, together with load-balancing and other related systems, is going to be substantial. Any security or tracking system that relies on being able to access network traffic will have to be replaced with application-level security features. There are powerful vendors in the access-systems today – F5, Cisco, Juniper – yet it certainly seems possible that there is room for major innovation there, too, in scalability, manageability, and in the ability to efficiently handle the more complex/obscure network protocols used in enterprise IT.

The concept and implementation of identity and access rights need to be reworked for the public enterprise cloud. Facebook, with Facebook Connect, has made real progress with respect to consumer identity – but that is based on Facebook’s proprietary system, and is not designed for the enterprise. A number of consumer services, including Twitter, are using OAuth; but that is only a solution to part of the problem. The “identity/access” problem includes the issue of allowing cross-company access to services when desired – one of the benefits of putting services in the cloud should be to enable people to collaborate more broadly – yet cross-company access would need to be achieved without driving a hole through individual company security policies and systems. Public cloud identity systems would need to be synchronized and/or federated with on-customer-premise identity/access-rights systems, and controlled by customer IT administrators – which, in both cases, is in sharp contrast to consumer identity systems like Facebook’s that are very much run and administered by the cloud service provider.
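
One way to picture the hybrid requirement is a signed-assertion flow, of the kind SAML- and OAuth-style systems use. In this illustrative Python sketch (the shared key, claim names, and five-minute lifetime are assumptions for the example, not any vendor’s actual protocol), the on-premise identity system issues a short-lived signed assertion, and the cloud service verifies it – so the user’s password never crosses to the cloud, and administration of users and roles stays with the customer’s IT:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key provisioned between the enterprise and the cloud provider.
SHARED_KEY = b"enterprise-cloud-federation-key"

def issue_assertion(user, roles):
    # Runs on the customer premises: sign a short-lived claim about the user.
    payload = json.dumps({"user": user, "roles": roles, "exp": time.time() + 300})
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(payload.encode()).decode() + "." + sig

def verify_assertion(token):
    # Runs at the cloud service: check signature and expiry.
    # Returns the claims if valid, or None if tampered or expired.
    body, sig = token.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None
```

Real federation protocols add public-key signatures, audiences, and revocation, but the division of labor is the same: the enterprise asserts identity and roles; the cloud only verifies.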

A public-enterprise-cloud identity system that can handle the complexity of enterprise access rights, role definitions, and hybrid cloud-premise access and administration would be a very valuable asset.

Data encryption of “data at rest” – stored data – is a huge issue for enterprise cloud. System administrators at the cloud provider will inevitably have access to stored data, at least in some circumstances, creating risks of abuse or theft. Encryption is difficult in that cloud applications need to work with the data in decrypted form – yet enabling data access by cloud applications makes the data accessible independently of the enterprise customer. Finally, government authorities can subpoena access to stored data at the cloud provider without the knowledge of the enterprise customer – a so-called “blind subpoena” – whereas if data is stored at the customer premises, the government would have to subpoena the enterprise directly, revealing that some kind of investigation was going on.

Some modest level of protection can be enabled by having the cloud provider encrypt the data, with the cloud provider holding the encryption keys. Applications can use non-reversibly derived keys (which are also what the actual encryption algorithm uses), so that access to an application’s key does not expose the originating key. Access to keys amongst cloud-provider personnel can be more restrictive than access to servers and storage.

As an extension, the customer could hold the originating keys, with only derived keys being transferred to the cloud provider; the derived keys would be used in applications and in data encryption, but the originating keys would be required in any data management tools (tools designed to access the data in the raw), preventing those keys being used by cloud system administrators.
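
The derivation step itself can be sketched in a few lines of Python (the key material and context labels are invented for the example): HMAC-SHA256 serves as a one-way key-derivation function, so a derived key handed to the cloud provider cannot be worked backwards to the originating key:

```python
import hashlib
import hmac

def derive_key(originating_key: bytes, context: bytes) -> bytes:
    # One-way derivation: HMAC-SHA256 cannot be inverted, so possession of
    # a derived key reveals nothing about the originating key.
    return hmac.new(originating_key, context, hashlib.sha256).digest()

# The originating key never leaves the customer premises...
originating = b"customer-held-originating-key"

# ...while per-application derived keys are what the cloud provider receives.
billing_key = derive_key(originating, b"app:billing")
mail_key = derive_key(originating, b"app:mail")
```

Deriving a distinct key per application or context also compartmentalizes damage: compromising one derived key exposes neither the originating key nor any other application’s key.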

Although these kinds of schemes are better than nothing, they are eminently hackable. For instance, a skilled hacker could hack the binaries of the data-management tools to work with derived keys rather than originating keys; and they could do so relatively easily if they could get access to the source code of the tools. Or they could hack the applications to intercept the data at the points at which it is unencrypted by the application.

Many mid-sized businesses would likely be willing to tolerate these hacking risks. With encrypted data in the cloud, the setup is arguably still more secure than the setups mid-sized enterprises are running today on-premise, even taking into account the additional risks of the cloud. A startup that delivered easily deployed, enterprise-holds-the-originating-keys encryption systems in the cloud might find that it has broad appeal.

One further extension to the derived-key approach would be to have the applications dynamically fetch from the customer premises a derived form (or part thereof) of the encryption key (or sequence of keys), and then to non-reversibly derive an actual, final key. Dynamic fetching of part of the key would at least avoid the need to store the key, even in derived form, in the cloud; but it is still vulnerable to hacking of admin tools or of applications.

Even that doesn’t solve the challenges of encryption at large enterprises. For some large-enterprise applications, only genuinely encryption-secured data, in which only the enterprise has in-the-clear access at all times, would be satisfactory.

Genuinely-secured-data would be achievable for cloud-based filing systems – the customer would hold the keys, and file system drivers would be provided for a variety of customer’s client and middle-ware systems that made use of the keys. The cloud-provider would never have access. A startup could be based around the management of those keys and drivers.

The problem, though, is that nothing could be deployed in the cloud to make better use of the data – no search, no processing, no display of data deployed in the cloud could take place, because applications deployed in the cloud would not have access to the data.

The only “complete” solution is to produce a means of application distribution – hybrid cloud-premise applications – in which the portions of the application that require access to the data “in the clear” are performed on the client, or on the customer premises. For instance, for search, an on-premise appliance might produce encrypted search indexes, and those encrypted indexes could be stored in the cloud. Another example: for an intranet server, encrypted web pages could be delivered to clients for decryption; JavaScript applets would run at the client; and any click or other data action would then read/write encrypted material from/to the cloud.
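
The encrypted-index idea can be sketched in Python (the word-splitting tokenizer, the HMAC blinding scheme, and all names here are simplifications for illustration – real searchable-encryption schemes are considerably more involved). The index is built on-premise with a customer-held key, so the cloud stores and matches only blinded tokens, never plaintext words:

```python
import hashlib
import hmac
from collections import defaultdict

def blind(word, key):
    # Blind a search token with a keyed hash; without the customer's key,
    # the cloud provider cannot recover the underlying word.
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(docs, key):
    # On-premise appliance: build an inverted index of blinded tokens.
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for word in sorted(set(text.lower().split())):
            index[blind(word, key)].append(doc_id)
    return dict(index)

def search(index, query, key):
    # Cloud side: look up the blinded query; plaintext never appears here.
    return index.get(blind(query, key), [])
```

Even this leaks access patterns – the provider can observe which blinded tokens match which documents – which is part of why full hybrid schemes require such deep re-architecting.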

Hybrid-cloud deployment to allow full encryption in the public cloud thus requires a major re-architecting of the applications themselves, on both client and server sides. To that extent – because it is so broad – it is problematic for startups, though a startup could produce some of the fundamental technologies. It might be easier for a major application vendor – Oracle, perhaps? Or Microsoft? – to get the necessary breadth of traction.

Overall, we can see multiple startup opportunities – and multiple as-yet unsolved problems – in public-enterprise-cloud security.


Public Cloud for The Enterprise

Last week, there were several articles linking Sony’s security failure (hackers stole 77 million customer credit card records; Sony had to shut the PlayStation Network down for a week; data that should have been encrypted was stored in plain text) and Amazon’s major outage (parts of Amazon’s AWS – their public cloud – were offline for two days; parts of AWS supposedly isolated from one another nonetheless were engulfed in a cascading failure).

How do these – and other – failures create opportunities – especially in enterprise IT?

There is a common assumption, which I broadly share, that many enterprise IT activities will migrate into the cloud. Cloud providers can build at a scale, and with a level of expertise, that is beyond the capabilities of even the largest enterprise, achieving superior cost efficiency, capabilities, and reliability. That’s the theory.

How is it working out?

So far, only a handful of services are delivered from the cloud. The most prominent of these may not seem like “real” public cloud, in that it is a packaged service, not a general-purpose utility cloud; but it might point the way, with cloud APIs, web-hooks and other forms of extensibility enabling packaged services to become as flexible as traditionally-delivered IT.

Google has made some inroads with Google Apps and Google-Mail-for-business, but largely in small tech-oriented businesses. Google Apps is a long way from being mainstream. Larger enterprises find the limited functionality of the applications troublesome, and are convinced by neither the service-reliability guarantees nor the data security. The online and realtime collaboration features are nice, but have in any case been imitated to a significant extent by Microsoft’s incumbent products. Switching costs can be quite high. For most mid-sized or large enterprises, the incentive to move from Microsoft’s Office suite to Google Apps in the cloud is not strong enough.

Amazon’s AWS, together with Rackspace’s competing service, is getting meaningful traction as a public cloud computing platform. The traction, though, is mainly coming from tech companies that are using it to deliver their own packaged cloud services – often, though not always, consumer services. Zynga (online games), Netflix (online video), Engine Yard (Ruby on Rails automation), Foursquare (location-based social service), Hootsuite (social media monitoring and management), Heroku (Ruby cloud), Quora (Q&A site), and Reddit (news links) all use AWS. Some startups use AWS for “internal” IT, and a few large companies are trying it for certain projects. Yet, the mid-sized and large enterprise is still a long way from embracing public cloud as the means of delivering IT.

Such is the appealing-in-theory-but-resisted-in-practice nature of a move to cloud for mid-sized and large enterprise IT that many vendors are trying to reposition their on-customer-premises products as “private cloud.” There may be more to it than positioning in some cases – by bringing standardized architectures, deployment and management inside the customer, some of the benefits of public cloud (efficiency, predictability) may be transferred too – but in general “private cloud” is no more nor less than the next evolution of the corporate data center. As such, it may offer many opportunities for startups; indeed, in many ways it is probably easier to build a business by being evolutionary, rather than pressing for a public cloud revolution.

Yet, in the long term, shouldn’t the benefits of really-public-cloud overwhelm all the resistance? Is that where a mega-business could be built?

For public-cloud to take over, a whole new infrastructure needs to be created – problems that are solved in private data centers today have to be solved differently for public cloud. Here are some key ones:

  • Management and monitoring
  • Flexibility
  • Security / privacy
  • Data separability
  • Service separability
  • Redundancy, backup, reliability

Over the next few days, I will blog some thoughts on these and related issues – what needs to be done, and the opportunities created.

Android + eReaders: First Stirrings of iPad Challenge?

Recently, we looked at whether Android tablets would inevitably “catch up” with iPad in a couple of years; and/or whether some more radical approach to tablets might be possible.

There are signs of both today.

In an interview with the Wall Street Journal, Michael Dell said:

What’s also interesting is Apple’s great success with the iPhone. Android comes along, even greater success. I think you’ll see the same thing on tablets, with enormous numbers of Android tablets with Dell certainly playing a role in that as well.

Interesting for a couple of reasons.

Firstly, he simply doesn’t mention Microsoft as a factor in tablets at all; if Dell doesn’t think Microsoft will have anything to offer in tablets, Microsoft have lost one of their most loyal supporters to the Google/Android camp.

Secondly, Dell has a long history as an assembler of commodity components, innovating in distribution not product. They are the very picture of the mass market. If they believe that, two years from now, the tablet mass market will be Android, then they might just be right.

So much for the “low cost copy-cat” Android challenge. What about tablet innovation?

One way to innovate on product is actually to narrow functionality – or, to put it another way, to focus the tablet on one primary use case, making it unlike the multi-function iPad.

Barnes & Noble just released their latest “Nook Color” update. The Nook Color is a 7″ eReader; with its new update, it gains support for several Android applications, including web browsing with Flash, email, and even the Angry Birds game. 7″ eReaders can be much lighter (the Kindle is only 40% of the weight of an iPad) and much more portable than a 10″ iPad. And by focussing on the experience of reading, with supplemental support for email, the social web, and regular web browsing, they could take a real bite out of the tablet market.

eReading probably isn’t a way in for a start-up – Amazon in particular holds too many of the cards in that game – but it could be one way in which innovative Android-derived tablet products start to unlock iPad’s hold on the market.

Will iPad Dominate Tablets?

Digital Daily carries a report from Goldman Sachs forecasting continued Apple dominance of the tablet computer space.

It is easy to see why Goldman might think so, when you look at the current landscape. The RIM PlayBook will no doubt achieve some sales, but looks set to be a not-ready-for-prime-time disappointment. Existing Android tablets are clumsy and have little prospect of large-scale success; and Google is showing signs of strain in trying to apply their mobile-phone/Android model to tablets, holding back Android’s Honeycomb/tablet-version from full open-source release. Meanwhile, HP/Palm WebOS devices seem a long shot, to say the least.

There are two ways in which the forming “Apple dominates” conventional wisdom might turn out to be false.

One is Android catching up. Android’s whole product strategy is not much more than “copy Apple’s product, use a different, advertising-supported, business model.” It may not excite much admiration, but it is working in mobile phones; could it work for tablets?

Tablets are harder. The range of tasks performed is richer and more complex than on a mobile device. The blocks of time users spend interacting with the device are longer. The challenge of making a user-experience that really coheres is deeper – it is like taking the complexity-management of a fully-fledged computer operating system, and the challenges of inventing a natural-feeling mobile device interface, and multiplying them. Also, the market structure is different – in cellphones, the wireless carriers have major influence, with substantial ability to help fragment the market between different device manufacturers, and well-established device manufacturers to help them do it, to Apple’s disadvantage; all that will be much less of a factor in tablets.

Yet, Android is well funded and determined – give them a year, or 18 months, and surely they have the chance to produce a coherent and effective tablet operating system that could attack Apple from the low-end. At least – they will certainly be able to, unless i) the Android team messes it up; or ii) Apple substantially raises the bar between now and then. After all, the iPad itself was widely criticized as “just a giant iPhone” on its release, and not without reason; producing “a giant Android device” should not be beyond the Android team, given some time.

The second way in which the dominance of Apple’s iPad could be undermined would be if someone introduced a radical, innovative, and better alternative. I suspect this is more likely than most people assume. The iPad is good, but it is, in the end, just a product derived from the iPhone. How a tablet handles multi-tasking, and co-operation between multiple apps; how it handles media consumption; how it handles content production; and how it handles interactivity and participation – potentially, all of these things could be rethought for the tablet form factor (in fact, for both the 10″ and 7″ form factors).

Where could such a major innovation come from? Android’s copy-cat approach seems entrenched, so it hardly seems likely to be them, though I suppose it’s possible another group within Google might take a shot. Microsoft managed some genuine rethinking for Windows Phone 7, but have been struggling with execution beyond the first version – they might not have the stomach now for something so visionary.

Would it work for a startup? It would need to be formidably well funded. A “fork” of Android could be a starting point (much as Mac OS X started from BSD Unix and NeXTSTEP, rather than being built from scratch). It might need to support an existing corpus of apps – probably Android’s – to avoid lack of apps being an insurmountable barrier to initial adoption.

The amount of funding needed, and the amount of work needed to get to version-1 product, would put me off. Yet, the opportunity for innovation here is genuinely large. If we are in the “post PC” era, perhaps we need a post-PC-era-company to be founded and funded to really deliver the changes that tablets promise.


Filed under Tech

Mega-Broadband in U.S.

Comcast, the largest U.S. consumer broadband provider, announced that they are extending their “Extreme 105” offer to all regions, meaning that a 100Mbps-class home Internet connection is now available to most U.S. consumers.

100Mbps is a lot. Enough to stream 20 DVD-quality movies simultaneously; or 5 simultaneous full-BluRay movies.
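As a sanity check on those numbers, assuming roughly 5 Mbps for a DVD-quality stream and 20 Mbps for a full Blu-ray stream (both ballpark figures of mine, not Comcast’s):

```python
# Back-of-the-envelope check of the simultaneous-streams claim.
# The per-stream bitrates are rough assumptions, not measured values.
link_mbps = 100
dvd_mbps = 5      # approx. bitrate of a DVD-quality stream
bluray_mbps = 20  # approx. bitrate of a full Blu-ray stream

dvd_streams = link_mbps // dvd_mbps        # 100 / 5  = 20 streams
bluray_streams = link_mbps // bluray_mbps  # 100 / 20 = 5 streams
print(dvd_streams, bluray_streams)
```

Real-world bitrates vary with codec and content, but the order of magnitude holds.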

Much higher data rates are coming to wireless too, with the gradual spread of 4G / LTE technology.

Some caution is necessary. There are caveats in both cases. For Comcast, the standalone price of Extreme 105 is high – $199 per month, four times the basic broadband price; and even if they drop the price, adoption rates will be constrained by the speed of their DOCSIS-3 rollout. And for 4G wireless, rollout and device support will be gradual and partial. A former colleague used to say that the adoption of a new infrastructure technology always takes as long as you think it will – but multiplied by Pi. That might be true again for mega-broadband.

Nonetheless, there will be substantial startup opportunities created.

Firstly, there are the streaming services themselves – whether for video, for interactive communication, or for anything else. Certainly the ways in which people consume, store, produce, edit and share media may be substantially altered.

Secondly, there are the infrastructure opportunities. For storage, processing, and playback, new streaming systems may be needed, or new forms of efficiency may need to be exploited. For networks, new forms of network virtualization, dynamic reconfiguration, caching, bandwidth optimization, bi-directional bonding, and rights management may be required.

Mega-bandwidth may be slow to roll out fully for a while yet. But when it comes, it may ignite a build-out bonanza for many service and infrastructure suppliers.


Filed under Tech

Cisco and Flip

Cisco announced that they are closing their Flip video business, acquired by Cisco for $590m in March 2009, almost exactly two years ago.

Flip of course was facing significant strategic headwinds even when Cisco acquired them – video recording was integrated into smartphones, which users always carry with them. The opinion in Silicon Valley has long been that the owners of Flip “did a number” on Cisco, taking advantage of Cisco’s apparent naiveté in the consumer space to sell a company that would otherwise have run into trouble.

The legend of how Cisco came to acquire Flip is that a very senior Cisco exec (I won’t say the name, to protect the guilty) was at a house party where all the kids were running around recording each other with Flip cameras. Cisco of course has a long-standing interest in video, as the thing driving the next wave of growth in demand for network capacity and capability. Our Cisco-exec-hero was sure that Flip must be the next big thing in consumer electronics – the next iPhone, if you like – and made the acquisition happen.

Flip really was an excellent product, too – an exercise in minimalism that made recording and transferring video trivially easy. In many ways, it did deserve to be the next iPhone – bringing a high-end capability set to the masses.

What does this all tell us?

Firstly, from the startup perspective, if you face a strategic threat – here, Flip’s functionality was being integrated into other products – and someone offers you a great price, you should sell.

Secondly, products that are based on pure simplicity can be vulnerable to changes in fashion. Making complicated things seem simple – as the iPhone does – can be quite defensible. But making simple things seem simple – which is primarily a matter of getting the over-engineered feature-set out of the user’s line-of-sight – may be a lot more vulnerable to competitive attack, especially if there is no platform-like element to lock the user into the product. iTunes and iTunes music collections arguably provide such lock-in for the iPod music player. For Flip, however, despite the Flipshare effort, in the end the product was just too easy for users to live without, once they had similar functionality in their smartphones.

Thirdly, if you sell to a player who has little competence in your space – for instance, selling your consumer electronics company to a network-hardware vendor – you significantly increase the risk that the business will break down after the acquisition is done. It can happen for a whole bunch of reasons – lack of prioritization; the rejection of a “foreign”-seeming body from within the main business; management processes that don’t and can’t align – but the risk is high. Taking the money is always nice, and often the right thing to do, but any founder needs to weigh the high risk of seeing their “baby” lost at sea.

[Hat tip: ReadWriteWeb]


Filed under Tech

How to Make iTunes Really Sync Multiple Computers With The Cloud

Toronto startup PushLife has just been acquired by Google for $25m. Hardly a huge outcome, but probably quite profitable (their website said they had an “impressive list of venture capital and angel investors”; I could identify only one publicly, the angel investor Duncan Hill).

ReadWriteWeb has a reasonable speculation on what may have motivated Google’s purchase.

We previously discussed some of the reasons why music might stay in the form of a music collection, rather than transform itself into a streaming music service.

In fact, it is possible today to sync a music collection to the cloud, and thence to synchronize it between multiple computers, simply using iTunes and any generic cloud file-syncing and file-storage system, without the need for any special applications or services such as Amazon’s Cloud Player.

For a bit of a change from our normal startup topics, here’s how to synchronize an iTunes library with Dropbox (the cloud file-sharing service) and as many computers as you like.

First, move both the media files and the library files to Dropbox on the main Mac (you could do the analogous things on a PC, though I haven’t tried that myself):

  • Look under iTunes -> Preferences -> Advanced, for the iTunes Media Folder Location. You’ll see something like: /Users/<your-user-name>/Music/iTunes/iTunes Music
  • Quit iTunes
  • Create a new folder called “Music” inside your Dropbox folder. For instance, the new folder might be: /Users/<your-name>/Dropbox/Music
  • Drag (move) your existing iTunes folder (e.g. /Users/<your-name>/Music/iTunes) into the new Music folder
  • Restart iTunes, then under Preferences -> Advanced, change the Media Folder to /Users/<your-name>/Dropbox/Music/iTunes/iTunes Music (or equivalent in your case)
  • Quit iTunes
  • Delete the default (now effectively empty) library files iTunes just created in /Users/<your-name>/Music/iTunes
  • Holding down the alt/option key, restart iTunes.
  • You will be prompted to choose a library; navigate to /Users/<your-name>/Dropbox/Music/iTunes and choose the “iTunes Library” file (or else just double-click on the iTunes folder).
  • The Library shows up in iTunes
  • Quit iTunes and restart it – it will remember the new library location, and the library will show up and can be played as normal
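The heart of the steps above is a single folder relocation. Here is a minimal Python sketch of that move, using stand-in temporary directories so it is safe to run anywhere; on a real Mac the source would be /Users/<your-name>/Music/iTunes, the destination /Users/<your-name>/Dropbox/Music, and iTunes must be quit first.

```python
# Sketch of the "move iTunes into Dropbox" step, using stand-in
# directories instead of your real home folder. The path names
# mirror the defaults described in the steps above.
import shutil
import tempfile
from pathlib import Path

home = Path(tempfile.mkdtemp())  # stand-in for /Users/<your-name>

# Simulate the default iTunes layout and an existing Dropbox folder.
(home / "Music" / "iTunes" / "iTunes Music").mkdir(parents=True)
dropbox_music = home / "Dropbox" / "Music"
dropbox_music.mkdir(parents=True)

# The move itself -- equivalent to dragging the iTunes folder
# into Dropbox/Music in the Finder (with iTunes quit).
shutil.move(str(home / "Music" / "iTunes"), str(dropbox_music))

# This is the media folder iTunes should then be pointed at
# under Preferences -> Advanced:
new_media = dropbox_music / "iTunes" / "iTunes Music"
print(new_media.is_dir())
```

The Finder drag does the same thing; the sketch just makes explicit which folder ends up where.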

Allow Dropbox to fully sync from your primary computer to the cloud, and thence to your secondary machine(s). This may take quite a while – for instance 24 or 36 hours – depending on the size of your music library and the speed of your Internet connection.

Once synced, on your secondary computer (or additional machines):

  • Open iTunes, under Preferences -> Advanced, change the Media Folder to /Users/<your-name>/Dropbox/Music/iTunes/iTunes Music
  • Close iTunes
  • Hold the alt/option key and start iTunes; you’ll be prompted to find a library
  • Navigate to /Users/<your-user-name>/Dropbox/Music and double-click on the iTunes folder
  • Close iTunes and delete the contents of /Users/<your-user-name>/Music/iTunes; it’s not needed any more
  • Start iTunes again
  • The library will appear and be playable

Subsequently, new music purchases, playlists, additions to artwork, etc. will sync between the computers (this is much richer syncing than the Apple home sharing system, by the way…). And all your music is backed up to the cloud.

Of course, you can still synchronize music to your mobile devices as always, using iTunes sync for iPod/iPad/iPhone, and something like TuneSync for Android devices.

No doubt PushLife were doing more than this sync. Or else, they are a good advertisement for packaging something up into an easily-digestible, user-friendly solution. Congratulations to them in either case.


Filed under Tech