The holiday season is wonderful, but it's also the time when bloggers, editors, pundits, psychics and all of us slightly- to completely-uninformed people issue predictions about what is going to happen in 2010. I just read a few of the predictions for 2010 from Danny King of Video Business, and I had a few, uh, disagreements.
For example, King thinks that it's a foregone conclusion that Amazon will buy Roku. Not very likely, given that the Roku set-top box was designed by Netflix and was originally supposed to be a Netflix-branded product. I'm sure that Netflix still has first dibs on the product, and perhaps on Roku itself. He also predicts that TiVo will be sold to Best Buy. Huh? TiVo is turning into an audience- and advertising-research company; TiVo's DVR sales and market share continue to decline, and Best Buy buying the company wouldn't change that. I think that both companies will end 2010 as independent entities, but if I had to come up with likely purchasers, I'd vote for Cisco for Roku and Google for TiVo.
He also thinks that Redbox will do a deal with Starbucks, but only if they come up with kiosks that do digital downloads (to thumbdrives or SD cards, I presume). Starbucks tried it with music and it didn't work, so why would they think that it would work any better with video? My personal opinion is that the digital download kiosk model being pursued by Blockbuster and MOD Systems will be dead on arrival. Why would I drive to Starbucks to load a video onto a thumbdrive when I can download it over the Internet to my PC or stream it to my Netflix- or Amazon-equipped set-top box or Blu-ray player?
Another of his predictions is that NCR will drop out of the video kiosk market, just after the company acquired DVDPlay; his logic is that NCR is number one in ATMs and doesn't want to be number two in video kiosks. True, they don't want to be number two--that's why they purchased DVDPlay and partnered with Blockbuster. There is certainly room for more than one company in the video kiosk business, and I think that NCR will stay in.
I've learned not to make end-of-the-year predictions, because too much can happen too quickly to anticipate. I will state a hope for 2010, however, and that is that the economy recovers, so that the millions of people without jobs can find work, and the millions who are underemployed can find full-time work and better opportunities. I hope that you have a happy holiday season and a healthy, loving and comfortable 2010 for you and your family.
Monday, December 21, 2009
Saturday, December 19, 2009
If you want a better tool, build it yourself
I just finished reading a great post on Gizmodo by Frank Beacham about Orson Welles's last project and his fascination with the then-new Sony Betacam. Welles took one look at the first professional camcorder and intuitively understood what its impact on video and film production would be. Beacham's article brought to mind how many great filmmakers over the years were also technologists: Not only did they use the tools available, but they helped to design them (or actually designed them themselves). The recent list includes:
- Francis Ford Coppola, who partnered with Sony for many years and was one of the first to apply video to motion picture production
- George Lucas, who developed one of the first computer-based non-linear editing systems, owned Pixar (and still owns ILM, Skywalker Sound and Lucas Digital) and pushed the limits of digital production, post-production, computer animation and special effects
- Garrett Brown, a cinematographer who invented the Steadicam and Skycam and changed the way that both motion pictures and sports television look
- James Cameron, who first pushed the envelope of computer graphics and special effects, and later, with his partner Vince Pace, created the Fusion 3D system that's helping to make 3D a core production and display technology
If you want a better tool, build it yourself.
Tuesday, December 15, 2009
Where should you locate your startup?
I monitor the Lean Startup Circle group on Google, and a member asked for some suggestions on how and where to find a contract development team. The discussion quickly turned to relocating to where the team is (the member who asked for advice was in Denver); Silicon Valley came up a few times, and one person even suggested relocating to India. My suggestion was to stay right where he was, find a qualified developer locally to run development (I found some good resources in the Boulder area, and other members in the area offered their help), and go from there.
So, where should you locate your startup? (I'm assuming that your business will be technology-based.) If your customer base is concentrated in one geographic location, the answer is simple--go where your customers are. However, if your customers are spread out all over the place, should you stay put or move? It depends on who you (and your partners) are and what your expertise is. If you have the experience to develop at least a portion of the product or service yourself, and you're comfortable managing a development team, you can locate wherever you're comfortable and where you can find the other business and technical resources you'll need.
If you're not a developer or engineer, you need to have at least one person on your team who can run development. That person should be a full member of the team, not a contractor or consultant. No matter how good or committed a contractor is, they're always thinking about the next client and the next project. The work is always done better when the person who does it has skin in the game. It may not be as hard to find that person as you think. If you live in or near a major city, there are always developers that might be interested, or who might know someone qualified who would be interested. Search on Google with your city's name and terms like "startup" and "venture" to find local groups and events where like-minded people congregate, or use a service such as Meetup.com.
That's fine, you say, but why not move to Silicon Valley? I spent more than 25 years living and working there; I consider it my home. You'll find experts in just about every skill set you can imagine. I love the weather: not too hot, not too cold, and you're no more than a few hours from the beach or the mountains. Now for the downside: Silicon Valley is an incredibly expensive place to live, work and run a business. I would easily have to pay 50% more than I pay now for a condo comparable to the one I rent in a suburb of Chicago. If I wanted to buy a home, I'd pay at least three times as much in Silicon Valley for a home with comparable square footage and yard space. Taxes are very high, yet the quality of schools is poor, and parents pay big premiums to live in cities that have good schools, such as Los Gatos and Palo Alto.
Just about everything else is more expensive as well: Food, gasoline, utilities and so on. Office space is much more expensive. People have to earn more money in order to have a decent quality of life, so salaries are much higher. It all adds up to a much higher burn rate than in other, less expensive places to live.
I've never understood why venture capitalists push their investments to move from lower-cost areas such as Texas and Chicago to Silicon Valley. Yes, investors can keep closer tabs on their investments if they can drive over to them, but airfare is truly not that expensive, and teleconferencing is effectively free. If I could run my business successfully at a 30% to 50% lower burn rate simply by staying right where I am, why would I move?
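The burn-rate argument is easy to make concrete. Here's a quick sketch of the runway arithmetic, with purely hypothetical funding and burn figures (none of these numbers come from a real company), showing how the same seed round lasts noticeably longer at a 40% lower burn:

```python
# Illustrative runway comparison: same funding, two burn rates.
# All dollar figures are hypothetical, chosen only to show the arithmetic.

def runway_months(funding, monthly_burn):
    """Months of operation before the money runs out."""
    return funding / monthly_burn

funding = 500_000                   # hypothetical seed round, dollars
valley_burn = 60_000                # hypothetical monthly burn in Silicon Valley
elsewhere_burn = valley_burn * 0.6  # a 40% lower burn in a cheaper city

print(round(runway_months(funding, valley_burn), 1))     # ~8.3 months
print(round(runway_months(funding, elsewhere_burn), 1))  # ~13.9 months
```

Same money, roughly five and a half extra months to find product-market fit, without raising another dollar.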
Labels:
Consultant,
contractors,
location,
Silicon Valley,
startups,
Technology,
Venture capital
Monday, December 14, 2009
The "Google Phone" (HTC Nexus One) Begins to Make Sense
A short time ago, Engadget posted part of the FCC certification for the HTC Nexus One, the phone that Google mass-distributed to its employees last weekend. There's been a lot of speculation that the GSM-compatible phone would be sold unlocked by Google (meaning that in the U.S., it would work with AT&T and T-Mobile). Frankly, a lot of the story didn't make sense--why would Google start competing with its biggest distributors just as Android started getting market traction?
The FCC certification shows that the Nexus One will work on a variety of international GSM networks, but in the U.S. it will only get 3G data on T-Mobile--AT&T customers can use it as a phone, but data speeds will be limited to EDGE. And now the story begins to make sense. T-Mobile has been Google's primary partner in the U.S. since the launch of the first Android phone, the G1.
So, here's my speculation: Google is going to sell the phone, and technically, it will work on either T-Mobile or AT&T, but there will be a special T-Mobile account just for the Google Phone. It will be based on T-Mobile's pay-as-you-go pricing, and it can be considerably less expensive than T-Mobile's subsidized contract plans because T-Mobile isn't subsidizing the price of the phone.
Google will, in my opinion, subsidize the price of the phone, because the user will be locked into a suite of advertising-supported Google functions that work anywhere, even on WiFi, and even if the Nexus One doesn't have any GSM SIM card at all. (Yes, that means that Google Phone users will be able to take advantage of Google Voice wherever there's an open WiFi hotspot.)
T-Mobile won't be threatened by the Google Phone, because they'll be the preferred broadband voice and data service. Verizon won't be threatened, because the T-Mobile 3G network is even less well built out than AT&T's. Sprint has a foot in just about every camp, and they're becoming less of a market factor every day. AT&T is hostile to Android, so there's no reason for Google to play nice with them. Perhaps most importantly, Google has a chance to dramatically increase market penetration of Android phones and the appeal of the Android platform to developers, and they'll move a lot more mobile advertising inventory.
Wednesday, December 09, 2009
The inside scoop on buying market research reports
Whether you're a marketer, product manager or investor, you can't help but see articles about newly released market research reports in the trade press, on websites and in blogs. These reports could be a gold mine of information about your customers, vendors, competition or the market in general. However, there are lots of reports and publishers to choose from. Should you consider them at all, and if you do, how do you choose?
I earned my living for a few years writing these market research reports (sometimes called "syndicated" reports, because they're not sponsored by or paid for by a single company). In that time, I learned a lot about how the business works, what to look for and what to avoid.
The first rule is that syndicated research will never replace talking to your current and potential customers directly. You can often learn more from your customers, for less money, by talking to them yourself. However, if you need research that would be too expensive or too impractical to gather yourself, or if you're starting a new business and you don't have a customer base to talk to, syndicated research may make sense.
You may be able to find the information you need for free. If you're looking for demographic and economic data and forecasts, national and state governments publish a wealth of information that's almost always available for free, online or at a local library. For U.S. statistics, USA.gov is a great place to start.
Just about every research company sends out press releases with valuable statistics and findings when they release a new report. A table of contents is also usually available from the company's website, which will give you a good idea of what the report actually covers.
If the report is newsworthy enough that the researcher who wrote it is interviewed or quoted in an article, they usually reveal additional facts not included in the press release. By starting with the press release (which you can get from the research company's website, PR Newswire, Business Wire, Google, Bing and many other sources), and then searching for other mentions of the researcher/analyst or research company, you can start fitting together a picture of the report like pieces of a puzzle. Even if you can't get all the answers you need this way, this technique is a great way to narrow down the list of reports that might give you what you're looking for.
Let's assume that you've now got a list of two or three potential reports that might serve your needs. How do you choose? Here are a few guidelines:
- It's common to look at the size and price of the report, and go for the "best value": A 300-page report priced at $1,500 costs $5 per page, while a 60-page report priced at $600 costs $10 per page. What matters is the quality of the information and how applicable it is to your needs, not the page count or the cost per page. Many reports are padded out with duplicate information, boilerplate tables and data that you could find by yourself with a few hours of work.
- Most research publishers list biographies of their analysts and managers on their website. Some research firms hire freelancers to write reports, and their information may not be as readily available. Try to find out who actually wrote the report you're interested in. Search for them on Google, Bing, LinkedIn and other sites, and see how much experience they actually have in the area they're researching. It's not uncommon for reports to be written by researchers fresh out of college, or who came from a completely different market area. That doesn't necessarily disqualify either the researcher or the report, but it should definitely be a question mark.
- The age of a report is very important. Research reports generally have short shelf lives, especially in industries and markets that are changing rapidly. Even in markets where change historically has happened much more slowly, it only takes one major disruption to invalidate a shelf of reports. A report written in 2007 forecasting the U.S. automobile market for the next five years would look like a description of an alternate universe today. I'd be skeptical of the accuracy and relevance of any market research report that's more than two years old.
- If the report that you're interested in is part of a series, see if the publisher will send you a back copy for evaluation before you buy. A report that's a few years old shouldn't be used for planning, but it will tell you a lot about the kinds of information that you can expect from the current edition. Ask the publisher if the researcher/analyst, methodology or organization of the current report has changed since the old version was released. High turnover of researchers and analysts can be a red flag.
- Ask the research company for a current list of clients. Most companies pad their lists with clients they had a few years ago, but have since left. Compare the current client list with the list on their website. If there are a lot of names that are on the website list but not on the current client list, that's another question mark--why did those clients stop buying reports or services?
- Are the research company's clients mainly product/service vendors or end users? Research publishers target their reports to their primary customer base. End users generally demand greater accuracy and impartiality, while vendors are interested in making their products look better, and both their sales and the overall market opportunity look larger, than they may actually be.
- If the report you're interested in includes forecasts, it's important to get previous years' forecasts from the publisher and compare them with what actually happened. If a research company primarily sells to vendors, their forecasts are likely to be higher than what actually happened. Optimistic research reports sell; pessimistic ones (generally) don't.
- The market research company should be willing to share its methodology with you. How many end users or vendors did they interview, how did they conduct the interviews and gather the data, and how did they compile the data? They may not be willing to share everything with you for proprietary reasons, but they should be able to answer the questions above.
- The size of the research firm is not a good indicator of the quality of the report. Bigger firms don't necessarily do better work. One major firm recently released a study that purported to tell end users which "white box" video content management services vendor would be right for their needs, but they cherry-picked six vendors for comparison, then were forced to issue clarifications and explanations as to why they limited their report to those six vendors. A few weeks later, the vendor who was picked as number one in the report did a webinar with the research company to sell its services. Whether or not there was a business relationship between the winning vendor and the research company before the report was released is immaterial; by partnering up to promote the findings of a clearly flawed study, both the research company and the vendor were tainted. (By the way, the report is still on sale for $1,750. Caveat emptor.)
Tuesday, December 08, 2009
What does "Pivoting" really mean?
Lean Startup and Customer Development are techniques/processes being used by a lot of startups, especially software and services companies. A term commonly used in both techniques is "pivoting". It means that the company changed direction--it was developing a floor wax, which no one wanted, so it "pivots" to develop a dessert topping. Gratuitous SNL reference aside, what usually happens is that the company developed a product with one feature set, then learned that what customers really wanted was a different feature set. In forums on the Web, I often read about companies that pivoted, sometimes three or four times.
What pivoting really means is "We got it wrong." Don't take this the wrong way--I've gotten it wrong many times in my career--but often, a company could have avoided pivoting if it had done more homework upfront. So, how do things go wrong?
- The team understands technology but not the market: They spot what looks like an opportunity, but they don't really understand the domain all that well, so they define a product or service that looks good to them but not to their target customers.
- They talk to customers but don't listen: Even when a startup sets out to talk to customers, they dismiss negative feedback--perhaps their product is "too advanced" for the customers they're talking to, or the problems that customers are expressing aren't really problems, or frankly, their customers are stupid.
- They don't ask the right questions, or they don't ask them in the right way: Large and small companies alike distribute huge, complex, poorly organized surveys that scare off respondents, get low response rates and are too small a sample to be representative of their target customer base. Or, they use focus groups, which have their own set of risks and are often used the wrong way (the company wants to get projectable results when they actually get impressions from a small subset of customers).
- They copy what competitors are doing: The team starts with an existing product or service and then does a variation--cheaper, faster, or more features. That assumes, however, that the competitive benchmark is actually successful or has features worth replicating. It may turn out that the competitive benchmark isn't successful, and the company ends up replicating a failure.
Labels:
Customer Development,
Lean Startup,
Small Business,
Start Up
Friday, December 04, 2009
Comcast/NBC Universal: It's Not AOL Time Warner
Comcast's acquisition of 51% of NBC Universal from GE has been derided by some observers as the second coming of the AOL-Time Warner deal--two big media companies merging with few real synergies. On the contrary, I think that it's a very good deal for both companies--but it's not without risks.
AOL was "circling the drain" before the merger with Time Warner--subscription rates were flattening out, while churn and subscriber acquisition costs were increasing. The company was hard-pressed to find growth, so it instead engineered one of the dumbest mergers in U.S. history, getting one of the biggest media companies in the world to essentially give itself to AOL. (Let's be clear...the merger was dumb for Time Warner but brilliant for AOL.)
By contrast, NBC Universal is in far better shape than AOL was. NBC's broadcast network is a mess, and the Universal movie studio is questionable (as it's been ever since MCA was acquired by Panasonic years ago), but its cable networks are generally strong, well-run and profitable. It's the cable networks that formed the primary reason for Comcast's interest.
The FCC is almost certainly going to require Comcast to either divest NBC's owned-and-operated television stations in markets where Comcast has cable systems (in Chicago, Philadelphia and Washington, D.C., among other cities) or its cable systems in those same markets. I suspect that it's the television stations rather than the cable systems that will be sold off.
Antitrust arguments against the merger are going to be a lot harder to make; for years, Time Warner owned Time Warner Cable (the second-largest cable operator), a movie studio and a collection of cable networks at least as powerful as those of the Comcast/NBC Universal combination without running afoul of antitrust regulators. Comcast has already pledged to make NBC Universal's cable networks available to competitors. The deal is likely to get done without major concessions beyond those required by the FCC.
The NBC television network can be fixed; it fell from first to fourth place in little more than a year, and one or two years of strong program development could turn things around. (To do so, however, Comcast will have to get Jeff Zucker and his cronies away from the network and install a new programming team.) Universal is a bigger problem, in that Comcast will be its sixth owner in less than 20 years, and no one in that time has figured out how to return the studio to success. The solution may be to sell off Universal in parts, keeping its library and selling off the ongoing studio operations.
NBC Universal's digital assets have been called a key reason for the deal, but I think that they're clearly the tail in this deal, not the dog. The most important digital asset is Hulu, but NBC Universal is a minority owner. Comcast will get a seat at the table, and Hulu will get to play in the TV Everywhere initiative, but it's not going to negate News Corporation's and Disney's interests.
I've learned from my own sources that Comcast is working on its own low-cost, Roku-style set-top box to make its Xfinity service available on television sets without having to replace millions of existing set-top boxes. This could become the "official" mechanism through which Hulu will get to television sets.
In short, this deal makes sense for both Comcast and GE: Comcast gets control of a treasure trove of content, decreases its costs for distributing some of the most popular cable channels (they become internal transfer costs instead of outright expenses) and gets partial ownership of the Internet video distributor that poses the biggest risk to cable operators. GE gets out of the entertainment business without taking a financial bath, and can focus on industrial, medical and financial areas. The merger will almost certainly go through.
AOL was "circling the drain" before the merger with Time Warner--subscriptions rates were flattening out, churn was increasing, as were subscriber acquisition costs. The company was hard-pressed to find growth, so it instead engineered one of the dumbest mergers in U.S. history, getting one of the biggest media companies in the world to essentially give itself to AOL. (Let's be clear...the merger was dumb for Time Warner but brilliant for AOL.)
By contrast, NBC Universal is in far better shape than AOL was. NBC's broadcast network is a mess, and the Universal movie studio is questionable (as it's been ever since MCA was acquired by Panasonic years ago), but its cable networks are generally strong, well-run and profitable. It's the cable networks that formed the primary reason for Comcast's interest.
The FCC is almost certainly going to require Comcast to either divest NBC's owned-and-operated television stations in markets where Comcast has cable systems (in Chicago, Philadelphia and Washington, D.C., among other cities) or its cable systems in those same markets. I suspect that it's the television stations rather than the cable systems that will be sold off.
Antitrust arguments against the merger are going to be a lot harder to make; for years, Time Warner owned Time Warner Cable (the second-largest cable operator), a movie studio and a collection of cable networks at least as powerful as those of the Comcast/NBC Universal combination without running afoul of antitrust regulators. Comcast has already pledged to make NBC Universal's cable networks available to competitors. The deal is likely to get done without major concessions beyond those required by the FCC.
The NBC television network can be fixed; it fell from first to fourth place in little more than a year, and one or two years of strong program development could turn things around. (To do so, however, Comcast will have to get Jeff Zucker and his cronies away from the network and install a new programming team.) Universal is a bigger problem, in that Comcast will be its sixth owner in less than 20 years, and no one in that time has figured out how to return the studio to success. The solution may be to sell off Universal in parts, keeping its library and selling off the ongoing studio operations.
NBC Universal's digital assets have been called a key reason for the deal, but I think that they're clearly the tail in this deal, not the dog. The most important digital asset is Hulu, but NBC Universal is a minority owner. Comcast will get a seat at the table, and Hulu will get to play in the TV Everywhere initiative, but it's not going to negate News Corporation's and Disney's interests.
What I've learned from my own sources is that Comcast is working on its own low-cost, Roku-style set-top box to make its Xfinity service available on television sets without having to replace millions of existing set-top boxes. This could become the "official" mechanism through which Hulu will get to television sets.
In short, this deal makes sense for both Comcast and GE: Comcast gets control of a treasure trove of content, decreases its costs for distributing some of the most popular cable channels (they become internal transfer costs instead of outright expenses) and gets partial ownership of the Internet video distributor that poses the biggest risk to cable operators. GE gets out of the entertainment business without taking a financial bath, and can focus on industrial, medical and financial areas. The merger will almost certainly go through.
Sunday, November 22, 2009
Cable Networks 2.0 (or 3.0)
The cable network model that we all know, which was based on the broadcast television model that we all know, is this: A centralized organization acquires programming, schedules it and distributes it to affiliates (broadcasters) or cable operators. The network produces some of its own programming (or most of it, if it's a news or sports channel), but it acts primarily as an aggregator, scheduler and distributor.
It was a wonderful model for 1925, or 1949, but it's completely obsolete today. It was based on the technical limitations of the dawn of the radio and television eras, limitations that no longer exist. It was effectively impossible to have a two-way conversation between media creators and consumers prior to the Internet and broadband speeds. Now, we've got the means for that two- (or N-) way dialog. The cost of production and distribution is a tiny fraction of what it was even thirty years ago, which was in turn far less expensive than what was being done in the 1960s. YouTube...well, you know all about YouTube, and Vimeo, and Dailymotion, and, and...
My point is that the one-way, centralized network model is obsolete. I don't believe that a new, one-way network will be successful. Future cable networks will have to bake an open, two-way model into their architecture from the very beginning. What does that mean?
It means that the network becomes more of a curator than an all-powerful programmer. It selects and makes available content from external producers, internal teams and viewer/producers (since viewers can now easily be their own producers). It also enables viewers to curate their own programming and make their own selections.
The production process will become far more distributed. Viewers with a few thousand dollars and a high degree of patience can create content that looks as good as anything seen on broadcast television or cable. Field production is simple; it's done thousands of times a day. Studios can be built and sent anywhere. A shipping container can be turned into a perfectly functional television studio. Put it on a quick-and-dirty foundation and you've got a permanent studio. If you want room for an audience, there are a lot of older movie theaters out there being underutilized or gathering dust. Extend the stage, put in LED and fluorescent lights to keep the heating load down, and voila, instant studio!
You may argue that this model has already been tried, at Current, and it hasn't worked very well: Current TV just laid off 80 staffers, shut down production on some shows, and is consolidating two Los Angeles facilities into one. However, the problem wasn't with the production model, it was with trying to fit that model into a conventional cable/broadcast channel. The most-watched shows on Current have been InfoMania and SuperNews: Fairly conventional (from a structural point of view) 30-minute productions that viewers can find easily and that are repeated many times during the week. The bulk of Current's airday has been taken up with brief, four-to-eight minute videos, many of which are submitted by viewers. The problem is that it's been impossible to know exactly what's going to be on when. If you happen to tune in when they're showing a video that's engaging, you're likely to stick around for a while, but if you don't like what you see when you first tune in, you're unlikely to wait around for something better.
The problem with Current TV is that it's programmed from the top down, just like any other cable network, even though viewers contribute a lot of content. Current also has a web presence that allows a more egalitarian approach to programming (in other words, watch what you want, when you want), but with serious limitations: Cable operators prohibit Current.com from running its on-air feed live, or from making programs available prior to their airdates.
That brings me to my last point: The cable network of the future will reside primarily on the Internet, not on cable. So long as the cable operators can dictate terms of when and where programming can be shown, no cable network can become a truly two-way operation. That's why Current is struggling, and why Hulu is only a shadow of what it could be.
In the future, the cable network will be equivalent to the "curated feed", but the open ecosystem will reside on the Internet.
Labels:
Broadcast,
Broadcasting,
cable television,
Dailymotion,
Networks,
Television,
Television network,
Vimeo,
YouTube
OWN: DOA?
Last week, Oprah Winfrey bid a tearful farewell to her show, telling viewers that she will end her run on syndicated television in 2011. She said that the reason she's leaving is that 25 years are enough, but plenty of industry scuttlebutt contradicts her. Discovery Networks has been working with her on a new cable network, the Oprah Winfrey Network (OWN), which will replace Discovery Health in 2011. The launch of OWN has already been delayed twice, and there's been a revolving door in the management suite, with most executives only lasting a few months.
The oft-repeated rumor is that David Zaslav, President & CEO of Discovery, read her the riot act: Bring her show to OWN and help to get the operation under control, or lose the network. The first shoe dropped a couple of weeks ago, when she sent some of her top managers to take over key positions in OWN, and the second shoe dropped last Friday, with her announcement that she'll be leaving her syndicated show. Zaslav's ultimatum may not have been THE reason why she's leaving her syndicated show, but it's a pretty good reason nonetheless.
The question is, what is she going to? Women's networks on cable have faced an uphill struggle: Oxygen, in which Winfrey was a partner, launched with high audience expectations that were never met, and ended up being sold to NBC Universal. Will OWN fare any better? Unless it pursues a radically different model than that of today's cable networks, it won't. That's the topic of my next entry.
Saturday, November 21, 2009
One Reason Why the Movie Business Is In Bad Shape
Earlier this week, Peter Sciretta of Slashfilm and Jason Kottke of kottke.org reported that only two of the top 30 films of the decade were based on original material: Disney/Pixar's "Finding Nemo" and DreamWorks Animation's "Kung Fu Panda". In fact, only nine of the top 50 films were based on original material, and five of them were from Pixar! Everything else was based on an existing motion picture, novel, comic book, or in the case of the "Pirates of the Caribbean" series, a theme park ride!
The movie business has been on a tear, with ever-increasing budgets fueled by DVD revenues. To mitigate the risk of failure for those stratospherically-budgeted movies, studios produce films based on known properties. Original works get crowded out or pushed to the studios' "independent" arms, which operate on relatively small budgets and get very little promotional support unless a movie has Academy Award potential.
The result is like never eating a meal that you haven't already eaten. Yet Pixar, which has only created one sequel in the history of the studio, has the best track record of any studio in terms of average revenue per movie. Is the lesson here to create fewer but better, more original, movies? Something to think about.
Tuesday, November 17, 2009
Disintermediation, again
Kim Masters was on NPR's "Morning Edition" this morning talking about the concerns that motion picture exhibitors (theater owners) have about the movie studios' plans to change their "release windows". Release windows are the order in which movies are released to different channels, and how long each channel has exclusivity. The issue is a seemingly innocuous request made by the studios to the FCC for "selectable output control" on set-top boxes, Blu-Ray players and other devices. Selectable output control would allow the studios to control whether, when and how much a movie or other video program could be played on a compatible device.
Consumer groups and consumer electronics vendors oppose selectable output control because the studios could use it to prevent their content from being recorded on DVRs and other devices. Now, the National Association of Theater Owners (NATO) has filed opposition to the studios' request because they fear that the studios will use selectable output control to make movies available in the home at the same time that they're in theaters.
I don't support selectable output control because it takes away consumer choice and negates thirty years of progress in consumer electronics since the Betamax decision, but the theater owners' opposition to the studios is more an effort to hold back the ocean than a friendly, consumer-oriented action. Extremely few movies make money in theaters today; theatrical distribution is most valuable for promoting films for future sale as DVDs and Blu-Ray discs. As the sales of DVDs erode and Blu-Ray fails to pick up the slack, the studios are forced to look at online digital distribution as a viable alternative. However, for digital distribution to generate the kind of revenue that physical media does, the movies have to be available much sooner, and that means cutting into the theaters' release window.
Whether or not the studios get selectable output control, theaters' release windows are going to erode; it's just a matter of when and how. When theater patrons are forced to go through metal detectors and hand over their cellphones before going in to watch a movie (as shown on a recent edition of CBS's "60 Minutes") in order to prevent piracy, theaters are not long for this world.
The studios are looking for any and every way to increase revenues, including cutting out the middleman, even if the middlemen are movie theaters. The same thing is happening with broadcast television. Comcast is close to buying 51% of NBC Universal from General Electric, which will give it control of the company. NBC has done such a superb job of running its broadcast network into the ground that it may become the first broadcast network to become a cable network. CBS and ABC are beginning to demand a cut of the retransmission payments that cable operators have to pay broadcast stations for the right to transmit their programming. Those retransmission payments have become the only thing keeping some stations on the air in this recession. Add to that the practice of some networks demanding "reverse compensation" from stations: Instead of paying stations to carry the networks' programming, the networks demand that the stations pay for the right to carry the programming.
Network television affiliates are an endangered species, because the networks can make more money, at lower cost, by dealing directly with the cable and satellite operators. For Comcast, the deal becomes almost a no-brainer, since it would control both the network and the cable systems. It will have to sell off the NBC owned-and-operated stations in cities where it has cable systems; the next step would be to take NBC to cable in city after city as network affiliate agreements expire.
Fifteen years ago, we were talking about disintermediation in retail and wholesale distribution brought about by the Internet; now we're talking about it again, this time in media. The future of theatrical motion picture exhibition and free broadcast television hang in the balance.
Update, December 5, 2009: According to the December 2nd edition of the Chicago Sun-Times, a patron at the Muvico Theater in Rosemont, IL was arrested and spent two nights in jail for videotaping four minutes of "Twilight: New Moon." She claims that she was actually videotaping her sister's birthday party at the theater, and the video on her camera (a still camera that records video segments) supports her contention. Nevertheless, the theater's managers pressed charges against her under a little-used law designed to punish film bootlegging. She faces up to three years in prison. I've lost whatever sympathy I still had for theater operators after this mind-boggling incident.
Sunday, November 08, 2009
Get Your Products Out
The new generation of high-quality, video-enabled DSLRs is thoroughly changing both the high-end still camera and camcorder markets. It's now inconceivable for a manufacturer to release a $1,000+ DSLR without some sort of HD video capability. Even Sony, the lone holdout, is rumored to be biting the bullet on November 18 with an HD-capable DSLR that will compete with the Panasonic GH1. (Technically, the GH1 isn't a DSLR, it's a Micro Four Thirds camera with no mirror or optical viewfinder, but it does everything that a DSLR does.)
The ironic thing is that RED, the company that made cinematic video production much more affordable with the RED One, first identified the need for print photojournalists to be able to shoot competent video without having to carry two cameras. It rechanneled its development effort for the Scarlet, which was originally supposed to be an inexpensive, handheld 2K camcorder, into a video-capable DSLR. That was almost two years ago, and not only has RED not yet shipped the Scarlet, it hasn't even provided a definitive list of features or release date.
When the Scarlet comes to market, it will have to compete with a variety of products from virtually every major DSLR manufacturer, at price points ranging from around $1,000 to over $5,000, with a huge range of capabilities. While Nikon, Canon, Panasonic, etc. weren't educated about the market opportunity solely by RED, it didn't do RED any good to tell its competitors so early about what it was doing. In my opinion, the early announcement was sheer hubris: "We beat you with the RED One, and we'll beat you again with the Scarlet."
Once you make a product announcement, you have to get the product to market quickly. You cannot assume that your competitors are too slow or too dimwitted to respond. The first time around, competitors took RED for granted because they were a new company, run by someone from outside the broadcast electronics business. Lots of companies like that had announced products, perhaps even shipped a few, and then sank beneath the waves. But RED was for real, and its competitors learned to pay attention.
If you're a new entrant into a market, you usually get one free pass where your competitors underestimate or dismiss you. Once you become successful, you're on their radar, and the requirement to get to market quickly becomes paramount.
Wednesday, November 04, 2009
People are Buying Blu-Ray Players, But Not for Blu-Ray
Regular readers of this blog (okay, I admit that there are no regular readers of this blog) know that I've been skeptical about Blu-Ray. It took far too long to resolve the Blu-Ray/HD DVD battle, and then to actually get Blu-Ray players to market at a reasonable price. However, I'm willing to admit that I was wrong. People are willing to buy Blu-Ray players...just not to play Blu-Ray discs.
The Blu-Con conference was held this week in Beverly Hills, and if there was one overriding theme, it was that the motion picture studios are really, really desperate. DVD sales were down more than 13% in the third quarter, and the studios depend on the profits from DVDs to underwrite the cost of producing blockbusters. As DVD sales drop, film financing gets riskier. We may be heading back into an era when a single bomb can sink a studio, as the movie "Cleopatra" almost did to 20th Century Fox in the 1960s.
As DVD sales are dropping, Blu-Ray sales are increasing, but at nowhere near the rate needed to compensate for DVD's decline. However, sales of Blu-Ray players are growing proportionally much faster than sales of Blu-Ray movies. Why? The biggest reason is that the prices of the least expensive Blu-Ray players are now overlapping the high end of DVD player prices--around $99. At that price, why not buy a Blu-Ray player, which can also play DVDs?
Another key reason, and the biggest motivator for sales of Blu-Ray players in the $200 range, is Internet connectivity. The studios thought that the Internet connections on Blu-Ray players would be used for games, chatrooms and other content connected with Blu-Ray movies, but that's not been the case. The biggest use for the Internet connections is to play online movies from Netflix, Amazon.com, CinemaNow and Vudu, and Internet videos from sites like YouTube. The Blu-Ray player manufacturers are in a race to add more and more online services, and retailers are racing to drop prices in time for the Christmas season.
So, are Blu-Ray players going to have a big Christmas? Yes, but I suspect that the movie studios won't be so lucky. The very Blu-Ray players on which they've been pinning their salvation have turned into Trojan Horses, bringing streaming movies right along with them. It's ironic that the success of Blu-Ray players is no longer in serious doubt, but the success of Blu-Ray as a medium for distributing movies is still questionable.
The Blu-Con conference was held this week in Beverly Hills, and if there was one overriding theme, it was that the motion pictures studios are really, really desperate. DVD sales were down more than 13% in the third quarter, and the studios depend on the profits from DVDs to underwrite the cost of producing blockbusters. As DVD sales drop, film financing gets riskier. We may be heading back into an era when a single bomb can sink a studio, as the movie "Cleopatra" almost did to 20th Century Fox in the 1960s.
As DVD sales are dropping, Blu-Ray sales are increasing, but at nowhere near the rate needed to compensate for DVD's decline. However, sales of Blu-Ray players are growing proprotionally much faster than sales of Blu-Ray movies. Why? The biggest reason is that the prices of the least expensive Blu-Ray players are now overlapping the high end of DVD player prices--around $99. At that price, why not buy a Blu-Ray player, which can also play DVDs?
Another key reason, and the biggest motivator for sales of Blu-Ray players in the $200 range, is Internet connectivity. The studios thought that the Internet connections on Blu-Ray players would be used for games, chatrooms and other content connected with Blu-Ray movies, but that's not been the case. The biggest use for the Internet connections is to play online movies from Netflix, Amazon.com, CinemaNow and Vudu, and Internet videos from sites like YouTube. The Blu-Ray player manufacturers are in a race to add more and more online services, and retailers are racing to drop prices in time for the Christmas season.
So, are Blu-Ray players going to have a big Christmas? Yes, but I suspect that the movie studios won't be so lucky. The very Blu-Ray players on which they've been pinning their salvation have turned into Trojan Horses, bringing streaming movies right along with them. It's ironic that the success of Blu-Ray players is no longer in serious doubt, but the success of Blu-Ray as a medium for distributing movies is still questionable.
Labels:
Amazon.com,
Blu-ray Disc,
CinemaNow,
DVD,
Netflix,
Vudu,
YouTube
Friday, October 30, 2009
Put a Roku in your Cable Set-Top Box
For years, I've doubted the ability of third-party set-top boxes from companies like Apple and Roku to make much market impact. Consumers generally detest adding more boxes and more wiring to their televisions. That's why home theater-in-a-box systems have been so successful, and a big reason why TiVo, which still has the best PVR, has struggled to build a viable business selling hardware. Consumers know that they have to have a set-top box from the cable or satellite company, and they accept two other boxes: DVD players (slowly morphing into Blu-Ray players), and game consoles.
In turn, both Blu-Ray players and game consoles are morphing into Internet digital video players. Netflix's streaming movie service is integrated into many Blu-Ray players, as well as Microsoft's Xbox 360. Earlier this week, Netflix made its support for Sony's PlayStation 3 official, and support for Nintendo's Wii is right around the corner. However, none of these devices has anywhere near the household penetration of the ubiquitous cable or satellite set-top box.
Cable and satellite set-top boxes have always been closed, monolithic devices--they act as the gateway to the service provider's content, and nothing gets on them or through them without the service provider getting a cut of the action. Even with initiatives like Tru2Way, there's been little progress on opening up service provider STBs. Perhaps now is the time for them to do so.
Network-enabled Blu-Ray players and game consoles represent the first viable competitors to the service providers' programming hegemony in the living room. The cable and satellite operators can rail against the competition, try to keep their content suppliers from working with them, and try to limit the value of competitors' services with artificial release windows for movies--none of which is likely to work in the long run. Or, they could add network-enabled features to their own set-top boxes and make the competition irrelevant.
Consider a cable or satellite set-top box that allows subscribers to access the same content as Roku. That means Netflix, Amazon, Major League Baseball, and in the near future, YouTube, Hulu, Revision3, Mediafly and a host of other services. You may be thinking, "Netflix? Amazon? Are you out of your mind? Don't the cable and satellite operators have their own Video-on-Demand services that they've spent millions of dollars to build?" Yes they do. But, those VOD systems have limited capacity and are extremely expensive to expand.
According to Comcast, in the first quarter of 2009, over half of its new VOD movies were available the same day as the DVDs. To limit the impact of $1-a-night services like Redbox, the movie studios are pushing to require Netflix and the kiosk operators to get their titles a month or so after the DVDs are shipped to retailers. The way things are going, if you want to see the movie as soon as it's out on DVD, you can buy the DVD or Blu-Ray, or watch it on cable or satellite VOD and pay a premium. If you're willing to wait a month, you can watch it on Netflix.
But, under the model I'm proposing, even if it's from Netflix, you'll still watch it on your cable or satellite set-top box. The service provider will charge Netflix a small fee for access to your set-top box--perhaps pennies per title viewed or a dollar a month. The service provider will get to brand and sell advertising on the interactive program guide and menus that subscribers access in order to find titles. A similar approach would work for other content providers: The cable or satellite operator gets the right to surround the content with advertising, and possibly to even insert advertising directly into the content.
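To make the economics concrete, here's a back-of-the-envelope sketch of the two fee structures described above. All of the figures (the subscriber base, the viewing rate, the nickel-per-title fee) are hypothetical illustrations, not real pricing:

```python
# Back-of-the-envelope sketch of the proposed set-top access fees.
# Every number here is a hypothetical illustration, not real pricing.

def provider_revenue(subscribers, titles_per_month, per_title_fee, flat_fee):
    """Compare a per-title access fee vs. a flat monthly fee charged to an OTT service."""
    per_title_total = subscribers * titles_per_month * per_title_fee
    flat_total = subscribers * flat_fee
    return per_title_total, flat_total

per_title, flat = provider_revenue(
    subscribers=1_000_000,   # hypothetical subscriber base
    titles_per_month=8,      # hypothetical viewing rate
    per_title_fee=0.05,      # "pennies per title viewed"
    flat_fee=1.00,           # "a dollar a month"
)
print(f"Per-title model: ${per_title:,.0f}/month")
print(f"Flat-fee model:  ${flat:,.0f}/month")
```

Under these made-up numbers the flat fee pays better, but the per-title fee scales with actual viewing--which is presumably what the service provider would want as OTT usage grows.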
Is the service provider cannibalizing itself? Yes, but it's capturing a chunk of the revenue that it's now losing, and will lose in even greater amounts in the future, from over-the-top (OTT) services that completely bypass the cable or satellite operator's set-top box.
The OTT services also offer leverage that the cable and satellite operators can apply to providers of conventional cable networks. For example, many service providers have long wanted to move ESPN to a premium sports tier, but ESPN demands a fee for every subscriber, even if they have no interest in sports and never watch any ESPN channels. OTT services like Major League Baseball can be integrated into the service providers' offerings to make a premium sports tier more popular and provide negotiating leverage to move ESPN's services into the same tier.
It's time for the service providers to stop trying to prevent the growth of over-the-top services like Netflix, and to start working with them.
Sunday, October 25, 2009
Can TV News Do Without Anchors?
Variety reports that local television stations across the U.S. are laying off high-paid anchors, cutting newsroom staffs, consolidating news operations from previously competing stations, replacing salaried reporters with stringers, and even making anchors operate their own teleprompters. Lost in all this is a way to potentially retain qualified reporters and editors while cutting costs--getting rid of anchors altogether.
The role of the news anchor is an anachronism that dates back to the earliest days of television, when live feeds from reporters in the field were impossible and most news stories were either read from wire service copy or films that the anchor narrated. Over time, anchors became the "brands" of local stations and television networks alike. The most popular anchors turned in the best ratings, and at the national level, a change in the anchor chair (Dan Rather replacing Walter Cronkite, Brian Williams replacing Tom Brokaw) was the subject of endless analysis.
Today, however, the news anchor is truly an anachronism that stations should consider doing away with. Stories can easily be introduced and narrated by the reporters themselves, whether in the newsroom or in the field. Wire service stories and stories without video can be reported from a desk in the newsroom. More and more stations bring reporters onto the news set to introduce their own stories and answer questions posed by the anchor, so why not "hand the ball" to the reporters and let them do the whole job? Couldn't they ask each other questions?
In my opinion, anchors are becoming less and less important for drawing an audience as the anchor position itself increasingly becomes a revolving door. As the Variety story points out, the show that precedes the late local news (the "lead-in") is perhaps the single most important determinant of which news program a viewer will watch. NBC's local stations have lost a scary percentage of their late news audiences due to the weakness of The Jay Leno Show and NBC's overall prime time schedule--for example, KNBC, the Los Angeles NBC station, has lost 25% of its late night news audience.
If stations are replacing high-paid anchors with younger, less-experienced substitutes in order to save money, why not experiment with getting rid of anchors altogether? Use the savings to retain more of the experienced beat reporters and editors who really form the backbone of a successful news organization.
Labels:
Journalism,
News presenter,
Newsroom,
Television station
Saturday, October 24, 2009
The Atomization of Media
Over the last fifteen years, I've worked on Internet software, streaming media, home video, telecommunications, and most recently, eBooks. Through that experience, it's become clear that the very nature of what constitutes media is changing. Singles have replaced albums as the primary way for people to purchase music. The six-minute short video, whether on YouTube or Hulu, is increasingly replacing the 30- or 60-minute television show (and those shows are increasingly looking like a collection of short videos). Newspapers are being replaced by their web equivalents, and by news aggregators like Google and Yahoo, enabling readers to go right to the topics and stories that they're interested in. 1Cast, a video aggregator that just launched, is doing exactly the same thing for television news.
I call this division of what used to be monolithic media "packages" into smaller, individually-searchable and selectable chunks, atomization. All media are subject to atomization in one form or another. eBooks, my current area of focus, are especially vulnerable...but I think that it's a good thing. The truth is that there are a lot of different book industries, segmented by categories, subjects and target age groups. Reference books and textbooks are particularly ripe for atomization, as are computer and business books, and other types of instructional works. These books are rarely read front-to-back; readers "dive in" at different points to get specific pieces of information. Users of these types of books rely on their indices and tables of contents in order to find what they're looking for. These readers would love to have a robust search engine on top of a collection of books, in order to find the information they need quickly. Some services are providing just such a search engine; after all, that's the idea behind Google Books.
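As a toy illustration of what atomized search looks like, here's a minimal sketch that indexes individually addressable chunks (sections) rather than whole titles, so a query lands the reader on a specific passage. The books and sections are invented examples; a real service like Google Books obviously does far more:

```python
# A minimal sketch of "atomized" book search: index chunks (sections),
# not whole titles, so a reader can jump straight to the passage that
# answers a query. The books and sections below are invented examples.
from collections import defaultdict

def build_index(chunks):
    """chunks: list of (book, section, text). Returns word -> set of (book, section)."""
    index = defaultdict(set)
    for book, section, text in chunks:
        for word in text.lower().split():
            index[word].add((book, section))
    return index

def search(index, query):
    """Return the chunks that contain every word in the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

chunks = [
    ("Cookbook", "3.2", "How to whisk a vinaigrette"),
    ("Cookbook", "5.1", "How to braise short ribs"),
    ("Car Repair", "7.4", "How to replace brake pads"),
]
idx = build_index(chunks)
print(search(idx, "braise"))  # the reader lands on one section, not a whole book
```

The point of the sketch is the unit of retrieval: the result is a (book, section) pair, not a title--which is exactly the atomization that reference and instructional publishing invites.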
Publishers, on the other hand, sell books, not topics or paragraphs. They're resistant to the idea of atomization--after all, how do you price a topic? Their contracts with writers and third-party content suppliers (image libraries, illustrators, etc.) are written on the basis of revenues from book sales, not sales of chunks of books. Nevertheless, this is the direction that publishing is going in. Books will be "exploded" into bits and pieces, aggregated with other titles, augmented with videos, audio and animation, stored in databases and indexed by search engines. The concept of individual books may eventually go away, to be replaced with databases from publishers focused on a single subject area or category, or from aggregators that combine books from multiple publishers into a single database.
There are, of course, some categories that probably won't be atomized in this way. Fiction and narrative non-fiction are intended to be read front to back, beginning to end. Some publishers and distributors are experimenting with selling these titles on a serialized, per-chapter basis, much like the novel serializations in newspapers of the 1800s, but that's probably as far as atomization can go with these kinds of works.
The point is that we need to stop looking at media forms as monolithic and start asking two questions: "How can we break this into small, usable chunks?" and "How can we best monetize those chunks?"
Saturday, October 10, 2009
Sidekick data lost, Danger may be dead
According to this report from CNet, T-Mobile Sidekick users have been unable to access their data, such as calendars and address books, for more than a week. The Sidekick stores most of its users' personal data on servers, rather than on the device itself. Danger, the company that created the Sidekick and operated the servers, was purchased by Microsoft for $500 million last year. Now, according to multiple sources, T-Mobile and Microsoft have notified Sidekick users that the data have most likely been permanently lost; there is only a slim chance that they can be recovered.
T-Mobile has apparently suspended online (and, one presumes, in-store) sales of Sidekicks, and it's hard to believe that they would put them back on sale, or convince customers that Sidekicks will be any more reliable in the future than they are now. On the contrary, T-Mobile and Microsoft are probably facing a massive recall and expense as customers turn in their Sidekicks for other phones.
This could turn into an unexpected windfall for Google and the Android platform if customers decide to replace their Sidekicks with Android-compatible phones. (In an ironic twist, Andy Rubin, the father of Android, was one of Danger's founders.) Microsoft might try to steer Sidekick customers to Windows Mobile phones, but since Microsoft was responsible for the failure, it's unlikely that they're going to be able to force anyone to go with a Windows phone.
The handwriting has been on the wall for the Sidekick for some time; it's a dead platform. Key members of the Danger team have left Microsoft, and most of the others are now working on the increasingly less-secret "Pink" project. However, the platform still had an active base of users and supporters, at least until today.
If T-Mobile and Microsoft don't announce a replacement program soon, expect to see class action lawsuits filed across the country. (Even with a recall, the lawsuits may get filed anyway due to the loss of personal data.) In any event, the Sidekick platform is dead, and the future reliability of any Microsoft-run cloud-based service is in question.
UPDATE 15 October 2009: According to TechCrunch, it now appears that Microsoft has been able to recover most of the lost Sidekick user data, and will begin restoring the information "soon" (the next status update is scheduled for Saturday, so it may be several days or more before users get their data back.)
Labels:
Danger,
Google,
Microsoft,
T-Mobile Sidekick,
Windows Mobile
Sunday, October 04, 2009
It's Not the Tools, It's the Talent
If you're at all interested in video production, one of the most exciting developments in years has been the introduction of digital SLRs (DSLRs) with first-rate video capabilities, especially the Canon 5D MkII and 7D, and the Panasonic Lumix GH1 (which technically isn't an SLR, but fits into the category in every other way). For the cost of a mid-range professional camcorder from a few years ago, you can buy any of these cameras, a nice support rig from Redrock Micro or Zacuto, and a case full of Zeiss prime lenses. The 5D MkII and 7D were used to shoot the new opening title sequence for Saturday Night Live, so keeping the cameras' limitations in mind (primarily their rolling shutter/"Jello" image problems with motion), they're "ready for prime time."
The tools cost less than they ever have and provide superb image quality, which should mean that more people than ever can produce professional videos and motion pictures. Perhaps, but you can't buy talent at B&H. It takes more than a nice camera and great lenses to shoot compelling video. You have to know how to frame a shot, how to light, and how to pull focus. You have to know how to break down a scene into a sequence of shots, and how to get those shots under changing conditions. You have to know how to work with talent, and how to communicate what you're looking for.
The point is that the ever-decreasing cost of tools is democratizing video production, just like the Internet is democratizing video distribution and promotion, but it takes talent (and training) to know how to use the tools to their best advantage. Talent can't be bought and isn't getting any easier to acquire. So, as excited as I am about these new tools, I'm also afraid that we're going to get even more schlocky videos, albeit with really great bokeh.
Saturday, October 03, 2009
Building a National Local News Service on the Cheap
Clay Shirky gave a talk at the Shorenstein Center at Harvard last week, where he performed a physical "biopsy" on a copy of his hometown newspaper, the Columbia (MO) Daily Tribune. Let's get out of the way the fact that this is one newspaper and an unscientific study, but the analysis that he did was very illuminating. He found out the following facts for the issue that he dissected:
- Less than a third of the paper was locally-created content; the majority came from the Associated Press and other syndicated sources.
- Less than half of the locally-created content (not including sports) was written by the paper's hard news reporters; the rest came from columnists, covering topics such as cooking, travel, etc.
- The paper has only six hard news reporters, out of a staff of 59. Every reporter filed at least one story that day; three of the six filed two stories.
- The paper also has 11 sports reporters, including one dedicated only to high school sports.
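For what it's worth, the arithmetic behind the reporter head count follows directly from those bullets (these are only the figures stated in the talk; the rest of the 59-person staff isn't broken down):

```python
# Tallying the "biopsy" numbers from the bullets above. Only the
# figures actually stated are used; the remaining staff roles
# (editors, production, sales, etc.) are not broken down in the talk.
hard_news_reporters = 6
sports_reporters = 11
total_staff = 59

core_reporters = hard_news_reporters + sports_reporters
print(core_reporters)                          # the reporters who do original newsgathering
print(f"{core_reporters / total_staff:.0%}")   # their share of the total staff
```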
The core function of the newspaper for its readers is fulfilled by the 17 reporters and the editors assigned to them. If you want to provide a local, web-based news service, you'll need those 17 reporters and their editors. If you've got several of these local services, you can have a centralized team that adds content from syndicated services, lays out and populates the websites, and manages the IT infrastructure.
If you're someone like Comcast that already has a national local advertising sales team (Spotlight), you can handle local and national advertising sales for the news service through the salesforce you already have. Thus, you can build a local news service on the cheap using experienced, professional journalists, and with national reach.
Are TV Stations Worth More Dead Than Alive?
Tom Wheeler of Core Capital Partners has written an interesting article on how the U.S. broadband bandwidth crunch could be solved. He points out that at current market prices, Sinclair Broadcasting's 30 television stations are worth less than $100 million, or $3.3 million per station, and there are 100 bankrupt stations that, by definition, have no market value. Why not take some of those stations' bandwidth and apply it to either Hulu-like offerings or common carrier-style data services?
The idea is seriously worth considering, especially when mated to the "white space" data service proposals made by Microsoft, Google, HP, Philips and others. Microsoft in particular has had significant technical success experimenting with two-way WiFi-like data services and devices that can work within the "white spaces" between television channels, without interfering with the adjacent channels. However, there's a lot more bandwidth available within a channel than between channels. If a television channel is converted to a data service, the "white spaces" will do what they're supposed to do, which is protect other channels from interference.
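Some rough arithmetic shows why a full channel is so much more attractive than the white spaces alone. A U.S. broadcast channel occupies 6 MHz, and the ATSC standard delivers about 19.39 Mbps of one-way payload in that channel; the capacity of a repurposed two-way data service would depend on the modulation chosen, so treat these as illustrative figures only:

```python
# Rough, illustrative arithmetic for repurposing TV channels as data
# services. 19.39 Mbps is the one-way ATSC broadcast payload per 6 MHz
# channel; an actual two-way service would have different capacity.
CHANNEL_MHZ = 6.0
ATSC_PAYLOAD_MBPS = 19.39

stations = 30  # Sinclair's station count, from the post
total_mbps = stations * ATSC_PAYLOAD_MBPS
print(f"One {CHANNEL_MHZ:.0f} MHz channel carries ~{ATSC_PAYLOAD_MBPS} Mbps of broadcast payload")
print(f"Across {stations} stations: ~{total_mbps:.1f} Mbps, one channel per market")
```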
The FCC is going to have a lot to say about this, of course. It issues (essentially free) licenses to broadcasters for television, not data. It won't allow the only television station serving a community to become a data service. The U.S. Congress could also get involved, because if free television bandwidth is repurposed into a paid data service, the government could resell the licenses and make money from them.
However, converting some television stations to data services would be very appealing. Broadcasters in markets with converted stations would see competition for advertising sales go down and revenues go up. Existing "white space" users such as public safety and wireless microphones wouldn't have as much competition for their frequencies and would have far fewer problems with interference. Even cable and IPTV operators, who would naturally oppose the entry of new competitors, would benefit, because their "must carry" obligations for the stations converted to data usage would end. They could then use those channels to add more cable networks, add more bandwidth to their own data services, or both.
A company like Microsoft or Google could pick up Sinclair for pocket change. They would then have the ability to offer high-speed wireless data services in 30 cities, if the FCC approves. This could turn virtually worthless television stations into very valuable properties indeed.
Monday, August 24, 2009
Criswell Predicts: Apple to Approve Google Voice on Friday
Apple has got to make this Google Voice thing go away, and quickly. As I stated in my previous post, there may be nothing that the U.S. Government can do to force Apple to approve the application, but they've already forced the company to make major changes to both how it runs the App Store and to its penchant for secrecy.
In its response to the FCC, Apple makes its case for why it probably should reject Google Voice (although Michael Arrington reported that Apple did formally reject it, contrary to its statement to the FCC). It can't just turn around and approve the application without offering some explanation, but it can't admit that it's approving it due to government pressure, either...too many egos at stake.
So, I'm betting that Apple announces that it has approved Google Voice with a simple press release this coming Friday, after the U.S. stock markets close. Why Friday? The first reason is that late Friday is the best time to get minimal coverage for an announcement--much of the business press has already left for the weekend. However, the second, and far more important reason is that it's the day that Apple ships Snow Leopard. The computer press will be falling all over itself to get out the first reviews of the new operating system, and the Google Voice approval will likely fade into the noise.
On the other hand, if Google Voice isn't approved on Friday, don't look at me...it was Criswell's prediction.
Sunday, August 23, 2009
Apple to the FCC: "You see? You see? Your stupid minds! Stupid! Stupid!"
Well, okay, that's actually a line from "Plan 9 from Outer Space", but someone at Apple was thinking along those lines when they responded to the FCC's questions last Friday concerning Google Voice. (You can find Apple's entire response here.) After indulging in seven paragraphs of marketingspeak, Apple claims that it hasn't actually rejected Google Voice, which TechCrunch's Michael Arrington disputes; his sources inside Google assert that Apple did, in fact, reject the application. (We don't yet know Google's side of the story, because it redacted the section of its response to the FCC dealing with its discussions with Apple.) However, Apple states its case for why it would likely reject the application, a case that doesn't stand up to even the most cursory examination.
Apple claims that the Google Voice application replaces the functionality of Apple's carefully-crafted Visual Voicemail and text messaging features. In fact, it does no such thing. Google Voice sets up an additional phone number for the user. If the user chooses to give out their Google Voice number, and if they've configured Google Voice to forward calls to their iPhone, then and only then does Google Voice replace the iPhone's voicemail functionality for calls placed to their Google Voice number. If someone sends an SMS to the user's Google Voice number, the user would access the message through the iPhone Google Voice application instead of the iPhone's own SMS feature. But, if someone calls the user's iPhone directly, then the iPhone's voicemail system is used, and if they send an SMS directly to the iPhone, the iPhone's own capabilities are used. Google Voice doesn't usurp any functionality of the iPhone--it adds additional capabilities.
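The routing behavior described above boils down to one question: which number did the caller or sender use? A minimal sketch of that decision, in Python (a hypothetical illustration of the logic, not Google's actual implementation; the function and number are invented):

```python
def handler_for(dialed_number: str, gv_number: str = "+1-555-0100") -> str:
    """Return which system handles an incoming call or SMS.

    Only traffic addressed to the user's Google Voice number is handled
    by the Google Voice app; anything sent directly to the iPhone's own
    number still uses the phone's native voicemail and SMS features.
    """
    if dialed_number == gv_number:
        return "google_voice_app"
    return "iphone_native"

# A call to the Google Voice number goes to the Google Voice app;
# a call straight to the iPhone's number stays with the built-in features.
print(handler_for("+1-555-0100"))  # google_voice_app
print(handler_for("+1-555-0199"))  # iphone_native
```

The point of the sketch is simply that Google Voice adds a parallel path rather than replacing the existing one.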
Apple also charges that the Google Voice application copies the user's entire contacts list to Google's own servers, which Michael Arrington again claims is untrue. Even if the charge is true, Apple itself enables the user's contact list to be copied to Google's servers via iTunes. Why is it okay when Apple does it but not when Google does the same thing?
Apple seemed to believe that the FCC wouldn't or couldn't figure out how Google Voice actually works. Making the assumption that you can baffle the U.S. Government with B.S. is dangerous. It reminds me of Jim Allchin's infamous testimony during Microsoft's antitrust trial, when he presented a bogus video purporting to demonstrate that Internet Explorer couldn't be removed from Windows without causing the operating system to slow down or malfunction. The Government shot holes in the video and Microsoft was forced to withdraw it. Microsoft subsequently admitted that it also falsified a second video that purported to show how easy it was to install Netscape Navigator on Windows.
Microsoft did a lot to destroy its own credibility in the course of its antitrust trials, and even though it managed to avoid serious damage through a "sweetheart" settlement with the Bush Administration, the company is still paying the price.
The FCC has a great deal of power over AT&T, but if Apple independently made the decision to ban Google Voice, as both it and AT&T claim, there's very little that the FCC can do. Apple's share of the mobile phone and smartphone markets is too small to claim that the company is a monopolist, so there's also very little that the U.S. Department of Justice can do. (DOJ could charge Apple with perjury in its response to the FCC; good luck trying to get that to stick.) The Federal Trade Commission might be able to take action, but I'm not sure what its grounds would be. So, Apple is probably not at risk for prosecution, but its reputation is every bit as much at risk as Microsoft's.
Apple's public behavior concerning the App Store has already changed substantially since the FCC issued its inquiry letter: Phil Schiller, Apple's Senior VP of Marketing, has personally written a developer and a blogger to explain the App Store approval process and state that improvements are underway. The response to the FCC made public a number of key details about the approval process, such as the average number of weekly filings and the number of reviewers. A formal appeal process for rejected applications also seems to be in the works.
With all that, however, Apple needs to approve Google Voice as an application, to try to put the immediate crisis behind it. In the long run, it should allow iPhone users to install applications from any developer, whether or not they've been approved by Apple, as Google's Android operating system does. Not only would this lessen government scrutiny, it would eliminate a huge reason for jailbreaking the iPhone.
Labels:
apple,
Federal Communications Commission,
Google,
GoogleVoice,
iPhone,
TechCrunch