Wednesday, November 19, 2014

How Uber crossed the line with its threats against Sarah Lacy

You probably know that, at a dinner last Friday, Uber Senior Vice President Emil Michael suggested that his company hire four top political opposition researchers and four journalists to investigate the personal lives and families of journalists who write negative articles about the company. Michael focused his anger on PandoDaily founder Sarah Lacy, and said that the Uber "smear team" could, according to BuzzFeed's Ben Smith, "...in particular, prove a particular and very specific claim about [Lacy's] personal life."

Since BuzzFeed broke the story, there's been a firestorm of media attention, which led to apologies from Michael and a tweetstorm from Uber head Travis Kalanick, in which he disavowed Michael's statements and said that they didn't represent Uber, but that he wouldn't fire Michael. Uber's response, or lack thereof, hasn't dampened the firestorm one bit, and the company has managed to alienate much of the press that it needs for future publicity.

One thing to keep in mind is that it's been long-standing policy at some tech firms to retaliate against journalists and publications that report stories contrary to the companies' interests. Apple and Microsoft are the most notorious for doing this, but many other companies in Silicon Valley have done the same. However, with one known exception, the method that the companies have used is to withhold information from the targeted journalists. There are documented cases in which both Microsoft and Apple barred certain journalists and publications/websites from embargoed previews of new products, reviews of unreleased products and announcement events. Without access to new and forthcoming products, those journalists and publications were at a competitive disadvantage, because they couldn't review or cover the products until they were released.

Where Michael's threat crossed the line is that it moved from withholding proprietary information, which any company has the right to do, to digging up and revealing negative information about journalists with the intent of damaging or destroying their reputations. Even the one exception I mentioned above, Hewlett-Packard's hiring of private investigators in 2006 who lied to phone companies in order to obtain journalists' call records, was aimed at identifying the source of leaks, not at gathering material to destroy the journalists' reputations.

Both Michael and Kalanick have characterized Michael's remarks as, essentially, a revenge fantasy rather than anything the company would actually do. However, by making the statements in front of Kalanick at the dinner, with Kalanick not disavowing them immediately, both men reinforced the increasingly common view of Uber as an amoral, out-of-control company that will do anything to win, up to and including breaking the law and ignoring court rulings. I would remind Kalanick and Uber that Microsoft had the same philosophy and believed that it was too big for anyone to touch. Both the U.S. Justice Department and the European Union nevertheless successfully prosecuted Microsoft for antitrust violations, which in turn led to a loss of management focus, disillusioned and demoralized employees, and damage to Microsoft's reputation, all of which contributed to the company's decline and loss of direction.

In essence, unless Uber starts making fundamental changes in the way that it does business, it's setting itself up for an eventual battle (or battles) that it can't win. There is always someone who can take you down if they really want to.

Monday, November 17, 2014

3D printing and iSTREAM innovate car manufacturing

If you're a long-time reader, you know that I'm a car nut. I don't do much work on my car myself, but I love new cars and new technologies. The car industry is in the middle of two transitions: The first is the transition from gasoline-only power to a mix of energy sources--gasoline or diesel combined with electric, electric stored in batteries, electric generated on demand by a fuel cell, and even gasoline or electric with pneumatics (Peugeot-Citroën's Hybrid Air).

The second is the widespread application of intelligence to cars. By that, I mean the infotainment and telematics systems that integrate entertainment, navigation, online information, sensors, safety systems, controls, smartphone integration and even drive-by-wire systems. In the 1990s, automakers began adding these systems to their cars, starting with the German luxury brands. The problem was that the electronics were sourced from multiple vendors and were poorly integrated, which led to reliability problems for BMW, and particularly for Mercedes-Benz, which had long had a reputation for "bulletproof" cars. As carmakers greatly improved their cars' mechanical integrity, they simultaneously introduced a plethora of new points of failure with electronics. Today, these systems have improved but remain a sore spot--Consumer Reports' latest car reliability survey found particular problems with the following brands:

  • Fiat Chrysler's Jeep, Ram, and Fiat placed at the very bottom of the rankings, due in part to their electronics. (The Fiat 500L was the worst car in the entire survey, by a significant margin.)
  • Ford and Cadillac continue to struggle with their infotainment systems, MyFord Touch and CUE, respectively. GM does have some very good systems, but many car reviewers consider CUE the worst infotainment system on the road.
  • Nissan's Infiniti brand fell 14 places in the reliability rankings in one year, largely due to the Q50, which replaced the popular G37 for 2014. The big problem with the Q50 is that Infiniti switched to a numb steer-by-wire steering system, which, combined with problems with the car's InTouch infotainment system, pulled the car's ratings down.
Now, we're entering a third transition, this time in how cars are manufactured. The development that's gotten the most press is Local Motors' Strati, which has a 3D-printed body, chassis and seats, and a Renault power train. The Strati looks somewhat like an oversized toy car, and it currently takes 44 hours to print the 3D components, but it's more a harbinger of things to come than a viable product in its own right. Fairly early in automobile manufacturing, bodies ("coaches"), chassis and engines were built by different companies. For example, Duesenberg, which at one time made the most expensive cars in the world, actually built only the engines and chassis. Local Motors' approach opens up that possibility again.

Start with a high-volume, standardized chassis, with or without an engine and transmission. Enable manufacturers to build interchangeable parts--different chassis, suspensions, engines, motors, transmissions and power sources, all that fit within the same 3D "envelope" and with the same mounting points for the body. Next, print the body, doors, hood and seats, install the glass, lights, instruments, airbags, infotainment system, telematics and other hardware, integrate the components and attach the body to the chassis. Depending on the chassis and powertrain, you could build anything from a two-seat roadster to a mini-CUV on the same chassis and assembly line. You could literally customize not just the colors, but the designs of the cars, by making changes to the cars' CAD files. And, by designing the chassis to include the safety crumple zone, you'd have even more flexibility in how to design the body.

Another approach is being pioneered in the U.K. by legendary car designer Gordon Murray. He claims that his manufacturing process, called iSTREAM (for iStabilised Tube Reinforced Exo-Frame Automotive Manufacture), can build cars with 85% less capital investment and 60% less energy than today's auto plants, in plants 20% the size of existing plants with comparable capacity, and with cars that generate 40% lower CO2 emissions over their working lives. iSTREAM eliminates stamping presses, which are extremely expensive, require very expensive tooling, use enormous amounts of power and take months to switch from one design to another. Instead, the body panels are made from honeycomb-sandwich composite materials, which are lighter than steel but stronger. The system uses lasers and CNC machines to cut, bend and weld metal tubes into an integrated chassis and roll cage with front and rear crush zones. That dramatically decreases the load that the body panels have to bear, while still enabling iSTREAM-based cars to exceed European safety standards. Murray claims that the iSTREAM manufacturing process creates cars that weigh half as much as comparable vehicles, and that it can scale up to full-size sedans, SUVs and minivans.

To date, iSTREAM has been used to produce prototypes of two city cars, the T.25 and T.27, as well as a prototype of Yamaha's Motiv.E electric city car. The iSTREAM weight advantage enables electric cars to use batteries with half the capacity, and therefore half the cost and weight, of batteries in conventionally-built cars. For example, Nissan's Leaf uses a 24kWh battery for an EPA range of 75 miles; a comparable iSTREAM car should get the same range from a 12kWh battery, or double the range from the same 24kWh battery, as the sketch below illustrates.
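
To make the battery arithmetic concrete, here's a minimal Python sketch. The assumption that energy consumption scales roughly in proportion to vehicle weight is a simplification of mine, not a figure from Gordon Murray Design; the Leaf numbers are the ones cited above.

    # Rough sketch of the range arithmetic above. The proportional-to-weight
    # energy model is a simplifying assumption, not a Gordon Murray Design figure.
    LEAF_BATTERY_KWH = 24.0
    LEAF_RANGE_MILES = 75.0  # EPA rating cited above

    leaf_wh_per_mile = LEAF_BATTERY_KWH * 1000.0 / LEAF_RANGE_MILES  # ~320 Wh/mile

    WEIGHT_RATIO = 0.5  # iSTREAM car at roughly half the weight (the post's claim)
    istream_wh_per_mile = leaf_wh_per_mile * WEIGHT_RATIO  # ~160 Wh/mile

    def range_miles(battery_kwh, wh_per_mile):
        """Estimated range from pack size and energy use per mile."""
        return battery_kwh * 1000.0 / wh_per_mile

    print(range_miles(12.0, istream_wh_per_mile))  # ~75 miles from a 12 kWh pack
    print(range_miles(24.0, istream_wh_per_mile))  # ~150 miles from a 24 kWh pack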

Both the Local Motors and iSTREAM approaches lend themselves to smaller, more decentralized manufacturing plants, more customization options, and "just-in-time" manufacturing approaches that minimize both parts and finished car inventories. If they're successful, we could see the revival of "bespoke" cars, but instead of the highly-customized super-luxury models of today, they would be priced about the same or even less than comparable conventionally-built models.

Friday, November 14, 2014

CBSN: An experiment worth watching

Last week, CBS launched its own dedicated news channel, called CBSN. Unlike CNN, Fox News or MSNBC, CBSN isn't a cable news channel--it's only available over the Internet. In addition, unlike those cable networks, CBSN isn't (yet) a true 24-hour-a-day service; it has live content from 9 a.m. to midnight U.S. Eastern time on weekdays. I typically watch CBSN on my Roku box, but it's available on basically any device with a fast Internet connection and a web browser. When viewing it on a PC, tablet or smartphone, CBSN has a well-designed user interface that enables the viewer to go back and watch previous stories along with the live feed, and it also gives a preview of upcoming stories.

It's not the user interface or the fact that it's on the Internet that makes CBSN interesting, however; it's the way that the channel presents the news:
  • There are plenty of short feature stories, as we've come to expect from cable news, but there are also long, in-depth pieces with details that the 30-minute national newscast on CBS would never have time to give. These background stories, from correspondents such as the Pentagon's David Martin, provide much more insight than the cable networks usually give.
  • So far, CBSN is blissfully free of the spinmeisters and "instant experts" of the cable networks--it's largely hard news, not news and opinion whipped into an indiscriminate souffle.
  • With one exception, CBSN has so far stayed away from the "beat it to death" style of continuous coverage of a single news story, such as CNN's infamous coverage of the disappearance of Malaysia Airlines Flight 370. On Thursday, CBSN carried WCBS's wall-to-wall coverage of two window washers dangling from the side of 1 World Trade Center. It was an understandable decision, but in hindsight, CBSN could have cut away from time to time, at least to give headlines.
  • There's a refreshing casualness to CBSN's anchors. Instead of the suit-and-tie look for men and power suits for women, there are shirt sleeves and open collars--exactly what reporters and editors wear in a working newsroom.
CBSN is going through teething pains; earlier this week, I saw the same segment start and stop three times, and the service has had other "slight technical difficulties." However, that's to be expected. On the plus side, CBSN has access to CBS News's worldwide bureaus, local stations and affiliates, and it doesn't have the political conflicts that keep MSNBC and NBC News operating at arm's length from each other. That means that it has the resources to compete with any of the cable networks, but with the ability to operate at a much lower cost, because most of the money is already being spent to run CBS News, its owned & operated stations, and affiliates.

While CBSN is still evolving, it's already a good alternative to the cable networks, and it's likely to get better. My hope is that the service will soon start offering some weekend coverage, which might happen if an important news event breaks, or continues, on a weekend.

Thursday, November 13, 2014

Would the big U.S. TV networks sell their stations?

Earlier today, TVNewsCheck ran a story about the positions of the Big 4 U.S. television networks (ABC, CBS, Fox and NBC) on ATSC 3.0. The Advanced Television Systems Committee (ATSC) administers the U.S. standard for digital terrestrial television broadcasting, and ATSC 1.0 is the system currently in use. ATSC 3.0 is intended to implement capabilities that are limited or missing in the current standard, including support for image resolutions beyond HD. Most important for many broadcasters, however, is that ATSC 3.0 is intended to bring mobile TV reception to parity with the fixed HDTVs that we use today. The broadcasting industry realizes that an ever-increasing percentage of its audience is watching television outside the home on smartphones and tablets, but today, access to those devices is mediated by the mobile phone carriers (AT&T, T-Mobile, Sprint, Verizon, etc.). Broadcasters want direct access to those devices and viewers, and are hoping that ATSC 3.0 will give them that access.

The transition to ATSC 3.0 won't be without problems: Broadcasters spent many billions of dollars on new cameras, production equipment and transmitters to move from analog to digital television. Moving from ATSC 1.0 to 3.0 probably won't entail that level of investment, but it will still be expensive for broadcasters. In addition, smartphone manufacturers, mobile phone carriers and consumer electronics companies will have to be convinced (or required by law) to support the new features of ATSC 3.0 in their products. That will take time--potentially as long as ten years.

According to TVNewsCheck, both ABC and CBS have gone on the record as withholding their judgment on ATSC 3.0. Both NBC and Fox support ATSC 3.0 in principle, but both are waiting for more details of the standard to emerge before making a commitment. That led me to wonder whether the network broadcasters actually want or need to make the investments needed to support ATSC 3.0 in the television stations that they own.

All four of the top commercial television networks in the U.S. own and operate several television stations in major cities; in the industry, these are called O&Os (for Owned & Operated). For example, all four networks own and operate stations in New York, Los Angeles, Chicago and Philadelphia. In Dallas-Fort Worth, all but ABC own and operate their own stations; in San Francisco-Oakland-San Jose, all but Fox own their own stations. The local stations are a big source of revenue and earnings for the networks. For example, in 2013, CBS's network had gross revenues of $8.645 billion and operating income of $1.593 billion, while its local Owned & Operated stations, both television and radio, had gross revenues of $2.696 billion and operating income of $807 million. On a percentage basis, the local stations, while not the most profitable unit of CBS, had a much higher operating margin than the network (roughly 30% vs. 18%), as the quick calculation below shows.
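
As a quick check on those percentages, here's the arithmetic as a minimal Python sketch, using only the 2013 figures cited above (in billions of dollars).

    # Operating margins from the 2013 CBS figures cited above (in $ billions).
    segments = {
        "Network": {"revenue": 8.645, "operating_income": 1.593},
        "O&O stations (TV and radio)": {"revenue": 2.696, "operating_income": 0.807},
    }

    for name, figures in segments.items():
        margin = figures["operating_income"] / figures["revenue"]
        print(f"{name}: {margin:.0%} operating margin")
    # Network: 18% operating margin
    # O&O stations (TV and radio): 30% operating margin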

On the surface, it seems obvious that CBS, and the other big networks, should keep their stations. However, when you look further, the choice becomes less clear:

  • The major networks could easily get $1 billion or more for each of their stations in the top U.S. markets, and those sales would be taxed as long-term capital gains, not ordinary income.
  • The networks are already getting a significant amount of their income from retransmission fees charged to cable, satellite and IPTV video operators. They get those fees directly from the video operators in the markets where they own stations, and indirectly in other markets through the fees that they charge their affiliates for carrying their programs. If the networks sell some or all of their stations, they would get affiliate fees from those stations without any of the costs of operating the stations.
  • If the networks no longer own over-the-air stations, they would no longer be directly subject to FCC rules. That means no more multi-million dollar fines for "fleeting expletives" or unplanned nipple slips. The networks would still have to abide by FCC content rules to protect their affiliates, however.
  • Over 90% of U.S. households already get their television via cable, satellite or IPTV. Over-the-air reception is increasingly an anachronism.
As little as ten years ago, it would have been unthinkable for the Big 4 networks to sell their stations--if anything, they were aggressively trying to buy more. However, since then, we've been through the Great Recession of 2008, which hammered local ad revenues. Network television viewership has been declining for several years, and ratings for many of today's successful network series would have guaranteed their cancellation just a few years ago. Now, many industry analysts are forecasting that digital will supplant broadcast television as the biggest recipient of advertising revenue within the next few years. If the Big 4 have the choice between spending billions of dollars to upgrade their stations to comply with ATSC 3.0, or making billions of dollars from the sale of their stations, it's looking increasingly likely that sales, at least of their smaller-market stations, will make more sense.

Is Amazon shifting its Core Value Proposition from price to speed?

Two weeks ago, Amazon announced that it intends to build and open its first distribution center in Illinois. Currently, Amazon fulfills many orders to Illinois, including Chicago, from a distribution center in Indiana and others further away. Having a distribution center in Illinois would enable Amazon to provide next-day or even same-day delivery to Chicago customers. However, Amazon also agreed to collect sales tax from Illinois residents as a condition of opening a local distribution center; that effectively represents a 6.25% to 9.75% price increase, depending on the customer, municipality and product category.

Amazon's decision to open a distribution center near Chicago aligns with its recent decisions to open centers near other big cities, including Boston, Charlotte, Los Angeles, Milwaukee and metro New York (all of which have been announced since September), and to collect sales taxes in those states. In addition, earlier today, Amazon announced an agreement with Hachette to sell its print books and eBooks, after months of public squabbling. The terms of the deal between the two companies are confidential, but both companies have confirmed that Hachette will have the ability to set its own prices for eBooks. That's in line with the agreement that Amazon reached with Simon & Schuster last month, which also gave that publisher the right to set eBook prices.

One other data point, this one anecdotal, is that I recently purchased some food products from Amazon. The company is moving aggressively into grocery sales and delivery with its AmazonFresh service in San Francisco, metro Los Angeles and Seattle. For customers in other areas, Amazon recently launched a service called Amazon Prime Pantry, which enables Amazon Prime members to order a limited selection of non-perishable items for a flat fee of $5.99 with 3-4 business day delivery. That also represents a price increase for Prime customers, who pay nothing for 2-day delivery of other products. I ordered one of the grocery products that didn't require Prime Pantry shipping, but I paid $17.99 for a product that I subsequently purchased for $4.79 from a local supermarket. You can safely assume that much of the price difference went for the "free" shipping.

So, we have multiple points of evidence that Amazon is willing to let its effective consumer prices increase. If that's true, it's a dramatic shift in the company's strategy, which has been to underprice its competition, much like Walmart's now-abandoned "Always the Low Price" strategy. However, if Amazon is willing to give up being the low-price vendor, what's the tradeoff? Amazon's recent flurry of announcements suggests that it's speed of delivery, and possibly the ability to provide same-day grocery delivery to the largest metropolitan markets. Amazon is signaling that it intends to offer next-day delivery to perhaps 90% of the continental U.S., and same-day delivery in as many as 50 of the largest cities.

I discussed my hypothesis with one of my former clients, and he made a very important point: Anyone can offer the lowest price as long as they're willing to take losses; price is never an effective long-term differentiator. On the other hand, same-day delivery is a powerful differentiator, because it requires massive capital and labor investments that few competitors are willing or able to make. Amazon is making the investments to give itself a long-term competitive advantage over not only brick & mortar retailers, but also quasi-competitors like Google that don't have logistics as a core competency. In summary, Amazon is taking some of the money that it's been using to subsidize below-cost sales to consumers and shifting it to capital and labor investments in distribution centers.


Tuesday, November 11, 2014

Does Android have the advantage in the smartwatch war?

Earlier today, ReadWrite ran an article about two new Android Wear-based smartwatches: LG's G Watch R and Asus's ZenWatch. According to ReadWrite, both watches are significant improvements over previous Android-based smartwatches. The G Watch R has a round face, like Motorola's Moto 360, but unlike the Motorola watch, it uses the entire watch face as the display. In addition, it looks more like a conventional watch, uses conventional watchbands, and has better battery life. The ZenWatch is rectangular like the Apple Watch and has an attractive design; it uses conventional watchbands, has an AMOLED display, and runs Asus's ZenUI user interface on top of Android Wear.

Neither of the new watches has all the features of the Apple Watch, but it's becoming increasingly clear that feature parity really isn't an issue. The watch you buy depends primarily on the smartphone you use--the Apple Watch only works with iPhones, while Android Wear watches only work with Android smartphones. Android smartphones had 84.7% of the worldwide market share (in units shipped) in Q2 2014 according to IDC vs. 11.7% for iOS. That means that about 88% of potential buyers are not in the market for an Apple Watch unless they also want to throw away their current smartphone and switch to an iPhone.

Android smartwatches have another, less-obvious advantage: There are several companies making them, and new smartwatches are being released all the time. Samsung, LG, Motorola/Lenovo and Asus are all making Android Wear-based smartwatches, and each company is on a more aggressive release cycle than Apple. Consider that Apple generally only releases a new version of a product once a year. That makes Apple's upgrades predictable, but it also means that it takes a year for new features to reach the market. In the Android world, on the other hand, smartphone makers are releasing new models all the time, often with features that are intentionally experimental, in order to gauge consumer interest. That enabled Android smartphone makers to get big screens and NFC for financial transactions out before Apple.

The Android smartwatch makers could have the same advantage over Apple. Their watches are evolving continuously, as new features and form factors are introduced, while Apple is a monoculture that will probably only release new watches once a year. That means that if a smartwatch vendor stumbles on a very popular combination of designs and features, Android smartwatch makers can clone the designs and features and get them to market faster than Apple can.

In my opinion, the jury is still out as to whether there really is a mass market for smartwatches. However, the Android ecosystem is in a far better position to take advantage of the market opportunity if and when someone comes up with the "killer app" of smartwatches.

Friday, November 07, 2014

Way off topic: How many calories am I feeding my cat?

You may be asking: Why is Len Feldman writing about cat food? The reason is that I've stumbled onto a problem that a lot of people are likely to have with their cats and dogs, and the answer might help others. My cat, KayTee, recently developed Type 2 diabetes because I've been giving her too much food. To be more precise, she sees me as a box that dispenses Mars's Temptations treats whenever she pushes my handle (yells at me). So, in addition to putting her on insulin (Sanofi's Lantus, which seems to be the most effective for cats), her vet had me switch her food from Purina's Fancy Feast Chicken & Turkey dry food to a Purina Veterinary Diets formula, DM (Dietary Management). (She only eats dry.) Unfortunately, everything is expensive--Lantus is $200 or more per bottle, depending on where I buy it, her syringes are around $25 per 100, and the DM food is over $30 for a 10 lb. bag.

I'm running out of her DM, and I temporarily can't buy another bag, but I do have a lot of the Fancy Feast left. The question is, can I safely substitute the Fancy Feast for the DM? While in some cases, veterinary foods are prescribed to address urinary, kidney or gastrointestinal problems, in this case the primary issue is regulating caloric intake. So, if I can match the calorie count of Fancy Feast to that of DM, I should be close enough for temporary use. (I'm not suggesting that you do this permanently, only in an emergency.) But, here's the problem: Purina publishes the calorie content of many of its pet foods, but not Fancy Feast--not on the bag, and not online. So, how can I figure out how much Fancy Feast to substitute for the DM?

It turns out that there are two methods:
  1. The Association for Pet Obesity Prevention has published a list of dry cat foods with their calorie content per cup as a downloadable PDF file. This document lists hundreds of foods and flavors from dozens of brands, so you may find your cat's food on the list. However, Fancy Feast isn't on it.
  2. All pet food manufacturers are required to publish nutrient values for protein, fat, fiber, moisture and (if relevant) ash. These numbers are provided in two forms: a "Guaranteed Analysis," and values calculated using Federal standards. The Federal-standard values are more accurate, but the Guaranteed Analysis numbers can be used if the Federal numbers aren't available. Plug the numbers into an online calculator provided by the Feline Nutrition Awareness Effort, and it will tell you approximately how many calories your cat's food has, per 100 grams and per ounce. Purina states the calorie counts of its foods per cup, so multiply the per-ounce value by 8 to get calories per cup.
In my case, DM dry has 592 calories per cup. When I plugged the numbers for Fancy Feast into the calculator and multiplied the per-ounce value by 8, I came up with 960 calories per cup. (Which might explain why Purina doesn't list the calories on the label.) Next, I took my vet's instruction to feed KayTee a half-cup of DM a day. 592/960 is about 62%, meaning that I should feed KayTee only 62% as much Fancy Feast per day to give her the same number of calories as the DM. That works out to just under 1/3 of a cup a day, as the sketch below shows. So, instead of feeding her 1/4 cup of DM twice a day, I'll feed her 1/3 cup of Fancy Feast once a day.
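
For anyone who wants to redo this arithmetic for their own cat's food, here's a minimal Python sketch. The modified Atwater factors (3.5, 8.5 and 3.5 kcal per gram of protein, fat and carbohydrate) are my assumption about how calculators like FNAE's estimate calories, and the cup-to-ounce conversion simply follows the method described above; treat the output as an approximation and check with your vet.

    # Sketch of the calorie-matching arithmetic above. The modified Atwater
    # factors are an assumed approximation of what the FNAE calculator uses.
    def kcal_per_100g(protein_pct, fat_pct, fiber_pct, moisture_pct, ash_pct=0.0):
        """Estimate calories per 100 g of dry food from its guaranteed analysis."""
        # Carbohydrate (nitrogen-free extract) is whatever percentage is left.
        nfe_pct = 100.0 - protein_pct - fat_pct - fiber_pct - moisture_pct - ash_pct
        return 3.5 * protein_pct + 8.5 * fat_pct + 3.5 * nfe_pct

    def kcal_per_cup(kcal_100g, grams_per_cup=227.0):
        """Convert to calories per cup; 227 g assumes the post's cup = 8 oz figure."""
        return kcal_100g * grams_per_cup / 100.0

    # Portion scaling: match the calories in the prescribed DM ration.
    DM_KCAL_PER_CUP = 592.0          # Purina's published figure for DM dry
    SUBSTITUTE_KCAL_PER_CUP = 960.0  # the estimate above for Fancy Feast dry
    DM_CUPS_PER_DAY = 0.5            # the vet's instruction

    cups = DM_CUPS_PER_DAY * DM_KCAL_PER_CUP / SUBSTITUTE_KCAL_PER_CUP
    print(f"Feed about {cups:.2f} cups per day")  # ~0.31, just under 1/3 cup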

Again, I don't recommend that you do this other than as a temporary measure, and I definitely don't recommend it if your cat has been prescribed food to treat conditions other than excess weight or diabetes, because those prescription or veterinary foods include (or omit) specific ingredients based on your cat's condition. In that situation, calorie counts alone are largely irrelevant. However, this exercise is helpful if you've got an overweight cat and want to find a food with fewer calories, or if you need to bulk up an underweight cat.

Wednesday, November 05, 2014

Contract jobs: The employee takes all the risks

I had a conversation with a recruiter for an East Coast contracting agency last week that was very illuminating, in that it made clear a number of drawbacks to the contract worker model that I hadn't really thought about. Rather than rehash the discussion, let's go right to the key points:

  • When a contract recruiter contacts a candidate for a job, the candidate will be working for the recruiter's client, but the employer of record will be the contract recruiter's own agency.
  • What's rarely admitted by either clients or agencies, until the worker is given the final contracts to sign in order to start work, is that he or she is initially being hired on a conditional basis--typically a two-week trial period. If the client isn't happy with the worker's performance, or the worker isn't getting along with other people in the client's company, the client can terminate the contract without paying a penalty (and any penalty would go to the agency, not the worker, in any case). That's critically important if you have to relocate to take a contract job. You should try to negotiate with the agency for temporary housing and, if needed, a rental car in the new city for the trial period, so that you don't uproot yourself until you know that the client wants you to stay on.
  • Neither the client nor the agency will provide any relocation benefits. That means that the worker is completely responsible for covering the costs of finding a new place to live, packing, moving, and any hotel or motel costs.
  • When an agency recruiter says that a contract is for six months, a year or two years, that really means nothing. All it means is that the client estimates that the project or job will last that long. The worker can be let go at any time, for any reason. Some clients give two weeks' notice, but others may give just a few days or none at all.
  • Eligibility for unemployment benefits varies by state, but in general, the client will argue that it has an independent contractor agreement with the agency, and the agency will argue that the worker is an independent contractor. While there's a possibility that you can get unemployment after a contract ends, it's safer to assume that you won't.
In my opinion, contract jobs only work well for both parties when the worker doesn't have to relocate to take them. A second possibility is when the worker is both single and would be working in an area with much better employment opportunities than where they're currently living. In that case, it may make sense for the worker to bear the moving costs, so long as they can find other work quickly once their contract project ends. However, many of the contract jobs that I've seen lately are in smaller cities and towns, and they're recruiting outside the local area because they can't find qualified candidates locally. Prior to 2008, those companies would have likely hired permanent employees and relocated them, but there was undoubtedly high attrition as many of those workers chose to return to where they came from. Today, they can bring on contract workers, pay them no benefits, aren't responsible for unemployment compensation and don't have to underwrite relocation costs.

That's why I say that the contract worker bears all the risks in the temp economy. The worst risks that clients face are the loss of a couple of weeks of work and, possibly, some penalties that they have to pay to agencies if they terminate contracts early and without cause. This is inevitably going to boomerang on clients, because of high contract-employee turnover, less-qualified candidates who make it through the screening process but who would have been weeded out with more extensive interviewing, and a complete lack of loyalty on the part of both clients and workers, all of which will lead to lower productivity. A better term might be the "mercenary economy" rather than the temp economy.

Sunday, November 02, 2014

The Sensor Revolution

I read an article last night on the IEEE Spectrum website about a hand-built, open source "scientific tricorder" built around a small Arduino clone, a 1.5" OLED display and a host of sensors. The "Arducorder Mini," designed and built by Peter Jansen, has most of the sensors that you'd find in a smartphone or tablet (magnetometer, gyroscope, accelerometer and microphone), but it also has many other sensors, some of which have only recently been developed for consumer use, including:
  • Ambient Temperature and Humidity
  • Ambient Pressure
  • Multi-gas sensor
  • Lightning sensor
  • X-ray and Gamma Ray Detector
  • Low-resolution thermal camera
  • Home-built linear polarimeter
  • UV
  • Spectrometer
It wasn't very long ago--prior to the launch of the original iPhone--that the primary markets for sensors were automobiles, industrial applications and scientific research. The sensors built for those applications were big, rugged, and in the case of industrial and scientific applications, expensive. One reason for the cost was that demand for the devices was low, but the buyers who needed them were willing to pay a high price to get them.

The iPhone and subsequent smartphones changed all that. Apple put a magnetometer, gyroscope, accelerometer, microphone and camera into the iPhone, partly to support its unique functionality (such as automatically detecting when the phone was rotated and switching between portrait and landscape mode), and partly to replicate the functionality of existing feature phones. However, once those sensors became widely available to consumers, developers figured out new ways to use them. In addition to the myriad of camera apps, there are several apps that use smartphone cameras to detect the user's pulse rate. Some apps turn smartphones into fitness or sleep trackers. Phone cameras are being used as light meters for still photography and video. The combination of motion sensors and touch screen is being used for game input, replacing joysticks, button pads and steering wheels.

As smartphone sales increased, the cost of making sensors, and the prices of the sensors, declined. That made other sensor applications, such as the many fitness trackers now on the market, practical. Now, as with the Arducorder Mini, we're starting to see sensors that were primarily used for industrial and scientific applications drop in price and size to the point where they're practical to use for consumer applications. And, as we saw with smartphone sensors, developers will come up with new, innovative applications for this new generation of sensors, from home security to low-cost field blood testers.

Sensors are expanding from a niche business to core technology for many products and markets, and as the markets for sensors grow, more and more types of sensors will become available at mass market prices. That virtuous cycle will benefit app developers, hardware vendors and sensor manufacturers, as well as consumers and businesses.

Tuesday, October 28, 2014

Regal Entertainment: "Exploring options" in a new world

Yesterday, Regal Entertainment, the largest movie theater chain in the U.S., announced that it's exploring "strategic alternatives," including selling the company. Regal has long been the top buyer of theater chains around the country--it now owns 574 theaters with 7,349 screens. However, when there's a run of unpopular movies, as happened this summer, Regal and other theater operators suffer; the company's revenues and profits from the summer quarter fell sharply year-over-year.

Regal's decision to explore a sale is being driven by several trends:
  • Industry revenues from theatrical exhibition have increased modestly over the last decade, but the number of tickets sold has been in a long decline. Theaters and movie studios have maintained their revenues by increasing ticket prices, in part by showing 3D movies at higher prices and by installing IMAX and similar panoramic projection systems. No one really knows at what point higher prices will lead to diminishing returns, but many people in the industry are afraid that ticket prices are close to that "tipping point" already.
  • Netflix's recent moves to fund and distribute its own movies are sending shock waves through the industry. Regal and the other three of the four largest U.S. theater chains announced that they would refuse to show any movies distributed by Netflix. However, that didn't stop companies like The Weinstein Company and IMAX, or actor Adam Sandler and his Happy Madison production company, from signing up to produce and distribute movies with Netflix. "Direct-to-home" movies have long been a staple of the home video business, but those titles are usually either not good enough for theatrical distribution or appeal to too small a niche audience. That's going to change with Netflix, and likely other companies, pumping money into producing theatrical-quality movies for the OTT streaming audience.
  • As I've written before, High Dynamic Range video, which is under development by several companies, will provide in-home viewers with a picture that's superior to anything other than IMAX. Currently, the only way to view HDR is with a modified HDTV or Ultra HDTV. In order to view HDR in a movie theater, it's likely that entirely new projectors or huge flat-panel displays will be required, which will require major capital investments less than a decade after theaters replaced their film projectors with digital models.
  • Many Chinese and Japanese investors, including Alibaba and Softbank, are exploring investments in the U.S. entertainment industry. AMC Theatres, the second-largest U.S. theater chain, was purchased by China's Dalian Wanda Group in 2012 for $2.6 billion.
Put those four trends together, and it's likely that this is the right time for the big theater chains to consider selling. Ticket prices are about as high as they can go without dragging down top-line revenues, non-theatrical competition hasn't yet made a dent in theatrical revenues, the major capital investments to support HDR are still a few years away, and there's a lot of foreign money looking for a home in the movie business. 

Monday, October 27, 2014

The price of (brain) complexity

Before launching into this post, let me state that I'm not a neurobiologist, psychologist or psychiatrist. I'm writing based on what I've read about neurobiology; I claim no expertise in the subject, other than the fact that I've got a brain.

It's generally believed that the human brain is the most complex object on Earth--certainly more complex than even the most powerful supercomputers. However, we've learned from experience that the more complex a system is, the more there is to go wrong. To keep the brain from failing in myriad ways, evolution has selected for a brain that, through a combination of redundancy and plasticity, is fault-tolerant. In other words, the brain can continue to operate normally (or more precisely, within a range of normality), even with some damage. That damage may come from genetics, environmental factors such as pollution, or physical injury such as concussions, hemorrhages or tumors.

When the brain sustains damage beyond its capacity to compensate, the result is mental illness. Given the brain's complexity, most human brains are already operating at or near the limits of normality just from compensating for the normal damage that accumulates throughout life. (In the brain, only the olfactory bulb and the subventricular zone contain nerve cells that regenerate in adults.) That's one reason why some neuroscientists theorize that conditions such as schizophrenia have both a genetic and an environmental basis. For example, having a genetic predisposition to schizophrenia is a necessary but insufficient condition for a patient to start showing symptoms; an environmental stressor, or normal age-related changes, appears to be required to actually trigger the condition.

As we put more of a cognitive load on ourselves, we may very well be pushing our brains beyond their ability to compensate:

  • Only twenty years ago, we were able to cope with the flow of incoming information, but today, we're continuously bombarded with text messages, emails, tweets, alerts and social media posts via our smartphones, tablets and PCs.
  • We have to decide what to pay attention to, what to ignore, and what to postpone. For the messages we pay attention to, we have to understand them and decide how to respond.
  • Despite the long-term recovery in the U.S. economy, employment opportunities remain diminished for recent college graduates and middle-aged workers. In addition, with the shift from permanent to contract jobs, even people who are employed need to be constantly searching for new jobs in preparation for when their contract expires or is terminated.
  • The sharing economy (for example, companies like Uber and Lyft) is changing the nature of work from fairly regular schedules with reliable incomes to demand-driven work with both unpredictable schedules and income.
Technology can help with some of the added stressors: For example, automatic filters can screen out spam and organize incoming messages by priority, and newer cars have multiple sensors that can help prevent accidents. However, technology is doing far more to add to the cognitive load than it's doing to relieve it. In addition, the change in the nature of work from semi-permanent to temporary and demand-driven is unlikely to be reversed. Therefore, the amount of environmental stress will continue to rise.

My hypothesis is that we're likely to see a rise in mental illness as a result of these stressors--most likely in the forms of depression, anxiety disorders, domestic violence and violent crime outside the home, an inability to hold onto jobs beyond what the inherently transitional nature of temporary work would explain, and suicide. Interruption-free weekends and smartphone-free vacations, which are still seen as unusual, are likely to become commonplace self-therapies for dealing with the consequences of ever-increasing cognitive loads. In short, we need to become aware of the increasing cognitive stress that we're subject to, and come up with strategies for coping with it.

Saturday, October 25, 2014

Broadcasters to FCC: No, we won't tell you the details of our retransmission deals

The Wrap reports that the Federal Communications Commission has put its review of the Comcast-Time Warner Cable merger on hold for the second time. This time, the delay is due to the refusal by ABC, CBS, NBC, Fox, Viacom and Discovery to supply the agency with details of their retransmission agreements with cable, satellite and IPTV operators. The reason that the FCC wants the retransmission information in the first place is that opponents of the merger have charged that the combined company would have too much power over program suppliers (including the broadcast and cable networks). The networks have agreed to provide the U.S. Justice Department with the data because it will be kept confidential, but FCC rules require that the data be made available to both supporters and opponents of the Comcast-TWC deal, so that they can use it in their briefs. Only the general public is prohibited from seeing the data.

The six networks have very good reasons for wanting to keep their contracts secret: once buyers of their content learn how much other companies are paying, they'll want to renegotiate their contracts down to the lowest price. On the other hand, four of the six companies (ABC, CBS, Fox and NBC) are granted licenses by the FCC to broadcast over-the-air. Unlike mobile carriers such as AT&T, Sprint, T-Mobile and Verizon, television broadcasters get their spectrum for free. So, they are in essence underwritten by U.S. taxpayers for the multi-billion dollar value of their airwaves. (Update, November 5, 2014: The FCC has released a "price list" in conjunction with its plan to get broadcasters to relinquish their spectrum so that it can be used for other applications. The FCC values the nationwide recovery of as much as 126 MHz of spectrum at a maximum of $38 billion.) In addition, whenever a retransmission dispute between a broadcaster and a cable, satellite or IPTV operator results in the broadcaster pulling its signals from the video operator, the public is stuck in the middle. Therefore, I believe that there's a strong argument for public disclosure of broadcast retransmission deals, above and beyond the Comcast-Time Warner Cable case.

My suggestion is that, if broadcasters want to prohibit anyone outside a handful of government employees from seeing their retransmission deals, they should be forced to pay full market value for their spectrum, just as mobile operators do. If they don't want to do that, they always have the option of relinquishing their frequencies and feeding their programs directly to service providers and consumers over the Internet. CBS threatened to do exactly that if the Supreme Court ruled against it in the Aereo case, so it's clearly an option that the broadcast networks have considered. If they want to operate in secret using the public's airwaves, they should pay for the privilege.

Saturday, October 18, 2014

The Transition State

Countries go through transition periods where political, economic and technological developments combine to cause sweeping societal changes. One such transition period in U.S. history was from the very end of the 19th Century to the start of the Great Depression. Here's a list of some of the key events in the transition:

  • A massive influx of refugees, primarily from Eastern Europe and Russia, which initially was an enormous social burden but laid the groundwork for the U.S. to become the world's leading manufacturer and the center of science and technology.
  • A vast movement of population from farms to cities, which again initially put huge burdens on the cities but ultimately provided the talent for them to become economic powerhouses.
  • The invention and adoption of cars and trucks, which replaced horse-drawn vehicles and expanded the distances that workers could commute and that goods and services could be delivered.
  • The start of the era of mass media, where radio could reach an entire city instantly with the same entertainment, news and sports, and where the transition from news once or twice a day from local newspapers to continuously-updated news began.
  • Mass production techniques, which were pioneered earlier in the 19th Century, became widely adopted and enabled manufacturers to produce goods more quickly and cheaply, with better quality.
  • The labor movement started as a response to long working hours, low pay and poor working conditions.
  • Fossil fuels became the primary energy source for the country, replacing wood, whale oil and other renewable fuels. The availability of cheap coal and petroleum-based fuels supercharged manufacturing and transportation, and helped to make electricity both practical and economical.
  • Electrical distribution, which began in a handful of big cities in the late 19th Century, swept across the country, making gas lighting obsolete, replacing steam- or water-powered engines with electric motors for manufacturing, and facilitating hundreds of new industries.
  • The first heavier-than-air aircraft flew, and while it would take decades for airplanes to become an essential transportation and shipment technology, they made long-distance travel much more practical for the people who could afford to use them.
  • Telephones became commonplace, replacing telegraphs and making instantaneous person-to-person communications possible for most people living in cities.
The bottom line is that, if you took someone living in 1880 and magically transported them to 1930, they would recognize very little. An enormous amount of good came out of this transition, but so did the Great Depression and two World Wars. I believe that we're going through a similar transition, which probably started around 1990 and is likely to change society just as thoroughly:
  • Climate change driven by global warming is likely to shift areas of food production further north and south in the respective hemispheres, and will raise sea levels, requiring massive capital investments to protect coastal cities and the abandonment of some coastal areas and islands.
  • We've begun replacing fossil fuels with renewable energy sources, both as a response to global warming and in order to protect ourselves from supply disruptions.
  • Internal combustion engines in cars and trucks are being supplemented or replaced by electric motors powered by batteries and fuel cells.
  • The Internet has erased geographic boundaries and made commerce, information and entertainment available everywhere.
  • Mobile devices, especially smartphones, have made the Internet and all of its capabilities available to people all over the world, including third-world countries.
  • The combination of the Internet with vast and comparatively cheap IT systems is changing the nature of work and the types of work that are done by people vs. computers. Increasingly, work previously done by untrained, high school-educated people is being done by computers or by workers overseas, and the boundary between work suited to automation and work that still requires people keeps moving up the skill ladder.
  • After a 50-year period when people moved from cities to suburbs, the population flow has reversed, and cities are once again desirable places to live, not just work.
  • The combination of greater urban population density, new transportation options like car- and ride-sharing, and the advent of autonomous cars, signals the beginning of a long decline in the automobile industry, as cars are increasingly seen as utilities to be used as needed rather than essential property.
  • We're only a few years from being a majority minority country, which will have dramatic political and economic consequences.
  • The application of computers to medicine is driving advances in genomics, proteomics and pharmaceutical development that are resulting in new ways to detect, diagnose and treat diseases, as well as ways to extend both the length and quality of life. That too has dramatic societal, economic and political implications.
  • A similar application of computers to materials science has led to the development of new engineered materials that are better suited to their purpose and cheaper than previous materials, or that can do things that no material could previously do.
  • At the same time, income disparity between the rich and everyone else continues to grow, and that disparity is reflected in the fact that children from high-income families have much better educational opportunities throughout their school years than children from middle- and low-income families.
If I were to summarize what's driving our current transition, I'd say that by far the two most important developments were Moore's Law and the Internet. Both of them predate the transition, but they've resulted in the ability to put intelligence into just about everything at an incredibly low cost, and the ability to connect everything, no matter where it is.

Are we going to have the same kind of global disasters that occurred during and after the 19th/20th Century transition? Not necessarily. We had a transition of a similar magnitude after World War II, and while we had the Cold War and many proxy wars, there was nothing approaching either World War and, in general, financial conditions improved. However, we still don't know all the ramifications of global warming and may not know them for decades, and the same technologies that improve communications and health care can be used to impose a surveillance state and create organisms deadlier than anything that has naturally evolved on Earth. So, we'll continue to be faced with a nasty assortment of unintended consequences. The outcome of this transition will depend on how many of those consequences we avoid, and how we deal with the ones we can't avoid.

Thursday, October 16, 2014

Over-The-Top video: HBO and CBS crack open the door

Yesterday, HBO announced that it plans to launch an over-the-top (OTT) Internet video service next year that will be available to anyone with a high-speed Internet connection, even if they don't subscribe to cable, satellite or IPTV services. HBO didn't announce any specifics about which shows will be available, if there will be live streaming or Video-on-Demand (VOD) only, and how much the service will cost. Then, today, CBS announced CBS All Access, which will provide nationwide VOD access to current CBS shows and classic shows from the libraries of CBS, Paramount and Desilu, for $5.99/month. Subscribers living in 14 major cities will also get live streaming from their local station 24 hours a day.

In the course of 24 hours, the competitive landscape for OTT video in the U.S. changed from Netflix and a bunch of smaller content companies to a field including the #1 pay television service and #1 broadcast network. Both HBO and CBS will be "canaries in the coal mine" for the other broadcast and cable networks. The right price points and assortments of live and VOD content are experiments in progress, and we're likely to see lots of different combinations as other companies enter the market. These announcements are also likely to impact existing OTT services--especially Hulu. Fox, Disney and Comcast own Hulu, but Comcast is not allowed to exercise any control due to restrictions that it agreed to in order to get regulatory permission to acquire NBC Universal. Fox in particular has put severe restrictions on when its shows are made available to Hulu and for how long. With CBS effectively opening the floodgates, Fox may either have to loosen the restrictions on Hulu, or as I suspect, launch its own Fox-branded OTT service that's much closer to CBS All Access. Disney, which already has an extensive web and mobile app presence, is likely to do the same, but with several services:

  • Sports, which will offer ESPN's channels
  • Family, which will offer Disney's family and children's channels, and possibly ABC Family
  • ABC, which will offer ABC's channels
Other cable networks and groups that are candidates to offer their own OTT services include:
  • Starz, which has already announced an OTT service for international markets
  • Time Warner, which in addition to HBO could offer services based on CNN and its Turner cable networks (it already offers a large collection of classic movies and TV series through its Warner Archive service)
  • Fox, which in addition to its television network, can offer services based on FX, Fox Sports, Fox News and the National Geographic channels
  • Discovery, which in addition to its namesake networks could stream Animal Planet, OWN, Science Channel, TLC, Velocity and others
  • A+E Networks, jointly owned by Disney and Hearst, which could stream such channels as A&E, History and Lifetime
  • Viacom, which could stream movies and TV shows from Paramount Pictures and cable channels including BET, Comedy Central, MTV, Nickelodeon and TV Land
  • AMC Networks, which owns AMC, IFC, Sundance TV and WE TV
Comcast may be forced to make NBC Universal's broadcast and cable networks available for OTT as a condition of its merger with Time Warner Cable, but I think that it's unlikely that the company will offer its channels for streaming to non-cable subscribers any time soon.

That brings up a possible "unintended consequence" of HBO's and CBS's announcements: They'll make it much more likely that the FCC will allow OTT services to be classified as Multichannel Video Programming Distributors (MVPDs), the same as cable, satellite and IPTV operators. After all, if content providers can offer exactly the same programming over the Internet that they offer through MVPDs, why shouldn't an Internet-based program distributor be allowed to do the same thing?

Wednesday, October 15, 2014

Getting Ebola under control

The Ebola virus is in the U.S.--specifically, two nurses who treated a patient who returned to the U.S. with an Ebola infection are themselves now being treated for the virus. The U.S. is doing what it usually does when confronted with a health or security emergency--panicking. The panic, which is far more communicable than the virus, is being spread by news organizations that are desperate for ratings, and it's not being helped by the Centers for Disease Control (CDC), which seems to be a couple of steps behind the situation on the ground.

From what I've seen and read, there are a few common-sense steps that we can take to limit exposure and get ahead of the virus:
  • We need to quickly set up more high-containment medical wards. At the time that the first Ebola patient in the U.S. was identified, there were only five high-containment wards that were fully equipped for treatment of Ebola and other viruses. In addition to improving containment in existing public hospitals, we should also consider basing some of the new facilities in isolated domestic military bases with their own airfields in order to limit the risks of infection spreading in cities.
  • It's clear that Texas Health Presbyterian Hospital Dallas was unprepared to treat Ebola patients, and the Washington Post reports that the hospital kept adding more protective equipment as the first patient deteriorated. The two nurses who were infected may have gotten their infections early in the process, before sufficient isolation was established. A standard protocol has to be distributed to every hospital and health care facility in the country that might have to treat a patient with Ebola, even if the procedure is only to isolate the patient and prepare them for transport to a more suitable facility. Standard protocols are also necessary for the destruction of the bodies of people who die from Ebola, because their bodies remain highly infectious after death.
  • Voluntary quarantine doesn't work, especially when even trained medical professionals ignore the rules. NBC's medical correspondent Dr. Nancy Snyderman broke quarantine, and Amber Vinson, the second nurse infected in Dallas, flew to Cleveland and then back to Dallas, where she checked herself into the hospital yesterday, even though she was under observation. Quarantine has to be mandatory, and if rules for compensation don't exist, the U.S. Government or the states need a way to replace the income of people who can't work while in quarantine. That will relieve pressure on many people who would otherwise be tempted to hide their exposure or break their quarantine.
  • The U.S. Government and other countries must continue to send trained health and safety professionals to West Africa, both for humanitarian reasons and to stop the spread of Ebola to other countries.
  • We need to establish monitoring at every Customs/Immigration checkpoint in the U.S., focusing on all flights originating from West Africa and all passengers who came from or spent time in West Africa.

Saturday, October 11, 2014

Liability and its impact on the adoption of technology

Thursday night, Tesla unveiled its dual-motor variation of the Model S sedan, and it also revealed several technologies that move part way down the road to self-driving cars. One system uses cameras to read street signs and set the car's speed to the posted speed limit. Another system keeps track of lines on the road and can steer the car between the lines without human intervention. A third system brings the car to a stop if it senses an obstacle in front of the car. These capabilities were implemented in other cars before Tesla, but in every case, they're intended to be used with a driver actually driving the car. Even though Elon Musk and other Tesla representatives took their hands off the wheel and their feet off the pedals to show how the systems work, they can't tell, advise or even suggest that Tesla owners do the same, for now.

Elon Musk was quoted as saying that he believes that a commercially-priced self-driving car is possible within five years, and I believe him. However, it may be 20 years before we start seeing fully self-driving cars in commercial use. The reason isn't technology or cost. All the pieces are there, and the prices of even the most expensive items like LIDAR systems are dropping dramatically. The real problem is liability: If a self-driving car is involved in an accident, who's responsible? With today's cars, one or more of the drivers involved in the accident is usually found to be responsible and liable for damages. There are cases where an automobile or parts manufacturer is found liable, as in some of the accidents caused by faulty Firestone tires on Ford Explorers, and the stuck accelerator problem on some Toyotas. However, manufacturers are only rarely found liable in car accidents.

Self-driving cars change the liability argument dramatically. It's extremely likely that self-driving cars will be safer than cars driven by humans: Cars don't get distracted, they don't drive drunk, they don't get into arguments with passengers, and they don't experience road rage. Their sensors and computers should respond more quickly to changing conditions than humans can. They won't drive over the posted speed limit, and they won't follow other cars too closely. A working self-driving system should be much safer than a human driver. However, technology can fail. An excellent example from 2005 is a test performed by Stern TV on three S-Class Mercedes with an early version of self-braking: the three cars plowed into each other because the system wouldn't work in a steel-framed garage. Presumably, that problem has long since been fixed, but electronic systems in cars do fail, especially when they're still new.

If a self-driving car is in an accident, who's to blame? For simplicity, let's assume that any other cars involved in the accident weren't at fault (although they might at least share the blame in a real-world accident.) If the person in the driver's seat wasn't actually driving, something in the car might have failed. But what if something did fail, the car alerted the passengers, and they ignored the warning? That might save the car manufacturer from liability, but the manufacturer would almost certainly be sued because it has the "deepest pockets." Or, what if the person in the driver's seat turned off the self-driving system before the accident? Then, the driver would most likely be at fault.

We won't see fully self-driving cars until their technology is sufficiently mature that the reliability of critical systems is around the "five-nines" level (99.999%), or roughly one failure in every 100,000 hours of operation. In addition, car manufacturers are likely to push legislation that limits their liability in accidents involving their self-driving cars--the trade-off for the greater overall safety of self-driving cars is a limitation on liability if an accident does happen. Finally, we're likely to see legislation that requires one of the passengers in self-driving cars to be a licensed driver who can take control of the car at any time. These conditions will probably dampen demand for self-driving cars, because of the cost of highly reliable systems, and because companies like Uber and Lyft may still have to employ human drivers.
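
To put the "five-nines" figure in perspective, here's a quick back-of-the-envelope sketch (my own illustration, in Python, using the simplifying assumption that 99.999% is the probability of getting through one hour of operation without a critical failure):

    # Relating "five nines" reliability to failure frequency.
    # Assumption (mine): 99.999% = probability of a failure-free hour of operation.
    reliability_per_hour = 0.99999
    failure_prob_per_hour = 1 - reliability_per_hour          # 1e-5

    mean_hours_between_failures = 1 / failure_prob_per_hour
    print(f"~1 failure every {mean_hours_between_failures:,.0f} hours")   # ~100,000

    # For scale: a car driven 300 hours a year would expect one critical
    # failure roughly every 333 years at this reliability level.
    print(f"~{mean_hours_between_failures / 300:,.0f} years at 300 driving hours/year")

Under that assumption, the math works out to the one-failure-per-100,000-hours figure above; the exact numbers depend on how the reliability target is defined, but the order of magnitude is the point.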

For these reasons, fully self-driving cars may be technically feasible, and even practical from a cost standpoint, well before they're commercially available.

Sunday, October 05, 2014

Film is dead--and a few big-budget movies won't save it

Last July, after lobbying by directors including J.J. Abrams, Christopher Nolan and Quentin Tarantino, the major U.S. movie studios signed a deal with Kodak to continue to purchase motion picture film for several years, whether or not they actually use it. Kodak, which emerged from bankruptcy in September 2013, had planned to shut down its (and the industry's) last remaining motion picture film manufacturing facility. Between the virtually total replacement of film cameras with digital and the almost complete transition of U.S. theatrical projectors from film to digital, Kodak's shipments of motion picture film fell 96% between 2006 and 2014.

Why does this relative handful of directors continue to insist on shooting film? There are two primary reasons:
  1. Film can capture a bigger range of colors than digital cameras--the equivalent of 16 bits of resolution. By comparison, Arri's Alexa, the most popular camera for high-end cinematography, captures color information with 15 bits of resolution, but may lose some of that resolution when it converts the video to a color space for editing and viewing.
  2. Under ideal conditions, 35mm film has an image resolution of around 5300 x 4000 pixels, while the Digital Cinema standard for 4K acquisition and projection is 4096 x 2160 pixels. However, many movie theaters are still using 2K projectors, which limits the resolution to 2048 x 1080. Compared to either digital standard, film (hypothetically, at least) provides far more image detail.
So, those directors who still want to use film have a solid rationale for doing so, even if most of the increased resolution and color space is lost once the images are run through digital projectors or watched on HD or even Ultra HD TVs. However, the vast majority of directors and cinematographers have switched to digital for several reasons:
  1. Digital cameras have dramatically more dynamic range (the ability to capture bright and dark subjects at the same time) than film does. Typically, the dynamic range of movie film is 1,000:1 (approximately 10 bits, or 10 f-stops; the sketch after this list shows the arithmetic). Even video cameras costing a few thousand dollars can capture 10 f-stops or more, and the Arri Alexa has a range of more than 14 f-stops, or better than 16,384:1.
  2. Digital media is much less expensive than film over time, because it can be reused. Movie productions typically offload recorded flash media to hard and flash drives during the day, then erase and reuse the flash media. Digital's much lower costs also enable directors to get coverage from a variety of angles and framings.
  3. 35mm film magazines usually allow a maximum of 1,000 feet of film to be loaded, due to size and weight. That means magazines have to be changed after every 11 minutes of filming at 24 frames per second (see the sketch after this list). Depending on the image resolution, dynamic range and color depth, a single piece of digital flash media can hold hours of video. That allows for long continuous shots and far fewer interruptions to change media.
  4. Digital cameras have an enormous range of sizes and weights, many of which are smaller and lighter than any practical motion picture film camera can be. This gives filmmakers enormous flexibility for shooting in tight quarters and in sports and action situations. It also makes lightweight drones feasible for shooting, where previously only helicopters and airplanes were viable platforms for aerial photography.
  5. Specialized digital cameras can provide much higher frame rates than are either economically or technically feasible for film cameras. For example, Vision Research's Phantom Flex4K digital camera can shoot Digital Cinema 4K at 1,000 frames per second, or 2K at 1,950 frames per second. By comparison, film cameras usually shoot 24 frames per second.
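
For readers who want to check the arithmetic behind the f-stop and magazine-capacity claims above, here's a minimal sketch (my own illustration, in Python; the 16-frames-per-foot figure assumes standard 4-perf 35mm):

    import math

    # Each f-stop is a doubling of light, so an N-stop range is a 2**N : 1 contrast ratio.
    def stops_to_ratio(stops):
        return 2 ** stops

    def ratio_to_stops(ratio):
        return math.log2(ratio)

    print(stops_to_ratio(10))      # 1024   -> the ~1,000:1 figure quoted for film
    print(stops_to_ratio(14))      # 16384  -> the >16,384:1 figure quoted for the Alexa

    # Magazine runtime: 4-perf 35mm runs 16 frames per foot of film.
    feet_per_magazine = 1000
    frames_per_foot = 16
    frames_per_second = 24
    minutes = feet_per_magazine * frames_per_foot / frames_per_second / 60
    print(round(minutes, 1))       # ~11.1 minutes per 1,000-foot magazine

Nothing fancy--just the doubling rule for stops, and footage times frames-per-foot divided by frame rate.
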
There's a technology coming down the pike that's likely to make film obsolete, even for the directors who still insist on using it. I wrote about High Dynamic Range (HDR) video two weeks ago, and I won't repeat all the arguments I made in that post. Here's a summary of HDR's advantages:
  • Much greater dynamic range; Dolby says that its Dolby Vision HDR system will have as much as a 200,000:1 dynamic range. Many current digital cinema cameras can be adapted to shoot HDR video when there are commercially-available ways to view it.
  • Color spaces that are as big or bigger than motion picture film.
However, there are several problems with HDR, beyond what I stated in my post:
  • There are no current theater projectors, film or digital, that can project HDR images. Film is only capable of 1,000:1 dynamic range, and film can't project a true black because even the blackest part of an image can't block all the light from the powerful xenon bulb in film projectors. Digital projectors use a similar xenon light and have the same problem. It's possible that laser-based digital projectors could project both the dynamic range and color space of HDR, but to my knowledge it hasn't been demonstrated yet.
  • With the exception of the handful of expensive professional Organic LED (OLED) displays in use, today's HDTVs can't provide either the dynamic range or the color space of HDR. However, LCD HD and Ultra HD TVs can be engineered to have a separate LED backlight for every pixel, and most LCD displays are capable of displaying a bigger color space than is currently used. Dolby, Technicolor, Philips and the BBC are all either in talks with or have already licensed technology to consumer electronics manufacturers to implement their HDR formats in future HD and Ultra HD TVs.
I believe that when the aforementioned film-holdout directors see HDR, they're going to want to use it--and that's when film dies, once and for all, for movie production. (Film is already dead in movie theaters, despite some last-gasp attempts to keep it viable.) The problem, of course, is that there's no easy way to implement HDR in movie theaters. However, as I wrote in my previous post, if past experience is a guide, it may take as long as ten years for HDR to become standardized and available to consumers at an affordable price. That will give theater operators and digital projector companies time to figure out a way to make HDR work in theaters.

Thursday, October 02, 2014

Netflix jumps into the movie production business

Many people in the movie and television businesses have believed that given Netflix's success with original television series, it was only a matter of time before the company would begin producing movies. Those beliefs have been confirmed in a big way: Last week, Netflix announced that it has partnered with The Weinstein Company and IMAX to produce a sequel to "Crouching Tiger, Hidden Dragon" called "Crouching Tiger, Hidden Dragon: The Green Legend," and today, Netflix announced a four-picture production deal with Adam Sandler and his Happy Madison production company.

The terms of the deals aren't public knowledge, but some of the plans have been revealed: In the "Green Legend" deal, IMAX was brought in to distribute the film to IMAX theaters. IMAX develops the cameras, projectors, screens and processing software for its various formats, but its theaters are actually owned and operated by other parties, and a number of those parties in the U.S. are very unhappy. The four largest theater circuits in the U.S., Regal, AMC, Carmike and Cinemark, have said that they won't show the sequel. Cineplex in Canada and Cineworld in Europe have also refused to show it. That doesn't completely eliminate IMAX as a viable outlet for the movie, because there are many IMAX theaters operated by museums and public institutions, and smaller theater chains with IMAX theaters may decide to show it.

It's not clear whether Netflix changed its strategy overnight or whether it had already expected the theater chains to react the way they did, but in today's announcement, Netflix said that none of the four movies to be produced by Adam Sandler will be shown in theaters. Netflix also made clear that none of the movies that Adam Sandler or Happy Madison are already committed to make for other producers or distributors are included in the four-picture deal.

No one should be surprised that big theater chains won't show Netflix's films--they've pushed back against major studio day-and-date Video-on-Demand (VOD) tests (in which the movie is released in theaters and on VOD on the same day), starting with Universal's "Tower Heist" in 2011. By and large, the big studios have backed off of day-and-date VOD, but they're aggressively testing shorter windows between some movies' theatrical release and their availability on VOD. Smaller independent studios such as Magnolia Pictures have adopted day-and-date VOD releases. 2929, the parent company of Magnolia, also owns Landmark Theaters, which has 50 theaters in 21 markets, so Magnolia is guaranteed theatrical distribution in many major cities, no matter what other theater chains decide.

It's likely that Netflix is structuring its movie production deals with the expectation of no domestic theatrical revenues. Whatever theatrical distribution Netflix gets will be promotional, not a significant revenue generator. Over time, if Netflix's movies prove very popular, the big theater chains may be forced to start bidding for the right to show them in their markets. However, for now, the safest move for Netflix is to budget movie production in line with VOD revenues.

Earlier today, The Verge reported on Adam Sandler's deal with Netflix, and wrote:
Under the deal, Sandler removes the burden of risk. Netflix will solely fund the films, taking full responsibility for providing investment — and securing additional investment — off Sandler's Happy Madison Productions. Though Netflix will be the sole financier, the films will still have their $40 million to $80 million budgets. Sandler's payments are a large chunk of his films' budgets. He reportedly receives $15 million and over per film as an actor, and can make an additional $5 million as the producer, which explains how Grown Ups 2, a comedy with a handful of special effects, reportedly cost $80 million. On top of all that cash, it's likely Sandler and his production company will make an additional, undisclosed lump sum of money simply by signing the deal. Netflix decline to provide comment to The New York Times on the specifics of the agreement.
It's inconceivable to me that Netflix would agree to pay production costs anywhere near $40 to $80 million, or $15 million per picture for Sandler's acting, especially since Sandler's last several movies have bombed in the U.S. Netflix probably has a "back-end" deal with Sandler that pays him additional compensation if the movies reach or exceed performance targets, such as the number or percentage of subscribers who watch them. As the Verge article points out, Sandler laces his films with product placements, which can defray some production costs or put money into his pocket. That might be enough to enable Sandler to, say, produce a film for $25 million, get $10 million in product placement funds, deliver the movie to Netflix for $20 million and put $5 million before tax into his pocket.

Netflix may be the first VOD company to underwrite major motion pictures for its own distribution, but it almost certainly won't be the last. I expect Amazon to follow suit, and possibly Redbox. (Update, October 4, 2014: TechCrunch reported today that Redbox will shut down its streaming service on Tuesday, October 7. That makes it much less likely that the company will get into original production.) SoftBank, the owner of Sprint in the U.S., SoftBank Mobile in Japan and the single largest shareholder of China's Alibaba, just invested $250 million for 10% of Legendary Entertainment, with options to invest a total of $750 million more between now and the end of 2018. Legendary, whose movies are co-financed, marketed and distributed by Universal, could produce movies for SoftBank and Alibaba should either company decide to distribute its own original titles.

Tuesday, September 30, 2014

Rabbit season? Duck season? No, another pilot season!

In Los Angeles, actors, writers, producers and networks are getting ready for the madness that's known as pilot season. If you're unfamiliar with the term, pilot season is when pilots for new television shows are approved for production by the networks. There's a mad dash that starts in January to complete scripts, cast actors and shoot the pilots. With that in mind, networks are already getting a head start by "greenlighting" (approving) some new shows.

I'd like to propose some new shows for 2015-16. I'll start with what's called a logline (a one-sentence description of the show), followed by some of the pitch to the network:
  • "You're The Judge": We take people off the street, put them in judicial robes and let them decide cases between real litigants, with binding decisions.

    This show doesn't deal with the kind of namby-pamby disputes that you might see on "Judge Judy." Think about "Roe v. Wade" or "Brown v. Board of Education" decided by a plumber with no legal training whatsoever. Or all the "Apple v. Samsung" patent lawsuits decided in 30 minutes by a hairdresser.

    In the same vein...
  • "Kids' Court": Disputes between first-graders are decided by a judge who's also a first grader, with binding decisions AND an appeals court.

    Remember those playground arguments you had when you were a kid? Were you bullied by someone? "Kids' Court" will let first-graders argue their cases in front of a first-grade judge. It's kids arguing and trying to understand the law at the level of most daytime television viewers! And, in a unique twist, the losers will be able to appeal their case to an appeals court with three third-grade judges. Think of it--a snowball fight case could fill two complete episodes!
  • "Hatewatch": The cast from "Mystery Science Theater 3000" makes on-screen snarky remarks during the second showing of an existing network series.

    Why should Tweeters have all the fun? "Hatewatch" brings the lovable professional hate watchers from "MST3K" together with YOUR bad shows! Instead of just burning off the remaining episodes, let "Hatewatch" set them on fire! And, it's incredibly inexpensive--you can use the showing rights you've already paid for, and the "MST3K" comments don't need much in the way of production value. Come to think of it, they don't need ANY production value.
  • "Life With the Joneses": A family situation comedy written by everyone who ever wrote for "The X Files," "Twin Peaks" and "Lost".

    What would a conventional family sitcom look like if it was written by the writers of some of the strangest, most elliptical shows ever seen on U.S. television? Pay us to find out!
  • "Detroitia": A fun-loving romp through the twee areas of America's most depressed major city.

    IFC's "Portlandia" has been so successful in communicating Portland's culture that even parts of the city that weren't twee before it went on the air are now twee. "Detroitia" will bring the same lighthearted point of view to Detroit. A male-female couple will visit Detroit's many overpriced, gentrified neighborhoods while trying to avoid arsons, abandoned buildings and physical assault. (What? Detroit doesn't have any overpriced, gentrified neighborhoods? Never mind.)
  • "Reality Island": The heads of reality programming for all the major television and cable networks fight to the death on a desert island.

    Each season, all of the top reality programmers at all the major television and cable networks are sent to an isolated desert island, with only enough food and water to last two weeks (but all the knives, guns and poisons they want.) When the food and water run out, they'll have only their wits and the dead bodies of their companions to live on. Waters around the island will be mined, and anti-aircraft guns will shoot down any helicopters that try to rescue programmers. The winner returns to his or her job; the networks employing the losers have job openings.
If you like any of the ideas, please feel free to use them. Somewhere, Paddy Chayefsky will be laughing.

Sunday, September 28, 2014

Why I'm not racing to buy an Ultra HD TV...yet

Over the last 18 months, there's been an explosion of products for creating and editing 4K video, from cameras and switchers to editing and compositing software. Costs have declined dramatically: A few years ago, there was only a handful of cameras that could shoot 4K, and they were priced in the mid- to high-five figures. Today there are 4K cinematography-quality cameras priced as low as $2,000, and GoPro is said to be planning to release its 4K HERO 4 sport camera the week of October 5th, probably at a price below $400. (Update, September 29, 2014: GoPro announced three new sports cameras today, with prices. The new HERO 4 Black is the 4K/30fps model, and it will sell for $500, not the $400 I estimated. However, it will ship on October 5th.)

4K consumer televisions are becoming more common, and again, much less expensive. In late 2012, there were only two 4K televisions for sale in the U.S. market, and they were priced at $20,000 and $25,000 respectively. Today, the average selling price for an Ultra HD TV (the new consumer name for 4K video) in North America is just under $2,000, and 50" Seiki and TCL models can be had from Amazon for under $450. Vizio has just started shipping its P-series Ultra HD TVs, which are claimed to be comparable to more expensive models from the top manufacturers; its 50" model sells for $1,000.

The better models from the first-tier TV manufacturers (including Vizio) should have excellent picture quality, refresh rates of 120Hz or more, and a good upscaler that resizes conventional HD video to Ultra HD without distortion. However, independent researchers have found that, at the normal distances viewers sit from their televisions, there's almost no meaningful difference in the perceived quality of an HD picture and an Ultra HD picture. Here's how it works:

There was a huge jump in quality between analog TVs and even 720p HDTV. If you had a 50" set, you could see the full difference at 10 feet; with 1080p, you saw the full benefit over 720p at about six feet. However, with Ultra HD, you won't even begin to see any improvement over HD until you're about five feet from the TV, and you won't get the full benefit until you're only about 3 1/2 feet away (a little more than a meter). At that distance, the television picture fills most of your field of vision. So, I'm not planning to buy any of this generation of Ultra HD TVs. The reason is that there's a new technology not too far down the road that will provide a much more dramatic improvement over conventional HD picture quality than Ultra HD provides by itself.
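
Here's a minimal sketch of the geometry behind those numbers (my own illustration, in Python, using the common rule of thumb that normal vision resolves roughly one arcminute; the exact distances shift with that assumption, but they land close to the figures above):

    import math

    # At what distance can a viewer just resolve individual pixels on a 16:9 set?
    # Beyond this distance, extra pixels stop adding visible detail.
    def max_useful_distance_ft(diagonal_in, horizontal_pixels, acuity_arcmin=1.0):
        width_in = diagonal_in * 16 / math.hypot(16, 9)    # screen width for 16:9
        pixel_pitch_in = width_in / horizontal_pixels
        acuity_rad = math.radians(acuity_arcmin / 60)
        distance_in = pixel_pitch_in / acuity_rad          # small-angle approximation
        return distance_in / 12

    print(round(max_useful_distance_ft(50, 1920), 1))      # ~6.5 ft for a 50" 1080p set
    print(round(max_useful_distance_ft(50, 3840), 1))      # ~3.3 ft for a 50" Ultra HD set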

This new technology is called High Dynamic Range, or HDR. HDR expands the contrast range of television pictures. Imagine that you're outside on a bright, sunlit day. You can see well-illuminated objects quite clearly, and you can also see what's in shadow. That's because your eye has a contrast range of about 1,000,000:1 (20 f-stops). LCD televisions have a much lower contrast ratio--Rtings.com tested a variety of 2014 HDTVs and found that the highest contrast ratio, 5,268:1, was measured on a Toshiba L3400U. Manufacturers like to claim much higher ratios--for example, in its current E-series, Vizio claims a contrast ratio of 500,000:1, but Rtings.com measured it at 4,581:1. That's still very good for a current-generation HDTV, but less than 1% of the advertised contrast ratio.

Even video cameras don't have the same contrast range as the human eye. The Arri Alexa XT, one of the most popular cameras for episodic television and high-end movie production, has a 16,384:1 contrast range. However, HDR technology can extend the contrast range significantly, to as much as 262,144:1 (18 f-stops). That's still not as wide as what the eye can see, but it's dramatically better than anything ever seen on consumer television sets. Even plasma TVs, which have a much wider contrast range than LCDs (up to 13,000:1), are nowhere near what HDR can represent.

One of the several companies developing HDR technology for consumer television, Dolby, claims that its Dolby Vision technology will provide a dynamic range of as much as 200,000:1. Other companies developing HDR technology for video include Technicolor, Philips and the BBC. In addition to more dynamic range, Dolby and its competitors are implementing bigger color spaces (simply put, displays built using their systems will be able to display more colors than current televisions.)
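
Since each f-stop is just a doubling of contrast, it's easy to put all of these quoted ratios on the same scale. Here's a small sketch (my own comparison, in Python, using only the numbers cited in this post):

    import math

    quoted_ratios = {
        "2014 LCD HDTV (best measured)": 5268,
        "Plasma (upper end)": 13000,
        "Arri Alexa XT": 16384,
        "Dolby Vision (claimed)": 200000,
        "HDR ceiling cited above": 262144,
        "Human eye (approximate)": 1000000,
    }
    for name, ratio in quoted_ratios.items():
        print(f"{name:32s} {ratio:>9,}:1  = {math.log2(ratio):4.1f} stops")

The takeaway: even the claimed Dolby Vision range is about 17.6 stops, well past anything today's panels can display but still shy of the roughly 20 stops the eye handles.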

One of the big reasons why HDR isn't in the consumer market yet is that existing formats for transmitting video don't support the increased dynamic range and bigger color spaces from the HDR system developers. These formats, if they're used for over-the-air broadcasting, usually have to be approved and standardized by each country's governmental broadcasting authority (the FCC in the U.S., Ofcom in the U.K., etc.) These standardization processes take time, and they take more time when there are multiple vendors competing to set the standard. In the U.S., it took almost five years for the competing companies to agree to work together on one digital television standard, and another five years for manufacturers to begin shipping digital televisions that were compatible with the standard.

Implementation of HDR is likely to be much less painful and take significantly less time than the move from analog standard definition television to digital HD. However, it will take several years, and it's likely that some TV manufacturers will release HDR TV sets using different, incompatible formats. HDR system vendors also have to design their HDR formats so that they're 100% compatible with today's formats, so that HDTVs already in use will simply ignore the HDR portion of the signal. Backward compatibility is never easy to do, and that's why digital HDTV had to be a clean break from the earlier analog SD formats.

So, unless my HDTV dies prematurely, I'm not going to buy an Ultra HD TV until the television industry settles on a single HDR format, either through government agency decisions or the rise of a de facto standard. There's a huge quality difference between HDTV and Ultra HD with HDR--a difference that you'll clearly see in retail stores and in your living room.

Saturday, September 27, 2014

Hey Kids! Let's Build "The Machine"!

(Updated October 4, 2014) If you follow me on Twitter, you probably know that I'm a rabid fan of CBS's "Person of Interest", a wonderfully-written drama about the dystopia of permanent, continuous surveillance disguised as a "crime of the week" thriller. It has several key human characters, but the character that drives the show isn't human at all--it's a computer, or more correctly, a whole bunch of computers, called "The Machine." The fundamental purpose of The Machine is to identify threats to U.S. national security wherever they may be, so that they can be "neutralized" (and we all know what that means). Last season, The Machine was displaced, but not eliminated, by another system based on Artificial Intelligence software. This new system, called Samaritan, uses quantum processors that Samaritan's builders, Decima Technologies, stole from the NSA.

This season, POI is focusing on the impact and ethics of AI. Showrunners Jonathan Nolan and Greg Plageman see AI as potentially having the same magnitude of effect on the world as the atomic bomb, and call AI's development our era's Manhattan Project. I'm not sure that a system like The Machine needs Manhattan Project-scale advances in the state of the art of AI in order to fulfill its primary objective.

Here are the key functions that both The Machine and Samaritan perform:
  • Signals Intelligence: Both systems are connected to all of the same sources and feeds as the NSA, CIA, FBI and presumably the National Reconnaissance Office (NRO), Defense Intelligence Agency (DIA) and other U.S. and Five Eyes (U.S. plus Canada, U.K., Australia and New Zealand) sources. That means they can get virtually every phone call, email, text, tweet, webpage and app data transfer anywhere in the world. They can geolocate any mobile phone call, and turn on the microphones of certain mobile phones for surreptitious listening, even if the phone itself is turned off.
  • Image Recognition: Both systems can access the images from security cameras around the world, and use those images to recognize people's faces. They can also, presumably, categorize the actions in the images and analyze them to determine whether or not they represent threatening behavior.
  • Database: They have massive databases of all the data they've collected, as well as a lot of historical data.
  • Pattern Recognition and Classification: The systems have to be trained, or train themselves, on previous patterns of activity that indicate a threat. So, for example, they would be given every piece of information related to the 9/11 attack: Who the terrorists were, where they came from, where they traveled to, who they met, who they talked to, where they lived, how they trained, etc. Those data would then be analyzed to build a pattern that indicates a terrorist event being planned. That pattern would be modified with new information continuously. Other patterns, based on subsequent terrorist attacks and changes in terrorist behavior, would be identified. Then, as the systems see current activity, they'll try to match it with previous patterns and calculate the probability that what's going on is actually leading up to an attack. Both systems probably also have the ability to learn from previous attacks and make inferences about the activity even if no previous attack is well-matched. If a probable attack is identified, the systems alert human analysts and provide their analysis and underlying data. 
  • Voice Response and Recognition: Both systems have voice response interfaces and accurate voice recognition.
In addition to these functions, both The Machine and Samaritan are what's called in the AI community "Hard AI." Hard AI has the ability to reason and independently solve problems without human intervention. Beyond that, Hard AI is self-aware--conscious, although its form of consciousness may not look or act like human consciousness.

That's a lot for any system to do, so where are we now (at least in developments that the public knows about)?
  • Signals Intelligence: All the databases and sources that I listed above exist. The problem comes in access and coordination. The Five Eyes countries have extensive data sharing systems, but not all of their data are shared with all of the other partners, not all of the data are in online databases, and we can't assume that U.S. intelligence agencies have 100% of the functionality of other Five Eyes intelligence services available to them. For example, a Five Eyes member may have the ability to get geolocation information, caller identity and even the content of a phone call, but it may not have the legal authority to provide all of that information to the U.S. in real time. In addition, non-Five Eyes countries such as Russia and China may have the ability to limit, disrupt or completely block U.S. and Five Eyes access to their signals. That would keep a system like The Machine from getting every signal, everywhere, in real time.
  • Image Recognition: This is the biggest problem for building a Machine-like system today, not because the quality of image (face) recognition is unacceptable (it's getting better every day) but because of the scarcity of networked security cameras. In New York, where POI is shot, you'd say that the last sentence is dead wrong, because there are security cameras everywhere. New York police had real-time access to 6,000 public and private security cameras and 220 license plate cameras last year, and the ACLU reports that Chicago police had access to 22,000 security cameras last year, but it's not clear how many of them offered real-time access.

    When you get beyond those two cities and a handful of others, including London, the number of cameras per 1,000 residents goes down significantly. However, even in the high-camera cities, there are many cameras on private property and in buildings that are not accessible from the Internet, either because they're on a private network or they're not networked at all. Getting images from these cameras requires physical access to the cameras or video recorders. Sometimes, the local police department or FBI has to take the entire video recorder to its facilities in order to copy the video. So, the real-time acquisition of video from every security camera everywhere simply isn't possible today.
  • Database: The initial capacity of the NSA's Bluffdale, UT data center has been estimated at between 3 and 12 exabytes (3 to 12 million terabytes), and that's just one site. Storage developers are starting to think in terms of zettabytes (1,000 exabytes) and even yottabytes (1 million exabytes).
  • Pattern Recognition and Classification: The world leader in this technology so far (at least the one we know about publicly) is Palantir, which was a spin-off from PayPal's fraud detection team. The company has two publicly-disclosed products: Gotham, a data management system for managing and analyzing complex datasets containing both structured and unstructured data that can be both quantitative and qualitative; and Metropolis, which is for model-based analysis of structured, quantitative data. Both systems require analyst and Palantir engineer input: Gotham requires Palantir engineers to build the model that maps all the data together, and analysts to query the data and develop their own hypotheses and conclusions, while Metropolis requires analysts to create and modify models.

    The functionality of both The Machine and Samaritan is a combination of Gotham and Metropolis--they can build models that incorporate all types of data, not just quantitative data. In addition, they have the ability to build and modify their own models. It's likely that The Machine's creator, Harold Finch (Michael Emerson), initially trained it before it took over the task of model building and analysis.
  • Voice Response and Recognition: Both voice response and voice recognition are mature but still improving technologies. As an example, the voice recognition in Apple's Siri hasn't been as good as that in Google Now, in large part because Siri does its recognition on the mobile device, while Google does it on its own computers with dramatically more horsepower. 
So, where does that leave us in building an all-knowing, all-seeing AI security system? We've still got a long way to go, but all the pieces are there. Voice and image recognition use both AI and non-AI technologies. There are some image processing systems that can analyze video to identify anomalies and threats. To my knowledge, there are no pattern recognition and classification systems that work on multiple types of data and that build and test models without human intervention, but research on neural network training and optimization methods (backpropagation, for example) is making big strides, fast enough that we may be five years away from a commercially-viable Machine. All of this is with processors using a conventional von Neumann architecture; it doesn't need quantum processors, although they could eventually dramatically speed up the parts of the problem best suited to superposition and entanglement.
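
To make "training on previous patterns and scoring new activity" concrete, here's a toy sketch (entirely my own illustration--not how POI's Machine or any real intelligence system works; the features and data are made up). It trains a logistic-regression classifier with plain gradient descent, which is the simplest case of the backpropagation mentioned above:

    import numpy as np

    # Hypothetical weekly activity features:
    # [foreign transfers, encrypted calls, precursor purchases]
    X = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1],
                  [3, 7, 2], [4, 5, 3], [2, 6, 1]], dtype=float)
    y = np.array([0, 0, 0, 1, 1, 1], dtype=float)   # 1 = pattern preceded a past incident

    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(5000):                            # plain gradient descent
        p = 1 / (1 + np.exp(-(X @ w + b)))           # sigmoid: predicted probability
        w -= 0.1 * (X.T @ (p - y) / len(y))          # gradient of the cross-entropy loss
        b -= 0.1 * np.mean(p - y)

    new_activity = np.array([2, 5, 2], dtype=float)
    score = 1 / (1 + np.exp(-(new_activity @ w + b)))
    print(f"threat score: {score:.2f}")              # flag for a human analyst if high

A real system would need vastly richer features, unsupervised pattern discovery, and continuous retraining on new incidents--which is exactly the gap between today's tools and The Machine.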

Also, let me be clear: The resulting system won't be conscious. We don't yet even have a consensus scientific definition of consciousness. This system would be what's called Soft AI: It can solve a specific problem that it's been programmed to handle, but it's not self-aware. It may be able to analyze data and make decisions about a class of problems, and it may be able to hold a conversation, but beyond that, it will only do other things if it's programmed to do so by its developers. I'd hope that its developers will have seen "War Games" or "Colossus: The Forbin Project" and won't give it the ability to launch ICBMs all by itself.

It doesn't hearten me that the biggest obstacles to building The Machine or Samaritan are time, politics and bureaucracy, not fundamental science, but I can only hope that the benefits to medicine, science, education and engineering outweigh the risks to civil liberties.