Wednesday, July 28, 2010

Apple's new iMacs and Mac Pros need better I/O

There have been a lot of negative comments in the media production community about Apple's new Mac Pro tower computers and iMacs. The new models are faster and have better graphics than the models they replaced, but they have the same I/O capabilities, and in the case of the Mac Pro, the same number of slots as the old models.

Many observers were looking for Apple to introduce USB 3.0 interfaces or the 1.6/3.2 Gbps FireWire standard, and were disappointed. I suspect that one big reason why Apple didn't implement USB 3.0 is that Intel is moving very slowly to support the new standard. Intel's motherboard chipsets, which are by far the most popular, still don't include USB 3.0 support, although USB 3.0 cards can be added. Yesterday, Intel announced that it had completed development of all the components necessary to implement 50 Gbps transfers over a single fiber-optic link. Intel calls this technology the Silicon Photonics Link (SPL), and expects to have it available commercially within five years. This looks like the latest version of Light Peak, which was reportedly developed by Intel at Apple's request.

My suspicion is that SPL is what Apple is really waiting for. With SPL, Apple could attach many high-speed devices that currently need different interfaces, such as disk arrays, displays and audio/video capture devices, using a single interface. Apple doesn't want to encourage adoption of slower interfaces that it will replace in a few years' time with SPL. However, the problem is timing. If SPL is two years out, Apple might be able to hold out, but if it's really five years away, it can't wait that long for a faster interface. I don't believe that S1600/S3200 FireWire is an option; Apple has wanted to drop its FireWire interfaces for years and has kept them only because of strong pressure from the user community.
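
To put a 50 Gbps link in perspective, here's a quick back-of-the-envelope calculation (my own figures, not Intel's or Apple's) of how many bandwidth-hungry devices a single SPL connection could feed:

```python
# Back-of-the-envelope: what fits in one 50 Gbps SPL link?
# Illustrative figures only; real interfaces add protocol overhead.

SPL_GBPS = 50.0

# Uncompressed 1080p60 video at 24 bits per pixel
video_gbps = 1920 * 1080 * 60 * 24 / 1e9        # ~2.99 Gbps

# A fast 2010-era disk array sustaining ~600 MB/s (hypothetical figure)
array_gbps = 600 * 8 / 1000.0                   # 4.8 Gbps

print(f"Uncompressed 1080p60 streams per link: {SPL_GBPS / video_gbps:.1f}")
print(f"600 MB/s disk arrays per link:         {SPL_GBPS / array_gbps:.1f}")
```

Even with generous overhead, one link could carry more than a dozen uncompressed HD streams, which is why a single SPL connector could replace the separate display, storage and capture interfaces on today's Macs.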

There's another issue, and it concerns how the iMacs and Mac Pros coexist in Apple's product line. The new iMacs, which come standard with Core i3 or i5 CPUs and can be ordered with i7 CPUs, are fast enough for most media production tasks, but they're crippled by slow I/O. For example, there are very few ways to get high-speed video into or out of an iMac; MacBook Pros have better I/O options because they support ExpressCards, while iMac users are limited to USB 2.0 and FireWire.

An iMac or MacBook Pro with a couple of USB 3.0 interfaces, or in the future, an SPL interface, would make the Mac Pro obsolete except for highly specialized applications. Once Apple's notebook and desktop computers have extremely fast I/O, they will be able to do perhaps 95% of what a Mac Pro can do. I think that one big reason why the Mac Pro hasn't had the massive redesign that it's needed for a few years is that Apple sees it as eventually going away.

The viability of Apple's desktop and tower computers for high-end media creation depends in large part on how Apple solves its I/O issues. It can't wait five years for SPL; it's got to come up with something much sooner than that.

Amazon fires next salvo in eBook reader wars

According to Engadget, Amazon has announced its new-generation Kindles, which will replace the Kindle 2. The new models have the same 6" display size as the Kindle 2, but use the same improved E-Ink display technology as the Kindle DX. The new display offers 50% better contrast and a 20% faster page refresh time (meaning that the annoying "flashing page turn," while not eliminated, is shorter). Also, the new Kindles are 21% smaller and 15% lighter than the Kindle 2.

Amazon has increased the Kindle's built-in storage to 4GB, slightly modified the controls to make them easier to use, improved the PDF reader to include support for password-protected files, and added an "experimental" WebKit-based browser that should, at least in theory, be able to display HTML5 content. (The fact that Amazon calls the browser "experimental" means that users shouldn't expect too much from it.)

The Kindle's Text-to-Speech capabilities have been improved, and vision-impaired users can now navigate using voice prompts. Given the Copyright Office's ruling on requiring publishers to permit text-to-speech, this could dramatically increase the number of speech-enabled titles offered by Amazon.

The single Kindle 2 has been replaced with two models, one with both 3G and Wi-Fi, priced at $189 in the U.S., and the second with Wi-Fi only, priced at $139. Both models will ship on August 27th. The new 3G/Wi-Fi model costs the same as the just-discontinued Kindle 2, and both models underprice their Barnes & Noble nook equivalents by $10 in the U.S.



Tuesday, July 27, 2010

Panasonic to intro Micro Four Thirds 3D lens by year-end

According to Engadget, Panasonic is working on a 3D lens for its Micro Four Thirds cameras (G1 and G2), and plans to ship it by the end of the year. The 3D lens will use the Lumix G-series mount. It will operate on the same principle as the snap-on 3D lens just announced for the HDC-SDT750 camcorder; the right and left eye images will be split and recorded together on one imager. The focal length and aperture range of the lens hasn't yet been disclosed, and the lens shown by Engadget is still a prototype.

Panasonic is clearly very serious about 3D for both camcorders and DSLRs, and it'll be interesting to see how other manufacturers respond.

Panasonic's new consumer 3D (maybe) camcorder

Panasonic has formally announced the HDC-SDT750, a consumer-targeted 3D camcorder. Instead of creating a true full HD 3D camcorder, which would have required two imagers, Panasonic uses a snap-on 3D lens that splits the right- and left-eye images and sends each one to half of the imager, resulting in a 960 x 1080 image for each eye. Both images are captured in one 1920x1080 frame. (When the 3D lens is removed, the camcorder can be used for conventional 1080p recording.) About the only thing you'll be able to do on Day One will be to run the video from the camcorder out to a 3D HDTV for direct viewing, but I suspect that Adobe and Apple will build support for Panasonic's format into future versions of Premiere Pro and Final Cut Studio. The HDC-SDT750 will ship in October, priced at $1,399 in the U.S.
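
To illustrate what "both images in one frame" means in practice, here's a minimal sketch (my own illustration; Panasonic hasn't published its in-camera processing) of splitting a side-by-side 1920x1080 frame into its two 960x1080 eye views:

```python
# Minimal sketch: split a side-by-side 3D frame into left/right eye views.
# Illustrative only -- Panasonic's actual processing pipeline is not public.

def split_side_by_side(frame):
    """frame: a list of 1080 rows, each row a list of 1920 pixels."""
    left = [row[:960] for row in frame]    # left-eye half, 960 x 1080
    right = [row[960:] for row in frame]   # right-eye half, 960 x 1080
    return left, right

# A dummy 1920x1080 frame of zeros
frame = [[0] * 1920 for _ in range(1080)]
left, right = split_side_by_side(frame)
print(len(left), len(left[0]), len(right[0]))   # 1080 960 960
```

The playback device (or an NLE, once Adobe and Apple add support) would do the reverse: extract each half and stretch it back to full width for its eye.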

Monday, July 26, 2010

Pace to acquire 2Wire for $475 million

On the heels of passing Motorola in the worldwide set-top box business, Pace will acquire 2Wire for $475 million. 2Wire supplies DSL routers for AT&T and other service providers, and also supplies media management software. Pace has recently been chosen to provide next-generation set-top boxes to Comcast, so the addition of 2Wire will give Pace an excellent position in both the U.S. cable and IPTV markets.

Two more eBook reader companies fall by the wayside

The fallout of second-tier eBook readers continues. According to Wired, two more eBook reader companies have dropped out of the market: Audiovox has dropped plans to introduce an RCA-brand eBook reader, and Plastic Logic has canceled all orders and scrapped plans to ship its eBook reader for business applications. Audiovox's decision is no surprise; they could see that it would be foolhardy to release an eBook reader now. Plastic Logic's reader, on the other hand, was at one time highly touted, but was going to cost more than an iPad yet have a monochrome display and much less capability.

Audiovox and Plastic Logic follow iRex Technologies, which went into bankruptcy in The Netherlands, and Interead, the vendor of the Cool-er eBook reader, which was forced into involuntary liquidation in the U.K.

Thursday, July 22, 2010

Does content in the cloud mean the end of DRM?

Two days ago, I wrote about The Digital Entertainment Content Ecosystem's (DECE) Ultraviolet DRM initiative. More than 60 content, distribution, consumer electronics, DRM and services companies have joined DECE to build a uniform DRM-neutral platform for controlling content access and making content available to multiple devices.

Ultraviolet is based on the premise that consumers will purchase physical media--CDs, DVDs and Blu-Ray discs--and will want to play the content stored on that media on a variety of devices. Today, consumers can "rip" content to hard drives and personal media players. That's illegal under the U.S. Digital Millennium Copyright Act (DMCA) if they circumvent any security systems in order to copy the content, but it's fairly easy to do. As I understand Ultraviolet, when consumers purchase a Blu-Ray disc (for example), they will be able to access a copy of the movie on the disc in an online "digital locker", and download that movie to personal computers and portable media players using the DRM system supported by each device. Ultraviolet will prohibit customers from making more copies or using the content on more devices than they're entitled to.
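
To make the model concrete, here's a hypothetical sketch of a digital-locker entitlement check. The class name, device limit and logic are all my invention; DECE hasn't published final specifications:

```python
# Hypothetical sketch of a "digital locker" entitlement check.
# All names and limits are invented for illustration; DECE has not
# published final Ultraviolet specifications.

class LockerEntry:
    def __init__(self, title, max_devices=3):
        self.title = title
        self.max_devices = max_devices
        self.registered_devices = set()

    def authorize_download(self, device_id):
        """Allow a download if the device limit hasn't been reached."""
        if device_id in self.registered_devices:
            return True                      # already entitled
        if len(self.registered_devices) >= self.max_devices:
            return False                     # over the device cap
        self.registered_devices.add(device_id)
        return True

entry = LockerEntry("Example Movie", max_devices=2)
print(entry.authorize_download("laptop"))   # True
print(entry.authorize_download("phone"))    # True
print(entry.authorize_download("tablet"))   # False -- limit reached
```

Whatever the real rules turn out to be, the point is the same: the locker, not the disc, becomes the system of record for what you're allowed to play and where.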

Ultraviolet is a way to keep most consumers from making copies of digital content themselves, but it primarily makes sense in a "physical media" world, or at least a world where every device requires its own local copy of content. However, we're moving to a streaming environment, where consumers increasingly never touch a piece of physical media and instead stream content from the cloud. Netflix, Amazon, Rhapsody, Microsoft's Zune Pass, Pandora and other services are based on this streamed, monthly subscription model. Apple is widely rumored to be planning to launch its own cloud-based subscription service later this year.

With streamed content, much less is needed in the way of DRM. A partial or complete copy of each song or movie is streamed to each authorized device while it's being played, but the copy is temporary ("ephemeral") and is deleted when the viewing or listening session ends. The biggest downside of streaming is that a high-speed connection is required while viewing or listening to the content. That makes it unusable on most airplanes and in other places with no broadband wireless Internet service. However, the range of places without service, and devices without at least a WiFi connection, is getting smaller every day.

By the time Ultraviolet gets fully defined and commercially implemented, it will probably be made obsolete by streaming media. As for the heavyweight DRM schemes that Ultraviolet is designed to simplify, they're likely to be replaced by authentication and encryption systems that are easier to implement, more standardized and more compatible across devices. The new DRM standard is likely to be an HTML5-compliant browser or its equivalent.


Tuesday, July 20, 2010

What do Mark Zuckerberg and "The Producers" have in common?

I now direct your attention to the curious case of Paul Ceglia, who filed suit on June 30th in New York state Supreme Court against Mark Zuckerberg and Facebook, asking to be awarded 84% of Facebook. Ceglia submitted a copy of an agreement that he claimed he entered into in 2003 that required Zuckerberg to create a website for Ceglia's company. Tied together with that project was an agreement for Zuckerberg to sell Ceglia a 50% interest in "The Face Book". Ceglia paid Zuckerberg $1,000. In addition, a clause in the contract required Zuckerberg to give up an additional 1% ownership for every day after January 1, 2004 that "The Face Book" didn't go live. The website, at thefacebook.com, finally went live on February 4, 2004, and Ceglia claims that due to the delay, he's owed an additional 34% of the company. The entire lawsuit, including the contract, is viewable at Scribd.

Ceglia has his own problems; he's under investigation in New York state for fraud relating to his wood pellets business. Ceglia waited more than six years to file his lawsuit, and it was filed in a New York state court, not U.S. Federal court. The contract itself is a mess (I'm not an attorney, but I've read a few business contracts in my day, and this one could be an example of "Ten things not to do in a contract.")  The easy conclusion would be that Ceglia had faked the contract, faked Zuckerberg's signature, or otherwise engaged in some kind of fraud. However, that's not what appears to have happened.

Neither Facebook nor Zuckerberg's attorney has come out and said that the contract is a fake. When asked directly by the U.S. Federal Court judge who's taken over the case whether Zuckerberg actually signed the contract, his lawyer said "Whether he signed this piece of paper, we're unsure at this moment." That's a pretty startling statement by defense counsel. If Zuckerberg didn't enter into the contract, his statement to his counsel would be a straightforward "No", and that's what his attorney would have told the judge. However, it appears that Zuckerberg and Facebook are hoping that Ceglia can't produce an original copy of the contract. If all that Ceglia can provide is a facsimile, the case for the authenticity of the contract becomes much weaker. In essence, what Zuckerberg's lawyer said is that they don't know if he signed that particular piece of paper, not that he never signed a contract at all. In fact, Zuckerberg's attorney has admitted that Zuckerberg entered into a contract with Ceglia to build a separate website for him.

Facebook and Zuckerberg's counsel are also arguing that the statute of limitations has already run out on any claim that Ceglia could make. In addition, given the enormous amount of publicity that Facebook has had over the years, and the publicity that sales of equity holdings in Facebook have had, Ceglia had an obligation to take action to protect his interests and mitigate losses of other investors.

There's a good chance that Facebook and Zuckerberg will prevail on their arguments, even if Zuckerberg did enter into the contract...but if he did enter into the contract, why? Facebook has already had to pay a $65 million settlement to the Winklevoss brothers, who originally contracted with Zuckerberg to create the website that eventually became Facebook. If Paul Ceglia's contract is shown to be authentic, even if it can't be enforced, Zuckerberg sold him 50% of Facebook for $1,000. Why was Zuckerberg apparently selling Facebook to everyone he could find? It sounds like the plot from "The Producers"; perhaps he thought that "The Face Book" would fail, and he'd keep everyone's money, or perhaps he desperately needed the money for some reason.

In any event, would you like to have this man running your business with a $20 billion valuation?

DECE's Ultraviolet: Still invisible to the naked eye

The Digital Entertainment Content Ecosystem, a consortium of 60 video content, distribution and DRM companies including Sony, Time Warner, News Corporation, NBC Universal, Paramount, Comcast and Cox, has announced Ultraviolet, the new brand name for DECE's initiative. Today, consumers have to deal with a thicket of different DRM schemes; content that works on one device might not work on another one, and their rights for using the same content on multiple devices can differ dramatically. Ultraviolet's goal is to smooth over the differences, so that consumers have uniform rules for how they can access and use content on multiple devices.

Consortia are inherently slow at making decisions, and with so many members bringing different technologies and agendas, DECE has been very slow to arrive at an architecture that its members can actually implement. In fact, there are still no finalized technical specifications, only a trademark and a set of goals. Also, the promise of universal interoperability is only a promise unless everyone buys in, and Apple and Disney have opted out of the DECE consortium in order to develop their own platform.

The odds for Ultraviolet's success, when it's finally defined, aren't great. Streaming content, which is inherently stored in the cloud and doesn't require the same level of DRM as content that's intended for local storage, is taking over in video, which is DECE's primary target. Ultraviolet is likely to be irrelevant by the time that DECE completes its specifications.

Sunday, July 18, 2010

How much is a camcorder body worth?

I've been thinking about how much a camcorder body is worth vs. a DSLR ever since Sony released details about its NEX-VG10 camcorder last week. The NEX-VG10 is essentially a NEX-5 with a camcorder body and updated firmware, for $1,200 more ($1,999 for the NEX-VG10 vs. $800 for the NEX-5 with the same lens.) Leaving the firmware aside for now, is there really a $1,200 difference between the two cameras?

DSLRs don't have the proper ergonomics for video. Still cameras are designed to be held only when a photographer is framing and shooting an image or a series of images, while camcorders are designed to be held at the eye for a long time. To compensate for DSLR ergonomics, companies like Zacuto and Redrock Micro have come up with eyepieces that fit over the cameras' LCD displays and a variety of mounting hardware that makes it easier to hold a DSLR at the eye for a long period of time.

Taking Zacuto as an example, a Zacuto Z-Finder Jr. lists for $265, and a Target Shooter, which is the company's least-expensive DSLR mounting solution, is $475. You're also going to need some kind of audio recording solution; at the bottom of the price range, you could probably get away with Zoom's $99 H1, although most people will go with something like the H4n for $299 to get its dual XLR inputs. So, to bring the NEX-5 up to the NEX-VG10 in handling and audio capability will cost from $839 to $1,039. And, you still wouldn't have the updated firmware that supports manual controls and a higher AVCHD bit rate.
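
For the record, here's that arithmetic, using the list prices quoted above:

```python
# Accessory costs to bring a NEX-5 up to NEX-VG10-style handling and audio.
# List prices as quoted above; actual gear choices will vary.

z_finder_jr = 265     # Zacuto Z-Finder Jr. eyepiece
target_shooter = 475  # Zacuto Target Shooter mount
zoom_h1 = 99          # entry-level audio recorder
zoom_h4n = 299        # audio recorder with dual XLR inputs

low = z_finder_jr + target_shooter + zoom_h1     # $839
high = z_finder_jr + target_shooter + zoom_h4n   # $1,039
print(f"Accessory total: ${low} to ${high}, vs. Sony's $1,200 premium")
```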

Once you put it into perspective, the $1,200 premium that Sony is charging for the NEX-VG10 really isn't that much of a premium. The bigger question is whether you'd want to use the NEX-VG10 at all, when you can get DSLRs from Canon that are much more flexible and have a much wider range of lens options, or from Panasonic with third-party firmware that blow away the image quality of the NEX-VG10.

I believe that the NEX-VG10 is going to find a market, and it's probably going to be quite successful, but more serious videographers and filmmakers will stick with the Canon and Panasonic DSLRs, even if they need third-party hardware and firmware to work well as camcorders. However, I also believe that the NEX-VG10 and Panasonic's forthcoming AG-AF100 are the first entries in an entirely new class of camcorders that integrate DSLR imagers, electronics and interchangeable lenses into camcorder bodies. There may be new entries as soon as IBC in September, and there's certainly going to be more at CES in January and NAB next April. At that point, you won't have to trade off form factor for capability as you will with the NEX-VG10.

Tuesday, July 13, 2010

Sony's NEX-VG10 Camcorder Ships in September

According to Camcorderinfo.com, Sony has announced specifications and pricing for the NEX-VG10, its new camcorder based on the same platform as the NEX-3 and NEX-5, Sony's answer to Panasonic and Olympus' Micro Four Thirds system. The NEX-VG10 uses the same E-mount interchangeable lenses as the NEX-3 and NEX-5, and has the same sensor and resolution as the NEX-5. However, it's packaged into a prosumer-style camcorder body, with both an electronic viewfinder and a 3" LCD display, and a new "Quad Capsule Spatial Array" microphone that's designed to address some of the complaints that DSLR users have had with poor audio recording performance.

Sony has also added some features to the NEX-VG10 to distinguish it from the NEX-5. For example, the new camcorder does away with the NEX-5's recording time limitations, has a higher maximum AVCHD bitrate (24 Mbps), and has manual control over shutter speed, aperture, gain and white balance (all of which are automatic with no manual overrides when the NEX-5 is in video mode).

The list price of the NEX-VG10 with an 18-200mm zoom lens with optical stabilization will be $1,999 in the US when it ships in September. This compares with approximately $800 for the NEX-5 with the same lens (which is not yet available). On the other hand, Panasonic's AG-AF100, a camcorder based on the company's GH2 Micro Four Thirds DSLR, is rumored to be priced at $6,000 in the US when it ships later this year. The AG-AF100 will have more professional features, including dual XLR audio inputs and a wider range of frame rates and resolutions, and it's likely to be able to take advantage of some of the new third-party firmware that is available for the GH1 and GH2, but $2,000 looks like quite a bargain compared to $6,000.


Monday, July 12, 2010

The iPhone 4 antenna problem: An overreaction?

Earlier today, Consumer Reports Magazine reported that it gave the iPhone 4 its "Not Recommended" rating because of the problems with signal drop-off and lost calls when holding the phone so that both antennas are bridged together. Consumer Reports tested the iPhone 4 in its own facilities using its own cell tower simulator and other testing equipment. The problem has been duplicated by many other testers, so let's get a few points out of the way:
  1. The problem is real and affects anyone using an iPhone 4 in an area that doesn't have a strong AT&T signal.
  2. Even though the FCC doesn't require that cell phones be held during testing, Apple should have done its own tests well before the iPhone 4 shipped.
  3. The software fix proposed by Apple to "correct" the number of bars of signal strength that are displayed has nothing to do with the antenna problem. It won't fix the signal attenuation caused by holding the phone "the wrong way."
Apple screwed up badly and hasn't helped itself with its proposed fix. However, Consumer Reports also said that it fixed the problem with a strip of duct tape, and that, leaving the antenna issue aside, the iPhone 4 is perhaps the best smartphone it's ever tested. So, if you can fix the problem with $0.01 worth of adhesive tape, is it really all that much of a problem?

In my opinion, there's been a massive overreaction to an overall minor problem. Consumer Reports wants Apple to give every iPhone 4 owner a free bumper, and Apple could surely come up with something a lot less expensive than the $29 bumpers that it's selling. (Recently-published testing showed that the $29 bumpers eliminate the antenna problem but provide no real protection for the phone, so something lighter and much cheaper should work as well.) The best thing that Apple can do is to announce a real fix, a timetable for implementation, and the process for requesting or distributing the fix.




Monthly time spent on site: Good for Google and Facebook, Bad for Yahoo and Aol

According to Silicon Alley Insider, comScore's monthly report of total time spent on popular U.S. Internet sites, as a percentage of time spent on all Internet sites, is a mixed bag: good news for Google and Facebook, bad news for Yahoo and Aol, and more disappointment for Microsoft. Here's the chart:

[Chart: comScore share of total U.S. Internet time spent, by company]

Google is now the leader in terms of total time spent on its sites; in June, its share passed that of Yahoo for the first time. Facebook is an even more impressive story; its share is close to that of Google, and if the two companies stay on their trend lines, Facebook is likely to pass Google within a year.

Yahoo's share has been trending down for six quarters, and in June, the company had its lowest time spent on site since comScore started keeping records. Aol is on an almost three-year-long downtrend, and now has the smallest share of any of the leaders. It's not hard to argue that, unless something radical happens, Aol could drop to a 2% share by the middle of 2011.

As for Microsoft, despite the billions of dollars that the company has invested in its online businesses over the past few years, the company's share of time spent was almost identical in Q2 2010 to what it was in Q3 2006. In other words, all that money just kept Microsoft even.

The traffic contest has turned into a three-horse race, and one of the horses (Yahoo) is starting to pull up lame.

Barnes & Noble gets into the higher ed eBook business with NOOKstudy

According to Engadget, Barnes & Noble announced NOOKstudy, a software eBook reader and content manager for the higher education market, earlier today. NOOKstudy is a free application for Windows and Mac OS X PCs that enables students to manage all their digital content (eTextbooks, class materials and notes) on their computers. From the press release:
"NOOKstudy lets students view multiple books and sources at once and offers access to complementary content (e.g. toolsets, reference materials, etc.), as well as the unprecedented ability to highlight and take notes that are searchable and customizable. This comprehensive software solution also provides students access to all of their materials – eTextbooks, lecture notes, syllabi, slides, images, trade books and other course-related documents – all in one place, so their digital library goes wherever they go."

"NOOKstudy will be compatible with the company's entire catalog of eBooks and digital content, including relevant study aids, test prep guides, periodicals, and hundreds of thousands of trade and professional titles. NOOKstudy will also enable students to save money, as eTextbooks offer up to 40% savings off new textbooks."
NOOKstudy is in beta test at a number of universities including Penn State and UNLV, and will go into general distribution for the 2010 fall semester. It's a direct response to Follett's CafeScribe/MyScribe eBook service, which Follett has bungled since it acquired Fourteen40, the company that developed it. It's also a preemptive strike against the Kno eBook reader and service. Kno will make students purchase its proprietary eBook reader (for "under $1,000"), while NOOKstudy will work on the PCs that students already have. While B&N will undoubtedly focus on the schools where it has bookstore contracts, it will use NOOKstudy to go after students across the country.


Cool-er: Another one bites the dust

According to TheBookseller.com, Interead, the company behind the Cool-er eBook reader, has gone into liquidation in the U.K. The company's websites are still live, but products can't be purchased. According to the article:
Interead was founded in 2009 by former banker Neil Jones. Jones told the Guardian in September 2009 that he had big ambitions for the company. "I'm pretty confident we'll be number two in America by this time next year in terms of sales, and number one in the UK."
Interead joins iRex Technologies as a failed vendor of eBook readers. iRex declared bankruptcy last month, so there's still a slim chance that the company or its readers could be revived, but Interead's liquidation means that the best that could happen is that the Cool-er readers could be sold to another company to help pay off Interead's debts.

More eBook reader companies are likely to fail or withdraw their products from the market over the next year. There are simply too many eBook reader vendors for the available market, and the best-positioned companies are Apple, Amazon, Barnes & Noble and Kobo.

Saturday, July 10, 2010

Everything you know about motivation is wrong

Employees are motivated by money--the more money, the more they're motivated. Bonuses are a great way to get employees to work harder and do a better job. If you can't motivate people with carrots (money), you've got to do it with sticks (the threat of dismissal), because they're inherently lazy. Do you believe any of this? All of this? If you do, you're not alone. This is the way that workers in the U.S. have been motivated for more than 100 years. Carrots and sticks. They work when employees are doing low-value, repetitive work. However, if your business requires creative, high-value work, the carrot and stick model not only doesn't work, it's counterproductive.

Three recent books, Dan Ariely's "The Upside of Irrationality," Daniel H. Pink's "Drive", and Clay Shirky's "Cognitive Surplus", cover some of the most recent research in behavioral economics. In particular, Ariely and his associates have done a lot of research, verified independently, that turns much of what we thought we knew about motivation on its head.

Consider the relationship between money and performance. Paying someone more will result in better performance, right? Ariely and his associates conducted studies in multiple countries where they asked volunteers to complete a series of tasks, ranging from very simple to very complex. Participants were randomly assigned to one of three groups. The first group was offered a nominal payment--perhaps the equivalent of a day's pay--for fully completing all the tasks. The second group was offered the equivalent of a week's pay, and the third group was offered the equivalent of a month's pay. For each task, the volunteer had to successfully complete a series of tests or actions. If they completed at least a minimum number, they would receive 50% of the money for that task. If they completed a higher number, they would receive 100% of the money for that task. If they didn't complete the minimum number, they would receive nothing for that task.

The first two groups--the day's pay and week's pay groups--had almost exactly the same performance. However, the month's pay group was so fixated on the money that it did far worse than the other two groups. The pressure to perform interfered with the month's pay group's ability to concentrate on the tasks. The bottom line is that putting more money at stake can actually decrease, rather than increase, performance.

Next, let's look at the value of money vs. personal accomplishment. In one study, the researchers asked participants to assemble Lego Bionicle toys. For each toy built, each participant would be paid on a sliding scale: $2.00 for the first one, $1.89 for the second one, and $0.11 less for each following completed toy. They could build as many toys as they wanted, and stop whenever they wanted to. Participants were randomly assigned to two groups: Those in the first group were told that at the end of the experiment, their toys would be disassembled so that they could be used by another participant. In the second group, after the participant completed building the first toy and started building the second, the experiment's supervisor disassembled the first one in front of the participant, explaining that it was being taken apart so that the participant would have another toy to assemble.
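
For the record, the sliding scale described above works out as follows (a quick sketch of the payout rule as described, not the researchers' actual code):

```python
# Sliding-scale payout from the Bionicle experiment described above:
# $2.00 for the first toy, $0.11 less for each one after, never below $0.

def payment_for_toy(n):
    """Payment for the n-th completed toy (n starts at 1)."""
    return max(2.00 - 0.11 * (n - 1), 0.0)

# Payments fall to $0.02 at toy 19 and $0 from toy 20 on.
total = sum(payment_for_toy(n) for n in range(1, 21))
print(f"Building 20 toys earns about ${total:.2f}")   # ~$19.19
```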

If money was the motivating factor, it shouldn't have mattered whether or not the toys were disassembled by the supervisor, but those in the first group that didn't see the toys disassembled before their eyes built substantially more toys and made substantially more money than those in the second group. The participants in the first group also enjoyed the project much more and were more satisfied by their work. 

How about bonuses? Aren't they a great way to motivate employees to work harder? Experiments show that the first time a bonus is offered, it does have some benefit, subject to the situation discussed above--if the bonus is too big, employees fixate on the money and can't concentrate on the work. However, workers subsequently associate the bonus with the effort and expect to get it again. If they don't get it, they're demotivated and their work suffers. This is why companies find themselves locked into offering bigger and bigger bonuses in order to get the same amount of performance improvement.

Multiple studies have shown that money is a strong motivator when people are struggling to make ends meet--when they're at the lower rungs of Maslow's Hierarchy of Needs. However, in the U.S., once a person's annual income passes $60,000 (on average), the motivating power of money diminishes rapidly. It's at this point that, in Daniel Pink's terms, primary motivation shifts from extrinsic (carrots and sticks) to intrinsic (finding challenges, satisfaction and worth.)

Pink defines intrinsic motivation as having three components:
  • Mastery: Performing your task well and being challenged by it. Both Pink and Ariely refer to the groundbreaking work of Mihaly Csikszentmihalyi, who identified "Flow", a state of intense concentration where the participant loses track of time. Flow is a state of optimal performance, and it occurs when you're working on a challenge that's difficult enough to tax you but not so difficult that it's impossible to do. If a task is too simple, it leads to dissatisfaction through boredom; if it's too difficult, frustration leads to dissatisfaction and disengagement.

    This is a reason why athletes are kept relatively closely grouped by skill level while they're learning a sport. If an athlete's skills are too advanced for his or her group and they don't get a chance to play with more advanced players, they get bored and lose motivation. On the other hand, if an athlete is grouped together with others who are much better skilled, he or she may get frustrated, lose motivation and quit.

    You don't have to be an expert at a subject in order to experience mastery, but you have to be good enough at it to accomplish key tasks, yet still find challenges that push you to become better.
  • Autonomy: Autonomy is the amount of control that you have over how you do your job. Specifically:
    • Where you do it: Are you required to do your job in an office or factory, or can you work from home or another location of your choosing?
    • When you do it: Can you set the hours that you work, or are you required to begin and end your workday at specified times? Can you eat when you want and take time to handle personal matters, or do you have to clock in and out at specific times and get permission from your manager for any personal time?
    • Who you do it with: Do you have control over who you work with, or are you assigned to work with individuals and teams and have no say in the matter?
    • What process you use to do it: Do you have a lot of latitude in deciding how to achieve your objectives, or are you required to follow a set of tightly-defined procedures and use a specified set of tools?
  • Purpose: Can you find a purpose for your work (other than earning a paycheck, of course)? Is your work meaningful to you, or is the reason you're doing it because you were told to do it by your supervisor? Do you believe that you're adding value or doing "make work"? Satisfaction and motivation are strongly driven by a sense of purpose. Workers who find an intrinsic purpose to their efforts have been shown to be more motivated and more efficient. On the other hand, people for whom a paycheck is their only purpose have been shown to have much lower job satisfaction, are more likely to change jobs and typically require more money over time in order to maintain the same level of performance.

    In fact, one could argue that the entire system of promotions, pay increases and bonuses that has evolved since the late 19th Century is based on a lack of intrinsic purpose. When pay becomes the purpose, it's inherently demotivating over time, so more pay is needed to "increase" motivation, which in turn creates a dependency on future pay increases.
With all this as background, what are a few of the things that progressive employers can do to improve motivation and job satisfaction?
  • Pay above-average salaries at the start in order to attract better employees, instead of relying on bonuses to motivate better performance.
  • Keep employees challenged with new projects that help them hone their skills. Projects that are too simple lead to boredom, and those that are too difficult lead to frustration.
  • Give employees as much autonomy as possible. Things like flextime and personal days only give the appearance of autonomy. Employees need to have control over how, when, where and with whom they do their jobs.
  • Enable employees to find an intrinsic purpose for what they do. If they don't understand why they've been asked to do something, or they know that what they've been asked to do won't make a meaningful contribution to the company's success, their sole motivation will become money, and money loses its effectiveness over time.
Moving from extrinsic to intrinsic motivation goes against most of the management practice and economic theory of the last century or more, as well as against the culture of most companies. It may well be impossible for most existing organizations to make the change. If that's the case, it's up to new, agile organizations to adopt the lessons of behavioral economics for better productivity, greater employee satisfaction and lower costs.

Friday, July 09, 2010

Gutenberg's miscalculation: A lesson for big media?

In Clay Shirky's new book "Cognitive Surplus", he discusses a lesser-known aspect of the achievements of Johannes Gutenberg. Gutenberg invented the movable-type printing press, which changed the way that knowledge was distributed and forever changed education, religion, government and commerce. The Gutenberg Bible is by far his best-known work, but it wasn't his biggest, or even most important work.

Prior to printing Bibles, Gutenberg received permission from the Catholic Church to print and sell indulgences on its behalf. (Gutenberg borrowed the money he needed to build and operate his printing press based on the anticipated revenues from selling indulgences.)  Indulgences were a way for Catholics to "neutralize" their sins and thus move from Purgatory to Heaven much faster, or even skip Purgatory altogether.

Priests and pardoners (authorized agents of the Church) would sell the indulgences and write them down on pieces of paper. Gutenberg saw the opportunity to bring mass production to the process. By pre-printing batches of indulgences, he could sell them much faster and make more money for himself and the Church. The plan backfired, however, when other pardoners got their own printing presses and flooded the market with printed indulgences. The entire practice of selling indulgences (among other things) so angered Martin Luther that he wrote his "Ninety-Five Theses" in 1517 and nailed them to the door of a church in Wittenberg, Germany. The Theses were, in turn, reproduced on printing presses and widely distributed, which helped to bring about the Protestant Reformation, which in turn broke the hold that the Catholic Church had over Europe, and eventually, caused the Church to end the practice of selling indulgences.

The introduction of Gutenberg's printing press was an early, excellent example of the fact that it's impossible to predict the true impact of a discontinuous innovation when it's first introduced. Gutenberg thought that his printing press would make money for himself and the Church, but it instead led to revolutionary changes throughout European society that still resonate today.

When the Internet became a commercial reality in the mid-1990s, established media saw it as a sideline at best, but certainly no threat to their primary businesses, be they newspapers, magazines, music, books, television or movies. What they didn't foresee was the radical impact that the Internet would have on all of their businesses. In the case of newspapers and magazines, the Internet cannibalized their audiences and sources of income, replacing paid content with free services. Record companies nearly collapsed before they came to an uneasy truce with online distributors. Broadcast and cable networks found their shows scattered all over the Internet and worked desperately to get distribution back under their control.

Now, established media companies are grasping at apps as perhaps the last chance to keep their historical business models alive. They can sell apps, as well as subscriptions to the content made available by apps. That strategy works so long as apps are the only way to get to the content that people want to access, but it's a strategy with a limited future.

Earlier this week, Google released a new version of its mobile website for YouTube, written in HTML5 and far superior to the YouTube app supplied by Apple. It takes two clicks to put the YouTube site on the iPhone home screen, and from that point on, it's indistinguishable from an app. Eventually, all the content that's available on the Web will be available via HTML5, at which point the walled garden of apps will no longer provide any protection for established media companies. Apps may postpone the day of reckoning for media companies, but they won't eliminate it.

Thursday, July 08, 2010

New WebM/H.264 Comparison

A new comparison of VP8 (the video codec in WebM), H.264 and XviD has been published by the Moscow State University's Graphics & Media Lab. The comparison is an addition to a comprehensive review of codecs. The team that performed the comparison makes it clear that it didn't do as extensive a test of VP8 as it did of the other codecs, since the study was apparently virtually completed by the time that Google announced that it was open-sourcing VP8 and including it in WebM. Nevertheless, the team came to a number of conclusions:

For movie encoding:
Comparing VP8 to XviD, VP8 is 5-25 times slower with 10-30% better quality (lower bitrate for the same quality). Comparing VP8 and x264, VP8 also shows 5-25 times lower encoding speed with 20-30% lower quality on average. For example, the x264 High-Speed preset is faster and has higher quality than any of the VP8 presets on average.
For HDTV:
Comparing VP8 to XviD, VP8 is 5-20 times slower with 10-20% better quality (lower bitrate for the same quality). Comparing VP8 and x264, VP8 shows 5-20 times lower encoding speed with almost the same quality, excluding the x264 High-Quality preset.
The source material that the team used for the Movie testing was clips from the films "Ice Age 3", "Raiders of the Lost Ark", "Enemy of the State" and "Up". For the HDTV test, the team used video shot in the Amazon taken from a Microsoft site, the trailer from "Iron Man 2", a close-up video of a calendar and a clip from the movie "Troy."

The WebM team responded to the Moscow State University results, admitting that a lot of work needs to be done to VP8 in order to improve its encoding speed. However, they contend that VP8 would have provided better-quality output had the source material not previously been encoded (only one sample, the calendar, was uncompressed.) Here are their comments:

We've been following the MSU tests since they began and respect the group's work. One issue we noticed in the test is that most input sequences were previously compressed using other codecs. These sequences have an inherent bias against VP8 in recompression tests. As pointed out by other developers, H.264 and MPEG-like encoders have slight advantages in reproducing some of their own typical artifacts, which helps their objective measurement numbers but not necessarily visual quality. This is reflected by relatively better results for VP8 on the only uncompressed input sequence, "mobile calendar."

Even with this limitation, VP8 delivered respectable results against other encoders, especially considering this is the first time VP8 has been included in the test and VP8 has not been specifically optimized for SSIM as some other codecs have.

To date, WebM developers have focused on the VP8 decoder performance and are only starting to optimize the encoder for speed. The WebM project has only been underway for three weeks, and we believe that our encoder speed will improve significantly in the near future.
The WebM team's comments about the source material have some merit, but in the real world, video that's previously been compressed is often included in projects. It may be difficult or impossible to get uncompressed source material--for example, AVCHD camcorders and some DSLRs output video using H.264. Therefore, the performance of VP8 on previously compressed video is important.
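
Anyone who wants to reproduce a rough version of this comparison can time encodes of the same source with ffmpeg. This is a minimal sketch, assuming an ffmpeg build that includes both libvpx and libx264; the MSU team's methodology (objective quality metrics, multiple presets and sources) is far more rigorous:

```python
# Rough sketch: compare VP8 and H.264 encoding speed on one source file.
# Assumes ffmpeg is installed with libvpx and libx264 support; this only
# measures speed, not the quality metrics the MSU study used.

import subprocess
import time

SOURCE = "source.y4m"   # ideally uncompressed, per the WebM team's comments

def timed_encode(codec, outfile):
    """Encode SOURCE with the given codec and return elapsed seconds."""
    start = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE, "-vcodec", codec, "-b:v", "1000k",
         outfile],
        check=True, capture_output=True)
    return time.time() - start

vp8_time = timed_encode("libvpx", "out_vp8.webm")
h264_time = timed_encode("libx264", "out_h264.mp4")
print(f"VP8:   {vp8_time:.1f} s")
print(f"H.264: {h264_time:.1f} s")
```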

The Moscow State University comparison provides useful insight into the performance of VP8, and points out some of the areas that Google and its partners need to work on.

The rats are running for the lifeboats at Blockbuster

Home Media Magazine reports that Mark Wattles has sold more than 1.7 million shares of Blockbuster stock for a grand total of $230,741, or an average price of just a little over $0.13 per share. Blockbuster CEO Jim Keyes sold more than 245,000 shares at a price of $0.18 each, for a total of $44,100. According to the article, he still holds more than 1.8 million shares, which if valued at what Wattles got for his sale, would be worth a bit more than $234,000.

Blockbuster's stock has been delisted by the New York Stock Exchange and its biggest holders are dumping their shares while they still have some value. Even though the company claims to have avoided bankruptcy through a one-month reprieve in debt payments from creditors owed $440 million of the company's $920 million in total debt, Wattles's and Keyes's stock sales indicate that they don't believe that the company will be able to stay out of bankruptcy court. Based on first-quarter earnings, the company is on track to have less than $100 million in earnings this year before interest, taxes, depreciation and amortization--not enough to cover scheduled interest and principal payments on its outstanding debt.

More evidence that tablets are cooling netbook demand?

According to DigiTimes, both Acer and Asustek (Asus) have cleared out their existing inventory of netbooks, and are monitoring market demand before launching any new models. Intel's pricing for its new dual-core Atom N550 processor, which is $11 to $22 higher than that of current models, is said to be one of the factors causing the netbook vendors to be cautious.

Another big reason for caution on the part of Acer and Asustek has to be the market success of Apple's iPad and the potential impact of forthcoming tablets. I've had an Acer Aspire One for more than a year, and I've switched to an iPad. There's very little that I can do with the Acer that I can't do with the iPad, plus the iPad doesn't have to be unpacked to go through airport security, it's lighter and it feels faster. (I'm running Windows 7 on the Acer, so that may explain some of its sluggish performance.)

A year ago, the company that I was working at was pushing netbooks as more flexible substitutes for dedicated hardware eBook readers. The netbooks had limited success, even though they were often less expensive than the dedicated readers. Now that the Kindle 2 and nook are priced below $200 and are likely to fall further, netbooks have lost their price advantage. At the other end of the scale, the iPad's user experience is far better than that of netbooks. With Android and WebOS tablets likely to reach the market at lower prices than the iPad before the end of the year, the argument for netbooks is even more tenuous. That's why the leading manufacturers are holding back.

Wednesday, July 07, 2010

Disney loses in "Who Wants to Be a Millionaire" case

It's an often-told joke that no matter how much money a movie or TV show grosses, no one ever seems to make any money. According to the LA Times, Celador International, the creator of "Who Wants to Be a Millionaire," filed suit against Disney in 2004, charging that Disney had struck a series of deals between Disney-owned and -controlled companies in order to make it appear that the show had made no money. (Disney actually claimed that the show had lost $73 million since it first went on the air.)

Celador asked for as much as $395 million in broadcast licensing fees, based on an expert's opinion of the fair market value of the show, plus another $10 million for Celador's share of royalties from sales of show-related merchandise. The jury returned a verdict awarding $260 million in licensing fees plus $9.2 million for merchandise royalties.

At the trial, Celador introduced evidence that Disney had received $515 million in revenue from license fees plus $70 million from merchandise sales. Kantar Media estimates that the show attracted nearly $1.8 billion in advertising revenue over the life of its run. Yet, somehow, Disney calculates that it lost $73 million on the show.

For those of you who never saw the show, it consisted of two chairs, two computer displays, two people on stage, a bunch of spotlights on swivels and some foreboding music. I'd be amazed if it cost Disney $73 million to produce the show across its entire run.

This is an extreme but by no means unusual example of the problems of doing business with the "big media" companies. It's almost impossible to make money unless you've arranged fixed, unconditional payments.

Open Source Content: The next wave in content creation

Over the last few weeks, I've read Dan Ariely's "The Upside of Irrationality" and Daniel H. Pink's "Drive", and I'm finishing Clay Shirky's "Cognitive Surplus." The three books cover much of the same ground and refer to much of the same research in behavioral economics, and I'll be referring to them quite a bit in future posts, as their concepts impact a variety of fields.

One by one, old media industries have been reshaped by technological changes: When music could be copied perfectly and shared at no cost with anyone around the world, the ability of record companies to control the pricing and distribution of their products collapsed. The music industry is still trying to find viable new business models.

The business of printing words on paper and distributing them daily or weekly to consumers is also collapsing. Note that I didn't say that the newspaper or magazine businesses are collapsing, only the business of distributing ephemeral content on paper. There's tremendous demand for news, analysis and opinion, but it's being fulfilled by websites, blogs and Twitter, all of which can respond instantly to events. Newspapers and magazines will survive if they can develop business models that will enable them to get out of the "words on paper" business and make money online.

The movie and television businesses see the digital steamroller coming and want to defend against it with their own technology: 3D. The cost of producing videos, and now movies, has probably dropped by two orders of magnitude in a decade. 3D, on the other hand, is still (comparatively) expensive and difficult to produce, so only "professionals" need apply. Also, the only commercially viable outlets for 3D are movie theaters; only a relative handful of 3D HDTVs have shipped, and very little programming is available for in-home viewing.

However, 3D won't stop the penetration of online digital media into the home, and it won't prevent the marginalization of "big media." People have made their own independent films for years, but they were limited by two constraints:
  1. Filmmaking was expensive, especially if you wanted to make a film that looked good enough to come from a major studio.
  2. Distribution was controlled by a handful of distributors owned by the major studios. If you couldn't get one of them to distribute your film, you'd never reach a big audience.
Today, filmmaking isn't free, but it's much less expensive than it used to be. DSLRs and their lenses have dramatically decreased the cost and size of cameras. Desktop editing, special effects, color correction and audio mixing systems can handle all the post-production work. Netflix, Amazon, YouTube and countless other vendors can distribute the production to viewers around the world.

One component can't be eliminated, however, and that's people. The best equipment and software can't produce a movie or television show by itself. That requires talented people. People like (and expect) to get paid, so no matter how much you can reduce the other costs and bypass the big media gatekeepers, you still have human costs. Or do you?

The computer software business has been turned on its head by the open source movement. With open source software, a group of developers comes together to write software, and the resulting software is made available to be used (under specified rules) at no cost. The developers aren't paid, but they get satisfaction from solving a problem for users, recognition from the developer community for their efforts, and experience that they can use to increase their income from their "day job."

Blog writers like myself do this all the time, but it's less common in audio, video and motion pictures. Call it "open source content". The key is that a like-minded group of people come together to produce a musical recording, podcast, video or movie. They don't get paid for it, and their creation will be distributed (under specified rules, such as a Creative Commons license) at no cost.

Open source content can be distributed for revenue, of course. For example, Sony Pictures could agree to distribute a particularly compelling movie made using the open source method. For that reason, before work even begins, the team needs to agree on how any revenues earned from the content will be distributed: Will the funds be distributed equally to every team member? Will the team put a percentage of the funds into a reserve for future productions? Will they contribute all or a portion of the funds to a cause or charity? I can easily see pre-made disbursement agreements along the lines of the Apache License or GNU GPL that become widely accepted, and that everyone on the team can agree to before work begins.
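
As a sketch of what such a pre-made disbursement agreement might compute, consider the following; every name and percentage here is invented for illustration:

```python
# Hypothetical disbursement rule for an "open source content" project.
# The percentages and role names are invented for illustration only.

def disburse(revenue, members, reserve_pct=0.10, charity_pct=0.05):
    """Split revenue into a production reserve, a charity contribution,
    and equal shares for each team member."""
    reserve = revenue * reserve_pct
    charity = revenue * charity_pct
    per_member = (revenue - reserve - charity) / len(members)
    return reserve, charity, {member: per_member for member in members}

reserve, charity, shares = disburse(10000, ["director", "editor", "composer"])
print(reserve, charity, shares)   # 1000.0 500.0, ~$2,833.33 per member
```

The value of standardizing such a rule, as with the Apache License or GNU GPL, is that nobody has to negotiate it from scratch on every project.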

One important distinction here is that these open source content projects aren't intended to replace "day jobs." They provide participants with satisfaction, challenges, experience and recognition, but (in most cases) no financial compensation. If and when the project makes some money, it comes after the project is complete and is a nice bonus for the participants.

People want to make their own videos and movies, but the "old media" environment shuts them out. The DIY (do it yourself) content movement is alive and well--just look at YouTube, which gets 24 hours of video posted every minute of the day, if you have any doubts. By formally applying the concepts of open source software to content, the rules and expectations for participants can be standardized. The big software companies have been unable to stop the open source movement; the big media companies are unlikely to have any more success with open source content.

Monday, July 05, 2010

"Do-or-Die" time for Sony's eBook business

According to Engadget, Sony has dropped the prices of all of the eBook readers that it sells in the U.S. The entry-level Pocket Edition dropped from $169 to $149, the mid-range Touch Edition went from $199 to $169, and the high-end Daily Edition with 3G went from $349 to $299. The problem is that none of the models are competitive with comparable models from Amazon and Barnes & Noble. The Sony Pocket Edition has a smaller screen and lacks the WiFi of the comparably priced nook. The Daily Edition has essentially the same features as the Kindle 2 and nook 3G, but it costs $100 to $110 more. The Touch Edition is priced $20 less than the Kindle 3G but offers much less value for the money.

Sony has a very difficult decision to make, at least in the U.S. eBook market: Dramatically decrease costs and/or improve specifications to be competitive with the Kindle and nook, or get out of the business. Borders, which was one of Sony's primary sales channels, now has its own hardware reader, the Kobo, which is comparable to the Pocket Edition. It's in Borders' interest to promote the Kobo over Sony's products. Also, the deal that Sony had to make the Wall Street Journal available on the Daily Edition is non-exclusive, and readers can get a much better experience with the iPad.

Sony's eBook business has always been opportunistic; the eBook business unit is headquartered in San Diego, while most of Sony's businesses are headquartered in Tokyo. It's inconceivable that Sony's Vaio PC business unit isn't working on a tablet to compete with the iPad, and the eBook team may well find itself competing with its sister PC business unit.

I suspect that Sony will get out of the dedicated eBook reader business and shift its focus to Vaio-branded tablets and convertible PCs. The eBook group in San Diego may be maintained in order to write eBook reader software and to operate Sony's own eBookstore. However, the "sweet spot" for dedicated eBook readers is rapidly falling below $100, and it's not likely to be an attractive business for Sony.

Thursday, July 01, 2010

Kin's dead, Sidekick's dead, and what does Microsoft have to show for them?

Yesterday, Microsoft announced that it killed its Kin mobile phone line, just two months after it was launched. The Kin won't be rolled out to Europe, and production has been halted on the two models sold in the U.S. Verizon will continue to sell existing inventory until it runs out. Today, T-Mobile announced that it's discontinuing sales of the Sidekick LX and Sidekick 2008, both of which were designed by Microsoft's Danger subsidiary and were precursors to the Kin. T-Mobile is apparently reserving the Sidekick trademark to use for future models, but they won't be compatible with the older models and won't use Microsoft's technology.

Microsoft's decision to kill the Kin was a smart one--the phones were out of sync with the current market. The two Kin models looked like smartphones but weren't--users couldn't add applications. Their original prices were close to those of smartphones, but without the functionality. To make matters worse, Verizon made buyers sign up for two years of a $30/month data service, the same price that they would have paid for a smartphone. The result was that sales were far below expectations.

Microsoft had originally planned to base the Kin phones on the Danger operating system, then spent 18 months moving them to a Windows CE-based platform. During those 18 months, the smartphone market exploded, thanks to the iPhone and Android. Customer expectations were completely reshaped, and what was acceptable in late 2008 was no longer acceptable in 2010.

Similarly, T-Mobile's decision to kill the Danger-based Sidekick phones makes sense. It was clear that Microsoft was discontinuing any further effort to develop the Danger platform, and the current Sidekicks were getting very old, so it was time for them to abandon Danger and move to another platform.

The question is: Why did Microsoft take the Kin phones to market at all? The company had already announced its Windows Phone 7 platform, and when the Kin phones were launched, Microsoft made it clear that they couldn't be upgraded to Windows Phone 7. I'm sure that Microsoft had to make major financial commitments to Sharp, the manufacturer of the phones, both in NRE (non-recurring engineering) costs and in committing to volumes of product for inventory. Microsoft undoubtedly knew what Verizon's service pricing plan was, and should have known how unattractive it would be. It knew the market trends favoring smartphones. As good a decision as killing Kin was, it would have been a far better decision not to launch it in the first place.

And what about Danger? Microsoft spent $500 million purchasing the company in 2008, and now has abandoned both its software and hardware platforms. Microsoft says that some of the social networking concepts from the Kin will be implemented in Windows Phone 7, but it seems like very little return on a big investment.

Microsoft is gearing up for its launch of Windows Phone 7 this Fall, but its mobile operation appears to be in disarray. With Apple and Google piling success on top of success and RIM holding its own, there's virtually no room for Microsoft to make a mistake. They have to do everything right with Windows Phone 7 that they did wrong with Kin to have any chance of success.
