

August 17, 2011

Reducing the Operating Costs of Truck Transportation

The cost of a gallon of diesel fuel remains high and it doesn't look like prices will be reduced dramatically any time soon (if ever). That fact has started a lot of people thinking about what can be done to keep truck freight costs in check. Keeping trucking costs as low as possible is important because it has been estimated that "trucks haul about 70 percent of the tonnage carried by all modes of domestic freight transportation." ["Trucking and the Economy," American Trucking Associations, 2007-2008 edition] The same report claims that because so many supply chains have evolved into "just-in-time/on-demand inventory" networks that "the manufacturers, wholesalers and retailers who represent trucking's largest customers have found that trucks are best suited to satisfy their transportation needs."

The first place one naturally looks to reduce truck operating costs is to the truck itself. Back in 2007, PACCAR and Eaton Corporation announced that they had entered into an agreement to jointly develop proprietary hybrid technology for heavy-duty commercial vehicles in North America. Their announced goal was improving vehicle fuel efficiency by 30% "for selected medium duty applications over the next seven years." ["Hybrid Truck technology hopes for 30% improvement in fuel efficiency," Gizmag, 24 August 2007] PACCAR manufactures commercial vehicles sold around the world under the Kenworth, Peterbilt and DAF nameplates. The PACCAR website indicates that "engineers at its technical centers are utilizing sophisticated simulation technologies and rapid-prototyping processes to develop several hybrid vehicle platforms across a variety of applications." In October 2008, "Navistar unveiled its new hybrid truck configuration - the International® DuraStar Hybrid tractor. The Class 7 International DuraStar Hybrid tractor has the capability to provide fuel savings from 20-25% in standard pickup and delivery applications." ["International Presents its New DuraStar Hybrid Tractor," Infrastructures, October 2008] I'll dedicate a couple of future posts entirely to the subject of fuel-saving commercial delivery vehicles designed for use in urban environments.

While PACCAR and Navistar are looking at hybrid engines to achieve significant fuel savings, a researcher at the Lawrence Livermore National Laboratory named Kambiz Salari believes that truck design can also make a big difference. ["Kambiz Salari and the Aerodynamics of Trucks," by John Tozzi, Bloomberg BusinessWeek, 14 July 2011] Tozzi reports:

"Trucks waste a lot of energy overcoming the drag created by their boxy frames. Designing sleeker big rigs, though, is harder than designing sleeker airplanes given that cargo containers can't be anything other than rectangular. Kambiz Salari puts the challenge this way: 'Here is a box, and you have to somehow make it more aerodynamic, but it's still a box.' Salari ... has spent the last decade figuring out ways to make trucks slice through the air more gracefully. His research meshes with a new national imperative: The Energy Dept. awarded $115 million in grants last year to develop 'super trucks' that are 50 percent more fuel efficient by 2015. Salari says we can get part of the way there—a 17 percent efficiency gain—by making relatively modest tweaks to a truck's contours."

Combine a 30 percent improvement gained by using hybrid technology with a 17 percent improvement gained through better streamlining and, because independent efficiency gains compound rather than simply add, you're at roughly 52 percent, right at the Energy Department's "super truck" target. That's not bad. Tozzi continues:

"An expert in the field known as computational fluid dynamics, Salari uses software to simulate the flow of air across a truck's exterior. Certain areas, such as the underbody or the gap between the trailer and the cab, cause a lot of resistance, but even the 'grab handles' that drivers use to climb into their seats create drag. Salari's complex models take days to compute, even using Livermore's supercomputers. The simulations help Salari’s team zero in on the most promising design modifications, which they then test in the world's largest wind tunnel, at NASA's Ames Research Center in Silicon Valley. One product they've tested is the 'truck skirt,' a pair of flat panels that drop down from the sides of the truck to redirect air away from the underbody. Skirts have been available for years, but many truckers question whether they work. Salari's research proves they do. Over the next 18 months, he expects to publish more such conclusions to help trucking companies evaluate products on the market and design new ones."

Tozzi reports that improvements suggested by Salari's research "could save the shipping industry 6 billion gallons of diesel and $24 billion a year." Tozzi concludes:

"Andrew Smith, founder of ATDynamics, says Salari's work helped 'us to write a business plan.' His company makes a large attachment that fits on the back of a trailer and reduces the vacuum behind a truck. Salari, who emigrated from Iran as a teenager, says he's been fascinated by the complexities of air flows since his undergraduate days at the State University of New York at Stonybrook. After earning a mechanical engineering PhD at the University of New Mexico, he studied such air-flow topics as how hypersonic jets fly and how explosions affect fluids. To Salari, making sleeker tractor trailers is just as fascinating."

Another thing that people have been considering for long-haul trucks is using natural gas rather than diesel fuel. Jeffrey Ball writes, "An 18-wheeler can burn as much fuel in a year as 40 cars. What if it burned domestic natural gas instead of imported oil?" ["Natural-Gas Trucks Face Long Haul," Wall Street Journal, 17 May 2011] Ball continues:

"The typical semi-trailer truck guzzles 20,000 gallons of diesel annually and uses the same roads day after day. So switching trucks to natural gas from diesel, which comes from oil, could make a big dent in U.S. petroleum use. And it wouldn't require building nearly as many new fueling stations as switching America's roughly 240 million cars and light trucks to something other than oil. Like all roads to a potential alternative-energy tomorrow, however, this one is strewn with potholes. A world in which most 18-wheelers run on natural gas instead of diesel is 'going to be pretty slow in coming,' said Bill Graves, president of the American Trucking Association. For years, environmentalists have lobbied for taxpayer subsidies for natural-gas cars and trucks, arguing the fuel burns cleaner than gasoline or diesel. They have had limited success—notably in smoggy Southern California—getting regulators to prod bus and trash-truck operators, owned or contracted largely by municipalities, to make the change. Often, buyers of these natural-gas trucks have received government subsidies that have helped defray the higher purchase price."

Clearly, the political climate has turned against government subsidies. If natural gas-powered tractors are going to become a significant portion of America's truck fleet, they will have to join the fleet on merit, not subsidies. With U.S. natural-gas prices at historic lows, arguments favoring natural gas-powered vehicles can be reasonably made. Ball reports, "Where natural gas is available at U.S. pumps today as a motor fuel, it typically costs about two-thirds the price of diesel after adjusting prices for the different energy contents of the two fuels." He continues:

"Exact figures for the number of natural-gas vehicles on the road are hard to come by. But James Harger, chief marketing officer of Clean Energy Fuels Corp., an installer of natural-gas fueling stations that is partly owned by billionaire investor T. Boone Pickens, estimates 15% of U.S. buses and trash trucks run on natural gas. ... If America could affordably manufacture natural-gas trucks and build enough fueling stations to keep them on the road, the economy could shave billions of dollars a year in imported-fuel bills, backers of the technology say. But that is a big if. Trucks configured to burn natural gas cost more than trucks that run on diesel. They need modified engines and bigger and stronger fuel tanks. How much more they cost differs wildly depending on the type of truck."

Ball indicates that "long-haul trucks present a bigger challenge" than other types of trucks that use natural gas. He explains:

"In the U.S., they consume about 10 times as much diesel as trash trucks and buses combined. The biggest guzzlers are 18-wheelers, which average six miles per gallon. Some 225,000 were sold in the U.S. last year, but many analysts expect that number to soar to 400,000 this year, as the economy improves. United Parcel Service Inc., which runs one of the country's biggest truck fleets, pays about $95,000 for an average long-haul 'tractor'—the front part of the 18-wheeler, housing the engine and driver. It recently ordered 48 natural-gas versions at a cost of $195,000 apiece—about double the cost of a diesel model, said Mike Britt, UPS's director of engineering and maintenance. Fewer than 1,000 natural-gas 18-wheeler tractors have been sold in the U.S., industry experts say. They are 'just about being hand-built, much like a Rolls Royce,' Mr. Britt said. Ramping up assembly lines to build them at volume, he thinks, could 'lower the price dramatically.'"

Ball reports that "UPS bought its natural-gas trucks only after getting $4 million in federal stimulus money to help defray the cost. ... UPS won't buy more natural-gas trucks unless the government forks over additional subsidies." He claims that "the experience of other countries suggests natural-gas vehicles sputter without long-running government aid."

One final way of increasing efficiency is reducing weight. Cargoshell is trying to gain ISO certification for its collapsible TEU container which is "25 percent lighter than a steel container, meaning significantly less energy would be required to carry every container on their estimated 200 million annual trips." ["The Cargoshell: ingenious collapsible replacement for the standard shipping container," by Mike Hanlon, Gizmag, 6 January 2010, and "Collapsible Cargoshell shipping container seeks ISO certification," by Mike Hanlon, Gizmag, 26 July 2010] In its collapsed state, the container also has a significantly reduced profile. The Cargoshell containers have roll-up doors versus traditional swing-out doors, which means they can be stacked closer together. There are, however, drawbacks. It takes one man approximately half an hour to expand or collapse the container. Considering the millions of containers used around the world, that represents a significant manpower cost.
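How significant? A rough sketch, taking the 200 million annual trips from the articles and assuming (my assumptions) one expansion plus one collapse per trip and a $20/hour wage:

```python
# Rough annual manpower cost of collapsing/expanding Cargoshell containers.
trips_per_year = 200_000_000    # from the Gizmag articles
operations_per_trip = 2         # assumed: one expand + one collapse
hours_per_operation = 0.5       # about half an hour each, per the article
hourly_wage = 20.0              # assumed $/hour

labor_hours = trips_per_year * operations_per_trip * hours_per_operation
labor_cost = labor_hours * hourly_wage
print(f"{labor_hours:,.0f} labor hours, roughly ${labor_cost / 1e9:.0f} billion/year")
```

Even if only a fraction of trips involved collapsing the container, the labor bill would be measured in billions, which is why the half-hour figure matters so much.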

The bottom line is that there are some very promising advances being made toward reducing the operating costs of trucks. That's good news for supply chain professionals. The bad news is that many of these advances are not widely available.

August 16, 2011

Global Hunger: The News Is Both Good and Bad

Unless you don't follow the news at all, you are aware that the Horn of Africa is once again struggling with famine and hunger. "The United Nations has declared a famine in parts of rebel-held southern Somalia, the worst in nearly 20 years." ["UN declares famine in rebel-held Somalia," by Katrina Manson, Financial Times, 20 July 2011] Manson reports:

"11.5m people are severely affected by a drought across Kenya, Ethiopia and Somalia, but this is the first time the resulting food shortage has been declared a famine. ... Malnutrition rates in southern Somalia are the highest in [the] world, surpassing 50 per cent in some areas, with deaths at six per 10,000 a day, easily meeting the requirements to declare it technically a famine, Mr Bowden said. The term famine describes a situation where acute malnutrition rates among children exceed 30 per cent, more than two people per 10,000 die a day and people are not able to access food and other basic necessities."

Since Somalia remains the poster child of failed states, desperate people looking for assistance are fleeing in large numbers to Kenya and Ethiopia. According to Manson, "The UN says 3.2m Somalis are in urgent need of lifesaving assistance in the country, with 2.8m of those in southern Somalia." With Islamic extremists controlling areas containing nearly 80 percent of the at-risk children, and with funding in short supply, conditions in the Horn are not good. Interestingly, Manson writes, "Market failure rather than lack of food is to blame." She explains:

"[Mark Bowden, UN humanitarian co-ordinator for Somalia], attributes the crisis in part to a collapse in the terms of trade, saying annual inflation at 270 per cent means food importers have no incentive to bring in food."

Drought has wiped out crops and herds, but the lack of available food can be directly tied to Somalia's failed state status. A Financial Times editorial declared, "Somalia is a humanitarian disaster at the best of times." These are not the best of times. In another article concerning the drought and famine, Manson reports that neighboring Kenya is also having a difficult time dealing with the crisis. She reports that the drought is the worst in 60 years and little is being done to address long-term consequences. She writes:

"The real reason for the food crisis, say analysts and residents, is structural: repeated failed and poor rains and long-term poor management of water, land and markets have become a fact of life. Even before this year's drought, half the people in Isiolo received food relief. That figure has now increased to 75 per cent and is set to rise further." ["Kenya fails to manage its food crisis," Financial Times, 31 July 2011]

The fact that Horn of Africa governments (or lack of government) have proven incapable of dealing with the crisis does not excuse the rest of the world from providing urgently needed help. It does underscore how important good governance is as a foundation for building a sound economy. Even before famine was declared in East Africa, the global food sector was experiencing rising prices and increasing numbers of people going hungry. Back in February, The Economist asked, "What is causing food prices to soar and what can be done about it?" ["Crisis prevention," 24 February 2011] The article states:

"Around the world, the food system is in crisis. Prices have rocketed; they are now higher in real terms than at any time since 1984. They could rise further still if drought lays waste to China's wheat harvest, as is feared. Food has played some role (how large is hard to tell) in the uprisings in the Middle East. High prices are adding millions to the number who go to bed hungry each night. This is the second price spike in less than four years. Companies are sounding the alarm and the G20 grouping of the world's largest economies has put 'food security' top of its 2011 to-do list. This attention is welcome. But today's spike is only part of a broader set of worries. As countries focus on food, they need to distinguish between three classes of problem: structural, temporary and irrelevant. Unfortunately, policymakers have so far paid too much attention to the last of these and not enough to the first."

As Manson noted earlier, structural problems have deepened the crisis in East Africa. The article in The Economist goes on to describe some of the factors that contributed to rising food prices (including weather, speculation, and changing eating habits). It continues:

"A good guess is that food production will have to rise by 70% by 2050 to keep pace with population growth, the explosion of developing countries' megacities and the changes in diet that wealth and urbanisation bring. Big increases will be harder to achieve than in the past because there is little unfarmed land to bring into production, no more water and, in some places, little to be gained by heaping on more fertiliser. Climate change may well exacerbate these problems. For the first time since the 1960s the yields of the world's most important crops, wheat and rice, are rising more slowly than the global population."

Since that article was written, the UN has adjusted its demographic predictions (which earlier had the global population leveling at 9 billion near mid-century) and now forecasts that the global population will continue to rise to 10+ billion through the end of the century. The article claims that "the world cannot feed today's 7 billion people properly." It then rhetorically asks, "How on earth can it feed the expected 9 billion in 2050?" The answer it offers is "high prices." It explains:

"If 9 billion people are to be fed in 2050, countries that produce a miserable one tonne per hectare will have to produce two; the vast amount of food wasted on poor countries’ farms—a third or more of the total—must be saved; and plant breeders will have to reverse the long decline in yield growth. All these things require higher returns to farmers, which will attract higher investment. Without these, there will not just be a billion hungry people (the equivalent of India) but 2 billion extra (two Indias) in 2050. Somehow, returns to farmers must rise without inflicting untold misery on the poor."

Despite how it may sound, The Economist is optimistic that it can be done. It will require good policies, good science, and a little bit of good luck. Mark Bittman asserts, "There are bright spots on our food landscape, hopeful trends, even movements, of which we can be proud." ["Food: Six Things to Feel Good About," New York Times, 22 March 2011] The first "hopeful trend" is that people are becoming better educated (and, thus, more empowered) when it comes to food and nutrition. Bittman writes:

"• Not just awareness, but power -- Everyone talks about food policy, but as advocates of change become more politically potent we're finally seeing more done about it. ... Combined with increasingly empowered consumers and a burgeoning food movement (one that Time magazine's Bryan Walsh suggests has the potential to surpass and save the environmental movement), guarded optimism is called for, especially with the farm bill up for renewal in 2012. If the good guys fail to make some real gains there I'll be surprised."

Bittman's next trend involves the food supply chain. He writes:

"• Moving beyond greenwashing -- [Wal-Mart's] ... plan to re-regionalize its food distribution network, which is ... significant. The world's biggest retailer pledged to 'double sales of locally sourced produce,' reduce in-store food waste, work with farmers on crop selection and sustainable practices, and encourage — or is that 'force'? — suppliers to reconfigure processed foods into 'healthier' forms. ... McDonald’s made a 'Sustainable Land Management Commitment.' We can and should be skeptical of these pronouncements, but the heat that inspired these two giants to promise change may ensure that they follow through."

According to Miguel Bustillo and David Kesmodel, Wal-Mart and other supermarket chains are following through on the "buy local" movement. ["'Local' Grows on Wal-Mart," Wall Street Journal, 1 August 2011] Bittman's next positive trend to celebrate is a new appreciation for "real food." He writes:

"• Real food is spreading -- There are now more than 6,000 farmers markets nationwide — about a 250 percent increase since 1994 (significant: there are half as many as there are domestic McDonald’s), and 900 of them are open during the winte. ... Furthermore, serious and increasing efforts are being made to get that food to the people who really need it: Wholesome Wave, for example, began a voucher program in 2008 that doubles the value of federal food stamps (SNAP) at participating farmers markets; that program has grown more than tenfold in less than three years."

Bittman's next trend -- urban agriculture -- is important because the world grows more urbanized every day. He writes:

"• We’re not just buying, we're growing -- Urban agriculture is on the rise. If you're smirking, let me remind you that in 1943, 20 million households (three-fifths of the population at that point) grew more than 40 percent of all the vegetables we ate. City governments are catching on, changing zoning codes and policies to make them more ag-friendly, and even planting edible landscaping on city hall properties. Detroit, where the world's largest urban farm is under development, has warmly and enthusiastically embraced urban agriculture. Other cities, including Pittsburgh, Philadelphia (more on Philly in a week or two), New York, Toronto, Seattle, Syracuse, Milwaukee and many more, have begun efforts to cultivate urban farming movements. And if local food, grown ethically, can become more popular and widespread, and can help in the greening of cities — well, what's wrong with that?"

You don't necessarily need vacant plots for urban agriculture -- rooftops will do. Urban greenhouses are also making inroads in cities like New York. I'm not sure Bittman's next topic is actually a "trend" right now. I think it's more of a hope.

"• Farming is becoming hip -- The number of farms is at last increasing, although it's no secret that farmers are an endangered species: the average age of the principal operator on farms in the United States is 57. Agriculture Secretary Tom Vilsack recently noted that our farmers are 'aging at a rapid rate,' and when he asked, 'Who’s going to replace those folks?' it wasn't a rhetorical question. But efforts by nonprofits like the eagerly awaited FoodCorps and The Greenhorns, both of which aim to introduce farming to a new generation of young people, are giving farming a new cachet of cool. Meanwhile, the Nebraska-based Land Link program matches beginning farmers and ranchers with retirees so that the newbies gain the skills (and land) they need."

Emerging market countries are going to find that more and more young people leave the farm for the city as their economies heat up. To replace those lost workers, agriculture is going to have to become more mechanized. Even in developing countries whose economies rely more on agriculture than industry, better farming methods are going to be required in the years ahead. Bittman's final positive trend involves better lunches being provided in American schools. Providing children with nutritious meals is just as (if not more) important in emerging market and developing countries. Robert Jensen, an associate professor of public policy at the University of California, Los Angeles, and Nolan Miller, a professor of finance at the University of Illinois at Urbana-Champaign, point out that it is not a straightforward matter deciding who is underfed and who is not. ["A Taste Test for Hunger," New York Times, 9 July 2011] They write:

"Consider this paradox: according to conventional wisdom, hunger is supposed to decline as a country's wealth increases. Yet in China and India, hunger appears to be growing even as incomes increase at phenomenal rates. There are a few possible explanations: unequal distribution of wealth, inefficient or indifferent governments and aid agencies, and recent increases in world food prices. While these factors may play a role, at least part of the answer may be much simpler: we are measuring hunger incorrectly. Suppose you want to figure out if someone has enough to eat. The standard approach is to compare the number of calories eaten to the number needed, with 'need' defined by a statistical average across a population. In effect, policy makers tell people whether they are hungry based on whether the amount of calories they take in conforms to some externally imposed standard. Of course, very few people actually conform to a statistical average. So what if, instead, you looked not just at how many calories people consumed, but at the food they chose to eat?"

Jensen and Miller recommend using a method they call the "staple calorie share." They explain:

"We measure how many calories people get from these low-cost foods and how much they get from more expensive foods like meat. The greater the share of calories they receive from the former, the hungrier they are. The rationale behind this approach is straightforward. We are all familiar with the unpleasant sensations associated with hunger. These are the body's way of telling us that we need more calories. Once those needs are largely met, people will switch to more flavorful, but more expensive, foods. ... With the standard approach, you need to know how many calories the person has taken in and how many the person needs. But that need varies widely based on age, sex, activity level and dozens of other factors. Though some of the factors affecting calorie needs are measurable, many are not. Moreover, it's hard to know how many calories a person is actually getting, since health factors, including the widespread incidence of diarrhea, often mean that only a fraction of calories eaten are absorbed by the body. The staple-calorie-share approach eliminates both problems. Your choice of foods reveals whether you have enough calories. Staple-calorie-share 'need' is less variable across people; though one person may need more calories than another, they will both begin to switch away from staple foods when their needs are met. And your body isn't fooled by how many calories you put into your mouth; the physical sensation of hunger is regulated by the amount of calories you actually absorb."

If you accept Jensen's and Miller's logic and method, they have some good news to share -- there may be fewer hungry people in growth countries (like China) than now believed. They explain:

"The standard approach reveals that in China, the fraction of people consuming fewer than 2,100 calories increased to 67 percent from 53 percent between 1991 and 2001. However, the fraction who appeared hungry, as measured by staple-calorie share (using a threshold of 80 percent of calories consumed through staples), declined to 32 percent from 49 percent. Thus, instead of 150 million more hungry people in China, there were actually almost 200 million fewer. Rising incomes have indeed made people better off; however, they have used their increased purchasing power to buy better-tasting foods, and nonfood items, rather than to increase calories."

Professors Jensen and Miller aren't trying to downplay the seriousness of global hunger. They write, "No matter how you measure it, hundreds of millions of people around the world aren't getting enough to eat." Their point is that better measurements allow for better management. They conclude:

"Aid money is a scarce resource, and policy makers have to decide whether it is best spent on food aid or other forms of vital assistance, like health care. Adopting a more nuanced and accurate measurement of hunger would be a big help in making those lifesaving decisions."

With budget deficits now plaguing much of the developed world, resource decisions are not going to get any easier and increased foreign aid is unlikely. While governments may turn a blind eye to food crises around the globe, individuals and charities will not. They, too, need a better way to assess real need so that they can make informed decisions about how best to help. I applaud the good professors for their efforts.

August 15, 2011

The Future of Innovation Appears Secure

Sharon Daniels, chief executive of AchieveGlobal, claims that "continual innovation may be the most powerful of any competitive advantage for a business." ["The 'Six C's' Model for Building A Culture of Innovation," Chief Executive, 26 April 2011] To achieve "continual innovation," Daniels argues that a company needs to foster a culture of innovation. She claims that her company "has identified the particular tactics that businesses are using to build this culture." Fortunately, she claims that "these tactics are replicable and scalable." In her article she discusses these tactics "under six headings, each beginning with the letter C." She begins with collaboration:

"1. Collaborative -- The great innovators of years past didn’t rely on teamwork. Think Edison, Einstein, Ford. But what worked then doesn't work any more. With rare exceptions (think Michael Dell, Jeff Bezos), innovation now needs the collaboration of a large and diverse group of people -- the larger and more diverse the better. This is confirmed by a Northwestern University study of 19.9 million scientific papers and 2.1 million patents that showed that the more participants involved in innovation efforts the better the results."

Although I agree with Daniels' conclusion, I disagree with her comments. Edison, Einstein, and Ford all collaborated. In his book The Art of Innovation, Tom Kelley wrote, "The myth of the lone genius can actually hamper [an organization's] efforts in innovation and creativity. ... Loners are so caught up in their idea that they are reluctant to let it go, much less allow it to be experimented with and improved upon." He goes on to note that Thomas Edison, who is often pointed to as a lone genius, had a team of fourteen people who helped him conceive, build, and market his inventions. The fact that collaborative innovation is increasing is a good thing. Daniels continues:

"2. Customer Centered -- A culture of innovation requires a deep commitment to understanding customers' expectations and providing them with value. Our study revealed that a company's customers can actually take the lead in helping it make significant innovation breakthroughs."

Again I must add a caveat. In his book The Innovator's Dilemma, Clayton Christensen warns companies about listening to their customers. In an interview with Jena McGregor, he explains why he offered that caution. ["Clayton Christensen's Innovation Brain," BusinessWeek, 15 June 2007] McGregor writes:

(Mc) "In The Innovator's Dilemma you warn that the maxim 'staying close to your customers' can lead you astray. Wouldn't a cursory reading of the book say 'don't listen to your customers?'

(C) "You're exactly right. The cursory reading is 'don't listen.' The deep reading is you have to be careful which customers you listen to, and then you need to watch what they do, not listen to what they say. ... If you understand the job, the opportunities to differentiate are just extraordinary."

Customers can be helpful, but Christensen's message is that actions speak louder than words. Daniels' next "C" is context. She writes:

"3. Context Rich -- Information is essential for innovation. Our study showed that businesses that use innovation as a competitive advantage put lots of effort into developing formal and informal systems for collecting information and free-flowing it throughout the workforce. Their knowledge management systems include advanced forms of application sharing, document sharing, collaborative workplace design and wiki group editing."

Creative people are curious people. Expose them to new ideas and their juices start flowing. They simply can't help themselves. In fact, Daniels' next tactic recommends fostering that curiosity.

"4. Curious -- Innovative leaders encourage their employees to question authority, to question their assumptions, to ask why, why not and what if? These leaders also question their own assumptions to be sure they're open to others' opinions. They make it clear to everyone that the company values experimentation. One way in which they demonstrate this is giving employees opportunities to pursue their ideas on company time and with company resources."

This suggestion deserves another caution. Companies are in business to make a profit and activities pursued on company time should be focused on that goal. Andrew Hill writes, "Applied loosely, these suggestions would be a licence for laxity. Companies need to focus." ["The tight controls needed for creativity," Financial Times, 30 May 2011] Daniels' next tactic involves confidence building.

"5. Confidence Building -- Our research shows that businesses that excel at innovation actively increase their employees' capabilities and self-esteem. They continually improve employees' skills with training that combines live instruction and on-line learning, with employer-paid college courses, job rotation, mentoring programs and stretch assignments. Their learning initiatives cover both technology and the management skills needed for selecting promising job candidates, onboarding them, assessing their performance, motivating them to be more productive and dealing constructively with performance shortcomings."

In tough financial times, the kinds of activities that Daniels describes are often the first to be halted. Long-term benefits are traded for short-term savings. Smart companies know that their most valuable assets are their best employees. Daniels' final "C" involves challenges. She writes:

"6. Challenging -- Most business leaders focus on making their companies function efficiently. Innovative leaders focus on meeting challenges. A six-year study by faculty at Harvard Business School and Brigham Young University concluded that a key characteristic of these leaders is their willingness to challenge the status quo. Our research confirmed this. Leaders of innovative companies constantly challenge the entire workforce to reach for new heights. They set ambitious (though achievable) goals, both short- and long-term, within a particular unit and company-wide. Employees who meet their goals are commended publicly, sometimes by senior executives. Even small wins are celebrated."

Working with purpose is always a good thing. Employees who don't understand how their jobs benefit the company or fit into the big picture are likely to become discontented. Employees who are excited about overcoming challenges will look forward to coming to work and are unlikely to seek employment elsewhere. Daniels concludes:

"Ongoing innovation happens only when there's a living, breathing innovation culture. Leaders who create and nurture this culture will be rewarded with both game-changing breakthroughs and ongoing smaller-scale improvements that cumulatively bring big gains."

If your creative juices are low and you need a boost, seeing what young innovators are up to is one way of rejuvenating your creative energy. Loz Blain writes, "The James Dyson Awards for young inventors are always a treasure trove of fresh ideas and up-and-coming innovators." ["Eight young inventors give us a 2-minute elevator pitch," Gizmag, 30 July 2011] Blain explains that "the James Dyson Award is an international event that brings together products and inventions from young creators all over the world." The article discusses eight Australian finalists for the James Dyson Award and provides videos of their elevator pitches. It's not too difficult to guess that these designers are from Australia since two of the three top award-winning designs involve ocean activities. The three top award winners' elevator pitches are found below.

Bronze Prize: Christina Heggie's Mass Rescue Board

Silver Prize: Chris Fox's 9th Life

"And the winner, from Sydney, was Joshua Sunghoon Mun's Liquid Nitrogen carrier - a product the judges said 'took an everyday, often overlooked problem and applied research, design and innovation to provide a sound solution.' Joshua and a few of the other candidates will go on to the international round of the Dyson awards, vying for a UKP20,000 prize, split between the designer and their university or institution."

My personal favorite was not awarded one of the top three spots, but Ed Linacre's AirDrop Irrigation system was given a "Highly Commended" prize.

Innovators like those shown above (as well as the four other innovators whose elevator pitches were not shown but are worth watching) lead me to the conclusion that innovation is alive and well. To ensure that these young people continue to demonstrate their creativity, they need to find companies that foster a culture of innovation (or they need to start such companies themselves). On that point, I agree entirely with Sharon Daniels.

August 12, 2011

Reducing Healthcare Costs

Despite all of the heated debate that has taken place in the U.S. about healthcare insurance over the past few years, the cost of healthcare continues to rise along with the public's frustration. It seems that everyone talks about reducing healthcare costs, but no one seems capable of doing anything about it. As a company CEO, I know that healthcare costs are a constant concern. McKinsey & Company analysts Tilman Ehrbeck and Nicolaus Henke believe something can be done to reduce costs. They write:

"As leaders of health systems ponder their cost, quality, and access problems, they should draw comfort from the fact that at least some potential solutions already exist. Innovators around the globe have demonstrated effective new ways to reach and interact with patients and treat them at significantly lower cost while improving quality. The real challenge is how to implement, not how to invent. Given the pressure on health systems everywhere, their leaders should do everything possible to help organizations adopt successful innovations and thereby reap the benefits they can provide." ["The emerging market in health care innovation," McKinsey Quarterly, May 2010]

As the title of the McKinsey report implies, the authors believe that many healthcare cost-reducing innovations can be found in emerging market countries. According to Ehrbeck and Henke, "Health care is consuming an escalating share of income in developed and developing nations alike. Yet innovators have found ways to deliver care effectively at significantly lower cost while improving access and increasing quality." To demonstrate their point, they provide two quick examples:

"In Mexico, ... a telephone-based health care advice and triage service is available to more than one million subscribers and their families for $5 a month, paid through phone bills. In India, an entrepreneur has proved that high-quality, no-frills maternity care can be provided for one-fifth of the price charged by the country's other private providers."

Ehrbeck and Henke don't ignore first world efforts at cost reduction. They note, for example, "In New York City, the remote monitoring of chronically ill elderly patients has reduced their rate of hospital admissions by about 40 percent." They believe, however, that healthcare administrators aren't sharing enough information about programs that both improve healthcare and reduce costs. Their study was aimed at helping reduce that information gap. Ehrbeck and Henke write:

"McKinsey conducted research in partnership with the World Economic Forum to study the most promising novel forms of health care delivery and, in particular, to understand how these innovations changed its economics. Many of the most compelling innovations we studied come not from resource-rich developed countries but from emerging markets. Two factors help explain why. First, necessity breeds innovation; in the absence of adequate health care, existing providers and entrepreneurs must improvise and innovate. Second, because of weaknesses in the infrastructure, institutions, and resources of emerging markets, entrepreneurs face fewer constraints (this is one upside of the lack of meaningful oversight, which obviously also has many drawbacks). They can bypass Western models and forge new solutions."

Ehrbeck and Henke examined 30 successful innovations and found that successful programs "pursued a handful of strategies to change the economics of health care delivery in a fundamental way." As they conclude, these programs "were not successful by chance. By understanding the opportunities these innovators seized, leaders throughout the health care system can identify opportunities for their own organizations." The first strategy they addressed was locating healthcare facilities as close to patients as possible. They write:

"Innovators can lower distribution costs and improve adherence to clinical protocols by moving the delivery of care much closer to the homes of patients, providing services that take advantage of their established behavior patterns, or both."

They give the example of an organization called VisionSpring that "teaches local 'vision entrepreneurs'—members of the mainly poor communities they serve—how to diagnose problems such as presbyopia (an inability to focus on nearby objects) and how to determine what type of mass-produced eyeglass would correct it." The next strategy Ehrbeck and Henke discuss is using existing technology to reinvent delivery. In previous posts about innovation, I've noted that innovation often involves combining existing technologies in new ways. Ehrbeck and Henke write:

"'Repurposing' mobile-phone systems, call centers, and other existing technologies and infrastructure allows innovators to extend health care access, increase the standardization of care, and improve labor productivity. ... The use of the existing technology infrastructure would be useful in any part of the world where health care resources are scarce. Yet this approach can also provide benefits in developed countries. Technology could be used, for example, to reduce emergency-room overcrowding by providing phone- or Internet-based advice and triage services during evenings and weekends. Similarly, it could be used to deliver care remotely for patients who require ongoing treatment for diabetes, asthma, or other chronic diseases."

The next strategy discussed by Ehrbeck and Henke is a bit more controversial. They call it right-skilling the workforce. They explain:

"Some smart innovators challenge existing practices—and professional assumptions—about which health workers are allowed to do what. As a result, they can tightly link skills and training requirements to the tasks at hand, thereby lowering labor costs and overcoming labor constraints. In India, LifeSpring uses midwives to provide most of the care at its maternity hospitals. This allows just a single doctor to oversee significantly more patients by focusing on tasks that specifically require a doctor's attention. The company charges only $40 for a normal delivery, rather than the typical $200. In the United States, MinuteClinic uses nurse-practitioners rather than physicians to staff primary-care clinics. In some countries, this approach also helps to ameliorate shortages of medical talent. In sub-Saharan Africa, for example, the HealthStore Foundation has trained community health workers to diagnose and treat the region's top five diseases, which together account for more than half of preventable deaths there."

That idea is controversial because physician groups worry that patients will be lulled into a false sense of security when they are treated primarily by physician assistants or nurse practitioners. Regardless, support is growing for the idea that many patients can be treated by medical personnel other than doctors (read my post entitled Shortages of General Practice and Family Doctors are Impacting Emergency Health Care). Considering the shortage of general practice doctors in the United States, I'm stunned that podiatrists haven't been tapped as an immediate potential source of care. Podiatrists complete rigorous medical school training and the same rotations as MDs, yet they are only allowed to work on ankles and feet. Permitting them to offer care beyond the foot and ankle would add about 15,000 highly trained doctors to the pool of potential general practice care providers. The next strategy is much less controversial -- standardizing operating procedures. By that, Ehrbeck and Henke aren't referring to activities in the operating room as much as to standard business practices. They write, "Whenever possible, successful innovators use highly standardized operating procedures to minimize waste and improve the utilization of labor and assets."

Ehrbeck's and Henke's next strategy won't make the manufacturers of medical devices very happy. They recommend sharing assets, including expensive medical technology to save money. They write:

"Smart innovators use existing institutions, infrastructure, and networks of people to reduce capital investments and operating costs. They then pass the savings on to consumers. India's Health Management Research Institute (HMRI) takes advantage of established supply chains by operating medical convoys—mobile health facilities and health workers delivering care in hard-to-reach rural areas—from public hospitals."

The final strategy recommended by Ehrbeck and Henke involves opening up new revenue streams. They explain:

"Many health care innovators extend their activities into other sectors—even shops and restaurants—to capture additional revenue streams, use them to subsidize costs, or both. Business activities in other sectors can even promote core health care services. Thailand’s Population and Community Development Association (PDA), which focuses on family planning and the prevention of sexually transmitted diseases, established a chain of restaurants and resorts to raise revenue—and to get out the message. Greenstar, a Pakistani nongovernmental organization that focuses on family planning, operates an entire network of retail outlets that sell products such as condoms and offer family-planning advice and health services for women and children."

I'm not sure how well that strategy would work in highly developed nations -- but you never know. Of course, McKinsey & Company analysts are not the only individuals who have been thinking about ways to cut the costs of healthcare. Business and innovation guru Clayton Christensen "argues that hospitals should focus primarily on what he calls intuitive medicine, the process of figuring out what's wrong with a patient. Once the treatment is set and can be routinized, that care should be transferred to lower-cost providers. The best way to do this is to have an integrated system, like what Kaiser Permanente runs in the western U.S., where the hospital owns the outlying clinics and surgery centers--and, ideally, also provides insurance. With more routinized care, nurses can be trained to do doctors' jobs and specialty facilities can focus on driving out inefficiency with high-volume surgeries." ["Clayton Christensen: The Survivor," Forbes, 14 March 2011]

Drew Armstrong writes that doctors and insurers are cutting costs by sharing information. ["The Simplest Rx: Check on Your Patient," Bloomberg BusinessWeek, 23 June 2011] He writes:

"On a computer in his St. Louis medical office, Dr. John Rice often pulls up a list of his 10 patients with the largest bills. They suffer from diabetes, heart disease, or emphysema. They sometimes land in the hospital where they rack up a long list of charges. Rice's job is to make sure they don't set foot in that emergency room. To accomplish this, the chief medical officer of Esse Health, an 80-physician practice, does what policy makers say too few doctors do now and what they hope all doctors will do in the future. Using technology supplied by a private insurer, Rice has a window into his patients' health that extends far beyond the clinic's examining rooms. He can tell when patients have stopped taking their medication or when they're overdue for a routine test. Armed with that information, he can direct his staff to fill out a prescription and deliver it to a patient's home or schedule an appointment at a diagnostics lab. Rice once spent $40 on cab fare for a patient whose daughter couldn't pick her up from the hospital—a bargain, considering an additional day's stay would have cost $1,500. The 57-year-old internist has a good incentive to take these extra steps. Every time he avoids a hospital admission or emergency-room visit, Esse splits the savings with the patient's private insurer. .. By Rice's estimates, his elderly patients spend 39 percent fewer days in the hospital over the course of a year than Medicare patients nationwide."

According to the federal Agency for Healthcare Research and Quality, Rice and his associates are doing the right thing. It says that "preventable re-admissions ... add more than $30 billion a year to the nation's health-care tab, or $1 of every $10 spent on hospital care." ["Don't Come Back, Hospitals Say," by Laura Landro, Wall Street Journal, 7 June 2011] Landro reports that one way some hospitals are trying to prevent readmissions is through the use of a "virtual nurse." She explains:

"Can a virtual nurse named Louise help keep patients from landing back in the hospital after they are discharged? The animated character on a computer screen, who explains medical instructions, is one of several new strategies hospitals are using to help patients make the transition to home, including sending patients off with a 'Home with Meds' packet of medications and having real-life case managers and nurses monitor patients by phone."

Landro discusses a number of programs being tried across the country to reduce readmissions. Katherine Hobson indicates that another way to help reduce costs is simply to let doctors know how much certain tests are going to cost their patients. ["One Way for Hospitals to Cut Costs of Tests," Wall Street Journal, 17 May 2011] She writes:

"Making physicians aware of the costs of blood tests can lower a hospital's daily bill for those tests by as much 27%, a new study suggests. It is common practice at hospitals to test patients' blood every day and it is wasting money and time, according to the study's authors from the University of Miami and Brown University."

Sarah Murray reports that "integrated systems will facilitate better care that costs less." ["IT allows patients more control," Financial Times, 29 September 2010] She writes:

"When it comes to the benefits of technology, hooking everything together is seen as the key to progress, something health-care providers are coming to recognise. In addition, in the wake of the financial crisis, governments have pledged funds to accelerate the implementation of health-care IT systems. In the US, for example, the administration has devoted stimulus funding to the digitisation of patient records. Moreover, with health-care costs rising rapidly (in the US, healthcare accounts for one dollar in every six spent), the urgency to find solutions that can reduce those costs is intensifying. IT holds that promise, with the potential to generate efficiencies through electronic prescribing and an improved ability to co-ordinate patient care, cutting paperwork and unnecessary tests. According to Deloitte, the funding pledged by the Obama administration could save up to $90bn over the next decade."

Murray points out that integrated healthcare systems augmented by personal monitoring devices hold the potential of creating much better (and probably much cheaper) healthcare. Although some people could be concerned about privacy issues, Murray reports that "evidence suggests [that personal and home monitoring] is ... something people welcome." In economically dire times, getting a handle on healthcare costs is going to continue to receive a lot of attention. As the articles cited above confirm, people are putting a lot of thought into how this can be done.

August 11, 2011

You Can Almost Hear the Bullwhip Effect Cracking

In early June 2011, Acer Inc., the world's second-largest personal computer vendor by unit shipments after Hewlett-Packard, took "a one-time charge of $150 million to reflect disputed accounts receivables at its European operations." ["Acer Reassesses Inventory Policies," by Loretta Chao and Lorraine Luk, Wall Street Journal, 6 June 2011] According to Chao and Luk, "Acer said the main reason for the disputes was 'high inventory' carried by distributors of its products, reflecting an 'inappropriate strategy' in its European operations under the current market situation." They note that "for PC companies, inventory management is crucial because large quantities of inventory stuck within distribution channels can hinder the delivery of newer products as the value of the unsold merchandise depreciates." In sectors where technology changes rapidly, like electronics and computers, older inventory is often difficult to sell. As a result, getting inventory right is critical.

Acer isn't the only manufacturer that wrestles with inventory levels. During the recent economic downturn, many manufacturers were caught off guard. Analysts at the Financial Times report:

"Producers of almost everything were left stranded when the global downturn took hold and retailers ran down inventories. On the way back up, the restocking of goods was so dramatic that most economists excluded the effect from their analysis, lest it skew the results." ["Inventories: the bullwhip effect," 31 July 2011]

As the headline states, the depletion and rapid restocking of inventory is called the "bullwhip effect." The article continues:

"This frustrating phenomenon occurs when falling customer demand prompts retailers to under-order so as to reduce their inventories. In turn, wholesalers under-order even further to reduce theirs and the effect amplifies up the supply chain until suppliers experience stock-outs – and then over-order in response. The effect can ripple up and down the supply chain many times. The whip may now be cracking on the downside. Since the February peak, the European PMI has fallen to about 50, the second largest swing in at least a decade. The US has seen a similar fall. It seems many companies restocked too much after the stock market's nadir in early 2009, and are now aggressively destocking."

The Purchasing Managers' Index (or PMI), according to Investopedia, is "an indicator of the economic health of the manufacturing sector. The PMI index is based on five major indicators: new orders, inventory levels, production, supplier deliveries and the employment environment." According to that same source, "A PMI of more than 50 represents expansion of the manufacturing sector, compared to the previous month. A reading under 50 represents a contraction, while a reading at 50 indicates no change." The Financial Times article continues:

"There are signs of stress in some supply chains. After many airlines parked surplus aircraft in deserts during the downturn, airframe builders Boeing and Airbus now have backlogs of about seven years, says Bernstein. Germany's Merck, the world’s largest supplier of liquid crystals by sales, shipped higher volumes of its crystals last quarter, yet Sony, the Japanese electronics group, admitted an 18 per cent drop in second quarter consumer products revenues was primarily due to a difficult LCD television market. To break the unhealthy cycle of glut and shortage, procurement managers need to stay calm when supply changes. But that is clearly easier said than done."

The article offers no recommendations beyond remaining calm. Because supply chains and inventory requirements are so different in various sectors, comparing them is difficult. Generally, however, the editorial staff at Supply Chain Digest reports that "Days of Inventory Outstanding numbers [have] not [been] very good over [the] past six Years." On the other hand, the SCD staffers report, "Real Progress [has been] seen in restaurant chains, multi-line retail and chemical companies." ["Inventory Performance by Industry 2005 to 2010," 27 July 2011] The article continues:

"SCDigest did some work to make the industry sectors a bit more consistent from how most would look at supply chain data, and also for the first time used average DIO to assess performance in each industry sector. ... The [the annual working capital scorecard data from Hackett Group/REL] study defines Days [of] Inventory Outstanding as how many days of sales the company is holding in inventory, and which it defines as: End of Year Inventory Level/(total revenue/365). As such, it is sort of the reverse of inventory turns, in that a higher DIO, all things being equal, means poorer inventory management performance, while a lower number signals improvement."

The magazine "published a graphic showing the year over year changes in DIO across some three dozen industries, which readers can find here: 2010 Days Inventory Outstanding Performance." In addition, the magazine's staff "took that analysis a step further ... and looked at DIO performance across these same sectors from 2005 through 2010, the scope of the REL data." The article continues:

"The results are very interesting, and are displayed in the graphic [contained in the article]. It lists each sector we analyzed, the number of companies in the sector, example companies in the group (or all companies if four or less in the sector), the companies with the lowest and highest DIO figures for 2010, average DIO totals for 2005 to 2010 for each sector (in reverse order), and finally the change for the sector from 2005 to 2010. ... Here are our observations from this data:

  • "12 of the 40 industry sectors we reviewed decreased DIO 3% or more in 2010 from the 2005 totals

  • "11 sectors saw basically flat DIO changes of +/- less than 3%

  • "17 sectors saw DIO increase 3% or more over the period.

  • "There can be some aberations in the data. For example, the top improvement number among sectors over the period was the average 57.8% decrease in DIO in the personal care products category. However, the 2005 number was artificially high based on a very high number from Alberto-Culver, which in turn was probably the result of some accounting peculiarity.

  • "Restaurant chains, multi-line retail chains,and chemical companies seemed to make true and consistent progress in reducing inventory levels over the six-year analysis horizon.

  • "Conversely, among the sectors showing 3% or greater increases DIO over the period, most were up and down over the six years, not showing any particular pattern. The one exception is Aerospace and Defense, where there is a progressive increase in DIO over the period, for reasons not immediately clear. One possible reason is that the US Dept. of Defense in recent years has asked major suppliers to take on more direct responsibility for spares management, and perhaps that has affected the numbers.
  • "We are surprised the food manufacturing sector hasn't seen more progress over the period, given the focus on inventory reduction in that industry in recent years.

"The bottom line is that few sectors have made real progress since 2005, for reasons that are not readily apparent. The sharp rise in oil prices during this time could be one factor, as companies decided to trade off more inventory to reduce transport costs. Many also point to the impact of offshoring, saying that flat inventory performance in the face of much longer supply chains is actually an accomplishment. Maybe so, but we think this overall performance needs further investigation."

I may be mixing apples and oranges, but the thing that surprised me most about the SCD data (especially in light of the Financial Times article that claims we are still experiencing a bullwhip effect) is that there seems to be so little inventory fluctuation over time in a number of economic sectors. Sectors that did experience major changes, beyond aerospace and defense, which was mentioned above, include: automobiles, building products, communications equipment, paper & forest products, and personal products. I'm not as surprised about the defense and aerospace industries as the SCD staff. Considering that we are still involved in Iraq and Afghanistan, their future as suppliers to the DoD could be determined by how well they respond to defense needs in times of conflict.

I'm also not surprised that multi-line retail chains are doing well. Nordstrom has been singled out by many analysts for its innovative approach to inventory management (see my post entitled Supply Chain Helps Nordstrom's Bottom Line) and retailers like Macy's are putting a lot of effort into making their supply chains more efficient. ["Macy's Focuses on Supply Chain Efficiencies to Outperform Competition," by Adrian Gonzalez, Logistics Viewpoints, 22 June 2011] Gonzalez reports, "Rather than doing big replenishment buys, Macy's is focused on 'replenish to sales,' which means smaller, more frequent deliveries to stores, but also less inventory."

When the bullwhip effect cracks, it doesn't ripple through every supply chain at the same time. It is experienced differently by specific supply chains. As a result, there are no "silver bullet" solutions that can make it disappear. Until true demand-driven supply chains are implemented, the best that can be hoped for is better collaboration that helps dampen the consequences of the bullwhip effect. Brent Nagy of TMC offers "5 tips for successful Demand Smoothing":

"1. Dedicated Assignment. Asking a supply chain or transportation employee(s) to fit this forecasting and planning responsibility in amongst their day-to-day tasks typically does not work well. Assign someone who has time to make it a priority.

"2. Supplier and Carrier Collaboration. Give ample warning to vendors and Tier 1 carriers so they know what's coming. Often, they can make their own adjustments to help with surges in your demand.

"3. Metrics Monitoring. Understanding route guide and tender, fill rate, CWT and visibility by way of track and trace and potential late loads all act as ways to manage and isolate areas of concern and success. ...

"4. Use Proven Modeling Tools. While it seems simple, there is a science to Demand Smoothing. There are TMS-related technologies and processes that can be powerful tools.

"5. Project 100 Days Out. Work with sales and manufacturing teams on the inside and with suppliers and carriers on the outside to create a Demand Smoothing plan that looks out 100 days." ["Planning for the Bullwhip Effect," TMC Managed TMS Blog]

Forecasting, planning, and modeling are all important, but they can add to the bullwhip effect when they are wrong just as they can dampen it when they are right. Alluri Raju, Senior Project Manager, Steelwedge Software, believes that "trading partner collaboration will minimize the 'Bullwhip Effect.'" ["Lean Manufacturing Principles: Fundamental to Successful Enterprise Planning and Performance Management," Perspectives on Enterprise Planning, April 2005] He concludes:

"The most effective way for manufacturers and suppliers to minimize the 'Bullwhip Effect' in the Supply Chain is to gain a clear understanding on what drives demand and supply patterns and then, collaboratively work towards improving information quality and compressing cycle times throughout the entire process."

In the six years since Raju penned those sentiments, the call for more and better collaboration has only grown. Most supply chain analysts recognize that we are moving toward a demand-driven supply chain and, when it is implemented, we can only hope that the bullwhip effect will be eliminated as completely as smallpox.

August 10, 2011

Innovating by Mimicking Nature

Since our earliest ancestors began pondering the world about them, humankind has undoubtedly been fascinated by the amazing things that can be found in nature. Amazed at birds in flight, we've developed all sorts of contraptions to get us up into the air. Inspired by waterfowl, we have developed fins for our feet. We have developed scuba gear that permits us to swim with the fishes (in a nice way). We've developed night vision goggles so that we can see like nocturnal predators. Everywhere we look, nature's inspiration abounds. Map Ives, the director of sustainability for Wilderness Safaris in Botswana, told New York Times columnist Thomas Friedman that he believes humans are hardwired to pay attention to and learn from nature. "If you spend enough time in nature and allow yourself to slow down sufficiently to let your senses work," Ives says, "then through exposure and practice, you will start to sense the meanings in the sand, the grasses, the bushes, the trees, the movement of the breezes, the thickness of the air, the sounds of the creatures and the habits of the animals with which you are sharing that space." ["Connecting Nature’s Dots," 22 August 2009]

Past experience and emerging innovations reveal that humankind has benefitted greatly from paying attention to nature. Ives, however, worries that "the speed at which humans have improved technology since the Industrial Revolution has attracted so many people to towns and cities and provided them with 'processed' natural resources that our innate ability to make all these connections may be disappearing as fast as biodiversity." Fortunately, there are still enough scientists and researchers paying attention that we continue to benefit from unlocking nature's secrets. About once a year I get around to writing a blog on this subject (see past posts entitled Turning to Nature to Save Energy, Learning from Nature, and Learning from Nature II). In this post, I'll discuss what we've learned from animals like geckos, shrews, lampreys, octopuses, lizards, and clams as well as from plants like water ferns. Let's start with the latter.

Since Enterra Solutions is involved in supply chain optimization as well as port & harbor security, I follow developments in maritime transportation. Because much of the world's goods travel by sea, the more efficient ships become, the better for everyone. The humble water fern may help make ships more efficient in the future. Noel McKeegan explains:

"Ships are big polluters and one of the key reasons for this is the energy lost due to friction as they move through the water. Numerous innovations in marine paint technology have sought to address this issue and now a group of German material research scientists have unlocked a secret that could radically improve fuel consumption ... and it's all down to the marvelous properties of one small plant. The work by researchers at the Universities of Bonn, Rostock and Karlsruhe centers on the water fern salvinia molesta. This plant fern surrounds itself by a layer of air that enables it to remain dry when underwater. While it has been understood for some time that this is a result of tiny hairs on the plant's leaves which trap air, the problem in mimicking this phenomenon has been to make the layer stick. When replicated, this superhydrophobic surface disappears after several hours in moving water, but salvinia molesta can stay 'dry' even when submerged for weeks. What the researchers have now discovered is how the plant manages to keep this air filled layer in place using nature's version of a staple. 'We were able to show that the outermost tips of these whisks are hydrophilic, i.e. they love water,' Professor Wilhelm Barthlott from the University of Bonn explains. 'They plunge into the surrounding liquid and basically staple the water to the plant at regular intervals. The air layer situated beneath it can therefore not escape so easily.'" ["Unlocking water fern's secrets could pave the way for more efficient ships," Gizmag, 4 May 2010]

Professor Thomas Schimmel from the University of Karlsruhe asserts, "After the solving of the self-cleansing of the lotus leaf twenty years ago, the discovery of the salvinia effect is one of the most important new discoveries in bionics." According to McKeegan, increasing ship efficiency is not the only potential benefit of this discovery. For example, it could lead to "fast drying swimsuits" and "hugely effective raincoats." McKeegan concludes that "other possible applications for this bionic technology are huge."

Turning from the plant kingdom to the animal kingdom, "researchers at Cornell University have created a palm-sized device that uses water tension as a switchable adhesive bond and can support many times its own weight. The device could usher in a whole new generation of superheroes by allowing shoes or gloves that stick and unstick to walls on command, or see the creation of Post-It notes that can bear loads." ["Has the human gecko's time finally come?" by Darren Quick, Gizmag, 2 February 2010] Quick reports that it wasn't the gecko that inspired researchers but "Florida's palm or tortoise beetle, which can stick to a leaf with a force 100 times its own weight by secreting an oil and pressing tens of thousands of bristles against the leaf. It can then release itself in an instant." Quick continues:

"The device uses an electric field from a common 9-volt battery to move water through a three-layer structure. This creates surface tension which gives the device its ability to adhere. Turn off the current and the stickiness disappears. 'In our everyday experience, these forces are relatively weak,' Steen told Cornell Chronicle Online's Anne Ju. 'But if you make a lot of them and can control them, like the beetle does, you can get strong adhesion forces.' A prototype device made with around 1,000 300-micron sized holes was able to hold about 30 grams (1 ounce), but the researchers found that if they scaled down the holes to cram more onto the top plate's surface they could increase the force of adhesion. In fact they estimate that a one-square-inch device with millions of 1-micron-sized holes could hold more than 15 pounds."
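The scaling claim in that passage can be sanity-checked with a back-of-envelope sketch. The model below is my own assumption, not the Cornell team's: it counts only the contact-line (surface-tension) contribution of each water bridge, roughly 2πrγ per hole, and holds the fraction of plate area occupied by holes fixed. Under those assumptions total adhesion scales as one over hole diameter, which is why cramming in smaller holes multiplies the force:

```python
import math

GAMMA = 0.072  # N/m, surface tension of water at room temperature (approx.)

def adhesion_force(area_m2, hole_diameter_m, packing_fraction=0.2):
    """Contact-line contribution to adhesion for capillary bridges.

    Assumes n holes at a fixed packing fraction of the plate area and a
    per-hole force of 2*pi*r*GAMMA (contact-line term only; the Laplace
    pressure term is ignored, so absolute numbers are rough).
    """
    r = hole_diameter_m / 2
    n = packing_fraction * area_m2 / (math.pi * r**2)  # holes that fit
    return n * 2 * math.pi * r * GAMMA

AREA = 0.0254**2  # one square inch in m^2
coarse = adhesion_force(AREA, 300e-6)  # prototype-scale 300-micron holes
fine = adhesion_force(AREA, 1e-6)      # proposed 1-micron holes

# Total force ~ n * r ~ 1/diameter at fixed packing fraction, so shrinking
# the holes 300x boosts adhesion ~300x.
print(f"force ratio (1 um vs 300 um holes): {fine / coarse:.0f}x")
```

The absolute numbers from this toy model undershoot the reported 30-gram prototype (real capillary adhesion also gets a large Laplace-pressure contribution), but the ratio illustrates why the researchers expect millions of micron-sized holes to hold many pounds.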

Research scientists involved in robotics also draw inspiration from the animal kingdom. An article in The Economist claims that historically robots have fallen into two categories -- those that try to look and act human (Anthropoidea) and more pragmatic robots used to accomplish useful tasks (Widgetophora). ["Zoobotics," 7 July 2011] The article asserts that the "few animal-like robots that fell between these extremes were usually built to resemble pets (Sony's robot dog, AIBO, for example) and were, in truth, not much more than just amusing toys." To learn more about this kind of robot, read my post entitled Robots You Can Love. The article continues:

"They are toys no longer, though, for it has belatedly dawned on robot engineers that they are missing a trick. The great natural designer, evolution, has come up with solutions to problems that neither the Widgetophora nor the Anthropoidea can manage. Why not copy these proven models, the engineers wondered, rather than trying to outguess 4 billion years of natural selection? The result has been a flourishing of animal-like robots. It is not just dogs that engineers are copying now, but shrews complete with whiskers, swimming lampreys, grasping octopuses, climbing lizards and burrowing clams. They are even trying to mimic insects, by making robots that take off when they flap their wings. As a consequence, the Widgetophora and the Anthropoidea are being pushed aside. The phylum Zoomorpha is on the march. Cecilia Laschi and her team at the Sant'Anna School of Advanced Studies in Pisa are a good example of this trend. They lead an international consortium that is building a robotic octopus. To create their artificial cephalopod they started with the animal's literal and metaphorical killer app: its flexible, pliable arms. In a vertebrate's arms, muscles do the moving and bones carry the weight. An octopus arm, though, has no bones, so its muscles must do both jobs. Its advantage is that, besides grasping things tightly, it can also squeeze into nooks and crannies that are inaccessible to vertebrate arms of similar dimensions."

As the article notes, Laschi's team has so far developed only a single arm (a monopus), but it plans to build a robot with multiple arms. A video accompanying the article shows the arm in action.

The article also discusses the work of another group of engineers at Sant'Anna, led by Paolo Dario and Cesare Stefanini. This group is designing robots copied after the lamprey. It explains:

"Lampreys are the simplest living vertebrates. Like octopuses, they have no bones (though they do have a rudimentary skeleton made of cartilage). Their nervous systems are simple, too, which makes them a good starting point for studies of the neurological arrangement that eventually spawned the human brain. Sten Grillner's group at the Karolinska Institute in Stockholm has therefore spent many years studying lampreys, in order to gain insights about vertebrate nerves. His latest way of doing so is to look at robot versions of the fish. Dr Dario and Dr Stefanini have built him a device called Lampetra, which is made of circular segments modelled on the lamprey’s cartilaginous vertebrae. Each segment has an electromagnet attached to it, and these are activated by a current that flows from head to tail in more or less the way that a nerve signal flows in a real animal. A segment therefore first attracts and then releases the next, creating a wavelike movement that propels the robot forward."
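The head-to-tail activation described above is essentially a traveling wave, and its timing logic is easy to sketch. The code below is a conceptual illustration under my own assumptions (sinusoidal activation, ten segments, a fixed phase lag per segment), not the Lampetra's actual control scheme:

```python
import math

# Hypothetical parameters for illustration only
N_SEGMENTS = 10
FREQ_HZ = 1.0
PHASE_LAG = 2 * math.pi / N_SEGMENTS  # one full wavelength along the body

def segment_activation(t, i):
    """Activation level (-1..1) of segment i at time t seconds.

    Each segment lags the one ahead of it by a fixed phase, mimicking the
    head-to-tail current that activates successive electromagnets.
    """
    return math.sin(2 * math.pi * FREQ_HZ * t - i * PHASE_LAG)

# The defining property of a traveling wave: after one N-th of a period,
# each segment reproduces the state its upstream neighbour just had, so
# the crest of activation sweeps from head to tail and propels the robot.
dt = 1.0 / (FREQ_HZ * N_SEGMENTS)
for i in range(N_SEGMENTS - 1):
    assert abs(segment_activation(0.0, i) - segment_activation(dt, i + 1)) < 1e-9
print(f"activation wave advances one segment every {dt:.2f} s")
```

The same phase-lag idea underlies most undulatory swimming robots: the body shape is a wave that moves backward relative to the body, pushing water tailward and the robot forward.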

Although the primary purpose of the Lampetra "is to explain how vertebrates use perception to guide their movements," the article claims that "Lampetra's unique propulsion system could also have useful applications, as it has proved an efficient way to move a machine through water." Among the other projects discussed in the article is one headed by Daniel Germann of the University of Zurich. "He works on clams and is building robot versions of them to find out how the shape of an animal's shell affects its chances of survival." Another project is called "StickyBotIII, a robot gecko developed by Mark Cutkosky's team at Stanford University." Still another project is one headed by Tony Prescott at the University of Sheffield, in England. His team is "attempting to replicate the exquisitely sensitive whiskers of the Etruscan shrew." They have "built Shrewbot, a robot that reproduces the animal's head. It has 18 whiskers of different lengths and its software moves these independently of one another, using the information thus gathered to decide whether an object is worth further investigation. So far, Shrewbot can distinguish a smooth surface from a corrugated one. Soon, Dr Prescott hopes, it will be able to recognise basic shapes such as spheres, cubes and cylinders as well. The long-term goal is to build a robot that can operate in places where vision is not much use — smoke-filled buildings, for example."

The article concludes with a discussion of flying robots. It claims the Holy Grail for engineers is to develop a robot with "the flapping-wing flight of insects, with its attendant ability to hover. A tiny flying robot of this sort, equipped with a camera, could get into places that are too small or dangerous for people — enemy bunkers, for example — and report what was going on." Efforts to achieve that goal are underway by researchers at the University of Delft. "Led by Rick Ruijsink, [the team has] developed DelFly, a robotic version of a dragonfly that has two pairs of flapping wings which are moved by an electric motor. DelFly can alternate between flying at high speeds and hovering, in order to take a better look at interesting places." The article continues:

"Another flying robot, the AirBurr, developed by Jean-Christophe Zufferey at the Ecole Polytechnique in Lausanne, uses a different approach. It does not look much like an insect, but it behaves like one. In particular, it has an insect-like way of dealing with obstacles. Instead of trying hard to avoid them in the first place, it is designed to recover quickly from the occasional thud against the wall, and resume flying."

The article concludes that even though robots of the future might draw heavily from nature, they might not look very much like the creatures that inspired them. "Robots of the future might end up resembling medieval monsters, with shrews' heads, octopuses' arms and lampreys' bodies. More likely, though, is that specialist machines will be designed to collaborate, with reconnaissance airbots feeding information to groups of groundbots or seabots that are designed to perform different tasks — a robotic ecosystem, you might say." Human beings pride themselves on being clever. It appears we have finally become clever enough to realize that we have a lot more to learn from the world around us.

August 09, 2011

China Plies the Oceans' Floors Looking for Resources

Last summer "three Chinese scientists plunged to the bottom of the South China Sea in a tiny submarine" and planted the Chinese flag on the ocean's floor. ["China Explores a Frontier 2 Miles Deep," by William J. Broad, New York Times, 11 September 2010] Broad notes that the scientists' descent "signaled Beijing’s intention to take the lead in exploring remote and inaccessible parts of the ocean floor, which are rich in oil, minerals and other resources that the Chinese would like to mine." Obviously, three men in a submarine "the size of a small truck" are not equipped to mine the riches of the deep; but, exploration is the first step leading to exploitation. What is there to be exploited in the oceans' depths? Broad claims, "The global seabed is littered with what experts say is trillions of dollars' worth of mineral nodules as well as many objects of intelligence value: undersea cables carrying diplomatic communications, lost nuclear arms, sunken submarines and hundreds of warheads left over from missile tests." He continues:

"The small craft that made the trip — named Jiaolong, after a mythical sea dragon — was unveiled publicly [in late August 2010] after eight years of secretive development. It is designed to go deeper than any other in the world, giving China access to 99.8 percent of the ocean floor. Technically, it is a submersible. These craft differ from submarines in their small size, their need for a mother ship on the surface, and their ability to dive extraordinarily far despite the darkness and the crushing pressures. The world has only a few."

Deep-diving submersibles are not new. The United States, Russia, France, and Japan all operate such vehicles. The world's oldest such vessel is the U.S. Navy-owned Deep Submergence Vehicle named Alvin (named in honor of Allyn Vine, a Woods Hole Oceanographic Institution engineer and geophysicist who helped pioneer deep submergence research and technology). ["Human Occupied Vehicle Alvin"] The Alvin was built in 1964 and has made nearly 4500 dives. Despite its age, Alvin "can reach nearly 63 percent of the global ocean floor." The WHOI article on Alvin notes:

"The sub's most famous exploits include locating a lost hydrogen bomb in the Mediterranean Sea in 1966, exploring the first known hydrothermal vent sites in the 1970s, and surveying the wreck of RMS Titanic in 1986. Alvin carries two scientists and a pilot as deep as 4,500 meters (about three miles) and each dive lasts six to ten hours. ... Though it is the world's oldest research submersible, Alvin remains state-of-the-art due to numerous reconstructions made over the years."

Compared to Alvin's operational depth limit of 4,500 meters, the "Jiaolong is meant to go as deep as 7,000 meters, or 4.35 miles, edging out the current global leader. Japan's Shinkai 6500 can go as deep as 6,500 meters." Broad continues:

"China is moving cautiously. Jiaolong's sea trials began quietly [in 2009] and are to continue until 2012, its dives going deeper in increments. 'They're being very cautious,' Dr. [Don Walsh, a pioneer of deep-ocean diving], said. 'They respect what they don't know and are working hard to learn.' In an interview, Dr. Walsh said that the Chinese were especially interested in avoiding the embarrassment of a disaster that ends with the aquanauts' entrapment or death. ... Still, China is already waving flags. ... Wang Weizhong, a Chinese vice minister of science and technology, said that the Jiaolong's sea trials 'marked a milestone' for China and global exploration. The recent successes of the craft, he said, ... 'laid a solid foundation for its practical application in resource surveys and scientific research.'"

Although the Jiaolong has avoided exploration in contested ocean areas, Broad notes that, "with expanding political ambitions and territorial claims in neighboring seas, [China] has paid special attention to oceanography and building a blue-water navy, one that operates in the deep waters of open oceans." Although the designed operational depth limit of the Jiaolong is 7,000 meters, it has yet to test those limits. During its most recent dive, the Jiaolong descended "to 5,057 meters (16,591 feet)." ["China Makes Milestone Dive," by Jeremy Page, Wall Street Journal, 27 July 2011] While that depth sets a Chinese record, it falls short of the overall depth record currently held by the Japanese. Page reports that an attempt on that record will probably take place sometime next year. Even operating at a depth of around 5,000 meters, Page reports that the Jiaolong "is capable of reaching 70% of the ocean floor." Page continues:

"Japan's Shinkai can go down to 6,500 meters, Russia's Mir and France's Nautile to 6,000 meters, and the U.S.'s Alvin to 4,500 meters, although an upgraded version of the Alvin, designed to reach 6,500 meters, is scheduled to be ready by 2015. The capability of such vessels is significant as rising prices for many industrial commodities mean there is growing interest among state-run and private mining companies in exploiting mineral resources under the oceans, which cover about 70% of the Earth's surface."
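The depth ratings cited in the passages above can be gathered into a small lookup. This is only a convenience sketch built from figures the articles themselves quote; the function and labels are my own:

```python
# Design depth ratings quoted in the Broad and Page articles (metres).
DEPTH_RATINGS_M = {
    "Jiaolong (China)": 7000,
    "Shinkai 6500 (Japan)": 6500,
    "Mir (Russia)": 6000,
    "Nautile (France)": 6000,
    "Alvin (U.S.)": 4500,  # upgrade to 6,500 m reportedly due by 2015
}

def can_reach(target_depth_m):
    """Return submersibles rated for at least the given depth."""
    return sorted(name for name, rating in DEPTH_RATINGS_M.items()
                  if rating >= target_depth_m)

# Only the Jiaolong is designed for its eventual 7,000 m goal:
print(can_reach(7000))  # ['Jiaolong (China)']
print(can_reach(6000))  # every craft except the current Alvin
```

Deeper ratings matter because, per the articles, each increment unlocks a larger fraction of the seabed: roughly 63% at Alvin's 4,500 m, 70% at 5,000 m, and 99.8% at the Jiaolong's designed 7,000 m.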

Page reports that Chinese officials readily admit that "the Jiaolong is designed to explore for valuable mineral resources on the ocean floor." He continues:

"The British journal Nature Geoscience published a paper this month in which Japanese researchers claimed to have discovered vast deposits of rare-earth minerals—used in a variety of high-tech products—on the ocean floor east and west of Hawaii at depths ranging from 3,500 meters to 6,000 meters. China hasn't addressed whether it will explore for rare-earth deposits specifically. The Jiaolong is diving at the site in the Pacific because China was granted rights to explore for minerals there in 2001 by the International Seabed Authority, a U.N. body that oversees mining in international waters. ISA, meeting at its headquarters in Jamaica, also approved ... applications from China and Russia—the first from any countries—to explore relatively newly discovered deposits called polymetallic sulphides that form around volcanic vents in ridges on the seabed. China applied last year to explore the site in the Southwest Indian Ridge, which bisects the ocean between Africa and the Antarctic. Russia applied to explore a Mid-Atlantic Ridge site. U.S. scientists in an Alvin discovered polymetallic sulphides, which contain base metals that include copper, lead and zinc, as well as gold and silver, in 1979, when they found vents spewing superheated fluids on the ocean floor off the west coast of Mexico. But many experts say that U.S. investment in such research has declined over the past two decades, even as some resource-hungry emerging economies stepped up their efforts to develop deep-sea exploration technology."

Page notes that one obstacle preventing the U.S. from pursuing a more vigorous exploration program is "that it hasn't ratified the 1982 U.N. Convention on the Law of the Sea, and so is only an observer, rather than a full member of ISA." For years, U.S. Navy officials have urged Congress to ratify the UNCLOS agreement. Page reports, "China, which has ratified the convention and is an ISA member, has been active in deep-sea exploration since 2002, when it launched a program that included developing the Jiaolong."

Although the wonders of "the abyss" are as fascinating as the wonders of space, any exploitation of ocean resources is more likely to involve unmanned vehicles than manned ones. But, just as with the space program, manned activities historically generate more publicity than unmanned activities. It remains unclear when mining the resources of the deep will become economically feasible; what is apparent is that, once that tipping point is reached, countries that have done the exploration will be in a better position to exploit known deposits than those that haven't.

Scientists at Rutgers University believe that small, semi-autonomous submersibles are likely to perform much of the exploration in years ahead. A couple of years ago, they helped develop a small submersible glider nicknamed Scarlet that made a 221-day trip across the Atlantic. Following that epic journey, Clayton Jones, an engineer who works at Teledyne Webb Research, the company that made Scarlet, said, "In a decade we think it will be commonplace to have roving fleets of these gliders making transoceanic trips around the world." ["Submersible glider spent months collecting data on Atlantic waters," by David Brown, Washington Post, 15 December 2009] Scarlet collected oceanographic data. It wasn't built or equipped to look for deep ocean resources. In concept, however, such exploratory vehicles are probably on the drawing board if not already in development. Whatever the future holds, China has made it clear that it wants to remain on the cutting edge of oceanographic exploration and exploitation.

August 08, 2011

America's Infrastructure

Following the publication of my post entitled Analysts Raise Warnings about the Fragility of Logistics (which, among other things, discussed the disturbing state of U.S. infrastructure), I was contacted by Jennifer Lynch, who directed me to a post she thought readers of this blog might find interesting. The post is entitled "15 Startling Facts about America’s Infrastructure." For anyone who has paid the least bit of attention to the state of America's infrastructure, the facts presented are probably not all that startling. They are, however, worrisome. Although the site hosting the post is primarily focused on issues surrounding automobiles, the infrastructure post deals with a range of issues beyond highways. The other thing I really like about the post is that it links readers to articles that explain the infrastructure challenges in greater detail. The post begins:

"The infrastructure of a nation is what holds civilization together. It includes roads, water supplies, sewers, electrical grids, and telecommunications — things without which the world might prove a difficult place to navigate. While Americans enjoy a better infrastructure than many places in the world, the reality is that it is outdated, inefficient, and — in many places around the nation — currently crumbling to pieces."

That assessment parallels the assessment of procurement expert Paul Teague, who writes, "Procurement executives with logistics responsibilities know, the most appropriate [type of] music for much of the US transportation infrastructure, including the highways, is the blues." ["US transportation at breaking point," ProcurementBlog, 24 January 2011] The list of startling facts about U.S. infrastructure begins with a well-known problem -- the sorry state of U.S. bridges.

"1. More than 25% of bridges in the United States need significant repairs or are handling more traffic than they were designed to carry.

"This translates to a whopping 150,000 bridges that aren't up to snuff. In recent years, bridge and overpass collapses have even led to death. One of the most notable of these was the I-35 bridge in Minneapolis, which collapsed in 2007, killing 13 and injuring 145. If bridges are not updated or repaired, these kinds of accidents could become more common."

Since the U.S. population is predicted to increase, it doesn't take a Nostradamus to predict that bridge traffic will also continue to increase. Add to that the fact that some people want to increase the weight limit on trucks using those bridges (see my post entitled The Supersized Supply Chain) and you can see why some analysts are so alarmed.

"2. An inefficient, heavily overburdened electrical grid results in rolling blackouts and losses of $80 billion a year.

"In a world that relies heavily on technology for everything from health care to business, losing power can be a big deal. In the past decade, huge blackouts have left much of the Northeast and Florida without power for several days. This costs money, time, and can create unsafe conditions for residents."

I have been writing about the inadequacy of U.S. electrical grids for several years. Back in 2009, I cited an article from The Economist that noted that electrical grids "have changed very little since they were first developed more than a century ago." ["Building the smart grid," 6 June 2009] Of course, for most of those years there was very little reason to change. Capacity was sufficient to meet customer needs so grids simply connected power plants on one end and consumers on the other. But The Economist agrees that that industrial approach "is now showing its age." The next two "facts" concern the state of U.S. dams.

"3. Over 4,095 dams in America were deemed “unsafe” by the American Society of Civil Engineers.

"This means that they have deficiencies that leave them more susceptible to failure, especially during flooding or earthquakes. The number of dams in the United States that could fail has grown 134% since 1999, and now comprises 3,346 dams nationwide. More than 1,300 of these dangerous dams are considered 'high hazard' because their collapse could threaten the lives of those living nearby.

"4. More than a third of all dam failures or near-failures since 1874 have happened in just the last decade.

"The rate of failures is increasing at a disturbingly fast rate, as America’s dams age and deteriorate. Can't remember any recent dam failures? In 2004, 30 different dams in New Jersey's Burlington County failed or were damaged after a period of particularly heavy rainfall."

Despite debates about climate change, one thing we know is that weather is unpredictable. Another thing we can predict is that as the population increases, the need for water will also grow. Because of that unpredictability, dams will be required to ensure that burgeoning populations have access to water regardless of changing weather patterns. The next five "facts" concern the state of the nation's roads and highways.

"5. Nearly a third of all highway fatalities are related to substandard road conditions, obsolete road designs, or roadside hazards.

"The Federal Highway Administration estimates that poor road conditions play a role in more than 14,300 traffic fatalities each year."

"6. By 2035, highway usage (and shipping by truck) is expected to double, leaving Americans to spend an average of 160 hours a year in traffic.

"If you think traffic is bad now, just wait a few years. Over the next quarter-century, experts estimate that traffic on American roads is going to be much, much worse. Commuting between work and home could be a nightmare for many, taking up nearly a week of time over the course of the year. Also, keep in mind that this number is just an average, and in high-traffic urban areas, the estimates are much higher."

"7. More than half of America’s interstate miles are at 70% of traffic capacity, and nearly 25% of the miles are strained at more than 95% capacity.

"Americans love their cars, and the roads are clogged with drivers as a result. Much of the interstate system in the U.S. is struggling to keep up with the number of people who use it each day, leading to traffic jams and accidents at much higher rates."

"8. It is estimated that over one third of America’s major roads are in poor or mediocre condition.

"If you hadn't already noticed that the streets in your city were littered with potholes and cracks, this stat will let you in on the secret: American roads are falling apart. With many states teetering on the edge of bankruptcy and unable to keep up with maintenance, this situation isn't likely to change soon."

"9. Traffic jams caused by poor transit infrastructure cost Americans 4 billion hours and nearly 3 billion gallons of gasoline a year.

"Highways designed to carry fewer cars than they're currently managing, poorly timed lights, and awfully-designed transit systems all help contribute to traffic jams. These jams keep drivers on the road for longer, wasting gallon upon gallon of gas and hour upon hour of time."

Facts concerning the future rely almost exclusively on extrapolating today's driving patterns into the decades ahead. If gasoline and diesel prices continue to soar, those patterns may change. Even if they do change, investments will still have to be made to repair and upgrade the nation's roads and highways. If there is a dramatic move away from internal combustion engines to electric motors, gasoline taxes may have to be augmented by a use tax that spreads the cost of upkeep equitably across all those who use and benefit from using the highway system. And, as noted in Fact 11 below, those expenses are going to be significant. The next "fact" concerns the nation's sanitation systems.

"10. A study by the EPA exposed the dirty truth about America's aging sewer systems: they spill an estimated 1.26 trillion gallons of untreated sewage every single year.

"Not only is this a health and environmental concern, but it's also a financial one. Cleaning up these spills costs an estimated $50 billion every year."

As water becomes an even more precious commodity, treating the water used to dispose of human waste is going to receive greater attention. Managed correctly, treating waste water could prove to be a money maker rather than a money loser in the years ahead. Back to the highway and other transportation systems.

"11. The United States must invest $225 billion per year over the next 50 years to maintain and adequately enhance roads and other transportation systems to meet demand.

"Currently, the U.S. is spending less than 40% of this amount, which will make it impossible to effectively keep up with and expand the transit system."

The United States is not the only country that needs to invest in infrastructure. I have read estimates as high as $10 trillion to build out the necessary infrastructure worldwide to support the global economy. Unfortunately, the U.S. is moving in the wrong direction.
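The gap implied by Fact 11 is easy to quantify. Taking the quoted figures at face value ($225 billion needed per year, current spending below 40% of that), the arithmetic works out as follows:

```python
# Figures quoted in Fact 11 of the infrastructure post
needed_per_year_bn = 225    # required annual U.S. transportation investment
spending_share = 0.40       # "less than 40% of this amount" is being spent

# Lower bound on the gap, since actual spending is below the 40% ceiling
shortfall_bn = needed_per_year_bn * (1 - spending_share)

print(f"annual shortfall: at least ${shortfall_bn:.0f} billion")
print(f"over 50 years:    at least ${shortfall_bn * 50 / 1000:.2f} trillion")
```

That cumulative figure, at least $6.75 trillion for U.S. transportation alone, helps put the multi-trillion-dollar worldwide estimates in perspective.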

"12. In 2005, U.S. infrastructure earned a D rating from the American Society of Civil Engineers. This was down from a D+ in 2001 and 2003.

"It's no joke that the infrastructure of the U.S. is getting worse and worse. In some areas, quality of water, electricity, and roads have been compared to those of a developing nation. Major changes need to be made to keep up, modernize, and allow America to remain competitive in the world market."

The next "fact" talks about U.S. ports and harbors.

"13. By 2020, every major U.S. container port is projected to be handling at least double the volume it was designed for.

"Imports and exports are major, major business for the U.S., and in the future, this isn't likely to change. Yet the ports we use to do our trading are going to be seriously overloaded and will need a major overhaul to adequately deal with the number of ships coming in and out."

For more on this subject, read my post entitled Ports Vie for Increased Market Share. The next "fact" deals with the airline industry.

"14. Costs attributed to airline delays related to congestion and outdated air traffic control systems are expected to triple to $30 billion from 2000 to 2015.

"Sitting on the tarmac waiting to take off or deplane isn't just annoying — it's costing businesses billions of dollars each year. The amount of time lost or wasted on flights is continually rising, up to 170 total years (15 minutes lost on 1.6 million flights) in 2007 from just 70 years lost in 2003."

It isn't just man-hours that can be lost. Antiquated traffic control systems could potentially impact the air freight business as well. Since much of that business involves the movement of time-sensitive goods, the financial impacts could be quite high. The final "fact" concerns the rail sector.

"15. Railroads are expected to need over $200 billion in investment through 2035.

"Railroads are a viable, if not quick, means of transporting people and goods the world over — but in the U.S., many lines are painfully inefficient and falling apart. While money is being poured into modernizing train systems (most notably high speed rail on some Amtrak lines), much more will be needed to keep pace with the amount of rail traffic in coming years. Not to mention everything it will take to make rail travel an appealing option to notoriously phobic Americans."

The rail industry touts its efficiency in moving the goods that keep the American economy going, and the industry itself bears major responsibility for maintaining the country's rail infrastructure. In every infrastructure area, public/private partnerships are likely to play an increasing role in maintenance and construction. Michael Ennis, transportation director at the Washington Policy Center in Seattle, writes:

"Using private investment through public-private partnerships (PPP), lawmakers can fund new projects, shift risk, maintain current transportation infrastructure and increase value to taxpayers. There are many benefits associated with a PPP. These include leveraging private dollars for public use, shifting risk from taxpayers to the private sector, using competition to create incentives that lower capital and operating costs, and gaining a more efficient distribution of scarce transportation resources. Other factors like public oversight, asset ownership, long-term maintenance, liability and labor costs will dictate which PPP is a better fit. In some cases, these issues have been treated as obstacles and have prevented partnerships from forming. Yet other states have solved these problems and have adopted several types of partnerships. Undoubtedly, these concerns are important, but they should not deter us from pursuing the benefits of a public-private partnership. Partnering with the private sector is one way to increase financial resources and get roads built."

With the political parties in Washington, DC, demonstrating their inability to get along, we can only hope that a few wise politicians can find a way to work with the private sector to help meet America's future infrastructure needs.

August 05, 2011

Supply Chain Recovery from Japan's Earthquake/Tsunami: An Update

Back in March 2011, supply chain professionals were just beginning to grapple with the effects of the devastating earthquake/tsunami combination that struck Japan. One thing was clear, however: the impacted sectors would include high tech and electronics, automotive, aerospace, specialty instruments, and agriculture. Although Japan has made a remarkable recovery to date, lingering challenges remain. For example, the crippled Fukushima Daiichi nuclear plant has reduced the country's power generation capacity, which could affect manufacturing for years to come. In addition, the Japanese government has abandoned its plans to become even more dependent on nuclear power plants for generating electricity. There have been some success stories, however. Reader and fellow blogger Michael Koploy directed me to one of his posts on this subject entitled The Post-Tsunami Supply Chain All-Stars | Who Recovered the Fastest and How? [Software Advice, 15 July 2011] Koploy writes:

"Many will not forget the sights of the March 2011 earthquake and tsunami in Japan. These natural disasters tested Japan and the manufacturing prowess upon which the country had been built. Unfortunately, the manufacturing methods that led to glorious growth in post-war Japan are what led to post-tsunami crippling. Lean manufacturing – established as the gold standard by Toyota and other Japanese manufacturers – relies on the knowledge that processes and conditions will stay the same day after day. Harnessing this predictability and empirically analyzing results have allowed our global economy to become what it is today. Yet the very nature of our global economy makes 'leaning out' difficult. Disasters can halt production for weeks. War and social struggle put business on the back burner. And the default on sovereign debts threatens to rip apart the supply chain one link at a time."

Shortly after the enormity of the earthquake and its consequences were understood, Paul Martyn, a VP of Supply Strategy at BravoSolution, a provider of supply chain management software, wrote, "Lean is Dead, Long Live Lean. Well, maybe not dead. No one will ever go back to the days of bloated inventories. The cost benefits of holding down inventory levels are just too compelling. We will, however, see a lessening of the rigidity of the rule of thumb that zero inventory makes the best business sense." ["Supply Chain Lessons From Japan," Forbes, 6 April 2011] Martyn's assessment of the impact of the Japanese disaster agrees with Koploy's. "Supply chains today," he writes, "are attenuated and extremely vulnerable to repercussions from events far outside the control of those who manage and rely on them." Koploy doesn't focus on impacts, however; instead, he discusses companies that beat the odds and demonstrated their resilience. He continues:

"Fast-forward four months after the destruction of the March tsunami began, and select Japanese manufacturers have not only begun to rebuild, but emphatically rally back. So, who are these all-stars, and how did they do it?"

Koploy begins his discussion in the automotive sector. He claims that the traditionally secretive "Big 3" automakers, Toyota, Honda, and Nissan, saved themselves by sharing secrets. He writes:

"The 'secret sauce' of many supply chains is their connections, relationships, and deals with suppliers. Ironically, the Japanese Big 3 of Toyota, Honda, and Nissan were forced to share these secrets and work together to help assist the auto parts suppliers affected by the earthquake and tsunami. The result? Parts in high demand and short supply have been reduced from 500 to about 30 for Toyota, who is now operating at 90% of its normal capacity. Nissan is reportedly operating at near-normal production levels, while many Honda plants are ramping-up production to pre-earthquake production levels. While production levels are still less than desired, and certain models' availability is limited, this is a positive movement for the industry."

Koploy reports that the Japanese Big 3 continue to face some challenges, including regaining market share; but, he believes that their situation "could have been much worse without all three players' cooperation." He believes they "are laying the foundation for future cooperation in the event of additional disasters." Another all-star identified by Koploy, from a different industrial sector, is Canon, "the Japanese producer of many of the world's printer and camera products." Koploy indicates that "diversified production centers put Canon ahead of schedule." He explains:

"Canon ... has been one of the great success stories of production post-disaster. Although it was initially expected to take the remainder of the year to recover, Canon is now at pre-disaster production levels. These results even surprised Canon, whom initially announced it wouldn't be until the end of the year before production reached desired levels. How did they do it? One of Canon's overall strategies and focal points has been diversifying its parts production efforts, including focusing on plants in southern Japan and mainland China. Diversification is a central, reoccurring theme within Canon; general business diversification is core to Canon's strategy. Canon's success is a great example of how investing in supply chain redundancy can be fruitful both today and tomorrow."

Diversity and redundancy can increase resilience, but they can also increase supply chain costs and complexity. For years, however, supply chain professionals have argued against cost-cutting efforts that sacrifice supply chain resiliency, which they see as risky. Canon's resiliency may provide the example they need to bolster their case. Koploy's next all-star company, Apple, has been recognized as having one of the best (if not the best) supply chains in the world (see my post entitled Gartner Publishes Its Supply Chain Top 25). Koploy reports:

"Apple's iPad 2 was announced days before the earthquake and tsunami hit Japan. Many expected the disaster to greatly affect availability of Apple's newest tablet model. In reality, supplies for U.S. markets and other parts of the world were largely unaffected. Apple's investment in its supply chain has made it seem almost impermeable to disaster. In fact, Apple's supply chain has been called one of the secrets to its success. The company's ability to produce large amounts of high-quality tech products – at a price that consumers are willing to pay – has helped Apple become the third-largest vendor of PCs in the world. Apple has found the right combination of investing in technology manufacturers and negotiating attractive supplier contracts to effectively mass produce its high-quality products. Reinvesting its profits in its supply chain has ensured Apple continued access to key components, even in light of natural disasters and increasing competition."

In the report about the Supply Chain Top 25, Gartner analysts wrote, "Leaders took some very clear lessons from the events of the past couple years, with one of them being the need for supply chain resilience: the ability to deliver predictable results, despite the volatility that many have pointed out is now here to stay. Speed, agility, efficiency, responsiveness and innovation — all remain critical, but equally important is a resilient supply chain." As Koploy points out, Apple's supply chain is a great example and it explains why it has topped Gartner's list for the past several years. Koploy's final all-star company is Fujitsu. He writes:

"One of the major manufacturers of semiconductors in Japan, Fujitsu, has wafer fabrication operations near the heart of the earthquake disaster zone. Yet, the company recovered more quickly and efficiently than any other semiconductor supplier. In fact, five plants dedicated to manufacturing the chips were back at pre-earthquake production levels in less than three months. How did they do it? iSuppli notes that Fujitsu was actually ready for a such a disaster before the earthquake. The company had developed an emergency response strategy after an earthquake rocked Iwate three years earlier. Fujitsu quickly weathered the storm by shifting front-end product manufacturing to unaffected plants in central Japan and back-end product manufacturing to southern Japan and China. Fujitsu also included redundancy in its manufacturing capabilities, such as preparing its plant in Mie to handle processing 150mm, 200mm and 300mm silicon wafers."

I have written a number of posts concerning emergency planning (including continuity of operations and disaster recovery planning). In those posts, I have cited numerous experts who claim that having a plan on the shelf doesn't guarantee that it will work as expected when disaster strikes. Fujitsu obviously did more than just put a plan on the shelf. It made sure the plan worked in reality, not just in theory. Koploy, referencing the Martyn article cited above, concludes:

"Back in April, supply chain software executive Paul Martyn blogged for Forbes that he expected full recovery to take 9-12 months. These supply chain all-stars have beat these expectations by collaborating with other businesses and suppliers in crisis, focusing on redundancy in the supply chain and effectively preparing for disaster situations. For the organizations that are willing to learn, there are some great lessons in how these companies recovered. Today, businesses must identify and plan for the vulnerabilities in one's supply chains now to prevent future weaknesses. This focus is the most sound way of balancing disaster-readiness while still focusing on profits. Have you analyzed your supply chain and assessed which points are most vulnerable? What have you done in terms of developing a disaster action-plan?"

As I note above, it is not enough to develop a disaster action plan; you must exercise it. No plan will survive a disaster intact because no disaster will unfold exactly as anticipated. Having exercised the plan, however, people are better prepared to deal with the exigencies of the situation when the unexpected happens. As Koploy writes, "For the organizations that are willing to learn, there are some great lessons in how these companies recovered."

August 04, 2011

Who Will Win the Tagging Wars: RFID or Barcodes?

I have written several posts about the future of product tagging in the supply chain and have noted that the future of tagging remains unclear. When radio frequency identification tags were developed, they were hailed as the future for the supply chain. Tagged items would make it easier for everyone involved to monitor inventory with greater precision. But, as the editorial staff at Supply Chain Digest note, "the initial vision of an RFID-enabled consumer packaged goods to retail supply chain that was the foundation of the electronic product code (EPC) movement starting in 2003 has largely floundered." ["Will Bar Code Base Self-Scanning Systems Doom Item-Level RFID Tagging in CPG to Retail?" 6 July 2011] The greatest drawback for RFID technology has been cost. SCD staffers believe that RFID costs may not decrease fast enough for the technology to win the tagging wars. They write:

"While many believe that someday the value prop for item-level RFID in mainstream retail (grocery, mass merchandise) will emerge as tag costs go low enough, we wonder if the emergence and apparent popularity of bar code-based mobile self-scanning systems will mean the need for or potential advantages of item-level RFID in most retail applications will never get off the mat."

The "bar code-based mobile self-scanning systems" that have caught their attention are being introduced in supermarkets like Stop & Shop and Tesco. The article explains:

"The basic concept of these mobile self-scan systems is this: shoppers are given small mobile wireless devices as they begin their shopping, scan item bar codes to view product information, see and claim discount offers, and ultimately put the item into their electronic as well as physical shopping cart, making checkout fast and easy. There are several such system offerings for retailers in the marketplace, primarily deployed in Europe, though Swedish retailer Ahold has deployed its 'Scan It' system not only in several Euro countries but also in about half of its Stop & Shop and other US store banners. All told, Stop & Shop says the system has already been used about 10 million times in the US. Carrefour, the world's second largest retailer behind WalMart, has also started to roll out a system in parts of Europe, based on the My-Scan system from a technology provider called Re-Vision, which says UK retailer Tesco is also a customer, among others."
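The scan-claim-checkout flow described in the quote can be sketched in a few lines. The catalog entries and coupon values below are hypothetical placeholders, not details of any actual Scan It or My-Scan system:

```python
# A minimal sketch of the mobile self-scan flow described above: scan an
# item, look it up, apply any targeted offer, and keep a running cart total.
# Product data and offer amounts are hypothetical placeholders.
CATALOG = {"0001": ("Milk, 1 gal", 3.49), "0002": ("Bread", 2.29)}
OFFERS = {"0002": 0.50}  # pop-up coupon: 50 cents off bread

cart = []

def scan(barcode):
    name, price = CATALOG[barcode]
    price -= OFFERS.get(barcode, 0.0)  # claim the discount at scan time
    cart.append((name, price))
    # running total shown to the shopper on the handheld device
    return round(sum(p for _, p in cart), 2)

print(scan("0001"))  # 3.49
print(scan("0002"))  # 3.49 + 1.79 = 5.28
```

The key point the quote makes is that the electronic cart is built as the physical cart fills, so checkout reduces to paying the already-computed total.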

One of the reasons that barcode-based mobile self-scanning systems could be catching on is that they feel cutting edge. Customers get the impression that they are helping move the frontiers of retailing forward into the future. One woman interviewed by CNN concerning her experience with a barcode-based mobile self-scanning system told the reporter, "I love Scan It." Asked why, she replied, "Because it makes shopping fun." Surely the novelty of such systems will wear off as they become more common; but, novelty is not their only advantage. As another woman told CNN, "I love those pop-up coupons." In the end, "those pop-up coupons" may prove to be the tipping point for retailers. The article reports, "Scan It customers spend on average about 10% more than customers that don't use the system, in large part driven by the deal promotions." If the technology does proliferate and gain wide acceptance, retailers would like to see labeling laws changed. As the SCD article explains, "Ultimately, these systems could eliminate the need for shelf labels, which are a huge cost and headache for many grocers and other retailers (though in the US such labels are currently mandated by many state laws)." The article concludes:

"If use of these type of self-scan systems takes off, one interesting question would be if there would be any real future for item-level RFID in consumer packaged goods, a cost that would be incurred then solely for the potential inventory control benefits, not to improve the shopper checkout experience, such as rapid checkout. RFID might still be used at the pallet/carton level for supply chain efficiencies of course, but as with bar code scan tunnel system Kroger's has been piloting, these bar code-based self scanning systems are a bet that item-level RFID in consumer packaged goods is a long, long way off. SCDigest has a hard time envisioning retailers paying for both."

Kroger has been testing the scan tunnel system mentioned above since May 2010. The folks at Supply Chain Digest call the system "a revolutionary new approach to Point of Sale and retail checkout that involves high speed imaging of bar codes or other identifiers to reduce its own labor costs and speed shoppers through the checkout process." ["New Kroger Bar Code Scan Tunnel Could Revolutionize Retail Checkout," 11 January 2011] The article describes the system this way:

"Kroger, a pioneer in self-checkout systems generally, concepted the idea a few years back, and then began work with Fujitsu, an existing POS partner, to develop a pilot system. Two of the Advantage Checkout systems have been installed and been operational in a Kroger store in Hebron, KY since May, according to Lynn Marmer, group vice president for corporate affairs at Kroger. The heart of the system is a 'scan tunnel' similar in a sense to similar tunnels some airlines have tried to deploy to manage the tricky job of scanning baggage bar codes that are oriented in every possible angle. Inside the Advantage Checkout tunnel are a battery of imaging scanners on all sides capable of not only reading bar codes, but using optical character recognition (OCR) technology to read letters and numbers and potentially to capture a picture of the product as it goes through the tunnel. A scale could also be added to the system, but has not been installed in the pilot system, Kroger says. The result of all this is a very high read rate of product UPC bar codes despite the huge variation in orientation of the bar codes as products move through the tunnel. Current read rates in the pilot program are 98.5 percent or more, Kroger says."
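The 98.5 percent figure is a per-item read rate; across a full cart, misses compound quickly, which suggests why read rates matter so much for full-cart checkout. A quick sketch (the cart sizes are assumed for illustration):

```python
# How a 98.5% per-item read rate compounds over a full shopping cart.
# Cart sizes below are hypothetical; the read rate is the figure Kroger cites.
read_rate = 0.985

for items in (10, 25, 40):
    # probability that every item in the cart reads on the first pass
    clean_cart = read_rate ** items
    print(f"{items:>2} items: {clean_cart:.0%} chance of a fully clean pass")
```

Even at 98.5 percent, a 40-item cart has roughly an even chance of needing at least one manual intervention, so every fraction of a point of read-rate improvement pays off.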

Clearly, stores won't opt to buy both scan tunnel and barcode-based mobile self-scanning systems. Each has its advantages. Kroger says it likes the advantages offered by the scan tunnel technology. Those advantages include:

"It can reduce store labor by further empowering customer self-checkout. While current self-checkout systems in grocery stores have been largely successful, they are generally used by shoppers with a relatively small number of items. Advantage Checkout is designed to be used for large or small volumes of items in a shopping cart. Consistent with that, the high speed of the system - the belt inside the tunnel is moving at rapid pace - means the system can dramatically improve the processing time for a given customer through checkout, a benefit any time but especially so on peak shopping periods near say dinner time or on Sundays. Customers or a store associate can rapidly place products on the belt and off they go through the tunnel as the cart continues to be unloaded. ... The system brings up all sorts of interesting design and material handling possibilities, from shopping carts optimally designed for the system to how the in-feed and out-feed belt systems are designed."

Kroger CIO Chris Hjelm told the SCD staff, "We don't see RFID coming at the item level to the grocery industry any time soon. I don't know if the products could support a one cent tag, let alone tags costing five cents or more, and even if we got to a cent or less, the rollout across all the vendors would take years."
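Hjelm's point about tag economics is easy to quantify: grocery net margins are thin, so even a small per-item tag cost can swallow an item's entire profit. The prices and the 2% margin below are hypothetical illustrations, not Kroger figures:

```python
# Why per-item tag cost dominates the grocery RFID debate: a sketch with
# hypothetical item prices and an assumed ~2% grocery net margin.
net_margin = 0.02  # assumed net margin on a grocery item

for price in (1.00, 2.50, 5.00):
    profit = price * net_margin  # dollars of profit per item sold
    for tag_cost in (0.01, 0.05):
        share = tag_cost / profit
        print(f"${price:.2f} item, {tag_cost * 100:.0f} cent tag: "
              f"tag consumes {share:.0%} of the item's profit")
```

Under these assumptions, even a one-cent tag on a one-dollar item consumes half the profit, which is consistent with Hjelm's skepticism.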

Before writing a eulogy for RFID tags, however, consider this finding from ABI Research: "Despite the 2008-2009 economic setbacks, the RFID market rebounded nicely in 2010, growing slightly more than 14 percent." ["Continued Strong Growth in RFID Market Is Seen Across Several Verticals," SupplyChainBrain, 12 July 2011] The article concludes:

"ABI Research foresees variation in demand and the pace of adoption between applications, verticals, regions and technologies, with the retail apparel sector in particular displaying something of a slowdown in growth this year. However, the bottom line is that across the market as a whole we continue to see strong potential for future growth. Research director Michael Liard notes, 'The fastest-growing application between now and 2016 will be item-level tracking in supply-chain management, which ABI Research estimates will exceed a 37-percent growth rate.' This growth is being driven by high-volume demand for passive UHF systems to support:

"• Retail apparel tagging in U.S., Europe and other select country markets
"• Pharmaceutical tagging in Korea due to government compliance
"• Wine, tobacco and other anti-counterfeiting tagging efforts, notably in China
"• Other items over long term, including cosmetics, consumer electronics and more

"'The fastest-growing verticals over our five-year forecast period (in descending order) will be retail CPG, retail in-store, healthcare and life sciences, diverse non-CPG manufacturing, and commercial services,' says Liard. More specifically, primary RFID applications can be broken down into 'traditional' and 'modernizing' types. In the former group are access control, animal ID, automotive immobilization, AVI and e-ID documents. The modernizing category includes asset management, baggage handling, cargo tracking and security, point-of-sale contactless payment, real-time location, supply chain management, and ticketing. The 2011-2016 CAGR for aggregated modernizing applications is expected to be double that of the traditional applications cluster."
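To see what a 37-percent growth rate means over ABI Research's forecast window, compound an index from 2011 to 2016 (the base value of 100 is an arbitrary assumption for illustration):

```python
# Compounding ABI Research's projected 37% growth rate for item-level
# tracking over the 2011-2016 forecast window. Base index of 100 is assumed.
growth = 0.37
index = 100.0

for year in range(2011, 2017):
    print(f"{year}: {index:.0f}")
    index *= 1 + growth
```

At that rate, the segment nearly quintuples over the five-year forecast period.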

Ironically, the folks at Supply Chain Digest -- the same ones who asked if there is "any real future for item-level RFID in consumer packaged goods?" -- agree with the folks at ABI Research that RFID is likely to come up the big winner in the tagging wars. They write, "It is inevitable that RFID will dominate the RFID landscape versus traditional bar codes - the only question is when." ["The Seven Reasons RFID will Eventually Win in the Supply Chain," 27 July 2011] Here are the SCD staff's seven reasons for coming to that conclusion:

"• In most applications, RFID simply has a number of advantages over bar codes: the potential for auto versus manual reads/scanning, no line of sight requirements, ability to put more data on the tag, etc. The better capabilities will win out in the end, as users gain a level of comfort, the price comes down, and the performance improves, all of which will happen.

"• We are clearly on a path where companies want to track everything at an individual, serialized level; while that will take time, and will require a reduction in RFID tag costs in some applications (e.g., a can of soup), technologies such as printed tags are likely to make that tag cost reduction requirement a reality at some point.

"• As companies emphasize continuous improvement, whether through formal Lean programs or otherwise, this will inevitably lead to the opportunity to reduce/eliminate manual scanning in many processes, and point to RFID-based approaches as a result. Bar code scanning can clearly be seen as a 'non-value added task' if technology which can eliminate that step is available.

"• Companies that adopt these sorts of more automated processes and higher levels of visibility/tracking will gain competitive advantage, causing others to jump on board, after the leaders have driven costs and complexity down for the followers.

"• Increased regulatory requirements, especially relative to food and pharmaceuticals, will either actually or largely mandate use of RFID to support safety and product integrity goals. This will spur further technology development, and ultimately put pressure on other sectors to develop the same levels of visibility and control.

"• Relatedly, growing supply chain complexity and virtualization will put pressure on companies, especially as safety or other related issues emerge, to be able to track lineage, chain of custody and inventory status at higher levels than most can do today.

"• As RFID finds its way into smart phones, payment systems, etc., which it is clearly on a path to do now, that pervasiveness in the everyday world will make the use of RFID in the supply chain seem more obvious. And the RFID capabilities in these other technologies may actually be used in conjunction with the supply chain (e.g., delivery drivers using smart phones to read/automate store delivery processes.)"

You have to admit they offer some compelling reasons -- IF the cost of RFID can be dramatically reduced. Cost remains the long pole in the tent. RFID tag manufacturers need to work closely with mobile phone makers to speed the introduction of RFID technologies that will make smartphone apps as useful as current barcode readers. It will be the ubiquity of such apps that will define the tipping point in the tagging wars for RFID. The staff at Supply Chain Digest makes this prediction: "We would plan with an assumption that an increasing percent of new systems will be RFID-based, and that by the end of 2016, RFID will be the dominant auto technology used in the supply chain, to the diminishment of bar codes." I guess in five years we'll know.
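The cost question can be framed as a simple break-even: the premium an RFID tag commands over a printed barcode must be repaid by the scanning labor it eliminates. Every figure in this sketch is a hypothetical assumption, not an industry number:

```python
# Break-even RFID tag cost versus manual barcode scanning, per item.
# Every number below is a hypothetical assumption for illustration.
labor_rate = 15.00           # $/hour for scanning labor (assumed)
seconds_saved_per_item = 2   # scan time eliminated by an auto-read (assumed)
barcode_cost = 0.001         # printed barcode is effectively free (assumed)

labor_saving = labor_rate / 3600 * seconds_saved_per_item
breakeven_tag_cost = barcode_cost + labor_saving
print(f"Labor saved per item: ${labor_saving:.4f}")
print(f"Break-even tag cost:  ${breakeven_tag_cost:.4f}")
```

Under these assumed numbers the break-even lands just under one cent per item, which is consistent with the skepticism about even one-cent tags quoted earlier; the calculus changes, of course, once inventory-visibility benefits are added to the labor savings.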