153 posts categorized "Artificial Intelligence"

October 25, 2013

Fostering Genius

Eliza Gray tells the fascinating story of the Institute for Advanced Study in an article published in Time. ["The Original Genius Bar," 22 July 2013] She describes the Institute as "a place to work unhindered by the pesky objectives required by traditional research centers or obligations to pimply students at universities. If research were measured on a spectrum from the practical (like making a laptop slimmer) to the theoretical (like studying the way matter moves in space), the institute is as 'close to the frontier as possible,' says its new director, Dutch mathematical physicist Robbert Dijkgraaf." Gray reports that since the Institute opened its doors in 1930, "33 Nobel laureates have stopped through along with more than two-thirds of the winners of the Fields Medal, math's top honor since 1936."

According to the article, the Institute was founded "by Abraham Flexner, an educational theorist, and siblings Louis Bamberger and Caroline Bamberger Fuld, department-store moguls who provided the initial endowment of $5 million." Gray indicates that their motivation for founding the Institute was "to counteract a trend in the U.S. toward applied science." There is certainly nothing wrong with applied science, but seeking to expand basic scientific knowledge (i.e., "pursuing questions for which the value of the answers isn't obvious") taps something deep in the soul of scientific and mathematical geniuses.

Gray concludes her article on the Institute by describing some of the work currently ongoing there, including work on machine learning, cancer, human migration patterns, and star explosions. No one is sure where all of the research will lead, but as one researcher told Gray, "Use your imagination." To put an exclamation point on the fact that the Institute remains relevant and important, it was recently announced that theoretical physicists from the Institute have discovered "a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components of reality." ["A Jewel at the Heart of Quantum Physics," by Natalie Wolchover, Quanta Magazine, 17 September 2013] To learn more about that, read my post entitled Are Space and Time Real?

As you read Gray's article, you can sense the excitement she felt as she interviewed those involved with the Institute. It is that sense of excitement and inquisitiveness that a new non-profit organization, called The Project for STEM Competitiveness – which I helped found and currently serve as chairman of – wants to instill in young students. The first project being sponsored by The Project for STEM Competitiveness is a pilot program in Newtown, PA, at the Newtown Friends School called "Liftoff to Mars." To read more about this program, read my post entitled Teaching STEM Subjects Using a Mission to Mars and the following articles: "Newtown Friends to pilot STEM education program," by Crissa Shoemaker DeBree, phillyBurbs.com, 24 September 2013; "STEM pilot program lifts off at Newtown Friends School," by Regina Young, Bucks County Herald, 3 October 2013; and "Newtown Friends School, Lockheed Martin, US Dept. of Energy thinking about life on Mars," by Cary Beavers, The Advance, 3 October 2013.

You might be asking: What does the Institute for Advanced Study have to do with a science, technology, engineering, and mathematics (STEM) education initiative at a Newtown, PA, middle school? The primary connection is geographic. The Institute, along with several other world-class academic and research organizations, lies within a ten-mile radius of Newtown – which also happens to be the headquarters of my company Enterra Solutions, LLC. It is our intention to sponsor field trips to these world-class organizations so that students can feel for themselves the excitement that research can engender.

 

Fuld Hall at the Institute for Advanced Study
It is our belief that by exposing students to current projects and researchers, as well as to the intellectual giants upon whose work current researchers build, students will become more interested in STEM subjects. The Institute has hosted over 6,000 members through the years. It is perhaps best known as the academic home of Albert Einstein. Other intellectual giants who have worked there include: John von Neumann, Oskar Morgenstern, Kurt Gödel, Alan Turing, Paul Dirac, Edward Witten, J. Robert Oppenheimer, Freeman J. Dyson, Julian Bigelow, Erwin Panofsky, Homer A. Thompson, George F. Kennan, Hermann Weyl, Stephen Smale, Atle Selberg, Noam Chomsky, Clifford Geertz, Paul Erdős, Michael Atiyah, Erich Auerbach, Nima Arkani-Hamed, John N. Bahcall, Michael Walzer, T. D. Lee, C. N. Yang, Hassler Whitney, Andrew Wiles, André Weil, Stephen Wolfram, Eric Maskin, Harish-Chandra, Joan W. Scott, Frank Wilczek, Albert O. Hirschman, and Yve-Alain Bois.

 

The National Compact Stellarator Experiment (NCSX) machine at PPPL
As I mentioned above, the Institute for Advanced Study is just one of the many world-class organizations that lie within about a ten-mile radius of Enterra Solutions' headquarters. One such organization is the Princeton Plasma Physics Laboratory. It was during a visit with Dr. Andrew Zwicker, Head of Science Education at PPPL, that I first discussed establishing The Project for STEM Competitiveness. As explained by its website, "The U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) is a collaborative national center for fusion energy research. The Laboratory advances the coupled fields of fusion energy and plasma physics research, and, with collaborators, is developing the scientific understanding and key innovations needed to realize fusion as an energy source for the world. An associated mission is providing the highest quality of scientific education."

 

 

The original SRI mouse

Another such organization is SRI International, whose nearby Princeton campus began life as the David Sarnoff Research Center, a research and development organization specializing in vision, video, and semiconductor technology. Named for David Sarnoff, the visionary former leader of RCA and NBC, that center was involved in several historic developments that will grab students' attention, notably color television, CMOS integrated circuit technology, liquid crystal displays, and electron microscopy. SRI itself invented the now ubiquitous computer mouse, was one of the first four nodes of ARPANET, the predecessor to the Internet, developed the Siri personal assistant that iPhone users are familiar with, and even helped Walt Disney select a location for his first theme park.

Replica of Bell Labs' 1947 transistor

 

Another nearby organization is Bell Laboratories (also known as Bell Labs and formerly known as AT&T Bell Laboratories and Bell Telephone Laboratories). It is currently the research and development subsidiary of the French-owned company Alcatel-Lucent. According to Wikipedia, "researchers working at Bell Labs are credited with the development of radio astronomy, the transistor, the laser, the charge-coupled device (CCD), information theory, the UNIX operating system, the C programming language, S programming language and the C++ programming language. Seven Nobel Prizes have been awarded for work completed at Bell Laboratories."

 

And, of course, there is Princeton University. As its website explains, "Chartered in 1746, Princeton is the fourth-oldest college in the United States. Princeton is an independent, coeducational, nondenominational institution that provides undergraduate and graduate instruction in the humanities, social sciences, natural sciences and engineering. As a world-renowned research university, Princeton seeks to achieve the highest levels of distinction in the discovery and transmission of knowledge and understanding. At the same time, Princeton is distinctive among research universities in its commitment to undergraduate teaching."

With all of these wonderful organizations with such rich histories residing in our own backyard, it seemed natural to me to reach out to them for assistance in making STEM subjects more interesting for young students. If researchers at these institutions can't get young students excited about STEM subjects, I'm not sure who can. I'm optimistic that field trips and visits by guest lecturers from these organizations will help stimulate students and, with luck, foster a genius or two among them.

October 11, 2013

Intelligent Robots: Should We be Frightened?

Machine learning clear"Are you prepared to meet your robot overlords," asks Tia Ghose. "The idea of superintelligent machines may sound like the plot of 'The Terminator' or 'The Matrix,' but many experts say the idea isn't far-fetched. Some even think the singularity – the point at which artificial intelligence can match, and then overtake, human smarts – might happen in just 16 years." ["Intelligent Robots Will Overtake Humans by 2100," Discovery News, 8 May 2013] Roger Berkowitz, the Academic Director of the Hannah Arendt Center for Politics, isn't certain about whether the singularity will ever be achieved; but, he writes: "The point is that if machines act 'as if' they are human, or if they are capable of doing what humans do better than humans, we will gradually and continually allow machines to take over more and more of the basic human activities that make up our world." ["The Humanity of Robots," The Arendt Center, 18 October 2012] If you find this thought frightening, you are not alone. For a fuller discussion on that point, read my post entitled Artificial Intelligence: Is there Peril in Deep Learning?

Lord Martin Rees, co-founder of the Centre for the Study of Existential Risk at Cambridge University (which was the subject of the post mentioned above), believes "we should ensure that robots remain as no more than 'idiot savants' – lacking the capacity to outwit us, even though they may greatly surpass us in the ability to calculate and process information." ["Will robots take over the world?" Phys.org, 30 July 2013] Berkowitz is actually as much concerned about how humans are going to behave in the future as he is about how robots are going to behave. He writes:

"Undoubtedly one reason machines are acting more human is that humans themselves are acting less so. As we interact more and more with machines, we begin to act predictably, repetitively, and less surprisingly. There is a convergence at foot, and it is the dehumanization of human beings as much as the humanization of robots that should be worrying us."

Kathleen Richardson, an anthropologist of robots, has a view similar to Berkowitz's. She told Phys.org that we are only afraid of intelligent robots because we know how cruel mankind can be to one another. She explains:

"To understand what underscores these fears, we need to understand science and technology as having a particular and exclusionary kind of mimesis. Mimesis is the way we copy and imitate. In creating artificial intelligence machines and robots we are copying the human. Part of what we copy is related to the psychic world of the maker, and then the maker is copying ideas, techniques and practices into the machine that are given by the cultural spirit (the science, technology, and life) of the moment. All these factors are fused together in the making of artificial intelligence and robots. So we have to ask why it is also so frightening to make this copy? Not all fear a robotic uprising; many people welcome machine intelligence and see it as wonderful opportunity to create a new life. So to understand why some fear and some embrace you really have to know what models of mimesis go into the making of robots."

Alex Knapp believes that all of the talk about intelligent robots, the singularity, and computer overlords is interesting, but he isn't worried. Right now, he notes, robots aren't even smart enough to be middle managers, let alone world-dominating overlords. "There's no existing computer or robotic system capable of doing the job of a manager at the present time," he writes. "And it’s pretty unlikely that artificial intelligence will have advanced to the point of replacing the role of manager in ... the next 20 years, either. The technology just isn't there. Still, it's an interesting intellectual exercise, so let's proceed on that basis." ["Why Robots Would Make Lousy Middle Managers," Forbes, 31 December 2012] Ernest Davis, a computer scientist at New York University, agrees with Knapp that any discussion about intelligent robots must, for the moment, be theoretical. He told Ghose, "I don't see any sign that we're close to a singularity."

Ghose believes that another reason that people fear the rise of intelligent robots is that "we're barreling toward a future that doesn't take people into account." She points out, for example, that driverless vehicles may make our roads and highways safer, but they will also put millions of truck drivers, taxi and limousine operators, and bus drivers out of work. The same is true concerning robots that are taking over jobs on factory floors. Robert Atkinson, President of the Information Technology and Innovation Foundation, however, believes that such concerns are more myth than reality. ["Robots Are Not The Enemy," Manufacturing.net, 12 September 2013] He writes:

"It has become a popular meme that 'robots are destroying our jobs.' How else do we explain today's persistent high unemployment? While scores of pundits and analysts have made this claim in the last couple of years, perhaps no one has done more to popularize this theory than MIT scholars Erik Brynjolfsson and Andrew McAfee, authors of the widely cited e-book, Race Against The Machine. They argue workers are, 'losing the race against the machine, a fact reflected in today's employment statistics.' To drive their thesis home, Brynjolfsson and McAfee point to the fact that after WWII productivity and employment lines increased in tandem, but beginning in 2000, the lines diverged, productivity continued to rise robustly but employment suddenly wilted. Aha, ipso facto. Here's the problem, they claim."

Atkinson, however, believes that the "notion that technology, automation and productivity lead to fewer jobs and higher unemployment is simply wrong." He asserts that "there is no logical relationship between job growth and productivity." Concerning the worry about robots taking over jobs, Atkinson writes that there are "second order effects that must be considered when evaluating the true impact of technology on jobs." He points to an OECD study that stated:

"Historically, the income-generating effects of new technologies have proved more powerful than the labor-displacing effects: technological progress has been accompanied not only by higher output and productivity, but also by higher overall employment."

He dismisses the argument that things are different this time around (i.e., history is not going to repeat itself). He continues:

"This time is actually not different. New innovations being introduced will largely boost productivity in information-based functions or routinized functions, but not jobs that involve interacting with people (e.g., nursing homes, police and fire) or doing non-routine physical tasks (e.g., construction or janitorial services). In addition, new technological growth will create new industries and business models that will promote economic and job growth across the board. The reality is that, far from being doomed by an excess of technology, we are actually at risk of being held back from too little innovation. ... Rather than fearing technology we actually need a lot more of it. So bring on the robots, now!"

As with most debates, you can find a bit of truth on both sides. We certainly know that robots are taking over a lot of blue collar jobs. Factory floors will never again look like those manned by our fathers and grandfathers. We also know that new technologies help create new jobs. Whether enough new jobs will be created to keep a burgeoning world population fully employed is certainly a serious question. The rise of intelligent robots will also affect workers who, to date, believed their skills couldn't be mimicked by machines. The Economist reports:

"Two things are clear. The first is that smart machines are evolving at breakneck speed. ... The second is that intelligent machines have reached a new social frontier: knowledge workers are now in the eye of the storm, much as stocking-weavers were in the days of Ned Ludd, the original Luddite." ["The age of smart machines," 25 May 2013]

Far from being concerned, however, The Economist concludes: "If we manage them well, smart machines will free us, not enslave us."

October 08, 2013

Web 3.0: Does Anybody Really Know What It Will Be?

Half a dozen years ago I wrote a post entitled Web 3.0 Still Advancing -- Even if People Don't Know What to Call It. I noted that Web 1.0 could be called the Information Web and that the Information Web morphed into the Social Web (commonly called Web 2.0). For most of the intervening years, people have been talking about the Semantic Web being the successor to the Social Web and they have been referring to it as Web 3.0. Back in 2009, however, Miguel Helft wrote about the amazing rise of "Twitter, Facebook, and other similar services" and how they "are increasingly becoming the nation's virtual water coolers. They spread information quickly, sometimes before the mass media do, and their ricocheting bursts of text and links become an instant record of Americans' collective preoccupations." Helft labeled the next evolution in the Web "Real-Time Search." ["How High Will Real-Time Search Fly?" New York Times, 24 October 2009] He didn't refer to Real-Time Search as Web 3.0, but he clearly saw it as the next step towards something new.

The following year Alice Truong interviewed Herman Lam, chief executive of Cyberport Management Company Limited, a government-owned company in Hong Kong that manages a development called Cyberport, which was supposed to be Hong Kong's silicon valley. During that interview, Lam did talk specifically about Web 3.0, but not as the Semantic Web. He simply called Web 3.0 "smarter computing." ["How to Make Web 3.0 Reality," Wall Street Journal, 7 December 2010] Lam told Truong:

"Web 3.0 is about how to make life even easier for us in the future. We believe with all this technological change that we could be looking at a paradigm shift and how people interact with the Web. When this changes, it opens up an opportunity for small startups, new companies and young entrepreneurs. ... In the world of Web 3.0, the Internet should know I won't be able to watch my favorite TV show. It should automatically record it and book a time slot for me to catch up on this show."

Lam's vision of Web 3.0 is certainly something different from Helft's Real-Time Search and certainly doesn't sound like traditional descriptions of the Semantic Web. It sounds more like a cross between a personal digital assistant and a smart TV. Daniel Nations probably got it right when he wrote, "The truth is that predicting the Web 3.0 future is a guessing game. A fundamental change in how we use the web could be based on an evolution of how we are using the web now, a breakthrough in web technology, or just a technological breakthrough in general." ["What is Web 3.0?" About.com] Nations goes on to state, "Many people ponder the use of advanced artificial intelligence as the next big breakthrough on the web. One of the chief advantages of social media is that it factors in human intelligence." Certainly, the Semantic Web won't emerge without artificial intelligence technology. Concerning the Semantic Web, Nations writes:

"There is already a lot of work going into the idea of a semantic web, which is a web where all information is categorized and stored in such a way that a computer can understand it as well as a human. Many view this as a combination of artificial intelligence and the semantic web. The semantic web will teach the computer what the data means, and this will evolve into artificial intelligence that can utilize that information."

Another possibility raised by Nations is what he calls the "Ever-Present Web 3.0." He explains:

"Not so much a prediction of what the Web 3.0 future holds so much as the catalyst that will bring it about, the ever-present Web 3.0 has to do with the increasing popularity of mobile Internet devices and the merger of entertainment systems and the Web. The merger of computers as a source for music, movies, and more puts the Internet at the center of both our work and our play. Within a decade, Internet access on our mobile devices (cell phones, smartphones, pocket pcs) will be as popular as text messaging. This will make the Internet always present in our lives: at work, at home, on the road, out to dinner, wherever we go, the Internet will be there."

More recently the so-called Internet of Things has been put forth as a viable candidate for Web 3.0. ["Introducing Web 3.0: Internet of Things," by Xath Cruz, Cloud Times, 9 September 2013] As I've explained in previous posts about the Internet of Things, it's basically a machine-to-machine network. Cruz explains that it "basically refers to the concept of every gadget and appliance we own being interconnected via the Internet." Sarah Perez acknowledges that both the Ever-Present Web and the Internet of Things were potential candidates to be crowned Web 3.0, but notes: "None of these got to win the Web 3.0 branding." Perez nominates another candidate — the Ephemeralnet. ["The Rise Of The Ephemeralnet," TechCrunch, 30 June 2013] She believes the Ephemeralnet will emerge as people try to balance the desire to share their lives with friends against the desire to maintain a modicum of privacy. She writes:

"While some confuse the 'Ephemeralnet' with the so-called 'SnapchatNet,' in reality, it’s not only a new way to socialize online, it's a new way to think about everything. You can see the trend also in the rise of the (somewhat) anonymous and untraceable digital currency Bitcoin. Unlike traditional transactions, Bitcoin is decentralized and doesn't require banks or governmental oversight or involvement. And though it’s not entirely anonymous, there are already efforts, like Zerocoin, working to change that. ... At the end of the day, the Ephemeralnet may never get to become as defining a trend as Web 2.0 once was. Though it may find adoption beyond the demographics of its youngest participants, it will continue to share the web with the services that preceded it – services too big, too habitual, and too lucrative, to die off entirely."

Kevin Lindquist labels his version of Web 3.0 "'The Integrated Web' (Not 'The Semantic Web')." ["Web 3.0 is Here! Is Your Small Business Ready?" YFS Magazine, 30 August 2013] He explains:

"Many are pushing for this thing called 'The Semantic Web' where an app will be able to understand user interaction in such a way that it will not only return directly relevant results, but also indirectly relevant results. For instance, a search for 'showtimes 84003' returning what time movies are playing at the Cinemark in American Fork, Utah, including nearby places for dinner (because you are probably going on a date) or gas stations en route because it knows that your car is out of gas. These concepts are exciting, innovative, and forward thinking, but there is a critical evolutionary step missing in between the 'The Social Web' and 'The Semantic Web'. Instead of the telephone lines that we constructed between web apps and services during Web 2.0, Web 3.0 brings bridges and process to support the future of the app. Integrated services are the future, integrated applications of those services will take over the world. Being able to take one object, and pass it through multiple services to accomplish a task is that critical evolutionary step before 'The Semantic Web' becomes meaningful. Instead of a world where the app returns data about indirectly relevant things, what if it could instead automatically perform those indirectly relevant services for you upon approval? In other words, if you are going to show me 'The Semantic Web' show me a semantic web that can get things done without me needing to go between a number of different apps, web pages, etc."

I started this post with a reference to another post I wrote six years ago. Over those six years, there have been a number of technological advances. One thing that hasn't advanced, however, is agreement on what Web 3.0 should be called and what exactly its characteristics will be. In the end, that really doesn't matter very much. We will get what we will get — and we will probably like it.

October 07, 2013

Big Data is a Big Deal in Healthcare

Sharon Terry, President and Chief Executive Officer of Genetic Alliance, asserts, "I find myself becoming increasingly optimistic that we are approaching a tipping point for the consumer movement in health." ["Big Data Is Good for Your Health," Forbes, 1 July 2013] Her optimism is fueled by the fact that supermarkets can provide personalized offers to their customers based on loyalty card and point of sale data. If supermarkets can get personal, Terry believes that healthcare providers certainly should be able to provide personalized service as well. If there is one sector where personalization is desirable, it is healthcare. Terry insists that personalized healthcare "is a movement that will enable consumers to be more active participants in their own health, gain more personalized care and contribute to the acceleration of clinical research and the quest to ameliorate disease."

Another reason for optimism is that an enormous amount of data already exists in the healthcare sector and that mountain of data grows each day. The following infographic provided by Healthcare IT Connect provides a good overview of the Big Data big picture in the healthcare sector. ["Big Data is a Big Deal," by Zach Urbina, 15 May 2013]

Infographic: "Big Data is a Big Deal" (Healthcare IT Connect)

Two things really stood out for me in that infographic. First, 80% of healthcare data is unstructured. That means that natural language processing is essential if any real insights are going to be obtained from the data. Second, I thought that the "6 ways that Big Data can transform healthcare" were particularly enlightening. I suspect that Terry agrees that Big Data will transform healthcare in all those ways. She notes that the healthcare sector has "a great deal of work to be done to catch up to [its] counterparts in the retail industry, but just as advances in online, mobile and social technology forever changed the face of shopping, those technologies, combined with breakthroughs in genetic and molecular science, are fueling unprecedented change in healthcare." Tibco analysts report, "Healthcare organizations are increasingly embracing big data to bolster the quality of care while reducing costs, according to a recent survey of senior level executives." ["Big Data Analytics: The Prescription for Better Patient Care," Trends and Outliers, 27 August 2013] Let's take a closer look at how Big Data can transform healthcare, starting with research.

Support Research: Genomics and Beyond

Dr. Bonnie Feldman writes, "Genomics is making headlines in both academia and the celebrity world. With intense media coverage of Angelina Jolie’s recent double mastectomy after genetic tests revealed that she was predisposed to breast cancer, genetic testing and genomics have been propelled to the front of many more minds. In this new data field, companies are approaching the collection, analysis, and turning of data into usable information from a variety of angles." ["Genomics and the Role of Big Data in Personalizing the Healthcare Experience," The Doctor Weighs In, 14 September 2013] For those unfamiliar with the field of genomics, Feldman explains:

"Genomics is the study of the complete genetic material (genome) of organisms. The field includes sequencing, mapping, and analyzing a wide range of RNA and DNA codes, from viruses and mitochondria to many species across the kingdoms of life. Most pertinent here are intensive efforts to determine the entire DNA sequence of many individual humans in order to map and analyze individual genes and alleles as well as their interactions. The primary goal that drives these efforts is to understand the genetic basis of heritable traits, and especially to understand how genes work in order to prevent or cure diseases. The amount of data being produced by sequencing, mapping, and analyzing genomes propels genomics into the realm of Big Data. Genomics produces huge volumes of data; each human genome has 20,000-25,000 genes comprised of 3 million base pairs. This amounts to 100 gigabytes of data, equivalent to 102,400 photos. Sequencing multiple human genomes would quickly add up to hundreds of petabytes of data, and the data created by analysis of gene interactions multiplies those further."

Feldman goes on to explain that the Holy Grail of medicine is to provide individualized treatments for patients. Perhaps the most transformative result of genomic research will be the prediction and treatment of diseases before they actually affect a patient. Feldman explains:

"Personal genomics – understanding each individual's genome – is a necessary foundation for predictive medicine, which draws on a patient's genetic data to determine the most appropriate treatments. Medicine should accommodate people of different shapes and sizes. By combining sequenced genomic data with other medical data, physicians and researchers can get a better picture of disease in an individual. The vision is that treatments will reflect an individual's illness, and not be a one treatment fits all, as is too often true today."

Terry puts it this way: "With access to more information than ever, consumers can take control of their healthcare in ways never before imagined." Elizabeth Rudd notes, "Health issues are increasingly monitored and recorded electronically creating large amounts of data about an individual's health in the process." ["Big Data- About You," Innovation Management.se, 13 February 2013] Although medical personnel prescribe the use of some monitoring devices, individuals are increasingly buying equipment to monitor themselves. They are part of a group known as the Quantified Self movement. Wikipedia explains the movement this way:

"The Quantified Self[ is a movement to incorporate technology into data acquisition on aspects of a person's daily life in terms of inputs (e.g., food consumed, quality of surrounding air), states (e.g., mood, arousal, blood oxygen levels), and performance (mental and physical). Such self-monitoring and self-sensing, which combines wearable sensors (EEG, ECG, video, etc.) and wearable computing, is also known as lifelogging. Other names for using self-tracking data to improve daily functioning are 'self-tracking', 'auto-analytics', 'body hacking' and 'self-quantifying'."

As the healthcare sector aligns itself better in the Big Data era, all of these devices are likely to become part of the Internet of Things (a massive network that will involve machine-to-machine (M2M) communication). Members of the Quantified Self movement are likely to share their collected data with their primary physician, whose cognitive computing system will automatically track a patient's vital data and alert the physician if something worrisome arises. This kind of M2M monitoring will benefit both the healthy and sick. Terry concludes that each passing day "brings us one day closer to a time when our healthcare is as personalized as our commerce, which will empower all of us to participate more actively in our own health."
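To make the machine-to-machine idea concrete, here is a minimal sketch of the kind of rule-based check a physician's system might run on readings shared by a wearable device. It is only an illustration: the metrics, thresholds, and field names are invented, and none of this is clinical guidance.

```python
# A hypothetical vitals check: readings streamed from a wearable are
# compared against simple normal ranges, and anything outside its range
# is flagged for the physician's attention. All values are illustrative.

NORMAL_RANGES = {
    "resting_heart_rate": (50, 100),   # beats per minute
    "blood_oxygen": (94, 100),         # percent saturation
    "systolic_bp": (90, 130),          # mm Hg
}

def review_readings(readings):
    """Return alerts for any reading outside its assumed normal range."""
    alerts = []
    for metric, value in readings.items():
        if metric in NORMAL_RANGES:
            low, high = NORMAL_RANGES[metric]
            if not low <= value <= high:
                alerts.append(f"ALERT: {metric} = {value} is outside {low}-{high}")
    return alerts

if __name__ == "__main__":
    todays_data = {"resting_heart_rate": 112, "blood_oxygen": 97, "systolic_bp": 124}
    for alert in review_readings(todays_data):
        print(alert)  # in practice this would notify the physician's system
```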

Tibco analysts note that, in addition to personalized healthcare, Big Data is transforming healthcare in other ways. "Data analytics," they write, "are being used for revenue cycle management, resource utilization, fraud and abuse prevention, population health management, and quality improvement." The infographic above claims that billions of dollars can be saved by leveraging Big Data analytics. With healthcare costs being a major concern in the U.S., such savings would be a blessing. That's why Big Data is such a big deal in the healthcare sector.

October 02, 2013

Keeping Up with Changing Tastes

IBM has been trying to demonstrate how its Watson computer can be useful in a number of fields. Earlier this year, it delved into the realm of cuisine. Watson ingested notes taken by IBM researchers during a collaboration with James Briscione, a chef instructor at the Institute of Culinary Education in Manhattan, along with "20,000 recipes, data on the chemistry of food ingredients, and measured ratings of flavors people like in categories like 'olfactory pleasantness'," and created a breakfast pastry called a "Spanish crescent." ["And Now, From I.B.M., Chef Watson," by Steve Lohr, New York Times, 27 February 2013] Lohr reports:

Food and senses"Watson's assignment has been to come up with recipes that are both novel and taste good. In the case of the breakfast pastry, Watson was told to come up with something inspired by Spanish cuisine, but unusual and healthy. The computer-ordered ingredients include cocoa, saffron, black pepper, almonds and honey — but no butter, Watson's apparent nod to healthier eating. Then, Mr. Briscione, working with those ingredients, had to adjust portions and make the pastry. 'If I could have used butter, it would have been a lot easier,' said the chef, who used vegetable oil instead. Michael Karasick, director of I.B.M.'s Almaden lab, had one of the Spanish crescents for breakfast recently. 'Pretty good' was his scientific judgment."

The point of the story is that a lot of analysis goes into making a great dish (generally by human brains). Watson used both food chemistry and human opinion to help it in its analysis. As I've pointed out in past posts, all of our senses come into play when we eat. While it comes as no surprise that taste is king, our sense of taste can be fooled by how something looks or feels (see my post entitled Sensing Food: The Role of Color). John Stanton, Contributing Editor of Food Processing magazine, writes, "It's no shock that taste is important to consumers; however, the surprise is that in many cases, we find taste takes second place or worse to other factors" when producers create new products. ["Taste Remains Consumers' Top Preference for New Foods and Beverages," 6 September 2013] Stanton laments, "Many of the initial efforts in producing healthy foods failed because the packaging tasted slightly better than the product." He discusses a number of "healthier" products that were introduced only to flop because they failed the taste challenge. You can almost hear him asking, "What were they thinking?" One of the challenges, he notes, is that food is a lot like fashion. As supermodel Heidi Klum would say, "One day you're in; the next day you're out." Stanton writes:

"The changing tastes of consumers have vexed food marketers for years. Changes in ethnic composition, attitudes of different age groups, health issues and the need for convenience have led food marketers to invest heavily in consumer insights and research to determine what consumers want."

One example of a company trying to keep ahead of changing tastes is McCormick & Company. For over a dozen years, it has been publishing a Flavor Forecast (see my post entitled McCormick® Flavor Forecast® 2013). Stanton indicates that getting the taste right is also good for the bottom line. "Better taste means better profits," he writes. "Most of the really good tasting foods often have the highest margins." So let's examine some of what's happening in the world of food and senses.

Taste

Mark Garrison reports, "Sour foods and flavors are riding high at the moment, and our growing desire for them is changing the food industry." ["More Sour to You," Slate, 20 June 2013] He continues:

"If Katherine Alford says sour flavors are having a national moment, pay attention. A vice president at the Food Network, Alford runs its expansive test kitchen in Manhattan's Chelsea neighborhood. Recipes and ideas that make the cut here will find their way into kitchens across America through the network's TV, Web, and magazine content. Alford's job is to stay in front of America's ever-changing palate without alienating a mainstream audience. Right now, Alford is finding that audience increasingly hungry for sour foods."

Ellen Byron agrees that flavors with a bit of tang and sourness are becoming more mainstream. "No one says, 'I feel like fermented food tonight'," she writes. "But pungent, tangy flavors — all results of fermentation — are increasingly sneaking into grocery-store aisles." ["Mmm, the Flavors of Fermentation," Wall Street Journal, 10 April 2013] She continues:

"Packaged-food makers, grocers and chefs say more Americans are developing a bigger taste for fermented foods. Flavor experts even envision a world where spicy kimchi replaces pedestrian sauerkraut on American hot dogs. Already, fermented flavors are popping up on snacks and condiments such as Lay's Sriracha potato chips, Heinz balsamic vinegar flavored ketchup and Trader Joe's Spicy Seaweed Ramen noodles. The Subway sandwich chain is testing a creamy Sriracha sauce. Demand is especially strong from baby boomers, who face a weakening ability to taste and are drawn to stronger flavors, and the 20-something millennials, who seek new and exotic tastes."

Garrison reports, "Both sour and spicy flavors have ridden to popularity on a wave of new international cuisines that reflect the nation's growing diversity." Stephanie Strom agrees that American tastes are expanding and that demographic diversity is playing a major role. "The growing influence on America's palate of the influx of immigrants from Latin America and Asia has been ... subtle," she writes, "even as grocery shelves increasingly display products containing ingredients like lemon grass and sriracha peppers." ["American Tastes Branch Out, and Food Makers Follow," New York Times, 8 July 2013] She continues:

"For years, multinational food companies have been experimenting with ingredients, often being unable to find appeal broad enough to start or sustain a new brand. But as the buying power of Latino and Asian consumers expands, fruit flavors, hotter spices, different textures and grains and even packaging innovations are becoming essential for big food manufacturers trying to appeal to diverse appetites, according to company executives. From 2010 to 2012, sales of ethnic foods rose 4.5 percent, to $8.7 billion. The Mintel Group, a market research firm, estimates that between 2012 and 2017 sales of ethnic foods in grocery stores will grow more than 20 percent. Mintel predicts Middle Eastern and Mediterranean foods will increase the most in that time in terms of dollar sales."

Sight

Candice Choi reports, "Companies are tossing out the identical shapes and drab colors that scream of factory conveyor belts. There's no way to measure exactly how much food makers are investing to make their products look more natural or fresh. But adaptation is seen as necessary for fueling steady growth." ["Food Companies Work To Make It Look Natural," Manufacturing.net, 18 June 2013] Another way that companies use sight to appeal to consumers is through the use of color. Recently, however, use of artificial colors has drawn a lot of scrutiny. Eliza Barclay reports, "We've grown accustomed to choosing our food from a spectacular rainbow — care for an impossibly pink cupcake, a cerulean blue sports drink or yogurt in preppy lavender? But there's a growing backlash against the synthetic dyes that give us these eye-popping hues. And now scientists are turning to the little-known (and little-grown) purple sweet potato to develop plant-based dyes that can be labeled as nonthreatening vegetable juice." ["Purple Sweet Potato A Contender To Replace Artificial Food Dyes," National Public Radio, 9 September 2013]

Natural color plays an important health role according to some pundits. The Epoch Times reports, "Over 3,000 years ago, the Yellow Emperor wrote in his classic book on internal medicine, Huangdineijing, that if people wanted to obtain health and longevity, they should eat food with 'five colors, five tastes, and five fragrances'." ["Food With 'Five Colors' Benefit Health," 29 August 2013] The article continues:

"The benefits of a color-rich diet are also recognized by Western nutritionists. In the 2005 Dietary Guidelines for Americans, some of the recommendations include adding the following color-rich foods to one's diet: dark green vegetables, orange vegetables, legumes, fruits, whole grains, and low-fat milk products. The guidelines were released by the U.S. Department of Health and Human Services and the U.S. Department of Agriculture. ... According to [Marla Caplon, nutritionist and supervisor for the Division of Food and Nutrition Services for the Montgomery County Public Schools in Maryland], yellow and orange fruits and vegetables, rich in beta carotene, Vitamin A, and Vitamin C contain powerful antioxidants that neutralize free radicals. Green vegetables are rich in phytochemicals and are good sources of iron, calcium, vitamins K, A, and C. Blue and purple fruits and vegetables contain anthocyanins, which are powerful antioxidants that help the prevention of heart disease, stroke, and some types of cancer. The red group contains lycopene, an antioxidant that can help protect against cancer. ... The white group contains allicin, which has been known to help lower blood sugar and have amazing anti-inflammatory and anti-bacterial properties. This group also contains powerful antioxidants, which help to protect against cancer and heart disease."

Touch

Catherine Alexander, Vice President of Corporate Communications with Comax Flavors, told Hank Schultz, "One particular challenge ... is to incorporate echoes of texture into the flavor matrix. ... Texture is a huge part of what imparts pleasure to food ... and can impact how flavor is perceived." ["If spiciness is king, texture might be queen, and strawberry rules its own realm," Food Navigator, 29 August 2013] Alexander also indicated that she had "seen a study that says texture is becoming almost more important than flavor." I'm skeptical about that last point. I agree with Stanton that taste will always trump the other senses when it comes to food enjoyment.

Smell

Back in the days when every main street had a butcher and a baker, smart bakers vented the aromas of baking bread out into the street to attract customers. Aroma continues to be an important part of the eating experience. PepsiCo is one manufacturer that certainly believes that to be true. It recently filed a patent for "a method of encapsulating aromas within beverage packaging to entice US consumers with 'favorable aromas' before they drink ... juices or coffees." ["PepsiCo seeks US patent to encapsulate beverage aromas within packaging," by Ben Bouckley, Beverage Daily, 10 September 2013] Naijie Zhang and Peter Given, inventors of the technology, told Bouckley, "Research has shown that aromas can in some instances have substantial impact on consumer perception of the taste of a beverage or other food, trigger a favourable emotional response, elicit a favourite memory, and/or otherwise improve overall product performance."

Undoubtedly, research will continue within the food industry into how the senses work individually and together to make eating the most pleasurable experience it can be. I believe that cognitive computers, like Enterra's Cognitive Reasoning Platform™, will be enlisted to help with this research.

September 25, 2013

Are Space and Time Real?

"Physicists have discovered a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components of reality," writes Natalie Wolchover. ["A Jewel at the Heart of Quantum Physics," Quanta Magazine, 17 September 2013] Although that sounds far-fetched to the average person, there are few things about the wacky world of quantum physics that don't boggle the mind. The jewel-like shape at the heart of this new revelation is called an "amplituhedron" and it may be the key to unifying theories at the macro and micro levels. Wolchover continues:

"The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like 'amplituhedron,' which yields an equivalent one-term expression."

This subject caught my eye because I recently read James Gleick's wonderful biography of Richard Feynman entitled Genius. By anyone's definition, Feynman, a Nobel Prize laureate, was a mathematical genius with a particular gift for being able to visualize the quantum world. Gleick called him "the most brilliant, iconoclastic, and influential physicist of modern times." One of the challenges with which Feynman wrestled was the paradox that particles simultaneously behave as particles and waves. That makes any attempt to determine where a particular particle is at a particular time very problematic. Gleick's narrative about Feynman's contribution to the subject at hand also explains why the new geometric shape is called an amplituhedron:

"[Feynman] developed an alternative formulation of quantum mechanics to add to the pair of formulations produced two decades before by [Erwin] Schrödinger and [Werner] Heisenberg. He defined the notion of a probability amplitude for a space-time path. In the classical world one could merely add probabilities: a batter's on-base percentage is the 30 percent probability of a base hit plus the 10 percent probability of a base on balls plus the 5 percent probability of an error ... In the quantum world probabilities were expressed as complex numbers, numbers with both a quantity and a phase, and these so-called amplitudes were squared to produce a probability. This was the mathematical procedure necessary to capture the wavelike aspects of particle behavior. ... Probability amplitudes were normally associated with the likelihood of a particle's arriving at a certain place at a certain time. Feynman said he would associate the probability amplitude 'with an entire motion of a particle' — with a path. He stated the central principle of his quantum mechanics: The probability of an event which can happen in several different ways is the absolute square of the sum of complex contributions, one from each alternative way. These complex numbers, these amplitudes, were written in terms of classical action; he showed how to calculate the action for each path as a certain integral. And he established that this peculiar approach was mathematically equivalent to the standard Schrödinger wave function, so different in spirit. ... [Using Feynman's ideas,] Polish mathematician Mark Kac ... created a new formula, the Feynman-Kac Formula, that became one of the most ubiquitous of mathematical tools, linking the applications of probability and quantum mechanics. ... Feynman's path-integral view of nature, his vision of a 'sum over histories,' was also the principle of least action, the principle of least time, reborn."

Commenting on the discovery of the amplituhedron, Robert T. Gonzalez writes, "In the past, Feynman diagrams (which are themselves very powerful and elegant simplifying tools) didn't help us in understanding some specific particle interactions because the number of terms that needed to be calculated were so huge that even our most powerful supercomputers couldn't crack a solution." [See "Comments" at the end of his post entitled "This physical breakthrough could change our understanding of spacetime," io9, 20 September 2013] I suspect Feynman would have been thrilled with the new discovery. Wolchover's article in Quanta Magazine was accompanied by artist Andy Gilmore's rendition of what the amplituhedron looks like.

Artist Andy Gilmore's rendering of the amplituhedron

Continuing her superb article about why the discovery of the amplituhedron is so important, Wolchover writes:

"The new geometric version of quantum field theory could also facilitate the search for a theory of quantum gravity that would seamlessly connect the large- and small-scale pictures of the universe. Attempts thus far to incorporate gravity into the laws of physics at the quantum scale have run up against nonsensical infinities and deep paradoxes. The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity. ... Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature. In keeping with this idea, the new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct."

Scott Aaronson, a theoretical computer scientist and faculty member in the Electrical Engineering and Computer Science department at the Massachusetts Institute of Technology, suspects that there will be a lot of skeptics about the claim that the amplituhedron will "be a 'jewel' that unlocks all of physics, or a death-knell for spacetime, locality, and unitarity." After all, he notes, it applies to a "restricted class of theories." Aaronson himself may be a skeptic; nevertheless, writing tongue in cheek, he asserts: "If anything, the popular articles have understated the revolutionary importance of the amplituhedron. And the reason I can tell you that with such certainty is that, for several years, my colleagues and I have been investigating a mathematical structure that contains the amplituhedron, yet is even richer and more remarkable. I call this structure the 'unitarihedron'." ["The Unitarihedron: The Jewel at the Heart of Quantum Computing," Shtetl-Optimized, 20 September 2013] Aaronson goes on to explain:

"The unitarihedron encompasses, within a single abstract 'jewel,' all the computations that can ever be feasibly performed by means of unitary transformations, the central operation in quantum mechanics (hence the name). Mathematically, the unitarihedron is an infinite discrete space: more precisely, it's an infinite collection of infinite sets, which collection can be organized (as can every set that it contains!) in a recursive, fractal structure. Remarkably, each and every specific problem that quantum computers can solve — such as factoring large integers, discrete logarithms, and more — occurs as just a single element, or 'facet' if you will, of this vast infinite jewel. By studying these facets, my colleagues and I have slowly pieced together a tentative picture of the elusive unitarihedron itself. One of our greatest discoveries has been that the unitarihedron exhibits an astonishing degree of uniqueness. At first glance, different ways of building quantum computers — such as gate-based QC, adiabatic QC, topological QC, and measurement-based QC — might seem totally disconnected from each other. But today we know that all of those ways, and many others, are merely different 'projections' of the same mysterious unitarihedron."

Aaronson good-humoredly reports that he is "awestruck" by the "mathematical elegance and power" of the unitarihedron. He continues:

"But I haven’t even told you the most spectacular part of the story yet. While, to my knowledge, this hasn’t yet been rigorously proved, many lines of evidence support the hypothesis that the unitarihedron must encompass the amplituhedron as a special case. If so, then the amplituhedron could be seen as just a single sparkle on an infinitely greater jewel. Now, in the interest of full disclosure, I should tell you that the unitarihedron is what used to be known as the complexity class BQP (Bounded-Error Quantum Polynomial-Time). However, just like the Chinese gooseberry was successfully rebranded in the 1950s as the kiwifruit, and the Patagonian toothfish as the Chilean sea bass, so with this post, I'm hereby rebranding BQP as the unitarihedron. For I've realized that, when it comes to bowling over laypeople, inscrutable complexity class acronyms are death — but the suffix '-hedron' is golden."

In addition to his intellect, Aaronson obviously possesses a good sense of humor. Luboš Motl, however, isn't amused. "The amplituhedron exists," he writes, "while the diaperhedron or any other computer-science-based objects Aaronson talks about don't exist as objects." ["Diaperhedron can't match amplituhedron," The Reference Frame, 21 September 2013] Wolchover concludes her article by writing:

"Beyond making calculations easier or possibly leading the way to quantum gravity, the discovery of the amplituhedron could cause an even more profound shift, [Nima Arkani-Hamed, a professor at the Institute for Advanced Study,] said. That is, giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry. 'In a sense, we would see that change arises from the structure of the object,' he said. 'But it's not from the object changing. The object is basically timeless.' While more work is needed, many theoretical physicists are paying close attention to the new ideas."

Admittedly the subject of quantum physics is difficult to grasp. Wolchover does a remarkable job of making it accessible to everyone. I highly recommend that you read her entire article.

September 16, 2013

Artificial Intelligence and Deep Learning

These days, most of the arguments that involve the subject of artificial intelligence (AI) and whether it will ever be achieved are really about artificial general intelligence (AGI) — the ability of machines to think like humans and eventually achieve sentience. There are, however, more limited goals to be achieved in the field of AI. Dr. Ben Goertzel defines general intelligence as "the ability to achieve complex goals in complex environments using limited computational resources." As I've repeatedly noted in past posts on artificial intelligence, most business use cases don't require AGI, so the debate is a bit of a red herring if you are interested in solving limited, but important, problems using AI. That's why Adam Hill can now write, "Once the stuff of fantasy, artificial intelligence is now a reality that could change the world we live in." ["Deep learning: a step toward artificial intelligence," Performance, 18 August 2013]

Hill notes that there have been a lot of "gimmicks" associated with the field of artificial intelligence in the past (like chess-playing computers), but he is more interested in AI's ability to address business applications.

Deep Learning 02"Last year, Microsoft Research boss Rick Rashid demonstrated some advanced English to Cantonese voice recognition and translation software with an error rate low enough to suggest that it had moved things on. Much of the most interesting work in the field at present comes from research into neural networks – building computers that can sift through vast amounts of data and recognize patterns – and these are proving successful in disciplines such as voice and picture recognition and natural language processing (NLP)."

In other words, to be useful in the business environment, AGI is not the sine qua non for success. Hill reports that search engine giant Google "certainly seems to see the potential" in applying limited AI towards business problems. He continues:

"Over the past year, it has snapped up a couple of the best-known names in the field to work with: Professor Geoffrey Hinton from the University of Toronto and AI expert Ray Kurzweil. Hinton is now working part-time with the media giant, while Kurzweil was appointed director of engineering in January."

Hill is particularly interested in Hinton's work, which "is to help machines perfect deep learning, which is using low-level data to construct complex meaning and interpretation." Hill reports that Hinton believes Google's scientists and engineers "have a real shot at making spectacular progress in machine learning." You might recall that last year Google made news when it reported that its computer system had used millions of online images to learn to recognize cats on its own. Hill admits, however, that "much more – in terms of both computing power and software development – may yet be required to shift the deep learning paradigm beyond voice and image recognition." Nevertheless, for many business applications, artificial intelligence systems like Enterra's Cognitive Reasoning Platform™ are good enough to provide a significant competitive advantage for companies that utilize the technology. For example, Hill reports:

"When students from Sweden’s Chalmers University of Technology looked at AI's ability to select from which supplier to buy a particular part – taking into account factors such as price, lead time, delivery accuracy and quality – they found it could do so without making too many errors."

The fact that the system used by the Swedish students wasn't intelligent in the AGI sense of that word simply didn't matter for the task at hand. Dan Matthews, chief technology officer at IFS, told Hill, "The problem is not with the AI itself – the algorithms developed work well – but with the scenario and real-life data quality. For this to work well, and be worthwhile, you need a high volume of decisions where there are multiple choices and up-to-date values for all variables that may affect the decision. ... Taking the choice of supplier scenario as an example: lack of up-to-date price or lead time information for all alternative suppliers would lead to decisions made on wrong assumptions." In today's "always on" world, the collection of Big Data is much less of a problem than it was in the past. In fact, many companies lament that they have too much data and are collecting more each day. That is exactly why AI systems are required to make sense of it all.
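The Chalmers scenario is, at its core, a multi-criteria decision problem. As a rough illustration of the kind of logic involved — not the students' actual model, and using invented weights and supplier figures — a simple weighted-scoring approach might look like the following sketch:

    # Hypothetical multi-criteria supplier scoring -- illustrative only.
    # The weights and supplier figures below are invented; Hill's article
    # does not describe the Chalmers model or data in this detail.

    SUPPLIERS = {
        "Supplier A": {"price": 11.50, "lead_time_days": 14, "delivery_accuracy": 0.97, "quality": 0.97},
        "Supplier B": {"price": 10.75, "lead_time_days": 21, "delivery_accuracy": 0.97, "quality": 0.99},
        "Supplier C": {"price": 12.20, "lead_time_days": 7,  "delivery_accuracy": 0.99, "quality": 0.98},
    }

    # Relative importance of each factor (weights sum to 1.0).
    WEIGHTS = {"price": 0.35, "lead_time_days": 0.25, "delivery_accuracy": 0.20, "quality": 0.20}

    def normalize(values, lower_is_better=False):
        """Scale values to the 0-1 range; invert when lower raw values are better."""
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        scaled = [(v - lo) / (hi - lo) for v in values]
        return [1.0 - s for s in scaled] if lower_is_better else scaled

    def score_suppliers(suppliers, weights):
        names = list(suppliers)
        columns = {
            "price": normalize([suppliers[n]["price"] for n in names], lower_is_better=True),
            "lead_time_days": normalize([suppliers[n]["lead_time_days"] for n in names], lower_is_better=True),
            "delivery_accuracy": normalize([suppliers[n]["delivery_accuracy"] for n in names]),
            "quality": normalize([suppliers[n]["quality"] for n in names]),
        }
        return {n: sum(weights[f] * columns[f][i] for f in weights) for i, n in enumerate(names)}

    # Rank the suppliers from best to worst overall score.
    for name, score in sorted(score_suppliers(SUPPLIERS, WEIGHTS).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")

Matthews' caution applies directly to a sketch like this: if the price or lead-time fields are stale or missing, the ranking will be confidently wrong.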

At Enterra Solutions, we call this a Sense, Think/Learn, Act™ paradigm. In such a paradigm, data matters a lot. As Hill states, "Ultimately, even AI can only be as good as the data it is given – to start with, at least." Not everyone is as sanguine about the future of deep learning as the researchers cited above. For example, Ed Crego, George Muñoz, and Frank Islam write, "Big Data and Deep Learning are two major trends that will impact and influence the future direction and potential of innovation in the United States. ... In our opinion, both of these trends have substantial promise. But, they also have limitations that must be overcome to deliver on that promise." ["Big Data and Deep Learning: Big Deals or Big Delusions?" Huffington Post The Blog, 26 June 2013] It appears that what Crego, Muñoz, and Islam really object to is the hyperbole associated with Big Data and deep learning rather than the actual gains that have been made in those areas. They write:

"Big Data is everywhere and the folks who are making a living warehousing and mining it abound. Big Data can be used to analyze web browsing patterns, tweets and transit movements, to predict behavior and to customize messages and product offerings. Kenneth Cukier, Data Editor of The Economist, and Viktor Mayer-Schoenberger, Professor of Internet Governance and Regulations at the Oxford Internet Institute, exalt the emerging use and impact of Big Data in an essay for the May/June issue of Foreign Affairs. The essay is adapted from their new book, Big Data: A Revolution That Will Transform How We Live, Work and Think. ... They assert that because of the ability to collect and use great volumes of information there will need to be three 'profound changes' in how data is approached. (1) We will no longer have to rely solely on small amounts or samples and statistical methods for analysis. (2) We will have to tolerate some 'messiness' and depend on the quantity of data as opposed to its quality. (3) In many instances, 'we will need to give up our quest to discover the cause of things in return for accepting correlations.' It seems to us that it is precisely because of these three considerations that there will need to be more rigor and objectivity in the data gathering and analysis process. Scientific methods will become more important rather than less. An informed intellect and an inquiring mind will become more essential in order to perceive 'truth' and bring some order out of chaos."

They make a good point. A lot of people fear "the rise of machines" and believe that AI systems will put analysts out of work. AI systems may reduce the number of analysts that a company needs, but I agree with Crego, Muñoz, and Islam that for the foreseeable future "an informed intellect and an inquiring mind" will remain essential to ensure that the results of machine deep learning make sense. The authors disagree with Cukier and Mayer-Schoenberger on another point as well. Cukier and Mayer-Schoenberger label Big Data as "a resource and a tool"; but, Crego, Muñoz, and Islam insist that it is only a "resource" not a "tool." On this point, I agree entirely with them. They explain:

"The tool is the research design that is employed to organize, aggregate, and analyze data in order to see patterns, extract meaning and make judgments. The person who creates and uses that data is the toolmaker. Today, we have an oversupply of Big Data and an under supply of Big Data toolmakers. ... The message to us from this is straightforward. Even with mounds and mounds of Big Data, human insights and innovation must come into play to matter and make a difference. Big Data. Small Minds. No Progress! Big Data. Big Brains. Breakthrough! Deep Learning stands in contrast to Big Data. Deep Learning is the application of artificial intelligence and software programming through 'neural networks' to develop machines that can do a wide variety of things including driving cars, working in factories, conversing with humans, translating speeches, recognizing and analyzing images and data patterns, and diagnosing complex operational or procedural problems."

In the end, I'm not sure how real the differences are between Crego, Muñoz, and Islam, and Cukier and Mayer-Schoenberger. Both groups agree that the application of deep learning is what is going to make a difference in the future. But, as Hill emphasized, the results of that deep learning depend significantly on the quality of the data being analyzed. Crego, Muñoz, and Islam conclude:

"Smart machines are here and they will continue to get smarter. ... The real innovation challenge to us then it seems will not be to apply deep learning to replace humans but to use it to create new ideas, products and industries that will generate new jobs and opportunities for skilled workers. ... Getting the most out Deep Learning will require deep thinking. That's where authentic human intelligence still trumps artificial machine intelligence."

Obviously, as President & CEO of a company that offers AI-based business solutions, I believe that the potential of cognitive reasoning platforms to enhance business processes is significant. Companies that embrace such systems are more likely to survive the journey across tomorrow's business landscape than those that do not.

September 11, 2013

Artificial Intelligence Research Gaining Momentum

On Monday, the National Science Foundation announced that it had awarded a 5-year, $25 million grant "to Harvard and Massachusetts Institute of Technology to study how the brain creates intelligence and how that process can be replicated in machines." ["Harvard, MIT join on artificial intelligence research," by Sharon Gaudin, Computerworld, 9 September 2013] The grant will be used to establish a Center for Brains, Minds and Machines that will be based at MIT. According to a press release from the National Science Foundation, the Center for Brains, Minds and Machines will partner with a number of other U.S. and international institutions and organizations. ["New center to better understand human intelligence, build smarter machines," 9 September 2013] Those institutions and organizations include:

Academic institutions

  • California Institute of Technology
  • Cornell University
  • Harvard University
  • MIT
  • Rockefeller University
  • Stanford University
  • University of California, Los Angeles

Broadening Participation institutions

  • Howard University
  • Hunter College
  • Universidad Central del Caribe, Puerto Rico
  • University of Puerto Rico, Río Piedras
  • Wellesley College

International partnerships

  • City University, Hong Kong
  • Hebrew University of Jerusalem
  • Italian Institute of Technology
  • Max Planck Institute for Biological Cybernetics, Tübingen
  • National Center for Biological Sciences, Bangalore, India
  • University of Genoa
  • Weizmann Institute of Science

Commenting on the announcement, Samantha Wood writes, "The search for artificial intelligence is jumping out of the world of sci-fi movies and into a world much closer to home — MIT." ["National Science Foundation Pours $25M into MIT's New Artificial-Intelligence Research Center," BostInno, 9 September 2013] Wood also reports that the Center for Brains, Minds and Machines (CBMM) was one of only three new research centers the NSF announced it was going to fund through its Science and Technology Centers Integrative Partnerships program. Although the NSF press release didn't give details about the other two centers, it did contain a statement from Wanda Ward, head of NSF's Office of International and Integrative Activities, which oversees the program. Ward remarked, "NSF is pleased to support a cohort of exceptionally strong center proposals in Fiscal Year 2013 that scientifically 'top the charts' in terms of their timeliness and their potential contribution to U.S. competitiveness. These new leading-edge centers will produce the next generation of diverse, globally engaged talent and have the potential to attract more Nobel Prize-caliber researchers."

The NSF press release also noted that the CBMM was part of the broader BRAIN Initiative announced by the White House back in April 2013. The White House press release about the BRAIN Initiative stated that it was "designed to revolutionize our understanding of the human brain." ["Fact Sheet: BRAIN Initiative," White House press release, 2 April 2013] The release continued:

"Launched with approximately $100 million in the President’s Fiscal Year 2014 Budget, the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative ultimately aims to help researchers find new ways to treat, cure, and even prevent brain disorders, such as Alzheimer's disease, epilepsy, and traumatic brain injury. The BRAIN Initiative will accelerate the development and application of new technologies that will enable researchers to produce dynamic pictures of the brain that show how individual brain cells and complex neural circuits interact at the speed of thought. These technologies will open new doors to explore how the brain records, processes, uses, stores, and retrieves vast quantities of information, and shed light on the complex links between brain function and behavior."

The CBMM will be headed by Professor Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT. According to the NSF press release, the CBMM will be able to use the grant "to benefit from the expertise of neuroscientists, engineers, mathematicians and computational scientists through a global network of academic, industrial and technological partnerships" as well as "help train the next generation of scientists and engineers. A summer school program, technical workshops and online courses are planned to create a new community of interdisciplinary researchers fluent in the study of intelligence." According to the release, the Center's other principal investigators include Haym Hirsh, Dean of the Faculty of Computing and Information Science and Professor of Computer Science and Information Science at Cornell University; Lakshminarayanan Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at Harvard University; and Matthew Wilson, the Sherman Fairchild Professor of Neuroscience and Picower Scholar at MIT.

According to an MIT press release, Poggio is hoping to revive some of the ambitious goals for artificial general intelligence that were first discussed over half a century ago. Those goals have so far proven elusive. ["Artificial-intelligence research revives its old ambitions," by Larry Hardesty, MIT Media Relations, 9 September 2013] The release states:

"In recent years, by exploiting machine learning — in which computers learn to perform tasks from sets of training examples — artificial-intelligence researchers have built special-purpose systems that can do things like interpret spoken language or play Jeopardy with great success. But according to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT, 'These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.' Poggio thinks that AI research needs to revive its early ambitions. 'It's time to try again,' he says. 'We know much more than we did before about biological brains and how they produce intelligent behavior. We're now at the point where we can start applying that understanding from neuroscience, cognitive science and computer science to the design of intelligent machines.'"

In past posts about the quest to develop artificial general intelligence, I've cited skeptics who believe that it will be decades (at least) before a genuine AGI system is developed. Some skeptics doubt that such machine intelligence will ever be achieved. Skepticism, however, is no reason to halt the pursuit of better machine intelligence. Regardless of whether all of the Center's goals are achieved, much is going to be learned that will prove useful in the decades ahead. Wood reports that the U.S. isn't the only place where serious AI research is taking place. She writes:

"Not to be left out of artificial intelligence research, Italy, Germany, Hong Kong, India and Israel are all getting in on the act, as well, through various universities and institutions. Multiple industries, including Google and Microsoft, are also providing support. This collaboration will be cemented with the intertwining of both graduate students and postdocs, who will have joint advisors in different research areas at the center."

In the NSF press release, John Wingfield, assistant director of NSF's Biological Sciences Directorate, concluded, "Investments such as this in collaborative, fundamental science projects will ultimately lead to discoveries that revolutionize our understanding of the brain, which is the goal of the new BRAIN Initiative. Progress in this area holds enormous potential to improve our educational, economic, health and social institutions." According to the MIT press release, the CBMM will focus on four main interdisciplinary research themes. "They are the integration of intelligence, including vision, language and motor skills; circuits for intelligence, which will span research in neurobiology and electrical engineering; the development of intelligence in children; and social intelligence." The fact that the Center is focusing on interdisciplinary research bodes well for its efforts. Some of the greatest innovations in history have resulted from interdisciplinary collaboration.

September 10, 2013

Healthcare is Forecast to Get Even More Personal

Current debates about healthcare cover topics ranging from how to rein in costs to how to make it more personalized. With the emergence of Big Data analytic tools that can rapidly sequence DNA, test hypotheses, discover new relations, uncover fraud, and so forth, the healthcare sector sits on the cusp of a new era. Exactly where the road will lead remains unclear. What is clear, however, is that Big Data analytics hold the potential of making healthcare much more personal. This is particularly true when it comes to developing treatments for disease. Before this can happen, however, there needs to be more secure data sharing.

In an earlier post entitled Big Data and Better Health, I cited an article by Becky Graebe in which she notes that "four distinct big data pools exist in the US health care domain." Those pools are:

  • Pharmaceutical R&D data
  • Clinical data
  • Activity (claims) and cost data
  • Patient behavior and sentiment data

Graebe reports "there is very little overlap in ownership and integration of these pools, though that will be critical in making big strides with big data in health care." ["What is the future of big data in health care?," SAS Voices, 6 May 2013] Even if data integration occurs slowly, Hellmuth Broda believes that strides will be made in personalized medicines and treatments. ["Personalized Medicine and Big Data," Pondering Technology, 30 July 2013] He writes:

"While many doomsayers describe the Pharmaceutical industry as one where the golden days are over and where more and more enterprises are bound to be falling off the 'Patent Cliff' or get lost in space, many rather visionary companies have ignited their rocket boosters and are catapulting themselves onto firm ground again. This rocket booster consists of treatments for rare diseases and Personalized Medicine. The two chambered booster combines especially targeted diagnostics with targeted therapies and utilises cloud services alongside with Big Data analytics. This new method of transport brings with it new demands on research, clinical trials, production, logistics, information systems and the overall business model."

To achieve maximum benefit from Big Data analytics, Broda agrees with Graebe that more integration and cooperation needs to occur. He also lists a number of challenges facing the pharmaceutical industry. He continues:

"These challenges and more importantly how Pharma companies tackle them, are shaping the future of the industry, with some key trends already emerging:

  • To solve complex tasks, 'coopetition' with other companies will become the norm
  • The journey towards Personalized Medicine
  • Race to Biologics
  • Pairing of drugs with their accompanying diagnostics
  • Stricter regulations on compliance
  • New demands on privacy and security
  • Big Data approaches yielding new insight into drug action correlations

"These challenges put new demands on governance, processes, business models and the information systems, which will build the foundation for these new endeavours. New trends in technology adaptation will support and enable these objectives."

The ability to gather and analyze Big Data is the common thread that weaves its way through all of the trends identified by Broda. One of the dilemmas that will confront the healthcare sector regarding personalized medicines will be cost. Broda explains:

"Drugs will become less generic (and low-cost) and more tailored for specific demographics in the future (and more high value/high cost). There are huge benefits to this, including the fact that these drugs are harder to copy so they retain their value better for the producer and they are also more effective as they can meet more specialist needs. While the current trend is looking at groups of people, maybe based on age or ethnicity, this will evolve in the 21st Century into truly personalised drugs on an individual basis reflecting the patient's genetic predisposition."

There are good reasons why doctors and patients alike would like to see more personalized medicines. As Broda explains, "This increasing personalisation of therapy will significantly benefit patients, by both raising efficacy and reducing side-effects." While drug companies may relish the thought of developing high value/high cost drugs, that's probably not good news for the masses. If Big Data analytics can be used to bring the cost of drug development down, a win-win situation could emerge. Experience has shown that profits can still be made by selling to the so-called "bottom billion" consumers if products are offered at the right price point and in the right quantities. Big Data analytics should be able to help pharmaceutical companies with that challenge as well. No one benefits when pharmaceutical companies go out of business or when medicines are manufactured in unsafe conditions in poorly regulated plants. For the immediate future, however, it looks like personalized medicines are going to benefit the rich more than the poor.

Broda makes the point that "drugs and their diagnostics walk hand-in-hand." As I pointed out in my earlier post (mentioned above), Big Data and cognitive reasoning systems are playing a role in diagnostics as well as drug development. A paper from the Institute of Medicine, entitled Making the Case for Continuous Learning from Routinely Collected Data, asserts that a "learning health system" will be beneficial in a number of ways.

"The availability and reliability of large volumes of relevant longitudinal digital data from a variety of clinical and nonclinical sources are core features of a system that learns from each care experience, a learning health system. Common clinical repositories include data from electronic health record (EHR) systems used to manage patient care and claims data necessary for billing purposes. In some cases, data sources can be linked, using either institution-specific identifiers or matching algorithms, to create disease-specific patient registries that enable research. Integration of large pools of disparate clinical data from EHRs and claims is a major function of health information exchanges, which will be increasingly important to ensure seamless management of health information across institutions. Nonclinical sources of patient information may also include data from retail sales of over-the-counter medications, dietary supplements, walking and running shoes, and personal preferences and behaviors."

Although technology is important in the healthcare sector, professionals in that field can't afford to concentrate only on technology if things are going to get better. Like other business sectors, the healthcare sector needs to keep technology, people, and processes in balance. Deanna Pogorelc reports that Dr. Kevin Fickenscher, president and CEO of the American Medical Informatics Association, made that point at a conference earlier this year. ["The problem with big data in health? Too much focus on technology instead of people/process," MedCity News, 7 May 2013] Pogorelc reports that Fickenscher told conference participants, "One of the things that has been a problem in healthcare is that we tend to spend too much time talking about the technology and not enough time talking about the people and the process. So my personal bias is that while technology is important [...] if we don't deal with the people and process, we will not solve these other issues; we won't have good change management, and we won't have good implementation, which is where the value gets created from large data."

One of the newest fields being developed that will make healthcare even more personal is nanomedicine. "Nanomedicine refers to highly specific medical intervention at the molecular level for curing disease or repairing damaged tissues." ["Nanomedicine," Associates Degree in Nursing] The article states, "Though in its infancy, could we be looking at the future of medicine? Early clinical trials certainly look promising." The article was accompanied by the following infographic.

[Infographic: "Nanomedicine: The Future of Medicine," Associates Degree in Nursing]

It should be abundantly clear that both diagnoses and treatments for diseases are going to be more personal in the future. That should be a good thing, especially if costs can be held in check.

September 04, 2013

Fellow Travelers: Big Data and the Internet of Things

"Bland by name and superficially viewed as gee-whiz technology never to be realized, the IoT (Internet of things) has significant potential to transform business," writes Bob Violino. ["The 'Internet of things' will mean really, really big data," InfoWorld, 29 July 2013] Why, you might ask, is the Internet of things viewed as science fiction by skeptics? The simple answer is that it holds the promise of connecting billions of machines (everything from cars to refrigerators) that will hold virtual conversations with each other. In another article, Violino defines the Internet of things this way:

"The term carries a number of definitions. But in general, the IoT refers to uniquely identifiable objects, such as corporate assets or consumer goods, and their virtual representations in an Internet-like structure. ... In effect, these networked things become 'smart objects' that can become part of the Internet and active participants in business processes." ["What is the Internet of Things?" Network World, 22 April 2013]

Ericsson, the Swedish technology firm, has a vision of the world in which 50 billion devices are continuously connected and communicating. On its website, the company writes:

"The vision of more than 50 billion connected devices, based on ubiquitous internet access over mobile broadband, devices or things will be connected and networked independently of where they are. Falling prices for communication, combined with new services and functionality connecting virtually everything to serve a wide range of commercial applications, individual needs and needs of society. The 50 billion connected devices vision marks the beginning of a new era of innovative, intertwined, combined products and services that utilize the power of networks."

If that vision comes true, the IoT will clearly be much larger than the "human" Internet with which we are all familiar. Violino writes, "Promising unprecedented connectivity among objects and the gathering of massive amounts of data, IoT is poised to deliver significant business benefits to organizations forward-thinking enough to envision the opportunities and efficiencies IoT can reap." In fact, the amounts of data that will be generated by the IoT will be so massive that the term Big Data seems entirely inadequate to describe it. Evangelos Simoudis warns, "While our ability to collect the data from these interconnected devices is increasing, our ability to effectively, securely and economically store, manage, clean and, in general, prepare the data for exploration, analysis, simulation, and visualization is not keeping pace." ["Big Data and the Internet of Things," Enterprise Irregulars, 26 February 2013] He continues:

"The Internet of Things necessitates the creation of two types of systems with data implications. First, a new type of ERP system (the system of record) that will enable organizations to manage their infrastructure (IT infrastructure, human infrastructure, manufacturing infrastructure, field infrastructure, transportation infrastructure, etc.) in the same way that the current generation of ERP systems allow corporations to manage their critical business processes. Second, a new analytic system that will enable organizations to organize, clean, fuse, explore and experiment, simulate and mine the data that is being stored to create predictive patterns and insights. Today our ability to analyze the collected data is inadequate."

General Electric, which calls the IoT the "Industrial Internet," is convinced that this machine-to-machine network represents the future. The company depicts the concept as shown below.

[Image: GE's Industrial Internet data loop. Source: General Electric]

Bill Ruh, Vice President and Global Technology Director at General Electric, writes:

"Big Data, a term that describes large volumes of high velocity, complex and variable data that require advanced technologies to capture, store, distribute and analyze information, is at a tipping point, with billions being spent to turn mountains of information into valuable insights for businesses. But there is more to Big Data than numbers and insights." ["GE Insight: The Industrial Internet – Even Bigger Than Big Data," Financial Post, 31 May 2013]

Although Ruh believes that "Big Data is the lifeblood of the Industrial Internet," he also believes the Industrial Internet is "about building new software and analytics that can extract and make sense of data where it never existed before – such as within machines." Beyond that, he believes that machine learning will be used "to make information intelligent." To make this happen, he writes that "new connections need to be developed so that Big Data 'knows' when and where it needs to go, and how to get there." He continues:

"By connecting machines to the Internet via software, data is produced and insight is gained, but what's more is that these machines are now part of a cohesive intelligent network that can be architected to automate the delivery of key information securely to predict performance issues. This represents hundreds-of-billions of dollars saved in time and resources across major industries."

Obviously, machines aren't going to spontaneously connect. The process begins "with embedding sensors and other advanced instrumentation in the machines all around us, enabling collection and analysis of data that can be used to improve machine performance and the efficiency of the systems and networks that link them." ["Big Data Will Drive the Industrial Internet," by Thor Olavsrud, CIO, 21 June 2013] Olavsrud cites a paper written by Peter C. Evans, director of Global Strategy and Analytics at GE, and Marco Annunziata, chief economist and executive director of Global Market Insight at GE, entitled, Industrial Internet: Pushing the Boundaries of Minds and Machines. In that paper, Evans and Annunziata assert, "The compounding effects of even relatively small changes in efficiency across industries of massive global scale should not be ignored." They explain:

"As we have noted, even a one percent reduction in costs can lead to significant dollar savings when rolled up across industries and geographies. If the cost savings and efficiency gains of the industrial Internet can boost U.S. productivity growth by 1 to 1.5 percentage points, the benefit in terms of economic growth could be substantial, potentially translating to a gain of 25 to 40 percent of current per capita GDP. The Internet Revolution boosted productivity growth by 1.5 percentage points for a decade — given the evidence detailed in this paper, we believe the industrial Internet has the potential to deliver similar gains and over a longer period."

Mike Wheatley agrees with the folks at GE that the Internet of Things is more than science fiction. "There's no holding back the Internet of Things," he writes, "this is where the world's heading, and we're already seeing it in concepts ranging from smart electricity meters to IBM’s rather more ambitious Smart Cities initiatives. The basic fundamental holding IoT together is connectivity, a world in which machines with intelligent sensors are hooked up to the web, and able to deliver a stream of constant data." Critical to that connectivity, he believes, is cloud computing. ["Cloud Computing & The Internet of Things go Hand in Hand," Silicon ANGLE, 17 July 2013] He continues:

"The Internet of Things ... won't be made possible by a jumble of wires. What makes it possible is cloud computing, combined with the glut of sensors and applications all around you that collect, monitor and transfer data to where it's needed. All of this information can be sent out or streamed to any number of devices and services. ... Of course, this means that there's going to be an awful lot of data flying around out there, data that needs to be processed quickly. ... This is why the cloud is so important. The cloud can easily get a handle on the speed and volume of the data that's being received. It possesses the ability to ebb and flow according to demand, all the while remaining accessible anywhere from any device."

If Evangelos Simoudis is correct that currently "our ability to analyze the collected data is inadequate," then Wheatley might be a bit optimistic in his assessment of how easy it is going to be to manage the IoT. I do agree with him, however, that the cloud will be essential to the IoT's development. One thing that all analysts seem to agree on is that once the IoT is up and running, the amount of data it will generate will be humongous.