By Anthony Scriffignano, Ph.D., SVP and Chief Data Scientist at Dun & Bradstreet

When I was growing up, it seemed there was no shortage of life lessons that came from adults, many of which started with “when I was growing up.” This is something we do as humans. We put things in context. We tell stories. We take pictures. These stories and pictures evoke emotions that help others relate to our message. Before the age when history was written down, it was woven into long and rich narratives, recounted with great care and thus serving to perpetuate history itself. How does all of this interaction change in an era where it seems everything is recorded and stored? We take thousands of pictures, but can we find the one that strikes the emotion of the moment like the single dog-eared photo we saved from years gone by? How does our world change as more things become digital? When I was growing up, I had friends with names like Peter and Richard and Marianne and Kathy. I can remember some story about each of them, just by remembering their names. What about the Ryans and Chads and Lilis and Karens of today? Will they have the same experience looking back from an era of digitization? Oh, it’s been a long time, Chad… Things are quite different now…

The Fears of the Past: Computers will get smarter than people

In 2011, two of the most successful former champions of the popular TV quiz show Jeopardy! were defeated by Watson, IBM’s cognitive-computing superstar. Similar victories for machines came in 1997, when IBM’s Deep Blue beat the reigning world chess champion, and again last year, when the Chinese game of Go (which some say is an order of magnitude more difficult than chess) was dominated by Google’s AlphaGo. In all of these cases, articles were written about how machines were finally “smarter” than people.

Let’s look at the conclusion that machines are smarter a bit more critically. There are some logical observations that might serve to disprove it. First and foremost, in each of these “victories” of the machine, the machine in question was designed to do one specific thing extremely well. It defeated a human, with a human brain, but only at that one very specific task. It would be unreasonable to expect Watson to interpret the emotional impact of an opera, or to ask AlphaGo to play poker. The emotional impact of opera is uniquely personal; even lovers of the form often find it difficult to explain their own reaction. Poker involves testing others’ will, bluffing, and drawing on intuition. Bring up any of these points at a cocktail party and you will find people passionately arguing either side of the “computers are smarter than people” debate. That debate has been going on for decades and will (in my opinion) continue as long as people have the will to argue.

Another argument that still sways in the direction of humankind is evolution. One thing humans seem to do well is get better at things. When I was growing up, the triple flip was the mark of a master in trapeze, gymnastics, diving, and other sports. Today, Olympians regularly perform maneuvers that involve more rotations, and rotation in multiple dimensions, some so fast that they can only be truly appreciated by watching a replay in slow motion. Most machine learning algorithms also have some method of improving, either through additional training by humans or through observation of many iterations of data (AlphaGo, for example, is designed to learn both from human play and from positions that emerge when it plays against itself). This type of learning, however, is still largely constrained to a particular problem or objective. That changes somewhat with neuromorphic methods, designed to mimic the operation of the human brain, but even these methods are constrained to a very specific, largely pre-conceived type of evolution.
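To make that constraint concrete, here is a deliberately trivial sketch in Python of narrow, iterative improvement: a toy agent that gets better at one pre-defined game of chance by playing it over and over, and at nothing else. The game, its payoffs, and the update rule are all invented for the illustration; this is not how AlphaGo actually works.

```python
# Illustrative only: a toy agent improving at one narrow, pre-defined task.
# It repeatedly tries one of three moves, observes the outcome, and updates
# its value estimates. The learning never generalizes beyond this one game.
import random

payoffs = {"a": 0.2, "b": 0.5, "c": 0.8}   # hidden "rules" of the toy game
values  = {m: 0.0 for m in payoffs}         # the agent's learned estimates
counts  = {m: 0 for m in payoffs}

for episode in range(10_000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        move = random.choice(list(payoffs))
    else:
        move = max(values, key=values.get)
    reward = 1.0 if random.random() < payoffs[move] else 0.0
    counts[move] += 1
    values[move] += (reward - values[move]) / counts[move]  # running average

print(values)  # estimates converge toward the hidden payoffs -- for this game only
```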

In the past, our perception of digitization was largely based on improvement: doing things faster than people could. Over time, machines evolved to do things that people could not do at all. Nevertheless, machine capability remains highly focused on a pre-determined task or method of learning. Certain capabilities that rely on uniquely human emotion or versatility are still well outside the range of automation.

The Fears of the Day: Computers are changing humanity

Lately it seems there are devices everywhere. Technology astounds. I am writing this blog on an airplane, traveling close to the speed of sound, with all of the modern conveniences of power, light, and protection from the environment. As humorists and songwriters have put it, I am literally sitting in a chair in the sky. Modern airliners are highly dependent on digital technology not only to maintain stable flight, but to detect surrounding danger and to communicate with the ground and with other aircraft. Some rightfully argue that the privilege of digitization is unequally distributed (and I agree), but even so, digital devices are making water cleaner, helping to educate children in remote regions, and extending health care to otherwise marginalized individuals.

Of course, it’s not all necessarily good. There are concerns about having too many “devices,” about children not spending enough time with physical activities, about the loss of intimacy in communication and the erosion of privacy. Personally, I lament the apparent demise of the hand-written note on beautiful paper in cursive writing that expresses a simple thank you or congratulations. At the far end of the spectrum of “not good” are cyber crimes and other digital malfeasance, which represent a clear and present danger to many aspects of modern life.

Sometimes, the balance between digital opportunity and digital threat lies in the implementation. For example, we are certainly still some way off from a world of fully autonomous, self-driving vehicles, but it is difficult to ignore the progress in that area. How does this evolution affect the person who drives a truck or a car to make a living?

Bill Gates has reflected on the observation that machines are taking some jobs and that, it seems, more is to come. His reflection focused on the potential opportunity for mankind as well as the risk: “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” (as quoted in The Washington Post, January 2015).

More and more, it seems, machines will be required to explain themselves in order to be relevant to humans and for humans to trust them. In a recent Churchill Club paired-luminaries session I did with Dr. Inderpal Bhandari, Global Chief Data Officer of IBM, he put the challenge very succinctly: “the bigger examples are actually going to be in the realm of human endeavor in terms of keeping up with the data, in terms of establishing context and using these agents in a way to augment that intelligence. I think that’s where the explanatory aspect is going to become really critical. You know you can’t have the agent without it being able to explain what it’s presenting back to the person.”
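As a purely illustrative sketch, what might an agent that “explains what it’s presenting back to the person” look like in code? In the toy Python example below, every field, weight, and threshold is invented for the illustration; the only point is that the recommendation and its reasons travel together, rather than the agent returning a bare answer.

```python
# Illustrative only: a hypothetical agent that never returns a bare answer.
# Every recommendation is paired with the factors that produced it, so a
# person can judge whether to act on it. The fields, weights, and thresholds
# are invented for this sketch, not drawn from any real scoring model.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    decision: str
    score: float
    reasons: list = field(default_factory=list)  # human-readable explanation

def assess_supplier(on_time_rate: float, years_active: int) -> Recommendation:
    reasons, score = [], 0.0
    if on_time_rate >= 0.95:
        score += 0.6
        reasons.append(f"On-time delivery rate of {on_time_rate:.0%} meets the 95% bar")
    if years_active >= 5:
        score += 0.4
        reasons.append(f"{years_active} years of operating history")
    decision = "approve" if score >= 0.6 else "review"
    return Recommendation(decision, score, reasons)

print(assess_supplier(0.97, 8))  # decision, score, and the reasons behind them
```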

We are living in a time where it is becoming increasingly important to understand not only what our digital agents are doing, but how that agency may be changing the way we act and react. It is how we implement technology, and how we understand the actions of our electronic agents, that portends the impact on humankind.

The Fears of the Future: My boss is a robot

As with any emerging field, not everyone agrees on the best way forward. Recently, some of the greatest minds, such as Dr. Stephen Hawking, have warned that AI could actually bring about an end to mankind if not properly managed. I take these warnings quite seriously, but I think there is hope.

My biggest concern is complacency. I don’t worry about taking direction (to some extent) from a robot. I already take direction from automation. My phone alerts me to an upcoming meeting and I go. My fitness monitor tells me I’m not walking enough and I get up and walk. The danger, it seems to me, is in surrendering our will to a machine and letting that surrender become an excuse to abandon rational thought. If my GPS tells me to turn the wrong way down a one-way street, following that direction does not absolve me of responsibility; I am still expected to ignore an instruction that defies rational behavior.

From digital malfeasance to the complacency that comes with expecting all difficult problems to be solved by machines, we clearly have evidence of a shift in our views and expectations of the machines in our lives. Without a doubt, as AI continues to advance, as the Internet of Things continues to connect things that were otherwise isolated and out of sync, and as computing makes strides never before imagined, we have work to do to ensure that the excess human capacity created is used to improve ourselves and the world we live in.

Digitization is neither good nor bad. It simply is. The degree to which it has a positive or negative impact on society and the world is entirely up to the creators of new technology and the consumers of that capability.

The challenge, as I see it, is to embrace the digital evolution going on around us, but to be thoughtful about how that digitization may be exacerbating marginalization, driving out creativity, or otherwise bringing about unintended consequences. There is still ample opportunity to serve the underserved, to think new thoughts, and to innovate in ways that far exceed machine capacity. The choice is entirely ours.