
The Rocky History of Artificial Intelligence

Roel Nuyts

As in countless other industries, it is impossible to separate the future of healthcare from the two-letter elephant in the room: AI. Artificial intelligence has introduced a paradigm shift across industries. But where did it all begin?

The concept of artificial life entered the collective human imagination long before the existence of the MIT Robotics Lab. For thousands of years it was the purview of inventive storytellers and philosophers, who imagined mechanized human-like figures driven by emotion and intent. It was also in antiquity that mathematically inclined classical philosophers began to describe human thinking as a symbolic system, one that could in principle be replicated and therefore programmed. But despite these early imaginings, modern thinking about the possibility of intelligent systems began in earnest in the 1950s.

1950s – Initial Tests and Terms

In 1950, Alan Turing, a young British polymath, published a paper titled “Computing Machinery and Intelligence,” sparking the first major discussion about the field that would later be called AI. Beyond establishing the groundwork for defining and testing machine intelligence, Turing’s paper demolished the common objection that computers could never be intelligent. The promise of artificial intelligence was born.

Years later, in 1956, computer scientist John McCarthy coined the term artificial intelligence, explaining that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” That same year, two young researchers, Allen Newell and Herbert Simon, demonstrated the first artificially intelligent computer program, the Logic Theorist.

These early AI experts were among the first to wrestle with the question we still ask ourselves today: will we ever be able to build a computer that can truly be called intelligent?

1960s – Collaborative Genius

About a decade after Turing published his paper, the field of AI saw remarkable breakthroughs at the hands of a few dedicated minds. Marvin Minsky and John McCarthy set up the first coordinated AI research lab at MIT. While McCarthy’s approach was rooted in formal mathematical logic, Newell and Simon continued working on programs that modeled human thinking, developing systems that solved problems in a manner that mirrored human information processing. Countless brilliant minds helped shape the beginnings of AI, but in retrospect, Turing and Minsky are often regarded as the real pioneers of artificial intelligence: Turing for introducing the ‘what,’ and Minsky for defining the ‘how.’

First AI Wave and Winter

The initial wave of AI was rooted in a simple idea: problem reduction. You take a hard problem, break it into simpler problems, and then break those into simpler problems still, until the pieces can be solved directly. In the early ’60s, James Slagle wrote the signature program of AI’s first wave, SAINT, a system that solved symbolic integration problems by reducing hard integrals to progressively easier ones, as the sketch below illustrates.
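To make the idea concrete, here is a minimal, purely illustrative sketch of problem reduction applied to integration. The tuple-based expression format and the two rules are invented for this example; SAINT itself was a far richer Lisp program with heuristics and a tree of goals.

```python
# Toy sketch of problem reduction (hypothetical, not Slagle's code).
# Expressions are nested tuples: ('+', a, b) for sums, ('pow', n) for x**n.

def integrate(expr):
    """Integrate expr with respect to x by reducing it to subproblems."""
    op = expr[0]
    if op == '+':                       # reduction step: a hard integral of a sum
        left, right = expr[1], expr[2]  # becomes two simpler integrals
        return ('+', integrate(left), integrate(right))
    if op == 'pow':                     # base case: the power rule
        n = expr[1]                     # integral of x**n is x**(n+1)/(n+1)
        return ('scale', 1 / (n + 1), ('pow', n + 1))
    raise ValueError(f'no reduction rule for {op}')

# Example: integrate x**2 + x  ->  x**3/3 + x**2/2
print(integrate(('+', ('pow', 2), ('pow', 1))))
```

The recursion is the whole trick: each call either solves a problem outright or replaces it with strictly simpler ones, which is exactly the pattern the first wave of AI pursued.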

But before long the hype far exceeded AI’s accomplishments. As the mid ’70s approached, AI researchers were grappling with two very basic limitations: too little memory and dreadfully slow processing speeds. Even the most impressive, functional programs could handle only simplistic problems. Government funding for AI research was cut, and interest dropped off. Scientific activity declined dramatically, and the field entered a period now known as the first “AI Winter.”

Resurgence – Expert Systems

The long winter of disinterest eventually gave way to another period of investment. Expert systems research gathered momentum in the late ’70s, and in the ’80s the U.S. and Britain poured in funding to compete with Japan’s Fifth Generation Computer Project, reigniting the field.

During this period, expert systems, computer programs aimed at modeling human expertise in specific knowledge areas, became increasingly popular. Edward Shortliffe’s MYCIN, a system built to assist physicians in diagnosing infectious diseases, was a hallmark of this new wave. At the time, MYCIN’s defenders claimed it came extremely close to matching human levels of performance, prompting a wave of enthusiasm for the possibility of capturing expertise in a collection of rules. These rule-based expert systems were a dominant force for the next decade and remain part of the AI toolkit today.
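For a flavor of how such systems work, here is a minimal, hypothetical sketch of rule-based inference using simple forward chaining. The facts and rules are invented for illustration; MYCIN itself used backward chaining with certainty factors over several hundred hand-crafted medical rules.

```python
# Toy rule-based expert system (illustrative only, not MYCIN).
# Each rule pairs a set of premise facts with a conclusion fact.
RULES = [
    ({'gram_negative', 'rod_shaped'}, 'possible_e_coli'),
    ({'possible_e_coli', 'urinary_symptoms'}, 'suspect_uti'),
]

def forward_chain(facts):
    """Fire rules repeatedly until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: record its conclusion
                changed = True
    return facts

print(forward_chain({'gram_negative', 'rod_shaped', 'urinary_symptoms'}))
# -> the derived facts include 'possible_e_coli' and 'suspect_uti'
```

The appeal, then and now, is transparency: every conclusion can be traced back to the specific rules that produced it, which mattered enormously for physicians asked to trust a machine.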

Second AI Winter

After a series of financial setbacks and the fall of expert systems, the second AI winter arrived in the late ’80s and early ’90s. Confident projections failed to materialize, and many startups aimed at replacing human experts folded. The specialized machines that ran expert systems came to be seen as slow, clumsy, and too expensive to maintain compared with ordinary desktops. As a consequence, funding for AI research was cut deeply.

21st Century – General Intelligence and Possibility

In the last two decades, the third wave of AI has resurged stronger than ever. Siri launched in 2010, and we now have regular conversations with objects ranging from phones and home assistants to clocks and microwaves. In 2011, IBM’s Watson beat Jeopardy! champion Ken Jennings.

The third wave of AI has been enabled by abundant computing power and data, which together make possible a new kind of statistics. Some call it machine learning, others computational statistics. Whatever you choose to call it, its success rests largely on deep neural networks.
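For readers who have never seen one, here is a minimal sketch of the kind of model behind the third wave: a toy two-layer network, invented for illustration, trained by gradient descent to learn XOR. Production deep learning stacks many more layers over vastly more data and compute, but the core loop of predict, measure error, and adjust weights is the same.

```python
# Toy two-layer neural network learning XOR (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input-to-hidden weights
W2 = rng.normal(size=(8, 1))   # hidden-to-output weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):                            # full-batch gradient descent
    h = sigmoid(X @ W1)                           # hidden activations
    out = sigmoid(h @ W2)                         # network predictions
    grad_out = (out - y) * out * (1 - out)        # error signal at the output
    grad_h = grad_out @ W2.T * h * (1 - h)        # error pushed back to hidden layer
    W2 -= 0.5 * h.T @ grad_out                    # adjust weights downhill
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```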

Looking Forward

AI is a powerful tool, and that power warrants both excitement and concern. In 2014, Elon Musk warned that “with artificial intelligence we are summoning the demon. AI is our biggest existential threat.” His skepticism echoed Minsky, who warned back in 1970 that “once the computers get control, we might never get it back.”

Despite being studied for decades, AI still holds the elusive potential to be one of the biggest contributors to global disruption. From one perspective we have made immense progress: over its seven decades AI has grown steadily more capable of solving complex problems. From another, the full visions of Turing and McCarthy seem as distant now as they were then. Just as human learning is a gradual process, innovation in AI requires patience and imagination. Each idea that enthusiasts have hailed as the ‘answer’ has garnered both excitement and worry, and each is a unique contributor to the entirety of the AI story.

Many industries have been swept up in the AI storm; the benefits are far too significant for virtually any of them to ignore. Gartner predicted that the global business value derived from AI would total $1.2 trillion in 2018, a 70% increase over 2017. In healthcare, AI first manifested in the MYCIN decision-support program and continues to support diagnosis and treatment today. In the decades to come, we can be sure that AI will disrupt healthcare as we know it. Soon, as ancient imaginations continue to become reality, industries will have no choice but to hold a serious, collaborative conversation about machine policy and ethics.

— Roel Nuyts is Head of Product at Health2047.
