Symbolic Mathematics Finally Yields to Neural Networks

After translating some of math’s complicated equations, researchers have created an AI system that they hope will answer even bigger questions.

More than 70 years ago, researchers at the forefront of artificial intelligence research introduced neural networks as a revolutionary way to think about how the brain works. In the human brain, networks of billions of connected neurons make sense of sensory data, allowing us to learn from experience. Artificial neural networks can also filter huge amounts of data through connected layers to make predictions and recognize patterns, following rules they taught themselves.

By now, people treat neural networks as a kind of AI panacea, capable of solving tech challenges that can be restated as problems of pattern recognition. They provide natural-sounding language translation. Photo apps use them to recognize and categorize recurrent faces in your collection. And programs driven by neural nets have defeated the world’s best players at games including Go and chess.

However, neural networks have always lagged in one conspicuous area: solving difficult symbolic math problems. These include the hallmarks of calculus courses, like integrals or ordinary differential equations. The hurdles arise from the nature of mathematics itself, which demands precise solutions. Neural nets instead tend to excel at probability. They learn to recognize patterns — which Spanish translation sounds best, or what your face looks like — and can generate new ones.
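To see what “demands precise solutions” means in practice, here is a minimal Python sketch using the SymPy library (our choice for illustration; the article does not name any tool). It contrasts the exactness of a symbolic integral with the approximate, probabilistic outputs neural nets are built to produce: the symbolic answer can be verified perfectly by differentiation, with no tolerance for “close enough.”

```python
# A minimal sketch (not from the article): symbolic math demands exactness.
import sympy as sp

x = sp.symbols("x")

# Symbolic integration returns an exact antiderivative, not an approximation.
antiderivative = sp.integrate(x * sp.cos(x), x)
print(antiderivative)  # x*sin(x) + cos(x)

# Differentiating the result recovers the integrand exactly. This precise,
# checkable relationship is the kind of target that pattern-matching,
# probability-driven neural nets have historically struggled to hit.
assert sp.simplify(sp.diff(antiderivative, x) - x * sp.cos(x)) == 0
```

A neural net, by contrast, outputs its best guess at a likely answer; for an integral, a guess that is almost right is simply wrong.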

To read the full article, click here.
