To 'democratize' AI, make it work more like a human brain

Since the launch of ChatGPT in 2022, AI platforms based on a computer science approach called “deep learning” have spread to every corner of society—they’re in your emails, on recipe sites and in social media posts from politicians.
That popularity, however, has also brought an unexpected twist, said Alvaro Velasquez, assistant professor in the Department of Computer Science at CU Boulder: The smarter AI gets, the less accessible it becomes.
According to one estimate, Google spent nearly $190 million training its latest chatbot, known as Gemini. That price tag doesn’t include the computer chips, labor and maintenance to keep Gemini running 24/7. AI platforms also come with a hefty environmental toll. Around the world, AI data centers produce nearly 4% of total greenhouse gas emissions.
These factors are putting AI out of reach of all but the largest corporations, Velasquez said.
“Historically, there was a much more level playing field in AI,” he said. “Now, these models are so expensive that you have to be a big tech company to get into the industry.”
In a paper published last month in the journal PNAS Nexus, he and his colleagues say that an approach known as neurosymbolic AI could help to “democratize” the field.
Embraced by a growing number of computer scientists, neurosymbolic AI seeks to mimic some of the complex and (occasionally) logical ways that humans think.
The strategy has been around in some form or another since the 1980s. But the new paper suggests that neurosymbolic AI could shrink the size and cost of AI platforms by a factor of thousands, putting these tools within reach of many more people.
“Biology has shown us that efficient learning is possible,” said Velasquez, who until recently served as a program manager for the U.S. Defense Advanced Research Projects Agency (DARPA). “Humans don’t need the equivalent of hundreds of millions of dollars of computing power to learn.”

Dogs and cats
To understand how neurosymbolic AI works, it first helps to know how engineers build AI models like ChatGPT or Gemini—which rely on a computer architecture known as a “neural network.”
In short, you need a ton of data.
Velasquez gives a basic example of an AI platform that can tell the difference between dogs and cats. If you want to build such a model, you first have to train it by giving it millions of photos of dogs and cats. Over time, your system may be able to label a brand-new photo, say of a Weimaraner wearing a bow tie. It doesn’t know what a dog or a cat is, but it can learn the patterns behind what those animals look like.
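To make that concrete, here is a minimal sketch of the kind of supervised training loop Velasquez is describing, written in Python with PyTorch. The folder of labeled photos, the tiny network and the handful of training passes are illustrative assumptions; real platforms train far larger networks on millions of images.

```python
# Minimal sketch of supervised "deep learning" classification, assuming PyTorch.
# The dataset path and network size are illustrative, not from the article.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Labeled photos arranged as photos/dog/*.jpg and photos/cat/*.jpg (hypothetical path)
data = datasets.ImageFolder(
    "photos",
    transform=transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()]),
)
loader = DataLoader(data, batch_size=32, shuffle=True)

# A tiny convolutional network: it learns visual patterns, not the concept "dog"
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2),  # two outputs: dog, cat
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # in practice, large models train over millions of images
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # compare guesses to the given labels
        loss.backward()
        optimizer.step()  # nudge the network's weights toward better guesses
```

The network never learns what a dog is; it only adjusts its numeric weights until its guesses match the labels it was given.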
The approach can be really effective, Velasquez said, but it also has major limitations.
“If you undertrain your model, the neural network is going to get stuck,” he said. “The naïve solution is you just keep throwing more and more data and computing power at it until, eventually, it gets out of it.”
He and his colleagues think that neurosymbolic AI could get around those hurdles.
Here’s how: You still train your model on data, but you also program it with “symbolic” knowledge, or some of the fundamental rules that govern our world. That might include a detailed description of the anatomy of mammals, the laws of thermodynamics or the logic behind effective human rhetoric. Theoretically, if your AI has a firm grounding in logic and reasoning, it will learn faster and from far less data.
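What might that look like in practice? Below is a hypothetical sketch, again in Python with PyTorch, of one common way to inject symbolic knowledge: encoding a logical rule as an extra penalty in the training loss. The rule, the attribute name and the weighting are illustrative assumptions, not the specific method from the PNAS Nexus paper.

```python
# Toy illustration of combining learning with a symbolic rule
# (a "logic as an extra loss term" style). Hypothetical sketch only.
import torch
import torch.nn.functional as F

def symbolic_penalty(logits, has_retractable_claws):
    """Encode the rule: 'if the animal has retractable claws, it is a cat'.

    logits: network outputs of shape (batch, 2), class 0 = dog, class 1 = cat
    has_retractable_claws: float tensor of shape (batch,), 1.0 where the rule's
        premise holds (an assumed, human-provided symbolic attribute)
    """
    prob_cat = F.softmax(logits, dim=1)[:, 1]
    # Penalize predictions that violate the rule: premise true but "cat" unlikely
    violation = has_retractable_claws * (1.0 - prob_cat)
    return violation.mean()

# Usage inside an ordinary training step (model, images, labels as before):
# logits = model(images)
# loss = F.cross_entropy(logits, labels) + 0.5 * symbolic_penalty(logits, claws)
```

Because the rule steers the network away from whole classes of wrong answers up front, it doesn’t have to rediscover them from examples, which is one reason researchers expect savings in data and computing power.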
Not found in nature
One place where the approach could work especially well is biology, Velasquez said.
Say you want to design an AI model that could discover a brand new kind of cancer drug. Deep learning models would likely struggle to do that—in large part because programmers could only train those models using datasets of molecules that already exist in nature.
“Now, we want that AI to discover a highly novel biology—something that doesn’t exist in nature,” Velasquez said. “That AI model is not going to produce that novel molecule because it’s well outside the distribution of data it was trained on.”
But, using a neurosymbolic approach, programmers could build an AI that grasps the laws of chemistry and physics. It could then draw on those laws to, in a way, imagine what a new kind of cancer medication might look like.
The idea sounds simple, but in practice, it’s devilishly hard to do. In part, that’s because logical rules and neural networks run on completely different computer architectures. Getting the two to talk to each other isn’t easy.
Despite the challenges, Velasquez envisions a future where AI isn’t something that only tech behemoths can afford.
“We’d like to return to the way AI used to be—where anyone could contribute to the state of the art and not have to spend hundreds of millions of dollars,” he said.
Co-authors of the new paper include Neel Bhatt, Ufuk Topcu and Zhangyang Wang at the University of Texas at Austin; Katia Sycara and Simon Stepputtis at Carnegie Mellon University; Sandeep Neema at Vanderbilt University; and Gautam Vallabha at Johns Hopkins University.