Why the future of AI is open source

Artificial general intelligence (AGI) is on the way, and open source is key to achieving its benefits and controlling its risks.

Artificial general intelligence (AGI), the next phase of artificial intelligence, in which computers meet and exceed human intelligence, will almost certainly be open source.

AGI seeks to solve the broad spectrum of problems that intelligent human beings can solve. This is in direct contrast with narrow AI (encompassing most of today's AI), which seeks to exceed human abilities at a specific problem. Put simply, AGI is all the expectations of AI come true.

At a fundamental level, we don't really know what intelligence is or whether there might be types of intelligence different from human intelligence. While AI includes many techniques that have been used successfully on specific problems, AGI is more nebulous. It is not easy to develop software to solve a problem when the techniques are not known and there is no concrete problem statement. The consensus from the recent AGI-20 Conference (the world's preeminent AGI event) is that solutions to AGI exist. This makes the eventual emergence of AGI likely, if not inevitable.

Approaches to AGI

There are at least four ways to create AGI:

  1. Combining today's narrow AI capabilities and amassing huge computation power
  2. Replicating the human brain by simulating the neocortex's 16 billion neurons (see the toy sketch after this list)
  3. Replicating the human brain and uploading content from scanned human minds
  4. Analyzing human intelligence by defining a "cognitive model" and implementing the model with procedural language techniques
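
To make the second approach more concrete, here is a toy simulation step for a population of leaky integrate-and-fire neurons, one of the simplest neuron models used in brain simulation. Everything here is illustrative: the model, the parameters, and the population of 1,000 neurons stand in for the roughly 16 billion far more complex neurons a real neocortex simulation would need.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) update for a neuron population.
# All parameters are illustrative, not taken from any real brain model.
def lif_step(v, input_current, dt=1.0, tau=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Advance membrane potentials by one time step; return (v, spikes)."""
    v = v + (dt / tau) * (v_rest - v + input_current)
    spikes = v >= v_thresh            # which neurons fire this step
    v = np.where(spikes, v_reset, v)  # firing neurons reset
    return v, spikes

v = np.full(1_000, -65.0)             # 1,000 neurons, not 16 billion
for _ in range(100):                  # 100 time steps of random input
    v, spikes = lif_step(v, np.random.uniform(0.0, 30.0, v.shape))
```

Scaling this from a thousand simplified units to billions of biologically realistic ones, wired with realistic connectivity, is what makes the second approach such a computational challenge.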

Consider GPT-3, OpenAI's monumental achievement that generates creative fiction such as poems, puns, stories, and parodies. The model was trained on hundreds of billions of words and learned the statistical relationships among words and phrases. It is so capable that OpenAI has not released the full model publicly, citing concerns about its potential for misuse. Although it seems smart, most people doubt that GPT-3 understands the words it is using. However, GPT-3 demonstrates that with enough data and computing power, you can fool a lot of people a lot of the time.
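
To see why plausible text does not require understanding, consider a deliberately tiny sketch of statistical next-word prediction. GPT-3 is a large neural network, not a lookup table like this one, but the underlying intuition is similar: learn which words tend to follow which, then sample from those learned relationships.

```python
import random
from collections import defaultdict

# A tiny statistical text generator: no understanding, only
# learned word-to-word relationships, sampled at random.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(list)         # word -> words seen after it
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    """Produce text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))                # e.g., "the dog sat on the mat"
```

The output can look fluent, yet nothing in the program knows what a cat or a mat is, which is exactly the limitation described next.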

Unfortunately, that is also the case with most narrow AI. The average three-year-old stacking blocks understands that objects exist in a real world and that time moves forward: blocks have to be stacked before they can fall down. Narrow AI's basic limitation is that its systems do not understand that words and images represent physical things that exist and interact in a physical universe, or that causes precede their effects in time.

While today's AIs may lack understanding, AGIs will likely be goal-directed systems that pursue whatever objectives we set for them. If we set goals that benefit humanity, AGIs will be tremendously beneficial. But if AGIs are weaponized, they will likely be efficient in that realm, too. I'm not so concerned about Terminator-style individual robots as I am about an AGI being able to strategize even more destructive methods of controlling humankind. I believe these risks transcend today's AI concerns about privacy, equality, transparency, and employment. AGI is akin to genetic engineering in that its potential is huge, in terms of both its benefits and its risks.

When will we see AGI?

AGI could emerge soon, but there is no consensus on the timing. Consider that the structure of the brain is defined by a small portion (perhaps 10%) of the human genome, which totals about 750MB of information. That suggests a program of only about 75MB could fully represent the brain of a newborn, with all its human potential. Such a project is well within the scope of a development team.
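
The numbers in that estimate are easy to check. Assuming roughly 3 billion base pairs in the human genome at 2 bits each, and the article's ~10% figure for the brain-defining share:

```python
# Back-of-envelope arithmetic behind the ~750MB and ~75MB figures.
base_pairs = 3_000_000_000            # approximate human genome size
bits_per_base = 2                     # four bases (A, C, G, T) -> 2 bits

genome_mb = base_pairs * bits_per_base / 8 / 1_000_000
brain_share_mb = genome_mb * 0.10     # ~10% assumed to define the brain

print(f"genome: ~{genome_mb:.0f} MB")                      # ~750 MB
print(f"brain-defining share: ~{brain_share_mb:.0f} MB")   # ~75 MB
```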

We don't yet know what to develop, but at any time, a neuroscience breakthrough could map the human neurome. (There already is a Human Neurome project.) The Human Genome Project seemed outlandishly complex when it began, yet it was completed sooner than expected. Emulating the brain in software could follow a similar path.

There won't be a "singularity," a single moment when AGI suddenly appears. Instead, it will emerge gradually. Imagine that your Alexa, Siri, or Google Assistant gradually gets better at answering your questions. It's already better at answering questions than a three-year-old child; at some point, it will be better than a 10-year-old, then an average adult, then a genius, then beyond. We may argue about the date the system crosses the line of human equivalence, but at each step along the way, the benefits will outweigh the risks, and we'll be thrilled with the enhancement.

Why open source?

For AGI, there are all the usual reasons for choosing open source: community, quality, security, customization, and cost. But there are three factors that make AGI different from other open source software:

  1. There are extreme ethical/risk concerns. We need to make these public and set up a system for verification and compliance with whatever standards emerge.
  2. We don't know the algorithm, and open source can encourage experimentation.
  3. AGI could arrive sooner than people think, so it is important to get serious about the conversation. If the SETI project discovered that a superhuman alien race would arrive on Earth in the next 10 to 50 years, what would we do to prepare? Well, that superhuman race will arrive in the form of AGI, and it will be a race of our own making.

The key is that open source can facilitate a rational conversation. We can't ban AGI outright because that would simply shift development to countries and organizations that wouldn't recognize the ban. We can't accept an AGI free-for-all because, undoubtedly, there will be miscreants willing to harness AGI for calamitous purposes.

So I believe we should look to open source strategies for AGI that can encompass numerous approaches and strive to:

  • Make the development public
  • Get the AI/AGI community on the same page about limiting AGI risks
  • Let everyone know the status of the projects
  • Get more people to recognize how soon AGI might emerge
  • Have a reasoned discussion
  • Build safeguards into the code (see the hypothetical sketch below)
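
What might "safeguards in the code" look like? Here is a purely hypothetical sketch, with invented names (Action, SafeguardGate), of one idea: every action an agent proposes passes through a policy gate that anyone can audit. No real AGI framework is implied.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    irreversible: bool = False        # e.g., actions that cannot be undone

@dataclass
class SafeguardGate:
    """Hypothetical policy gate: open code means auditable policy."""
    held: list = field(default_factory=list)

    def submit(self, action):
        # Example policy: irreversible actions require human review
        # instead of autonomous execution.
        if action.irreversible:
            self.held.append(action)  # audit trail for reviewers
            return f"held for human review: {action.name}"
        return f"executed: {action.name}"

gate = SafeguardGate()
print(gate.submit(Action("summarize report")))
print(gate.submit(Action("modify own goals", irreversible=True)))
```

The value of open source here is that such a gate, and any change to it, is visible to everyone rather than hidden behind a proprietary wall.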

An open development process is the only chance we have of achieving these objectives.

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer with many years of industry computer experience, including pioneering work in AI. His technical experience includes the creation of two unique artificial intelligence systems and software for successful neurological test equipment.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.