Jeff Hawkins – Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)

Before the iPhone there was the Palm Pilot, and before OpenAI there was (and still is) Numenta – both were founded by Jeff Hawkins.

Jeff joins us on The Trajectory for episode 5 of the Worthy Successor series.

He founded Palm Computing in 1992, creating the Palm Pilot a few years later. Since 2005 Jeff has been working on brain-inspired AI at Numenta, sharing some of his lessons learned and visions of the future in his 2021 book A Thousand Brains: A New Theory of Intelligence.

In this episode Jeff shares his perspective on why the goal of posthuman life should be to “acquire knowledge,” why AGI will almost certainly be conscious, and why he believes it’s too early for governments to think about AGI regulation.

I hope you enjoy this conversation with Jeff Hawkins:

Below, we’ll explore the core takeaways from the interview with Jeff, including his list of Worthy Successor criteria and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.

Jeff Hawkins’ Worthy Successor Criteria

1. Its main goal would be the accumulation of knowledge.

Knowledge could be valuable to any future intelligence – and should therefore be the focus of an AGI’s activity.

2. It would be conscious.

Jeff believes that consciousness would emerge automatically in a machine built on the same “principles” as the human mind. He doesn’t believe we’d have to do anything in particular to “make” AGI conscious.

3. It would have to be able to replicate itself.

Jeff believes that no single AGI will last forever, so it would need to find ways to edit itself or create new versions of itself – through some mechanism presumably different from how biology does it today.

Regulation / Innovation Considerations

1. Fund the next paradigm

Significant research investments should be made in new paradigms, not merely in LLMs.

2. We should focus on practical AI governance now

We should extend existing regulation around libel and impersonation to cover AI applications, and develop new guidelines for protecting intellectual property.

3. We should NOT be trying to regulate for AGI

It’s far too early to tell where the AGI risks lie; we need to get closer to AGI before making those kinds of decisions.

Follow the Trajectory