The Intelligence Trajectory Political Matrix

In my discussions around posthumanism and AGI, I’ve noticed something curious:

Almost all discussions in the intergovernmental world, tech media, and social media center on specific policy decisions – not on the end-goals of the speakers or writers.

This seems exactly backwards. As far as I can tell, if I know someone’s end-game, I can often anticipate their policies.

Someone who believes that AI should never be more than a tool to serve humanity (even a thousand years from now) will obviously advocate for policy decisions that steer toward the situation they deem desirable.

Someone whose ideal is a vision of posthuman intelligences populating the galaxy will naturally have different near-term policy recommendations.

For this reason, I’ve created the Intelligence Trajectory Political Matrix (ITPM).

Intelligence Trajectory Political Matrix

This isn’t intended to be a complete vision of all possible futures, but rather a kind of “game board” where people can show the direction (trajectory) in which they’d like to see life progress, given the risks and opportunities involved.
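To make the “game board” idea concrete, here is a minimal sketch of a position on the matrix as a pair of coordinates. The axis names and numeric ranges are my own illustrative assumptions, not an official specification of the ITPM: the trajectory endpoints come from the tool-vs-posthuman examples above, and the control-vs-freedom dimension is the one discussed in the second video below.

```python
from dataclasses import dataclass

@dataclass
class ITPMPosition:
    """A point on the Intelligence Trajectory Political Matrix.

    Axis names and ranges are illustrative assumptions, not the
    official ITPM definition:
    - trajectory: -1.0 = AI as an eternal tool serving humanity,
                  +1.0 = posthuman intelligences populating the galaxy
    - governance: -1.0 = strict top-down control,
                  +1.0 = maximal freedom (the Accelerationist pole)
    """
    trajectory: float
    governance: float

# Rough placements of the two end-games described above:
tool_forever_advocate = ITPMPosition(trajectory=-0.9, governance=-0.5)
posthuman_accelerationist = ITPMPosition(trajectory=0.9, governance=0.8)
```

This is just one way of reading the board; the videos below walk through the matrix itself and how to find where you land on it.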

In the first video below, I explain the matrix and demonstrate how to find where you land on it:

In this second video, I explain some of the core motives of the Accelerationist camp as they pertain to control vs. freedom:

I plan to add more to this article over time, but for now it will stand as a placeholder for future content and a reference point for other articles and discussions.

If you have any suggestions for improving the matrix, send me a DM on Twitter or LinkedIn.


Header image credit: Pixabay