Artificial Intelligence Job Loss is a Comparatively Minor Concern

How can you keep artificial intelligence from taking over your job?

In my latest TEDx talk, I explore what I’ve learned (from hundreds of interviews) about job security in the era of AI – but I also lay out why I think that “job loss” is a somewhat trivial concern in the big picture of creating life beyond humanity (stay with me if this sounds far out right now).

Job loss and job security are among the first “AI and ethics”-related questions that I’m asked when I’m in front of business audiences, but their questions rarely extend to the much greater, slightly longer-term concerns that AI poses for our species.

Oddly enough, the way we lose our significance in work is eerily similar to how I suspect we might lose our significance on earth, and in this short article I’ll aim to explain what we might do in order to:

a) Stay employed, and

b) Stay alive

Better yet, I’ll attempt to explain this dynamic with a single concept, which I’ll refer to as “Context”.

If you’d like to watch the full 11-minute TEDx below, feel free. Otherwise, read on for the “boiled down” version of the main point.

Humans and AI, the Future of Work

I’ll begin with a definition of what I consider to be the largest factor in job security in the coming age of technological disruption.

Explaining “Context”

Though there are many factors that affect the “automate-ability” of a job (I address 3 such ideas in the full TEDx talk above), there is one that stands above the rest in importance: “Context.”

Though the term could have many meanings, for the sake of this short article we’ll hold that “context” refers to the breadth of possibilities and considerations taken into account in a specific role. This isn’t simply a division between what are often called “white collar” or “blue collar” positions, as there are both narrow and broad context roles in either of those categories.

Generally, roles with narrow context are at risk for automation, and jobs with broader context are more secure.

Roles with narrow context involve:

  • A consistent kind of “input”
  • A consistent kind of “output”
  • A relatively narrow range of tasks

Blue collar example:

  • Limited context: A welder on a production line receives a certain range of input units, performs a reasonably predictable set of tasks, and sends the outputs down the line.

White collar example:

  • Limited context: A financial auditor receives a certain range of financial and personal documents, handles them in a somewhat limited range of ways, and is responsible for the “output” of a completed audit.

Generally, roles with broad context are much harder to automate, and often include inputs, experience, and intuition beyond what we can presently program into software or train in a neural net.

Roles with broad context involve:

  • A wide – often unpredictable – range of “inputs”
  • A wide – often unpredictable – range of “outputs”
  • A relatively wide range of tasks

Blue collar example:

  • Broad context: A plumber receives jobs that are entirely different from one another, requiring knowledge of boilers, of different kinds of piping material, and of diagnostics for a wide range of plumbing issues (i.e. much more “context” from the messy real world).

White collar example:

  • Broad context: A corporate procurement leader might also look at a range of financial data, but this person must also understand suppliers, supplier relationships, and the impact of purchasing decisions on financial reports (i.e. much more “context” from the messy real world).

There are almost certainly instances where a company may lay off “broader context” roles before “narrow context” ones – retaining lower-cost jobs that are critical for output and getting by with less management from the top. This isn’t to say that all layoffs due to AI and automation will be of “narrow context” work first (though I suspect this will usually be the case).

Rather, what I’m arguing is that workers with “broad context” jobs will tend to find work more readily and steadily amidst AI disruption because their skills are generally more suited to higher-level problem-solving. By no means do I believe this to be “a good thing” (or necessarily a “bad thing”); I’m simply stating that I believe it to be the case.*

Narrow Context Work: Comfort

The CEO of a company doesn’t simply juggle the demands of a specific business function; the CEO juggles the demands and dynamics of the market, of his/her shareholders, suppliers, finances, and more. The CEO wrestles with the messiness of the world in order to define processes, deliver value, manage resources, and grow a company.

Even non-founder CEOs are (generally) paid handsomely for their work – in large part because of the level of difficulty and adaptability required to deal with such a broad context.

A programmer, a front desk clerk, a call center rep, or a bookkeeper all deal in a much narrower total range of contextual factors, and in many cases this is preferable. Dealing with massive amounts of context usually means higher pay, but it also generally implies more stressors, more ambiguity, and a greater need to adapt to the ever-changing, ever-nuanced market (the messy context of the real world).

It’s somewhat relieving to have a narrow domain in which to function – a limited “game” to play which, when completed, signals our success (“Well… I’m done with these TPS reports, time to head home!”).

Of course, the market (and nature) never sleeps, and any sense of “done” is mistaken – no business is ever “done”, and no business is guaranteed to survive the fiscal year. Competitors are closing in, costs and revenues are shifting, market forces are shifting, the needs of the customer and of the employees are changing – all is in flux, and eventual dissolution is the end of essentially every company, large and small.

Staying away from the harshness of that incessant Darwinian business struggle (which mirrors the same struggle in nature) is often what narrow context roles allow us to do.

The benefits of narrow context:

  • Less to worry about, less burdensome mental work, less strenuous mental engagement required at work
  • Potentially more time for leisure (some low-paying narrow context roles force people to take multiple jobs and leave them significantly less leisure, but salaries being equal, a narrow context role is much more likely to have a firm “clock out” than a broad context role)

Narrow Context Work: Danger

Being “left behind” at work doesn’t necessarily imply a low work ethic, and it doesn’t have to imply a relatively low rung on the corporate hierarchy. Plenty of high-paying jobs are reasonably narrow in context, and those roles are often filled by bright and capable people.

But when skills aren’t in demand, or roles no longer provide an important value to the employer or customer – there’s no more role. If you make vinyl records, you can be the hardest-working guy or gal in town and it doesn’t matter much – the market has passed you by and value is now delivered in a new way – not your way.

An employee who stays ahead of the technologies in their field, who makes sense of trends in the market, and who is resourceful enough to apply themselves profitably to the changing, messy forces of the world… this employee (whether a salesperson or social media marketer or otherwise) is likely to find a place in the future of the market. This incessant diligence is the closest thing to “job security” that exists in today’s market.

If there is anything that I learned from re-listening to dozens and dozens of our interviews with top AI executives and researchers, it was this:

“Unless we stay ahead of the underlying Context of our market, we can’t expect our interests to be represented in the future of that market.”

An employee at any level who ardently figures out how to be useful given the changes in their business and the needs of their customer is an employee who can constantly grasp and make sense of new context – and this resourcefulness and vigilance is the best job security factor on earth.

The risks of narrow context:

  • Losing one’s job
  • Losing one’s purpose and utility in the broader job market

We’ll now move on to how the idea of “hiding from context” applies not just to relevance in the working world – but to human relevance on planet earth.

Humans and AI, the Future of Life

If we limit our ethical concerns about AI to “job security”, we’re severely neglecting the broader context in which artificial intelligence and other “transhuman transition” technologies will unfold in the two or three decades ahead.

  • Artificial intelligence will preempt our actions and goals at work and in life, augmenting everyday life with a new layer of convenience, speed, and personalization
  • Virtual reality will be a “place” – eventually becoming more compelling and useful than the real world when it comes to learning, entertainment, and working
  • Brain-machine interface technologies will allow us to volitionally change our emotions and memories

The Consequences of Ignoring the Broader AI Context

Is it possible that “job security” is the narrow little avenue of our concerns about AI, preventing us from staying relevant in the broader context of life on earth?

Like narrow context at work, narrow context around AI’s ethical concerns is certainly more comfortable. Thinking about the borderline inevitability of brain-machine interface technologies entering the minds of first-world humans isn’t comfortable. Considering our potentially nominal relevance in a world full of artificial intelligence systems and enhanced transhumans isn’t fun to think about (for most of us).

But the consequences of a narrow focus are the same. To extend our “Context” quote about jobs to our place on Earth:

“Unless we stay ahead of the underlying Context of the changes we’re about to undergo as a species, we can’t expect our interests to be represented in the future we’ll be living in.”

The worst-case scenario in the working world would be waking up to a world where we aren’t relevant in the marketplace, where the market has gone one way and we’re left somewhat clueless.

The worst-case scenario in the broader picture of AI ethics is that we wake up in a world where the direction of our evolution is unguided and erratic – or worse – outright dangerous.

Job loss matters – and indeed we should think hard about how to offset the potentially negative impacts of AI disruption on people’s lives. At the same time – we’re building a future of more than just new jobs – and neglecting the context of transhumanism and the future of intelligence itself (extending beyond present humanity) would be – I think – the gravest error we could make.

To be blunt:

We Either Direct the Future, or We Are Crushed by It

There seems to be some kind of Hegelian force crushing its way through industries, and we either stay ahead of it (understand what’s valuable, and do our best to stay relevant), or we get run over by it.

There seems to be some kind of Hegelian force crushing its way through life itself, and we either direct it, or we are trampled by it. We either inherit a world that is the result of random forces of competition, or we devote some of our volitional force to molding what that future might be.

I don’t think this “force” is some kind of outside volition (a deity or destiny), but I am suggesting that the creative/destructive churn that seems to have continued since the beginning of the universe isn’t something we’re likely to escape. The history of species is one of creative destruction – of transmuting into entirely different forms than before (business and empire work exactly the same way) – and the future is no different.

The galaxy is unlikely to be populated by little hominids like ourselves 1,000 years from now (given the inevitability of transhumanism, developments in AI, and more). Just as Standard Oil is no longer the largest company in the world, just as Rome is gone, just as the Tyrannosaurus rex is no longer land’s apex predator.

As is hammered home a thousand times by Lucretius in On the Nature of Things:

“One thing gives rise to another, incessantly.”

“Life’s given no one outright; all must borrow.”

I especially like:

“We need those atoms for our progeny.”

Given a long enough time horizon, it’s somewhat inevitable that humans – at least as they are today (or maybe altogether) – will cease to exist.

Below is a graphic inspired by Oxford Professor Nick Bostrom’s essay The Future of Humanity, visually representing the perilous path of our species. Having arrived at Homo sapiens, we will either:

(a) enhance ourselves into a different species entirely

(b) remain in the thin band of space that is the current human experience (not developing too much, and not destroying ourselves), or

(c) somehow go extinct like 99.9% of all species in history

[Graphic: extinction risk and the human future]

In the words of Bostrom:

“The longer the time scale considered, the lower the probability that humanity’s level of technological development will remain confined within the interval defined at the lower end by whatever technological capability is necessary for survival and at the upper end by technological maturity.”
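To make Bostrom’s point concrete with a toy model (my own illustration, not his math): suppose that in any given century there is some fixed probability p > 0 that we leave the “human band” – upward into technological maturity, or downward into extinction. The probability of remaining confined to the band for n consecutive centuries is then:

P(confined for n centuries) = (1 − p)^n

which shrinks toward zero as n grows, no matter how small p is. Remaining exactly as we are, indefinitely, is the one outcome that long time horizons all but rule out.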

If remaining Homo sapiens forever is essentially impossible (which is almost certainly the case), then avoiding self-destruction and moving on to higher forms is a transition we’ll want to make consciously as a species. We may see any of the following scenarios occur within our own lifetimes:

  • We might transform into transhuman entities, still human-like in form but vastly expanded in our physical and mental capacities
  • We might merge into a kind of meta-sentience and meta-intelligence via brain-machine interface
  • We might upload our “selves” into digital substrates (à la Black Mirror’s San Junipero), living out a near-eternal experience in a virtual and digital world that we design
  • We might be driven to extinction by strong AI that has a better use for resources than entertaining the fancies of comparatively pea-brained apes like ourselves (treating us as we treat insects or rodents)

Regardless of which path we might take – it seems logical to guide the process by our own best judgment, to discern a future where sentient life can (hopefully) flourish more, expand more, and suffer less. That would seem to be “the point” (and the entire focus of my third TEDx, titled Can AI Make The World a Better Place?).

There is a default path through Bostrom’s minefield of survival: carry on with business (read: organic life) as usual. Fend for our individual survival and for that of our children, and let context roll over us in waves, as every species before us has done.

Will Lucretius’ dumb matter and void simply splash its way into an arbitrary future for life – or will we aim to take the wheel in the meta-transition (i.e. the trajectory of sentience itself) ahead?

I do think that the default path won’t be ours. The general improvements to the human condition – from democracy to modern medicine – have come from our own design. The universe is almost certainly indifferent to our fate – as it has been with every other species or planet in the universe.

The responsibility will continue to be ours to light a path ahead for our species that is somewhat better than mere “surviving” (or worse: narrowly focusing on “keeping our job”) – a definition of “thriving” that suits our needs and the transhuman future we face.

 

* Note: The concept of “context” is explained very briefly in this article. The concept has much more depth, and there are numerous exceptions to explore – feel free to email me. I drew upon dozens of interviews from my AI podcast at Emerj, including conversations with Kevin LaGrandeur, Marshall Brain, Martin Ford, and others.