Although AI has been used for years, its recent ‘democratization’ via tools like ChatGPT has spurred debate about its impact on the job market, and the spectrum of opinions is very wide. Back in 2020, the World Economic Forum published The Future of Jobs Report 2020, in which it “estimated that by 2025, 85 million jobs may be displaced by a shift in the division of labour between humans and machines, while 97 million new roles may emerge that are more adapted to the new division of labour between humans, machines and algorithms”. This is a net job creation of +12 million [1]. Just three years later, the same World Economic Forum published The Future of Jobs Report 2023, in which it suggested that “69 million jobs will be created and 83 million jobs destroyed, leading to a contraction of global labour markets of 14 million jobs in the next five years at the present rate of change” [2]. Although this shift is not (solely) driven by AI, AI is expected to be a large driver of it. More recently, in a March 2023 article Forbes referred to a Goldman Sachs report predicting that “300 Million Jobs Will Be Lost Or Degraded By Artificial Intelligence”. According to the same Forbes article, “Office administrative support, legal, architecture and engineering, business and financial operations, management, sales, healthcare and art and design are some sectors that will be impacted by automation”. On the positive side, the Goldman Sachs report estimated that “AI could eventually increase annual global GDP by 7%”. In short: few certainties and high variability.
My original intent when writing this article was to foresee, maybe guess, which jobs would be the last to be replaced by AI, and hence the ‘safest’ from automation. But as I wrote down and checked my own hypotheses, I could pretty much find counter-examples for each of them. So – and this is a spoiler – this article ultimately concludes that I simply have no idea. If you expect a clear and definite answer, do yourself a favor, spare yourself ten minutes of reading, and move on to something else. Similarly, if you are looking for yet another apocalyptic article in the style of ‘AI will replace us all in our jobs’, please stop reading as well. However, if you also ask yourself which jobs are safe(r) from automation, and are interested in reading some hypotheses and counterexamples, then keep going. I would love it if you left comments with your own thoughts or critiques of those I’ve shared.
And now, let’s get to it.
1. The power of data (or absence thereof)
The Industrial Revolution and the automation that emerged from it mostly resulted in the partial substitution of so-called blue-collar jobs: agriculture, manufacturing, construction, mining, or maintenance-related jobs [3]. Since the beginning of time, and even more so since the invention of the steam engine, and still today, efforts have gone into creating machines that outperform human physical capabilities like speed, strength, or the ability to perform repetitive actions with sufficient precision. Machines are faster to plow, dig, saw, or shape metals, but they also perform precision activities like cutting, sewing, welding, or engraving.
In the 20th century, computers started performing not physical but intellectual tasks such as complex calculations, organization of tasks, retrieval of information, or even entertainment. For a long time, though, those activities were subject to pre-programmed rules. However complex or intricate, the instructions were defined as code, and computers would follow and execute them accordingly. Given a number of inputs (data) and rules (code), the computer would generate an output, a result, an action.
AI changes that paradigm: it no longer outperforms only those human physical or intellectual capabilities that can be codified or programmed, but also those that can’t, such as abstract knowledge, inference, and creative processes. In doing so, AI no longer relies on inputs and rules to produce results, but instead on inputs and results to create rules that can then be applied to other, even different, scenarios. If we were to describe it in human-like terms, AI is no longer an obedient person who performs actions based on orders and rules, but instead a reasoning person who can infer rules from the observation of external events and results – pretty much like we all learn in life. If conventional programming was based on the equation “Data + Rules = Results”, AI is based on the equation “Data + Results = Rules”.
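To make that inversion a bit more tangible, here is a minimal, purely illustrative sketch in Python. The temperature-conversion example, the variable names, and the numbers are my own assumptions for illustration; they don’t come from any specific AI system. The first half hard-codes the rule; the second half infers the same rule from examples of data and results.

```python
import numpy as np

# Conventional programming: "Data + Rules = Results".
# The rule (Celsius-to-Fahrenheit conversion) is written explicitly by a programmer.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: "Data + Results = Rules".
# We only provide example inputs (data) and outputs (results);
# a least-squares fit infers the rule, i.e., the coefficients a and b.
celsius_samples = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # data
fahrenheit_samples = np.array([32.0, 50.0, 68.0, 86.0, 104.0])   # observed results

a, b = np.polyfit(celsius_samples, fahrenheit_samples, deg=1)    # learned rule: F ≈ a * C + b
print(f"learned rule: F ≈ {a:.2f} * C + {b:.2f}")                # ≈ 1.80 and 32.00

# Both the hand-written rule and the learned rule handle a new, unseen scenario.
print(to_fahrenheit(25.0))   # 77.0
print(a * 25.0 + b)          # ≈ 77.0
```

Real AI systems obviously learn far richer ‘rules’ than a straight line, but the inversion of the equation is the same: give the machine examples of inputs and outcomes, and it derives the rule for you.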
But… what if there were no data to feed AI? Based on the above explanation, AI would be unable to thrive. If we extrapolated that same inference to the job market, we could think that jobs that don’t rely on data, or for which there is no data readily available, should be safer from AI-driven substitution. That is, for example, research and explorative jobs where by definition there is no prior data to rely on, creative jobs, or what I’ll call ‘six-sigma’ jobs, whose nature is to deal with extremely rare events that have little or no precedent (like a business leader who needs to cope with a severe, once-in-a-lifetime event, or an entrepreneur trying to prove the viability of a totally new idea).
However, the truth is that all the jobs mentioned above do rely on some form of data – even if just hints of it. Research projects are often born from relationships between observations in one or several disciplines. Creative jobs also unavoidably take some inspiration from the past, even if only to deconstruct or adapt it. And leaders pull from past, analogous experiences to try and navigate new circumstances. If you prompt ChatGPT, or DALL·E 2, or any other generative AI tool to create a story or a picture for you, it will likely surprise you with an output you did not quite expect. If you ask it how to cope with a once-in-a-lifetime event, it will probably suggest a course of action that is at least as good as the recommendation of an average person. Maybe not that of a highly smart or resourceful expert, but good enough to outperform many of us. Which leads us to another hypothesis of AI-substitutable roles.
2. The dangers of mediocrity
McKinsey & Co. recently published a report named The economic potential of generative AI. The following paragraph particularly caught my eye:
“Research found that at one company with 5,000 customer service agents, the application of generative AI increased issue resolution by 14 percent an hour and reduced the time spent handling an issue by 9 percent. […]. Crucially, productivity and quality of service improved most among less-experienced agents, while the AI assistant did not increase—and sometimes decreased—the productivity and quality metrics of more highly skilled agents. This is because AI assistance helped less-experienced agents communicate using techniques similar to those of their higher-skilled counterparts.”
Imagine you’re the employer in that customer service center. Thanks to the use of AI, your less skilled and likely lower-cost agents can become as productive as your most skilled and also higher-cost ones. What would you do? The simplest answer might be to get rid of all but your less skilled employees, as they have now become highly productive. A more nuanced, and likely more correct, answer would be to also keep your most skilled employees, to be able to deal with edge cases that AI might not be able to cope with.
But… what about those employees in the middle? They’re probably not distinctive enough to make a difference without AI, and are also more costly than their junior peers, who have suddenly become just as productive thanks to AI. That sounds like a tough spot to be in, right? Fortunately for them, there are many reasons why an employer would want to keep those middle-range employees, too. For example, building a pipeline of talent who can eventually become highly skilled (as current top employees may well rotate or leave at some point), repurposing them to another role where their skills make a greater difference, or simply maintaining a minimum workforce size to deliver on business commitments. But despite all these reasons, if you’re in that middle range, you have just become more replaceable than you were before.
It is likely that AI will boost the productivity of the less skilled workforce and make it hard for ‘mid-skilled’ workers who can’t keep their edge in the new context to succeed, while highly skilled ones may be spared. You don’t want to be stuck in the middle.
3. De-scale
When I think about job automation, whether physical or intellectual, I tend to think about machines that are capable of outperforming humans because they can do the same tasks faster, more precisely, or with greater strength. In other words, at scale. Earlier in this article I alluded to such examples: basic ones like plowing, highly precise manual tasks like sewing, or complex intellectual ones like calculations or financial modelling. For fun, I recommend you watch this mesmerizing 14-second video of a machine sorting tomatoes, which shows how human capabilities can be dwarfed by the power of machines.
I sometimes convince myself that humans have a higher chance of outperforming machines and AI in high-variability jobs that require some degree of physical dexterity. As a very basic example, think of a waiter on a busy terrace. It is difficult to imagine a robot competing with the complexity and variability of the tasks involved: taking orders; serving and cleaning up table after table; dodging an ever-changing set-up of tables, chairs, and patrons; and adapting to the needs of each one of them, whether a quick snack or a slower, more personalized service. Think also about the cook in that restaurant’s busy kitchen, having to deal with a vast range of menu combinations [4], often customized to a customer’s needs, and at almost random timings that nevertheless need to be synchronized by guest table. Or, to refer to a different industry, think of construction, where machines have long helped increase productivity but are not close to substituting for basic manual jobs, given the near-infinite variability of the outputs (whether houses, civil works, or any other type of building) and the physical dexterity needed to produce them. But despite my conviction, there are also counterexamples showing that AI can replace jobs exposed to high variability and requiring physical dexterity: autonomous driving is one such exceptional use case, even if still in its infancy.
What makes some jobs replaceable, and some not, then? In my opinion, the differential value of human jobs lies in their uniqueness, not in their scale. The higher the variability, and the lower the scale, the greater the personalization a human can add versus a machine.
4. Feelings
A few days ago, I came across a short video of Spanish comedian Ignatius Farray, in which he claimed that “the last frontier beyond which we could really claim the concept of ‘Artificial Intelligence’ is when AI will feel shame, [as] shame is the purest human feeling”. I’d argue we can probably expand that to other human feelings: pity, envy, or greed, for example.
Machines have proven very capable of triggering or boosting such primary emotions in humans, but to my knowledge they have not demonstrated them, nor even credibly faked them. As my brother says, this is because we have a misconception of what AI is, and we call ‘intelligence’ what in reality are advanced statistical models – and a statistical model, no matter how advanced, will struggle to credibly express (not imitate!) feelings. Thanks to those feelings, I am convinced that humans can outperform machines in building relationships. I therefore believe that jobs that depend highly on creating, understanding, or managing relationships and emotions will be somewhat safer from full replacement by machines.
Think about management jobs, for instance. I’m convinced that machines can do a great job of organizing the work of a team, likely even better than a human can. Machine algorithms can probably set priorities based on a number of well-quantified economic (or financial, or operational) goals and metrics, assign resources based on team members’ skills, prepare narratives to communicate, and of course track performance. But a core aspect of any management role is the management of feelings, especially those of team members, and not just the management of tasks. Understanding when a team member is going through a tough time and needs some flexibility or relief, when a person might be ready to take on a larger role (and whether he/she wants it or not!), or when a person needs some extra support to grow: these are all emotional aspects of management that machines will struggle to replace.
What about sales and commercial roles, which often rely heavily on one’s ability to understand and empathize with others’ feelings? I’m rather torn on this one. As Reid Hoffman explains in his book Impromptu: Amplifying Our Humanity Through AI, “AI will make human BDRs [i.e., Business Development Representatives] more effective by providing them with personalized information on the prospects they call – but this increased productivity will likely decrease employment. […] I believe that the future will see the sales profession shrink as a whole”. In many ways, machines have already gained their place in sales: for decades now, e-commerce sites have allowed brands to reach and sell items to large audiences that would be difficult to reach through a traditional salesforce. More broadly, machines can outperform humans in many steps of the sales process, e.g., scouting, targeting audiences, pricing, or building tailored proposals. I could even imagine two parties in a negotiation relying on machines (and not humans!) to close a deal: machines might be able to identify a Zone of Possible Agreement (ZOPA) where humans might fail because of a load of negative emotions like mistrust, envy, greed, or fear. But for that exact same reason I believe humans still play, and will always play, a crucial role in sales. In the same way that negative emotions can make a deal fail, positive emotions can save or generate deals that are perfectly logical but would otherwise be dead. The best salespeople excel at conveying trust, collaboration, generosity, happiness, or hope, all of which are deal facilitators. Who hasn’t bought something because the salesperson called them by their name, had a kind word, or expressed empathy towards them as a client or towards someone who was with them?
Because of this, I am inclined to believe that machines (and among them, AI) will not replace jobs that rely on feelings. I mentioned earlier that the differential value of human jobs will be (or actually is) in their uniqueness. What’s more unique, and more human, than feelings?
5. Intent, curiosity, and ambition
Could AI ever desire to circumnavigate the world? Or to go to the moon, or to Mars? Or to explore the origin of life, or understand what happens after death (if anything)?
Intent, curiosity, and ambition seem to be human traits that a machine – even an intelligent one – could never replicate. The desire to explore something new, to try something for the first time, to experiment. For sure, machines (intelligent or not) can help in such physical and intellectual quests; they can allow humans to be faster in their research, safer in their efforts, or even to see things that might otherwise go unnoticed. But that initial spark, the direction, is human.
Certainly, AI (and even more so Generative AI) can take actions that look intentional and self-directed in our day-to-day lives. If you hold a conversation with ChatGPT, or if you rely on an AI-generated recommendation for your next book, movie, or Instagram feed, you could argue that a machine is directing you rather than taking direction from you. But is that really the case? I’d argue not. While ML can be applied to many problems, it is not to be used (among others) for problems where AI recommendations “require full interpretability”. This somehow suggests that, in the earlier simple examples, AI-powered recommendations might be perfectly accurate, yet somewhat unexplainable. Now, think about your school days, and imagine for a second that you had always provided the right answers to your teacher’s questions, but were equally unable to explain why you got them right. That may not have gotten you too far, right? Areas where the why, the intent, the ambition, or the curiosity are important are, and I believe will remain for at least some time, fertile territory for humans to outperform AI.
Or will they? Again as a counterexample, let’s look at what happened in 2016, in a series of Go games played between Lee Sedol, one of the world’s best Go players, and Google DeepMind’s AlphaGo. This is one of the first cases where AI seemed to demonstrate intentionality the way a human would. As described in this article, with the 37th move of the second game, AlphaGo took its opponent and spectators by surprise by making a move that many considered simply a mistake. I don’t play Go, so you’ll forgive me if I can’t explain it in simple terms. But the important thing is that this one-in-ten-thousand move eventually proved key for AlphaGo to win the game, and according to the lead researcher on the AlphaGo project, David Silver, the machine had somehow envisioned the move and chosen to execute it. Does that mean the machine demonstrated an actual intent with that move? Could be…
6. What will not change?
If you’ve read anything about AI, by now you have probably come across many articles anticipating or guessing the deep transformations that AI will cause in our lives – personal or professional. I have also alluded to a few such changes in the paragraphs above. But, paraphrasing Jeff Bezos, it is even more important to ask ourselves what is not going to change as a result of the growth of AI.
We, as humans, will still be moved by feelings, noble or not. We’ll remain social animals, even if that may not be obvious when we spend hours in near-isolation, consuming information on our smartphones, tablets, or laptops. We’ll eat, get dirty, and get sick, and will need a way of meeting those (and other) physical needs. We’ll need education, skilling, and re-skilling throughout our lives, to enjoy a minimum – or a maximum – of autonomy and independence. We’ll need to be entertained, protected, emotionally moved, and loved. We’ll need to be taken care of when we’re young, and when we’re old. We’ll be, after all, interdependent.
Learn to cater to those needs in a differentiated, human manner, and you’ll likely outperform AI.
[3] Definition of blue-collar job taken from https://www.investopedia.com/articles/wealth-management/120215/blue-collar-vs-white-collar-different-social-classes.asp
[4] Just realize that a menu with 10 starters, 10 main courses, and 10 desserts allows for 10 × 10 × 10 = 1,000 different combinations!