
Machines to lead people, and people to lead machines
This article was first published in the July edition of People Matters Perspectives.
AI is getting better at being human than humans themselves are. Generative AI tools write better emails than the average office worker; automated research platforms beat Wikipedia editors for accuracy and unbiased collation; even doctors are starting to learn their bedside manner from AI-powered training models. And now we have the promise of agentic AI, which is projected not to need instruction from users at all: instead, the virtual assistants will be the ones giving users unprompted instructions.
Where does that leave people?
In the 1976 fantasy novel “Don’t Bite the Sun” by the late British author Tanith Lee, the protagonist expresses a wish to create art and is presented with a block of material, a tool, and a set of highly simplified step-by-step instructions for producing a design that the city management AI has selected for her. The AI has defined art, identified and solved every problem that might arise in creating a piece of it, translated its solution into something a human can execute, and handed definition, solution, and tools to the human in a neat package. All the human needs to do is follow instructions; in fact, no other option is considered acceptable.
The advance of contemporary technology is quickly bringing us to that point. Despite the risks of hallucination and confirmation bias in large commercial LLMs, it is already possible to give today’s models a vague request like “I want to create art” and get back a complete package of design and instructions. It may or may not be what the user actually had in mind, and it may or may not actually work, but it takes the entire cognitive effort of research, analysis, and decision-making out of the user’s hands.
Ironically, this is the exact process of simplification and structuring that users in the past would go through in order to make a computer program do something. In the last half-century, we have gone from giving computing devices complex instructions, to taking simple instructions from computing devices. And with the emergence of agentic AI, we may not even need to request those instructions any more.
The spectre of metacognitive laziness
Over the last couple of years, neuroscientists in various parts of the world have raised concerns that intensive or habitual use of AI has physically measurable effects on the functioning of the human brain, in much the same way that studies have already shown social media to affect cognitive function.
The risk this presents is first and foremost a personal, individual one. Over the long term and on a macro scale, an entire population that has outsourced most of its metacognitive function to AI may well form a society resembling some 1970s-80s science-fiction dystopia. That is a challenge for policymakers. What people are concerned about here and now is how companies’ use of AI will affect their chances of employment; what they should also be concerned about is how their own, personal use of AI is affecting their standing in the job market.
Conversations about the economic impact of AI already take for granted that jobs will be replaced, people will be displaced, and skill requirements will be altered. Anecdotes from younger jobseekers suggest that it is becoming increasingly difficult to land entry-level roles in multiple white-collar industries; government initiatives indicate that policymakers are increasingly concerned about the employment longevity of older workers.
These same conversations reiterate that workers absolutely must upskill and reskill, and must pursue ‘real-life’ skills such as communication, collaboration, and critical thinking: the ability to evaluate AI’s output for suitability and to adjust it as needed. In other words, to manage AI rather than be spoon-fed by it.
As of now, there are no clearly substantiated statistics on how many people in the workforce, let alone how many in each demographic, have that capability or are willing to acquire it. Anecdotal claims suggest that the incidence of metacognitive laziness among students and entry-level workers using AI is high: that a noticeable proportion of users rely so heavily on the AI’s output that they neither fact-check it nor attempt to improve it. While the anecdotes seldom address what happens to these users, it might be safe to assume that they either upskill or do not keep their jobs for long.
Take the lead or be led by the nose
It is temptingly easy to let AI take the lead, especially when workloads are high, staffing is low, and the technology is designed to be absolutely agreeable to human users, complete with taking initiative and offering solutions before you even realise there is a problem. But again, the question is: where does that leave people?
In the last few years it has become common to say that the competition is not AI but people who know how to use AI. This is no longer completely accurate. The baseline difficulty of using AI is rapidly becoming so low that anyone will be able to use the technology. Indeed, its ubiquity may mean that some people know how to use it by default, because there is no other option for them.
At this point in time, it would be more accurate to say that the competition is people who can use AI better than you. And that means people who manage AI rather than letting themselves be managed by it; people who add value that AI alone cannot.
For tomorrow’s entry-level employees, this may well define the career progression of the future: to start out being led and managed by AI, then to manage themselves and their own work, and eventually to manage AI. Some, but not all, may advance to managing and leading people. And through all these stages, one differentiating factor will remain: the ability to do better than AI, to be more humanly capable than AI is.