Leading Authentically in the Age of AI Distortions
How can leaders navigate the tension between human reality and AI innovation?
Introduction
Managing AI risks in leadership and HR has become a fine balancing act between psychological understanding and technical implementation. As AI reshapes corporate identity and talent acquisition, leaders face a critical tension.
On one hand, there is the ever-present pressure to modernise and reinvent the organisation. On the other, there is the human reality of the workplace, where a combination of jargon and hallucinating algorithms creates confusion. To lead effectively through the AI revolution, executives must chase more than efficiency gains and address the structural and psychological impacts of these technologies.
Why Jargon Backfires: The Underlying Psychology
As executives race to keep up with advances in AI, it is not just services that are being rebranded but also people. A prime example is Accenture’s recent reorganisation of its workforce, in which it labelled its nearly 800,000 employees “reinventors”. The rebranding was intended to place Accenture at the forefront of AI, yet it can have unintended psychological consequences.
André Spicer, executive dean and professor of organisation at the Bayes Business School, has criticised the move, arguing that jargon is used in the consulting world to signal expertise without investing in the necessary competencies and knowledge, in effect making otherwise unassuming jobs and processes seem novel.
Recent research by Bullock and Bisbey supports this view, showing that business jargon acts as a barrier to effective communication. It obscures job content and annoys staff, but most importantly, it lowers team self-efficacy by reducing the ease with which employees understand and process their roles and the information they receive.
Furthermore, their studies show that when employees struggle to understand the “lingo” of their roles, their confidence in their ability to complete tasks falls, which in turn reduces their intention to seek and share knowledge.
Thus, Accenture’s transformation highlights an issue that many companies seeking reinvention face: corporate jargon erodes confidence and creates a culture of hesitation. Authentic leadership depends on clear, accessible language that builds psychological safety.
Do LLMs Hallucinate Electric Sheep?
Beyond internal culture, the risks of AI extend to talent acquisition. Research suggests that each of us now manages three versions of ourselves: our physical presence, our online profiles, and a new “third identity”, the data-driven version of us constructed by AI. This new identity is seldom an accurate reflection; instead, it is a fragmented construct of data points, prone to manipulation and misinterpretation.
A recent Financial Times article highlighted the danger of relying on these third identities. A journalist discovered that an AI chatbot reviewing her career had hallucinated maternity leave and a non-existent child. While many would dismiss this as a glitch, such fabrication is a feature of how LLMs operate.
Yet because AI tools maintain an aura of neutrality, managers are led to trust algorithmic rankings over human judgment, despite the underlying biases that arise from incomplete data.
A 2025 analysis by Jiang et al. reveals that these tools can reinforce existing inequalities and human biases: candidates with employment gaps or from particular institutions can be disproportionately penalised.
Thus, human insight remains crucial to ensure that fairness is not abandoned for an efficiency undermined by AI hallucinations. Leaders must treat AI outputs as hypotheses rather than facts and recognise that our new LLM-driven identities may fail to capture the nuance of human potential.
Managing the Risks
To manage these risks successfully, organisations ought to move beyond treating AI as a plug-and-play solution; a systematic approach is necessary for effective integration. One such approach, suggested by Jiang et al., is Leavitt’s Diamond Model.
This model emphasises four key components and their interdependence: people, tasks, structures and technology. In the context of AI, a change in technology ripples through the other three dimensions. Leaders must therefore act as integrators, ensuring that the firm’s management frameworks are robust enough to absorb the impact on the other components while AI is adopted and deployed.
Ultimately, a culture of ethical innovation and simplified processes must accompany algorithmic output, enabling firms to build resilience to the very AI tools they are rushing to adopt.
Sources
Bullock, O. M., & Bisbey, T. (2025). Jargon in the workplace reduces processing fluency, self-efficacy, and information seeking and sharing. International Journal of Business Communication. https://doi.org/10.1177/23294884251364525
Castilla, E. J. (2025, December 15). AI is reinventing hiring — with the same old biases. Here’s how to avoid that trap. MIT Sloan School of Management.
Financial Times. (2025, November 30). Accenture has started calling its nearly 800,000 employees “reinventors”. https://www.ft.com/content/668944f0-4fb5-4d0a-a86a-93a0ffd0e57e
Gagan, O. (2025, December 10). The perils of using AI when recruiting. Financial Times. https://www.ft.com/content/229983ee-c11f-44fb-8e61-2ac61d8d100a
Jiang, Y., Cai, Z., & Wang, X. (2025). Leverage Generative AI for human resource management: Integrated risk analysis approach. The International Journal of Human Resource Management, 36(11), 1929–1959. https://doi.org/10.1080/09585192.2025.2544972
Robles-Carrillo, M. (2024). Digital identity: An approach to its nature, concept, and functionalities. International Journal of Law and Information Technology, 32, eaae019. https://doi.org/10.1093/ijlit/eaae019