Interview
What AI can’t replace
We asked three experts to weigh in on the future of work...
Rodrigo Wielhouwer
April 15, 2025

The future of work

"Two-thirds of jobs require AI skills"

That insight from Indeed CEO Chris Hyams, shared in a recent Fortune feature, captures the paradox of our time. Artificial intelligence is transforming the workplace at unprecedented speed.

The numbers suggest disruption: AI could impact up to 300 million jobs worldwide, according to Goldman Sachs, and 41% of employers anticipate workforce reductions in the next five years, according to the World Economic Forum. But beneath the anxiety lies a quieter truth: AI still depends on us—for context, judgment, emotional intelligence, and ethical decision-making.

And perhaps most importantly: for follow-through.

As organisations invest billions into AI capabilities, many still struggle to align systems, people, and purpose.

In a landscape where automation rapidly evolves and generative models gain ground, what makes work truly human? And what role do people still play in a data-driven world?

While the appeal of AI-driven productivity is undeniable, the shift from capability to consequence raises crucial questions: How do organisations stay aligned during rapid change? Why does resistance persist even when benefits are clear? And what qualities will define the professionals of the future?

Three experts from data science, psychology, and industry weigh in:

Dr. Frederik Situmeang, Associate Professor in AI & Data-Driven Business and Lead Data Scientist at Datafeed
Sara Wielhouwer, Business Psychologist and Partner at Datafeed
John Matoso, Change Management Consultant and Partner at Datafeed

Here’s what they had to say.

The business case for human-AI collaboration

Question to Dr. Frederik Situmeang: Given the claim that AI can perform many skills listed in job postings, what’s the real-world business case for combining AI capabilities with human insight?

As artificial intelligence (AI) continues to evolve, it is becoming capable of performing many of the skills listed in job postings, from repetitive tasks (e.g. Amazon’s warehouse robots) to creative content generation (e.g. Canva and Midjourney). However, the real-world business case for combining AI capabilities with human insight lies in enhancing decision quality, reducing risk, and fostering innovation. While AI excels at identifying patterns, processing data at scale, and automating routine tasks, it lacks the contextual understanding, ethical reasoning, and emotional intelligence that humans bring to the table.

This is why leading organizations are leaning into responsible applied artificial intelligence as a model of human-AI collaboration rather than full automation. For instance, at Achmea, AI analyzes insurance claims and suggests whether to award them, but insurance advisors interpret these suggestions in light of the information highlighted in each claim. Similarly, at IKEA, AI tools help customers visualize designs faster, but human staff verify and adjust the outputs, improving the design together with the customer. These examples underscore that AI can enhance productivity, but human oversight remains crucial for responsible implementation.
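To make the division of labour concrete, the pattern behind these examples can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the "AI suggests, human decides" model, not Achmea's or IKEA's actual systems; the fields, thresholds, and scoring logic are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ClaimSuggestion:
    claim_id: str
    recommendation: str    # "approve" or "review"; the model only advises
    confidence: float      # model confidence in [0, 1]
    highlights: list[str]  # facts the model surfaces for the human advisor

def suggest(claim_id: str, amount: float, has_receipts: bool) -> ClaimSuggestion:
    """Stand-in for a trained model: scores a claim and surfaces context."""
    confidence = 0.9 if has_receipts and amount < 1000 else 0.55
    recommendation = "approve" if confidence >= 0.8 else "review"
    highlights = [f"amount={amount:.2f}", f"receipts={'yes' if has_receipts else 'no'}"]
    return ClaimSuggestion(claim_id, recommendation, confidence, highlights)

def advisor_decides(s: ClaimSuggestion, approve: bool) -> str:
    """The advisor sees the suggestion and the highlights, then makes the
    binding decision; the model never decides on its own."""
    print(f"{s.claim_id}: model suggests '{s.recommendation}' "
          f"(confidence {s.confidence:.0%}), flagged {s.highlights}")
    return "approved" if approve else "denied"

suggestion = suggest("CLM-001", amount=450.0, has_receipts=True)
print(advisor_decides(suggestion, approve=True))
```

The design choice worth noticing is that the model's output is a suggestion object rather than an action: the system is structured so that a human decision is always the last step, which is what distinguishes collaboration from full automation.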

Research backs this up: AI is described as a “cognitive extender,” augmenting rather than replacing human decision-making. Companies that combine AI with strong human oversight outperform those that focus on automation alone. This collaborative model not only avoids the ethical pitfalls of over-automation but also encourages continuous learning and upskilling, particularly in roles that require judgment, creativity, and adaptability. For example, Volkswagen trains factory workers to use AI-powered visual inspection tools based on computer vision, allowing them to shift from manual quality checks to more strategic quality control roles.

Ultimately, the integration of AI into the workplace should not be viewed as a threat to human employment, but rather as a catalyst for reimagining the nature of work itself. By automating routine and repetitive tasks, AI enables professionals to redirect their efforts toward activities that require critical thinking, emotional intelligence, and complex decision-making. This shift allows organizations to not only enhance operational efficiency but also to foster a more engaged, adaptive, and future-ready workforce. Embracing this collaborative model positions businesses to thrive in an increasingly dynamic and technology-driven environment.

Question to Sara Wielhouwer: Why do employees often resist the implementation of AI tools, even when they offer clear productivity benefits?

Employees frequently exhibit resistance to the implementation of AI tools due to deeply rooted psychological and emotional factors. A primary reason for this resistance is the inherent human aversion to change and uncertainty. The introduction of AI technologies typically disrupts established workflows, prompting anxieties related to job security, competence, and professional identity.

Additionally, territorial behavior significantly contributes to resistance. Employees often associate their professional status and self-worth with their specialized skills and knowledge. The adoption of externally developed AI solutions can thus be perceived as a threat to their expertise, leading to reluctance in fully embracing new technologies.

Furthermore, cognitive biases, particularly the "Not Invented Here" syndrome, intensify this resistance. Employees may unconsciously prefer internal methods and solutions, rejecting external innovations such as AI out of skepticism or loyalty to in-house approaches. Even when provided with evidence of efficiency gains, confirmation bias may cause employees to disproportionately focus on minor flaws or challenges encountered during early implementation, further reinforcing their resistance.

Question to John Matoso: What are the biggest organisational blind spots you see when it comes to adopting AI-driven tools or workflows?

There are several critical blind spots organizations face when implementing AI, each requiring careful consideration. First and foremost is the necessity for clarity regarding what business problem or opportunity the AI agent, tool, or workflow is intended to solve. Without this foundational understanding, companies risk pursuing expensive technological solutions disconnected from real business needs. Second is the potential misconception that AI is merely another tool to be handled like any prior technology rollout. This perspective may lead to inadequate implementation approaches and governance structures.

Perhaps most concerning is the failure to root AI initiatives in the core values, culture, and ethical foundations of the organization. Quite simply, deploying AI appears fundamentally different from implementing other tools, and recognizing this distinction helps frame critical oversight parameters. The implementation of AI may require leadership approaches that transcend conventional organizational structures. As Hoque, Davenport, and Nelson (2023) point out in their analysis, traditional technical leaders like CIOs and CTOs, while vital for implementation and maintenance, often lack the capacity and authority to address the broader human and organizational dimensions of AI transformation. This gap in leadership suggests organizations need a new type of leader who can bridge technical implementation with organizational values and human considerations. Clarity of purpose serves as a crucial safeguard, potentially helping to anticipate unintended consequences that may emerge as AI systems interact with complex organizational environments.

Without taking the critical steps of establishing purpose clarity rooted in organizational values, culture, and ethical frameworks, a company might struggle to define or measure the success of its AI endeavors. The consequences could extend beyond financial investments in potentially ineffective initiatives. Organizations may risk compromising their corporate ethos, possibly causing harm to their reputation and creating liability exposure. This is why leadership might consider approaching AI implementation with a deliberate emphasis on alignment with organizational identity and values, rather than treating it as simply another item in the technological toolkit.

Aligning insight, people, and culture

Question to Sara Wielhouwer: What psychological factors or biases might be at play when employees perceive AI as a threat to their role, and how can leaders address this constructively?

When employees perceive AI as threatening, several psychological biases become evident. One notable bias is overconfidence, where individuals overestimate their own abilities compared to automated systems. Employees might also demonstrate confirmation bias, actively seeking shortcomings in AI tools to confirm their initial skepticism, often ignoring the overall long-term benefits.

Moreover, employees may disproportionately fear potential job displacement or a reduction in professional status, even when such fears are not justified by reality. The perceived threat of losing their role or professional significance can substantially amplify resistance.

Leaders can effectively address these psychological barriers by promoting transparency and clear communication. Explicitly articulating AI’s supportive role and emphasizing how it enhances rather than replaces human skills is critical. For example, emphasizing AI's capacity to manage tedious data entry tasks can enable accountants to dedicate more time to strategic financial planning, thereby enhancing their professional contributions.

Creating an environment of psychological safety, where employees feel comfortable expressing concerns without fear of repercussions, is equally important. Additionally, leaders should proactively provide upskilling and training opportunities, empowering employees with new competencies that complement AI tools. Publicly celebrating initial successes and acknowledging individual contributions can further promote acceptance, highlighting the indispensable human role in successful AI integration.

Question to John Matoso: From your experience, what are the most effective strategies to align people, processes, and culture when introducing AI into existing operations?

Building on the critical blind spots we previously discussed, the most effective strategies to align people, processes, and culture when introducing AI into existing operations begin with an unwavering clarity of purpose. Without this foundational element, organizations would be hard-pressed to articulate a compelling change story that answers the essential question: "Why are we doing this?"

AI deployment presents a paradoxical moment for change management: while certain fundamentals of change management remain crucial, organizations simultaneously need to think well beyond conventional approaches to adapt to the many permutations an AI deployment may bring. This brings to mind a jazz metaphor I find quite apt: the musician solidly anchored in both technique and theory, mastering everything at their disposal, then calling on that mastery during improvisations that can quite literally stretch the very tonal rules they operate within, as with Ornette Coleman, where even form may be difficult to find (Orlikowski and Hofman, 1997, p. 11). Yet underpinning all of this is an extremely solid anchoring in skill.

The change story must be deeply rooted in the company's vision and highly coherent with its core mission and values. Without this alignment, the change simply makes no sense to those affected by it. This becomes particularly critical given that AI implementation may be perceived, rightly or wrongly, as a potential existential threat by many employees. Considering this potentially disruptive element, successful AI integration begins with an essential exercise in sense-making. Organizations must craft a coherent narrative that connects AI capabilities to organizational purpose, acknowledging concerns while illuminating opportunities.

This narrative requires committed sponsorship: leaders who can clearly communicate the purpose, ensure it is rooted in the organization's values and mission, and guide and reinforce the existing culture. However, the approach cannot be top-down. Alignment and engagement require perspectives from all organizational levels, creating space for collaborative exploration and shared ownership. When people understand how AI aligns with what the organization has always valued and where it aims to go, resistance often transforms into cautious curiosity, and eventually into collaborative exploration.

Question to Dr. Frederik Situmeang: Where do most organizations fall short when trying to translate AI-driven data insights into strategic business outcomes?

One of the most persistent barriers to translating AI-driven insights into strategic business outcomes is the silo mentality prevalent in many organizations. When departments operate independently, AI-generated insights often remain confined to a single function, limiting their broader impact. Misaligned KPIs can further exacerbate this challenge. For instance, a supply chain team may focus on cost-efficiency, while the customer service team prioritizes retention—each shaping AI tools according to their own needs rather than a shared organizational vision. As a result, AI systems are often optimized for narrow outcomes rather than holistic value creation. Siloed structures also lead to duplicated effort, where one department solves a problem that another is unaware of, ultimately stalling innovation and undercutting the very strategic integration that AI promises. These patterns create environments in which human oversight is fragmented and reactive, rather than coordinated and strategic.

This fragmentation is especially visible in how many organizations approach AI development, most notably when R&D or technology teams design systems in isolation from the business units that will ultimately rely on them. Consider the case of a customer-facing chatbot. While the technical team may build a solution based on available documentation, their limited exposure to real-time customer interactions and service workflows can result in tools that lack the nuance required for effective engagement. This disconnect becomes even more problematic when customer queries relate to the company’s business model—topics technical teams are neither trained in nor tasked with fully understanding. The consequence is not only longer development cycles but also diminished trust in the AI itself when its responses fall short of expectations. In such cases, human oversight—often confined to post-deployment QA or ad hoc feedback—is introduced too late to meaningfully shape the system’s development or ensure alignment with business objectives.

To move beyond these limitations, organizations must foster closer collaboration between technical and business functions from the outset of any AI initiative. This requires embedding individuals who can bridge technical capability with business acumen—linking pins who understand data, marketing, operations, and consumer psychology. These hybrid professionals, sometimes called analytics translators or digital product managers, are vital not just for aligning AI outputs with user needs, but for enabling a more proactive, continuous form of human oversight. They help frame the questions AI systems are built to answer, shape decision-making logic, and ensure that outputs are interpreted in the right context. In this way, oversight evolves from merely reviewing outcomes to actively steering how AI is designed and deployed. As AI becomes increasingly embedded in strategic and operational decisions, human oversight must also evolve—becoming more cross-functional, more anticipatory, and more deeply integrated into the fabric of organizational governance and innovation.

The future of oversight and the human edge

Question to Dr. Frederik Situmeang: How do you see the role of human oversight evolving as AI continues to advance in strategic and operational contexts?

As organizations increasingly adopt AI to drive strategic and operational outcomes, the role of human oversight is evolving from a technical checkpoint to a central pillar of responsible innovation. In siloed environments, where AI systems are built without cross-functional collaboration, oversight often becomes fragmented, reactive, and disconnected from strategic priorities. When technical teams operate in isolation, they lack the business context to anticipate downstream risks or ensure relevance. This not only undermines performance but also raises ethical concerns, as decisions may be encoded into algorithms without sufficient scrutiny. Under the EU AI Act, such oversight failures are especially problematic, as the regulation emphasizes the need for human involvement, transparency, and traceability, particularly in high-risk systems. Late-stage intervention is no longer sufficient: oversight must be proactive, continuous, and embedded from the outset.

In contrast, a more mature model of human oversight integrates ethical considerations, explainability, and organizational learning throughout the AI development lifecycle. This means moving beyond narrow compliance to embrace a more expansive view of governance, where ethical principles such as fairness, accountability, and non-discrimination are built into the design process. Crucially, the EU AI Act mandates that AI systems, especially those influencing people’s rights and opportunities, must be explainable and auditable. Roles such as analytics translators or digital product managers can help fulfill this mandate by acting as intermediaries who ensure that AI systems are not only technically robust but also aligned with business goals and societal values. These professionals enable oversight to be anticipatory rather than corrective, guiding AI development in ways that reflect real-world needs, stakeholder concerns, and legal obligations.

As AI systems become more autonomous and impactful, oversight must also evolve to include strategic framing and ethical justification. It is no longer just a question of whether an AI system functions correctly, but whether it functions responsibly. This requires clear governance mechanisms to define which business questions AI should answer, set acceptable risk thresholds, and ensure decisions can be justified in human terms. In operational contexts, this might involve setting safeguards or requiring human-in-the-loop mechanisms. Strategically, it means embedding human values into AI decision-making and aligning system behavior with both organizational purpose and evolving regulatory standards like those outlined in the EU AI Act. Oversight in this context is both a risk mitigation tool and a value creation lever.
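As one illustration of what such a safeguard might look like in practice, here is a minimal sketch of a human-in-the-loop gate with an audit trail. It is a toy built on stated assumptions, not a reference implementation of the EU AI Act: the risk threshold, risk scores, and log format are all invented for the example.

```python
import json
from datetime import datetime, timezone

RISK_THRESHOLD = 0.3  # illustrative: above this, a human must approve

def execute_with_oversight(action: str, risk_score: float, human_approver=None) -> dict:
    """Gate an AI-proposed action: low-risk actions proceed automatically,
    high-risk actions require explicit human sign-off, and every decision
    is recorded so it can be audited and justified in human terms."""
    needs_human = risk_score > RISK_THRESHOLD
    approved = (not needs_human) or (human_approver is not None
                                     and human_approver(action, risk_score))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "risk_score": risk_score,
        "human_reviewed": needs_human,
        "approved": approved,
    }
    print(json.dumps(record))  # stand-in for an append-only audit log
    return record

def reviewer(action: str, risk: float) -> bool:
    """Stand-in for a real review workflow; here the reviewer declines."""
    return False

execute_with_oversight("send renewal reminder", risk_score=0.1)
execute_with_oversight("deny insurance claim", risk_score=0.8, human_approver=reviewer)
```

The point of the sketch is structural: risk thresholds, human sign-off, and traceable records are encoded as preconditions of execution rather than bolted on afterwards, which is what proactive, continuous oversight means once it reaches code.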

Ultimately, the future of human oversight lies in its deep integration, not only into the AI systems themselves but also into the cross-functional collaboration structures that shape how those systems are conceived and deployed. When oversight is treated as a continuous, interdisciplinary process, supported by ethical frameworks, explainability standards, and unified strategic goals, it becomes a multiplier for responsible innovation. Organizations that adopt this approach will be better positioned to not only comply with evolving regulations but also earn trust, ensure fairness, and unlock the full transformative potential of AI.

Question to Sara Wielhouwer: If AI tools increasingly handle technical tasks, what emotional and cognitive strengths will define successful professionals in the near future?

As AI increasingly assumes responsibility for technical and repetitive tasks, emotional intelligence and interpersonal skills will become critically important. Empathy, effective communication, and the capacity to manage complex emotional interactions will distinguish successful professionals. For instance, in customer service roles, although AI may efficiently handle routine inquiries, managing emotionally complex complaints or sensitive situations remains an inherently human capability.

Furthermore, adaptability and a commitment to continuous learning will significantly shape professional success. Professionals who rapidly embrace change and enthusiastically integrate emerging technologies into their workflows will excel. Creativity and advanced problem-solving abilities will become increasingly valuable, as AI frees cognitive resources by automating repetitive tasks. Professionals who utilize this advantage to innovate and creatively address challenging problems will notably distinguish themselves.

Finally, critical thinking and ethical judgment will emerge as essential competencies. AI can provide data-driven insights but lacks human contextual awareness and moral discernment. Therefore, professionals capable of ethically interpreting AI-generated outputs and making nuanced, context-sensitive decisions will be highly valued. For example, human resources professionals may rely on AI for preliminary candidate screening but will depend heavily on human judgment for final hiring decisions, taking into account factors such as cultural fit and ethical considerations.

Question to John Matoso: What advice would you give to leaders trying to maintain employee engagement and clarity during rapid AI adoption?

This is an important question, and we need to take a moment to recognize some fundamental elements. True of any change, and certainly true here, perhaps in an amplified manner, is the fact that leaders live the change too: they too must change. In fact, in my own practice, my approach is rooted in the notion that for any change to succeed, leaders and managers must change first. This is incredibly challenging in the case of AI deployment, because beneath it all, leaders themselves may be grappling with legitimate existential questions about their own positions and futures.

Therefore, although this has always been true, the human in the change cannot be forgotten. It may sound obvious, but "putting your own mask on first" is vital: it is very difficult to guide others through a change if you are struggling with that very same change yourself. This means ensuring that your own values, interests, and career objectives are still in alignment with where the organization is headed. If cognitive dissonance does emerge, it is important to work through it. That may be a simple exercise of revisiting your own motivators and career aspirations, or it may involve exploring educational options; regardless, taking the time to look after yourself is critically important.

Engaged leaders who can clearly articulate, share, and affirm the purpose and vision of a change are among the most important keys to its success; without them, engaging employees is very challenging. But to succeed, taking care of oneself and ensuring one's alignment with the organization's values and objectives becomes all the more necessary, especially if part of this reflection includes grappling with potential existential considerations. Only then can leaders authentically support others through their own journeys of adaptation and growth amid rapid AI transformation.

Conclusion

As AI continues to evolve, so must we — not just in terms of tools, but in how we think, lead, and collaborate. The path forward lies not in choosing between human and machine, but in designing systems where each amplifies the other.

Explore how we help organisations move from data overload to insight-driven action — guided by clarity, behavioural intelligence, and strategic alignment.
