
In this season of tech predictions, with 2025 heralded as “the year of AI agents” by industry watchers such as Gartner and TechTarget, it pays to consider “predictions past” to understand where the fuss is coming from.
One good place to look is Human+Machine: Reimagining Work in the Age of AI, first published in 2018 by Paul R. Daugherty and H. James Wilson, which argued that powerful collaborations between people and algorithms hold vastly greater promise than automation alone.
Drawing on research covering more than 1,500 companies implementing early AI systems, the Accenture veterans found that successful organizations didn't just blindly replace people with AI, but rather deployed promising "fusion" opportunities that combined human and machine capabilities to raise the bar on value-delivery.
Published an eon ago in AI time, the Harvard Business Review Press title examined several of these “winning bets” (GE, Stitch Fix, Capital One, etc.) and distilled a key insight about technology adoption in the age of intelligent machines.
Articulated in the pre-GenAI era (though updated in 2024), Human+Machine has proven prescient and durable even as AI has exploded onto the scene as an everyday superpower, available to everyone, everywhere.
But to really grasp the potency of their insights, you need to embrace the nonlinear.
The Power of Dynamic Networks
Daugherty and Wilson make a crucial observation that fundamentally challenges how we think about business processes in the age of AI.
The traditional view of processes as linear, sequential chains of tasks is giving way to something far more dynamic and adaptable:
"Businesses need to shift from seeing processes as collections of sequential tasks. In the age of AI, processes become more dynamic and adaptable. Instead of visualizing a process as a collection of nodes along a straight line, say, it might help to see it as a sprawling network of movable, reconnectable nodes or perhaps something with a hub and spokes."
To understand this transformation, consider the evolution of navigation systems. As Daugherty and Wilson explain:
"The first online maps were largely just a digital version of their paper counterparts. But soon, GPS navigation devices changed how we used maps, giving us directions after entering a destination. Even that, though, was still a fairly static process.”
The revolution came with apps like Waze, which create "living, dynamic, optimized maps" by combining AI algorithms with real-time data from multiple sources. This is participatory mapping powered by humans and robots and loved by millions of drivers.
This distinction between digitizing existing processes and reimagining them for the AI era has proven remarkably prescient.
Modern AI demonstrates three key characteristics:
Real-time adaptivity: AI-enabled processes adjust instantly to current conditions, as seen in BMW's flexible human-machine manufacturing teams that handle customized orders without manual process changes.
Simultaneous feedback loops: Systems like GNS Healthcare's drug interaction analysis continuously generate and test hypotheses from patient data, creating dynamic learning cycles.
Emergent insights: These networks produce unexpected discoveries impossible in linear systems, like GE's counterintuitive wind farm optimization findings about turbine speeds.
Success requires embracing the ever-changing, interconnected nature of human-machine collaboration and being open to reimagining how work gets done. Companies need to move beyond sequential thinking and embrace more dynamic, adaptive approaches to AI transformation: what David Teece calls “dynamic capabilities.”
Again, even before the rise of generative AI, the case had been made for rethinking how work gets done. As the authors dryly observed:
“The linear model for process no longer cuts it.”
The Missing Middle and Beyond
In highlighting what they astutely foresaw as an AI skills gap, the authors propose a compelling (if awkwardly worded, to my ears) concept they call the "missing middle" – a space where humans and machines work as symbiotic partners rather than adversaries.
"In the missing middle, humans and machines aren't adversaries, fighting for each other's jobs. Instead, they are symbiotic partners, each pushing the other to higher levels of performance.” the authors suggest.
Human+Machine's three forms of augmentation:
Amplification: AI systems enhance human capabilities through data-driven insights, improving decision-making and expanding creative possibilities.
Interaction: Natural language interfaces and personalized experiences transform how organizations connect with stakeholders.
Embodiment: Physical collaboration through robotics, augmented reality, and other real-world interfaces creates new possibilities for human-machine teamwork.
In the view from 2018, these modes of engagement offered the greatest opportunity for developing the highest-value collaborations. And while this framework remains provocative, today's AI capabilities have evolved beyond what the authors envisioned.
Modern language models demonstrate levels of reasoning, planning, and problem-solving that push past pure augmentation. Multi-agent systems introduce new complexities where AI agents work together, specialize and exhibit emergent behaviors.
This suggests a future that demands new forms of orchestration where humans direct teams of specialized AI agents rather than engaging in simple one-to-one collaboration.
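As a rough illustration of that orchestration pattern (not anything proposed in the book), the sketch below shows a human "director" routing a task across a small team of specialized agents, with an approval gate keeping the human in the loop. The agent roles are stubbed out; in practice each would wrap a model or tool call.

```python
# Illustrative sketch only: a human director orchestrates a small team
# of specialized agents instead of collaborating one-to-one with a single
# model. Agent behavior is stubbed out; in practice each run() would wrap
# a model or tool call.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    name: str
    specialty: str
    run: Callable[[str], str]

def research_agent(task: str) -> str:
    return f"[research] gathered background on: {task}"

def drafting_agent(task: str) -> str:
    return f"[draft] produced a first pass for: {task}"

def review_agent(task: str) -> str:
    return f"[review] flagged open questions in: {task}"

TEAM = [
    Agent("scout", "research", research_agent),
    Agent("writer", "drafting", drafting_agent),
    Agent("critic", "review", review_agent),
]

def orchestrate(task: str, approve: Callable[[str], bool]) -> List[str]:
    """Route the task through each specialist, pausing for human approval
    before an output moves downstream."""
    results = []
    for agent in TEAM:
        output = agent.run(task)
        if approve(f"{agent.name} ({agent.specialty}): {output}"):
            results.append(output)
        else:
            results.append(f"{agent.name}: output held for human rework")
    return results

# Usage: an auto-approving stand-in for the human director.
for line in orchestrate("summarize Q3 churn drivers", approve=lambda msg: True):
    print(line)
```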
The MELDS Transformation Blueprint
While recognizing the power of human-machine collaboration is crucial, implementing it successfully requires a new approach to navigating nonlinear change.
Daugherty and Wilson propose the acronymic MELDS framework – Mindset, Experimentation, Leadership, Data and Skills – not as another management buzzword, but as a practical blueprint that addresses the unique challenges of AI transformation.
What makes MELDS different is its recognition that AI implementation isn't simply a technical challenge – it's an AI-enabled operating model for the whole company.
Each element addresses a barrier to AI success:
Mindset challenges organizations to move beyond surface-level automation. The authors found that companies who merely digitized existing processes saw initial gains plateau quickly. In contrast, organizations that reimagined their processes – such as Rio Tinto's transformation of mining operations through AI-enabled remote control centers – achieved breakthrough performance improvements.
Experimentation acknowledges that unlike previous technological transformations, there's no standardized playbook for AI implementation. The authors cite Amazon's approach to testing its cashierless Amazon Go stores with employees first – recognizing that AI systems must be refined through real-world testing rather than theoretical design.
Leadership in the AI era requires a delicate balance between innovation and responsibility. The framework emphasizes that executives must manage trust while pushing for transformation. Without this balance, organizations risk either moving too cautiously to achieve meaningful change or pushing too aggressively and facing backlash.
Data emerges as the critical foundation, but with a twist. Rather than simply accumulating information, organizations need what the authors call "data supply chains" – dynamic, carefully curated streams of information that fuel intelligent systems. This represents a dramatic shift from treating data as a static asset to viewing it as a living resource.
Skills completes the framework, pointing to the new "fusion skills" the authors say workers need to thrive in the missing middle – capabilities for training, explaining and sustaining intelligent machines that neither traditional technical nor managerial education provides.
MELDS remains interesting because it addresses both the technical and human elements of AI transformation, providing a comprehensive yet flexible approach to navigating the challenges of human-machine collaboration.
What resonates is that companies need to avoid a “checklist approach” and adopt a new way of thinking about organizational transformation in the age of AI.
Operational Implications for Leaders
The AI revolution demands a different kind of leadership, one dedicated to creating environments where people and machines can reach their full potential. Leaders are the midwives of AI emergence.
As Daugherty and Wilson emphasize, leaders must move beyond traditional change management to foster what they call "responsible normalizing" – helping organizations and stakeholders embrace new forms of human-machine collaboration in hybrid organizations staffed by both human and digital workers.
Building trust through transparency emerges as a critical leadership imperative. The authors point to the phenomenon of "algorithm aversion" – where people lose confidence in AI systems more quickly than human decision-makers after seeing them make mistakes.
Even when AI systems outperform humans, workers and customers may still prefer human judgment. Successful leaders actively work to demystify AI systems by ensuring they provide clear explanations for their decisions, creating feedback mechanisms that allow humans to question and improve AI outputs, and demonstrating how AI augments rather than replaces human capabilities.
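What might those transparency and feedback mechanisms look like in practice? Below is a minimal, hypothetical sketch with invented field names and logic: each AI-assisted decision carries its reasons and accepts a human challenge that can feed later review or retraining.

```python
# Illustrative sketch only: an AI-assisted decision that carries its own
# explanation and accepts human feedback, so people can question and
# improve the system's outputs. Field names and logic are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    outcome: str
    confidence: float
    reasons: List[str]                        # surfaced to the affected person
    feedback: List[str] = field(default_factory=list)

    def explain(self) -> str:
        """Return a plain-language account of why the decision was made."""
        return f"{self.outcome} (confidence {self.confidence:.0%}) because: " + "; ".join(self.reasons)

    def contest(self, note: str) -> None:
        """Log a human challenge; downstream this can feed review or retraining."""
        self.feedback.append(note)

decision = Decision(
    outcome="application referred to a human underwriter",
    confidence=0.62,
    reasons=["short credit history", "income documents could not be verified"],
)
print(decision.explain())
decision.contest("Income documents were uploaded on 3 March; please re-check.")
print("Feedback logged:", decision.feedback)
```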
A key leadership challenge involves preventing what the authors call "moral crumple zones" – situations where human workers unfairly bear the burden of responsibility when AI systems fail. Just as a car's crumple zone absorbs impact to protect passengers, organizations must ensure their human workers don't become liability sponges for AI shortcomings.
Perhaps most importantly, leaders must guide their organizations through a cultural shift. As the authors note, "Success demands 'neural opportunism' – the ability to naturally incorporate new technologies to create hybrid human-machine organizations."
The stakes for leadership success in AI transformation are high. As Daugherty and Wilson conclude:
"Leaders must recognize that AI transformation is not merely a technical challenge but a fundamental reimagining of how organizations operate."
Trust is the Foundation of Collaboration
The deepest and most productive human-machine collaborations are built on trust, a central challenge of the AI age that Daugherty and Wilson address head-on.
Human+Machine argues that when employees and customers trust AI systems, they engage with them more fully, leading to breakthrough performance gains. Trust, in other words, is a value driver. Yet fostering this trust requires a comprehensive approach to responsible AI that goes far beyond technical capabilities.
First, organizations must build in transparency and explainability from the start. This means hiring "ethics compliance managers" who serve as watchdogs for human values – an area of specialization which has indeed grown fast.
When an AI system makes decisions affecting employees or customers, companies need the capability to explain how those decisions were reached. And when the systems fail, as noted above, humans should not unfairly bear that burden.
The authors emphasize that ethical AI deployment isn't just about avoiding problems – it's key to realizing the full potential of human-machine collaboration.
When employees trust AI systems, they engage more deeply with them.
When customers understand how AI makes decisions affecting them, they're more likely to accept those decisions.
When organizations proactively address ethical concerns, they can move faster with AI implementation rather than being caught in reactive damage control.
As Daugherty and Wilson note, "Companies can achieve the largest boosts in performance when humans and machines work together as allies, not adversaries."
All Tomorrow’s Predictions
Making predictions about the future of digital technology can be a risky business. Just ask Steve Ballmer about the success of Apple’s iPhone.
While the rise of agentic AI systems has transcended Daugherty and Wilson’s vision, their core insights about non-linear processes and human-machine collaboration remain powerfully relevant as we look to the future.
Indeed, the emergence of LLMs, autonomous agents and multi-modal AI reinforces the importance of reimagining processes around human-machine collaboration. What requires updating is our understanding of what these collaborations might look like.
For instance, the "missing middle" may need to expand beyond direct human-machine partnerships to encompass networks of human-directed AI agents working with meaningful degrees of autonomy.
Amid the "year of AI agents," several key themes emerge:
Adaptive Intelligence: The future points toward AI that adapts in real-time to changing conditions. Organizations must develop capabilities to manage this evolution while maintaining control.
Ethics at Scale: As AI systems become more autonomous and interconnected, ethical frameworks become increasingly critical. Organizations must scale their approach to responsible AI, maintaining boundaries and transparency.
The Human Element Elevated: The rise of capable AI systems elevates rather than diminishes the human role. As AI handles routine tasks, human creativity, judgment, and emotional intelligence become more valuable.
Considering the AI horizon as it appears today, Human+Machine’s lasting contribution is its cogent prediction that the transformation we're witnessing isn't just about technology – it's about reimagining how humans and machines work together.
Success will require new frameworks for managing autonomous systems, different approaches to risk and control, and more sophisticated governance mechanisms.
The future remains to be imagined, but it will certainly not be linear.
Agentic Foundry: AI For Real-World Results
Learn how agentic AI boosts productivity, speeds decisions and drives growth — while always keeping you in the loop.