Special Feature
We examine issues affecting leadership
Strategy or Scramble: Reflecting on the Future of Business in the Advent of AI and Mass Layoffs
BY LAURIE HUFFMAN
The email arrived like so many others over the last two years: brief, clinical, and decoupled from the life it unsettled. “We’re restructuring to accelerate our investments in AI,” it read, the corporate equivalent of a band-aid. The line was the same across companies, across cities, across countries — a script that tried to make capital’s cold pivot sound like destiny. In the months that followed, thousands of people found themselves disentangling their identities from jobs they had held for decades: a mid-career product manager re-learning how to write resumes, a customer-service rep teaching herself data annotation, a small shop owner reconfiguring services around a chatbot. These are not isolated stories. They are the human faces in the numbers rolling across business pages and policy briefs: waves of layoffs across tech and other sectors in 2024–2025, heavy investments in AI infrastructure, and a steady rise in small businesses integrating AI tools into day-to-day operations. The stakes are enormous. The question at the center of every boardroom and barista’s breakroom is not just which tasks AI can do — it’s whether people and institutions will treat this moment as a strategy or a scramble. This piece stitches together the best available research, reporting, and policy analysis — from international organisations and think tanks to small-business surveys and frontline reporting — and steers it toward a single, human-centered thesis: AI is remapping the shape of work and business, but human ingenuity — retooled, reskilled and reimagined — will decide whether this becomes an era of strategic partnership or widespread scramble.
The present: a market of two moves — automate or invest
When CEOs send “restructuring” emails that name AI as the reason, they are rarely offering a full explanation. The reality is more complex — companies are simultaneously cutting labor costs and committing vast sums to AI infrastructure: chips, data centers, cloud contracts, and specialist hires. In 2024–2025, multiple large firms announced layoffs while also signaling larger AI investments to investors, a pattern documented in global reporting and company filings. The message to markets is simple: a leaner organization today, a more automated — or at least more AI-enabled — firm tomorrow.

At the same time, independent trackers and news outlets have chronicled the scale of job cuts, particularly in tech. Even as layoffs continued into 2025, the dynamic was not one-directional: while some roles were cut, demand for AI specialists, engineers and data professionals grew in parallel — a reallocation of labor, not a simple deletion. Analysts caution that isolating “AI-caused” layoffs is difficult: macroeconomic slowdown, investment cycles, and prior over-hiring also play roles. But the corporate narrative is clear: AI is central to the strategy many firms are selling to investors.

What the big-picture analysts say: a recent World Economic Forum assessment and multiple studies find that technological transformations will both displace and create jobs — with the balance depending heavily on policy, reskilling, and corporate choices. The International Labour Organization and OECD emphasize that while AI can boost productivity and job quality in some contexts, it also introduces risks — faster automation of routine tasks, potential wage pressure, surveillance of work, and unequal impacts across demographics and regions.
“AI is not taking jobs. It is taking tasks. What happens next depends entirely on whether leaders choose strategy—or scramble.”
Numbers that matter — what the evidence shows so far
To move beyond headlines, we must anchor the argument in numbers the research community now accepts — and in the caveats they attach.

Layoffs and job cuts are high and continuing. Trackers show hundreds of thousands of layoffs across tech firms in the 2024–2025 window; some industry counts report more than 200,000 people affected in a single year. But these figures coexist with persistent hiring in AI-specialized roles.

Automation potential is broad, but exposure is uneven. McKinsey-style analyses and subsequent updates indicate that a large share of work hours — by some measures more than half of current work hours — could be automated with technologies available today, depending on task composition and industry. This is a measure of technical potential, not an exact forecast of job losses; how many tasks are actually automated depends on economics, regulation, ethics, and choice.

Small businesses are adopting AI rapidly. Recent sector reports and government spotlights (U.S. SBA, national chambers) show a marked uptick in small and medium business adoption of AI tools: CRM automation, chatbots, generative content for marketing, bookkeeping and scheduling. Reports from 2024 into 2025 document increases in SMB AI use and measurable revenue uplifts for adopters. Yet adoption varies by region, sector and digital literacy.

Forecasts are mixed but span a wide range. The World Economic Forum suggested tens of millions of job shifts globally across this decade, while other calculations (Goldman Sachs, various labor studies) have suggested large-scale task exposure and significant displacement potential in some occupations. At the same time, many organizations project job creation in areas like green tech, healthcare, AI oversight and human-centric services. The overall picture is one of major reallocation rather than simple contraction — but reallocation is disruptive.

These numbers matter because they underline the two central realities for managers and citizens alike: first, the near-term turbulence is real, and second, the long-term outcome is not pre-ordained. Policy, corporate strategy, and collective action will determine whether the disruption results in broad prosperity or concentrated churn.
“The future of business will not be decided by how fast we automate, but by how intentionally we redesign work around what only humans can do.”
Small business: adapt fast, or get boxed out?
Small businesses are often framed as fragile in the face of automation. The real picture is more paradoxical: for many small firms, AI is an accelerant that amplifies what they already do well — personalized service, tight local knowledge, flexibility — but it can also raise new competitive pressures. Surveys and government analyses from 2024–2025 show striking signs:
Adoption accelerates revenue and responsiveness. Small firms that added AI-based customer-service tools, automated scheduling, or marketing generators frequently report improved response times and revenue lift. The US Chamber and SBA spotlights highlight AI use cases that make small players more efficient and able to scale digital touchpoints.
The resource gap remains. While more small businesses are using AI, the adoption gap between large incumbents and tiny firms has narrowed but not disappeared. The firms that win are those who pair AI tools with domain expertise and differentiating human service. Reports show adoption rising, but smaller firms still report constraints: costs, skills, and concerns about data privacy.

New business models emerge. Expect to see more “AI-enabled microservices”: accountants offering predictive cash-flow models, micro-retailers using dynamic pricing engines, legal boutiques combining automated contract-drafting tools with human compliance checks. The competitive frontier is not simply “can you use AI?” but “how do you combine AI with unique human judgment?” This opens a strategy window for small firms that can lean into relationships, local knowledge and curation.

A composite vignette (fictional): consider a neighborhood florist who historically competed on Sunday delivery speed. In 2025 she adopted an AI-driven scheduling assistant, a lightweight customer-profile generator, and an automated marketing tool that created event-specific copy. Her customer retention rose — not because AI made flowers nicer, but because it freed her staff to consult on weddings and curate bespoke arrangements that AI can’t credibly manufacture alone. This is the repeated theme in small-business reporting: AI buys time; what people do with that time decides the outcome.
The workforce: who loses, who wins, and what ‘winning’ looks like
It’s convenient to talk about “jobs lost” and “jobs created” as if they were tidy counters. The evidence shows a messier reality:
Routine tasks are most exposed, particularly clerical work, data entry, simple drafting and certain administrative functions. Multiple studies and the ILO’s 2025 update show that clerical roles continue to be among those with the highest automation potential. Yet even within exposed occupations, some tasks are complemented by AI rather than replaced outright.
Demand rises for hybrid skills. The new premium is on human capabilities that machines struggle to replicate at scale: judgment under uncertainty, social intelligence, ethical reasoning, systems thinking and the ability to orchestrate humans and AI together. Employers increasingly seek workers who can “translate” AI outputs into strategy and human action, and who can ask the right questions — the “why” and the “what if.” The OECD and WEF emphasize these skills as critical for the coming years.
Geography and demographics matter. Automation and its economic consequences are unevenly distributed. High-skill hubs still attract AI investment, while regions dependent on routine occupations face greater displacement risk. Women and minority workers can face disproportionate exposure where they cluster in highly automatable roles unless policy corrects course. The ILO and OECD reports call for active labor-market policies and inclusive reskilling to prevent widening inequality.
What “winning” looks like for workers: not merely learning to use a new tool, but cultivating an adaptive toolkit of domain depth, cross-disciplinary knowledge, and the meta-skill of learning how to learn. Companies that treat employees as assets to be redeployed — with meaningful retraining, pathway promises, and supported transitions — are more likely to preserve institutional knowledge and social stability.
The policy and leadership deficit: governance lags capability
Technology often runs faster than governance. In the scramble to deploy AI, many organizations and governments are discovering that regulation, labor protections, and social-safety nets are lagging.
Key issues policymakers face:
Uneven safety nets for displaced workers. Severance packages and unemployment insurance can buffer shocks, but they do not guarantee re-employment. OECD and ILO analyses stress the need for active labor-market policies: targeted training, portable benefits, and transitional support for displaced workers. Where such policies exist and are well-funded, transitions are less painful.
New governance needs. AI creates new regulatory questions — from algorithmic bias to worker surveillance and collective bargaining in hybrid human-AI workplaces. The OECD and ILO call for frameworks that protect worker agency (e.g., transparency over AI decisions that affect work), set standards for human oversight, and encourage participatory governance.
Leadership choices matter. Companies have two basic paths: strategic partnership, where AI augments human labor and companies invest in workforce transitions, and short-term optimization, where AI is used as a lever for cost-cutting without long-term human investment. The latter may boost near-term margins but risks social backlash, talent flight, loss of tacit knowledge, and reputational damage. McKinsey and other management analysts emphasize redesigning workflows over incremental automation to capture the full productivity potential while preserving human roles.
Policy and leadership, then, are the difference between an economic shift that broadens opportunity and one that concentrates gain while hollowing out large swathes of work.
“Leadership isn’t only about achieving results. It’s about inspiring, elevating, and sustaining the people behind them.”
Creativity, judgment, and the new premium on the human
There’s a recurring line in policy papers and think pieces: machines are great at speed and scale; humans are essential for surprise and meaning. The best research on creativity and AI suggests something subtler: AI can amplify human creativity, but it can also push creative outputs toward similarity at scale unless humans intentionally curate difference. Academic work and industry essays show a pattern: when humans use generative models as raw inputs, they often produce more output faster — but the variety and novelty of those outputs depends on human prompts, constraints, and the contextual frames humans bring. A 2024–2025 wave of studies found that AI assistance increases individuals’ creative productivity but can, over time, lead to homogenization unless institutions nurture diversity of inputs and human experimentation.
This is essential for business strategy: if your strategy is “use AI to scale what we already do,” you will scale known products and predictable performance. If your strategy is “use AI to free humans to explore what we don’t yet know,” you can invest in breakthrough offerings. The latter requires deliberate time, resources, and a tolerance for failure — something many firms have curtailed in the push to optimize.
Scenarios for 2026 — strategic forks in the road
Predicting the exact contours of 2026 is impossible; the next year will be shaped by compounding technological, economic and political choices. Still, plausible scenarios can help leaders and workers prepare. Below are three simplified narratives — not forecasts, but forks that illustrate how strategy and scramble diverge.
Scenario A — Strategy: Hybrid redesign and inclusive growth
What happens: Governments and major employers invest in large-scale reskilling programs, create incentives for firms that redeploy laid-off workers into new roles, and build portable benefits that cushion career transitions. Firms redesign workflows around human-AI partnerships, emphasizing human judgment tasks, quality control, and customer relationships that require emotional labor. Small businesses continue to adopt AI for efficiency, but community-focused firms use AI to deepen relationships and offer differentiated, premium services. Productivity rises; new categories of jobs (AI ethics officers, human-in-the-loop curators, AI auditors) expand.
Why it’s plausible: The McKinsey/WEF/OECD playbooks all highlight productivity upside if we redesign work rather than simply automate tasks. The SBA and chamber reports show SMB capacity to adopt when supported. With political will and smart incentives, a large-scale pivot is possible.
Scenario B — Scramble: Short-term efficiency and concentrated disruption
What happens: A wave of firms prioritizes cost-cutting, using AI to eliminate routine roles without investing in redeployment. Layoffs continue, benefits fray under fiscal strains, and displaced workers face prolonged unemployment. Small businesses that cannot afford AI integration are outcompeted by platform-enabled firms. Public frustration mounts; the political economy becomes polarized around protectionist or reactionary measures rather than strategic workforce investment.
Why it’s plausible: We have seen evidence of layoffs paired with AI investment in the recent corporate announcements of 2024–2025. Without coordinated policy responses, private incentives can push companies toward short-run margin improvements.
Scenario C — Hybrid friction: uneven transition
What happens: The transition is patchy. Some sectors and regions adopt Scenario A-style approaches; others slide toward B. High-skill metros prosper and attract AI investment; peripheral regions struggle. Small businesses in digitally savvy sectors do well; those dependent on older models fail. The result is higher overall productivity but greater regional and sectoral inequality, prompting political backlash and ad-hoc regulation.
Why it’s plausible: Many multilateral analyses emphasize unevenness in technology adoption and labor-market impacts unless there’s active policy coordination. The OECD and ILO reports underscore this as an important risk to manage.
What leaders should do now — a practical playbook
If 2026 is a decision point, what concrete strategies should leaders adopt now to ensure they’re on the “strategy” side of the fork?
1. Redesign work, don’t just automate tasks. Map workflows end-to-end and ask where humans provide unique value — then use AI to remove drudgery and free time for those high-value activities. McKinsey’s research emphasizes that redesign yields the largest productivity gains.
2. Invest in human capital with measurable pathways. Offer retraining tied to real job pathways — internal mobility programs, apprenticeships, and partnerships with community colleges and bootcamps. Portable credits and employer-funded transitions reduce negative externalities.
3. Protect agency and dignity at work. Implement transparency rules when AI affects performance evaluation, scheduling, or job allocation. Worker representation matters; co-designing changes with employee input reduces friction and preserves morale. The ILO underscores the importance of governance and worker protections.
4. Help small businesses modernize affordably. Public-private partnerships, micro-grants, and shared-service platforms can lower the barrier for small firms to adopt AI for non-differentiating tasks, while reserving human attention for what customers value most. SBA and national chamber spotlights show the upside of targeted support.
5. Prioritize human-centered innovation. Fund R&D that pairs AI with human expertise — for creativity, care work, design, local services — rather than only efficiency projects. Companies that invest in this dual track are more likely to produce novel, defensible offerings.
6. Measure outcomes beyond cost. Track indicators of worker transition, re-employment rates, skill acquisition and customer wellbeing, not just short-term margins. The broader economy benefits when the social cost of transitions is managed.
“AI buys time. What we do with that time—cut people or cultivate possibility—will define this era of leadership.”
The moral economy: why business must think beyond quarterly signals
One reason this moment feels perilous is that corporate incentives can be misaligned with social resilience. Public markets reward rapid profit growth; short-term signals can crowd out long-term investments in human capital. But history offers a caution: economies that have weathered technological upheavals best are those that invested in people and institutions, not only in automation.
There is also a reputational calculus. Firms that lay off large numbers and make token reskilling promises risk burning trust with customers, employees and local communities. Conversely, firms that model human-centered transitions gain talent, loyalty and, often, innovative capacity. Markets, over the longer arc, remember where value is created: not simply in algorithmic efficiency but in human relationships and novel problem-solving.
The human edge: three examples of what “beyond AI” innovation looks like
To make “the human edge” concrete, here are three illustrative — and plausible — vignettes showing how people can innovate beyond what AI currently does:
A. The eldercare start-up that combines empathy with data
A small social enterprise uses AI to analyze remote-sensor data for early warning signs of elder health risks. But it pairs the analytics with a human-centered service: trained community navigators who visit homes to interpret signals, build trust, and coordinate culturally appropriate care. The AI spots patterns; humans deliver the humane touch that technology cannot automate. The result: better outcomes and a scalable, compassionate model.
(This vignette reflects trends in health-care digitalization and the premium on human coordination described in WEF and OECD analyses.)
B. The regional manufacturing co-op that upskills rather than offshores
A mid-sized manufacturer facing automation decides to retrain assembly-line workers as machine supervisors and quality curators. The company redesigns lines for mixed human-AI operations and partners with a regional technical college for credentials. Productivity rises; layoffs are limited; local economies benefit from retaining skilled roles that are now higher-value. This reflects McKinsey’s emphasis on reconfiguring workflows and investing in skill pathways.
C. The creative studio that treats AI as a collaborator, not a replacement
A boutique creative agency uses generative models to produce rapid prototypes and variants. But its competitive advantage is human curation: a team of strategists and artists refine, juxtapose and inject narrative and ethical frames that machines cannot reliably supply. The agency charges a premium for distinctiveness and deep strategic insight. This model aligns with academic findings that AI amplifies creative output but demands human curatorial judgment to preserve novelty.
The ethics of automation: fairness, transparency and contestation
Three ethical questions must be grappled with as AI reshapes labor:
1. Transparency: When algorithmic decisions affect hiring, firing, scheduling or performance metrics, workers must have recourse and understanding. Opaque automation of supervision is a recipe for mistrust and legal risk. The ILO has flagged these concerns in its briefs.
2. Bias and fairness: Models trained on historical data can replicate and amplify inequalities — a risk particularly acute when model outputs guide opportunities. Independent audits, human oversight, and governance frameworks are necessary.
3. Distribution: If capital disproportionately captures AI’s gains, inequality will worsen. Policy levers — taxation, incentives for human-centered deployment, and investments in public goods like retraining — can mitigate this. The OECD and WEF stress that active policy choices shape outcomes.
A practical checklist for workers, entrepreneurs and policymakers
For workers
Map your tasks: identify where your role overlaps with routine functions and where it requires judgment or social intelligence.
Invest in hybrid skills: domain depth + AI literacy + communication and collaboration.
Build networks: peer support, local training, and community colleges can be faster pathways than uncertain online gig promises.
For entrepreneurs and small-business owners
Start small with high-value AI use-cases (scheduling, inventory, marketing templates).
Protect what makes your brand unique: curation, relationships, and local context.
Partner with networks for shared AI tools and training to lower cost barriers.
For policymakers
Fund active labor-market policies and portable benefits.
Support small-business AI adoption with grants and shared services.
Establish transparency standards for workplace AI and worker representation mechanisms.
The cultural strain: identity, meaning and the future of work
Beyond economics and regulation, the conversation is about meaning. Work structures identity for many people; mass layoffs and deskilling can have deep social consequences. The best corporate strategies recognize this: when companies help workers find new roles internally, when they invest in lifelong learning, and when they treat employees as stakeholders rather than expendable inputs, they preserve not just human capital but social capital as well.
There is cultural creativity to be mined here. When human beings are freed from repetitive tasks, they can incubate new social roles and enterprises: community health navigators, local experience curators, neighborhood co-ops that remodel commerce around relationships rather than click-throughs. The future of business will be as much about cultural design as it is about algorithmic design.
The long view: a call for a strategic mind-set
If the upheaval of 2024–2025 was a warning bell, then 2026 is the testing ground. The evidence from global institutions — ILO, OECD, WEF — alongside reporting on layoffs and small-business adoption, points to a fundamental truth: the technology is not destiny. The economic path we choose will reflect policy, corporate strategy and civic will.
Strategy demands investment: in people, in institutions, and in governance.
Scramble is cheaper in the short run but costly over time: social stability, talent retention, and long-term innovation suffer.
The human ability to innovate beyond AI is the ultimate safeguard: creativity, context and moral judgment are what will keep businesses relevant and societies thriving.
If leaders, workers and citizens treat AI as a partner and not a cudgel — if they redesign work rather than simply displace it — the next era could be one of renewed human possibility. If not, we will be left to pick up the pieces of a scramble we could have prevented.
Final thought — a posture for turbulent times
In boardrooms and neighbourhood shops alike, the right question is rarely “Can AI do this?” but rather “What should we ask humans to do that AI cannot, and how will we pay for it?” That posture — prioritizing human judgment, dignified transitions and deliberate design — is not nostalgic. It’s strategic. It’s the way to turn the present disruption into a durable competitive advantage.
Business leaders who make that choice will not only preserve productivity; they will steward a future where economic value and human flourishing march forward together.
Sources (selected)
Tech industry reporting on layoffs and trends: TechCrunch — tech layoffs in 2025.
Layoffs trackers and aggregated data: TrueUp/Layoffs trackers.
Journalistic analysis on corporate layoff rationales and AI investments: AP News.
Small Business and Chamber reports on AI adoption: U.S. Chamber / SBA spotlights on small business AI (2025).
Global institutions and labor policy: ILO briefs on generative AI and jobs (2025 update); OECD AI and Work pages; WEF Future of Jobs Report 2025.
Management and workforce analyses: McKinsey Global Institute pieces on AI and work redesign.
Creativity and human-AI collaboration: BCG and academic white papers on AI and creativity; Berkeley legal review on human authorship and creativity.