11 AI Signals Experts Watch in 2026

UC Berkeley researchers list the technical, legal and social signals they'll be watching in 2026 — from datacenter economics and robot dexterity to deepfakes, chat‑log privacy and worker rights.

Campus experts lay out a watchlist for a decisive year

On January 15, 2026, UC Berkeley researchers published a compact, practitioner-level forecast: eleven concise developments they expect to shape the year ahead, each anchored in research labs, courtrooms or the policy machinery of state and federal government. The list reads less like speculative wish-casting and more like a traffic report for the parts of society already being reconfigured by large-scale generative AI: money flows, legal fights, new harms and new possibilities for discovery.

Where the money goes: bubble talk and datacenter demand

One theme ran through several experts' notes: the economics of compute. Some researchers warned of a classic technology bubble, with huge capital commitments to data centers and chips predicated on continued rapid gains in model capability. At the same time, analysts and industry trackers see a sustained, near-term surge in investment: chipmakers and foundries are building capacity on the premise that data center demand will remain high in 2026. That tension, between the expectations baked into corporate and investor balance sheets and the realistic pace of algorithmic progress, will determine whether the coming year looks like steady expansion or a painful correction.

Privacy under pressure: chat logs and courtroom orders

Several UC Berkeley experts singled out the legal and practical fallout from a series of high-profile cases that have forced AI companies to retain user logs they previously allowed users to delete. In civil litigation, judges have issued preservation orders compelling companies to keep chat outputs and related metadata, a move privacy advocates call unprecedented because it can override platform deletion controls and long-standing privacy expectations.

The effect is practical and political: engineers must change data‑retention systems, enterprise customers rethink what they send to third‑party models, and lawmakers and courts must resolve whether litigation needs trump routine privacy protections. The controversy has already sparked appeals and a broader debate over how discovery rules in copyright and criminal cases intersect with consumer data practices.

Misinformation and authenticity: deepfakes at scale

Researchers at Berkeley warn that 2026 will likely be the year deepfakes stop feeling like a novelty and become routine tools for influence, used in political manipulation, fraud and intimate-image abuse. The speed and quality of audio and video synthesis, combined with easy distribution on social platforms, mean the cost of producing plausible fakes has fallen dramatically, while the cost of debunking them remains high.

Policy responses have multiplied: state and federal proposals and enacted rules now push for provenance metadata, public detection tools and platform obligations to label or take down manipulated media. California has passed a raft of measures aimed at transparency and content provenance, and national conversations about mandatory provenance standards are moving faster than in prior technology waves. Those regulatory shifts will be part deterrent, part infrastructure change; they also raise technical questions about how to attach tamper-resistant provenance to billions of pieces of media.
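The engineering question is concrete even where the policy is not. As a rough illustration only (this is not the C2PA specification or any vendor's actual pipeline, and the function names and metadata fields are invented for the example), provenance schemes generally bind a signed manifest to the media bytes, so that a later edit to either the content or the claimed metadata breaks verification. A minimal Python sketch, assuming the widely used cryptography package for Ed25519 signatures:

# Illustrative sketch only: bind a provenance manifest to a media file by
# signing a hash of the bytes together with the claimed metadata. Real
# standards embed signed manifests in the file itself; names here are invented.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media_bytes: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    """Return a manifest whose signature covers the media hash and the metadata."""
    payload = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g. capture device, generator model, edit history
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(canonical).hex()}


def verify_manifest(media_bytes: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Check that neither the media bytes nor the claimed metadata were altered."""
    payload = dict(manifest["payload"])
    if payload["content_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was modified after the manifest was issued
    canonical = json.dumps(payload, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), canonical)
        return True
    except InvalidSignature:
        return False  # metadata altered, or signed by a different key


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"\x89PNG...stand-in image bytes for the example"
    manifest = make_manifest(media, {"generator": "example-model-v1"}, key)
    print(verify_manifest(media, manifest, key.public_key()))               # True
    print(verify_manifest(media + b"tamper", manifest, key.public_key()))   # False

Real deployments face the harder problems the experts flag: key management at platform scale, manifests that survive re-encoding and cropping, and getting billions of devices and editing tools to participate at all.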

Workplace power and digital surveillance

Several Berkeley experts focused on the workplace: algorithmic management is maturing into systems that can hire, rate and fire with minimal human oversight, while targeted monitoring tools claim to measure traits like "charisma" or attention. Labor advocates and researchers worry these systems will entrench bias and reduce workers' bargaining power, particularly where the metrics are opaque and appeal routes are limited.

At the same time, unions and legislators are beginning to write the playbook for worker tech rights: requirements for human oversight of consequential decisions, limits on pervasive monitoring, and transparency about how algorithmic assessments are built and used. How rapidly these protections translate into law — and whether enforcement mechanisms keep pace with deployment — is one of the year’s key labor policy stories.

Companion bots, young users and mental‑health ripple effects

Berkeley researchers flagged a fast-moving ethical problem: conversational agents marketed as companions or tutors are expanding their user base to include teenagers and even toddlers. Early evidence suggests heavy use of relationship bots can correlate with increased isolation among young people, and there are open questions about developmental impacts if children learn social norms from sycophantic agents rather than from other humans.

Policy responses here are experimental: age restrictions, platform design guidelines, and new product standards are being proposed — but commercial incentives for engagement remain strong. Expect heated debates in 2026 about whether voluntary safety practices are enough, and which protections should be statutory.

Robots, dexterity and the limits of physical AI

Another recurring note: a mismatch between the rapid gains in large language models and the harder engineering problem of bringing that intelligence into the physical world. Humanoid and mobile robots are improving, but practical manipulation — the kind you need in a kitchen, on a construction site, or in a mechanic’s garage — demands data and control approaches far different from text‑only training.

Researchers are watching for breakthroughs that close this data gap: new simulation‑to‑real transfer methods, self‑supervised tactile learning, and large public datasets of robot interactions could accelerate fielded capability. Until then, claims that robots will replace broad swaths of human labor remain premature.

Political persuasion, neutrality and unexpected biases

Several Berkeley scientists pointed to the political dimension: as models are deployed in civic contexts and as private systems are used to draft messaging or summarize policy, the question of what constitutes political neutrality becomes urgent. Regulators have begun to demand transparency from vendors that contract with government, but there is no clear, operational definition of what it means for an AI to be "politically neutral."

Missteps in model behavior can be subtle and systemic; the community will spend 2026 hashing out definitions, measurements and contractual safeguards for politically consequential systems.

A pragmatic, narrow watchlist with global stakes

The Berkeley list is notable for its mix: short, often technical items that nevertheless map directly onto politics, economy and civic life. That combination makes 2026 less a year for single headline breakthroughs and more a season of institutional tests — courtrooms deciding how discovery interacts with privacy, legislatures designing provenance regimes, unions litigating algorithmic harms, and engineers trying to make physical robots learn as fluidly as text models do.

Taken together, the items on UC Berkeley’s list are a reminder that AI’s future will be built in policy offices and factory floors as much as in research labs. Tracking these signals — what companies build, what laws courts enforce, what civil society demands — will be the best short‑term predictor of how the technology actually reshapes lives in the months ahead.

Sources

  • UC Berkeley (Berkeley News feature: "11 things AI experts are watching for in 2026")
  • UC Berkeley Center for Human‑Compatible AI and UC Berkeley Labor Center (expert commentary)
  • U.S. District Court for the Southern District of New York (preservation order in New York Times v. OpenAI — court filings and related legal records)
  • Industry analyses of datacenter and semiconductor demand (public filings and sector reports referenced in trade coverage)
Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany