AI browsers: promising, but not yet better

New AI-powered browsers promise to surf the web for you, but tests and expert voices show they still need heavy prompting, human oversight, and better infrastructure before they can replace everyday browsing.

A hopeful technology stuck in a messy middle

This week several companies rolled out or demoed versions of what their executives call the next browser: chat-driven interfaces that either sit beside a regular tab bar or replace your search box with an agent that can read pages, summarize documents, and—if given permission—take actions like adding items to a cart. The pitch is alluring: tell an AI what you want and let it handle the rest. It sounds like cheating at modern life. In practice, hands-on tests across a half-dozen agents show that the reality is more fussy, more human, and far less hands-off than the marketing implies.

A closer look at what works

When asked to do clear, bounded tasks—summarize a legal paragraph, extract table-ready specs from a product page, or list the pages in a long PDF—AI helpers often speed things up. They are useful as in-page copilots: highlight the dense clause in a medical study and the model will rephrase it in plainer language; open a dozen tabs of phone comparisons and the assistant will collate battery, weight and size into a short table. That is where the new browsers deliver immediate value. They reduce tab-juggling and lower the friction of glancing between sources.

But where AI browsers struggle is the thing that would make them genuinely transformative: complex, open-ended tasks that require judgment about trust, context and priorities. Ask a model to sort an inbox for urgency and relevance, and it will happily prioritize keyword-heavy marketing pitches over the subtle threads that a human would flag. Ask it to shop for a very specific pair of shoes that must meet many personal constraints, and you end up coaching the assistant through many clarifying prompts before it produces something comparable to what an experienced shopper would pick in a fraction of the time.

Why agents still need babysitters

There are other failure modes. Agents conflate keyword density with truth, elevating poorly sourced pages that use the right buzzwords. They obey site constraints—copyright notices or technical blocks—inconsistently: sometimes refusing to extract a YouTube transcript on copyright grounds, sometimes pasting the entire text into chat. And even when an agent can perform actions, such as adding items to a cart, the confidence bar for handing full control over to software is high. Mistakes in e-commerce, scheduling or account access have real-world consequences.

Design, standards and power in the stack

These problems are not only technical. They are architectural and economic. If agents are to browse and act at scale, the web needs clearer rules about who may crawl what data, and under what terms. Today the instruments are a patchwork: robots.txt files, content-delivery networks that can throttle crawlers, and commercial deals that gate access behind API keys or paywalls. That creates asymmetry: a handful of infrastructure companies and platforms have leverage to decide whether an AI can extract the data it needs, and at what cost.
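The oldest instrument in that patchwork is the Robots Exclusion Protocol (robots.txt, standardized as RFC 9309), which lets a site state per-crawler rules but carries no enforcement or licensing terms. A sketch of what such a file looks like, with an illustrative site and crawler names:

```text
# robots.txt at example.com; advisory only, compliance is voluntary
User-agent: GPTBot        # one AI vendor's crawler
Disallow: /               # asks this crawler to stay out entirely

User-agent: *
Disallow: /private/       # all other crawlers: skip this path
Crawl-delay: 10           # non-standard, but a widely honored throttle hint
```

Nothing in the protocol expresses payment, attribution or reuse terms, which is exactly the gap the standards proposals discussed below try to fill.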

Some technologists argue that the answer is standards and interoperability—data wallets, agent-aware payment rails and machine-readable policies that would let a site express when and how a crawler may use its content. Others say the market will enforce solutions: if AIs disintermediate commerce, new micropayment flows or API agreements will emerge to remunerate publishers and services. But history warns that market incentives do not automatically produce fairness.

Big companies, different bets

Not all browser builders are trying to do the same thing. Some vendors add a chatbot to a familiar Chrome or Edge shell so that Copilot can open tabs and read them aloud; others build an AI-first interface that replaces the search bar. The strategies matter. An AI that runs in the cloud can access more compute and models, but it collects more user credentials and data centrally. A more local-first approach keeps personal data on-device but is limited by local compute capacity.

Executives at major platform firms have framed these choices as trade-offs between convenience, privacy and control. Some imagine a future where your personal agent holds your preferences and personal history in a private store, then negotiates with services on your behalf. That requires technical primitives for secure data wallets and a commerce layer designed for agents. It also requires either voluntary cooperation among platform owners or regulatory nudges to make interoperability standard rather than optional.

Voices from the field

Leading web architects and browser builders see both threat and opportunity. The inventor of the web has argued for open, interoperable systems so that agents can act in users' interests rather than merely for platform owners' advantage. Meanwhile, AI teams at major browser vendors talk about an "agentic" browser that uses the same tools a human would use—the address bar, tabs, forms—but automates repetitive tasks. The tension is visible: open-web proponents want standards and user sovereignty; platform companies are racing to bake agents into their own stacks.

There is also a human side to adaptation. People who rely on assistive technology often cobble together devices and hacks to make systems work for them. That same pragmatic creativity will shape how ordinary users adopt agentic browsing: some will welcome a concierge-like assistant that does the heavy lifting of research; others will prefer granular controls and transparent logs of activity.

A practical roadmap: modest goals, big changes

For AI browsers to become truly better than humans at "surfing" the web, the industry needs progress on several fronts. First, models must become more consistent at judging credibility and verifying facts across multiple sources—something that will require better retrieval and provenance tools. Second, the web's infrastructure should offer clearer, machine-readable signals about data use and cost, so agents can negotiate access without breaking publishers' business models. Third, privacy-friendly architectures—local inference, data wallets and agent-aware payment rails—must move from experimental demos to common practice.

That is a long list. But the current crop of AI browsers, messy as it is, represents an important set of experiments. They highlight the chores and cognitive work of modern browsing that an assistant can reduce—compiling tables, paraphrasing dense passages, finding the right page in a long PDF. They also expose the gaps: when an assistant has to make a judgment call, human care is still required.

Where that leaves everyday users

If you were hoping to fire up an AI browser, say a few words and never touch the keyboard again, that day is not here. For now, AI browsers are best treated as specialized tools inside the larger browser toolbox: excellent at narrowing things down and explaining complexity, not yet reliable enough to take full control of your online life. They will change how we work online—but the change will be iterative, a negotiation between engineers, publishers, regulators and users about how data, value and trust flow across the web.

In other words: promising is not the same as proven. The browsers of the future may well be better than we are at some forms of surfing—but first they have to get better at listening, explaining, and playing fair with the rest of the web.

Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany