Skip to content

Why Law Firms Should Think Twice Before Using AI Browsers

Use AI tools productively without exposing company data or losing oversight

Arthur Gaplanyan

AI Browser Risk

One of the latest AI tools to emerge is the artificially intelligent browser. The most popular ones are Perlexity Comet, ChatGPT Atlas, and Microsoft Edge with Copilot.

More than simple browsers with internet pages loading on them, they include assistants who help scan articles and complete tasks, and can even interact on the internet on behalf of the user without them having to do much.

That means conversational search, personalized experience, task automation, and workflow management. These all seem like great options, but there is one really good reason NOT to use an AI browser.

The Security Problem With AI Browsers

While an AI browser might sound a bit exciting, it carries with it some significant privacy and security risks. Here’s some issues you should be aware of.

Data Exposure Through Cloud Processing

Many of these browsers use their cloud-connected artificial intelligence to interpret content such as open documents, emails, and even monetary data sent to cloud-connected artificial intelligence systems to process and summarize data. This allows the possibility that if an employee is currently connected to certain clients through their use of the artificial intelligence sidebar feature, software outside of their security perimeter may interpret that data outside the security perimeter as well.

Autonomous Actions and Prompt Injection

Beyond data access, the tools are able to perform actions such as filling in forms, clicking on links, etc. More so, security researchers suggested how an attacker can inject information in web pages in the form of data so that the AI can read the information as an instruction to the browser.

Expanded Attack Surface

As a result, because the browser’s use of AI interprets the use of language as opposed to its use of code or graphics, the traditional security process does not address this type of issue, as do such things as prompt injection, which could manipulate a browser as a result of its use of natural language processing, which humans would not fall prey to. We’ve already seen this in the wild with the recent Copilot Reprompt attack where the initial safeguards were bypassed by embedding a second prompt along with the first.

Shadow Use and Policy Evasion

Although the IT group may have prohibited the usage of AI browsers, employees can still install and utilize these tools on their computers, thus exposing the organization to the potential risks of Shadow AI (“shadow” referring to the use of unapproved tech).

The interconnected risks of all of these mean that AI browsers are not suited to areas where confidentiality and compliance are non-negotiable, like legal practices.

Best Practices Your Firm Can Adopt

It’s not necessary to steer clear of it completely, though. Using these artificial intelligence browsers without proper policy is reckless. The way to go about it is as follows:

1. Establish a Clear Security Policy

Specify the approved list of web browsers as well as the list of web browsers not allowed. Make sure to include AI-driven web browsers in the list of prohibited software as well, unless permitted through the risk assessment process.

2. Centralize Configuration and Control

Working with your IT provider to restrict access to AI browsers at the network level can be beneficial. Disable the ability of AI browsers to be integrated with the internal portal through the “agent” or “sidebar” feature.

3. Train Your Team

Educate your staff on the risks associated with AI tool handling of sensitive content. Explain to them that anything on a screen can potentially be read or routed elsewhere if they’re using an AI assistant by default on their browser.

4. Limit Use Cases to Low-Risk Scenarios

If involved in piloting AI browsers, they should only be used for less sensitive tasks (for example, for public legal research) and not for client information gathering, client billing systems, or email systems.

5. Monitor for Shadow AI

Audit devices and software installations on a regular basis. Check for any unapproved use of an AI browser and implement policies to uninstall this software, especially for devices that belong to partners and contractors.

The Final Verdict

Understandably, the innovation of AI browsers may be interesting technology for the legal profession, but it is still not considered secure enough for the legal profession to use it for sensitive information.

The question to ask yourself before delving deeper into the use of these tools within the firm is to consider if the firm’s policies have taken the necessary steps to adopt such technology.

// Chat Widget