
Using AI at Work? Your Firm Needs Clear Rules

Put guardrails around AI for efficiency gains without risk

Arthur Gaplanyan

AI Governance

Every day, people use AI tools to compose emails, summarize notes, organize research, edit writing, compare documents, and speed up routine office work. AI tools are here and in use regardless of how you feel about them, which is exactly why they require clear policies and governance.

According to Netskope’s 2026 Cloud and Threat Report, 94% of organizations now use generative AI apps, and business use has accelerated sharply: the number of users tripled year over year, and prompt volume rose significantly as well, reaching 18,000 prompts per month at the average organization.

What makes this significant is that adoption appears to have outpaced governance. The report finds that 47% of generative AI users are using personal AI apps rather than firm-sanctioned ones.

AI tools are part of daily work life now, but governance lags behind adoption. In practice, that means organizations do not know which tools their staff are using, which accounts they are using them under, or what information is actually typed or pasted into those tools.


The Risk Scope

The risk is not hypothetical. The number of incidents in which users send sensitive information to AI apps has doubled over the last year, with the average organization experiencing 223 such incidents per month. The report also finds that 60% of insider threat cases involve personal cloud app instances, and that regulated data, intellectual property, source code, and credentials are routinely sent to personal apps in violation of policy.

The risk is real, and it is compounded by the visibility gap created by “shadow AI” tools.

Applied to your firm

No matter what area of law you practice, the issue is obvious. Your attorneys and staff handle a wide range of private and personal data, from client communications to case management systems. Once any of that data is fed into an unmanaged AI tool, there is no way to know where it will go, who will be able to access it, or whether it will be stored or auditable in the future.

The California State Bar’s “Practical Guidance” warns that generative AI tools may use prompts and uploaded documents to train their systems and may share user inputs with third parties. The ABA’s Formal Opinion 512 holds that competence, confidentiality, communication, and supervision all must be considered when lawyers work with generative AI tools.

This creates both an operational problem and an ethical one. In a firm setting, the person who uses an AI tool to draft an email may later use it to summarize a portion of a deposition transcript, compare contract language, or condense notes from a client intake interview. Without guidelines, these activities become indistinguishable from one another. An employee may treat a public AI tool as a harmless assistant even when the prompt includes confidential information, personal data, litigation strategy, or metadata copied from a document management system. A partner may assume the firm’s Microsoft or case management license includes safe AI controls, while a staff member is actually working from a personal account outside the firm’s control.

AI Policy

The answer, of course, is a clear policy. Not a vague “be careful with AI” kind of policy, but a real one that tells people which apps are approved, what they are allowed to do with them, what data must never go into an AI tool, when they must review the output, who is responsible for procuring new tools, and what to do when an attorney or staff member wants to try a new app.

A good AI policy should also address client confidentiality, levels of access, vendor review, account ownership, records retention, citation and verification, and supervision of non-lawyer staff.

California’s State Bar guidance and ABA Opinion 512 both support this approach: each treats the use of AI as part of a lawyer’s existing professional obligations, not as a separate category of novelty. Governance of AI use is only effective if the firm sets rules before ad hoc use becomes the norm, so that AI serves as a productivity tool rather than an avoidable risk.
