For law firm partners, practice managers, and legal marketing leads — any jurisdiction.
The question nobody is asking
Across the legal profession, in every jurisdiction where AI has become impossible to ignore, regulators are moving in the same direction. The American Bar Association has published formal guidance on lawyers’ duties when using AI tools. The Law Society of England and Wales has issued practice notes on AI in legal services. The Law Society of Ontario has addressed competence obligations. The Australian legal profession’s governing bodies have weighed in on disclosure, accuracy, and supervision. The EU’s AI Act creates compliance obligations that touch legal practice directly.
Every piece of regulatory guidance addresses the same set of concerns: accuracy, supervision, disclosure, confidentiality, competence. How AI should be used. Under what conditions. With what safeguards.
None of it addresses this: when a potential client asks an AI system who to call, does your firm appear?
The regulatory conversation is about how lawyers use AI. The commercial conversation is about how clients use AI to choose lawyers. Those are different systems. Different questions. Different risks. And the profession is only having one of them.
What is already happening
Regardless of jurisdiction, regardless of practice area, regardless of firm size — potential clients are using AI tools as a starting point when they need legal help. Someone facing a criminal charge in Manchester searches for representation. Someone dealing with an employment dispute in Chicago asks for recommendations. A business owner in Sydney needs commercial property advice. A family in Toronto is navigating an estate dispute.
They are not, in most cases, starting with a directory. They are asking a question — of ChatGPT, of Perplexity, of Gemini — and receiving a shortlist. Three firms. Maybe four. Each described with enough confidence to act on. Those firms get researched. Their websites get visited. Calls get made. The firms that don’t appear on that shortlist don’t enter the process. Not ranked lower. Not found less easily. Simply absent from a decision that has already been partially made.
This is not a prediction. It is a description of behaviour that is measurable now — in search console data, in the pattern of AI Overview impressions, in the experience of firms that are already tracking how new clients find them.
Why the regulatory focus creates a blind spot
Legal regulators are responding to genuine and serious concerns. AI tools used carelessly in legal work create real risks — to clients, to the administration of justice, to the standing of the profession. The regulatory attention is warranted.
But regulatory frameworks, by their nature, focus on what professionals do. They govern conduct, obligations, and standards of practice. They are not designed to address market positioning, business development, or commercial discoverability.
The result is that the conversation dominating law firm strategy meetings right now is about AI governance — policies, risk committees, approved tools lists, disclosure protocols. These are important conversations. But they are consuming the attention that might otherwise go toward the question that will determine which firms are growing five years from now.
Compliance does not equal discoverability. Governing your AI use does not improve your AI recommendation eligibility. They are unrelated.
The universal gap across jurisdictions
The specifics vary but the pattern holds everywhere. In the United States, the ABA’s formal guidance on AI in legal practice covers competence, confidentiality, and supervision. What it does not address is the entity authority, external corroboration, and structured positioning that determine whether a firm appears when someone asks an AI tool for a recommendation.
In England and Wales, the SRA has published guidance covering accuracy, transparency, and client care. It does not address whether a firm’s identity is consistent across the SRA register, legal directories, Google Business Profile, and the other sources AI systems draw from when deciding who to name.
In Canada, provincial law societies have addressed AI competence and disclosure. The question of whether a firm’s practice area positioning is specific enough and consistent enough to survive AI comparison against competitors has not been touched. In Australia, state-level governing bodies have weighed in on AI and professional obligations. The structured data, schema markup, and citation architecture that determine AI recommendation eligibility sit entirely outside their frame of reference.
The gap is not a failure of regulatory thinking. It is a category error — applying a conduct governance lens to what is fundamentally an infrastructure and discoverability problem.
What recommendation-eligible law firms look like
The firms that appear consistently in AI responses for legal queries have — usually without intending to — built the right signals. The AI system is not evaluating whether a firm is good. It is evaluating whether the firm is safe to name.
Recognition. The firm exists as a clear, consistent entity across every source the system checks. Name, location, practice areas — consistent across the website, the regulatory registration, legal directories, and third-party platforms. In the UK that means the SRA register, Legal 500, Chambers, the Law Society directory. In the US it means Avvo, Martindale-Hubbell, Justia, the state bar listing. The principle is identical: consistent identity across credible sources reduces the risk of naming you.
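That consistency check can be made concrete. The sketch below, using entirely hypothetical listing data, normalises a firm's name and location as they appear across different sources and flags any field where the sources disagree; a real audit would pull live data from the website, regulatory register, and directories.

```python
# Illustrative sketch: flag inconsistencies in a firm's identity
# across the sources an AI system might check. All records here
# are hypothetical placeholders, not real listings.
import re

def normalise(value: str) -> str:
    """Lower-case, strip punctuation, and collapse whitespace so
    'Smith & Co. Solicitors' and 'smith & co solicitors' match."""
    value = re.sub(r"[^\w\s&]", "", value.lower())
    return re.sub(r"\s+", " ", value).strip()

def audit_identity(records: dict) -> dict:
    """Map each field to the distinct normalised values found across
    sources; any field with more than one value is a mismatch."""
    variants: dict = {}
    for record in records.values():
        for field, value in record.items():
            variants.setdefault(field, set()).add(normalise(value))
    return {field: vals for field, vals in variants.items() if len(vals) > 1}

listings = {
    "website":         {"name": "Smith & Co Solicitors",  "city": "Manchester"},
    "sra_register":    {"name": "Smith & Co. Solicitors", "city": "Manchester"},
    "legal_directory": {"name": "Smith and Company",      "city": "Manchester"},
}

mismatches = audit_identity(listings)
for field, values in mismatches.items():
    print(f"Inconsistent {field}: {sorted(values)}")
```

Here the city is consistent everywhere, but the name survives in two distinct forms — exactly the kind of divergence that makes an entity harder for an AI system to resolve with confidence.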
Validation. Legal directories carry particular weight because they are already trusted reference points for AI systems evaluating professional services. A firm that appears in Chambers or Legal 500 or their jurisdictional equivalent carries more corroborative authority than one that relies solely on its own claims. Press coverage, professional body memberships, court judgments referencing the firm — each independent confirmation makes the recommendation safer to give.
Selection. Generic descriptions — ‘full service firm,’ ‘high quality legal advice,’ ‘experienced team’ — give the system nothing to match against a specific query. Practice area specialisms need to be stated explicitly, consistently, and in the language clients use when looking for help.
Citation. The information a potential client needs — what the firm handles, who it helps, what to expect — needs to be extractable by machines, not just readable by humans. FAQ sections structured around real client questions. Schema markup. Clear, direct answers rather than flowing prose that buries the key facts.
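As a sketch of what 'extractable by machines' means in practice, the snippet below builds the JSON-LD a firm page could embed. The schema.org LegalService and FAQPage types are real; the firm details, practice areas, and question are placeholders, and exact markup should follow schema.org and search engine structured-data guidelines.

```python
# Minimal sketch of JSON-LD structured data for a firm page, so that
# machines can extract identity and FAQ answers directly. schema.org's
# LegalService and FAQPage types are real; all firm details below are
# hypothetical examples.
import json

legal_service = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Smith & Co Solicitors",  # should match directory listings exactly
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Manchester",
        "addressCountry": "GB",
    },
    "areaServed": "Greater Manchester",
    "knowsAbout": ["Employment law", "Commercial property"],
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you offer a free initial consultation?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. The first 30-minute consultation is free.",
            },
        }
    ],
}

# Each object would be embedded in a <script type="application/ld+json">
# tag in the page's HTML.
print(json.dumps(legal_service, indent=2))
print(json.dumps(faq_page, indent=2))
```

The direct, one-sentence answer text is deliberate: it gives an AI system a fact it can quote, rather than prose it has to interpret.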
The practical implication
A firm that is doing the right things on compliance, whether under ABA, SRA, Law Society, or any other regime, but has not addressed its recommendation eligibility is governing its AI use carefully while quietly losing client acquisition ground. The competitors gaining that ground may not have thought about the compliance question at all; they simply happen to have stronger entity signals.
The firms that will look back on 2026 as the year they got ahead of this are the ones treating AI recommendation eligibility as the infrastructure question it actually is. Not a marketing initiative. An infrastructure question: how does the system our potential clients are already using to shortlist law firms understand us — and what would need to change for it to understand us more clearly?
The legal profession can regulate how lawyers use AI. It cannot regulate how clients use AI to choose lawyers. That gap is yours to close. Or not.
If you are not on the shortlist, you are not in the market.
This article is a companion piece to the main guide: How to Make LLMs Recommend Your Business. Take the AI Recommendation Readiness Diagnostic to find your firm’s primary bottleneck.