Artificial intelligence is bringing a new wave of jargon. To use AI tools effectively and understand the regulations shaping them, lawyers must first grasp the fundamental concepts underpinning the technology. This article aims to demystify some of those buzzwords and to provide an overview of the practical limitations that matter most to the legal profession.

Core concepts: what lawyers need to know

To start, it is essential to distinguish between the two types of AI. Predictive AI analyses existing data to find patterns and predict outcomes, and has been used for years in applications such as e-discovery. In contrast, Generative AI is designed to produce something new, such as a brief, a podcast, a short video or an infographic. You have likely heard the term Large Language Model (LLM) used as a synonym for Generative AI, but the technology has evolved. Most current models are multimodal, meaning they can handle not just text but also images, video, charts, code and audio.

These all-encompassing AI models are often referred to as foundation models because they serve as the base upon which developers build various applications. The most advanced, state-of-the-art versions are known as frontier AI. These complex models utilise vast computational power, which is why you may have heard regulators discuss compute. Legislators, needing a yardstick for these processing resources, have focused on FLOPs (floating-point operations) as their primary measure. You can think of it as the total number of calculations required to build the model. This metric has been adopted as a threshold for identifying the models subject to the closest regulatory scrutiny.
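To make the idea of a compute threshold concrete, the comparison regulators perform can be sketched in a few lines of Python. The 10^25 figure mirrors the training-compute threshold at which the EU AI Act presumes systemic risk; the model names and training-compute figures below are entirely hypothetical, since real figures are rarely disclosed.

```python
# Illustrative sketch: comparing a model's training compute against a
# regulatory FLOP threshold. The 1e25 figure mirrors the EU AI Act's
# presumption-of-systemic-risk threshold; the models below are
# hypothetical examples, not real disclosures.
EU_AI_ACT_THRESHOLD_FLOPS = 1e25

def within_compute_threshold(training_flops, threshold=EU_AI_ACT_THRESHOLD_FLOPS):
    """Return True if training compute meets or exceeds the threshold."""
    return training_flops >= threshold

hypothetical_models = {
    "large frontier model": 3e25,   # hypothetical figure
    "efficient local model": 5e23,  # hypothetical figure
}

for name, flops in hypothetical_models.items():
    status = "within" if within_compute_threshold(flops) else "below"
    print(f"{name}: {flops:.0e} FLOPs -> {status} the compute threshold")
```

The point of the sketch is how crude the yardstick is: a single number stands in for capability, which is precisely the assumption the next paragraph questions.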

This legislative approach rests on the assumption that ‘more calculations equals more capabilities which equals more risk’. Although generally true today, this premise is becoming insufficient for measuring risk as technical advancements help build models that are ‘smarter’ without getting much ‘bigger’. For instance, you may have heard about agents or that AI is becoming agentic. This means a model does not just answer questions; it can execute tools to complete workflows. Imagine a situation in which a smaller model is efficient enough to run locally on a laptop, but capable enough to navigate the necessary websites to register a company, check for conflicts, download and populate the IN01 form, draft the Articles, calculate any registration fees and present you with a final package or validation screen. Such a model could fall below strict regulatory compute thresholds, yet these functionalities can still carry risk. This shows that risk is a function of autonomy and application, not just computational resources.

Now, it is vital to make a distinction: when we speak about AI, we are often talking about two different things in the same breath. On one hand, you have the underlying model itself – the powerful computational resource we have described – and on the other, you have the system or application layer: the software built around the model, which provides the user interface or the specific tool you interact with directly. A helpful way to conceptualise this is to think of the model as the engine and the system as the car. For example, your chambers or law firm might use a specific legal research tool (this would be the ‘system’) that leverages a well-known AI model such as Google’s Gemini or Anthropic’s Claude.

Practical issues: managing generative AI in your legal practice

Beyond the theory, legal professionals will face three practical issues when using generative AI tools. The first and most widely known challenge is reliability. Output can be unreliable for several reasons, but you have probably heard of hallucinations, where an AI tool confidently invents facts, cases and citations. A different concept, less familiar to legal professionals, is the knowledge cut-off date. This date marks the end of the model’s internal knowledge base, meaning the model is unaware of events or information that have emerged since then – akin to using an early edition of a jurisprudence textbook. Knowing the cut-off date helps you understand the limitations a tool may have when using it for recent topics.

Technical methods exist to address these challenges. For instance, a model may have access to real-time information via tools such as web search. Using Retrieval-Augmented Generation (RAG), a tool can connect to an external knowledge base – a legal database, say, or your organisation’s own documents – and use that information as context to produce a more accurate answer. This anchoring of answers to information outside the model’s internal knowledge base is called grounding. Many systems also provide a citation back to the source on which an answer is grounded. A prominent example in practice is the tool ‘NotebookLM’, which grounds its output specifically on the sources given by the user to ensure the analysis remains within the provided context. For lawyers these concepts are valuable because RAG and grounding can help with outdated knowledge. However, it is important to remember that a model may still misinterpret the source text it is grounded in, make up citations or attribute real quotes to the wrong sources.
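The mechanics of RAG can be sketched in a deliberately naive way: retrieve the most relevant passages, then place them into the prompt as context. In the Python illustration below, the two ‘documents’, the keyword-overlap retrieval and the prompt wording are all invented for demonstration; a real system would use a search index or vector database and send the assembled prompt to a model.

```python
# Minimal RAG sketch: retrieve relevant passages from a tiny in-memory
# "knowledge base", then assemble them into a grounded prompt. The
# documents and retrieval method are illustrative stand-ins; no real
# legal database or AI model is involved.
documents = {
    "practice_note.txt": "Claims under the scheme must be filed within 6 years.",
    "office_memo.txt": "The filing fee increased to 250 pounds in 2024.",
}

def retrieve(query, docs, top_k=1):
    """Naive keyword-overlap retrieval (illustrative only)."""
    q = {w.strip("?.,") for w in query.lower().split()}
    def score(text):
        return len(q & {w.strip("?.,") for w in text.lower().split()})
    ranked = sorted(docs.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(query, docs):
    """Prepend the retrieved sources so the model can cite them by name."""
    sources = retrieve(query, docs)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    return (f"Answer using only the sources below, citing them by name.\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

print(build_grounded_prompt("What is the filing fee?", documents))
```

Even in this toy version, the limitation noted above is visible: the model only ever sees the retrieved text, so a poor retrieval step or a misreading of the passage still produces a confident but wrong answer.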

A second practical element is system instructions. These instructions act as the AI’s ‘practice direction’, defining the rules that guide the model’s behaviour, its role and its style of response. Developers or firms can use these instructions to implement safety guardrails or enforce specific guidelines – for example, instructing the AI to ‘always adopt a formal tone’ or ‘never express a legal opinion’.
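To see where such an instruction actually sits, many chat-style AI systems assemble the system instruction together with the user’s message before anything reaches the model. The Python sketch below mirrors that common list-of-messages convention; the instruction text and the helper function are hypothetical, and no specific vendor’s API is shown.

```python
# Illustrative sketch of how a system instruction is supplied alongside
# a user's message. The list-of-messages structure mirrors a convention
# used by many chat model APIs; no real API is called here.
SYSTEM_INSTRUCTION = (
    "You are a drafting assistant for a law firm. "
    "Always adopt a formal tone. Never express a legal opinion."
)

def build_messages(user_query):
    """Assemble the conversation the model actually receives."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Summarise the attached witness statement.")
print(messages[0]["role"], "->", messages[0]["content"])
```

Note that the user never types the first message: the firm or developer fixes it in advance, which is what makes system instructions a governance tool rather than just a prompt.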

Finally, every lawyer must understand the context window. This is simply the AI’s active memory for a single conversation. If you are working on a complex case and your bundle is larger than this context window, the AI will start forgetting the beginning of the conversation before it reaches the end. Importantly, everything in the conversation counts toward this limit: the documents you upload, your questions and the AI’s answers.
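The ‘forgetting’ effect can be illustrated with a toy sliding window. In the Python sketch below, ‘tokens’ are crudely approximated by word counts and the window is tiny; real systems use proper tokenisers and far larger limits, but the principle – the oldest turns are dropped first once the limit is exceeded – is the same. The conversation snippets are invented.

```python
# Sketch of why long conversations 'forget' their beginning: when the
# running total of tokens exceeds the context window, the oldest turns
# are dropped. Tokens are approximated here by word counts, and the
# window is deliberately tiny for illustration.
CONTEXT_WINDOW = 20  # illustrative limit, in 'tokens'

def fit_to_window(turns, limit=CONTEXT_WINDOW):
    """Keep the most recent turns whose combined length fits the window."""
    kept, total = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        size = len(turn.split())  # crude token estimate
        if total + size > limit:
            break                 # everything older is forgotten
        kept.append(turn)
        total += size
    return list(reversed(kept))   # restore chronological order

conversation = [
    "Client uploaded the bundle of exhibits A to F.",  # oldest turn
    "Please summarise exhibit B.",
    "Here is the summary of exhibit B as requested.",
    "Now compare exhibit B with exhibit C.",           # newest turn
]
print(fit_to_window(conversation))
```

Running this drops the oldest turn – the one establishing what the bundle contains – which is exactly the failure mode described above when a bundle outgrows the window.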

Understanding AI will increasingly be a sign of competence

We have established that these models can be powerful engines, but a car still requires steering. This primer has focused on core technical concepts underpinning AI and how it works – demystifying terms across a gradient of complexity, from ‘multimodality’ to ‘RAG’ and ‘system instructions’ – to provide an approachable technical baseline. Many other substantive areas merit attention for our profession, but they cannot be properly addressed without some foundational knowledge. 

Understanding some of the basic technical terms will increasingly be a sign of competence, but more importantly it can empower lawyers not to fall into the alluring trap of passive reliance. An AI system can generate a skeleton argument, but only you understand the strategy, the client’s risk appetite and the nuance of the law. The tool may build the draft, but your judgement builds the case.