Mustafa Suleyman, chief executive of Microsoft AI, claimed recently that artificial intelligence is approaching “human-level performance” on “most, if not all” professional tasks. He projected that most white-collar work tasks “will be fully automated by an AI within the next 12 to 18 months”.
Suleyman’s comments are just the latest in a long line of similar predictions, which pose confronting questions for professionals, those studying to become professionals, and those wondering what to study at university, or whether to attend at all.
Legal professionals are frequently listed among those whose careers are most vulnerable to AI. So, is law school still worth it? We believe it is. While AI is reshaping worlds of work, and the legal profession will change too, broader conditions also intensify uncertainty: geopolitical conflict, economic strain, and climate stress.
Law graduates nonetheless leave university with something powerful: an adaptable human skill set that is becoming more, not less, important in AI-saturated workplaces and civic life. Law degrees do not simply train people to apply rules. They cultivate capacities for judgment, interpretation, empathy, critical analysis, ethical reasoning, negotiation, institutional design, and the constructive navigation of complexity. These are precisely the capabilities that automated systems struggle to replicate.
At the University of Sydney Law School, we have been thinking hard about how legal education must adapt to the future our graduates now face, not only so they thrive, but so they can help the communities they serve to flourish. Although we have taught law and technology for years, we are now bringing the teaching of techno-legal fluency into the core of our programs. From our work so far, we have distilled five things that law graduates now need to be able to do.
First, law graduates must understand how different knowledge systems work and interact. Generative AI (GenAI) does not “read” or “reason” like humans. It analyses relations among words across large volumes of text, weighted dynamically according to their relevance for certain tasks. Having “learnt” from their extensive use in an immense variety of contexts, GenAI models are increasingly proficient imitators of logical thought. Also, as humans interact with GenAI, social learning (including preconceptions, stereotypes, and social norms) and machine learning often subtly reinforce each other.
Yet AI still cannot do many things that legal education teaches. AI cannot: weigh authority and precedent with deep contextual understanding; navigate jurisdictional boundaries; appreciate distinct legal subcultures; attend holistically to the varying circumstances and interests of clients; or develop legal and policy ideas that go beyond existing know-how. Understanding both the power and limits of AI matters. Without this, lawyers risk mystifying AI or trusting it too readily. With it, they can help clients and institutions decide when and how AI may be deployed responsibly.
Second, law graduates must be able to explain how AI development is funded, fuelled, and governed, and how this affects society and ecology. AI development depends on access to data, energy, water, minerals and capital, as well as on corporate governance and employment arrangements, tax incentives, licences and more. In some countries, state-led investment funds and government-backed lending play a role. The unusual corporate structures of OpenAI and Anthropic, nested in dense networks of commercial “partnerships”, show how profoundly law shapes who holds technological power and how it is exercised. Lawyers who understand these architectures well are better placed to ask and help address hard questions about accountability, public benefit, and long-term consequences.
Third, tomorrow’s lawyers must be able to evaluate AI’s usefulness in legal practice, and where its usefulness ends. That means identifying current and speculative use cases – from document review to legal drafting and decision-support – and assessing how AI tools might serve different needs and clients. Lawyers require working familiarity with a range of models, recognising that AI is not a single thing. For law schools, this requires exposing students to different tools and encouraging critical comparison rather than brand loyalty.
Fourth, graduates must confront how responsibilities are changing. As AI becomes embedded in legal work, expectations around disclosure, verification and risk will evolve. What should junior and senior lawyers verify? What assurances should clients and courts receive? How might the cost of delivering or accessing legal services change, and who stands to gain or lose? These are not just practical questions. They raise issues of fairness, access to justice, and the distribution of benefit and risk. Addressing them requires historical awareness, strategic judgment, and sensitivity to how technological change affects different groups in different ways.
Finally – and most importantly – law graduates must be able to work effectively with people with different forms of expertise, including expertise grounded in lived experience. The value that law graduates bring lies not only in technical knowledge, but also in their training in human capacities that resist automation: cultural competence, empathy, imagination and the ability to work through new problems together with others. At the same time, a strong grounding in legal principles remains essential, not least so graduates can evaluate AI outputs discerningly rather than defer to them.
Artificial intelligence and automation will continue to transform legal, economic, social and political life, but this does not weaken the case for legal education. If anything, it strengthens it.
Societies need people who can combine legal understanding and technological competence with distinctively human capabilities, to govern new technologies wisely. The task for law schools is not to get swept up in every wave of prediction, but to prepare graduates who can think clearly, work adaptively, collaborate well, and act courageously and responsibly to help ensure technology serves the public good.
Fleur Johns is dean and head of school, The University of Sydney Law School. Kevin Walton is associate professor and associate dean (education), The University of Sydney Law School.