TL;DR: AI in Law School Admissions
Cordel Faulk, former Chief Admissions Officer at the University of Virginia School of Law and head of College Solutions’ What’s Next? law school admissions advising program, has seen a lot of changes in the law school admissions process over the years. In today’s post, he’s providing his insights on one of the newest disruptions to law school admissions (and the world as a whole) — Artificial Intelligence.
AI is becoming a permanent fixture in law school admissions — for applicants and admissions offices alike. Used ethically, it can support clarity and access; used carelessly, it risks eroding authenticity and fairness. Applicants should never outsource their personal statements to AI, though light editing or brainstorming tools can be appropriate. Admissions professionals should focus on transparency, equity, and human judgment when incorporating AI into their processes. The future of admissions will depend on shared norms, ethical leadership, and intentional use of technology.
AI: How to Use It, and When to Step Back
Artificial Intelligence (AI) is no longer a futuristic concept creeping into the world of admissions — it’s already here, embedded in the daily experiences of both applicants and admissions professionals. From drafting assistance to backend automation, AI is changing how we think about storytelling, selection, and fairness.
We believe this shift demands a proactive, ethical, and collaborative response. The question isn’t whether AI belongs in admissions — it’s how we can use it in ways that uphold the integrity of the process and preserve the values at its core.
AI Is Already Here — Let’s Be Clear About That.
Many applicants are turning to tools like ChatGPT or Grammarly to help craft essays, and recommendation writers are using them to help frame letters. Sometimes that help is light — checking grammar, tightening phrasing. Other times, AI is used to generate entire drafts. Likewise, admissions offices are beginning to test AI for logistical support: organizing documents, flagging incomplete files, or helping identify patterns in application data.
None of this is inherently wrong or dangerous. In fact, when used responsibly, AI can make processes more efficient, more accessible, and even more equitable. But its presence raises essential questions: What counts as a personal statement if a machine helped write it? How do we assess voice, judgment, and authenticity if the medium is AI-assisted? And how do we ensure fairness when using tools trained on biased historical data?
The answers require input from all corners of the admissions world — from students and counselors to admissions professionals and tech developers.
Applicants, Consider This: AI Can’t Tell Your Story.
Let’s start with the clearest boundary: No applicant should use AI to write a personal statement from scratch. This is not a gray area. The moment you plug a prompt into ChatGPT and ask for a full draft, you’ve handed your voice over to an algorithm — and set ethics aside.
A personal statement is a reflection of who you are — your motivations, your growth, your vision for the future. AI might be able to generate something grammatically clean or structurally sound, but it can’t capture your lived experience. More importantly, it can’t reflect the kind of introspection that admissions offices demand and value.
That said, there are responsible, limited ways applicants can use AI:
Editing and clarity support: If English isn't your first language or you're not confident in your writing, AI tools can help you clean up grammar or rephrase awkward sentences — after you have done the work of drafting.
Brainstorming assistance: AI can help you generate questions to ask yourself, or outline a rough structure based on what you want to say — but it should never be your ghostwriter. The insights must be your own.
Preparation tools: Use AI to simulate interview questions or summarize aspects of your resume. These can be helpful for practice and confidence.
Above all, remember: the purpose of your application is not perfection. It’s authenticity, thoughtfulness, and coherence. If an essay doesn’t sound like you — if it lacks specificity or emotional truth — admissions officers will notice. And if schools begin adding disclosure requirements about AI use (as many already are), honesty will matter as much as ever.
For Admissions Professionals: Shape, Don’t Shun, the Role of AI.
Admissions offices are also confronting AI — and while many are proceeding cautiously, others are already integrating it behind the scenes. Some use AI to help organize materials or automate routine steps. Others are beginning to explore predictive modeling or screening tools based on past applicant data.
We believe AI has a place in admissions — but only under ethical, human-centered conditions. Here’s what that might look like:
AI as a support tool, not a decision-maker: Final decisions about who gets admitted, interviewed, or awarded should rest with people. Human judgment remains essential, especially when assessing context, growth, or potential.
Use AI to promote equity: For example, AI-powered language tools might help non-native English speakers or first-generation applicants feel more confident in their writing. Or AI might surface overlooked forms of leadership across multiple years of applicant data.
Recognize the risks of bias: AI systems trained on historical admissions outcomes can reinforce existing inequities. Without oversight, these tools could undervalue applicants from underrepresented or nontraditional backgrounds.
Prioritize transparency: If AI plays a role — even in screening or organizing — applicants deserve to know. Transparency builds trust and sets a standard for fairness.
Create shared standards: No single office or organization should be solving this alone. Professional organizations and cross-institution conversations should take the lead in establishing norms for ethical AI use, disclosure policies, and red lines.
Defining the Future, Together
Ultimately, AI is just a tool — powerful, but directionless without human guidance. Like a calculator, it can help when used correctly, but it can also undermine learning if it becomes a crutch. In admissions, AI can support access and efficiency — but it can’t substitute for judgment, empathy, or experience.
Applicants, advisors, and admissions professionals all have a role to play in shaping how AI is used. If we lead with intention and transparency, we can make space for innovation while preserving what matters most.
What’s Next?
The law school application process is intimidating, confusing, and at times overwhelming. We at What’s Next? law school advising can use our expertise to demystify it — we want the process to feel more exciting than scary. We can help you find the right fit. It’s out there.
Led by Cordel Faulk, former Chief Admissions Officer at UVA Law, What’s Next? is grounded in deep experience and honest guidance. We’ve helped applicants navigate this journey before — and we’re ready to help you do the same.