Ethical dilemmas arise as artificial intelligence makes its way into courtrooms
ROCHESTER, N.Y. – In what may be a legal first, a woman used artificial intelligence to create a video of her deceased brother speaking at the sentencing hearing for the man who killed him.
“Every word that Chris speaks in there is positive and thankful and forgiving,” said Stacey Wales.
Wales was struggling with what to say at the hearing for the man who shot and killed her brother, Christopher Pelkey. So she decided to have someone else speak.
“I would like to make my own impact statement,” Christopher Pelkey said in the AI-generated video. “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances. In another life we probably could have been friends.”
But the video is now spurring a nationwide debate on the use of AI in the courtroom.
“I think it’s really interesting. It’s also a little troubling,” said Jon Getz, a Rochester attorney who sits on the New York State Bar Association board.
Getz says the use of AI in the legal profession dominates discussion at the board’s meetings these days.
“The technology is moving so fast that the legislative branch can’t even get to it. It’s really that much of an issue,” he said.
In March, a New York appellate court judge became irate, saying he doesn’t “appreciate being misled,” when a plaintiff attempted to use an AI-generated avatar he created to plead his case.
Nicole Black, an attorney who works in legal tech for AffiniPay’s legal brands, recalled one instance in which someone submitted a video purporting to show a victim’s perspective as their car was hit by a truck. That raised many issues, she said, because nobody knows what the victim actually saw.
Black said while there are legitimate ethical concerns about AI’s use, there are also benefits.
“Because he uses AI in practice, he’s able to handle the cases more efficiently and take on more assigned cases and provide his clients with better legal representation than they probably would have otherwise had,” Black said, describing one attorney’s experience.
According to a report Black authored, 31% of attorneys surveyed said they personally used generative AI at work in 2024, up from 27% a year earlier. In the survey, lawyers said they most frequently used AI for writing letters, brainstorming, general research and drafting documents.
“We all sometimes want to take the easier route, and it allows us to sometimes take the easier route when you gotta slog through it,” Getz said.
Brett Davidsen: “And what are the dangers of that?”
Getz: “Unfortunately there’s been some law firms and attorneys who literally had AI write their brief or submissions to the court for them. And in some of those cases, what’s being provided to the court are cases that don’t exist.”
Black said there are also issues with accuracy, since attorneys have to make sure the output is right.
And while no one is questioning Wales’ motives in crafting her brother’s impact statement, Getz said the courts are now confronting questions they’ve never had to deal with.
“The question still becomes, when is it a good tool to use and when is it not. And we’re still not there yet,” Getz said.
Beyond conducting legal research or reviewing contracts, attorneys have found other novel uses for AI. According to the American Bar Association, some are using it to predict the legal outcome of lawsuits.
Running a case through AI can help an attorney decide how much to invest in it and whether to advise clients to settle.
The judge in the Arizona case praised the use of the AI video and said he felt it was genuine. The killer was given a sentence of 10 1/2 years. His attorney immediately appealed, questioning whether the judge improperly relied on the AI video when handing down the sentence.
AI assisted with the formatting of this story.