A family's decision to present an AI-generated statement from a murder victim in court has sparked controversy over the ethical use of AI technology.
According to a YouTube video shared on 8 May, the victim, Chris Pelkey, who was killed in a 2021 road rage incident, delivered the AI-generated statement through words written by his sister.
A first in the legal system
AI has been used for many things, including research, security, science, and medicine, but not so much in the legal sector, at least not in the direction this Arizona murder case has taken.
Pelkey's sister, Stacey Wales, said she could not find the words to express what she believed her brother would say, so she turned to her husband, an AI expert, to help generate a statement that would closely resemble what Pelkey might have said.
The AI-generated statement, delivered during the killer's sentencing hearing, read:
“To Gabriel Garcia, the man who shot me, it’s a shame we encountered each other that day in those circumstances. In another life, we probably could have been friends. I believe in forgiveness… and I still do.”
Though such a method is unconventional and raises serious questions about the ethical use of AI, it does not violate any law: it is permitted under Arizona's Victims' Bill of Rights, which allows victims to present statements in whatever form they choose.
A different opinion
While the use of AI for the victim statement remains technically legal, not everyone agrees with the idea.
A former judge, Jeffrey Swartz, said this could set a bad precedent. He said:
“This could happen if this is admitted even as a victim impact statement—are we now going to start having people project what they believe someone else would say?”
He further stated that he would likely not have permitted it had he been presiding over the case.