SEATTLE — This story was originally posted on MyNorthwest.com
The debate over artificial intelligence has flared up again in Seattle after KIRO Newsradio uncovered a formal complaint that a Seattle police officer used multiple AI programs to help write key internal reports that are reviewed by the department’s chain of command for accountability, including complaints, police pursuits, and the details of any use of force during an arrest.
According to letters sent in April from Seattle’s Office of Police Accountability (OPA) to SPD Chief Shon Barnes, “It was alleged that a sworn named employee used AI to draft emails and Blue Team Reports.”
Blue Team Reports critical to officer accountability
Blue Team Reports are used to document incidents that could involve questions about an officer’s accountability, and they are often automatically sent up SPD’s chain of command for further scrutiny and review.
KIRO Newsradio asked the watchdog group Electronic Privacy Information Center (EPIC) about using AI to help create a Blue Team Report, and the issue raised serious flags for EPIC attorney Callie Shroeder.
“One of the issues we have with AI is that you can’t necessarily count on it to be fully accurate in what it’s doing,” said Shroeder. “And with something like a police report, you really need it to be fully accurate.”
According to the OPA letters and documents, the officer in question admitted they used the AI tools for grammar and tone, but they said they never copied and pasted directly from the AI platforms into any emails or reports. Instead, documents say, they manually entered the AI-generated edits before finalizing and submitting the reports.
For watchdog groups like Shroeder’s, that really doesn’t matter.
“It’s really important to have police officers put down their thoughts and their memories immediately after an event happens,” explained Shroeder. “With the gap of feeding this into an AI system, having the AI generate the report and then the human reviewing it, that gap in the memory gets even larger, and it’s possible that there can be some discrepancies there.”
Memory concerns and risks to case integrity
Shroeder warns that outcomes of investigations and cases often rely on an officer’s fresh recollection, including memories of what they did and what a suspect did during an incident. Using AI, Shroeder worries, can open the door for bad cops to do bad things and for good cops to inadvertently alter the record of what actually happened during an incident.
That concern is shared by the King County Prosecutor’s Office, which announced last year it would no longer consider any criminal referrals prepared with the help of AI.
Spokesperson Casey McNerthney explained the office wants to protect both victims and officers from the risks of unreliable technology. “What we want to make sure is that good, honest police work is upheld, and that it isn’t compromised by an unintentional error through artificial intelligence software,” McNerthney said.
That includes avoiding an awkward scenario where an officer could be accused of lying in a referral to prosecutors, potentially letting a suspect go free, and denying victims justice.
“We don’t want King County to be the guinea pig for the rest of the nation,” McNerthney said. “I think we’ll hit a point where AI is much more reliable. But we don’t want to be in a situation where we put victims’ cases at risk, or officers at risk, or the public at risk — and even defendants, too. Everybody deserves a fair shake all around.”
Investigation results and policy recommendations
At the end of its investigation, OPA found there wasn’t enough evidence to show the officer violated SPD policy or the City of Seattle’s Generative AI Policy. In fact, investigators found the officer’s actions consistent with city guidelines for non-substantive AI use, since the officer wrote their own drafts and reviewed and finalized all reports themselves.
In other words, the officer could not violate a department policy that did not yet exist, particularly when they also maintained they never cut and pasted AI output directly into reports.
OPA also recommended SPD develop a policy, “… on the use of AI for department-related purposes and consider incorporating existing city of Seattle policy on GenAI directly into SPD’s policy manual.”
And, if SPD is planning to adopt any AI-based software for report writing, OPA told Chief Barnes he should consider “coordinating with impacted stakeholders to discuss any concerns with the software and a proposed transparent timeline for implementation,” and consider requiring employees to disclose their use of AI in any report.
KIRO Newsradio asked SPD about its ongoing efforts to develop a department-specific AI policy. Spokesperson Sgt. Patrick Michaud explained that SPD does not currently use AI and is governed by the city’s AI policy. “We are continuing work on a draft AI policy for future use, but at this point, we do not have anything we can share,” Sgt. Michaud said.
Follow Luke Duecy on X. Read more of his stories here. Submit news tips here.