Stanford’s Vaccine Mishap Blamed on Complex Algorithm


The university is working on a better way to distribute the vaccine to its most at-risk employees after problems with its prioritization algorithm.

Healthcare is deep in the process of adopting new algorithms and deep learning initiatives to improve care. However, Stanford’s recent misstep with vaccine distribution illustrates how these algorithms are still subject to serious failures.

Using algorithms to distribute critical vaccines

Stanford Medicine set out to identify its most at-risk healthcare workers so it could distribute limited vaccine supplies more equitably. The team created an algorithm that weighed age, job role, department, and the percentage of COVID-19 tests collected in each role or department.

Stanford also folded more factors into its algorithm than other facilities running similar initiatives. The result should have been a ranking of the most at-risk employees, easing pressure on departments as vaccine doses rolled out.
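To make the approach concrete, here is a minimal sketch of a weighted risk score of the kind described above. The weights, the role table, and the record shape are illustrative assumptions; Stanford has not published its exact formula.

```python
# Hypothetical weighted risk score combining the factors named above.
# The weights, role table, and record shape are assumptions for
# illustration, not Stanford's actual formula.

# Assumed relative exposure by job role.
ROLE_WEIGHTS = {
    "nurse": 1.0,
    "resident": 0.9,
    "physician": 0.8,
    "administrator": 0.2,
}

def risk_score(employee: dict, dept_test_share: float) -> float:
    """Score one employee for vaccine priority.

    employee: dict with "age" (years) and "role" keys.
    dept_test_share: fraction of the facility's COVID-19 tests collected
        in the employee's department, used as an exposure proxy.
    """
    age_term = min(employee["age"] / 100, 1.0)           # older scores higher
    role_term = ROLE_WEIGHTS.get(employee["role"], 0.0)  # job-based risk
    return 0.3 * age_term + 0.4 * role_term + 0.3 * dept_test_share

# Rank a toy roster from highest to lowest priority.
roster = [
    ({"age": 65, "role": "administrator"}, 0.02),
    ({"age": 29, "role": "resident"}, 0.30),
]
roster.sort(key=lambda pair: risk_score(*pair), reverse=True)
for employee, share in roster:
    print(employee["role"], round(risk_score(employee, share), 3))
```

In this toy version the front-line resident outranks the remote administrator, but as the next section explains, that is exactly where Stanford’s real rollout went wrong.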

See also: How AI is Changing the Healthcare Industry

What actually happened

As with many complex systems, over-complication contributed to the outcome. Algorithms for scoring patient risk are common, but each added factor can muddy the waters. When Stanford’s team assembled the factors for its algorithm, it appears they never tested those factors against real outcomes.

The results favored healthcare workers who weren’t necessarily on Stanford’s front line, including older employees who were currently working remotely. Only seven of the facility’s 1,300 residents qualified, while the first round of vaccinations went to administrators and doctors seeing patients remotely.

One of the biggest problems appears to be that residents had no formal departmental designation, so the algorithm overlooked their potential exposure. Whatever the reason, the algorithm was approved and has since caused quite a bit of distress on campus.
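To illustrate how a missing designation can silently zero out a factor, here is a minimal sketch, assuming (hypothetically) that the exposure term came from a department lookup with a “safe” default:

```python
# Minimal sketch of the failure mode: residents with no formal department
# fall through the lookup and receive zero exposure credit.
# All department names and shares here are hypothetical.

DEPT_TEST_SHARE = {"emergency": 0.35, "icu": 0.30, "radiology": 0.05}

def exposure_term(employee: dict) -> float:
    dept = employee.get("department")       # residents: None
    # A default of 0.0 never raises an error, so missing data goes unnoticed.
    return DEPT_TEST_SHARE.get(dept, 0.0)

resident = {"role": "resident", "department": None}
print(exposure_term(resident))  # 0.0 -- scored as if never exposed
```

A simple validation pass that flags records with a missing department, or a trial run of the ranking against a sample of known front-line staff, would likely have surfaced the problem before the list was approved.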

Stanford apologizes formally

Stanford has issued a formal apology but has not commented further. Many analysts see this as another example of humans trusting artificial intelligence or algorithmic decision-making without sufficient review.

As we continue to use algorithms to facilitate decision-making, create efficient processes, and remove uncertainty, we must also examine how our own blind spots can affect the outcomes of AI-driven initiatives.

The university is working to find a better way to distribute the vaccine to its most at-risk employees. We’ll see in the coming days and weeks how the solution shakes out.


About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain, clearly, what it is they do.
