Good article about how algorithms are making decisions that their human front ends do not understand and cannot explain. By the time one of these decisions results in a lawsuit, enough time has usually passed since the specification was handed to the vendor that the conditions of satisfaction have changed enough to render the algorithm flawed. So when the human representing the contested decision says "don't blame me, blame the algorithm," they are creating an impossible situation.
No one in the mix understands what that algorithm does, and even if it were unpacked and made comprehensible to a lay person, that still would not answer whether it delivers correct results. When it does not, how and why does it fail? What percentage of the population served is affected by the bad results? None of those questions can be answered without an informed, educated, and engaged customer - in this case, the powers that be in a government agency. It sounds like they would rather not take responsibility for decision making, perhaps to preclude bias and subjectivity - instead they let the algorithm do its inscrutable thing. As long as there are no consequences for saying "I don't know, ask the algorithm," nothing will change.
Not until they were standing in the courtroom in the middle of a hearing did the witness representing the state reveal that the government had just adopted a new algorithm. The witness, a nurse, couldn’t explain anything about it. “Of course not—they bought it off the shelf,” Gilman says. “She’s a nurse, not a computer scientist. She couldn’t answer what factors go into it. How is it weighted? What are the outcomes that you’re looking for? So there I am with my student attorney, who’s in my clinic with me, and it’s like, ‘Oh, am I going to cross-examine an algorithm?’”
In the current model, the nurse is expected not to know anything, and even the person who signed the purchase order to buy the algorithm would be within their rights to say they have no idea what's inside it. The technical team that vetted it has likely moved on from the organization, and tracking them down would be unlikely to yield any results. I was recently listening to a couple of data scientists explain data anomalies to a group of non-technical people. They had (in their opinion at least) simplified their content to the lay-person level, and the presentation was peppered with illustrative and relatable examples from real life. Sadly, none of that helped the audience make sense of things. They just thanked the data guys for their time and went on their way, choosing not to make any of the decisions that were recommended.