merry murderesses of the Cook County Jail climbed the stage in the Chicago musical, they were aligned on the message:
They had it coming, they had it coming all along.
I didn’t do it.
But if I’d done it, how could you tell me that I was wrong?
And the part of the song I found fascinating was the reframing of their violent actions through their moral lens: “It was a murder, but not a crime.”
In short, the musical tells a story of greed, murder, injustice, and blame-shifting plots that unfold in a world where truth is manipulated by the media, clever lawyers, and public fascination with scandal.
Watching only as an observer in the audience, it’s easy to fall for their stories, portrayed through the eyes of a victim who was merely responding to unbearable situations.
Logically, there is a scientific explanation for why blame-shifting feels satisfying. Attributing negative events to external causes (other people or situations) activates brain regions associated with reward processing. If it feels good, it reinforces the behaviour and makes it more automatic.
This fascinating play of blame-shifting is in the theatre of life now, where humans will also start calling out the tools powered by LLMs for poor decisions and life outcomes. Probably pulling out the argument of…
Creative differences
Understanding how differences (creative or not) lead us to justify our bad acts and shift blame to others, it’s only common sense to assume we will do the same to AI and the models behind it.
When searching for a responsible party for AI-related failures, one paper, “It’s the AI’s fault, not mine,” reveals a pattern in how humans attribute blame depending on who is involved and how they are involved.
The research explored two key questions:
- (1) Would we blame AI more if we saw it as having human-like qualities?
- (2) And would this conveniently reduce blame for human stakeholders (programmers, teams, companies, governments)?
Through three studies conducted in early 2022, before the “official” start of the generative AI era through UI, the research examined how humans distribute blame when AI systems commit moral transgressions, such as displaying racial bias, exposing children to inappropriate content, or unfairly distributing medical resources, and found the following:
When AI was portrayed with more human-like mental capacities, participants were more willing to point fingers at the AI system for moral failures.
Other findings showed that not all human agents got off the hook equally:
- Companies benefited more from this blame-shifting game, receiving less blame when the AI appeared more human-like.
- Meanwhile, AI programmers, teams, and government regulators did not experience decreased blame, regardless of how mind-like the AI seemed.
And perhaps the most important discovery:
Across all scenarios, AI consistently received a smaller share of the blame compared to human agents, and the AI programmer or the AI team shouldered the heaviest blame burden.
How were these findings explained?
The research suggested it’s about perceived roles and structural clarity:
- Companies, with their “complex and often opaque structures”, benefit from reduced blame when AI appears more human-like. They can more easily distance themselves from AI mishaps and shift the blame to the seemingly autonomous AI system.
- Programmers, with their direct technical involvement in creating the AI solutions, remained firmly accountable regardless of AI anthropomorphisation. Their “fingerprints” on the system’s decision-making architecture make it nearly impossible for them to claim “the AI acted independently.”
- Government entities, with their regulatory oversight roles, maintained steady (though lower overall) blame levels, as their responsibilities for monitoring AI systems remained clear no matter how human-like the AI appeared.
This “moral scapegoating” suggests corporate accountability might increasingly dissolve as AI systems appear more autonomous and human-like.

You’ll now say, this is…
All that jazz
Scapegoating and blaming others occur when the stakes are high, and the media usually like to put up a big headline, with the villain:
From all these titles, you could instantly blame the end-user or developer for a lack of understanding of how the new tools (yes, tools!) are built and how they should be used, implemented or tested, but none of this helps when the damage is already done and someone needs to be held accountable for it.
Talking about accountability, I can’t skip the EU AI Act now, and its regulatory framework that puts AI providers, deployers and importers on the hook by stating how:
“Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).”
So, among other things, the Act explains different classes of AI systems and categorises high-risk AI systems as those used in critical areas like hiring, essential services, law enforcement, migration, justice administration, and democratic processes.
For these systems, providers must implement a risk-management system that identifies, analyses, and mitigates risks throughout the AI system’s lifecycle.
This extends into a mandatory quality management system covering regulatory compliance, design processes, development practices, testing procedures, data management, and post-market monitoring. It must include “an accountability framework setting out the responsibilities of management and other staff.”
On the other side, deployers of high-risk AI systems need to implement appropriate technical measures, ensure human oversight, monitor system performance, and, in certain cases, conduct fundamental rights impact assessments.
To sweeten this up, penalties for non-compliance can result in a fine of up to €35 million or 7% of global annual turnover.
Maybe you now think, “I’m off the hook… I’m only the end-user, and all this is none of my concern”, but let me remind you of the headlines already shown above, where no lawyer could razzle-dazzle a judge into a finding of innocence for leveraging AI in a work situation that seriously affected other parties.
Now that we’ve clarified this, let’s discuss how everyone can contribute to the AI accountability circle.

When you’re good to AI, AI’s good to you
True accountability in the AI pipeline requires personal commitment from everyone involved, and with this, the best you can do is:
- Educate yourself on AI: Instead of blindly relying on AI tools, first learn how they are built and which tasks they can solve. You, too, can classify your tasks into different criticalities and understand where you need humans to deliver them, and where AI can step in with a human-in-the-loop, or independently.
- Build a testing system: Create personal checklists for cross-checking AI outputs against other sources before acting on them. It’s worth mentioning here that a good approach is to have more than one testing technique and more than one human tester. (What can I say, blame the good development practices.)
- Question the outputs (always, even with the testing system): Before accepting AI recommendations, ask “How confident am I in this output?” and “What’s the worst that could happen if this is wrong, and who would be affected?”
- Document your process: Keep records of how you used AI tools, what inputs you provided, and what decisions you made based on the outputs (a minimal sketch of such a record follows this list). If you did everything by the book and followed the processes, documentation of the AI-supported decision-making process will be a critical piece of evidence.
- Speak up about concerns: If you notice problematic patterns in the AI tools you use, report them to the relevant human agents. Staying quiet about AI systems malfunctioning is not a good strategy, even if you caused part of the problem. However, reacting in time and taking responsibility is the long-term road to success.
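
To make the “document your process” habit tangible, here is a minimal sketch of how such a record could look in practice. It is only an illustration under my own assumptions: the function name `log_ai_decision`, the file `ai_decision_log.jsonl`, and the chosen fields are hypothetical, not taken from any specific tool or standard.

```python
# Minimal sketch of an AI-usage log: every time you act on an AI output,
# append one record with the input, the output, and the decision you made.
# All names and fields here are illustrative assumptions, not a standard.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_decision_log.jsonl")

def log_ai_decision(tool: str, prompt: str, output: str,
                    decision: str, checked_against: list[str]) -> None:
    """Append one audit record of an AI-assisted decision as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                        # which AI tool or model was used
        "prompt": prompt,                    # what you asked it
        "output": output,                    # what it returned
        "decision": decision,                # what you actually did with it
        "checked_against": checked_against,  # sources used to cross-check the output
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage:
log_ai_decision(
    tool="generic-llm-assistant",
    prompt="Summarise the candidate's CV against the job requirements.",
    output="Candidate meets 4 of 5 requirements...",
    decision="Invited the candidate to interview after manually re-reading the CV.",
    checked_against=["original CV", "job description"],
)
```

Even a lightweight log like this answers the questions a reviewer (or a regulator) will ask first: what did the tool say, what did you do with it, and what did you check it against?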
Finally, I recommend familiarising yourself with the regulations to understand your rights alongside your obligations. No framework can change the fact that AI decisions carry human fingerprints, and that humans will hold other humans, not the tools, responsible for AI mistakes.
Unlike the fictional murderesses of the Chicago musical who danced their way through blame, in real AI failures the evidence trail won’t disappear with a smart lawyer and a superficial story.
Thank You for Reading!
If you found this post helpful, feel free to share it with your network. 👏