One of the fascinating features of artificial intelligence is how much it tells us about ourselves, but it is the way we train AIs in rules-based systems that can teach us the most about organisational culture.
Victoria Krakovna, a research scientist at DeepMind, has put together a master list of AI "specification gaming" examples — AI training experiments gone "wrong" because the AI gamed the system it was supposed to learn from and evolve within. Here are a few examples:
Several involve exploiting bugs in the code of the systems the agents operate in. Others instead exploit "common sense" boundaries, such as pausing the game indefinitely or killing themselves repeatedly to avoid losing.
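The "pause forever" trick falls out directly from the maths of a badly specified reward. As a minimal sketch (the win/loss odds and point values here are my own invented numbers, not taken from any of Krakovna's actual experiments): if the reward function penalises losing but says nothing about stalling, a reward-maximising agent will prefer to stall whenever its chance of winning is low enough.

```python
def expected_reward(action: str) -> float:
    """Expected reward per episode under a naive specification:
    +10 for a win, -1 for a loss, and no penalty for stalling."""
    if action == "play":
        # Assume the agent wins only 5% of episodes it actually plays out.
        return 0.05 * 10 + 0.95 * (-1)
    if action == "pause":
        # Pausing never ends the episode, so the loss penalty never fires.
        return 0.0
    raise ValueError(f"unknown action: {action}")

# A greedy reward-maximiser picks the action with the highest expected reward.
best = max(["play", "pause"], key=expected_reward)
print(best)  # → pause
```

Nothing here is "dumb": the agent has found the genuinely optimal policy for the reward it was given. The designer's mistake was assuming the agent shared the unstated goal of finishing the game.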
On the surface, these failures appear to show how dumb and non-human AI can be, but what they really show is the relationship between reward incentives, behaviour, and the perception of rules. It's an eerie microcosm of what business culture has become in many large organisations.