BMW sets out ethics for AI

BMW sets seven rules for its robot minds.

BMW, in its wisdom, has decided that we need a full seven rules, as it sets out its code of ethics for the use of Artificial Intelligence (AI).

The German carmaker has said that it will implement a new series of rules to ensure that its use of AI is both ethical and safe. Michael Würtenberger, Head of BMW's "Project AI" said: "Artificial intelligence is the key technology in the process of digital transformation. But for us the focus remains on people. AI supports our employees and improves the customer experience. We are proceeding purposefully and with caution in the expansion of AI applications within the company. The seven principles for AI at the BMW Group provide the basis for our approach."

AI is already in wide use

AI, rather than being some kind of sci-fi concept, is already widely in use. Those voice-controlled 'Hey, BMW' in-car digital assistants use artificial intelligence to work out what you're saying and which function you need. AI is used to help work out logistics and supply chains, and even to assess how noisy the inside of a new car is. It's also used to scan the badges of cars on the production line to make sure that each model corresponds to a customer order.

AI can also help when it comes to processing customer service requests, improving energy efficiency in factories and offices, and - of course - in electronic driver aids and, eventually, autonomous driving.

To ensure that we don't, through the use of AI, end up with some dystopian Planet Of The Robot Apes (we have a script for that in development...), BMW's rules for the safe and sensible use of AI are:

Humans still in charge

Human agency and oversight. This means that real, fleshy people are actually in charge, not just letting computers whirr away all by themselves. Decisions made by an algorithm can be overruled.

Technical robustness and safety. This is a bit of a hazy concept. BMW says that it means that the company "observes the applicable safety standards designed to decrease the risk of unintended consequences and errors." We think it means they'll pull the plug if the AI gets too bolshy.

Privacy and data governance. Well, obviously. You don't want an AI deciding that it's OK to release your bank details just because you bought a 1 Series.

Transparency. All AIs to be wrapped in clingfilm. No, seriously - here BMW means that it will be open about how and when it deploys AI tech and will explain how it works.

Diversity, non-discrimination and fairness. Several AI programmes in the past (not BMW's) have been accused of inherent bias when it comes to race and sex. So, naturally, Munich wants to eliminate that from its AI programmes.

Helping with climate goals

Environmental and societal well-being. Basically, that means that all AI programmes have to work towards BMW's climate change goals, as well as not undermining anyone's human rights. Whether that also includes not laying off any workers because a robot can now do their job better (i.e. cheaper) remains to be seen.

Accountability. Another obvious point, which kind of relates back to the human agency bit - the buck can't stop with a pile of microchips. Someone, a human someone, is in charge and carries the can if BMW's AI goes rogue and starts creating all-conquering robot-chimp hybrids which, with their mighty laser-beam eyes, can... but we're giving away the plot points of our script.

Published on: October 12, 2020