Hidden discrimination in AI systems remains a global concern. Privacy laws such as the GDPR already grant individuals a right not to be subject to automated decision-making, which is often powered by artificial intelligence. Amazon is now rolling out an "AI nutrition label" of sorts to provide transparency into its AI systems.
Amazon.com Inc is planning to roll out warning cards for software sold by its cloud-computing division, in light of ongoing concern that artificially intelligent systems can discriminate against different groups, the company told Reuters.
Akin to lengthy nutrition labels, Amazon's so-called AI Service Cards will be public, so its business customers can see the limitations of certain cloud services, such as facial recognition and audio transcription. The goal is to prevent mistaken use of its technology, explain how its systems work and describe how they manage privacy, Amazon said.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
© Mondaq® Ltd 1994 – 2022. All Rights Reserved.