On December 17, 2019, the Australian Human Rights Commission unveiled a Discussion Paper on Human Rights and Technology, which includes draft proposals to prevent discrimination driven by artificial intelligence (AI) and intrusive facial recognition. This follows the recent rejection of the Australian Government’s proposed facial recognition legislation by the Parliamentary Joint Committee on Intelligence and Security (PJCIS) due to privacy concerns.
The discussion paper proposes a National Strategy on New and Emerging Technologies for Australia to help “seize the new economic and other opportunities, while guarding against the very real threats to equality and human rights”.
With regard to AI in particular, the Commission proposes the creation of a new AI Safety Commissioner to monitor the use of AI, as well as the following three key goals:
- AI should be used in ways that comply with human rights law
- AI should be used in ways that minimise harm
- AI should be accountable in how it is used.
The Commission welcomes further input on the Discussion Paper, and its Final Report is planned for release in 2020.
The Project is a consequence of the continuing concern in Australia – as in other jurisdictions – about the societal and ethical challenges posed by rapidly developing technologies.
In parallel, the Australian Government is developing a voluntary AI Ethics Framework to guide businesses and governments seeking to design, develop, and implement AI in Australia. The framework includes eight AI Ethics Principles to encourage responsible use of AI systems, together with associated guidance for businesses.
These Australian initiatives are unsurprising when considered in light of digital ethics initiatives elsewhere in the world. Related developments globally include:
- The UK government established a Centre for Data Ethics and Innovation (CDEI) in late 2018. The CDEI released interim reports on its reviews into online targeting and bias in algorithmic decision-making in July 2019, with final reports expected in early 2020. In September 2019, the CDEI also issued three snapshot papers looking at deepfakes and audiovisual disinformation, smart speakers and voice assistants, and AI and personal insurance.
- On 8 April 2019, the EU High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence, which indicate that, to be trustworthy, AI should be: lawful – respecting all applicable laws and regulations; ethical – respecting ethical principles and values; and robust – both from a technical perspective and in terms of its social environment.
- The Organisation for Economic Co-operation and Development (OECD) adopted a non-binding Recommendation of the Council on Artificial Intelligence on 22 May 2019. These guidelines set out general principles promoting the responsible stewardship of trustworthy AI, including that AI systems should be designed to respect privacy rights, and that the privacy risks of AI systems should be continuously assessed.
- The G20 adopted AI Principles inspired by the OECD’s recommendations on AI in June 2019, which urge fairness, accountability and transparency in AI, and argue for respect for the rule of law and values such as diversity, equality, privacy and internationally recognised labour rights.
There is a global consensus that AI and other new technologies offer unprecedented benefits and opportunities for business. But these initiatives illustrate a growing recognition of the potential pitfalls for humankind of allowing technology to grow and develop untamed and unshaped by moral and ethical considerations. Businesses should remain alive to these concerns when deciding where and how to craft their digital agendas for the future.