AI Transparency Statement

The Policy for the responsible use of AI in government sets mandatory requirements for departments and agencies relating to accountable officials and transparency statements. This page details how the Australian Law Reform Commission (ALRC) has implemented these requirements. 

Accountable Official 

Under the policy, the ALRC's AI Accountable Official is the Director of Operations and Finance (Mandeep Bahia), who reports to the Executive Director of the ALRC (Ruth Barson). 

The AI Accountable Official has primary responsibility for the development and application of the AI policy, including:   

  • Facilitating ALRC involvement in cross-government AI coordination and collaboration 
  • Developing ALRC’s AI policy and its associated implementation plan 
  • Embedding a culture that fairly balances AI risk management and innovation 
  • Uplifting governance and education of AI adoption in the ALRC 
  • Enhancing the response and adaptation to AI policy changes in the ALRC 
  • Encouraging the development or alignment of an ALRC or Inquiry-specific AI policy 
  • Reviewing and monitoring ALRC’s AI policy implementation regularly and providing feedback to the ALRC’s President and Audit & Risk Committee 

ALRC’s Approach to AI Adoption and Use 

The ALRC has not yet used AI in any formal setting or to support any Law Reform activities. Our intention is to trial the adoption of AI in 2025 and beyond as part of the Australian Government's commitment to digital innovation.   

The ALRC is committed to demonstrating, encouraging, and supporting the safe and responsible adoption of AI within the Australian Public Service and across its digital and ICT investments, systems, and services. As part of this commitment, we will implement AI fundamentals training for all staff, regardless of their role. 

How the ALRC Uses AI 

At this time, we are not using AI to support any of our Law Reform activities or to engage with the public.  

From April 2025, we intend to trial AI to support workplace productivity using the generative AI service Microsoft 365 Copilot. We understand that the Australian Government trialled this software from 1 January 2024 to 30 June 2024 and that a number of Commonwealth agencies have since adopted it. Copilot has not yet been formally announced internally or made available to staff. As a prerequisite to using Copilot, ALRC staff are required to complete internal training on the use of generative AI. 

We also intend to consider the use of AI tools to support legal research, and will look to pilot Lexis Nexis+ AI or similar products used within the legal industry to support research.   

We have a policy and a SharePoint page on the use of AI tools by staff. Staff are required to confirm that they are familiar with both before accessing any generative AI tools online. This policy encourages and assists staff to: 

  • Not rely on the authenticity or veracity of content generated by AI without external verification. 
  • Not enter sensitive materials into generative AI tools. Sensitive materials contain information which, if disclosed without authorisation, could cause harm to an organisation, an individual, or national security. These may include, but are not limited to:  
      • legal analysis documents,  
      • confidential corporate information, 
      • personal staff information,  
      • stakeholder consultations,  
      • judicial reviews for ALRC papers, 
      • outcomes of Commissioner meetings, 
      • ALRC internal discussion papers. 

The above is in line with the ALRC Privacy Policy. 

The ALRC is participating in the pilot of the Australian Government AI assurance framework. Through our participation in this pilot, we are exploring the potential for AI to be used by our staff and within our ICT systems. 

The ALRC will use AI in the Corporate and Enabling domain, with a usage pattern of Workplace Productivity.  

AI Safety and Governance 

Within the ALRC, all AI use cases are recorded in an internal register to track their progress and status. For new and emerging potential uses of AI, it is the responsibility of the AI Accountable Official to:    

  • Ensure that any AI is implemented safely and responsibly 
  • Monitor the effectiveness of the deployed AI system 
  • Ensure legal and regulatory compliance of the ICT system 
  • Identify potential negative impacts of the AI use case 
  • Implement measures to mitigate potential harms from AI 

This transparency statement was last updated on 14 February 2025. 

It will be updated as our approach to AI changes, and at least every twelve months. 

For further information or enquiries about the ALRC’s adoption of artificial intelligence, you can contact us directly at info@alrc.gov.au.