AI Transparency Statement

Last updated: 19 February 2026. 

The Digital Transformation Agency (DTA) Policy for the responsible use of AI in government requires agencies to designate AI accountable officials and publish a public AI transparency statement. This page provides details of the Australian Law Reform Commission (ALRC)’s implementation of these requirements. 

Accountable Official 

The ALRC has designated an AI Accountable Official under the policy: the Director of Operations and Finance, who reports to the Executive Director of the ALRC. There is no change to the ALRC’s accountable official.

The AI Accountable Official has primary responsibility for the development and application of the ALRC’s AI policy and implementation approach, including: 

  • facilitating ALRC involvement in cross-government AI coordination and collaboration 
  • developing and maintaining the ALRC’s AI policy and supporting resources 
  • uplifting governance, education and guidance for AI use in the ALRC 
  • embedding a culture that balances AI risk management and innovation 
  • monitoring implementation 
  • supporting adaptation to changes in whole-of-government AI policy over time. 

ALRC’s approach to AI adoption and use 

The ALRC is committed to adopting and using AI in a way that is safe, ethical, accountable and transparent, consistent with Australian Government requirements and community expectations. 

At present, the ALRC’s approach is to: 

  • use AI only for approved, low-risk workplace productivity purposes, and 
  • build organisational capability and governance so we can assess whether, and how, AI might responsibly support additional activities over time (subject to risk assessment and approvals). 

As part of this commitment, the ALRC maintains internal guidance and training requirements for staff on the safe and responsible use of AI tools. 

How the ALRC uses AI 

Public interaction and impact 

The ALRC is not using AI in a way the public can directly interact with, or be significantly impacted by. If this changes, we will update this statement to describe the use and its safeguards.

Law reform activities 

The ALRC does not use AI for law reform activities, including legal analysis, drafting or recommendations. AI does not produce or materially influence inquiry outputs.

Corporate and workplace productivity 

Our current AI use is limited to workplace productivity. All staff have access to Microsoft Copilot, and Microsoft 365 Copilot is available to approved roles across the organisation (for example, for collaboration, meeting support, and drafting and summarising routine material).  

Planned AI work 

In February 2026, the ALRC established a staff working group to assess broader AI use, including potential trials for inquiry-adjacent tasks such as submission analysis support, proofing and document handling (subject to governance and approval). The group is also considering legal-specific tools (for example, LexisNexis AI+ and Westlaw Edge) for possible future legal research support, noting these tools are not currently used for inquiry work.  

We have internal guidance and a SharePoint page on the use of AI tools. Staff are required to confirm and acknowledge they are familiar with this guidance before accessing AI tools. Our guidance reminds staff to verify AI-generated content and to avoid sharing or copying sensitive material into AI tools, consistent with relevant Australian Government security and privacy requirements.  

AI safety and governance 

Within the ALRC, all AI use cases are recorded in an internal register to track their progress and status. For new and emerging potential uses of AI, it is the responsibility of the AI Accountable Official to:

  • ensure that any AI is implemented safely and responsibly
  • monitor the effectiveness of the deployed AI system
  • ensure legal and regulatory compliance of the AI system
  • identify potential negative impacts of the AI use case
  • implement measures to mitigate potential harms from AI.

Classification of AI system use cases 

The DTA standard requires agencies to classify AI use by usage pattern and domain. 

The ALRC’s current AI use is classified as: 

  • Usage pattern: Workplace productivity, using Microsoft Copilot and Microsoft 365 Copilot.
  • Domain: Corporate and enabling. 

If the ALRC’s AI use expands to additional usage patterns or domains, we will update this statement accordingly. 

Updates to this statement 

This statement will be updated as the ALRC’s approach to AI changes, and at least every twelve months. It may also be updated sooner when the ALRC makes a significant change to its approach to AI, or when any new factor materially impacts the statement’s accuracy. 

Contact 

For enquiries about the ALRC’s adoption of AI, contact: [email protected]