AI in the Australian Healthcare Landscape
- Polash Adhikari
- Sep 17
- 3 min read
Updated: Sep 27

This is a question-and-answer presentation focusing on key aspects of AI in healthcare. Most of the content is drawn from the report published by the Australian Department of Health, Disability and Ageing in March 2025, titled Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review.
Does existing health legislation in Australia adequately regulate AI in healthcare?
Over the past few years there has been significant work to build regulatory scaffolding for AI use in healthcare, but there are areas where further development is needed. Current federal laws (e.g., the Privacy Act 1988, Therapeutic Goods Act 1989, and My Health Records Act 2012) already provide significant coverage. However, minor technical amendments are needed for clarity, particularly around definitions of services and offences. Gaps do exist in areas outside TGA oversight (e.g., AI scribes and wearable devices). Whole-of-economy AI guardrails have been proposed to strengthen consistency and safety, but there is still a long way to go.
Who should lead AI policy in healthcare, and why?
There is a clear need for national, centralised leadership. While the Department of Industry, Science and Resources (DISR) leads economy-wide AI policy, healthcare requires tailored guidance addressing sensitive data, clinical governance, specialty-specific issues, and the rapid pace of AI innovation. A national health-focused AI body could ensure equitable, safe adoption. There are consultation groups, but nothing appears to be set up specifically to deal with healthcare's specialised needs at an Australia-wide level.
Do healthcare providers have enough resources to safely adopt AI?
There are significant knowledge gaps, and there is little high-quality, contemporary guidance. Providers need support for evaluating datasets, validating AI systems, monitoring performance, and conducting implementation trials. Without this, adoption risks inconsistency, unsafe practices, and inequitable outcomes.
Is there enough reliable information about AI in healthcare?
A centralised, trusted source is needed. Currently, misinformation and poor-quality AI outputs undermine both patient and clinician decision-making. Without this, providers may inadvertently rely on unsafe systems, leading to malpractice risks.
Is there strong evidence of AI benefits in healthcare?
Because AI is an emerging technology, evidence is expected to be limited, and certain preconditions must be met before benefit can even be assessed. The existing benefits framework is weak, with limited data on outcomes. A formal framework with both qualitative and quantitative measures is needed to guide investment, adoption, and equitable access. Otherwise, AI risks unmethodical deployment and hype without proof of effectiveness.
Are data governance and patient consent practices strong enough for AI?
A critical gap exists here. Patient data ownership is unclear, consent processes are inconsistent, and deidentification is not always effective (e.g., for genetic or imaging data), as the sketch below illustrates. Clear accountability across the AI lifecycle is critical. Without reform, providers risk breaching privacy law, ethical standards, and trust obligations.
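As a rough illustration of why deidentification can fall short, here is a minimal Python sketch (with hypothetical field names, not drawn from the report) of naive deidentification. Stripping direct identifiers still leaves quasi-identifiers behind, which can re-identify a patient when linked with other datasets:

```python
# Minimal sketch (hypothetical field names): naive deidentification of a
# patient record before sharing it with an AI system. Removing direct
# identifiers is not sufficient -- the quasi-identifiers that remain
# (postcode, full date of birth, rare diagnosis codes) can still
# re-identify a patient when combined with other datasets.

DIRECT_IDENTIFIERS = {"name", "medicare_number", "address", "phone"}

def naive_deidentify(record: dict) -> dict:
    """Remove direct identifiers only; quasi-identifiers remain."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Citizen",
    "medicare_number": "1234 56789 0",
    "address": "1 Example St",
    "phone": "0400 000 000",
    "postcode": "2600",             # quasi-identifier
    "date_of_birth": "1957-03-09",  # quasi-identifier
    "diagnosis_code": "E11.9",
}

shared = naive_deidentify(patient)
print(shared)
# {'postcode': '2600', 'date_of_birth': '1957-03-09', 'diagnosis_code': 'E11.9'}
# Postcode plus full date of birth alone can single out many individuals.
```

This is only one facet of the problem: for genetic or imaging data, the data itself is inherently identifying, so no field-stripping approach of this kind can fully protect the patient.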
How can industry be encouraged to build safe, high-quality AI?
Through incentives (funding, recognition, regulatory fast-tracks). Encouraging compliance and accuracy in Australian-specific contexts can prevent harm from low-quality products. Without incentives, the market may be flooded with unsafe, non-evidence-based tools.
Here is a view of what these risks may mean in practice:
| Risk Type | What it Covers | Example | Consequences for Practitioners |
| --- | --- | --- | --- |
| Legal | Breaches of legislation (Privacy Act, Therapeutic Goods Act, etc.) | Sharing patient data with an AI model without consent | Civil penalties, fines, litigation for damages |
| Regulatory | Breaches of health system or departmental regulations (Medicare billing rules, aged care standards, TGA approvals) | Using unapproved AI diagnostic software in aged care | Loss of provider status, funding clawbacks, compliance enforcement |
| Professional | Breaches of professional standards (AHPRA codes, duty of care, medico-legal obligations) | Over-reliance on AI diagnosis without clinical oversight | Disciplinary action, suspension of registration, malpractice suits |