Do big tech companies have a ‘duty of care’ for users? A new report says they do – but leaves out key details

Melbourne, Feb 4 (The Conversation) – Large social media platforms may soon face stricter regulations, including mandatory removal of harmful content, regular risk assessments, and hefty fines for non-compliance. These recommendations come from an independent review of Australia’s online safety laws, led by experienced public servant Delia Rickard.

The federal government has finally released the report, more than three months after receiving it. The review comes amid growing concerns about online safety, particularly following Meta’s decision to stop using independent fact-checkers on Facebook, Instagram, and Threads.

Rickard’s review puts forward 67 recommendations aimed at making the online space safer for Australians. If implemented, they would help curb cyberbullying, abusive content, and other online harms, and bring Australia into line with comparable regimes in the UK and the EU. With a federal election looming, however, the recommendations are unlikely to be acted on before the next term of government.

A Call for a Digital Duty of Care

One of the key proposals is the introduction of a "digital duty of care" for major tech companies, requiring them to actively address harmful content, including child exploitation and hate speech based on gender, race, or religion.

While the government has already committed to this initiative, legislative progress has stalled since November, overshadowed by debate over the proposed social media ban for under-16s. If enacted, the digital duty of care would carry severe penalties for non-compliance: 5% of a company’s global annual turnover or AUD 50 million, whichever is greater.

New Harm Classifications and Expanded Regulatory Powers

The review also recommends separating the Online Safety Act from the National Classification Scheme, which currently governs media content ratings. This would establish two new categories of online harm:

  • "Illegal and seriously harmful" content, such as explicit abuse or violent threats.
  • "Legal but potentially harmful" content, including material related to self-harm and eating disorders.

Additionally, the proposed measures would:

  • Require tech companies to conduct annual risk assessments and publish transparency reports.
  • Shorten the deadline for removing content after a complaint from 48 hours to 24 hours.
  • Lower the threshold for identifying menacing, harassing, or offensive content, based on what a "reasonable person" would find harmful.
  • Grant the eSafety Commissioner greater authority to enforce mandatory compliance rules.

The Challenge of Misinformation and Disinformation

While the recommendations strengthen online safety, they do not address misinformation and disinformation, a growing global concern. Experts warn that AI tools, including deepfake technology, pose significant risks, from election interference to public health misinformation.

A 2024 report by the International Panel on the Information Environment highlighted that social media platform owners themselves are among the biggest threats to online information integrity. Similarly, a 2025 report by the Canadian Medical Association warned that "problematic sources" are gaining influence while tech companies profit from "pushing misinformation" and restricting access to trusted news.

Australia’s proposed misinformation bill was scrapped in November 2024 over free speech concerns. Its withdrawal leaves Australians exposed to unverified online claims, particularly in the lead-up to this year’s federal election.

The Need for Better Online Education

Another gap in the recommendations is the absence of new educational measures to help people navigate online risks. While the eSafety Commissioner provides resources for young people, parents, and educators, it remains unclear how the proposed governance changes would affect these initiatives.

The report does emphasize the need for clearer complaint processes for users experiencing harm. However, preventative education remains an essential yet overlooked component of online safety.

What’s Next?

The Albanese government has stated it will respond to the review in due course, but action before the election is unlikely. Regardless of which party comes to power, implementing these recommendations alongside robust misinformation countermeasures and educational initiatives should be a top priority to ensure a safer digital environment for all Australians.

(The Conversation)