Australian adults can now tattle to the government if they’re being bullied on social media.

New laws allow Australia’s eSafety Commissioner to compel platforms such as Twitter, Facebook, and Instagram to remove “cyber‑abuse material” within 24 hours, or face a hefty fine. It’s a second avenue of recourse for those dissatisfied with the platforms’ moderation policies.

Described by government officials as a “world first cyber-abuse take-down scheme to protect adults,” Australia’s Online Safety Act came into effect on Sunday after being passed by the Australian Senate last June. Under the new legislation, the eSafety Commissioner can issue a removal notice to a platform if it hasn’t taken down a post within 48 hours of receiving a complaint about it.

This doesn’t mean Aussies can run to the government about every single tweet they find objectionable, though. Offending posts must be “menacing, harassing or offensive,” as well as likely intended to cause “serious harm to a particular Australian adult.” Mere hurt feelings won’t suffice, and because the law only covers abuse aimed at a specific individual, posts attacking groups on the basis of characteristics such as race or gender won’t fall under it.

The eSafety Commissioner noted the criteria for determining what falls under “cyber‑abuse material targeted at an Australian adult” have intentionally been set high “to ensure it does not stifle freedom of speech.”

“These new laws … place Australia at the international forefront in the fight against online abuse and harm – providing additional protections for Australians in the fight against online harms through our approach of prevention, protection, and proactive change in the online space,” said eSafety Commissioner Julie Inman Grant.

If a company fails to comply with a removal notice, it can be fined up to AU$555,000. Repeated refusals carry more severe consequences.

“Under these new laws, if websites or apps systematically ignore take down notices from eSafety for this type of content, they could see their sites delinked from search engines or their apps removed from app stores,” the office of Communications Minister Paul Fletcher said in a media release last December.

When reached for comment, Facebook and Instagram’s parent company Meta indicated an intention to cooperate with the Australian government under the new laws.

“We have rules in place to help keep our communities safe, like removing harmful and hateful content from our platforms,” Mia Garlick, Director of Public Policy at Meta Australia, New Zealand and Pacific Islands, told Mashable. “In addition to our own policies and tools, we’ve supported the introduction of online safety regulation, including Australia’s Online Safety Act. We have a long track record of a productive working relationship with the eSafety Commissioner, and we’ll continue to work constructively with her Office on important safety matters.”

Mashable has also reached out to Twitter for comment.

“The internet has brought immense advantages, but also new risks, and Australians rightly expect the big tech companies to do more to make their products safer for users,” Fletcher said on Sunday.

UPDATE: Jan. 24, 2022, 6:52 p.m. This article has been updated with comment from Meta.
