The Australian lobby group for tech giants including Google, Facebook and Twitter has moved to strengthen a voluntary code aimed at reducing misinformation online, as global calls grow for stricter regulation of internet platforms.
Tech industry association DIGI will set up an independent board to police a voluntary code on misinformation and disinformation, which it launched in February at the request of the government. DIGI members Facebook, Google, Twitter, Microsoft and the viral video app TikTok have all signed up to the code, which requires technology companies to tell users what measures they have in place to stop the spread of misinformation on their services and to provide annual ‘transparency’ reports describing their efforts.
“When we launched the Code in February, DIGI publicly committed itself to introducing independent oversight to strengthen its governance, which we have developed over the last few months,” said DIGI CEO Sunita Bose.
The moves to strengthen the voluntary code come amid fresh anger from politicians in both Australia and the US over the tech giants’ failure to police false information on their platforms and over other issues, such as targeting vulnerable users with problematic advertising.
But despite growing calls for stricter rules, Bose said the digital industry was able to regulate itself. “Codes developed by industry associations are widely used to regulate a range of industries, such as media, advertising and telecommunications,” she said. “We wanted to strengthen this code and build public confidence in it by providing independent expert oversight and public accountability for compliance.”
Prime Minister Scott Morrison and Deputy Prime Minister Barnaby Joyce both blasted the tech companies last week, threatening the industry with tougher regulation after false stories spread across the platforms. Meanwhile in the US, former Facebook employee and whistleblower Frances Haugen said the company had misled investors and the public about its role in spreading disinformation and violent extremism in connection with the 2020 election and the January 6 riot at the US Capitol. She also said that Facebook had implemented safeguards against the spread of misinformation but rolled them back after the election, a claim the social media giant disputes.
In a U.S. Senate hearing two weeks ago, Facebook’s failure to honour its public commitments was exposed again when new research showed it had not stopped targeting teens with ads for weight loss products. Minderoo’s Frontier Tech Initiative, funded by local billionaire Andrew ‘Twiggy’ Forrest, commissioned research earlier this year that focused on how Facebook generates revenue from young people’s personal data. As a result of that work, Facebook announced it would ban inappropriate ad targeting of young people on Facebook and Instagram, but a U.S. organisation that runs the Tech Transparency Project found that this was not the case.
Investigations by online misinformation monitor NewsGuard revealed last week that conspiracy theories about COVID-19 were still being served to young children on TikTok, despite research it sent to the British government and the World Health Organization (WHO) in June.