Australian Children’s Photos Secretly Used to Train AI: Public Outcry Ensues!

  • Editor
  • July 3, 2024 (Updated)

Key Takeaways:

  • Human Rights Watch (HRW) found 190 photos of Australian children in the LAION-5B dataset, used to train AI tools like Stable Diffusion and Midjourney.
  • These photos were scraped from various online sources without the consent of the children or their families, raising significant privacy concerns.
  • The dataset has been used to create potentially harmful AI tools, including deepfake applications.
  • The Australian government is expected to propose changes to the Privacy Act to better protect children’s online privacy.

Photos of Australian children have been found in a massive dataset used to train artificial intelligence (AI) systems, raising serious concerns about privacy and consent.

Human Rights Watch (HRW) discovered that the LAION-5B dataset, which contains links to 5.85 billion images and their captions, includes personal photos of Australian children collected without their knowledge or consent.

The dataset has been used to train image-generating AI tools such as Midjourney and Stable Diffusion.

As the news spread online, people around the world voiced their outrage.

Facts About the Findings!

HRW found 190 photos of Australian children in the dataset, including intimate family moments and images from school events. These photos were scraped from various online sources, including unlisted YouTube videos and school websites.

“These are not easily findable on school websites,” said Hye Jung Han, HRW’s children’s rights and technology researcher. “They might have been taking images of a school event or like a dance performance or swim meet and wanted a way to share these images with parents and kids.

“It’s not quite a password-protected part of their website, but it’s a part of the website that is not publicly accessible unless you were sent the link. These were not webpages that were indexed by Google.”

HRW also found images of Indigenous children, with some photos over a decade old.

Han noted, “This raised questions about how images of recently deceased Indigenous people could be protected if they were included in the dataset being used to train AI.”

LAION, the German non-profit that manages the dataset, said the reported images had been removed, but noted that AI models already trained on them cannot unlearn that data.

“With regard to links to images on public internet available in LAION datasets, we can confirm that we worked together with HRW and remove[d] all the private children data reported by HRW,” a spokesperson said. “As long as those images along with private data remain publicly accessible, any other parties collecting data will be able to obtain those for their own datasets that will remain closed in most cases.”

Legal and Ethical Implications

The discovery has led to calls for urgent legal reforms to enhance online privacy protections, particularly for children. Experts argue that existing privacy laws are insufficient and need updating to address the challenges posed by AI technologies.

Hye Jung Han emphasized, “No one knows how AI is going to evolve tomorrow. I think the root of the harm lies in the fact that children’s personal data are not legally protected, and so they’re not protected from misuse by any actor or any type of technology.”

Government and Legislative Response

The Australian government is expected to propose changes to the Privacy Act to better protect children’s online privacy.

Attorney-General Mark Dreyfus has introduced reforms to ban the non-consensual creation and sharing of deepfake pornography, but HRW argues that more comprehensive measures are needed to protect children’s personal data from misuse.

Public and Parental Concerns

Parents are advised to be cautious about sharing their children’s photos online due to the risks of unwanted surveillance and misuse.

However, experts acknowledge that complete avoidance is challenging, and the onus should be on tech companies and regulatory bodies to ensure data protection.

Last month, a teenage boy was arrested and later released after AI-generated nude images depicting about 50 female students from Bacchus Marsh Grammar were circulated online.

This incident underscores the potential for AI tools to cause significant harm when misused.

HRW also found that almost all free “nudify” apps have been built on LAION-5B, and that some of these apps are being used to harm children.

“Almost all of these free nudify apps have been built on LAION-5B because it is the biggest image and text training dataset out there,” Han said. “It’s being used by untold numbers of AI developers, and some of those apps were specifically being used to cause harm to children.”

The unauthorized use of children’s photos in AI training datasets highlights significant gaps in current privacy protections.

There is an urgent need for updated legislation and responsible AI practices to safeguard children’s rights and prevent the misuse of their personal data.

For more news and trends, visit AI News on our website.
