On September 30, 2025, OpenAI launched Sora 2—not just an AI video generator, but a full social media platform with a TikTok-style feed where users create and share imaginary scenes that look completely real.
THE BASICS
What it is: Sora 2 creates photo-realistic videos from text prompts. Type “woman walking down a city street at night” and get footage that looks like you filmed it with a camera. The problem? Your teen cannot reliably tell the difference between actual footage and AI-generated imaginary scenes.
Who can access it:
- Sora 2 social app: Currently invite-only on iOS (US and Canada only)
- Sora 2 video generation: Available to ChatGPT Pro subscribers ($200/month) without invite
- Original Sora: Available to ChatGPT Plus ($20/month) and Pro subscribers
What this means: Even if your teen doesn’t have the social app, they could generate videos if they have a ChatGPT subscription. Videos can be downloaded and shared on Instagram, TikTok, and other platforms where AI labels are lost.
THE “CAMEOS” FEATURE RAISING ALARMS
Sora 2 lets users upload their likeness once, then share permission with friends to include them in AI-generated videos. Friends can then create videos featuring you doing anything, anywhere.
Why this is dangerous: Even if you trust someone with access to your likeness, they could:
- Generate deceptive content that harms your reputation
- Create videos you never approved
- Share your likeness permission with others without your knowledge
While users can revoke access anytime, once a video is generated and downloaded, it can be shared anywhere. The damage can happen before you know about it.
OpenAI admits: “Non-consensual videos are a persistent problem with AI-generated video, causing significant harm with few laws explicitly governing platform responsibility.”
THE CRITICAL RISKS
1. Fake vs. Real is Broken
Your teen will see AI-generated imaginary scenes shared as if they’re real events. News footage could be fake. “Evidence” of something happening could be imaginary. Videos of people “doing” things could be completely fabricated.
The watermark problem: OpenAI adds visible watermarks to all Sora 2 videos. However, within seven days of launch (by October 7, 2025), third-party watermark-removal tools had become widely available. The safety measure has already been circumvented.
Critical conversation: “Before you believe ANY video you see online, ask: Where did this come from? Could this be AI-generated? What’s the original source? Remember: watermarks can be removed.”
2. Your Child’s Photos Are Training Data
The evidence: In July 2024, Human Rights Watch published an investigation revealing that personal photos of Australian children were found in LAION-5B, a massive dataset of 5.85 billion image-text pairs used to train AI models.
Key findings:
- Photos taken from social media without knowledge or consent
- Privacy settings didn’t prevent scraping—even photos with strict settings ended up in the dataset
- In June 2024, 50 girls from Melbourne discovered their social media photos were used to create sexually explicit deepfakes
- Photos you posted years ago—Instagram, Facebook, TikTok, parent groups
Meta’s AI training: Since 2007, Meta has been using publicly available photos (including children) to train AI models. Photos of children shared publicly by adults are included in AI training datasets.
The harsh reality: Once you’ve uploaded your child’s image to social media, you’ve lost control over how AI systems might use it.
What to do NOW:
- Think twice before uploading ANY new photos/videos of your children
- Review privacy settings (though this may not prevent scraping)
- Consider removing old photos or making them friends-only
- Remove location data and metadata before posting
- Never post photos of other people’s children
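For parents comfortable with a little code, stripping metadata can be automated. The sketch below uses the Pillow library (`pip install Pillow`) to re-save only a photo's pixel data, discarding EXIF tags such as camera make and GPS location; the file names and the "TestCamera" tag are placeholders for illustration. Note that some platforms strip EXIF on upload, but photos shared by email, cloud links, or other channels often keep it.

```python
from PIL import Image  # pip install Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save only the pixel data, dropping EXIF (including GPS) metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels only, no metadata
        clean.save(dst_path)

# Demo: create a small test photo carrying a fake camera tag, then strip it.
photo = Image.new("RGB", (64, 64), (200, 120, 80))
exif = Image.Exif()
exif[0x010F] = "TestCamera"  # 0x010F is the EXIF "Make" tag
photo.save("original.jpg", exif=exif.tobytes())

strip_metadata("original.jpg", "clean.jpg")
with Image.open("clean.jpg") as out:
    print(dict(out.getexif()))  # an empty dict means the metadata is gone
```

This is a minimal sketch; dedicated tools such as exiftool offer more thorough metadata removal, and it is worth spot-checking a cleaned file before posting.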
Talk to your teens: “The photos you post now could be used by AI to create fake videos later. Once it’s online, we can’t control what happens to it.”
3. Cyberbullying, Consent Violations, and Reputation Damage
- Fake videos of classmates doing things they never did, using photos scraped from social media
- The “cameos” feature makes this easier—give someone permission “for fun,” and they can create harmful content
- Videos can affect college applications, jobs, relationships
- Even when proven fake, the damage is done
- Friends can misuse “cameos” access or share permissions without your knowledge
4. Misinformation Spreads Faster
- Fake news videos look like real footage
- “I saw the video” is no longer proof
- Videos downloaded from Sora and shared on other platforms lose AI labels
- Your teen could share false information believing it’s real
WHAT PLATFORMS ARE (OR AREN’T) DOING
Current state:
- Visible watermarks (removal tools appeared within 7 days of launch)
- C2PA metadata (can be stripped when downloaded)
- Blocking CSAM and sexual deepfakes
- Uses copyrighted content by default unless rights holders opt out
Other platforms: Instagram, TikTok, YouTube have inconsistent AI content policies. Most don’t require AI labels. No standardized verification. Images from these platforms are being scraped for AI training.
The gap: Sora 2 launched three weeks ago with social features. Watermark removal tools appeared within a week. Platform responsibility for AI-generated harm remains unclear.
THE CONVERSATIONS TO HAVE
Talk about fake vs. real: “Videos don’t show reality anymore—they show what someone wants you to think is reality. Before you believe any video, verify the source. If someone shares a shocking video, check it before believing or resharing.”
Talk about your family photos: “Every photo I’ve posted of you could be used by AI companies. Research proves children’s photos are in AI training datasets. From now on, we’re being much more careful. Before posting: Who can see this? Could it be used for AI? Is sharing this worth the risk?”
Talk about the “cameos” feature: “Sora 2 lets you share your appearance with friends so they can put you in AI videos. Think about: Even friends you trust could create something harmful. They could share your permission with others. Once a video is made and downloaded, you can’t control where it goes.”
Talk about consent: “Never create videos of other people without permission—even AI-generated ones. How would you feel if someone used your photos to make a fake video of you?”
Talk about what to do if targeted: “If someone creates a fake video of you: Don’t engage or retaliate. Screenshot evidence immediately. Report it to the platform. Tell a trusted adult immediately. Document everything. We will help you through this.”
PRACTICAL RULES TO CONSIDER
For all ages:
- STOP uploading photos/videos of children to social media without careful consideration
- Review privacy settings (though scraping may still occur)
- Make profiles as private as possible
- Remove location data and metadata before posting
- Family rule: No AI videos of family members without approval
- Teach them to question EVERY video they see online
- Verify sources before believing or sharing videos
For younger teens (13-14):
- No AI video tools without supervision
- Review anything they create before posting
- No using photos of real people without explicit permission
- No sharing “cameos” permissions without parent approval
- Understand that watermarks can be removed
For older teens (15-17):
- AI-generated content must be clearly labeled if posted
- Never create content of others without consent
- Be extremely cautious about sharing “cameos” permissions
- Recognize that even friends can misuse access to your likeness
For parents:
- Audit your social media for photos of your children NOW
- Tighten privacy settings immediately
- Consider deleting old public photos
- Assume any public photo will be used for AI training
- Have ongoing conversations, not just one talk
THE BIGGER PICTURE
The timeline:
- 2007-present: Meta using publicly available photos (including children) for AI training
- Before 2024: Children’s photos scraped into LAION-5B dataset
- June 2024: 50 Melbourne girls’ photos used to create sexual deepfakes
- July 2024: Human Rights Watch confirms children’s photos in AI training data
- September 30, 2025: Sora 2 launched with social features
- October 7, 2025: Watermark removal tools prevalent (7 days after launch)
What kids need to develop:
- Critical skepticism about ALL video content
- Understanding that realistic ≠ real
- Awareness their photos can be and are being misused
- Ability to verify sources before believing/sharing
- Recognition that watermarks and labels aren’t reliable
What parents need to accept:
- Photos you’ve already posted are likely in AI training datasets
- You cannot completely prevent teen access to these tools
- Restriction alone won’t work—education and values matter more
- This requires ongoing conversation, not one talk
- Even with invite-only apps, video generation is already available
NEXT STEPS
Check if your teen has access:
- Do they have a ChatGPT Plus or Pro subscription? (They can generate videos)
- Do they have the Sora 2 iOS app?
- Do their friends have access?
- Have they seen videos that seemed suspicious?
Audit your social media NOW:
- Review every platform where you’ve posted children’s photos
- Assume publicly available photos have been scraped
- Change privacy settings to most restrictive
- Consider deleting public photos or making them friends-only
- Stop tagging children
Start the conversation tonight: “I learned about Sora 2. It’s a social media app where people create videos that look real but are imaginary. Even without the social app, people with ChatGPT subscriptions can generate these videos. We need to talk about: How do we know what’s real anymore? What photos should we remove from social media? How do we protect ourselves? What would you do if someone made a fake video of you?”
Set family guidelines together:
- Rules about what can be posted publicly
- How to verify videos before believing them
- What to do if targeted by fake content
- Boundaries around AI tool use
- Guidelines for “cameos” if they use Sora 2
- Agreement to have ongoing conversations
THE BOTTOM LINE
“Seeing is believing” is officially over. Videos don’t show reality—they show what someone wants you to think is reality.
Sora 2 launched three weeks ago. Video generation is available to ChatGPT subscribers now. The photos you posted years ago are already in AI training datasets. Watermark removal tools existed within a week of launch.
This isn’t a future problem. It’s happening now.
Think twice before uploading any images online, because you can’t take them back once AI has learned from them. Teach your teen to question every video. And have ongoing conversations about consent, ethics, and the difference between realistic and real.
SOURCES:
Primary Sources:
- OpenAI Sora 2 Announcement: https://openai.com/index/sora-2/
- OpenAI Sora Launch: https://openai.com/index/sora-is-here/
- TechCrunch – Sora as Social Platform: https://techcrunch.com/2025/09/30/openai-is-launching-the-sora-app-its-own-tiktok-competitor-alongside-the-sora-2-model/
AI Training Data & Children’s Photos:
- Human Rights Watch Investigation: https://www.hrw.org/news/2024/07/03/australia-childrens-personal-photos-misused-power-ai-tools
- UNSW Sydney Analysis: https://www.unsw.edu.au/newsroom/news/2024/07/photos-Australian-kids-massive-AI-dataset
- Meta AI Training Controversy: https://telehealth.org/blog/facebooks-ai-training-controversy-the-ethical-implications-of-using-childrens-photos/