Video platforms like YouTube are widely used to share advice and get help with problems. But what happens when unreliable or low-quality information is shared about health, news, politics, and legal matters?
At least for health, the YouTube team has created a new intervention that marks when a video comes from an authoritative, reliable source. Here is an example of a user journey to find advice about detecting skin cancer, and how YouTube intervenes to signal that the information is authoritative.
- I was reading a news article about the rise in skin cancer.
- The article linked to an official health website, which it said could teach you how to check for skin cancer using a reliable source.
- The official health website had an embedded YouTube video that promised to walk you through how to check for skin cancer in under two minutes.
- When I pressed play, an overlay appeared in the top left corner for the first 30 seconds of the video, saying that the video was from an official health authority.
- Clicking that official health authority message paused the video and showed an overlay with two options, explaining why I was seeing this and where I could learn more (a rough sketch of this interaction follows the list).
- When I clicked one of those links, it took me to a YouTube/Google health page that explained why the source was legitimate and how it was chosen.
- Going one level up on the YouTube help page, I saw an overview of help topics for areas prone to misinformation, like news and health.
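For a concrete sense of the interaction, here is a rough sketch of how a page embedding a video might implement a similar badge-and-panel overlay. This is not YouTube's actual implementation (its information panels are rendered by YouTube itself); the element IDs, the "learn more" URL, and the video ID are placeholders, and only the public IFrame Player API calls (`YT.Player`, `pauseVideo`) are real.

```ts
// Hypothetical sketch of a badge-and-panel overlay on an embedded video.
// Element IDs, SOURCE_INFO_URL, and the video ID are placeholders; only the
// public YouTube IFrame Player API calls (YT.Player, pauseVideo) are real.

// Minimal ambient typings for the parts of the IFrame Player API used below.
declare namespace YT {
  class Player {
    constructor(elementId: string, options: {
      videoId: string;
      events?: { onStateChange?: (e: { data: number }) => void };
    });
    pauseVideo(): void;
  }
  const PlayerState: { PLAYING: number };
}

const SOURCE_INFO_URL = "https://example.com/why-this-source"; // placeholder "learn more" page

const badge = document.getElementById("source-badge")!; // small overlay positioned top left over the player
const panel = document.getElementById("source-panel")!; // "why am I seeing this?" panel with two links

let badgeShown = false;

const player = new YT.Player("player", {
  videoId: "PLACEHOLDER_VIDEO_ID",
  events: {
    onStateChange: (e) => {
      // Show the badge once, when playback first starts, then hide it after 30 seconds.
      if (e.data === YT.PlayerState.PLAYING && !badgeShown) {
        badgeShown = true;
        badge.hidden = false;
        setTimeout(() => (badge.hidden = true), 30_000);
      }
    },
  },
});

// Clicking the badge pauses the video and opens the explanatory panel,
// which links out to a page on why the source is presented as authoritative.
badge.addEventListener("click", () => {
  player.pauseVideo();
  panel.querySelector("a.learn-more")?.setAttribute("href", SOURCE_INFO_URL);
  panel.hidden = false;
});
```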
Could there be an intervention like this for legal content?
Here are the screenshots of the health website & YouTube video intervention.



