Our team is beginning preliminary explorations of how we might audit social media platforms like TikTok, Instagram, and YouTube to see how legal help is (or could be) delivered on them.
Based on our research assistant Carolina Nazario’s work, here are two themes for research and design that we’re exploring:
Area 1: Spotting Legal Influencer & Public Interest Outreach Opportunities on TikTok and Instagram
- Understanding the landscape of different influencer types & volumes on these platforms, including what they post about, who they reach, what their motives are, and what limits they have
- Zooming in on where public interest content (about rights, services, free help, common issues that people experience) is actually being served up. Who’s sharing this now? What format/presentation is most common? What’s the engagement level? (The sketch after this list shows one way we might start measuring this.)
- Imagining New Opportunities for Content: Are there parallel issues, maybe in the criminal space or other policy/system areas, that could be models for how public interest content could be served up better? Like, more walkthroughs? More skits? More…?
- What’s the potential in the Comments & Interactions? Are people asking for help? Are they seeking out services? Are they sharing advice? What could be done here by the OPs, or by bots/platforms?
- What’s the potential for Platform Policy/Design Changes? For the TikTok and Instagram platforms in particular, what would it look like to have ‘authoritative’ content inserted? Would that be seen as censorship or suspect? How could it be done in a way that fits this community?
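
For the engagement question above, even a small hand-coded sample of posts could be tallied with a short script. Here’s a minimal sketch, assuming we’ve already collected posts (by hand-coding or via a platform’s researcher tools) into a CSV with hypothetical columns — platform, format, views, likes, comments, shares — and a hypothetical filename; nothing here is tied to any platform’s actual API.

```python
# Sketch: summarizing engagement on public-interest legal content,
# grouped by presentation format (e.g., walkthrough vs. skit).
# Column names and the input file are illustrative assumptions.
import csv
from collections import defaultdict

def summarize_engagement(path: str) -> dict:
    """Average engagement rate (likes+comments+shares per view) by format."""
    totals = defaultdict(lambda: {"interactions": 0, "views": 0, "posts": 0})
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            fmt = row["format"]  # e.g., "walkthrough", "skit", "talking-head"
            totals[fmt]["interactions"] += (
                int(row["likes"]) + int(row["comments"]) + int(row["shares"])
            )
            totals[fmt]["views"] += int(row["views"])
            totals[fmt]["posts"] += 1
    return {
        fmt: {
            "posts": t["posts"],
            "engagement_rate": t["interactions"] / t["views"] if t["views"] else 0.0,
        }
        for fmt, t in totals.items()
    }

if __name__ == "__main__":
    # "legal_posts.csv" is a hypothetical export of our collected sample.
    for fmt, stats in summarize_engagement("legal_posts.csv").items():
        print(f"{fmt}: {stats['posts']} posts, "
              f"engagement rate {stats['engagement_rate']:.3f}")
```

Even this crude per-format comparison would let us say whether, for instance, skits outperform walkthroughs on engagement before we invest in producing either.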
Area 2: Understanding & Responding to Harmful Legal Crisis/Conflict Content on Social Media
- Profiling the most popular, frequently seen content under certain hashtags or topic areas. Can we show the legal and government agency audience what’s happening on the platform: very concerning videos of ‘eviction days’, showing people being set out from their homes, where the tone is highly adversarial, dehumanizing, etc.? Describe what this new trend is.
- Quantify: go beyond the anecdote. What’s the volume of content like this? What are the kinds of comments & reactions, and how much vitriolic comment activity is there? How quickly would a newbie user see extreme, polarizing videos like Eviction Days? (A rough sketch of this kind of tally follows this list.)
- Explore the policy and legal concerns. According to the platform’s rules about privacy, consent, and treatment of others, what concerns can we identify with these kinds of videos documenting very sensitive legal proceedings and the execution of court judgments? According to local or national laws about privacy, are there issues around protecting people from having these recordings bias future housing or employment decisions? Are there other harms? We may even run a workshop about this trend to spot the harms. We must also be realistic about the platform’s track record in enforcing its own policies: even if we spot harms, will that translate into platform changes?
- Find parallel topics & interventions. What are ways to address the harms we identify? What are the platform policy-making and design opportunities to balance out adversarial/dehumanizing posts, or to make it less likely that people will go down this rabbit hole? Are there counter-content opportunities, to surface more information and viral content from other points of view or from a less adversarial approach? What can we learn from health, elections, news, education, vaccines, and other topic areas?
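
For the quantification step above, here’s a rough sketch of a first-pass tally, assuming comments have been exported as JSON-lines records with hypothetical hashtag and text fields. The keyword lexicon is a deliberately crude stand-in; a real audit would use a validated toxicity classifier rather than a hand-picked word list.

```python
# Sketch: quantifying the volume and tone of "eviction day"-style content.
# Field names, the lexicon, and the input file are illustrative assumptions.
import json
import re
from collections import Counter

# Hypothetical stand-in lexicon of dehumanizing terms; a serious study
# would build and validate this, or use a trained toxicity model.
VITRIOL_TERMS = re.compile(r"\b(deadbeat|freeloader|trash|parasite)\b",
                           re.IGNORECASE)

def profile_hashtags(path: str) -> None:
    """Count comments per hashtag and the share flagged as vitriolic."""
    volume = Counter()     # comments seen per hashtag
    vitriolic = Counter()  # comments matching the lexicon
    with open(path, encoding="utf-8") as f:
        for line in f:     # one JSON record per line
            record = json.loads(line)
            tag = record["hashtag"]
            volume[tag] += 1
            if VITRIOL_TERMS.search(record["text"]):
                vitriolic[tag] += 1
    for tag, n in volume.most_common():
        share = vitriolic[tag] / n
        print(f"#{tag}: {n} comments, {share:.1%} flagged as vitriolic")

if __name__ == "__main__":
    # "eviction_comments.jsonl" is a hypothetical export of collected comments.
    profile_hashtags("eviction_comments.jsonl")
```

Numbers like these, however rough, are what would let us move the conversation with platforms and agencies from “here is a disturbing video” to “here is how prevalent and how toxic this trend actually is.”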