Easy access to information and the Internet has had a significant impact on society. From increasing general cultural and journalistic literacy to empowering people to stay connected, the information age has certainly transformed the way society communicates. However, it has also brought challenges, and the spread of disinformation is central among them.
Fear of disinformation was especially prevalent during the Covid-19 pandemic, when there was enormous global confusion about what the virus was, who could be affected, and how to prevent infection and death. Many non-medical professionals shared views on the virus that contradicted those of trained medical professionals, causing not only chaos but a sense of distrust in the broader community.
This is a perennial challenge for social media and public information platforms; companies and moderators struggle with the ever-delicate balance between excessive moderation that limits free speech and permissive policies that let disinformation propagate.
At the end of last week, YouTube, one of the largest online media and video-sharing platforms in the world, announced a new initiative on this front. Dr. Garth Graham, Global Head of YouTube Health, explained in a blog post titled “New Ways for Licensed Healthcare Professionals to Reach People on YouTube”: “When it comes to our health, people trust healthcare professionals to give us the best advice. But the opportunity healthcare professionals have to inform and educate their patients largely stops at the clinic door. The reality is that most healthcare decisions are made outside the doctor’s office, in the daily life of our patients […] Today, we announce that for the first time, certain categories of healthcare professionals and healthcare information providers are eligible to apply to make their channels eligible for the capabilities of our healthcare products launched in the United States last year. This includes health information panels that help viewers identify videos from authoritative sources and shelves of health content that highlight videos from these sources when researching health topics, so that people can more easily navigate and evaluate health information online.”
Essentially, YouTube is attempting to surface higher-quality health information across its platform, in the hope that this will help the wider YouTube community find more legitimate content and sources they can trust.
YouTube is certainly not the only platform experiencing growing pains on this front. Facebook (now known as “Meta”) has for many years faced intense scrutiny over how content is moderated on its platforms, from the Facebook application itself to Instagram. The company’s founder and CEO, Mark Zuckerberg, has been repeatedly questioned on this topic, especially since the platforms reach nearly 2 billion people per month.
More recently, Tesla founder Elon Musk’s dramatic purchase of Twitter Inc. has attracted significant media attention. One of Musk’s self-proclaimed motivations for buying the company was reportedly his dissatisfaction with the way Twitter moderated content and controlled the flow of information. In a post last week, Musk shared an open note addressed to “Twitter Advertisers”: “The reason I acquired Twitter is because it is important for the future of civilization to have a common digital place, where a wide range of beliefs can be discussed in a healthy way, without resorting to violence. There is currently a great danger of social media breaking into far-right and far-left echo chambers that generate more hatred and divide our society.”
Musk has also announced the formation of a “content moderation board” that will review content and account reinstatement decisions, to ensure better alignment with Twitter’s mission and guidelines.
Indeed, this matters for health disinformation as well, as Twitter is a conduit for massive amounts of healthcare data and news. During the height of the pandemic, doctors and providers from all backgrounds used Twitter (the most common hashtags included #MedTwitter, #MedEd for “medical education,” and #FOAMed for “Free Open Access Medical education”) to share scenes from the front lines, often relating their experiences and advice on dealing with the virus and other diseases. Of course, this soon led to the perennial problem: whom should you trust? Notably, Twitter boasts nearly 450 million monthly active users.
None of this content moderation work is going to be easy. The problem exists because of a constant tension between the right to free speech, the dissemination of truthful information, and concerns about public safety. Finding the right balance among these factors is proving notoriously difficult for these companies. In fact, the best thing users can do for themselves is ultimately to consult trusted, licensed, and trained medical professionals about any health issues. Still, the question of how best to convey information is one of the most important issues thought leaders should consider, as the future of our world may really depend on it.