Misinformation and disinformation were a growing problem even before the COVID-19 pandemic. Social media and other digital platforms have made it easy to spread false information, and this has become a major concern. In a pandemic, the stakes are higher: misinformation can be a matter of life and death. As people rely on social media to stay connected and informed, these same platforms can do real harm by spreading misinformation to those who may not know what is actually happening.
Misinformation and disinformation related to the COVID-19 pandemic refer to different things: disinformation is the intentional spreading of misleading or false information, while misinformation is the unintentional sharing of false information. Either way, both spread falsehoods, and that is a serious threat to public health. Misinformation and disinformation enabled by social media and digital platforms have complicated the public health response to the pandemic, fueling confusion, distrust of governments, and further spread of the virus.
With tech companies at the centre of this information war, they have a role to play in stopping the spread of false information. No one company can do everything; this is a collective effort that should be coordinated and include the general public, private organizations, governments, and non-governmental organizations.
Encourage Conscious Participation
Most social media and digital platforms are designed for ease of use: the more bottlenecks there are, the less enjoyable the experience. This is exactly how these platforms are structured to get you to spend more time on them, making it effortless to interact and share information that is relevant to you. That is good for engagement, but it also makes it easier to share misinformation, since there are so few steps between seeing a post and passing it on. Introducing friction can slow this down and force people to think before sharing.
This can be implemented in a number of ways, but the end goal is to make sure people know what they are sharing before they share it. We have seen this on platforms such as WhatsApp, which warns users when a post they are forwarding has already been forwarded many times. This can make people pause and check whether a post is true before passing it on. Twitter has implemented a similar measure, not specifically to address the current pandemic but to help curb the spread of misinformation generally.
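The forwarding friction described above can be sketched in a few lines. This is a minimal, hypothetical illustration in the spirit of WhatsApp's "forwarded many times" label; the threshold, function names, and message structure are all assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of forward-count friction. The threshold is an
# assumed value, not WhatsApp's real cutoff.
FORWARD_WARNING_THRESHOLD = 5

def forward_message(message: dict) -> dict:
    """Increment a message's forward count and attach a warning label
    once it crosses the threshold, prompting the user to pause."""
    message = {**message, "forward_count": message.get("forward_count", 0) + 1}
    if message["forward_count"] >= FORWARD_WARNING_THRESHOLD:
        message["label"] = "Forwarded many times"
    return message

# After enough forwards, the label appears alongside the message.
msg = {"text": "Miracle cure found!", "forward_count": 0}
for _ in range(5):
    msg = forward_message(msg)
```

The point of the design is not to block sharing, only to surface a cue that makes the user pause before the next forward.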
Provide Additional Information
The spread of false information is not always malicious; often the people spreading it have no idea they are doing so. With that in mind, platforms can provide users with more context when they share certain kinds of information. Facebook and YouTube have implemented this well: both show users authoritative information about the pandemic when they share posts or links about the virus. This helps them better understand what they are sharing, and their followers get that context too when the posts appear in their feeds.
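A context panel of this kind can be approximated with simple keyword matching. This is a toy sketch, not how Facebook or YouTube actually detect pandemic-related content; the keyword list, panel text, and function name are illustrative assumptions.

```python
# Hypothetical sketch of attaching an information panel to posts that
# mention the pandemic. Real platforms use far more sophisticated
# detection; this only shows the shape of the mechanism.
COVID_KEYWORDS = {"covid", "covid-19", "coronavirus", "vaccine", "pandemic"}
INFO_PANEL = "For the latest on COVID-19, see your national health authority."

def render_post(text: str) -> dict:
    """Return the post plus a context panel when it appears pandemic-related."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    panel = INFO_PANEL if words & COVID_KEYWORDS else None
    return {"text": text, "context_panel": panel}
```

Because the panel travels with the post, both the sharer and everyone who later sees it in their feed get the same pointer to authoritative information.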
Finding other ways to aid users in processing and interpreting content must be a part of the mis/disinformation response.
Social media platforms tend to collapse social context for users, displaying all content in a similar way despite varied sources, relationships, history, or purpose and intermediating relationships between content producers and content consumers. As a result, these designs remove the social cues and contextual information that could help a user better interpret and make sense of what they’re seeing.
Invest in Content Moderation
Not everyone will be happy with this, but it is an important step in fighting misinformation. Social media and digital platforms should invest in human content moderators who can help remove false information. These platforms already do this to some extent, but most rely on automated systems for content moderation. Automation is fine, but there also needs to be a way to quickly confirm the authenticity of flagged content. Doing so will slow the spread of false information on their platforms.
This is key during the current pandemic, which has seen so many unconfirmed reports, research findings, and data shared on such platforms. By keeping up with this, platforms can ensure users are able to quickly report content that may be false, and an actual moderator can promptly look into it.
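The hybrid pipeline described above, automated flagging feeding a human review queue, plus user reports, can be sketched as follows. Everything here is a hypothetical illustration: the phrase list stands in for a real classifier, and the function names are invented for this example.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated
# check flags suspect posts, flagged items join a queue for a human
# moderator, and users can escalate content directly to that queue.
from collections import deque

# Toy rules standing in for a real automated misinformation classifier.
SUSPECT_PHRASES = ("miracle cure", "5g causes covid")

review_queue: deque = deque()

def auto_flag(text: str) -> bool:
    """Crude stand-in for automated detection of suspect content."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SUSPECT_PHRASES)

def submit_post(text: str) -> str:
    """Publish a post, or hold it for human review if auto-flagged."""
    if auto_flag(text):
        review_queue.append(text)
        return "pending_review"
    return "published"

def report_post(text: str) -> None:
    """Let users send content straight to the human review queue."""
    review_queue.append(text)
```

The key design point is that automation only triages: a flagged or reported post waits in the queue until a human moderator confirms whether it is actually false.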