The internet is often described as one of the greatest inventions of mankind. It has transformed everyday lives of about 4.4 billion people across the globe. However, in recent years there has been growing demand to regulate the internet because of the proliferation of fake news, cyberattacks, propagation of extremist content by terrorist outfits, an increase in the recruitment of potential militants, and the weaponization of social media, among other challenges.
Currently, internet regulation falls between two extremes. The first approach is the free, unhindered internet access championed by American law, whereas the other is adopted by China, where a complex network of telecom companies, human content moderators and legal instruments regulates the internet in what is generally referred to as the ‘Great Firewall’.
Keeping these two approaches in consideration, a white paper released by the British government on online harms calls for a balanced approach, a middle path demanded by a majority of people across the world. Described as the first global attempt to deal with all forms of online harm in a coherent manner, the white paper aims for a free, open, vibrant and secure internet while guaranteeing users protection against harm.
The white paper proposed a new regulatory framework applicable to companies that allow users to share or discover user-generated content, or to interact with each other online, through social media platforms, file-hosting sites, public discussion forums, messaging services and search engines. In other words, platforms will neither be regarded as neutral conduits nor be disassociated from the content spread through their services.
The white paper stated that initial focus would be on platforms which posed ‘the biggest and clearest risk of harm to users, either because of the scale of the platforms or because of known issues with serious harms’.
Under the new regulatory framework, the British government will establish a new statutory ‘duty of care’ to make companies take on more responsibility in relation to their users and tackle all harm caused by content or activity on their services. An independent regulator will oversee and enforce compliance towards this ‘duty of care’. In brief, companies will have to undertake the following measures:
- Make relevant terms and conditions pertinent to the ‘duty of care’ sufficiently clear and easily accessible
- Enforce the ‘duty of care’ through codes of practice issued by the regulator; alternatively, a company may adopt its own enforcement mechanism by justifying that it will result in similar or greater adherence
- Submit an annual transparency report documenting the prevalence of harmful content on the platform and listing countermeasures taken against such content
- Provide, on demand, additional information such as the impact of algorithms in selecting content for users, and report emerging and known harms
- Improve the ability of independent researchers to access data, subject to appropriate safeguards
- Build effective, easy-to-access user complaint functions that address complaints within an appropriate time frame, in accordance with the conditions set out in the regulatory framework
Failure to enforce the duty of care will empower the regulator to take the following punitive measures:
- Issue substantial fines
- Impose liability on individual members of the company’s senior management
- Block access to non-compliant services for British users
The British government also plans to launch a new online media literacy strategy.
As an exercise in moderate internet regulation, the white paper represents a welcome step. However, it fails to address some of the most pressing issues. First, several of its proposals are either too ambitious or simply unfeasible. Second, the paper remains largely silent on the means of enforcement. For instance, child sexual abuse material and online terrorist propaganda are already illegal, yet enforcement against them leaves much to be desired.
Enforcement requires considerable financial and human resources, which neither governments nor tech companies are willing to expend. Currently, a combination of human reporting and algorithms is deployed for enforcement, but this mechanism has proved ineffective, as recent events like the Christchurch terror attack, described as the first true example of a social media massacre, have shown. Third, a number of concerns have been raised regarding the definitions of specific terms, including ‘harmful’ and ‘misinformation’. Not defining specific parameters for harmful content will incentivize companies to over-remove and over-restrict content, threatening the right to free speech and information. Last, experts have pointed out that several duties mentioned in the white paper conflict with the European Union’s (EU) Electronic Commerce Directive 2000.
The white paper is currently under a consultation process until 1 July, which means it will likely undergo several changes before becoming a legislative bill. It is important to remember, however, that building a framework for the regulation of the internet is a painstaking process and will require a multi-stakeholder approach to maintain a free, open, vibrant and secure public internet.