Regulate social media? It’s a bit more complicated than that!
Taming social media has long been dismissed as an insurmountable task.
The sheer volume of content shared on platforms like Facebook, Twitter, and YouTube, combined with concerns about freedom of expression, has made any mission to rein in tech’s biggest companies extremely difficult. However, the human cost of leaving the internet to regulate itself is too great. Children are especially vulnerable to distressing images and extremist views that anyone can access in seconds.
U.S. President Donald Trump’s effort to regulate social media companies’ content decisions is now facing an uphill battle from regulators who have previously said they cannot oversee the conduct of internet firms. Trump said last month that he wants to “remove or change” a provision of a law that shields social media companies from liability for content posted by their users.
He signed an executive order that directed the Commerce Department to petition the Federal Communications Commission (FCC) to write rules clarifying social media companies’ legal protections under Section 230 of the 1996 Communications Decency Act.
In this political environment, policymakers, pressure groups, and even some technology sector leaders (whose enterprises have benefited greatly from free expression) are pursuing the imposition of online content and speech standards, along with other policies that would seriously burden emerging competitors. The current social media debate centers on competing interventionist agendas.
Conservatives want social media titans regulated to remain neutral, while liberals tend to want them to eradicate harmful content and address other alleged societal ills. Meanwhile, some maintain that internet service should be regulated as a public utility.

Blocking or compelling speech in reaction to governmental pressure would not only violate the Constitution’s First Amendment; it would also require an immense expansion of constitutionally dubious administrative agencies. These agencies would either enforce government-approved de-platforming by social media firms and service providers (denying certain speakers the means to communicate their ideas to the public) or coerce platforms into carrying any message by actively policing their moderation practices.

When it comes to protecting free speech, the uproar over social media power and bias boils down to one thing: the internet, and any future communications platforms, need protection both from the bans on speech sought by the left and from the forced ride-along speech sought by the right.
In the social media debate, the problem is not that big tech’s power is unchecked. Rather, the problem is that social media regulation—by either the left or right— would make it that way. Like banks, social media giants are not too big to fail, but regulation would make them that way.
Rule of Territory vs. Access – Effects of Regulating Social Media in the UK and EU
The executive order issued by President Trump could also have significant implications for the data-sharing agreement between the European Union and the United States.
At present, the EU-US Privacy Shield sets out what data can be shared between businesses on both sides of the Atlantic and how that data can be used. It is designed so that data protection laws can be upheld between the EU’s member states and the US.
Under international law, one of the primary means for states to exercise their jurisdiction is the territorial principle, the right to regulate acts that occur within their territory. For instance, UK law would apply to online content hosted on servers located in the UK, or to an internet user uploading content online from the UK.
But of course, internet users can access content created and hosted all over the world, and it is not always possible to tell where it came from or where it is hosted. This limits the territorial principle and makes establishing a territorial connection with “un-territorial” data a key requirement. Unfortunately, there is no international agreement on how to do so.
So we increasingly see states imposing measures that reach well beyond their borders. This is problematic because it inevitably collides with the rights and freedoms of foreign citizens, who should in theory only need to comply with the laws of their own country.
The Danger of Progressive Regulation of “Harmful Content”
Some politicians, dominant social media firms, and activists propose to expunge what they see as objectionable content online. They point to hate speech, disinformation, misinformation, and objectionable, harmful, or dehumanizing content. Since “misinformation” can translate into “things we disagree with,” this inventory can be expected to grow.
Disagreeable or hateful speech is nonetheless constitutionally protected.
American values strongly favor a marketplace of ideas where debate and civil controversy can thrive. Attempts by tech companies, in concert with government, to create new regulatory oversight bodies and filing requirements that would exile politically disfavored opinions on the one hand, or force the inclusion of conservative content on the other, should both be rejected.
The vast energy expended on accusing purveyors of information, whether mainstream or social media, of bias or of inadequate removal of harmful content should be redirected toward protecting the right of future platforms to be biased in any direction, and toward fostering the development of tools that empower users to better customize the content they choose to access.
Existing social media firms want rules they can live with—which translates into rules that future social networks cannot live with. The government cannot create new competitors, but it can prevent their emergence by imposing barriers to market entry. The government has a duty to protect dissent, not regulate it, but a casualty of regulation would appear to be future conservative-leaning platforms.
This reinforces the problem. At RiskEye, we believe that unbiased, external regulation and moderation are required to ensure that online media is transparent, safe, and useful.
How have the social networks responded to these regulations?
Twitter called the order “a reactionary and politicized approach to a landmark law,” adding that Section 230 “protects American innovation and freedom of expression, and it’s underpinned by democratic values”.
Google, which owns YouTube, said changing Section 230 would “hurt America’s economy and its global leadership on internet freedom.”
“Our platforms have empowered a wide range of people and organizations from across the political spectrum, giving them a voice and new ways to reach their audiences,” the firm said in a statement to the BBC.
In an interview with Fox News, Facebook’s chief executive, Mark Zuckerberg, said censoring a social media platform would not be the “right reflex” for a government concerned about censorship.
“I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” said Mr. Zuckerberg.
While sites including Facebook and Twitter allow us to share information, they’ve also become places for illegal and harmful content to thrive.
The UK and Ireland now want those firms to be more responsible. The UK government will appoint its broadcasting regulator, Ofcom, as an online watchdog, with powers to force companies to take down certain material. Other countries, such as Germany and Australia, have brought in measures to control online content too, and the European Union has its own rules in place.
But is regulation the answer? And can it be done without violating personal freedoms?