What has taken place in the US Congress over recent days offers clues to what can be expected if the Online Harms Bill is passed by the UK Parliament.
One of the fascinating aspects of the interrogations of ex-Twitter staff in the House committee hearings is the extent to which the suppression of citizens’ speech rests not on the powers of government but on these private companies’ own terms of service. These terms not only influence the behaviour of users but, it would seem, the behaviour of the technology companies too.
As more information comes out, it becomes clear that the companies are themselves the targets of a kind of psychological operation. Until the televised hearings of recent weeks, many people suspected some degree of collusion between their government and these technology platforms. It turns out to be worse than that. What is taking place is a sleight of hand.
Users may have been under the impression that censoring tweets, shadow-banning accounts and removing content on social media happened merely because the companies running these platforms are incredibly ‘woke’, or just don’t like users with more ‘conservative’ views. But as the truth starts to emerge, along with the ‘Twitter Files’, it seems far more likely that people in positions of authority at these tech companies are themselves being influenced by some rather shady practices behind the scenes.
In August 2022, Mark Zuckerberg told Joe Rogan on his podcast that “…the FBI came to us - some folks on our team - and was like, ‘hey, just so you know, you should be on high alert. We thought there was a lot of Russian propaganda in the 2016 election, we have it on notice that basically there’s about to be some kind of dump that’s similar to that’”.
In his own words, it ‘fit a pattern’. By ‘it’ he was referring to the Hunter Biden laptop story. When the details of what was reportedly on that laptop appeared in the New York Post a few weeks before the 2020 US election, the report was quickly followed by a letter from 51 former intelligence officers claiming the story “has all the classic earmarks of a Russian information operation”. As a result, the ‘fake news’ article was summarily removed from Twitter, and Facebook limited its ability to be shared. We now know what many suspected at the time: it was not Russian disinformation but a legitimate story.
Zuckerberg’s statement seemed very strange at the time; he blurted it out in an interview where he was under no real pressure to do so. Looking back, however, we might now interpret it as a useful warning, because what he said helps us understand the relationship between social media and the FBI, which, it seems, had planted the possibility of a fake story turning up in the first place. Something similar had been happening at Twitter. Today we can see that the Zuckerberg quote, along with the Twitter hearings, is starting to reveal a type of psychological influence taking place behind the scenes, one David Sacks refers to as ‘priming’.
In my copy of Psychology of Intelligence Analysis, a CIA handbook from the Center for the Study of Intelligence containing articles to help analysts work with ambiguous information, there is a reminder within the first ten pages that ‘we tend to perceive what we expect to perceive’. As the manual points out, whilst we may think of perception as a passive process of taking in information, perception in fact constructs our reality rather than records it. Elsewhere, many studies have shown that the information any observer acquires depends to a large extent on the observer’s preconceptions.
If the FBI had never said to be on the lookout for Russian disinformation, the authenticity of the Hunter Biden laptop story may never have come into question. It now seems that social media executives were actively primed to perceive it as ‘disinformation’.
What is the relevance of this to the UK’s Online Harms Bill?
Well, it shows that having a safety policy for adults on a tech platform with billions of users can make tech executives extra attentive to any intelligence that comes their way. The Twitter hearings so far suggest that the FBI liaised with the Twitter safety team via a special one-way communications tool called Teleporter. Email messages and documents were sent from the FBI to Twitter and would then disappear from the site some days later. As others have pointed out, the communication that accompanied any documentation was always keen to stress that this was a possible violation of the tech company’s own policies.
It can be assumed that this put the companies under enormous pressure to be overly cautious. In a sleight of hand, the responsibility for the decision to take action moved imperceptibly from government to governance; from nation state to corporation.
Likewise, the present draft of the Online Harms Bill in the UK plans to hand more responsibility to technology companies to police user-generated content online. And as the UK government’s own updates on the Bill state, platforms “…will also have to set out in their terms and conditions what types of legal content adults can post on their sites. The legislation will not ban any particular types of legal content, but will ensure that terms and conditions are comprehensive, clear and accessible to all users…These companies will need to transparently enforce their terms and conditions”.
This will see us become a ‘terms-of-service society’, where the traditional social contract around freedom of speech is replaced by a corporate usage agreement - without any citizen ever having voted for that.
Through a duty of care, each of the tech platforms’ terms of service will have to address disinformation and be clear about how they intend to treat it. The updates to the Bill state that “all companies will need to remove illegal disinformation”, “protect underage users from harmful disinformation”, and “set out clear policies on harmful disinformation accessed by adults”. At the same time, Ofcom is given new powers to impose fines of up to £18m or 10% of global annual turnover for non-compliance with the responsibilities set out in the Bill. Consequently, the company executives responsible for ‘trust and safety’ may feel under even more pressure than before. This in turn would render them overly cautious and quite possibly overly susceptible to priming, should it ever occur here. End result? A system that is more likely to label a negative story about the establishment as disinformation than to perceive it as shocking but true.
The problem we have is this: at the moment we have two systems operating in parallel. We have constitutions that apply to the citizens of a nation, that emanate from history, and that safeguard people’s freedom of speech in a physical territory. We also have the ‘terms of service’ of technology companies, which internet users from anywhere in the world sign up to for the expression of opinion online. The recent congressional hearings have shown that in an environment of secretive communications tools, disappearing documents and intelligence agencies priming tech companies to perceive reality in a certain way, it is very unclear which of the two ‘systems’ is actually responsible for censoring the public.
If we import this kind of obfuscation to the UK, we will find it impossible in the future to hold any one system to account, as it will be unclear who is responsible for doing what. Then again, perhaps that is the idea.