Did Twitter's algorithm have a back door for gov't censors?

Would it surprise anyone if it did? On Friday, “Chief Twit” Elon Musk released part of Twitter’s recommendation algorithm as promised, and it didn’t take long for coders to find some interesting nuggets. Just hours later, Steven Tey began deconstructing some of the more, er, interesting features of Twitter’s algorithm.

Like the aforementioned government back door, albeit a slightly indirect one:

A developer has discovered a U.S. government intervention option and other insights in Twitter’s recently open-sourced recommendation algorithm.

The Post Millennial reports that on Friday, Twitter released a portion of its recommendation algorithm on GitHub, a website where programmers can share and work together on open-source code. Concerns about possible government influence on the platform were raised when developer Steven Tey examined the code and found a mechanism that permits the U.S. government to alter the Twitter algorithm.

Tey revealed his research on the mechanism for intervention, saying, “When needed, the government can intervene with the Twitter algorithm. In fact, @TwitterEng (Twitter Engineering) even has a class for it – ‘GovernmentRequested.’” Tey also provided a direct link to the code on GitHub for public review.

We’ll get to the “other insights” momentarily. Tey later compiled his insights on his own eponymous site, calling the release “a big day for Twitter, and for open-source.” Tey puts the government hook in better perspective there, and calls it “very Big-Brother-ly”:

When needed, the government can intervene with the Twitter algorithm.

In fact, this probably happens so often that Twitter engineers even have a class for it – GovernmentRequested.

Presidential elections are also another big part of the Twitter Algorithm. During election events, the algorithm can:

  • Recommend election candidates to follow (source)
  • Suppress misinformation (source)
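
Tey’s link goes to Twitter’s actual source (written in Scala) on GitHub. Purely as an illustration of the concept, a visibility filter keyed on an intervention label might look something like the Python sketch below; the enum, names, and logic here are my own assumptions, not Twitter’s implementation:

```python
# Illustrative sketch only, not Twitter's code. It mimics the idea Tey
# points to: a labeled "GovernmentRequested" intervention reason that the
# pipeline can act on when deciding whether a tweet stays visible.
from enum import Enum, auto

class InterventionReason(Enum):
    GOVERNMENT_REQUESTED = auto()  # the class name Tey found, modeled here as an enum value
    MISINFORMATION = auto()        # the election-time "suppress misinformation" path
    NONE = auto()

def is_visible(reasons: set[InterventionReason]) -> bool:
    """Hypothetical visibility check keyed on intervention reasons."""
    suppressed = {InterventionReason.GOVERNMENT_REQUESTED,
                  InterventionReason.MISINFORMATION}
    return suppressed.isdisjoint(reasons)

print(is_visible({InterventionReason.NONE}))                  # True
print(is_visible({InterventionReason.GOVERNMENT_REQUESTED}))  # False
```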

No kidding. I don’t mean this as a criticism of Tey, whose analysis provides independent confirmation, but this has already been revealed. The reporting of Matt Taibbi, Michael Shellenberger, and Bari Weiss already made clear that previous Twitter management complied with censorship requests from Homeland Security, the FBI, and the CDC, among others. That they formalized this mechanism into the algorithm should surprise no one, although it does speak to the penetration of Twitter’s business model by the government-censorship industrial complex. The “suppress misinformation” code is an explicit extension of those censorship efforts, encouraged and demanded in no small part by Congress in the few years preceding Musk’s buyout of Twitter.

Perhaps the algorithm to “recommend candidates to follow” needs a bit more explanation, though. Did that get directed by government too, or is it a more organic recommendation based on a user’s own tweets? It’s an open question why Twitter would recommend candidates at all, of course, and whether that might count as in-kind campaigning if the FEC took an impartial interest in that sort of thing.

But let’s return to the “other insights,” and one in particular. There has been plenty of suspicion that Twitter has shadow-banned people, particularly conservatives, and suppressed their reach on the platform. Critics have hypothesized that Twitter employees have tools to impose these penalties on individuals, and that still might be true. However, Tey reveals that the algorithm is built to shadow-ban — at least in one key feature — and suppress accounts that get negative feedback from other users, with massive penalties in some cases:

There are a few factors that determine if your tweet will appear on someone’s “For You” tab.

These are calculated by a heavy-ranker algorithm, which receives various features describing the Tweet + the user whose timeline is being ranked for, and outputs binary predictions about how the user will engage with the Tweet. …

To put this in perspective:

  • A user clicking on your tweet & staying there for >2 min is weighted 22x more than them just liking your tweet.
  • If they click into your profile through your tweet & like/reply to a tweet? 24x more than a like.
  • If they reply to your tweet? 54x more than a like.
  • If they reply to your tweet and you respond to their reply? 150x more than a like.
  • If they report your tweet? -738x the effect of a like (you’re basically screwed).
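
To see how lopsided that weighting is in practice, here is a minimal Python sketch (my own construction, not the actual heavy ranker) that combines predicted engagement probabilities using the multipliers above; the feature names are hypothetical, and the retweet multiplier is inferred from the report arithmetic below:

```python
# A rough sketch, not Twitter's code, of how a heavy-ranker-style score
# could combine per-action predictions using the like-relative multipliers
# Tey quotes. Feature names and structure are illustrative assumptions.
ENGAGEMENT_WEIGHTS = {
    "like": 1.0,
    "retweet": 2.0,                # inferred below: 369 retweets = 738 likes
    "dwell_over_2_min": 22.0,      # click + stay >2 minutes
    "profile_click_engage": 24.0,  # profile click, then like/reply
    "reply": 54.0,
    "reply_engaged_by_author": 150.0,
    "report": -738.0,              # "you're basically screwed"
}

def rank_score(predicted: dict[str, float]) -> float:
    """Combine the ranker's per-action probabilities ("will this user
    like/reply/report this tweet?") into a single ranking score."""
    return sum(ENGAGEMENT_WEIGHTS[action] * p for action, p in predicted.items())

# A tweet likely to be liked, but with a 1% predicted chance of a report:
print(rank_score({"like": 0.6, "reply": 0.1, "report": 0.01}))  # ≈ -1.38
```

In this toy example, even a one-percent predicted chance of a report is enough to drag an otherwise healthy score below zero.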

Why is this important? In principle, this algorithm allows Twitter to self-police. In practice, it’s a heckler’s-veto process. A single block or mute carries a penalty of -148x the weight of a like, and a report of a tweet is roughly five times worse at -738x. To put that in perspective: a single report (formal complaint) would undo the positive effect of 738 likes or 369 retweets. A single block or mute provides negative consequences equal to 148 likes or 74 retweets.
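
As a quick check on that arithmetic (using the document’s own multipliers; the variable names are mine):

```python
# Sanity-check of the like-relative multipliers quoted in this post.
like, retweet = 1.0, 2.0                 # one retweet counts as two likes (369 retweets = 738 likes)
report, block_or_mute = -738.0, -148.0   # report per Tey; block/mute per the update below

assert 738 * like + report == 0          # one report cancels 738 likes
assert 369 * retweet + report == 0       # ...or 369 retweets
assert 148 * like + block_or_mute == 0   # one block/mute cancels 148 likes
print(round(report / block_or_mute, 1))  # ~5.0: a report is about five times worse
```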

And that’s just if one user reports you. If you get onto a block list (as many conservatives have) and are especially targeted in a report campaign, you’ll get suppressed overall with these feedback loops in place. You don’t even need an intervention by Twitter censors for that to happen. Pay attention especially to the power of a complaint on one of your tweets. It’s engineered to explicitly impose a heckler’s veto, no matter what the actual content of a tweet or account may be. This may just apply to the “For You” tab, as Tey says, but that is one key element of visibility and reach within Twitter.

And until Tey or others go through all of the code, it won’t be clear whether similar processes are in place for overall platform and account visibility. Since Twitter built this to run the “For You” process, it would surprise me if it weren’t in effect for the platform overall. It incentivizes the block-list efforts on the Left to silence the opposition and keep their arguments from emerging … and it appears to be effective at it, too.

If Elon Musk wants an open speech and debate platform, he’d better take particular interest in this portion of the code. We’ll know the answer to that by whether this code survives. If it does, then it’s business as usual for a snitch society and incentives for shout-downs.

Update:  I initially mixed up the impact of blocks and mutes with the impact of reports in my post. One block or mute undoes the impact of 148 likes and 74 retweets. It’s fixed above.

Update: The implications of this scoring system are significant. Even if applied evenly, the hecklers gain control of the distribution system, which means that — contra Twitter’s arguments in the past — you aren’t shaping your own Twitter experience with mutes and blocks. Other people’s mutes and blocks are shaping your experience and your engagement with other points of view. And it was being done behind the scenes, while previous Twitter management insisted that they weren’t interfering with visibility and shadow-banning people.

Update: A couple of further thoughts in this conversation between EWTN’s Kevin Jones and me:

I actually run into this quite a bit when reading stories that include tweets from Left-leaning users. On a number of occasions, I discover that the user has blocked me without ever having engaged me first. Those block lists on the Left are real, and I used to just consider them a snowflake-esque response. Now I have to wonder whether Twitter informed activist users of the impact these have on its “reputation” algorithm and encouraged the use of block lists and complaint-report campaigns. That might be something to search for in the Twitter Files.
