It’s been a wild several days reporting on Elon Musk and Twitter. On Oct. 27, the world’s richest man completed his acquisition of the social media company and immediately fired its CEO and three other top executives. Since then, Musk’s lieutenants have been at work inside the company, meeting with senior managers and drawing up layoff plans.

Musk himself has been posting his thoughts on Twitter, telling advertisers he won’t let the site become a “free-for-all hellscape” and saying that, before making big decisions on letting banned accounts back onto the site, he plans to put together a content moderation council similar to Facebook’s Oversight Board. On Sunday he tweeted a link to a known conspiracy theory website’s take on the attack on Paul Pelosi, the husband of House Speaker Nancy Pelosi.

All of that has only deepened the confusion around how exactly Musk will moderate tweets and what tone the site will take. For months, he has been telling conservative media figures that he will look into their claims of censorship, and he has said he thinks Twitter’s content moderation policies are too strict. But his new promises to advertisers and his plans for a content moderation council could cut against that position.

In the coming weeks, we might get some new clues from an unlikely source: Twitter is preparing a brief for a Supreme Court case it is at the center of, one that will have wide-ranging implications for how content is moderated online.

The court said on Oct. 3 that it will hear arguments in Twitter v. Taamneh, a case that asks whether online platforms should be held liable for terrorist content posted on their sites. The family of a Jordanian citizen killed in a 2017 ISIS attack in Istanbul is suing the company, alleging that it should be liable for ISIS propaganda hosted on its platform. The case is similar to another facing Google, Gonzalez v. Google, and the court has agreed to hear both.

At the center of the lawsuits is Section 230, a foundational internet law that shields website owners from civil liability for what people post on their sites. The law lets companies police their platforms as they see fit, removing spam, fraud, hateful content and harassment. If that protection is narrowed or stripped away, the companies and other supporters of the law say, the internet could quickly descend into an unusable maelstrom of harassment, fake activity and spam.


Conservative politicians and media figures have seized on the law, casting it as a tool tech companies use to censor conservative viewpoints, even as those same figures have used social media to expand their reach and power.

Proponents say the law is vital to free speech; opponents say it is a barrier to it.

Musk himself hasn’t said much specifically about Section 230, though he has proposed in the past that Twitter’s recommendation algorithms be made public so that people can understand what the company is and isn’t promoting to its users. That would be difficult to pull off: making the algorithms public would let people more easily game them to promote their own content.

Twitter’s legal team has fought vigorously to keep the Section 230 protection. If that stance shifts when the company files its brief in the case, it will reveal a lot about Musk’s position.

On Oct. 26, the day before Musk closed the deal, he visited Twitter’s San Francisco headquarters to shake hands with employees and meet with senior leaders. One of his meetings was with Vijaya Gadde, Twitter’s top lawyer and policy chief, who had been spearheading the company’s work on content moderation. The next day, Gadde told workers in her division that she and Musk had discussed Section 230 and the Supreme Court case, and how important the issue was. Gadde said he seemed knowledgeable. Later that day, Musk fired her.

If Musk intends to change the company’s position on Section 230, we’ll know soon. Twitter has until Nov. 17 to file its brief.
