This post advances and briefly defends the following proposition: the US government should treat any company with an algorithmic feed as the publisher of all content delivered via that feed, making the company legally liable for it.
There has been some talk recently about repealing or reforming Section 230 of the Communications Decency Act. A blanket repeal would be a disaster, basically destroying some of the best sites and communities on the Internet, like Wikipedia.
However, the impulse to do something is understandable. It has become increasingly apparent that algorithmic feeds harm society in various ways. These systems are implicitly or explicitly designed to keep you hooked on the platform by feeding you content that makes you more emotionally reactive. This wastes time, harms mental health, and helps harmful content proliferate.
I argue that treating the algorithmic feed as a publishing decision is intuitive, sensible, and would have good effects. Once a company filters and delivers content in an opaque way, it is no longer a neutral platform. It has made a choice to deliver some content over others. Therefore, it should be held liable for the content it chooses to deliver. This can be done by amending Section 230 to remove liability protections from any content delivered via a newsfeed or recommendation engine that the company controls.
This proposal would not harm blogging platforms, Wikipedia, Reddit, or any platform with user-generated content where people proactively choose what they want to see, or where the decision is made by a transparent system of upvotes and downvotes. And if you wanted to program your own algorithmic selection into an RSS reader that scrapes a lot of content, you could.
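To make the RSS-reader carve-out concrete, here is a minimal sketch of what user-programmed "algorithmic selection" might look like. All names (the `Entry` type, the keyword weights, the sample feed items) are hypothetical illustrations, not part of any real RSS library; the point is only that the user writes and can inspect the ranking rule, so the selection is transparent to them rather than opaque.

```python
# Hypothetical user-written ranker for a self-hosted RSS reader.
# The user defines the scoring rule, so nothing is opaque to them.
from dataclasses import dataclass

@dataclass
class Entry:
    title: str
    source: str

# Illustrative user preferences: boost favorite sources and topics.
FAVORITE_SOURCES = {"example-blog": 2.0}
KEYWORDS = {"section 230": 3.0, "privacy": 1.5}

def score(entry: Entry) -> float:
    # Start from a source weight, then add a bonus per matched keyword.
    s = FAVORITE_SOURCES.get(entry.source, 1.0)
    text = entry.title.lower()
    for kw, weight in KEYWORDS.items():
        if kw in text:
            s += weight
    return s

def rank(entries):
    # Highest-scoring entries first.
    return sorted(entries, key=score, reverse=True)

entries = [
    Entry("Cat pictures", "random"),
    Entry("Reforming Section 230", "example-blog"),
    Entry("Privacy and you", "random"),
]
for e in rank(entries):
    print(e.title)
```

Because the user wrote the scoring rule themselves, this is their editorial choice, not a platform's, which is exactly the distinction the proposal turns on.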
Social media companies would have to stop using algorithmic feeds or be buried in a storm of lawsuits. Some would turn into something resembling newspapers or TV stations, with a curated feed of chosen content. Others would switch back to delivering all the content from the people you select, and only them. A third option is to let people subscribe to a moderated topic and see the most-upvoted things in that topic.
Most would probably use a mix of these strategies. After a brief flurry of reprogramming and experimentation, the disruption would be minimal, and the companies would probably continue with their current business model. Many of the harms of the current system would remain, but it would probably be less addictive, and it would probably be harder for something to go viral.
Addendum: We would need to define "algorithmic feed" carefully and narrowly, to make sure that things like spam filters on blog comments are not affected.