Tech This Week | Will Facebook’s ‘Supreme Court’ make the internet a safer place?

Earlier this week, Facebook’s Oversight Board (often dubbed Facebook’s Supreme Court) announced its co-chairs and first twenty members. The board allows users to appeal the removal of their posts, and upon request it may also issue advisory opinions to the company on emerging policy questions.

Why did we arrive here? With billions of users, Facebook has had a content moderation problem for some time. In an ideal world, good posts would stay up and bad posts would be pulled down. But that's not how it works. When it comes to Facebook posts, morality isn't always black and white. For some posts, arguments could be made on either side about where the right to free speech ends, or whether politicians ought to be permitted to lie in ads.

The status quo has historically been that Facebook takes these decisions and the world goes on. However, that process has generally been perceived as a black box. There hasn’t been a whole lot of transparency around how these decisions are taken, beyond the minutes of Facebook’s Product Policy Forum, which are a mixed bag.

An intended and anticipated consequence of the board is that it will instil more transparency into the process of deciding what stays up and why. By reporting on what the board did and did not discuss, it will help bring more clarity around the most prevalent problems on the platform. It could help reveal whether bullying is a bigger problem than hate speech, or how (and where) harassment and racism manifest themselves.

Then there is the question of whether the decisions taken by the board will be binding. Mark Zuckerberg has stated that “The board’s decisions will be binding, even if I or anyone at Facebook disagrees with it,” so it is safe to say that Facebook intends them to be. The board will have the power to order the removal of particular pieces of content. The open question is whether the board’s judgements will also apply to pieces of content that are similar or identical; otherwise it would make no sense, since the board cannot pass a decision on each and every piece of content on Facebook.

On this, Facebook’s stance is: “in instances where Facebook identifies that identical content with parallel context - which the board has already decided upon - remains on Facebook, it will take action by analysing whether it is technically and operationally feasible to apply the board’s decision to that content as well”.

In simple terms, board members (who will not all be computer engineers) could make recommendations that can't be implemented across the platform. In that case, Facebook will not go ahead with replicating the decision for every similar piece of content on the platform. Likewise, if the board comes up with an exceptionally radical recommendation (say, shutting down the like button), Facebook can ignore it.

On the bright side, as far as content moderation is concerned, there appears to be little reason for Facebook to go against the board’s decisions anyway, considering the body has been established to take this responsibility (and blame) off Facebook’s hands.

The billion-dollar question is whether it will make Facebook a safer place. The short answer is no (followed by “it is too early to say”). The board is only going to be able to hear a few dozen cases at best. New members of the board have committed an average of 15 hours a month to the work, which is to moderate what stays up for a user base of some 3 billion people. Even if the members worked full-time, the number of cases the board would be able to hear and pass judgement on is a drop in the ocean. Based on how the body is structured, it makes sense for the members to deliberate on the most visible or charged cases (such as political advertising or the presence of deepfakes on the platforms).

Moving the needle forward has historically been a difficult process for society, and the board is an effort to do just that. The best-case scenario here is that the body achieves incremental progress by laying out key principles that guide Facebook’s content moderation efforts. As for whether the board can make Facebook (and by extension, the internet) a safer place, it is too early to say, but it seems unlikely. For each high-profile deepfake of Nancy Pelosi or Mark Zuckerberg, there are thousands of content moderation decisions that need to be made. Low-profile instances of misinformation, bullying, harassment, and abuse plague platforms like Facebook, Instagram, and WhatsApp, and they will not magically vanish.

Instead, content moderation at Facebook will be a long, fraught battle, led by the board. This is the beginning of one of the world’s most significant and consequential experiments in self-regulation. Time will tell how it shapes up.