SOCIAL MEDIA AND CORPORATE RESPONSIBILITY
Right now, members of a Senate committee are grilling the CEOs of the major social media companies.
Note: This is being posted on the Wednesday morning of the hearings.
It’s a rare instance of a bipartisan pile-on. Meta, X, TikTok, Snap, and other social media companies are getting hammered.
Admittedly, social media does pose many dangers. Too much exposure (and unfiltered abuse) has obvious negative impacts on individuals and society, especially on impressionable young people. But let’s also acknowledge that, on balance, the positives of social media outweigh the negatives and risks.
Watching these exchanges (the testimony is live on CNN right now), I’m confused about what responsibility these companies (and, by extension, their CEOs) have in policing user behavior, excesses, abuses, and even crimes committed on their platforms. Certainly, all companies should have mechanisms in place to protect users, and there are tools that help reduce these problems. I’ll let those in tech discuss those countermeasures, if they wish.
It seems these companies must walk a precarious, perhaps impossible, fine line, and content moderation is an imperfect science. Given hundreds of millions of users, can we really expect a multitude of abuses not to occur? Given that social media is now the primary channel of human communication and interaction for many people, including as a means of commerce, isn’t some percentage of abuse a natural byproduct of human activity and the law of large numbers?
I’m generally in favor of government regulation and stronger oversight. I certainly support increased consumer protections, especially when it comes to minors. That said, I don’t know what these senators expect the CEOs of these companies to do or say.
While these CEOs are certainly evasive about disclosing details of their internal practices, and a few have become loathsome (and dangerous to society at large) for the outrageous things they have allowed to spread (yes, I’m talking to you, Musk), I must ask: what responsibility does a company actually have in regulating its own platform? I totally understand (and agree) that obvious physical threats, plots, and criminal activity must be addressed, and that the first firewall is the company’s own platform. But what more can and should these companies do? “Breakage” is human.
I’m lucky to know many friends who work in this sector, though I’m not sure how much they can speak out. This strikes me as one of those problems that can’t be solved without creating an entirely new set of problems; the medicine might be worse than the illness.
Thoughts?
__________
JOIN THE DISCUSSION ON FACEBOOK / META HERE (and yes, I see the irony of linking to a social media platform to debate this issue):