Doubtless, this is an incredibly complex issue, and companies behind recommendation algorithms do have some degree of culpability for the outcomes of the algorithms they design, but I can't help but feel that most of this is an issue with human beings more than it is with algorithmics.
One could presume youtube has an extreme ideological agenda and has developed its algorithms in service to that agenda, but the more likely possibility is that youtube's algorithm simply maximizes on the content that humans are drawn to engage with strongly, which, surprise, surprise, all of history will show you happens to be vicious, inelegant, untruthful, and ultimately disastrous ideologies.
The problem is not the algorithm. The problem is the masses that believe everything they watch without questioning it. The problem is not internet communities. The problem is the utter lack of localized communal structures in the modern world, which leaves people defenseless in the face of efforts of galvanization toward some other, hateful view--there are no institutional anchors sheltering unwitting individuals from being duped any longer, and passions, blame, and fury still speak loudest among the figures of rhetoric. The problem is not the dissolution of truth and spread of misinformation, which has always existed. The problem is the increased efficiency with which that misinformation spreads without equivalent compensatory development of the intellectual and critical thinking armature we need to defend ourselves and keep our reason sound and free of influence.
You cannot call on these companies to be stewards of the world's information, grand censors, stiflers of expression in the name of the greater good. That's tyranny that will leave the blind blind. What we should be focusing on is how we can equip the citizenry to better inform themselves, better guard against new means of propaganda and misinformation, and better develop the critical sense to second-guess the radical and largely unsupported claims they encounter on a daily basis. "Fake news" is not a problem solved at the level of the distribution channels, which will always be susceptible to dastardly uses--it is a problem solved at the level of the receivers of the information, who need to be imbued with the critical foresight and reasoning skills required to defend themselves from manipulation.
Yet no one seems to focus on that.
Instead, the pundits and essayists of our day assume the general population will remain a dark and unenlightened herd, susceptible to every trick and puff piece, slaves to the whims of algorithmic wizardry. It's a narrative that buys into the very power structure it seems to want to question, and, instead of lauding and promoting the dignity of human thought, the possibility of enlightenment, and the pursuit of a rational future, it abandons everything to a dark technological determinism, assuming that we've already made the devil's trade of human thinking for algorithmic obedience. We'll never solve this problem so long as we continue treating human subjects as irrational receptors of whatever ideology comes their way instead of trying to promote intellectual edification. The very title of this article reveals this bleak worldview that assumes human beings have no free will against the larger movements of technology--note that the title is not "How Radical Far-Right Politicians used youtube to radicalize Brazil" but rather "How youtube radicalized Brazil".
We live in a techno-dystopian fantasy that posits we've already lost our freedom to the "machines".
I see your point, and I agree about all those sociological problems you point out.
However, that does not mean "the algorithm" (or tech in general) is blameless. People and societies have had these weaknesses for millennia. Technology should work to help us _overcome our weaknesses_ as a society. Instead, tech that ends up favoring flat-earth or extreme political hate videos _preys upon our weaknesses_ for a profit.
The recommendation system is probably entirely "neutral" in a sense that it doesn't pick "sides" itself, sure. But that isn't nearly good enough. We should strive for something much better.
Their agenda is "more time spent watching" = "more revenue", but they've happened to induce a health crisis in the process.
In this translation of engagement to dollars, youtube has an ethical obligation to not actively promote content that is provably, unequivocally false and actively harmful to the public health, even if they continue to provide a platform for it.
> You cannot call on these companies to be stewards of the world's information
youtube and Google already do these things.
Not recommending videos is not censorship.
This is wrong. You're basically saying people should have been educated against an emerging technology, a process that can take a generation. You let the technologists off the hook on the basis that if they don't do it someone else will, and make the negative outcomes the fault of the uneducated for having failed to educate themselves.
The same argument was used to dismiss the way Facebook facilitated the genocide in Myanmar. For sure, social media companies are not the originators of human evil nor in any way the monopolists of it, but they are massively profiting from it despite themselves having the education and knowledge of how their tools are being abused, so that makes them culpable.
Still, this is not a new problem. In 1939 an entire nation was susceptible enough to propaganda to accept global war and genocide, and several other nations were willing to turn a blind eye and ally themselves to that nation. Mass propaganda has functioned successfully for a long time; the only difference is that it used to be driven primarily by national interests, while now it's driven by global iconoclasts on platforms that have economic interests but are otherwise neutral. The reason populations in '39 were susceptible to harmful ideology is the same as the reason they are today--they are cultivated in environs that, either because of material limits or cultural emphasis, do not emphasize instilling the ability to conduct critical analysis of argumentation in the wider population.
It's important to recognize that while technology has certainly accelerated the spread of harmful thought and populist rhetoric, it has not enacted a fundamental change in the nature of that rhetoric or of its function. Totalitarianism today looks much the same as it did a few years ago, and as it did even further back in history--only its vehicle has changed. In fact, one of the prime errors in the analysis of modern ills seems to lie in the willful ignorance of history, which differs from our day in flavor but in essence remains the same.
Also, I'm not making the argument that we should have adapted to technologies we couldn't have foreseen--I'm saying the time to adapt is now, not a few years ago. I simply think focusing too heavily on attacking the technologists (now) is the wrong angle and will fail to address the whole issue (though of course, holding them accountable is definitely a big part of the story), simply ensuring the problem lives on in another form.
The goal should be better education systems, particularly ones that focus on producing rational human subjects rather than economic cogs (which is, for the most part, what public education produces today)--cogs that, while plenty efficient (and pleasingly expendable) for corporations, have no aptitude for the discerning judgement or humanistic reasoning needed to save them from political radicalization.
That's a little like saying we should do away with vaccines and antibiotics because the problem isn't disease, it's a poor immune system.
> maximizes on the content that humans are drawn to engage with strongly, which, surprise, surprise, all of history will show you happens to be vicious, inelegant, untruthful, and ultimately disastrous ideologies.
I don't agree with this view at all. The analogy I typically use is that people will keep eating potato chips and never feel full if they never get a good meal. Vilifying potato chips makes no sense, but vilifying people for only ever serving potato chips and no real meals makes perfect sense. That's where the real blame lies.
> What we should be focusing on is how we can equip the citizenry to better inform themselves, better guard against new means of propaganda and misinformation, and better develop the critical sense to second-guess the radical and largely unsupported claims they encounter on a daily basis. "Fake news" is not a problem solved at the level of the distribution channels, which will always be susceptible to dastardly uses--it is a problem solved at the level of the receivers of the information, who need to be imbued with the critical foresight and reasoning skills required to defend themselves from manipulation.
It's something I try to do, but I'm just one person and can't seem to get any traction. So I don't know what to tell you. I keep on keeping on, perhaps stupidly so given the general lack of interest and the state of my finances.