Something is Rotten on YouTube

I recently read something bewildering: "Something is wrong on the internet," by James Bridle. The piece casts a light on some dark corners of YouTube. (No, it's not that. It's far stranger.)

You can go read the post, but if you do, expect to come away feeling perplexed at best. I'm not even sure I can recommend reading it.

Here's the short version: people are posting all sorts of weird videos on YouTube that have the potential to cause all kinds of problems. The videos target young children, mixing familiar cartoon characters and sing-song melodies with creepy behaviors and outright violence. (And reportedly other, worse things; I took Bridle's word on that.) Once a kid stumbles upon this mess, YouTube's recommendation algorithm and autoplay function may lock them into a long session of such videos. As one who suffered frequent, terrifying nightmares in my youth without such fodder, I fear what such viewing might do to kids in similar shoes, and more importantly, what effects it might have on their psyches.

As Bridle puts it:

What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives.

This, I think, is my point: The system is complicit in the abuse. (emphasis mine)

The emphasized point above has been a recurring one lately. What effects did Facebook and Twitter have on recent elections? Did outside actors use them to leverage existing biases and further divide countries in dire need of common ground? And are there any adults in the room, or are the inmates running the asylum?

More YouTube Tomfoolery

That gets me to the weird thing I ran across today. A file I'm using to keep track of links for a writing project became corrupted, and so I was wading through it, trying to salvage as much as I could. I was looking for a Guardian article on possible food cost inflation after Brexit via its title, "Food prices would soar after no-deal Brexit, warns major dairy boss," when I made my discovery. In searching Google for that text, I found the link I was looking for, but I also found a couple of YouTube links (appended at the end of this post) with the same title.

From skipping through them, it seems that both of these videos were automatically generated by pulling the text of the article (and some of its images) and embedding it in a video via an algorithm. (It might have been done manually, but it certainly feels automated.)

Maybe I've just missed the boat on this, but I can't recall ever seeing anything like it before. I don't see it as a desirable replacement for reading the original article, but I don't suspect that's the aim. Rather, it seems another ham-fisted attempt at grabbing ad revenue via an automated system. Anyway, I posted the tweet below in the hopes of alerting The Guardian team in case they weren't yet aware of this.

https://twitter.com/costrike/status/932274447276376070

With that said, the real concern needs to be with YouTube. What controls do they have in place to protect copyrighted materials? And what are they doing to improve their performance on this front? More importantly, what are they doing to help protect children from inappropriate content?

If ad revenue chasers can post videos to YouTube with the same title as the original article without issue (the article in question is seven days old; the videos, five and six), what hope does YouTube have of protecting copyrights when such approaches become more advanced? Will they have to shift to greater human intervention, with measures to strenuously validate that content providers are trustworthy? And will advertisers lose faith in the system? I don't know the answers to these questions, but I think they're all worth asking, especially in our current environment, in which these systems are increasingly being perverted and leveraged against us.

The last video in Bridle's post, the one he called out for being highly problematic, is just over a month old. It has received over 300k views, and the account that posted it has nearly 400k subscribers. It's obviously in violation of multiple copyrights, and yet it's still up. The video does carry an age-restricted content notice, but how effective is that notice at keeping children from viewing the content? And how can an account posting age-restricted content get away with a handle like "Animals For Kids"?

I'll go back to Bridle's key point:

The system is complicit in the abuse.

Does anyone disagree? I don't.

Something is rotten on YouTube. It's time to demand that they do something about it.

Here are the YouTube renditions of the Guardian article.

https://www.youtube.com/watch?v=lTAuGuGRzpo

https://www.youtube.com/watch?v=i2o5fLQm4PM