YouTube’s Pedophile Problem

Originally published at: https://tidbits.com/2019/02/21/youtubes-pedophile-problem/

The New York Times reports that YouTuber Matt Watson has revealed a hidden pedophile ring in the comments on Google’s video-sharing site, which has led to a massive backlash by advertisers.

Very alarming. YouTube has a strong presence among young people, and while my students all use it for music, beyond entertainment it’s the primary source of fake news and dubious reports that younger teenagers encounter. The stories I have to listen to and counter on the drive to school… it has become the most worrisome social media site after FB.

It’s obvious what’s going on: essentially, what most of us would class as young girls mucking around innocently on camera is being turned into something vile by these commenters, who time-stamp any moment where bare flesh or similar appears.

In their own little minds they can somehow get off on this, but to most people, and obviously to the actual girls themselves, there’s nothing sexually suggestive about it whatsoever.

The trouble is that the videos themselves are innocuous, and young girls have as much right to muck around in front of a camera as anyone else (YT’s T&Cs permitting). It’s not until the lowlifes turn this innocent behaviour into their own warped idea of sexualised behaviour that the videos get ruined for their makers — presuming, of course, that the originals haven’t been stolen and re-uploaded, or that the video wasn’t originally filmed by a paedophile pretending to the girl that their intentions were ‘normal’.

How links to external, full-on child abuse material get in, on top of the time-stamping issue, is clearly the larger cause for concern.

I think YT’s cutting of comments is a reasonable way to deal with that side of it (time-stamping/linking), but yes, a recommendation algorithm that surfaces more videos with the same issues is something they’ll have to address.

The end bit of his video, suggesting that demonetisation has happened to videos with far less controversial subject matter, is a bit forced IMO, as these are massively complex decisions made for a variety of reasons per video.
It’s not the brands’ fault either: they obviously don’t want their ads showing up next to videos with vile comments beneath them, and since the ads are placed by algorithms YT controls, this is clearly something YT has to sort out rather than the advertisers themselves.

NBC News covered this in some detail today. YT is taking several steps to prevent this, including improved algorithms, closer monitoring, and disabling comments on postings such as these. The interviewer’s advice to parents is not to post this type of content on YT.

I think the problem is that, at this time, any mass-communications vehicle that is nearly 100% user-generated, but not thoroughly screened by professionals, is going to be continually plagued with problematic content. The more people these platforms reach, the more money they make. In the case of Internet-based channels, being able to fine-tune specific segments within a large audience is a huge advantage, and it’s why YouTube, Facebook, Twitter, etc. are so profitable. But dedicated, trained editors, screeners, and curators are very expensive, and the processes would be time-consuming; AI is currently far from anything resembling a solution. This is why YouTube and the others always insist they are a platform, not a publisher: publishers are responsible for vetting and curating content. IMHO, it’s only when content is considered offensive by advertisers or governments that the “platforms” will consider removing it.

How likely is YouTube to take responsibility for everything that’s posted on it? Unfortunately, there are lots of bad people out there, and they are not likely to disappear.

I have mixed opinions on that, leaning strongly towards not doing it, as it punishes the kids who’ve done nothing wrong (by suddenly not allowing them to add simple videos) rather than punishing the offenders.

While obviously no video = no possibility of comments, this is effectively censoring perfectly normal girls’ childhood behaviour just to stop idiots from having the chance to add their vile comments. Throwing the baby out with the bathwater.
We as a society should be able to go about our everyday lawful activities while the ones in the wrong get the punishment, not the other way around.

Surely the right thing to do is to chase the vile commenters, deleting their comments and accounts at the very least, and allow the girls (or boys, for that matter) to carry on adding their normal mucking-around videos.

Better AI in the future, and likely more staffing in the present, can hopefully aid YT in its efforts.

I’m equally conflicted here. Obviously, it shouldn’t be a problem for people to post innocuous videos on YouTube, but clearly it is, and in many cases, what’s happening is that the original videos are being downloaded and then reuploaded, so they’re not even associated with the original account that would otherwise have some control over comments, etc.

Particularly with the deepfake capabilities of putting one person’s head on another person’s body and more, I’d strongly encourage parents to discourage such posting. Luckily for us, our son is utterly uninterested in this sort of public sharing and wouldn’t be caught dead posting a video of himself in a public forum. He’s not even terribly happy that there are childhood photos of him floating around the Internet from before any of us realized what could happen.

Each of these new revelations of abuse of data further deepens my belief that we need systems that give us full control over our personal information—it’s our intellectual property and we should have the ability to exploit or protect it as we wish. As it stands, these companies basically own us, which leaves us open to exploitation by both the companies themselves and anyone who can get past the often-weak protections that are placed on our data.

I’ll have to go re-read Vernor Vinge’s True Names.

I agree with previous posters that this is gross and cringeworthy. Control of personal data is a worthy goal; accomplishing it will probably take greater minds than mine working on the problem.

Minor point: I don’t think he is using “wormhole” in any commonly understood way here. I think what he means is “rabbit hole,” as in: once you go down that rabbit hole, everything you see is related to that content. A minor point given the larger issue at hand, but nevertheless…

YouTube has now turned off comments on nearly all videos featuring minors.

An interesting aside to this thread, and I will try my best to keep my political opinions out of this discussion:

A question I could not find an answer to when Googling about this issue: was it a human or an algorithm that flagged the ad, and who or what decided to pull it? In either event, it took many days of negative publicity and public debate about child porn, abuse, and censorship before the comments on, and edited reposts of, the gymnastics videos were pulled, but what really did the trick was when advertisers started pulling their campaigns.


An algorithm can probably find a clearly displayed Facebook logo within an image, but my guess is it was human reviewers behaving in an algorithmic fashion, e.g. IF logo THEN remove. I expect the policy is less about suppressing disparagement and more about preventing deception: ads that appear to be from Facebook, or even a part of the platform itself. It’s not protection of the logo per se; they’re happy to have people use it on their own sites to link to Facebook.

I think it’s also about advertisers potentially pulling business.

YouTube has now announced mandatory labeling for content aimed at kids as part of a settlement with the FTC and the NY Attorney General. I’m not 100% sure it’s related to the issues raised in the New York Times article, but it’s certainly in the same general context.

https://support.google.com/youtube/answer/9527654

While I read the TB article, I never followed the link back to the NYT article, and don’t plan to do so; but I find all of this beyond disturbing. As a grandma to 6 wonderful young’uns, this is the stuff of nightmares for me. I hate seeing photos of my grandkids show up on FB, and I’ve gently told my kids why; however, the final choice is theirs.

Now that some parents are wising up, there’s currently a new “thing” on FB: if you’re a grandparent, post a daily photo of one or more of your grandkids. So many older adults are now trying to outdo all their so-called friends (do they even know those 500 friends? I’d bet not). It would be interesting to see who started this new craze; I’d guess it wasn’t some innocent grandparent. :wink:

I’m so glad my childhood photos aren’t floating around on the net. It’s bad enough my older self is out there, plus one baby photo–all of which I posted before I got wiser. As much as I love technology and the ability to easily keep in touch, I abhor the dark side of it all.


If you’re concerned about sharing photos of kids through FB, a great alternative is Notabli at notabli.com. It has device apps and a website; the user/account owner controls all sharing, and the developers don’t monetize users by selling data. Images are totally controlled by the user and are viewable only by those to whom the user gives permission. The developers earn a living through donations and subscriptions. If you don’t want to be the product, as we all are with ‘free’ software, then this is an option, like Proton Mail, where you can pay and be assured your data is not being sold.

Jack Clay
jaclay@gmail.com