YouTube’s Pedophile Problem


(Adam Engst) #1

Originally published at: https://tidbits.com/2019/02/21/youtubes-pedophile-problem/

The New York Times reports that YouTuber Matt Watson has revealed a hidden pedophile ring in the comments on Google’s video-sharing site, which has led to a massive backlash by advertisers.


(Tommy Weir) #2

Very alarming. YouTube has a strong presence among young people, and while my students all use it for music, beyond entertainment it’s the primary source of fake news and dubious reports that younger teenagers encounter. The stories I have to listen to and counter on the drive to school… it has become the most worrisome social media site after FB.


(jimthing) #3

It’s obvious what’s going on: essentially, what most of us would class as young girls mucking around innocently on camera is being turned into something vile by these commenters, who time-stamp any moment where bare flesh or similar may appear.

In their own little minds they can somehow get off on this, but to most people, and obviously to the actual girls themselves, it’s nothing sexually suggestive whatsoever.

The trouble is that the videos themselves are innocuous, and young girls have as much right to muck around in front of a camera as anyone else (YT’s T&Cs permitting). It’s not until the lowlifes turn this innocent behaviour into their own warped idea of sexualised behaviour that said videos get ruined for their makers – presuming, of course, that the originals haven’t been stolen and re-uploaded, or that the video wasn’t originally filmed by a paedophile pretending to the girl that their intentions were ‘normal’.

How links to external, full-on child abuse material get in, on top of the time-stamping issue, is clearly the larger cause for concern.

I think YT’s cutting of comments is a reasonable way to deal with that side (time-stamping/linking), but yes, the recommendation algorithm that surfaces other videos with the same issues is something they’ll have to address.

The end bit of his video, suggesting that demonetising has happened for videos with far less controversial subject matter, is a bit forced IMO, as these are massively complex decisions with a variety of reasons behind each video.
It’s not the brands’ fault either: they obviously don’t want their stuff showing up next to videos with vile comments beneath them, and the ads are supplied via algorithms YT controls, so this is clearly something YT has to sort out rather than the advertisers themselves.


(Al Varnell) #4

NBC News covered this in some detail today. YT is taking several steps to prevent this, including improved algorithms, closer monitoring, and preventing comments on postings such as these. The interviewer’s advice to parents is not to post this type of content on YT.


#5

I think the problem is that, at this time, any mass communications vehicle that is close to 100% user-generated but not thoroughly screened by professionals is going to be continually plagued with problematic content. The more people they reach, the more money they make. In the case of internet-based channels, being able to fine-tune specific segments within a large audience is a huge advantage, and it’s why YouTube, Facebook, Twitter, etc. are so profitable. But dedicated, trained editors, screeners, and curators are very expensive, and the processes would be time-consuming; AI is currently far from anything resembling a solution. This is why YouTube, etc. always insist they are a platform, not a publisher. Publishers are responsible for vetting and curating content. IMHO, it’s only when stuff is considered offensive by advertisers or governments that the “platforms” will consider removing objectionable content.

How likely is YouTube to take responsibility for everything that’s posted on it? Unfortunately, there are lots of bad people out there, and they are not likely to disappear.


(jimthing) #6

I have mixed opinions on that, leaning strongly towards not doing it, as it punishes the kids who’ve done nothing wrong (by suddenly not allowing them to post simple videos) rather than punishing the offenders.

While obviously no video = no possibility of comments, this effectively censors perfectly normal girls’ childhood behaviour just to stop idiots from having the chance to add their vile comments. Throwing the baby out with the bathwater.
We as a society should be able to go about our everyday lawful activities while the ones in the wrong get the punishment, not the other way around.

Surely the right thing to do is to chase the vile commenters, deleting their comments and accounts at the very least, and allow the girls (or boys, for that matter) to carry on posting their normal mucking-around videos.

Better AI in the future, and likely more staffing in the present, will hopefully aid YT in its efforts.


(Adam Engst) #7

I’m equally conflicted here. Obviously, it shouldn’t be a problem for people to post innocuous videos on YouTube, but clearly it is, and in many cases, what’s happening is that the original videos are being downloaded and then reuploaded, so they’re not even associated with the original account that would otherwise have some control over comments, etc.

Particularly with the deepfake capabilities of putting one person’s head on another person’s body and more, I’d strongly encourage parents to discourage such posting. Luckily for us, our son is utterly uninterested in this sort of public sharing and wouldn’t be caught dead posting a video of himself in a public forum. He’s not even terribly happy that there are childhood photos of him floating around the Internet from before any of us realized what could happen.

Each of these new revelations of abuse of data further deepens my belief that we need systems that give us full control over our personal information—it’s our intellectual property and we should have the ability to exploit or protect it as we wish. As it stands, these companies basically own us, which leaves us open to exploitation by both the companies themselves and anyone who can get past the often-weak protections that are placed on our data.

I’ll have to go re-read Vernor Vinge’s True Names.


(Bruce Carter) #8

I agree with previous posters that this is gross and cringeworthy. Control of personal data is a worthy goal; how to accomplish it will probably take greater minds than mine.

Minor point: I don’t think he is using “wormhole” in any commonly understood way here. I think what he means is “rabbit hole,” as in once you go down that rabbit hole, everything you see is related to that content. A minor point given the larger issue at hand, but nevertheless…


(Adam Engst) #9

YouTube has now turned off comments on nearly all videos featuring minors.


#10

An interesting aside to this thread, and I will try my best to keep my political opinions out of this discussion:

A question I could not find an answer to when Googling about this issue… was it a human or an algorithm that flagged the ad, and who or what decided to pull it? In either event, it took many days of negative publicity and public debate about child porn, abuse, and censorship before the comments about, and edited reposts of, the gymnastics videos were pulled, but what really did the trick was when advertisers started pulling their campaigns.


(Curtis Wilcox) #11

An algorithm can probably find a clearly displayed Facebook logo within an image, but my guess is it was human reviewers behaving in an algorithmic fashion, e.g. IF THEN Remove. I expect the policy is less about suppressing disparagement and more about preventing deception: ads that appear to be from Facebook or even a part of the platform. It’s not about protection of the logo itself; they’re happy to have people use it on their own sites to link to Facebook.


#12

I think it’s also about advertisers potentially pulling business.