Terminated (susiebright.substack.com)
4 points by danso | 2022-06-20 | 242 comments





Or... hosting the videos at cornell.edu. I realize that's radical.

I'm a little unclear on how you're supposed to self-host video these days. From what I understand the "obvious" way to do it (uploading a video file and embedding it on a page) doesn't result in a good user experience.

PeerTube perhaps? Also, let's face it, YouTube's user experience isn't great either; people only use it because it's popular and free (as in beer).

You can use Vimeo or Wistia. Of course, they aren't free.

So it's not that hard to do a good job of it, but it is work, and it's work with diminishing returns if you're not comfortable managing the lift. And once you get it working, it might require some experimentation here and there to get exactly what you want out of it. The HLS spec is readily available and ffmpeg can, given a video input, emit an HLS manifest with a set of renditions. (For research purposes, the magic words are probably "ffmpeg hls adaptive streaming". Also, tweaking those renditions is a rabbit hole. Bring lunch.) At that point you've just got video files and text-based manifests that can be uploaded somewhere, and you point a sufficiently smart video player at them.
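To make that less abstract, here is a rough sketch of the ffmpeg step, shelled out from Python. Filenames, bitrates, and segment length are invented for illustration; the flags themselves are standard ffmpeg HLS options:

    import subprocess

    # Sketch only, not production-ready: encode one input into a two-rendition
    # HLS ladder plus a master playlist. Paths and bitrates are placeholders.
    subprocess.run([
        "ffmpeg", "-i", "lecture.mp4",
        # map video/audio twice so ffmpeg can emit two renditions
        "-map", "0:v", "-map", "0:a", "-map", "0:v", "-map", "0:a",
        "-c:v", "libx264", "-c:a", "aac",
        "-filter:v:0", "scale=-2:1080", "-b:v:0", "5000k",
        "-filter:v:1", "scale=-2:480", "-b:v:1", "1000k",
        "-f", "hls",
        "-hls_time", "6",                      # roughly 6-second segments
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", "seg_%v_%03d.ts",
        "-master_pl_name", "master.m3u8",      # the playlist you hand to the player
        "-var_stream_map", "v:0,a:0 v:1,a:1",
        "stream_%v.m3u8",
    ], check=True)

Upload the resulting .m3u8 and .ts files to any static host and point an HLS-capable player (hls.js, video.js, or Safari natively) at master.m3u8.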

For context, I work at Mux [ https://mux.com ] where SaaS video streaming is our bag; having previously built DIY solutions as mentioned above I think that Mux is probably the right choice for most folks who just want to ship a VOD or a live stream in a usable format--and you pay for it, you're the customer--but the tools are there if you want to take a swing at it.


When it comes to recorded lectures, even just HLS is probably overkill in my opinion. All modern browsers support video playback of simple H.264/VP9 files natively anyway. Stuff like HLS only tends to break the browser's native download button.

As with anything, it depends, right? Like, native video does have strong arguments for simplicity. It doesn't have strong arguments for usability. Do you want to seek? Do you want it to continue to work if a user's phone hits a rough patch of the internet?

Nothing stops you from letting somebody download a static rendition too, though. (Not trying to shill too hard for Mux, but that comes out of the box.)


Out of curiosity, what is the problem with the user experience? I genuinely want to know, since I haven't tested embedding videos for a long time. Is a CDN absolutely needed for streaming content?

Seeking into a large video file is awful and having to buffer the entire thing is a bad user experience unless you're starting at the very beginning. In addition to that, there's a quality-vs.-download-time tradeoff and the right answer to that can vary depending on circumstances. Modern systems use HLS or DASH to enable adaptive streaming. Adaptive streaming chunks up your video into segments of 2-10 seconds and renders it out at different quality levels; a sufficiently smart player can then switch between different renditions on the fly based on system and network performance.
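To make that concrete, the "master" playlist the player loads first is just a short text file pointing at one playlist per rendition, roughly like this (bandwidths and paths invented for illustration):

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=5500000,RESOLUTION=1920x1080
    1080p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=854x480
    480p/index.m3u8

Each per-rendition playlist in turn lists the individual 2-10 second segments, which is what lets the player seek to an arbitrary point or change quality at a segment boundary.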

> Seeking into a large video file is awful and having to buffer the entire thing is a bad user experience

It would be, if not for HTTP having "Range" support since HTTP 1.1 in 1999.
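Mechanically it is that simple; here's a sketch with a placeholder URL (the server has to support byte ranges, which most static file servers and CDNs do):

    import urllib.request

    # Sketch only: fetch roughly 1 MiB starting at byte 10,000,000 of a
    # (placeholder) video URL using an HTTP Range request.
    req = urllib.request.Request(
        "https://example.edu/media/lecture.mp4",
        headers={"Range": "bytes=10000000-11048575"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)                         # 206 Partial Content if the range was honored
        print(resp.headers.get("Content-Range"))   # e.g. bytes 10000000-11048575/2147483648
        chunk = resp.read()

Whether a raw byte offset maps cleanly to a playable point in the file is a separate problem, though.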


Have you ever tried to range into a file with variable-bitrate encoding? It doesn't go so well. This is one of the reasons (a little stale IMO, but historically reasonable) that the received wisdom for podcast formats is to use CBR audio, so as to make the naive seek algorithms used when streaming more reliable. Video's bigger and the human tolerances are smaller.

There are very, very smart people working on making video readily consumable over the internet. If "just use HTTP range requests" was a viable answer, I tend to think they'd have done it.


Then, clearly, what we need is a seek-friendly video encoder.

Okay, I'm stumped. Why do we need a "seek-friendly video encoder", which would compromise video quality and file size, when DASH and HLS already define seeking in a way that's more flexible and effective than "range requests into a multi-gigabyte file" will ever be, and that also allows for adaptive streaming and content steering?

Sure, if the indexing can be handled out-of-band to enable more efficient video encoding, then do that. But this means that the media is no longer a single stream of bytes, which complicates things. It’s always simpler, and therefore better, to have media self-contained in a single file of bytes.

I'm pretty sure the user experience of what you've described is drastically better than what we have with youtube.

What on earth would be the problem?


Vimeo is the SaaS solution.

AWS has primitives that you can cobble together into a solution. I assume Azure, Google Cloud, and probably others have something similar.


Using something like Peertube [1] to host the video and serving that content in a frame seems to be a feasible alternative. I've hosted a few instances for about 3 years without a hiccup; it mostly 'just works'.

[1] https://joinpeertube.org/


I don't get it. What's wrong with a <video> tag and a few <source> tags to aid browsers that don't support the necessary codecs? My experience with the browser native video player blows most alternative players out of the water in terms of convenience and reliability.
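For reference, the minimal version is just markup along these lines (paths are placeholders; the browser plays the first source it can decode and falls back to the inner text otherwise):

    <video controls preload="metadata" width="960">
      <source src="/media/lecture-2022.webm" type="video/webm">
      <source src="/media/lecture-2022.mp4" type="video/mp4">
      Your browser doesn't support embedded video;
      <a href="/media/lecture-2022.mp4">download the MP4</a> instead.
    </video>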

If you're okay with the privacy implications, you can always set up a Peertube instance to host the content more affordably. Such a system makes it more difficult than necessary to download the video, though.


I find a simple <video> embed to be a fantastic user experience, even compared to YouTube.

There are numerous open source YouTube-like viewer scripts. If bandwidth is an issue put the files on S3 or something and link to them there, but that's probably not an issue for a university.

Here's one: https://videojs.com

People seem to think these things are hard.


Deserved? I think that is a bit strong. Yes, they should have anticipated the possibility, and, maybe they have! Who knows?

Saying that individuals deserve the bad actions of platforms is ridiculous. If Microsoft decided to delete files in your home directory, did you deserve it because you weren't using your own version of linux that you compiled from the ground up? No. And I don't think you are really advocating for that.

The reality is that we need a better set of laws to hold platforms accountable to their users, and we need to advocate that our government create one.

edit: word substitution for clarity.


Not to mention the part where it's kind of ridiculous to suggest that a library would not maintain proper archives. There isn't a single paragraph in the text suggesting that "their only copy was on a proprietary video platform", to quote GP. Just that YT terminated their account.

Yes. I deserve that if Microsoft decides to do so. One should take care of one's data and not trust anyone.

I think you might misunderstand what "deserve" means. You do not deserve to go bankrupt if you don't have health insurance. You do not deserve to lose your life savings if someone robs your bank.

Yes, people have a responsibility to manage their money, their health, and their data loss risk. Failing in that responsibility does not mean that those consequences are somehow moral imperatives.



> The point is, and this is crucial: If a writer/journalist charges a fee— even so much as a dime— trolls can’t be bothered to complain and harass. They’re such cheapskates.

Really interesting point, and I hope it works for them.


It’s also possible to use this as an effective defense against being indexed by Google, which is a great way to cut abuse as well.


It's hard to tell if you're agreeing with the sentiment in the comic or using this event to illustrate its limits? I think probably the latter but...

I agree with the sentiment in the comic, but am honestly asking here: was the content of asshole quality? Did it deserve to get the whole channel pulled down?

It sounds like you don't really agree with the sentiment in the comic, if you're asking that question?

It sounds like you think there are only black and white consequences in the world. Criticism and differences in opinion can, and should, occur before complete de-platforming if the level of disagreement is not too high (hence my question on the level of disagreement for the content).

That being said, there exists no innate right for content to be hosted on Youtube (or anywhere else for that matter) if it does not follow the rules set out by the hosting company or the nation in which it operates. So yes, I do agree with it.


I don't at all think there are only black and white consequences, what I think is that the comic's "it's just that people think [Cornell University is] an asshole and are showing it the door" is a rather black and white statement. I think the question you asked was a perfectly reasonable one, I'm also interested to know the answer. I just don't think it matches the sentiment of the comic.

Saying that I agree with the sentiment of the comic does not impart any statements about how I feel about its application in this situation.

Perhaps the two align, I do not know. Until more is known, I reserve judgement.


The comic does not apply when the system is controlled by the sentiment of staying advertiser friendly.

The real trick is that the comic has always been about what is "acceptable" content, which in the US is what advertisers deem acceptable.

When this comic was first printed it was partially a response to big platforms like Reddit having massive waves of bannings. It has always been about systems that are advertiser friendly, even if Munroe is too dense to realize that.


Exactly. The comic is content-agnostic.

The comic refers to communities who do not want to allow a certain type of speech and thus censor or ban it. There is nothing wrong with that.

Automated content flagging by AI on a platform like YouTube is completely different.


It really isn't, it's just an extension of the flawed mentality that people like Munroe exhibit.

The reality is that speech is ALWAYS a slippery slope, and those who believe it isn't are always surprised when they become the next target.


I tend to agree with the comic, but it is about an "online community".

In the case of YouTube we are talking about a platform. While the comic could still relate, up to which level do we consider that it is a private platform? What about writing something on your own website hosted on a VPS, and you end up terminated by the hosting provider? What if you end up terminated by your ISP or your mobile vendor?

Do we have to draw the line, and if so, where?


> Do we have to draw the line, and if so, where?

When the government gets involved.

Till then, build your own internet backbone or get back in line.

Fwiw, I don't agree with this but net neutrality died a while ago. Both the left and right political movements have had ample opportunity to fix this problem but they've made their bed. Now they get to lie in it.


Organizations like Google, Amazon, and Apple have more capital and potential to do harm to individuals than most governments in the world.

> Both the left and right political movements have had ample opportunity to fix this problem but they've made their bed. Now they get to lie in it.

This is frankly nonsense. Net Neutrality is dead because organizations like Google aren't bothered by having more control over the net as well. For every "blackout" site when NN was a big pet issue, there were 10 people in favor of NN who could not even explain what NN was.

Google doesn't push on the NN angle anymore because they make more money on a closed net, and pushing out anyone critical of mass media/advertisers.

NN is dead because the big institutional players are against it. It's as simple as that.


And this is exactly Elon's goal for Twitter.

A private platform without uptight censorious people banning anything they deem offensive from the right or the left.


It's a fine feature for a company to say "we don't owe anyone their right as a US citizen to 1st Amendment 'free speech' but we'll certainly offer it because we think it's good (and profitable, obviously.)"

As shown by his actions of firing people from SpaceX who disagreed with him on Twitter. Lawsuit pending.

I am completely baffled by the attempt to equate valuing free speech on a broadly accessible platform and tolerating insubordination on operational issues within a private company.

Is it really hypocritical of Elon to want free speech on Twitter, but also say that employers can fire employees who violate policies and/or bad mouth their company to the public?

That doesn't seem inconsistent to me.


No one could truly be naive enough to believe that that man will not start imposing his own will and fancy over Twitter if the possibility presents itself to him, could they?

I don't necessarily disagree but good lord XKCD got awful preachy?? If you're so damn smart and witty and above the fray of petty bickerings, then you shouldn't feel compelled to lecture the bottom-feeders of the internet as such, unless you're self-important and feel as if you have a "duty" from your public soapbox to "put these people on notice" or whatever.

Except in this case it wasn't "people listening" it was probably an AI who flagged it and a mindless human drone who clicked "OK" when prompted to block the Cornell account. Or alternatively, one or more asshole(s) reported them to Google and a mindless human drone agreed with the asshole(s). If the enemy of my enemy is my friend, is the asshole of someone I consider an asshole a non-asshole?

> it was probably an AI

The AI, despite recent rumors to the contrary, is not a self-aware entity. It is the creation of humans, and it reflects the biases and values of the humans that created it. It's simply that the people listening have created a tool to automate listening and reacting.



> You may have heard, last week, YouTube terminated the entire Cornell University Library account on their site.

They did?

Seems to still be available for me: https://www.youtube.com/c/cornelllibrary

I couldn't find any news about this "termination" with a cursory google search either.

I even still see the lecture mentioned: https://www.youtube.com/watch?v=KM0RL8ppDmE

Presumably it was some kind of automated response which was quickly overturned by a human?


I see it as well. I even see the "subject" of the ban at the top of the page. I agree it's probably something automated that was overturned (although I feel that these AIs are being too heavy-handed; there should definitely have been some human review before taking the entirety of the Cornell YT library offline).

The big issue is that if it wasn't Cornell Library, which can bring considerable weight to bear in requesting a manual review, the reinstatement would have had a low chance of happening - even if you did reach a human being with your request, they would be overworked and would spend barely a few seconds before selecting the automated negative response, and that's assuming they wouldn't deliberately decide that by whatever personal or cultural mores the content is obscene and warrants destruction.

She even makes an indirect point about this when mentioning her earlier experiences:

> As one Google rep emailed me in the 1990s, “If you were a 10-figure net worth institution, we woudn’t bother. But you aren’t.”


There is no way anything "automated" would not ban Gayle Rubin.

It's complicated. Google is (for Cloud at least; couldn't say for certain regarding YouTube) already in the business of identifying certain "high-touch" customers where automatic downing of their service would be embarrassing to Google and so manual review happens. If YouTube doesn't already support that, adding it would be controversial ("Oh, they're playing favorites with content? Come and see the censorship inherent in the system! Help help I'm being repressed!"). If YouTube does support it, Cornell may simply not have made the list yet because building and maintaining that list across millions of accounts is a hell of a cat-and-mouse game.

Google is rather heavily incentivized to try and automate as much as possible (and deal with the political blow-back when they stumble) instead of playing the cat-and-mouse game.

(As a fun thought experiment: ask yourself "If I were to design the algorithm that wouldn't have kicked Cornell off but would still auto-enforce the TOS, how would I do it?" And then ask yourself "If I were a malicious actor and the algorithm I designed were the one implemented, what kind of content could I create that the algorithm would fail to take down?").


I couldn't find any news about this "termination" with a cursory google search either.

It is sad that a cursory Google search has become the arbiter of truth today.


I certainly wasn't claiming it didn't happen, nor do I think I know the truth of the matter.

I simply don't have the time or the resources to fully investigate the issue myself, and so I provided the information I do have.

I am hoping that someone who has more substantial information will chime in.


I looked out the window and didn't see it on a billboard, so I'm going to let the world know that.

Okay, but they also found that the allegedly "terminated" Cornell University Library account seems to be… not terminated. The point of the Google search is to find accounts corroborating the claim that it actually had been.

And spending sixty seconds researching something before giving up does not mean that the thing didn't happen, and thus shouldn't be worthy of comment.

"I did nothing and was shocked to find nothing" shouldn't be newsworthy. It also shouldn't be used to cast doubt on the blog post.


Why shouldn't there be doubt on the post given that there's no indication it ever happened?

People just believing whatever they read on the internet is the reason why such rampant misinformation is spreading. YouTube is a distributed system - many parts are eventually consistent. It's very likely they loaded the page, saw some error and made the blog post. It doesn't mean anything was ever deleted.


> It is sad that a cursory Google search has become the arbiter of truth today.

The parent comment first pointed to a live and active link to the Cornell University Library account.

The continued existence of the account is the source of truth. The Google search was meant to show that they tried to find any extra information but couldn't.

Checking the direct source (the YouTube account itself) and searching for additional information is basic, responsible fact checking. There's no reason to mock responsible behavior like that.


When a public blog post refers to an incident that would almost certainly be widely reported if it actually happened, I don’t think it’s sad that readers would expect to find mentions of it on Google.

Looks like this was published 4 days ago. Things might have been rectified since.

I wasn't privy to this particular incident, but typically:
1. Content uploader gets complaints from users, institutional service desk tickets, etc.
2. Content owner tries official channels, which may be slow or nonresponsive, by filing a ticket on YouTube's side. If this is something that has happened before and/or they're a large institution, they may be prioritized or have a point of contact.
3. Maybe it gets fixed.

I tried a cursory google search of 'cornell library youtube down' and the first result is a tweet from the 18th where someone's taken a screenshot of the content being down.


You mean a tweet referring to a screenshot of the blog post posted here?

I would think they would mean this tweet, which is a screenshot of the YouTube channel returning a "this page isn't available": https://twitter.com/elotroalex/status/1538168994712399872

That happens pretty often with random channels.

"This page isn't available" YouTube - search it and you'll see thousands. The other thing that can happen is if the channel is marked as private.


> I even still see the lecture mentioned: https://www.youtube.com/watch?v=KM0RL8ppDmE

It's age-restricted though, so you have to give up some identifying data to Google to be able to view it.


Coincidentally posted 2 days ago:

https://github.com/zerodytrash/Simple-YouTube-Age-Restrictio...

> This extension uses some API tricks to access age-restricted videos from YouTube anonymously.


Not if you use yt-dlp/youtube-dl or one of its wrappers like mpv to watch it.


Thanks, that looks like a useful service.

Note that youtube-dl's workaround tends to be broken due to lack of maintenance, and currently even yt-dlp doesn't work on the most severe of the 'tiers' of age restriction.[1]

The less/non-anonymous method of using cookies from a logged-in browser still works on all videos.(?)

[1] https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-88883...


That PR has been merged now, so presumably those tier issues are fixed now?

It highlights another point: what do Bitcoin, web3, and social media have in common?

Give up?

We have not found a way to duplicate the human effort required to build, sustain, and maintain a community of humans in our digital spaces.


Censorship is the side effect of moderation.

Censorship is a tool of moderation. It doesn’t have to be the only one.

> The First Amendment, or anything close to it, is MIA on the Web.

... she declared, uncensored, from her self-hosted journal.

A multinational Fortune-500 megacorp declined to host Cornell's content for free because they have standards the content violates. Cornell has more than enough cash in the ol' war chest to spin up their own video hosting. This is almost a non-story because it's either telling us something we already know about how corporations treat potentially-embarrassing-to-them content or how society treats the content Susie Bright creates.

ETA: In addition, YouTube does not appear to have decided not to host the content after all (it was likely found initially objectionable because they can code a recognizer for "objectionable" more accurately than a recognizer for "educational value"). This doesn't change the fact that YouTube is within their rights to choose not to associate with Cornell as a business arrangement, but it perhaps matters that here, they didn't even make that choice.


When you create a monopoly, you have to deal with more restrictions - or you should, at least - but Google looks like the US government's favorite toy, and they don't get any pushback.

Regarding: "standards the content violates", I suspect it is a significantly simpler, and revenue-driven, rationale.

Removing the content costs less than responding to the complaints, unless those associated with the affected content have sufficient clout to swing the cost equation the other way.


Sometimes I think of the internet like a new country we've discovered. When we landed there, there was no infrastructure - no roads or telephone networks.

How do we govern the internet? For all our bluster about loving democracy, the internet is essentially a feudal system. A few large entities have stepped up to build needed infrastructure. In exchange, they own everything. You don't rent your Facebook page, your Gmail address or your iCloud account. Renters have rights. If AI enforcers run by the big tech companies don't like you, they'll just banish you from their land forever. (And delete all your stuff in the process). You have no rights and no representation. Just like in medieval times, this is seen as totally normal.

Facebook and Google pay for all of this by tracking us everywhere we go and selling our data to advertisers. It's like if every road outside your house was owned by a company, and when you walk out the front door cameras track your every move so they can show you personalized ads. Apple pays for it all by massively taxing every transaction in their store, banning competition and making it so everything you buy inside their ecosystem won't work anywhere else.

I never know quite what to say when people defend the right of companies to shamelessly claim sovereignty over their billions of users. I mean, it's not exactly the same as totalitarianism because you can move to a different tech platform if you want. Well, sort of - you have to leave your data, communities, apps and devices behind when you leave.

And anyway, benevolent dictators work great in small companies and communities. My problem with Facebook is that it has nearly 3 billion monthly active users. That's not a team. That's bigger than any country. It's an empire. It even has a king!

If I don't like how my city is run, I can vote in my local elections. Where is the ballot box for our tech platforms?

It took humanity thousands of years to figure out that democratic systems work better than feudal systems. I really hope we can make that leap faster this time. Our civilization might depend on it.


Reasoning from real estate to describe the Internet is a fraught analogy. When "we landed there," the Internet didn't exist; it wasn't built yet. It's not finite carve-able territory like a continent; it's machines, the interconnections between them, and the data accessible on them. And all of those things were always owned by someone (even if the data wasn't under copyright, someone was paying for the electrons to represent and transmit it).

Everything about access to the network has grown as a peer-and-guest arrangement between private owners or government entities (starting from the original systems where most people connected via academia or the government and being fired from your government position or exited from your university cut your access to your email account and other online resources). In this modern era, very few are peers and most are guests, and yes, the ownership situation reflects that. Adjusting that ownership situation by fiat or force is a fraught enterprise. Let's imagine someone tries to nationalize Internet infrastructure that is currently privately-owned and establish a democratic structure for managing it. What government do we trust to do that? Part of the allure of the Internet was how hard it was for governments to control it; do we willingly hand them that power?

All of that having been said, I agree with you that tension is there between the origins of the Internet and the modern desire for self-actualization and personal data rights in the use of the Internet. I'm somewhat excited about projects like Mastodon that seem to be trying to decrease the complexity of becoming a peer. I can imagine someone starting a democratically-run service provider, perhaps off the credit union model, to provide more direct ownership to their userbase. Of course, it would have to compete in the corporate world alongside other companies, and there is, perhaps, a reason that the corporate world so closely resembles medieval fiefdoms almost uniformly. You noted "It took humanity thousands of years to figure out that democratic systems work better than feudal systems;" I'd offer the counterpoint that the feudal system describes most corporate structures pretty correctly. But maybe that's just accident of history and there's no particular reason that board-owned or single-owned companies dominate the landscape relative to co-ops.

It'd be a fun experiment to watch someone try to provide a better alternative as a service provider (in the connection, social networking, data hosting, or cloud infra spaces).


This might be the most interesting and important point:

"The thing is, this banning/terminating/deleting troll crap has been going on since I first got my modem in 1986. I’ve been on every popular web platform, every one — and I’ve been “un-platformed” each time, because of specious complaints. The emerging media companies court early adopters, and then they can’t get rid of us fast enough."

She participated in the famous WELL, back in the 1980s. It is interesting how long this particular struggle, over moderation policies, has been going on.

She makes a good point that each of these companies has followed a similar pattern of first courting the kinds of radical experimenters who blaze new trails, and then later kicking them off platforms because they are not respectable:

"This has been going on for decades now. It isn’t just Google, it’s all the majors: Apple, Amazon, FB, Twitter, Microsoft, AOL, on and on. I could name a dozen others who’ve disappeared in the social media mist. It’s cruel that each of them censor the very people who created their original incubators. Did you know that On Our Backs was the first magazine, of any kind, to publish with an early Apple computer and Adobe Pagemaker 1.0? Yes, we were. I was on The WELL in the 80s. Our idealism and commitment was ferocious!"


Yep, this has been going on forever. That's why the Spritely Foundation and Project were created to put all the knowledge on this subject to use in a standards effort.

https://spritelyproject.org/


> She makes a good point that each of these companies has followed a similar pattern of first courting the kinds of radical experimenters who blaze new trails, and then later kicking them off platforms because they are not respectable

IMO/IME this is not exactly how it happens. What really happens is that as these platforms grow, they attract the attention of religious zealot groups who conduct absolutely massive campaigns to take down any content that they personally see as inappropriate. They have their own private blogs and mailing lists, with people given names/emails to contact to lodge complaints about "highly offensive" content that they have never even viewed, they were just instructed to file complaints against it.

A big part of the issue is that much of this content really only appeals to a small group of people, and many times those people do not even realize these attacks are happening in the background. The content platforms then decide it is easier to just ban this content so that they do not get overwhelmed with the religious zealot protests.

I wish we could charge those kinds of groups with the impediment of free speech (at least in the US) and fine them out of existence.


Religious groups have certainly been responsible for chilling _some_ content in modern times, but they are far from the biggest threat.

The biggest threat is that only a handful of companies operate most of the Internet now and they want it to have all the diversity and culture of a suburban shopping mall.


Religious content bans happen all the time. Consider the ban of Lightyear in 14 countries last week.

Also, it has never been easier to form geographically dispersed but aligned cultural groups.

I don't know which Internet operators you are talking about, but mine does not give a crap about the culture of the content I consume--only the ability to sell me ads about it.


Just because they don't broadcast that they're doing it doesn't mean it's not happening.

I think companies embrace moral relativism and attempt to stay within the Overton Window (or at most dabble near the edges). It's a survival necessity because their own profitability will suffer if they are found to facilitate content that is outside of that window. The Overton Window does drift, which explains how content can suddenly become objectionable in a matter of months.


UK cellular network operators by default enforce an ‘adult content filter’, and require you to provide proof of being over 18 to get this removed (typically in the form of a credit card number as an identifier, not as a mechanism to be charged).

These content filters block large swaths of the internet. Urban Dictionary is the one that trips me up every time I change networks.

LGBTQIA+ related content has typically gotten caught up in this as well.

You’re lucky if your Internet operator does not care. Unfortunately, there are plenty of places where censorship is rampant, under the guise of just “think of the children.”


I don’t think their remark is limited to conventional religions.

Zealot is the key word. People who are enthusiastically intolerant.


But why do companies want things so bland? It's because of the very vocal religious communities.

> I wish we could charge those kinds of groups with the impediment of free speech (at least in the US) and fine them out of existence

That would be an actual violation of free speech, penalizing a legal, private, first amendment-protected activity. Remember, the first amendment only prevents the government from constraining the speech of citizens. Telling an organization, especially a religious organization, what they can and cannot advocate for would be an egregious violation of first amendment protections.

https://xkcd.com/1357/


I get what you are saying, but these groups effectively operate as a DDoS attack against content they do not like. Content which is not promoting or showing anything "bad", it just violates their beliefs.

Those groups are, and would be, free to voice their concerns and beliefs on their own platforms, in their own buildings, etc. I am not saying they should not have an equal right to free speech, just that there should be a cost or penalty for trying to impose your beliefs on all of society via those kinds of organized actions.


But isn't this category of "bad" the entire dispute? People opposed to pornography don't typically agree with your perception that it's just about their individual tastes; they argue that it is "bad", in precisely the same way as racial slurs or planning a crime or any of the things that you and I think obviously need to be prohibited.


> should be a cost or penalty for trying to impose your beliefs on all of society via those kinds of organized actions.

At least in the US, "organized actions" would be protected under "right of the people peaceably to assemble" part of the First Amendment. Unless those organized actions presented a "clear and present danger"[1], it's difficult for me to see how a government could restrain organizing.

1 Cantwell v. Connecticut, 310 U.S. 296 (1940)


Sorry, this is a pet peeve of mine. The expression "free speech" has a meaning outside the US constitution. It's like if I were to start quoting the Magna Carta at you. I'd further argue that it clearly has a meaning outside of a legal one.

When the author states they want to fine the groups, specifically in the US, based on violating free speech principles, it is clear that they are referencing the legal free speech in which a government would impose a penalty when an institution violates the law.

My comment was in response to the mention of the US. I could have been more exact: instead of "actual violation of free speech", it would be "an actual violation, in the US, of the First Amendment guarantees of free speech." Outside the US, that doesn't apply.

"She participated in the famous WELL, back in the 1980s. It is interesting how long this particular struggle, over moderation policies, has been going on."

This has been going on at least since publishers existed (ie. centuries ago).

Publishers have always held the reins of power, and decided who had a voice and who didn't.

That's where self-publishing comes in, but of course self-publishing and independent platforms/publishers won't have the reach of huge organizations.

At least for now we still have a choice where to turn our attention, even if it's only a minority who'll make the effort to do so and stay informed from outside of mainstream media.


I think the difference is that in the past there were many publishers. You're right that editorialising and censorship has been going on forever, but in the past much of it was kind of (unintentionally) decentralized.

Not that it was always perfect or anything, but this is a key difference. You can get content out there without going through Facebook/Google, but it's much harder. It's like the postal service doing the censorship.

The challenges faced by Facebook and Google are also substantial, although most of these revolve around user-generated content, which is a bit different from publisher-generated content. Maybe we shouldn't treat them as identical.


> It's like the postal service doing the censorship.

See 18 U.S. Code § 1461

https://www.law.cornell.edu/uscode/text/18/1461


> religious zealot groups who conduct absolutely massive campaigns

The campaigns don't even need to be very big. The baseline of organic complaints against the average show/video/site is vanishingly small, so even a group of a few thousand people all hitting the same target at once looks like a tidal wave of backlash by comparison.


> What really happens is [...]

Do we have data on this? How do you know this? How does this effect interact with the usual claim of YouTube self-policing to be able to sell ads?


The WELL is still around, by the way.

Compared to modern forums, it's a bit hard to read because the subject lines don't change. Topics can last for many years and keep getting rolled over when they're full.


Never, ever depend on Google for anything that you can't afford to lose and/or can't replicate in a short time. And yes, that applies to GCP.

> Google

Or Facebook/Instagram/Meta, or Twitter, or Dropbox, or ...

Not even Iron Mountain is immune from malicious or negligent handling of critical records.


Never depend on anything unless you have signed a contract that explicitly outlines the consequences of the thing failing to deliver, and you are satisfied with those consequences.

Also never depend on any one thing; GCP, AWS, Azure, Rackspace, etc.


My critical comment on sexual content in an academic setting has been 'flagged' on YNews, which is a kind of censorship.

In that case are downvotes censorship too?

Easy answer - not at all. The difference there is that flagging results in 'no one can read this', which is an act of censorship, while a downvote is part of an ongoing engagement process.

Anyone can read [dead] comments on HN if they turn 'showdead' on in their profile. We never delete posts outright unless the author asks us to.

Users flagged your comment at https://news.ycombinator.com/item?id=31810870. We can only guess why users flag things, but it's not hard to point to site guidelines which your comment broke - such as the ones that ask people not to fulminate, post flamebait, or use the site for ideological battle.

You can call that 'censorship' if you like - people use the word to refer to any moderation they don't like - but (regardless of what's going on in users' heads when they flag) moderators don't think of this process in terms of banning specific views. That is, we don't much care what your views are. We care about threads not degenerating into flamewar dreck.


thank you for the clarification dang -- as usual your calls to order and insight here make sense. I will keep striving on that


Is NGINX-RTMP [1] the easiest-to-set-up alternative to YouTube? Does anyone have any experience with it? I'm not sure if you need the paid NGINX Plus [2] to deliver a decent user experience on all devices.

[1] https://www.digitalocean.com/community/tutorials/how-to-set-...

[2] https://www.nginx.com/products/nginx/streaming-media/



There is the Avalon Media Project run by Indiana University as well. All open source and intended for this very reason, since IU's Kinsey Institute places them at risk of deplatforming.

I think a much better example of YouTube degradation is the Act Man debacle: https://twitter.com/TheActMan_YT/status/1534648758780035072

Unfortunately, YouTube is a de facto monopoly in its space, backed by its daddy's search monopoly, so it will be very hard for alternatives (be they competing services or p2p solutions) to become viable even if the degradation becomes obvious to everyone.

The most realistic hope I have regarding YouTube is that a combination of factors will allow/force the US establishment to split up the tech giants, similarly to how it was done to AT&T. Without that, YouTube will continue to enjoy its dominant position with minimal oversight and an accelerating abuse of its power over the information consumed by the populace.


Yeah, this decision by YouTube is extremely puzzling. The background[1] on this dispute is extensive, but the most egregious event was probably Quantum calling ActMan's mom on the phone and making some veiled threats[2].

[1] https://www.dailydot.com/unclick/act-man-quantum-tv-drama-ex...

[2] https://twitter.com/TheActMan_YT/status/1518660188617641984


YouTube seems to have been unusually clear in their reasoning on this one, as per the DailyDot story: ActMan threatened YouTube employees' families and content creators.

If anything, YouTube's response to that threat is positively reserved; they've taken nearly none of ActMan's content down.


I think reading ActMan's tweets as a threat requires intentional obtuseness on their part. He's clearly satirizing what Quantum is openly doing on YouTube itself.

Have we entered an era where satire is dead? No one wants to spend time actually gauging intent so everything is read as literally as possible.


Why is this so highly upvoted?

> Google’s little company deleted all the files, after they saw the April university lectures by myself, Gayle Rubin, and other On Our Backs artists. We don’t know if there was a complaint, or it was just their usual harrow of ever-changing terms and conditions.

It's literally wrong, lol. You can go on YouTube and find these lectures yourself. If people on here are susceptible to such blatant misinformation then maybe there's no hope. YouTube can and does actually delete videos - if this is what happened they'd be gone (even if the account were restored). The fact you can see the videos in question means the author is wrong.

Furthermore, where's the proof that it was ever taken down to begin with? Archive.org is a thing. People just believe anything they're told huh.


I see absolutely no reason anyone with a reputation should lie about this. It is not an uncommon occurrence in platform capitalism to have automated moderation misfire either.

It’s not about lying. People can just be mistaken.

So this person has been banned from every single platform. Maybe instead of arrogantly wearing this statement as some sort of medal, think about how the community guidelines reflect some sort of shared values, and how being ousted again and again might be a sign of you being an asshole above anything else.

Looks like the account got restored. Seemingly, nothing got erased, the account was just temporarily unavailable.

> Needless to say, the OOB lecturers were fully clothed and rather obsessed with our historical narrative. Anyone not interested in feminist politics would have probably fallen asleep.

The lecturers perhaps, but the streamed lecture contains full-screen examples from pornographic magazines that obviously should be flagged for verification by any porn detecting algorithm.

Youtube does state that explicit materials are allowed for educational content, but if their copyright machines can't even distinguish white noise from copyrighted music then I have very little hope that context like "educational discussion" will ever be recognised automatically.

As a society, we may need a platform where educational/government institutions can upload content (with content warnings, if necessary) because Youtube is a terrible fit for content like this. Youtube's scope is too large to be managed by humans who can understand context and their advertiser seeking behaviour provides an incentive to "punish" videos containing even allowed nudity in their self-learning algorithms. Requiring every institution to set up their own video hosting infrastructure is unnecessary and even wasteful of public funds.


They could just flag the accounts of educational institutions as educational. I sincerely doubt the Cornell University Library is going to pivot to pornography.

On the other hand, wrongthink is quite likely to come out of such an institution (historically speaking, maybe not so much in the current place and time). So it would be a bad idea from the censors' point of view to allowlist the account.

So what you are saying is that it makes sense to ban people for ideas you consider wrong, because you worry that those ideas might lead to people being banned for ideas that are wrong?

Are you using "wrongthink" to ironically mean a wrong thought?

Still seems like a bad political[0] move for them to so shamelessly censor an institution of learning.

[0] As in it will leave a bad impression on a lot of people, not that they'll lose the next election.


I'm not sure if they'd even want to. Youtube exists to make money off ads, and giving account exemptions so that they can upload long-form content explicitly not fit for advertisers seems like a poor business model. Google is definitely not in it for the good of humanity (and neither are the decision makers at a lot of educational facilities, as hard as some staff may try to improve that). In a perfect world your solution would fix the problem, but in practice it doesn't work out.

Vimeo seems like a much better fit for education, to be honest. They're probably more expensive, though.


Vimeo is moving to a B2B model, so definitely too expensive.

What they really want is to upload to the Internet Archive, but the UI really sucks.


But why would all advertisers refrain from putting ads next to lectures, even if some imagery is a little risqué? That some advertisers would not like it, sure, but all of them?

In the US, mainstream culture considers pornography to be totally beyond the pale and no company that isn't itself selling sex would allow its brand to be associated with porn.

Discussing nude imagery in an academic setting doesn't have much to do with "pornography".

To the credit of YT, they host plenty of nude videos that aren't, somehow, "porn" (search for "Brazilian wax" for instance...)

So much so that one begins to wonder if the problem with Cornell and such may stem more from the political message than the imagery itself.


This particular lecture series, focused on the erotica magazine On Our Backs (with a mandate to "deliver sexual content" in the lecturer's words) and presented by one of the early editors of that magazine, has quite a bit to do with pornography. That doesn't make it non-academic, but it's not some kind of abstract analysis at a distance either.

Came here to say this. An "AI" system may have a hard time distinguishing a porn image published for arousal, from the same image studied in a lecture.

But how hard can it be to mark Cornell University as an account important enough that every action needs a human review, or two?


Disclaimer - just my theory, but...

The love for tech companies on Wall Street stems from their ability to scale up revenues much more rapidly than their costs. Human review is costly, and at YouTube's scale, every reasonable request for more human attention opens the floodgates for tons of other, equally reasonable requests. Thus, any broad policy change that increases demand for human review can eat into profit margins and stock prices big time, and will therefore be resisted with great ferocity and vigilance.


I'm not sure they'd agree with the premise that it's OK to show pornography as long as your purpose is to educate people about it. Their current policy on the topic (https://support.google.com/youtube/answer/2802002) is that even in an educational context "gratuitous" nudity isn't allowed.

It's gratuitous shock technique. Sir Timothy has blessed us with the revolutionary concept of the "hyperlink" and these days everyone has a "web browser" in their pocket where accessing a "hyperlink" is one tap away. Absolutely trivial to upload the pornography one "hyperlink" away and let people decide for themselves whether they want to look at it or not.

These were literally _lectures_ containing images from pornography because it was _part of the lecture_.

IIUC you're suggesting people -- academic lecturers, no less, not professional streamers -- splice out sections of their lecture, replace them with a content warning, provide a contextual link to the "risky" content, and then hope that people will follow it, jump right back to the original lecture, and then come back? (And that somehow that link won't trigger the same YouTube content flag, b/c obviously they're going to at least broadly categorize outbound links to combat spam.)

This (meaning HN) community regularly nerds out over SEO, landing page optimization, and zero-friction payment flows. Why should we expect educators to be successful delivering complex content in an approachable way if we're also going to require them to carefully slice their content into "acceptable" and "NSFW" segments and deliver them separately, even when the original lecture was linear?


We shouldn't expect them to be! I just don't think it's tremendously important whether educators can effectively deliver complex pornographic lectures on Youtube.

Right, why should YouTube have material that helps people think and build a better world, instead of emptying their pockets to consume junk?

I absolutely suggest that pornography should not be part of "lectures". Being an "academic lecturer" is not an excuse for appalling behavior. Put the pornography behind a signup wall managed by your uni, and then when you upload to YouTube the hyperlinks are broken and there are no NSFW flags to trip over. Why? Because decency, that's why.

Oppressive definitions of "decency" is exactly the problem. Your comment is circular.

> I sincerely doubt the Cornell University Library is going to pivot to pornography.

Probably not, but which rando has the password to that account could be crazy wide-ranging - that's the real key. Professional sports teams still have run-ins with "oh, some rando intern was posting, sorry".

Still worth a try. Maybe have it so the filter "flags" questionable video, but the account is special, so it doesn't remove the video and it gets checked "quickly", with some identifier that this is an educational account.

Granted exceptions are always complicated.


That’s a solution that works for 150-year-old universities, not for little indie publishers. If we don’t have solutions that work for obscure and unpopular ideas - like Susie Bright in the 1980s - what good is the right to free speech?

I wish that regulation was the answer, but back in the day, government censorship was the problem. Which also fell disproportionately on small and weak organizations. In the 80s and 90s, in Canada, there were Supreme Court cases about small LGBT-identified bookstores routinely having their orders seized by Customs on obscenity grounds.

EDIT: I don’t know what I believe any more. We know that censorship falls heavily on on unpopular speech from marginalized people, no matter whether it’s government or private industry. On the other hand, without censorship or regulation, it seems to be true that content providers and adtech will turn about 10-15% of the population insane with far-right conspiracy theories.


There's no single uniform right answer, and the tech community got away with naively believing there might be one for decades, because they actually didn't have the social clout for the median person to care what was going on in online fora.

But the tension between free speech, censorship, information, and misinformation is as old as communication itself, and there's more to be learned on the subjects from history than from the YouTube Terms of Service.


> content providers and adtech will turn about 10-15% of the population insane with far-right conspiracy theories.

I have some of these people in my family tree. Adtech, content providers, censorship, etc are all irrelevant. Those 10-15% are going to find their nutjob content no matter where it lives. The content doesn't make them crazy, they make themselves crazy by filtering their memetic exposure to stuff that confirms their biases. Kicking it off of youtube does not prevent them from finding it, it just confirms their suspicion that there's a conspiracy trying to stop them from hearing "the truth".


> As a society, we may need a platform where educational/government institutions can upload content (with content warnings, if necessary) because Youtube is a terrible fit for content like this.

I understand Cornell using YouTube as a mirror for the content - especially considering the content is probably more "discoverable" there. However, I find it ridiculous that a large, well-funded institution with such a large catalog of public video content can't stand up their own host. Relying on YouTube to host their content is just negligent.


LOL! I have worked for a large, well funded, private non-profit university with many millions in its endowment funds. They are probably one of the biggest cheapskates I've ever worked for when it comes to IT spend. If it doesn't bring in grant money or student money, it is not supported. They'll spend millions on research computing, but balk at buying anything but the cheapest "ergonomic" chairs for the open plan offices that they cram IT staff into. (Professors and academic staff get offices, IT staff don't even get cubes.)

They might pay for uncensored video hosting and exposure on an "edu only" site, but they wouldn't want the expense of hosting their own. The value add would be a content nexus for academic and "open university" lectures that could possibly be monetized by a per view/per topic fee from the users.


> we may need a platform where educational/government institutions can upload content

All these institutions already host their own websites.


But they also want to be visible and a part of other conversations ... for now platforms kinda dictate that...

> As a society, we may need a platform where educational/government institutions can upload content (with content warnings, if necessary) because Youtube is a terrible fit for content like this.

These institutions are big enough to host their own videos - we don't need a new platform, they just need to spin up a server and to pay for their own bandwidth.

If you trust Google to provide your video hosting at a pricetag of £0 this is what you get... You are on their platform, with their rules, where their algorithm is judge, jury and executioner.


> You are on their platform, with their rules, where their algorithm is judge, jury and executioner.

Sounds like a Govt's.


When I see a comment on HN that says someone "just" needs to do something I always wince. There are so many things to consider here, even if universities weren't the bureaucratic labyrinth that they are. Quite apart from the discovery aspect of being on YouTube, where people already on YouTube are much more likely to stumble across Cornell videos when using certain search terms, it wouldn't "just" be spinning up a single server - it would involve all sorts of stuff like encoding videos, bandwidth detection, load balancing etc. etc. I'm not saying this might not be a good thing to do, particularly as a backup if their YouTube channel gets taken down for some reason, but it's almost never as simple as "just" doing something. I'd encourage anyone who catches themselves writing "just" in a sentence to have a deeper think about why a person or organisation isn't doing that.

None of that is difficult, especially for a University, sorry.

What exactly do you mean? Say the university hired you to do the job, I assume you have a budget & headcount ready?

For a fileshare? What exactly is the challenge here? Bandwidth might be, but most colleges probably have a pretty solid fiber connection.

Have you worked in a university before, in a capacity where you had to get something like that signed off?

I've worked for university sized companies. I'm not sure why setting up a fileshare would be difficult. What's to sign off on? Space and a couple VMs? Red tape aside, I don't see the challenge.

I don't say this sarcastically, genuinely, but when you've worked in a university I imagine you'd have a very different view of something like this. I've worked in three - all in the UK, so I suppose that might make a difference - and the bureaucracy can be positively Byzantine. On top of just getting approval for the service, which could involve you in meetings where people split hairs to the point you're having conversations about semantics that would make Wittgenstein weep, you now have to persuade a whole load of different departments to use your internal service over just them sticking their own stuff on YouTube. A university isn't a single, cohesive organisation, it's a collective of disparate organisational units, all with different agendas, and each of those organisational units has its own internal struggles. It gets very political - in the broadest sense.

Well if your reasoning is that you can only ever use YouTube because you can only ever use YouTube then I guess you can only ever use YouTube.

> Have you worked in a university before, in a capacity where you had to get something like that signed off?

Bureaucracy isn’t a compelling excuse - that’s just an easy way to dismiss anything.

Universities have absolutely managed more complex IT change than hosting a few videos on their website.

But if we are going to complain that a university's videos got taken down because of YouTube bureaucracy, while also insisting that universities can't host their own videos because of their own internal bureaucracy, then I have very little pity!


That's even more true for high-profile American universities and their deep pockets.

They likely already have datacenters and high bandwidth internet connections (multiple).

Oh, and they also have staff already on the payroll, and likely capable students that could lend a hand as a part-time job or something.

Not to mention, the role of big corporations could be reduced to CDN only, for example (think of Google Cloud CDN, CloudFront or Cloudflare).

It just takes willingness to actually address the problem.


My thoughts exactly. The systems already probably exist for easily creating and deploying VMs, identity management, etc. In 2022 that's pretty turnkey stuff for a University.

Are there white-label, open source (i.e. host-it-yourself) video hosting services like Vimeo / Mux / YouTube / Netflix? The closest I found was NodeTube, but I think it would need a UX/UI redesign before governments or universities would be ready to deploy it on their own.

Fwiw, universities have long paid for their own white-label YouTube clones from various companies; at a bare minimum Canvas has been around forever. It just wasn't free (as in beer or otherwise).


There's peertube [0], which was built to defend against arbitrary censorship.

> PeerTube aspires to be a decentralized and free/libre alternative to video broadcasting services.

> [...]

> Anyone with a modicum of technical skills can host a PeerTube server, aka an instance. Each instance hosts its users and their videos. In this way, every instance is created, moderated and maintained independently by various administrators.

[0] https://joinpeertube.org/


There's Odysee, based on LBRY, but I don't know if LBRY is here to stay. It's hard to tell with new blockchains like that. I have to imagine running a node is expensive too.

[1]: https://odysee.com/

[2]: https://lbry.com/


My issue is that there was a time when it was a lot closer to "just" than it is now. I agree it's always more complex than we realize. But what frustrates me is that this has almost become a justification for not spending the effort to make it so that it can be "just".

> These institutions are big enough to host their own videos - we don't need a new platform, they just need to spin up a server and to pay for their own bandwidth.

That's why companies emerge - people don't want to acquire specialized skills. And that's why, as a technology manager, I choose to buy rather than build more often than not.


Is there a quick-and-easy "WordPress with video player" setup you can add to a VPS to get something akin to the quality of the YouTube player (ignoring CDNs or anything fancy, just the basic start/stop/play)?

Or in other words, assuming I have a ten minute video I want to make available, how do I go about it rather than "throw on YouTube, add to page"?


It looks like WordPress may be the solution: https://wpengine.com/resources/wordpress-video-embed-how-to/

I’ve never used it, but they make it sound like just a few clicks.


That's embedding (linking) remote video, not hosting.

I think you should be able to get pretty far with the HTML5 <video> tag. You can also sprinkle in something like playerjs or videojs to make it nicer.
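To make that concrete, here is a minimal sketch of the native approach in TypeScript for the browser - no player library at all, just the built-in element and controls. The file path is hypothetical:

    // Native playback: the browser's own <video> element and controls.
    // "/media/lecture-01.mp4" is a placeholder for a self-hosted file.
    const video = document.createElement("video");
    video.src = "/media/lecture-01.mp4";
    video.controls = true;          // built-in play/pause/seek/volume UI
    video.preload = "metadata";     // fetch duration and dimensions up front
    video.style.maxWidth = "100%";
    document.body.appendChild(video);

Seeking works as long as the web server answers HTTP Range requests for the file, which nginx, Apache and most CDNs do for static files; a player library only becomes interesting once you want adaptive streaming, caption menus, analytics and so on.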

Yes! It shocks me how overly complicated people decide hosting a few videos has to be these days.

Doing all the things YouTube does, like transcoding videos, gets complicated quickly.

At the University of Nebraska–Lincoln, we created our own video hosting site, https://mediahub.unl.edu/ (https://github.com/unl/UNL_MediaHub), to self-host videos for cases where avoiding government or K-12 firewalls is desirable, as well as to integrate with rev.com for captioning.

It's really unfortunate that higher education hasn't banded together more often to develop projects of mutual interest.


It might be a nice business for someone to put together "eduTube" - a video platform for educational institutions. It would have an annual fee per organization based on size.

It would:

* allow archiving, or display with a sunset date on certain lectures
* be available to university libraries, lecturers and student projects
* have less stringent content rules, because educational
* allow each institution to dictate what its rules were
* allow institutions to determine who got what type of access

I would do it, but I am kinda soft on the video transcoding skills. Maybe UNL could work on expanding the service, with a fee, to other universities.


I'm not sure how much transcoding skill you need beyond ffmpeg; that's what everyone uses.

Your point about coordination is a great one. In a perfectly rational world, even an economically self-interested one, there are many ways that the thousands of educational institutions around the country could pool their money to build something better than YouTube for their use case. But coordination is hard (I certainly don't have any magic answers), so they each roll the dice with YouTube itself.

I used to work for Vimeo, so I know a bit about this.

Hosting videos is a non-trivial thing. Let's say you upload a video in 4K. Not everyone can watch in 4K, so it needs to be converted into other resolutions. That's transcoding. Then those other files need to be packaged so a player can automatically switch quality depending on the viewer's bandwidth, and that requires a bunch of frontend and backend code to work correctly.
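For anyone curious what the packaging half looks like in practice (a rough sketch of the general shape, not how Vimeo actually does it): once each rendition has been transcoded and segmented, e.g. with ffmpeg's HLS muxer, the "adaptive" part is largely a master playlist that advertises the variants so the player can switch between them. A Node/TypeScript sketch with made-up filenames and bitrates:

    import { writeFileSync } from "node:fs";

    // Renditions are assumed to already exist as segmented HLS variants,
    // produced per resolution beforehand. Numbers here are illustrative.
    const renditions = [
      { playlist: "1080p/index.m3u8", bandwidth: 5_000_000, resolution: "1920x1080" },
      { playlist: "720p/index.m3u8",  bandwidth: 2_800_000, resolution: "1280x720" },
      { playlist: "480p/index.m3u8",  bandwidth: 1_200_000, resolution: "854x480" },
    ];

    // The master playlist is plain text: one EXT-X-STREAM-INF entry per variant.
    // Players measure throughput and hop between variants on the fly.
    const master = [
      "#EXTM3U",
      ...renditions.flatMap(r => [
        `#EXT-X-STREAM-INF:BANDWIDTH=${r.bandwidth},RESOLUTION=${r.resolution}`,
        r.playlist,
      ]),
    ].join("\n") + "\n";

    writeFileSync("master.m3u8", master);

Point something like hls.js (or Safari, which plays HLS natively) at master.m3u8 and the quality switching is handled client-side; the backend work is mostly generating and storing the renditions.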

Instead of building it themselves, Cornell could pay $$$ for a white-labelled third-party service, but they would be on the hook for a large recurring cost, and they wouldn't get the benefit of YouTube's excellent discovery.

We saw this phenomenon at Vimeo, which offers all the transcoding and throttling etc. Back in the day camera reviews were overwhelmingly hosted on Vimeo (mainly because it had a reputation for higher-quality video).

We assumed that paying users would prefer their videos be watched in isolation from others' accounts — this seems obvious, but it means that the average user watches fewer videos per session, which decreases the overall view count.

Over time YouTube mostly caught up on the video quality front, and camera reviewers were drawn to YouTube where there was a much greater chance of their video racking up a high view count than on Vimeo, simply because users watched more videos on YouTube.

Stated cynically, YouTube doesn't respect users' attention spans as much, so it gets more viewers' time than other platforms. That means more people want to put their videos there for people to watch.


I used to work for a very large porn company. We just transcoded and stored everything ourselves. Put a CDN in front of it and all good.

Discovery is the issue here.


Why is the answer other platforms though?

Just use HandBrake to post three versions at three different resolutions and be done with it, or we could all agree to move to a format that easily supports streaming at multiple bitrates.
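In that spirit, a rough sketch of the "three versions" approach, shelling out to ffmpeg from Node/TypeScript (HandBrake would do the same job; the filenames and encoder settings are illustrative, not a tuned recipe):

    import { execFileSync } from "node:child_process";

    // Produce three plain H.264/AAC MP4s at different heights from one source.
    // scale=-2:<height> keeps the aspect ratio and an even width for the encoder.
    const heights = [1080, 720, 480];

    for (const h of heights) {
      execFileSync("ffmpeg", [
        "-y",                        // overwrite existing outputs
        "-i", "lecture-01.mov",      // hypothetical source file
        "-vf", `scale=-2:${h}`,
        "-c:v", "libx264", "-crf", "23", "-preset", "medium",
        "-c:a", "aac", "-b:a", "128k",
        "-movflags", "+faststart",   // metadata up front so playback starts quickly
        `lecture-01-${h}p.mp4`,
      ], { stdio: "inherit" });
    }

After that it's three files on a static host and a small resolution picker (or just three links) on the page.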


Youtube has millions of viewers that could stumble upon your videos. If you host them yourself, you won't get any views.

The main value of Youtube is not its video hosting technology; it's the userbase.


That might matter for people trying to live off those videos, but it is not really important for educational institutions making their lectures available online.

> As a society, we may need a platform where educational/government institutions can upload content (with content warnings, if necessary) because Youtube is a terrible fit for content like this.

It's problematic to only let government or entrenched institutions deliver uncensored content. Powerful groups have a habit of shaping what is "educational" to fit their goals.


The goal of a university is to promote learning. It’s not very nefarious.

Universities in HK will soon eliminate any mention of the period when it was part of the British Empire from their history programmes. That's one example among many. In general, the establishment teaches what is convenient for the establishment to teach.

If this is true (and I assume it is for the sake of argument), then it's pretty obvious that there's an element of coercion at play. This is not happening because the HK universities decided that they should do so. My comment obviously presupposes that a university is acting autonomously or independently, and this assumption largely holds true across space and time: even throughout history and under difficult political regimes, universities have tended to be at the forefront of academic freedom.

Not everyone who promotes learning is a university.

That is true, but irrelevant to my comment.

Now you have to define 'educational institution'.

> As a society, we may need a platform where educational/government institutions can upload content

Some of the museums in Vienna started posting to OnlyFans to avoid running into the rules around nudity on other platforms: https://www.artnews.com/art-news/news/onlyfans-vienna-museum...


This is a bit off topic: the parent article mentions some higher math talks - out of curiosity, can anyone find one? I don't doubt everything got restored, but I can't seem to find any math.

I can't find it either. Looking at an archived copy of their channel[1] I mostly see videos about the humanities. The playlists are also there.

That doesn't necessarily mean they weren't restored. It's possible that the videos are unlisted or uploaded on some other account.

[1]: http://web.archive.org/web/20211010100521/https://www.youtub...


> Youtube does state that explicit materials are allowed for educational content, but if their copyright machines can't even distinguish white noise from copyrighted music then I have very little hope that context like "educational discussion" will ever be recognised automatically.

But why the obsession with recognizing it automatically?

I've long wondered why we never developed a digital system for asserting copyright rights. Like, you register a piece of IP with Google and Google should give you a key. You can then use that key to sign certificates that grant certain people legal use of your content on Google. When I go upload a video, I upload all of the relevant keys that indicate that I've included certain copyright material in my video but I've cleared it up with rights holders ahead of time. Video game companies, for example, could one click hand out keys for free to make sure game streamers could legally stream their games.
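Purely to illustrate the shape of that idea - no platform actually offers this - here is a sketch in TypeScript where the platform issues a signing key to the rights holder at registration, the rights holder signs a small "clearance" naming the work and the uploader, and the platform verifies it at upload time. An HMAC stands in for a real certificate scheme; every name, key and ID format here is hypothetical:

    import { createHmac, timingSafeEqual } from "node:crypto";

    // Hypothetical key handed to the rights holder when the work was registered.
    // A real design would use asymmetric keys so the platform never holds the
    // rights holder's signing secret.
    const rightsHolderKey = "issued-by-platform-at-registration";

    // Rights holder side: sign a clearance for one uploader and one work.
    function issueClearance(workId: string, uploaderId: string): string {
      const claim = `${workId}:${uploaderId}`;
      const sig = createHmac("sha256", rightsHolderKey).update(claim).digest("hex");
      return `${claim}:${sig}`;
    }

    // Platform side: at upload time, recompute and compare the signature.
    function verifyClearance(token: string): boolean {
      const [workId, uploaderId, sig] = token.split(":");
      const expected = createHmac("sha256", rightsHolderKey)
        .update(`${workId}:${uploaderId}`)
        .digest("hex");
      const a = Buffer.from(sig ?? "", "hex");
      const b = Buffer.from(expected, "hex");
      return a.length === b.length && timingSafeEqual(a, b);
    }

    const token = issueClearance("game-ost-0042", "channel-xyz");
    console.log(verifyClearance(token)); // true

A game studio wanting streamers to use its soundtrack could then mint clearances in bulk, and the upload pipeline would skip the automated match for anything covered by a valid one.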

Similarly, Google could outsource its enforcement of "for educational discussion" to individual organizations. Cornell could agree that staff uploading content in violation of Google's TOS becomes a matter for Cornell's own disciplinary processes. In return, Cornell has to place a bond at Google that would be forfeited if they ever broke their agreement. Insurance companies could insure such bonds by evaluating whether Cornell has the right processes in place to prevent TOS-violating content from ever appearing on Cornell's YT channel. In return, Google agrees to turn off all ML content moderation on Cornell's account.

This could easily scale down to smaller channels. If I'm a video essayist channel, I could agree to take a mandatory training course and put down a $X00 bond to have my channel be free from all ML copyright checking up to my first X0,000 followers/X00,000,000 views. If a human judges me to have violated the TOS, the $X00 is there to pay for that human's time. No spam channel would find it worthwhile to go through verification and they would still be subjected to the normal ML checking but people who depend on YT for professional reasons could be more certain of avoiding automated false positives & negatives.


Q: But why the obsession with recognizing it automatically?

A: "500 hours of video are uploaded to YouTube every minute worldwide." https://www.oberlo.com/blog/youtube-statistics A bit of arithmetic gives 30000 days of video uploaded to YouTube per day. Manually reviewing all that content would require an army of people.


Which I take as a given premise at the start. The bulk of the content would be ML reviewed but you could opt out of review entirely and take on the burden of legal responsibility instead of YT.

Thanks for the clarification, I might have misread your OP. Legal responsibility is an interesting angle; I'm somewhat concerned about its practicality in the land of punitive damages, see e.g. https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...

Why automated? Quantity. There aren't enough people to review all YouTube footage, and nobody could afford them anyway. YouTube would need strict limits on content to get by with enough human analysts, and the platform would wither and die.

Why not add a complex system of DRM keys? Because the current system works, there is no de facto standard of copyright ownership (though certificates would work pretty easily; even X.509 certs have an authority, an expiration date, and a revocation system), and there's no interest in setting something like that up.

Currently, copyright holders have all the power and Google doesn't really care about the rights of the people who upload stuff. Why invest money into change? What competitor does it better?


The idea isn't for people to review the content, the idea is to opt out of content review entirely and replace it with your own human review system and take on the legal responsibility that entails.

> As a society, we may need a platform where educational/government institutions can upload content (with content warnings, if necessary) because Youtube is a terrible fit for content like this.

We do: It's called Their Own Websites.

Not that what YouTube did was right, but I sure wouldn't rely on them to be the sole holder and provider of my content.


I remember discovering, and reading, issues of OOB in my college library in the 80s. I wonder if they are archived online now.


Just nitpicking, but...

> As one Google rep emailed me in the 1990s

In the 90s Google was a small and unknown startup that wasn't in the business of censoring anything. I doubt the author would have had any reason to exchange emails with them.


That caught my eye too and made me wonder just how much larger the fish in their tale had gotten over the years.

I noticed this as well and am wondering if maybe the author meant AOL or something?

Though it no longer seems to be offline, this is why you should pick up an extra hard drive or two and use Downie to back up the channels/videos you enjoy.

A consequence of the major tech companies being based out of the US and serving the US as their primary market is they've all adopted US-style prudishness. In many US jurisdictions, public toplessness is perfectly legal. But on the internet it will generally get your account flagged/banned/deleted. Instead of the internet providing a free medium for the exchange of ideas it has exported the US obsession with censoring nudity to other parts of the world.

> As one Google rep emailed me in the 1990s, “If you were a 10-figure net worth institution, we woudn’t bother. But you aren’t.”

Uh-huh. In 1999, Google was a pre-revenue startup with probably 50 employees and a tiny number of early users. In the unlikely case that they'd somehow specifically targeted the author for "un-platforming", how likely is it that the author would have even noticed, let alone gotten a reply about requiring a "10-figure net worth" to be listened to, from a company that itself had only raised low 8 figures of VC?


Yeah, that line really threw me off. My earliest memories of Google are from ~1999, and it was JUST a search engine. At the time AltaVista was still superior anyway. I don't know what the hell the author is talking about there.

Imagined/fabricated grievances to underscore whatever point she is trying to make.

It is unacceptable that accounts are locked or deleted by huge tech cos with no recourse. Sometimes there actually is a political agenda, more often it is just some random, unfortunate AI screw up.

This example sounds a lot more like the latter but the author (a professional victim) is trying very hard to convince us it is the former.


It isn't an "AI screw-up". They posted a 90-minute video containing explicit photographic depictions of sex acts without noting that the content was possibly inappropriate for some ages. Some system correctly took it down. Now it's back up, with the correct age notification in place.

The lesson here is not about AI, it's about how correctly managing your videos is a lot easier than whining on substack about how oppressed you are.


That detail and every other relevant factual aspect of the article is fabricated. That this is the top item on HN shows how widespread and automatic the "canceled by google" meme has become, even when the complainant is clearly just making it up.

It's hard to imagine that this person even got twenty emails from bewildered colleagues, when the view stats on Youtube indicate that this channel gets about 5 views per day.


That certainly impacted the believability of the overall story. Not saying the rest of the article is false, I don't know, but that line is so out of place that I wonder.

Urgent! This just in: humanities academics with not much that’s new to say, decide that the best way to be relevant is to be “provocative”.

Does Youtube automatically delete videos? Does it automatically delete whole channels? At one point, about 4 years ago, they said their algorithm only flagged potentially violative content so that human reviewers could make a decision about it[1]. If that is still the case (and they may well have changed their policies, or been lying the whole time), it seems like all of the responsibility in the case of false positives like this rests on the human reviewers, not on the algorithms.

[1] https://www.aclu.org/blog/privacy-technology/internet-privac...


I think this entire post may be an experiment in misinformation? Nothing in here really adds up.

For one, Google didn't own YouTube in the 90s. In fact, YouTube didn't exist. So that reference to a "Google rep in the 90s" is clearly wrong. But let's assume they meant "the 2010s", which would be the most reasonable time for such an interaction.

Beyond that there doesn't seem to be much evidence that the Cornell account was de-platformed, or if it was, it was not for very long and clearly a quickly rectified error.

The author has some legitimate complaints, but they seem to be surrounded by exaggerations or just lies.


Try Rumble. No censorship, it's the whole reason it exists.

Relying on advertising companies to host stuff because server software is hard to self-administer was a good way to bootstrap things, but the shine is wearing off.

Why is a prestigious university hosting its material on YouTube anyway?

Oh, because it's free and cool, and comes with free distribution too.

Google didn't specifically target her or the university; as she said, the AIs are amoral and the company doesn't really care, because it only cares about profit (or maybe it couldn't care less because of the yottabytes of content published every day?).

She's just being too dramatic, when she could actually learn the rules and use YouTube as a platform to promote her content, i.e. cutting full-length lectures down into smaller, bite-size, fun info videos and directing viewers to their own hosting platform, etc.

And ironically, she's directing people to her paid subscription. We all need a sustainable way to create content and host it. She deserves to be paid for her content, but she shouldn't play the victim card.

And Google and the other platforms are not entirely innocent either; they need to be transparent about content moderation. If you remove someone's content, let everyone know why (on the page, instead of just "channel not found") and how the decision was made (bot or manual review).


Because the point of publishing videos like this is for people to see them, and Youtube is where the audience is. Universities and academics have to compete for mindshare and reputation just like anyone else does. The outcome they are after is not video hosting, it's people watching and sharing their videos.
