Sure! I like people. People make work fun. People who disagree with you (or whom you disagree with) make life fun. Most of us spend the majority of our waking hours working, and that work makes it easy to spark conversations with others that would otherwise never happen. Not having that for 8+ hours a day would make me feel incredibly lonely after a while.
(I love working at the cafe and shared work spaces, but that isn’t the same.)
I work where I do now largely because my friends do. I mean, if you're going to spend 60 hours a week in close-ish quarters with people, it'd be nice if you enjoyed it as much as possible.
Granted, there is an extreme, and there are discrimination laws, but I do want to spend time with people I get along with.
Every build system is like Make, but friendlier for its particular language (IIRC Make was originally for compiling C and C++). Make just happened to become generic enough to build damn near anything and also get bundled into most Linux distros.
I think the author is arguing that having to install a shit ton of dependencies to use some other Make-like build system is garbage. That’s true in some cases. But I wouldn’t want to use a Makefile for packaging a Node project; npm is great for that and understands how Node works.
My first thought is "which rich family member can they coast off of when shit gets really bad" or "where's the rest of the money you haven't touched yet?"
So many of these "I've been funemployed for 5 years" articles NEVER talk about the hidden asterisk that, well, actually, they have plenty of money and are just fine; they only look broke by conventional measures.
Routers supplied by AT&T here in the US for their gigabit fiber service do RADIUS authentication with the carrier gateway using certs built into the device. An older version of this router had known vulnerabilities that made extracting those certs possible, but it has since been patched and those certs have been invalidated.
Vimeo has a $1B market cap and was looking at $100M in revenue this year. [1]
Sure it's not YouTube (yet), but it's nothing to sneeze at either.
> many video producers are extremely unhappy using youtube
YouTube basically created an entirely new market around video production. If it weren't for YouTube (or a similarly popular service), I'd hazard a guess that many of those video production jobs simply wouldn't exist.
It's hard to complain about a market with a too-big-player when the player basically created the entire market.
> I don't think that was an accurate prediction of the past 10 years. Hollywood lost the copyright wars: even the most locked down TV dongle will happily play pirated movies with the right apps
Did they?
Remember how "music lockers" like Google Play Music and the original incarnation of Amazon Music used to be a thing? Yeah, Big Music killed that. The closest thing we have now is music-matching services that will happily nuke your music when the publisher delists those compositions for whatever reason.
Plex and the like make organizing and consuming pirated movies easy, but finding the source of those movies is still not trivial, and people _still_ get scary emails from their ISPs if the ISP detects that enough of their traffic is being used for this purpose.
The Netflix app no longer allows AirPlay or Bluetooth streaming (it used to support both).
It's still very much a cat and mouse game, and the cat is getting smarter every day.
I'm guessing either the author had a very strong personal reason (i.e. really wanted out), or the business was not particularly sustainable/growth-favorable for one reason or another.
Much warmer weather. Very very VERY little snow. Amazing roads (I love to drive). More value for my rent money. "No state income tax." (This benefit will go away once I start paying property taxes if we ever buy a home here.) Lots of opportunity for learning how to start a business. (Lots of businesses here; many are non-tech, which I am fine with since tech companies are kind of overrated and there are plenty of big markets to tackle outside of tech.) Wife is from nearby and some of her family is here.
Most people have heard of blockchain (through crypto), but non-techies are _actually using_ ChatGPT for daily tasks. Departments of education and universities moved fast to integrate ChatGPT-detection tools into their programs.
Yes, monorepos can be slow to browse through if the VCS isn’t configured to handle the size (sparse checkouts aren’t the default with Git; that alone can make a massive difference when your repo is massive). Polyrepos can be just as slow, however; what’s worse is that there are more of them.
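For reference, the sparse checkouts I’m talking about look something like this (a minimal sketch — the repo and directory names are made up, and a tiny throwaway repo stands in for a real monorepo so the commands run anywhere):

```shell
set -e
# Stand-in for a large monorepo; in real life you'd clone one instead.
dir=$(mktemp -d)
git init -q "$dir/big-monorepo"
cd "$dir/big-monorepo"
git config user.email dev@example.com
git config user.name dev
mkdir -p services/payments services/search libs/common
echo 'charge()' > services/payments/main.py
echo 'query()'  > services/search/main.py
echo 'util()'   > libs/common/util.py
git add . && git commit -qm 'initial import'

# Only materialize the directories this team actually works on;
# everything else disappears from the working tree but stays in history.
git sparse-checkout set services/payments libs/common
ls services   # payments only; search is no longer checked out
```

With `--filter=blob:none` on the initial clone, Git also skips downloading file contents outside the sparse set, which is where the big win comes from on huge repos.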
I remember working with a repo that was over 20 GB, mostly from videos (we didn’t know that initially). Pulling that repo took _forever_. Nobody on that team cared because they almost never did a fresh pull and accounted for the time it took their CI/CD to do so in their reports. If it were a monorepo, MANY teams would’ve felt that pain more immediately.
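We eventually found the videos with standard Git plumbing. A rough sketch — the throwaway repo and file names below are stand-ins; run the pipeline from inside any real clone:

```shell
set -e
# Stand-in repo with one "accidental" large binary committed.
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email dev@example.com
git config user.name dev
head -c 1048576 /dev/zero > promo-video.mp4   # 1 MiB stand-in for a real video
echo 'print("hi")' > app.py
git add . && git commit -qm 'import'

# The actual recipe: walk every object in history, keep blobs,
# sort by size descending. The videos float straight to the top.
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' |
  awk '$1 == "blob" {print $2 "\t" $3}' |
  sort -rn |
  head -5
```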
Yes, monorepos require some tooling to prevent a gazillion artifacts from being deployed at once (and to specify what’s related to what if code lives across different folders). So do polyrepos! I’ve configured a few Jenkins jobs for my clients to dynamically pull different co-dependent Git repositories at build time. It’s a pain! Especially when multiple credentials are involved! Then there’s the whole “We have a gazillion repos and 20% of them are junk” problem, which requires automated reaping; that’s also a harder problem than it seems.
Same with refactors. Refactors across polyrepos are just as much of a pain because you’re now subject to n build and review processes/pull requests, and seeing the entire diff is hard/impossible. This introduces mistakes. If anything, refactors in polyrepos are more of an event than they are for monorepos.
While monorepos have their problems, I will continue to advocate for them because the ability to see what’s going on in one place and for any developer to propose changes to any part of the code (theoretically) is massively beneficial, ESPECIALLY for complex business domains like healthcare or financial services. Plus, you will have a RelEng/BuildEng team when your codebase and engineering org get large enough; why add more complexity by creating a gazillion repos that are possibly related to each other?
(The large engineering organization without a team focused on tools and builds doesn’t exist. If yours doesn’t have one, that means some (or many) developers are spending way more time spinning their wheels on build systems than they should be.)
The real reason why monorepos don’t happen in the aforementioned domains is because there’s no easy way to allow them and pass regulatory audits.
Many regulatory bodies require hard boundaries enforced by role-based access control, especially for code that deals with personally identifiable information or code between two or more domains that have a Chinese Wall between them. “All of my developers can check out the entire codebase” is an easy way to get fined hard, and polyrepos are much easier to restrict access to than folders in a monorepo are (one advantage not mentioned in the article). While you _can_ restrict access to directories within a single repo, doing so is not straightforward, and most organizations would rather not spend the engineering effort.
I would like to think that Google and Facebook have gotten away with it because they implemented a monorepo from the very beginning and the engineering involved in splitting it up is much more involved than engineering around it.
That said, I continue to advocate for them because discoverability is good and it builds a better engineering culture in the end. I would rather hit those walls and make just-in-time exceptions for them than assume that the walls are there and create a worse development experience without exploring better alternatives.
> is there any real difference between checking out a portion of the tree via a VFS or checking out multiple repositories? There is no difference.
How big is your monorepo? Assume each line of code is a full 80 characters, stored as ASCII/UTF-8. That’s 67 million lines of code in 5 GB. I can fit five of those on a Blu-ray.
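The arithmetic, for the skeptical (a quick sketch; 80 bytes per line is the worst-case assumption above):

```shell
# 5 GiB of source at a worst-case 80 ASCII bytes per line:
bytes=$((5 * 1024 * 1024 * 1024))
lines=$((bytes / 80))
echo "$lines"   # 67108864, i.e. ~67 million lines
```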
> The end result is that the realities of build/deploy management at scale are largely identical whether using a monorepo or polyrepo.
True.
> It might be deployed over a period of hours, days, or months. Thus, modern developers must think about backwards compatibility in the wild.
Depends entirely on the application. Lots of changes are deployed within short periods of time with low compatibility requirements.
> Downside 1: Tight coupling
Monorepos do often have tightly coupled software. Polyrepos also often have tightly coupled software. Polyrepos look more decoupled, but pragmatically I can't say I've noticed much of a difference.
> Downside 2: VCS scalability
I've also heard Twitter engineers complain about the VCS. But what is the scope of the author's discussion? 1,000-engineer orgs, or 20-engineer orgs? Those are vastly different levels of engineering collaboration. I assume the article was not written to cover both of those. Or was it?
---
Ultimately, I think the author implicitly assumed a universe of discourse of gigantic repos with hundreds and hundreds of daily contributors.
On the spectrum of monorepo vs. polyrepo architectures, that is a very extreme point. For example, last I knew, Uber had more repos than it did engineers. And I don't assume that "polyrepo" always means multiple repos per engineer.
I think this viewpoint is really interesting, especially when one considers how massively incentivized this addiction is (by way of high-paying jobs with generally-great benefits and working environments).