
Everyone is saying: just work on your local repo. But GitHub is way more than just git. There's bug tracking, code review, continuous integration, etc etc.
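
To make that concrete: a local clone covers the git history, but the issues, review threads, and CI configuration live outside the repository and need their own export. Here's a rough Python sketch of what backing up issues might look like, using GitHub's public REST issues endpoint (the repo name and token variable are placeholders, and the requests library is assumed):

    import json
    import os

    import requests  # third-party; assumed available

    # Placeholder repo and token -- substitute your own.
    REPO = "your-org/your-repo"
    TOKEN = os.environ.get("GITHUB_TOKEN")  # optional for public repos

    def fetch_issues(repo, token=None):
        """Page through the REST issues endpoint and return every issue as a dict."""
        headers = {"Accept": "application/vnd.github+json"}
        if token:
            headers["Authorization"] = f"token {token}"
        issues, page = [], 1
        while True:
            resp = requests.get(
                f"https://api.github.com/repos/{repo}/issues",
                headers=headers,
                params={"state": "all", "per_page": 100, "page": page},
                timeout=30,
            )
            resp.raise_for_status()
            batch = resp.json()
            if not batch:
                break
            issues.extend(batch)  # note: this endpoint also returns pull requests
            page += 1
        return issues

    if __name__ == "__main__":
        data = fetch_issues(REPO, TOKEN)
        with open("issues-backup.json", "w") as f:
            json.dump(data, f, indent=2)
        print(f"saved {len(data)} issues/PRs to issues-backup.json")

Even a simple dump like this only gets you the data, not the workflow around it, which is the real lock-in.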

Making your organisation too dependent on a remote service can indeed be a scary prospect and I'm not sure what GitHub offers to mitigate this.




They offer a local instance of GitHub actually.

Which I can't imagine being any more reliable than hosted GitHub.

But hey, I guess there's comfort in knowing that when it goes down, it's our fault!


Reliability can always be improved... It just costs more.

The argument is with a self-hosted solution you can choose your "danger times".

If you know you have a major deliverable coming up, you can choose not to touch your git server until it's at a somewhat safe moment to do so.


While it definitely sounds like a good idea, every single time I've seen it applied, the reason was the opposite. It was essentially "we've got so little control over our stuff that we'd rather not touch it". That mostly goes along with a lack of backups, a lack of change management, and a lack of an automated rebuild procedure.

If you have "danger times", then any unplanned disruption in that time will hurt so much more.


I agree with you, and I'm generally in favor of having two or more ways of doing critical things if possible; I'm just pointing out a response that can have some merit.

Redundancy provides availability more than reliability.

The odds of GitHub and your local copy/instance going down simultaneously are very low, so availability is high.

Reliability is how often GitHub or your local instance goes down.
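
A back-of-the-envelope version of that claim, assuming the two copies fail independently and that either one can actually stand in for the other (an assumption the reply below pushes back on); the numbers are made up for illustration:

    # Illustrative numbers only: assume each copy is independently up 99.5% of the time.
    availability_hosted = 0.995
    availability_local = 0.995

    # Chance that both are unavailable at the same moment (independence assumed).
    p_both_down = (1 - availability_hosted) * (1 - availability_local)

    combined_availability = 1 - p_both_down
    print(f"either copy usable:  {combined_availability:.4%}")  # ~99.9975%
    print(f"one copy on its own: {availability_hosted:.2%}")    # reliability of each copy is unchanged

The combined availability goes up, but each individual system still fails just as often, which is the reliability/availability distinction above.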


Your local instance of github and github.com are not redundant / interchangeable. If either of those goes down and you rely on it, the other one is not gonna help you much, so availability is not improved by using a local instance.

There's no HA for GitHub Enterprise. Single instance with hot failover.

Fossil [0] is a great alternative in some cases. It includes a bug tracker and wiki, and it's much simpler than git.

Not that I'd suggest using it for every scenario, but being aware of alternatives is always a good thing.

[0] http://fossil-scm.org/index.html/doc/trunk/www/index.wiki


> Making your organisation too dependent on a remote service can indeed be a scary prospect

At every company I've worked for, internal services have been less reliable than github. Certainly way less reliable than gmail.

I get that it's scary, in that it feels like you're giving up control over something important to your business. But I'll posit that you never actually had control, only the illusion of control.


> But I'll posit that you never actually had control, only the illusion of control.

That's a weird way of phrasing it. When you run your own services and they break you have total control and do have the power to fix it.

When you buy SaaS you are relying on someone else. You may very well have more reliability and uptime but you are nonetheless giving up control.


> When you run your own services and they break you have total control and do have the power to fix it.

Sure, but most of the time you just reboot the server or some other temporary hack and kick the problem down the road.

The power and skill to fix isn't worth much if you don't have the time.


That's exactly the point. Sure, you have "control", but what good is all that control if it takes you 4 hours to track down the source of a problem and fix it (for example)? GitHub has hundreds of engineers (with heavily specialized knowledge that you don't have, btw) working to fix any problems. I'd bet on them over myself and maybe a handful of engineers every time ... and I've run my own subversion/git servers before.

Some folks just hate the feeling of not knowing what's happening and how long it's going to take to fix, vs. having direct access to work on problems themselves, even though that's not necessarily "better" in any sense of the word ... hence the illusion.


100s of engineers working on an outage is a mess. It is more likely you have a handful of them on a given outage.

Yes, but problems are more likely to occur when engineers alter the system in some way. Now, if you are in control, you can decide on windows of time in which you don't want anything bad to happen (which of course only reduces the probability, it's not an absolute guarantee, but the point still stands).

Bad analogy time: It's like flying. You don't have control, but you are better off without it (unless you are a pilot, of course).

Your post gets to the false tradeoff that turned "devops" from a push for better interdisciplinary collaboration into a "2 jobs, 1 paycheck" role.

Software teams are often pathologically unable to create a deliverable that is scrutable and manageable by an operations audience. Building a product that can stand on its own two feet without constant minding is work.

For all the testing dogma that flies around, continuous delivery has resulted in systems that are more brittle than ever. If someone with all the institutional knowledge is always there to catch the system when it fails, why bother making it easy to investigate failures? Why bother with analysis and building in fault tolerance when someone can worry about a long-term solution for that failure mode when it causes them to get paged at 3 am?

So it becomes easy to say minders are necessary, and they must be developers. Business incentives mean that answer isn't always scrutinized as heavily as it should be, because it means spreading maintenance costs over time instead of an upfront investment in resilience and maintainability. Refusing to provide a toolset and manual for maintenance means some nobody with access to google can't fix 99% of the problems that could occur. That way the magic black box creators can ensure they're the ones getting paid to do it.

We don't have or need Windows engineers, kernel engineers, or Cisco engineers on call. Nor do we have nginx, postgres, cpython, exim, apache, php, mysql, Active Directory, Exchange, or Office engineers on call. We use weird enterprise software from companies that have gone out of business, and we don't have high-priority support contracts with many of the rest.

Software that needs minding from the developers is just bad software. That's technical debt they took on to get the product out the door.


> When you run your own services and they break you have total control and do have the power to fix it.

If I had total control of a service, I would make it have 100% uptime. Wouldn't you? The fact that there exists no service with 100% uptime indicates to me that nobody has total control of their services.

Similarly wrt having the power to fix my services. If I had unrestricted power to fix things when they broke, then I would use that power to fix all breakages immediately. Since nobody seems to be able to do that, I conclude that nobody actually has unrestricted power to fix breakages in their systems.

Or do you mean to say, I have some limited ability to control and fix the systems I run? I would agree with that. That's my whole point.


Internal services have always been much less reliable for me as well. The difference is that they're generally unreliable outside of office hours, when I don't care; hosted services often go down in the middle of my workday, which coincidentally is the middle of the night in America.

Yeah. And one reason may be that blaming an outside supplier like GitHub is easy, while blaming a colleague inside the company is more complicated.

They offer GitHub Enterprise, which you can set up internally if you want.
