My main concern there would be the management and orchestration of the releases, since it's detached from the codebase that is changing. It sounds very performant though. Do you have any examples of this setup?
Does anyone have experience using configuration management software in a heterogeneous environment? For example, I've seen large environments running Windows 2008/2008R2/2012/2012R2 alongside various flavors and versions of Linux, including Ubuntu Server, CentOS, SUSE, etc. What's pretty about it? What's ugly?
I understand consolidation and standardization of operating systems is usually the best state to be in, but in a lot of larger companies running legacy software it's not economically feasible to do.
Are there any organizations out there that actually make multiple versions of their product and deploy it together, in real time, all the while being completely agnostic to the end user?
It seems like it will be very hard to justify the immense costs for this.
I wonder if any company will see an opening to create an easy process that ties together all those programs, which I assume have to be set up individually. I'm sure more teams would use that method of deployment if it were easier to configure.
I've heard that unlike most companies, which cobble together software dev systems by mashing up disparate products (Confluence, Slack, Jira, GitHub, Terraform, etc.), open-source pieces (ELK, Prometheus, Argo CD, etc.), and software-defined infra (cloud providers), both Google and Meta have well-integrated, bespoke infra for all aspects of software development. Is there a book or blog or some reference to how this works? Is it really a thing of beauty and efficiency, or do you not really miss it once you leave?
All that infra doesn’t integrate itself. Everywhere I’ve worked that had this kind of stack employed at least one, if not a whole team of, DevOps people to maintain it all, full time, year round. Automating a database backup and testing that it works takes half a day, unless you’re doing something weird.
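To make the "backup plus testing that it works" point concrete, here is a minimal sketch in Python using SQLite as a stand-in for a real database; the table name and the row-count check are illustrative, not a production verification strategy:

```python
import sqlite3

def backup_and_verify(src: sqlite3.Connection) -> bool:
    """Back up a database, then prove the copy restores by comparing row counts."""
    # Back up into a fresh in-memory database (stand-in for a real backup target).
    dst = sqlite3.connect(":memory:")
    src.backup(dst)

    # Verification step: a backup is only "done" once a restore has been tested.
    src_count = src.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
    dst_count = dst.execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
    return src_count == dst_count

# Tiny demo database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO jobs (name) VALUES (?)",
               [("build",), ("test",), ("deploy",)])
db.commit()

print(backup_and_verify(db))  # True when the restored copy matches
```

Even in this toy form, most of the code is the verification, which is the part teams tend to skip.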
Continuous integration can -- and does -- happen with local development teams using their own equipment. In general, those are typically pretty trusting environments where Firecracker style isolation is not as critical.
Ericsson has another philosophy, where they connect multiple repos at CI/CD time. They have open-sourced a tool for that called Eiffel [1]. There is also a book [2] written by the author of Eiffel that is quite good. One of his arguments is that when, as an enterprise, you buy a company with a big, mature code base, you can’t just move it into another common repo with all the custom tooling (it's also very anti-agile to force everyone into the same suit). A big difference, though, might be that Ericsson deals with a lot of custom hardware for telecom networks, so their CI tooling might be more complex than Google’s. Also, continuous deployment is not really an option for them. Then it is better to just have each piece send out events about what’s happening (builds, test runs, etc.) and let event listeners in other parts of the CI/CD pipeline work out what to do.
(I have worked for Ericsson previously for 7 years but that was before Eiffel)
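The event-driven approach described above can be sketched as a minimal pub/sub dispatcher, where each pipeline stage emits an event and downstream listeners decide what to do. The event type names here are loosely modeled on Eiffel's vocabulary but are not its actual schema, and a real deployment would use a message broker rather than an in-process bus:

```python
from collections import defaultdict

# Minimal in-process event bus; a real Eiffel setup would use a message broker.
listeners = defaultdict(list)

def subscribe(event_type, handler):
    listeners[event_type].append(handler)

def publish(event_type, data):
    for handler in listeners[event_type]:
        handler(data)

actions = []

# A test-runner component reacts to new artifacts, without the build knowing about it.
subscribe("ArtifactCreated",
          lambda e: actions.append(f"run tests on {e['artifact']}"))

# A deployment gate listens for test results instead of being wired into the build.
subscribe("TestSuiteFinished",
          lambda e: actions.append("promote" if e["verdict"] == "PASSED" else "hold"))

publish("ArtifactCreated", {"artifact": "app-1.4.2"})
publish("TestSuiteFinished", {"verdict": "PASSED"})
print(actions)  # ['run tests on app-1.4.2', 'promote']
```

The point of the pattern is that the build, test, and deploy pieces never call each other directly, which is what lets an acquired code base keep its own tooling and still participate in the pipeline.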
Well, you just got a user. I love the concept of Temporal, but I can't justify the infrastructure overhead needed to make it work to the higher-ups... And the cloud offering is a bit expensive for small companies.
This is a situation I'll be facing soon - a few years back the "parent" company decided to merge 10 separate tech departments into one - resulting in 10 different ways of doing something, 10 different release processes, 10 different branching strategies, etc.
My role is to normalise the SDLC (from inception to sunsetting), whether it's IaC (TF, Ansible), Java, .NET or JavaScript - on Linux (DEB/RPM), Windows and containers.
Most likely I'll step on some toes, although my argument is I'll step on everyone's equally, for the benefit of all.
We tried Istio, but our DevOps team (8 people) said they don't have the capacity to manage that complexity.
We've been rolling with Linkerd ever since; still a joy.
Without giving away the secret sauce, could you talk a little about how you unlock growth by decoupling from the incredibly time-intensive task of custom integration with each customer's (financial institution's) backend? Or maybe you don't, and it just is what it is. I imagine you have some internal model you code to that is multi-tenant in some fashion, but you are still bound by integration time with each client, and then each client needs to be constantly monitored for change management basically forever. Like, you need to be inserted in their change management process so they don't break your product. Eeep!
None of what you're suggesting is technically challenging to set up for a small org; scale it up to tens of thousands of users for some companies, or more...
Plus, it's almost NEVER up to the tech people to deploy whatever they want. Someone at the C level got a sales pitch for a given product, liked it (may even have gotten a commission or a kickback), and told you to install, set up, and support the given product(s). Or, in my situation, Infosec told the network guys to implement X and it breaks everything at the user level...
And your end users will hate you if you reinvent the wheel for every internal product/process... Things like Jira put a ton of that kind of thing into one interface, under one user, preferably with single sign-on.
Your v1.0 of a given product will never be as easy to use, or as well thought out, as v10 of another vendor's product doing the same thing.
And those companies take care of regulations and scalability, and can even help with audit findings; not to mention they give you someone to blame when things hit the fan.
We have a lot of larger use cases, including enterprise customers, that manage their advanced configurations quite easily. But if you have specific suggestions, I'd be happy to consider them.
Worth mentioning: another strategy for customers is to use an abstraction layer. Good ones include jclouds and libcloud. I still take the point about focus/quality.
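The abstraction-layer idea in miniature: application code targets one provider-neutral interface, and each cloud gets its own driver behind it. This is a hand-rolled sketch of the pattern, not libcloud's actual API:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Provider-neutral interface; application code only ever sees this."""
    @abstractmethod
    def create_server(self, name: str) -> str: ...

class AwsDriver(CloudDriver):
    def create_server(self, name: str) -> str:
        # A real driver would call the EC2 API here.
        return f"aws:{name}"

class GcpDriver(CloudDriver):
    def create_server(self, name: str) -> str:
        # A real driver would call the Compute Engine API here.
        return f"gcp:{name}"

def provision(driver: CloudDriver, name: str) -> str:
    # Application logic is identical regardless of provider.
    return driver.create_server(name)

print(provision(AwsDriver(), "web-1"))  # aws:web-1
print(provision(GcpDriver(), "web-1"))  # gcp:web-1
```

The trade-off alluded to above is real: the shared interface can only expose the lowest common denominator of the providers' features, which is exactly the focus/quality concern.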
We do have a good sized "fusion" team focusing on product integration these days. Their mandate is to not only smooth out the integration process, but also to build some pretty kick-ass features on top of it. For example, between Bitbucket and JIRA these days you can not only view the code changes that relate to issues (and vice versa), but you can also transition issues through your workflow automatically based on the state of your commits, branches, & pull requests. We're also working on unifying the experience across the products. This started with the Atlassian Design Guidelines[1], but is continuing with product improvements like making the concept of a "project" consistent across Bitbucket Cloud, Bitbucket Server & JIRA (and making it easier to map them together). Integration and distributed systems are traditionally a tough problem in software, but we have some of our best engineers on it, so you can expect the cross-product experience to keep getting better!
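The commit-driven workflow transitions mentioned above boil down to parsing issue keys and commands out of commit messages. The `#resolve`-style syntax below is modeled on Atlassian's smart-commit convention, but the regexes and command handling here are an illustrative sketch, not the real implementation:

```python
import re

# JIRA-style issue keys like "PROJ-42" and "#command" tokens.
ISSUE_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")
COMMAND_RE = re.compile(r"#(\w+)")

def parse_commit(message: str) -> dict:
    """Extract referenced issue keys and workflow commands from a commit message."""
    return {
        "issues": ISSUE_RE.findall(message),
        "commands": COMMAND_RE.findall(message),
    }

result = parse_commit("PROJ-42 #resolve fix race in build queue")
print(result)  # {'issues': ['PROJ-42'], 'commands': ['resolve']}
```

A CI hook running something like this on each push is what lets an issue move through the workflow automatically when the related commit lands.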
How do you handle different software versions, though? Do you have one center running a version or two behind to fail over to?