Disclaimer: I work at a company that is both an Akamai and a Fastly client. I worked with Fastly first, then Akamai after. Opinions are my own and not the views of my employer.
There has definitely been a push to catch up, but I wouldn't say you're there yet. The Terraform provider, for instance, is not up to par with the Fastly one. When I attended an Akamai DevOps workshop not so long ago, there didn't seem to be an easy way to use the CLI tools to configure a property idempotently in a CI/CD pipeline. Maybe things are different now.
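For reference, a minimal Fastly service in Terraform looks roughly like this (a sketch from memory; the domain, backend, and values are placeholders, and the resource name has changed across provider versions):

    resource "fastly_service_v1" "example" {
      name = "my-service"

      domain {
        name = "www.example.com"
      }

      backend {
        address = "origin.example.com"
        name    = "origin"
        port    = 443
      }

      force_destroy = true
    }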
Akamai has some advantages for enterprise customers, though, such as easy assignment of cost using CP codes. That's very handy. The Let's Encrypt integration is also very nice.
Fastly shines in its simplicity and documentation. Whatever use case you have, you'll most likely find it covered in their docs or on their blog. Being able to use VCL to configure your caching is, in my opinion, way easier (and has fewer limitations) than the Akamai rule tree, even if you don't have previous Varnish experience. Varnish's finite state machine makes it easy to configure and debug any behavior you want. The Akamai rule tree has caused me quite a few 'WTF' moments: it's difficult to debug when something doesn't behave as expected, and there's a certain amount of 'black magic' in the behaviors sometimes that makes it hard to judge what the outcome will be.
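For example, per-path TTLs in Fastly VCL are just a few lines in the fetch state (a sketch; the paths and TTLs are made up):

    sub vcl_fetch {
      # Cache static assets for a day, everything else briefly
      if (req.url ~ "^/static/") {
        set beresp.ttl = 1d;
      } else {
        set beresp.ttl = 60s;
      }
    }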
The good thing is that the two companies have different features, and each gets pushed to incorporate the features its customers find useful at the other provider.
I remember Velocity and the early articles about it. Thanks for the help. I am looking up AppFabric now.
See, I prefer sticking with one flavor of tools because it's easier for developers to adjust. TBH, MS supplies almost everything from the ground up. I only had to look elsewhere for advanced distributed caching frameworks. In fact, before switching to Amazon EC2, our old datacenter was running MS VMM, and our stack still didn't have anything other than MS software.
Yeah, CI is a good use case. Even for autoscaling, though, I kinda feel like you'd need to be a lot faster to make a huge difference, tbh.
And yeah, Firecracker is pretty sick, but it's also something you can just run yourself on EC2 metal instances, and then you get full control over the kernel and networking too, which is neat.
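For instance, a running firecracker process is driven entirely over its Unix-socket HTTP API (a sketch; the paths are placeholders, and a real boot also needs a rootfs drive configured first):

    # point the microVM at a kernel
    curl --unix-socket /tmp/firecracker.socket -X PUT \
      'http://localhost/boot-source' \
      -H 'Content-Type: application/json' \
      -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0"}'

    # start the instance
    curl --unix-socket /tmp/firecracker.socket -X PUT \
      'http://localhost/actions' \
      -H 'Content-Type: application/json' \
      -d '{"action_type": "InstanceStart"}'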
Creator of Cicada here. Thank you for the feedback! I've mentioned this in a few threads already, but the reason for making a new DSL for writing workflows is that YAML makes it hard/cumbersome to express more complex workflows. Using a programming language, though, gives you more control over how your workflows execute. While there are already plenty of tools that use existing programming languages (e.g., Python/TypeScript) to configure workflows, a custom DSL lets you make some of the more abstract terms, like caching, conditional execution, and permissions, explicit.
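To make that concrete, here's a rough sketch of the shape I mean (simplified pseudo-syntax for illustration, not a verbatim Cicada example):

    on git.push where branch == "main" {
        # caching and conditions are first-class, not nested YAML keys
        cache node_modules if unchanged("package-lock.json")
        run "npm ci"
        run "npm test"
    }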
To your last point, I have experience using CD in production, but not at the scale where builds step over each other and cause issues. I agree that serial builds are important in that case; it's something I will need to look into (conceptually it sounds pretty simple).
We have a lot of larger customers, including enterprises, that manage their advanced configurations quite easily. But if you have specific suggestions, I'd be happy to consider them.
I am very much of the 'just code it' crowd, with a few small allowances that basically add up to a subset of the 12 Factor App philosophy.
That said, the combination of SaaS and CI/CD makes that a bit of a challenge. Everyone wants to launch features darkly. You don't have to change your strategy to do that per se. You can move everything else into code or into a service discovery system, but the 'feature toggles' have to live somewhere, and they are essentially config.
If your CD system were ridiculously fast, you could just push a commit to turn something on and back off again. But I haven't seen many systems that are as fast as pushing a change into, for instance, Consul (even with git2consul, that'd be faster than a typical CI/CD build).
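For example, flipping a toggle in Consul is a single KV write that watchers pick up almost immediately (the flag path here is made up):

    # turn the feature on
    consul kv put features/new-checkout true

    # and back off again
    consul kv put features/new-checkout false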
I've helped a few clients migrate from whatever ad-hoc process they had to CI/CD with Azure DevOps.
Apart from one minor issue with a bug in Node.js timing when running the build server on burstable VMs (which probably has more to do with the AWS setup than with Node), I've not had any issues that I can think of.
I don't have the problem with caching, though, as I always run my own build servers/agents.
The UI has changed a few times over the past year, mostly around the rebrand from VSTS to Azure DevOps. But once learned, it's pretty straightforward and I'm sure I've barely touched the surface of its capabilities.
I, for one, am very impressed.
EDIT: oh, one improvement I can think of would be to accept webhooks rather than polling Bitbucket. But honestly, that's only a delay of a minute or two, so it's not a big deal. You could also easily enough create your own serverless webhook listener that triggers the build, so it's a bit of a non-issue, but I was surprised at the lack.
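If anyone wants that DIY route, the core of the listener is a single call to the Azure DevOps REST API. A sketch in Python (the org, project, definition ID, and PAT are placeholders; you'd wrap this in whatever serverless HTTP trigger you prefer):

    # Hypothetical webhook handler body that queues an Azure DevOps build.
    import base64
    import json
    import os
    import urllib.request

    ORG = "my-org"
    PROJECT = "my-project"
    BUILD_DEFINITION_ID = 42
    PAT = os.environ["AZDO_PAT"]  # personal access token

    def trigger_build():
        url = (f"https://dev.azure.com/{ORG}/{PROJECT}"
               "/_apis/build/builds?api-version=6.0")
        body = json.dumps({"definition": {"id": BUILD_DEFINITION_ID}}).encode()
        req = urllib.request.Request(url, data=body, method="POST")
        req.add_header("Content-Type", "application/json")
        # Azure DevOps PATs use basic auth with an empty username
        token = base64.b64encode(f":{PAT}".encode()).decode()
        req.add_header("Authorization", f"Basic {token}")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)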
Also, Razor from EMC/Puppet. IMO a large-scale Cobbler config takes a bunch of work to get right. The provisioning and host-management space is just starting to catch on to APIs and SOA. You're going to spend lots of time rolling your own APIs, libs, services, and integrations.
I think these are all great points! I have come across Concourse CI, but have not looked too deeply into it yet.
I hadn't thought to use TOML for configuring CI pipelines; I'm curious what that might look like. TOML is indeed very flat, so it would be interesting to see how an equivalent TOML pipeline compares to a YAML one.
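For a rough idea, here's a trivial two-step pipeline side by side (a hypothetical schema, not any real CI system's):

    # YAML version
    steps:
      - name: test
        run: npm test
      - name: build
        run: npm run build

    # TOML equivalent, using an array of tables
    [[steps]]
    name = "test"
    run = "npm test"

    [[steps]]
    name = "build"
    run = "npm run build"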
This is very true indeed. Company mode is the completion framework that rules them all. I have abandoned the older ac-alternatives in favour of this. And it's easy to write new backend source plugins too.
I stopped arguing about whether CF or TF is better. Some people go crazy every time this topic is discussed.
Me, I just hate this whole IaC thing. I sometimes spend weeks delivering a script that deploys an RDS database with blah blah blah. Yeah, it is nice to have all your infrastructure as code, and it would definitely be useful if you were deploying the same structure again and again, or running it frequently.
But if you are deploying a structure only once, do you really need this? Do you have to spend two weeks on a script you will re-run once in six months (only to find that many objects need to be updated before it re-runs without errors)? That seems like a lot of over-engineering to me (I know I will be hated so much for this).
Sometimes when I run TF in production, I start praying that all goes well!
We use Fargate, and what we launch is tightly coupled to our application (background jobs spin down and spin up tasks via the SDK) so for now, we aren't doing anything with IaC, other than CI deployment.
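Concretely, the spin-up side is just a run_task call, roughly like this boto3 sketch (the cluster, task definition, and subnet are placeholders):

    # Rough sketch of spinning up a one-off Fargate task via the SDK
    import boto3

    ecs = boto3.client("ecs")

    ecs.run_task(
        cluster="app-cluster",
        taskDefinition="background-job:1",
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )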
I run Concourse CI for my personal stuff, with the code in Fossil. It works decently enough, and I like that it's somewhat extensible, but I have hit use cases it can't really handle, like building an Alpine Linux cloud image.
As for YAML vs. TOML, I am mostly sick of YAML, and I really like using TOML to configure Caddy, so I thought it might be good enough for CI config. I feel it would be worth exploring, though maybe something slightly more complex or custom would be required.
I think a hosted multi-tenant CI platform appears fairly simple on the surface, but there's actually a lot of complexity under the hood when your product is arbitrary code execution as a service, and you want to offer a smoother user experience than something as general-purpose as a container runtime.
Especially when you throw in a large and generous free tier.
Having set up CI/CD systems by hand a few times previously, I've been happy so far with AWS CodeStar.
It uses CodeCommit, CodeBuild, CodePipeline, and CodeDeploy, with permissions configured via IAM. You can push to EC2, Beanstalk, or Lambda.
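The CodeBuild piece of that is just a buildspec.yml in the repo, something like this (the commands and paths are placeholders):

    version: 0.2

    phases:
      install:
        commands:
          - npm ci
      build:
        commands:
          - npm test
          - npm run build
    artifacts:
      files:
        - '**/*'
      base-directory: dist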
I find Beanstalk to be the sweet spot for ease of use, with time-saving abstractions. You can integrate Jira too, and it enables CloudWatch for monitoring.
It's easy to set up. It sounds like Flexport's solution is more robust, as they've made some great tweaks to their workflow, but for a quick CI/CD pipeline setup, CodeStar is pretty slick.
Personally I find Concourse to be better at CD than CI. We've been using it for a couple of years now to deploy Cloud Foundry which has a complex dependency graph of deployment steps. Concourse pipelines are great at modelling this, and resources like Terraform[1], Docker and S3 save you writing what would otherwise be a hell of a lot of Bash.
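For example, Concourse's passed constraint is what lets you express that dependency graph directly in the pipeline (a sketch; the names and paths are made up):

    resources:
      - name: repo
        type: git
        source: {uri: "https://github.com/example/app.git"}

    jobs:
      - name: test
        plan:
          - get: repo
            trigger: true
          - task: run-tests
            file: repo/ci/test.yml
      - name: deploy
        plan:
          - get: repo
            trigger: true
            passed: [test]   # only versions that passed the test job
          - task: deploy
            file: repo/ci/deploy.yml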
Disclosure: I run a company that sells hosted Concourse.
The DevOps training site katacoda.com will be interesting to watch. They spin up and tear down _so_ many VMs, their cloud bill must be monstrous. Firecracker is much leaner, so they would save a lot of cycles by spinning up Firecracker instead of Kata.
We were looking at RLS, various ABAC-integrated frameworks (casbin, ...), and Zanzibar clones late last year:
* RLS is super appealing. Long-term, the architecture just makes so much more sense than taking on additional maintenance/security/perf/etc. burdens. So over time, I expect it to hollow out how much the others need to do, reducing them to developer experience & tooling (policy analysis, DB log auditing, ...). Short-term, I'd only use it for simple internal projects, because cross-tenant sharing is so useful in so many domains (esp. if growing a business), and for now, RLS seems full of perf/expressivity/etc. footguns. So I wouldn't use it for a SaaS unless the tenants are severely distinct, like payroll, and even then, I'd have a lot of operational questions before jumping in. (There's a minimal sketch of the idea after this list.)
* For the needed flexibility and app-layer controls, we took the middle path with casbin, though other tools are emerging too. Unlike the Zanzibar-style tools that bring another DB + runtime + ..., casbin's system of record is our existing system of record. Using it is more like a regular library call than growing the dumpster fire that is most distributed systems (see the usage sketch after this list). Database backups, maintenance, migrations, etc. are business as usual; no need to introduce more PITAs here, and especially not a vendor-in-the-middle with proprietary API protocols that we're stuck with ~forever as a dependency.
* A separate managed service might make Zanzibar-style OK in some cases. One aspect is ensuring the use case won't suffer the view problem. From there, it just comes down to governance & risk. Auth0 being bought by Okta means we kind of know what it'll look like for a while, and the big cloud providers have growing identity services, which may be fine for folks. Startup-of-the-month owning part of your control plane is scarier to me: what if they get hacked, go out of business, get acquired by EvilCorp, or raise $100M in VC and jack up prices?
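The RLS idea, in its minimal Postgres form, looks like this (the table, column, and setting names are made up):

    -- Restrict every query on the table to the current tenant's rows
    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

    CREATE POLICY tenant_isolation ON orders
      USING (tenant_id = current_setting('app.tenant_id')::int);

    -- The app sets the tenant per connection/transaction:
    -- SET app.tenant_id = '42';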
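And the casbin "regular library call" point, roughly (a pycasbin sketch; the model/policy file names and arguments are placeholders):

    # The policy lives alongside your app; checks are plain library calls
    import casbin

    e = casbin.Enforcer("model.conf", "policy.csv")

    if e.enforce("alice", "report-7", "read"):
        pass  # serve the resource
    else:
        pass  # deny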
There's a lot of innovation to do here. A super-RLS postgres startup is on my list of easily growable ideas :)
On a related note: we're doing a bunch of analytics work on how to look at internal + customer auth logs (viz, anomaly detection, and supervised behavioral AI), so if folks are interested in digging into account takeovers, privilege escalation / access abuse, or fraud in their own logs, I'd love to chat!