I personally have no experience with Knative. In my view, you open up a different can of worms by dragging in k8s. Either do everything natively on your cloud provider or go all-in with k8s.
You don't necessarily HAVE to use K8s directly to take advantage of it. Use something like Knative and you're good to go. Google has Cloud Run, and Azure will probably come up with a similar abstraction on top of Kubernetes soon.
True, we think that for most setups and early explorations K8s is too much of a beast. TBH we are in the process of switching our cloud from K8s to Cloud Run, which is pretty close to Knative but without the ops hassle ;-)
Wanted to add another option to ushakov's comment: Knative (which is actually what Cloud Run is built on).
If you run k8s clusters anywhere, OpenFaaS and Knative are both solid options. OpenFaaS seems better suited for short-running, less compute-intensive things, whereas Knative is a great fit for APIs: it just removes a bunch of the complexity around deployment (like writing a Helm chart, configuring an HPA, etc.); see the sketch below.
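To make the "less YAML" point concrete, a minimal Knative Service looks roughly like this (just a sketch; the name and image are hypothetical). This one object stands in for a Deployment, a Service, request-based autoscaling, and routing, so there's no separate HPA or Ingress to write:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello-api                          # hypothetical name
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/example/hello-api  # hypothetical image
              env:
                - name: TARGET
                  value: "world"

Apply it with kubectl apply -f and Knative Serving creates the underlying revision, route, and autoscaler for you.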
So I have no doubt that SQS is more mature than Knative. In my case I am developing an actual product IN k8s (with around 40 new CRDs), so my goal is to use as much of the platform as possible (including Istio and Knative) and avoid cloud-specific services.
The value of k8s is that you can abstract away the underlying hardware by working only with k8s objects.
On the small scale, no, it's not pay-for-what-you-use. Imagine though you're an enterprise, running a large K8S cluster with lots of workloads inside. By using Knative and scale-to-zero, now you can pack more lightweight workloads into the same cluster resources, because the pods scale down when they're not actively being used. It gives you (as the cluster operator in your company) the ability to run your cluster the same way serverless works in the cloud.
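For illustration, scale-to-zero is mostly just the default behaviour of a Knative Service; the autoscaling annotations below (a sketch, with a hypothetical name and image) make the bounds explicit. When traffic stops, the revision's pods are torn down and that capacity goes back to the cluster:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: sleepy-api                              # hypothetical name
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "0"   # allow scale-to-zero (the default)
            autoscaling.knative.dev/maxScale: "10"  # cap how far a single workload can burst
        spec:
          containers:
            - image: gcr.io/example/sleepy-api      # hypothetical image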
Gotcha. Yeah, I would much rather use a managed k8s service over deploying and maintaining my own. GCP's GKE has been great for us so far and very reasonably priced.
I would not use k8s unless you're convinced it will benefit you in the long run (think about the constant effort that needs to be put in to get things running). k8s is not magic. For a small startup I would just stick with docker-compose or DigitalOcean, or rent a VM on Azure, or, if you really, really need k8s, use a managed k8s.
Thank you for the link to the samples; they are much more informative. But they still don't quite explain it.
The samples show that you can write an app, build a container, write a service config file, and deploy your app to K8s. Yes, we've been able to do that for some time now.
This thing is supposed to provide a bunch of advanced features so that devs don't have to think about this stuff. However, the build repo says this:
"While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that: + Retrieve source code from repositories. + Run multiple sequential jobs against a shared filesystem (for example, Install dependencies, Run unit and integration tests). + Build container images. + Push container images to an image registry, or deploy them to a cluster."
"While today, a Knative build does not provide a complete standalone CI/CD solution, it does however, provide a lower-level building block that was purposefully designed to enable integration and utilization in larger systems."
So as a developer you still have to have all the things you had before, but with extra layers of abstraction now, apparently just to support hybrid cloud installations.
The marketing lingo appeals to developers as if it makes all this simple, when in fact it may be more complicated.
The thing is, k8s has a lot of mindshare. It's far easier to find an answer to some pertinent issue with your k8s setup through a simple search, whereas e.g. Nomad and Swarm have much less mindshare.
You don't have to go full complexity with k8s. You can just use a managed k8s provider and ship a simple Deployment with an anti-affinity policy to distribute pods across VMs on creation (see the sketch below). Sure, there are other solutions [and this one is likely not the cheapest], but they're all vendor-locked; k8s is about as close to cloud-agnostic compute as we're likely to get.
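For reference, the anti-affinity part is only a few lines on the Deployment's pod template (a sketch; the name, labels, and image are made up). The required rule with a hostname topology key tells the scheduler to place each replica on a different node:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                                        # hypothetical name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: web
                  topologyKey: kubernetes.io/hostname  # spread replicas across nodes
          containers:
            - name: web
              image: nginx:1.25                        # stand-in image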
Use k8s for exactly that reason: being federated across cloud providers, or simply being portable between them. You can't do that with vendor-specific solutions, e.g. Elastic Beanstalk.
It really depends on what your needs are and whether your provider's k8s service addresses them. AKS, for example, does work but, I'm told, is missing some fundamental stuff with regard to security, etc. (Caveat: I only know this through others.)
OTOH if a service is available and would work for someone's use case, by all means, use the service. I've set up native k8s clusters just as exercises, and it's no picnic, and I didn't even need to manage those clusters afterwards. Just the build exercises alone were enough to scare me straight.
k8s saves a TON of time if you need to run on more than one machine, you use a managed solution (from AWS/GCP/Azure), and you only use it as a glorified Docker container orchestrator. An Ingress solves how to route HTTP traffic coming into your domain. A PersistentVolume solves how and where to store data. A livenessProbe solves how to restart your crapware when it freezes (sketch below). Just don't go crazy with it unless you actually need the fancy stuff.
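As a small example of that "glorified orchestrator" usage, a liveness probe is just a few lines on the container spec (a sketch; the name, image, and /healthz path are hypothetical). The kubelet restarts the container after repeated probe failures:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: crapware                        # hypothetical name, in the spirit of the comment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: crapware
      template:
        metadata:
          labels:
            app: crapware
        spec:
          containers:
            - name: app
              image: gcr.io/example/app:1.0 # hypothetical image
              ports:
                - containerPort: 8080
              livenessProbe:
                httpGet:
                  path: /healthz            # assumed health endpoint
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 15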