Hacker News

I recently started playing around with Google Cloud Run and am running some Python/Flask/gunicorn code in a Docker container on the platform.

I noticed in the logs that I'm getting a lot of CRITICAL WORKER TIMEOUT errors from gunicorn, and I'm wondering whether the platform has anything to do with it.
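One commonly suggested mitigation for those timeouts is to let Cloud Run own the request timeout rather than gunicorn's default 30-second sync-worker timeout. A sketch of a `gunicorn.conf.py` along those lines (the values here are assumptions and a starting point, not a prescription):

```python
# gunicorn.conf.py (sketch): Cloud Run enforces its own request timeout,
# and CPU throttling between requests can trip gunicorn's default 30s
# worker timeout, so the gunicorn-level timeout is disabled here.
import os

bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"  # Cloud Run injects PORT
workers = 1   # one worker per container instance; scale via instances
threads = 8   # handle concurrent requests within one instance
timeout = 0   # 0 disables gunicorn's worker timeout entirely
```

Started as `gunicorn -c gunicorn.conf.py app:app` in the container's entrypoint.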




Congratulations on launching! Would it make sense to think of this like Cloudflare Workers but for any application running in a Docker container? Are there any restrictions on outbound connections?

As a side note, Wasmer offers an Edge product that has none of the drawbacks mentioned here for running Python in Cloudflare Workers, while providing incredibly fast cold-start times:

https://wasmer.io/templates?language=python


This is neat. How does it compare to Cloudflare Workers?

I’ve been thinking about building a tool like this for fun. Does the code run inside a Docker container? I’ve been looking at Firecracker VMs for sandboxing, which seem pretty cool.


One thing this doesn't talk about is Cloudflare Workers. Rather than running a separate container for each function we use V8 isolates directly, meaning the cold start time is under 5ms.

Cloudflare Workers, both Bundled and Unbound, run on V8 isolates. While they’re performant, you can only use JavaScript, and only JS that works with Cloudflare’s V8 version.

With Google Cloud Run it’s effectively running your container. You can use whatever language you want, whatever apt dependencies you need, whatever Linux flavor (Ubuntu/Alpine/BusyBox), etc.

In that sense Cloud Run is the more generic serverless platform. The devs can indeed say “it works on my machine, so let’s ship my machine, scale it up as needed, and hibernate when not in use.”
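A minimal sketch of that “ship my machine” workflow, using Flask as in the original question: the only real contract is that the container listens on the port the platform injects via the PORT environment variable.

```python
# Minimal Flask app in the Cloud Run style: bind to whatever port the
# platform provides via PORT, defaulting to 8080 for local runs.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello from my container\n"

if __name__ == "__main__":
    # Cloud Run sets PORT; any container honoring it runs unchanged there.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```

The same image then runs locally with `docker run -p 8080:8080` or on the platform with no code changes.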


If anyone else gets frustrated with Google Cloud Functions, I'd highly recommend trying Cloudflare Workers. The start time and developer experience are amazing.

(Note I'm not affiliated with Cloudflare in any way. Just a satisfied developer.)


I wish Cloudflare supported Python for Workers/Functions. There is some weird Python-to-JavaScript translation layer that seems extremely bloated. I just want to run Python natively on the backend. One day we’ll be able to deploy Flask apps through Cloudflare edge nodes; that would be cool.

I can really only speak to Cloudflare Workers in latency-sensitive, user-facing settings, but from my experience their cold start times are insanely low, around 5-10ms. They use V8 isolates (which can also run WASM bytecode), so there is very little runtime overhead. So in that case there is an actual benefit to being a short hop away, but otherwise I think you’re right.

I've also used some AWS Lambdas and GCP Cloud Functions in background data-processing settings with under 1M invocations/month, where it's cheaper than running a tiny instance 24/7. Latency doesn't matter as much there, but I was seeing cold starts of ~2 seconds, while warm invocations of my Node and Python code responded within 30-60ms in most cases. Once the container is up and running it's reused for follow-on invocations, but it eventually (and opaquely) times out, and the next invocation will be cold. People using serverless runtimes have historically done hacky things to ensure they always have a running container, like invoking the function from a cron job. At that point it seems a bit silly not to just run a small VM/container.
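A back-of-envelope version of that cost comparison. All prices below are illustrative assumptions (not anyone's current list prices), just to show why low-volume background workloads tend to favor pay-per-use:

```python
# Rough monthly cost: ~1M short invocations on a pay-per-use runtime
# vs. a small VM running 24/7. Every price here is an assumption.
invocations = 1_000_000
price_per_million_requests = 0.40        # assumed request price, USD
gb_seconds_per_invocation = 0.125 * 0.1  # 128 MB held for 100 ms
price_per_gb_second = 0.0000166667       # assumed compute price, USD

serverless_cost = (invocations / 1_000_000) * price_per_million_requests \
    + invocations * gb_seconds_per_invocation * price_per_gb_second

tiny_vm_cost = 0.005 * 24 * 30           # assumed $0.005/hr instance, USD

print(f"serverless ~${serverless_cost:.2f}/mo vs vm ~${tiny_vm_cost:.2f}/mo")
```

Under these assumptions the serverless bill stays under a dollar while the always-on VM is a few dollars; the crossover moves quickly as invocation counts or durations grow.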



Unfortunately they don't support shelling out or running static binaries, which makes Cloudflare worthless if you want to run other programming languages like Crystal, Ruby, or Python, or do OpenCV stuff, all of which you can do (with some difficulty) in Google Cloud Functions and in Lambda. You are stuck in pure JS land with Cloudflare Workers, though they are still awesome given the speed increase.

Disclaimer: I have 27 Google Cloud Functions doing native stuff atm, about 70% in Crystal, 15% in Ruby, and 15% in Python.
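The "shell out to a bundled static binary" pattern described here is simple to sketch from a Python handler. The `echo` call below is a stand-in for a bundled Crystal/Ruby binary, which is an assumption for illustration only:

```python
# Sketch: a function handler that shells out to a native binary packaged
# alongside the function code, the pattern Cloud Functions/Lambda allow.
import subprocess

def run_native_tool(args):
    """Run a bundled binary and return its stripped stdout."""
    result = subprocess.run(
        ["echo", *args],  # placeholder for e.g. ./my-crystal-binary
        capture_output=True,
        text=True,
        check=True,       # raise if the binary exits non-zero
    )
    return result.stdout.strip()
```

In a real deployment the binary ships in the function's package (or container layer) and must be built for the runtime's OS/architecture.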


Looks super similar to Cloudflare Workers [0], but less generic: Cloudflare Workers use a modified form of the Service Workers API (which means code reuse is possible), whereas this looks like it’s creating lock-in :-(

Still, great on the small latency (~1ms?). Much better than Docker + Node, where the latency goes crazy (~1s) :-( I would like to see more BaaS products like these (small latency, no Docker).

[0] https://developers.cloudflare.com/workers/about/


They have open sourced the workers runtime to alleviate lock-in fears: https://blog.cloudflare.com/workerd-open-source-workers-runt...

Cloudflare Pages and Workers (and similar products) are indeed great but I recently switched over to a plain old free tier Oracle Cloud VM with FastAPI behind Nginx (Docker containers). I use Cloudflare as a proxy for HTTPS/certs. I don't have to think about Cloudflare Worker limitations, can host a Postgres instance, and simply deploy through `git pull` and `docker-compose up`.

Has anyone here used Cloudflare Workers yet? I know they just launched recently. Were you using new or existing code (coming from Lambda, for example)? How did the performance compare? Did you run into any difficulties?

Cloudflare Workers

IIRC, Cloudflare Workers are based on V8 sandboxing ... which is in part why the cold starts are so fast.

https://developers.cloudflare.com/workers/reference/how-work...


Cloudflare Workers run in an environment more like Chrome than Node.

Containers have isolation problems, thus requiring a further layer of isolation (microVMs like AWS’s Firecracker), which slows startup times. Cloudflare Workers are mostly intended for edge-ish scenarios, so they need fast startup times.

Hm yes, the fact I can't run Cloudflare Workers somewhere else is a worry. Fair point.

> does Cloudflare provide container hosting service agnostic of language -- something like Google Cloud Run?

Nope. Their Workers are V8-based, so JS or Wasm.

