We write our backend in C++ with a built-in HTTP server, so there is a single long-running thread for all requests and zero initialization time: http://fintank.ru:8080/s/test
I enjoyed doing FastCGI with this framework, but the simple examples don't present realistic complexity. I struggled to find a way to create a small web app while maintaining simplicity similar to the examples provided (such as https://learnbchs.org/easy.html ), even though the semantics weren't that different (present a form, make a request once the data is accepted, present a transformed result).
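For readers who haven't clicked through, the learnbchs "easy" example boils down to roughly this shape. This is a from-memory sketch of the kcgi sample program, so treat the exact headers and calls as approximate and check kcgi(3) before building on it:

    #include <sys/types.h>  /* size_t, ssize_t */
    #include <stdarg.h>     /* va_list */
    #include <stddef.h>     /* NULL */
    #include <stdint.h>     /* int64_t */
    #include <kcgi.h>

    int
    main(void)
    {
        struct kreq r;
        const char *page = "index";

        /* Parse the CGI environment into r. */
        if (khttp_parse(&r, NULL, 0, &page, 1, 0) != KCGI_OK)
            return 1;

        /* Emit status and content type, then the body. */
        khttp_head(&r, kresps[KRESP_STATUS], "%s", khttps[KHTTP_200]);
        khttp_head(&r, kresps[KRESP_CONTENT_TYPE], "%s",
                   kmimetypes[KMIME_TEXT_HTML]);
        khttp_body(&r);
        khttp_puts(&r, "<!DOCTYPE html><p>Hello, world.</p>");

        khttp_free(&r);
        return 0;
    }

The realistic-complexity part (form validation, talking to the database, templating the result) is exactly what sits between khttp_parse and khttp_free, and that is where the examples stop.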
While I think quite a few people use this, I question the actual benefits. People often argue performance, but unless you put a lot of work into doing things right, the generic "C backend" won't outperform even simple things like node.js ... at least once you reach the realm of tens of thousands of requests per second coupled with more than just a static page, a "hello world", and a for-loop. The same goes for the "DBMS" part of things, which is the bottleneck for the majority of real-world applications anyway.
I think the best use case would be some static content delivery coupled with a few dynamic pages and APIs backed by SQLite here and there ...
You can easily serve 10000 static requests/second per core on a modern server grade CPU over TLS with nginx, which is written in C. Slap node.js and express on top and you might dip to something like half of that.
But the real win with using C (or rather, using direct syscalls) comes from how much of that throughput you can retain while serving dynamic content. At least on Linux, performantly serving a static file would have a syscall flow similar to the following:
file -> splice -> pipe -> splice -> kTLS socket
The performance increase comes from copying the file to the socket without any detouring of the data via userspace. This can easily be adapted for the case where you need to inject portions of that file with dynamically retrieved content, by using offset fields in the splice call from the file being served, and manually writing to the pipe to inject the dynamic content. You cannot easily replicate this construct in any other dynamic-content server framework I know of - you have to do it manually in C (or any language that allows you to directly invoke system calls).
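A minimal sketch of that flow, assuming Linux, a socket that already has kTLS (TLS_TX) configured, and "dynamic content" that is just a header string written into the pipe by hand. serve_with_header() and the stripped-down error handling are illustrative only, not from any real framework:

    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int
    serve_with_header(int sock, const char *path, const char *hdr)
    {
        int file = open(path, O_RDONLY);
        if (file < 0)
            return -1;

        struct stat st;
        fstat(file, &st);

        int p[2];
        if (pipe(p) < 0) {
            close(file);
            return -1;
        }

        /* Dynamic bytes enter the stream via a plain write() into the pipe;
         * the file contents themselves never pass through userspace. */
        size_t queued = strlen(hdr);
        write(p[1], hdr, queued);

        off_t off = 0;
        while (off < st.st_size || queued > 0) {
            if (off < st.st_size) {
                /* file -> pipe (in-kernel copy, file offset advanced) */
                ssize_t n = splice(file, &off, p[1], NULL,
                                   st.st_size - off, SPLICE_F_MORE);
                if (n < 0)
                    break;
                queued += n;
            }
            /* pipe -> (kTLS) socket, still without touching userspace */
            ssize_t m = splice(p[0], NULL, sock, NULL, queued, SPLICE_F_MORE);
            if (m <= 0)
                break;
            queued -= (size_t)m;
        }

        close(p[0]);
        close(p[1]);
        close(file);
        return 0;
    }

The same pattern extends to injecting content mid-file: splice a range of the file into the pipe, write() the dynamic fragment, then continue splicing from a later offset.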
For most stuff HTTP/2 doesn't matter. I guess at some point they'll want to add support for HTTP/2, but for now it's not important unless you're a fairly large site.
I host my personal site on OpenBSD and its built-in httpd, though in its current incarnation it's just serving static HTML. No, no HTTP/2, but if I cared about that I could install Nginx and use that instead.
httpd's config syntax is mostly sane and error messages are mostly legible, and that it's written by OpenBSD's core devs inspires confidence. I recommend that system admins at least give it a try, especially if they've already decided to use OpenBSD. It can't do everything but I think it does enough to meet the needs of 80% of the web's sites.
I don't know why you are downvoted; this is one of the tasks where Go is just a better C. I do like the approach of this stack: a slim stack with a statically compiled web page. But switching to Go gives the developer much more safety and quicker development for the custom part.
C is great for developing components like SQLite, which are well tested and only slowly evolving. But for the quickly changing custom code of a web site, Go shines.
I can recommend Go for this too. I use it for small APIs. Go has really easy cross compiling as well, so deploying from Linux to OpenBSD is really simple.
I still use httpd as the proxy but use compiled Go executables for CGI.
It is a good idea in terms of long-term spending: if you already have a good C++ HTTP framework, you spend less CPU/memory and can make a web service/site that can live forever for 0.00001 dollars a month. Or if you're planning to serve lots of requests AND want users to feel low-latency responses, you have no options other than Rust, Go, C++, maybe something else. The main fight must occur on the "C++ vs Rust" scene, but C++ certainly won't die in another 20 years, so you're happy using it for long-term projects.
> Or if you're planning to serve lots of requests AND want users to feel low-latency responses, you have no options other than Rust, Go, C++, maybe something else.
That sounds like a lot of options. While I don't exactly like Go, that's very much the sort of things it was designed for, "stdlib-only" go will give you everything you need with pretty much guaranteed safety.
> The main fight must occur on the "C++ vs Rust" scene, but C++ certainly won't die in another 20 years, so you're happy using it for long-term projects.
The concern is less the death of C++ and more exposing C over the web, which exponentially increases the damage of any mistake you make (likewise for C++, though likely with a lower exponent).
As I remember, Go was "invented" to allow writing SMALL projects, where people previously used Bash/C/Perl. Like all these small command-line utilities and scripts to solve Site-Reliability-Engineer-tasks or DevOps-tasks. Using Go in projects larger than a "multithreaded custom-binary-format-to-CSV converter" or a "daemon watching sensors and inserting rows into some DB" is not a great fit. The big thing about Go is the wealth of libraries available for everything (but on that front Python is better).
> As I remember, Go was "invented" to allow writing SMALL projects
On the contrary, one of the motivators for building Go was large (millions of LOC) codebases; build speed and how quickly a developer can get comfortable in the codebase were big motivators for its build pipeline and code style.
Have experience with Go. Feels like C in terms of lack of templates, and feels like Java in terms of "how I avoid world-freezes by GC". Go seems semantically less rich than C++; the language makes you write MORE code for the same thing you would write in C++ using your HTTP framework.
> feels like Java in terms of "how I avoid world-freezes by GC"
No… that’s not Go’s GC that you have experience with. I don’t know what language you used that you confused with Go. GC pauses in Go are notoriously tiny, since Go prioritizes consistently low latency. I’ve worked on many Go projects, and GC is not something that I worry about.
> Feels like C in terms of lack of templates
Are you seriously implying that C was not designed for large projects? That it was designed for tiny projects only? Because that’s not what history shows us, and C is used in massive projects like the Linux kernel. (Which causes innumerable CVEs related to human errors that C doesn’t attempt to prevent, unfortunately.)
It also doesn’t matter how fast you can churn out code if it’s Swiss cheese full of memory safety vulnerabilities. No one has ever pointed me to a popular C or C++ project that hasn’t been rife with vulnerabilities that would have been trivially avoided by using any memory safe language… so it’s a fair assumption that any C++ that you’re churning out has problems.
You mention Rust, and that’s a perfectly valid option if you have such a passionate hatred of garbage collectors. But, using C++ for a web service in 2021? No thanks. We need to be minimizing attack surface these days. (Memory safe languages can’t prevent all vulnerabilities, but neither do seatbelts. You should still wear a seatbelt when driving. Unbuckling doesn’t make the car go faster, and seatbelts do prevent certain classes of injuries.)
> Are you seriously implying that C was not designed for large projects?
C was designed to move from asm to something high-level. C made it possible to write larger projects than in ASM. That's all. There is no absolute measurement of project size, only a relative one: larger vs. smaller. Go goes in the "smaller" direction compared to C++.
All those arguments about safety/bugs etc. are old. In modern C++ you write without manually controlling memory allocation at all, and you don't write your own arrays that can be overflown.
> All those arguments about safety/bugs etc. are old. In modern C++ you write without manually controlling memory allocation at all, and you don't write your own arrays that can be overflown.
People say this, but they never point to any large, popular projects that demonstrate it.
If you write any serious project in Rust, you'll quickly realize how unsafe "modern" C++ is. There's just no comparison. I'm happy for people to pick whatever memory safe language meets their needs, and most of those are garbage collected. Garbage collectors are fine for most software, but Rust exists for situations where they aren't fine.
> C was designed to move from asm to something high-level. C made it possible to write larger projects than in ASM. That's all. There is no absolute measurement of project size, only a relative one: larger vs. smaller. Go goes in the "smaller" direction compared to C++.
By your definition, everyone should be writing Haskell, because a single line of Haskell can express what takes tens of lines of C++. Haskell is immensely powerful in this way.
> In modern C++ you write without manually controlling memory allocation at all, and you don't write your own arrays that can be overflown.
The simplest and most straightforward way to access an std::vector item allows an OOB read. Literally every smart pointer can be empty and will hit UB with no warning if deref'd in that state (that includes the brand-new std::optional). The rules around special member functions (rule of three/five/zero) remain a tarpit lined with shit-smeared stakes; Chrome got bit by a GDI leak just a couple of years back because of it (a refactoring of an RAII object removed an operator= overload and started leaking GDI handles by the hundred when using Chrome Remote Desktop).
> As I remember, Go was "invented" to allow writing SMALL projects
You remember wrong. Go was designed to allow quickly onboarding grads onto large projects.
> Like all these small command-line utilities and scripts to solve Site-Reliability-Engineer-tasks or DevOps-tasks.
I don't know what you're thinking about, but it's certainly not Go: half the language is centered around concurrency, which is not very useful for those, especially given the pain and costs that "green" threads otherwise impose.
Go was designed to build network services and servers.
The boring, memory-safe option seems to be Java, but I haven't touched Java for web stuff in a number of years. Is the performance much worse than Go's? Go seems really nice for web stuff; afaik it's one of the things it was designed for.
Java makes you think about its GC: "what must I do to keep the GC from waking up?" Always reuse allocated objects, and so on. Or you turn the GC off, and then you've basically got C++. But with C++ you write less code (considering the C++11...C++20 standards) compared to Java, so why would you need a bloated C++-like language (which Java is) when you can use C++?
I'm sure performance can be comparable (in terms of requests / second) with enough tweaking and sane design choices, but Java has much longer startup times (especially if you use a framework like Spring) and much higher memory usage, so those are things to keep in mind.
Go is / feels a lot more minimalistic, with much less drama and more direct code. Fewer opportunities to be clever.
Jails are a FreeBSD concept, not implemented in OpenBSD. OpenBSD has a lighter form of sandboxing with `pledge` and `unveil`. My favourite explanation for these system calls is from SerenityOS, which uses similar versions of the calls.
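For illustration, here is a rough sketch of how a small file-serving process might use the two calls. The promise strings and path are assumptions about what such a server would plausibly need; see pledge(2) and unveil(2) for the exact semantics:

    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
        /* Only this directory tree remains visible to the process, read-only. */
        if (unveil("/var/www/htdocs", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
            err(1, "unveil");

        /* Allowed syscall groups: stdio, reading unveiled files, inet
         * sockets. Violating the pledge kills the process. */
        if (pledge("stdio rpath inet", NULL) == -1)
            err(1, "pledge");

        /* ... accept() connections and serve files from here on ... */
        return 0;
    }

The appeal is that the restrictions are declared by the program itself, right at the top of main, rather than in an external jail or container configuration.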
To answer questions so far (I'm the author of learnbchs): yes, folks use BSD to host servers (if you want HTTP/2, there's nginx, etc.); kcgi interfaces with either CGI or FastCGI depending upon what the caller wants; yes, folks use this.
Doing web development in C and getting that sweet performance boost is very tempting, but the time it takes to develop even a simple website is just not worth it. I would rather spend a few pounds extra on servers and go to the cinema than spend the hours of developer time it would take to build PHP-style sites in C++.
You just use some "framework" to develop C++ apps. The framework should give you a place for your DO_FUNCTION(), and everything else is already implemented: listening on port 80, DB interaction, and so on.
You can get the same performance boost without the footguns of C if you use e.g. Go; I've done so for a while now and I'm quite enamoured with it. A possible downside is that it ships with a runtime, so binaries are at least 10MB (before optimizing for size).
That said, you mention PHP, I wonder how PHP running in HHVM compares to a more bare metal solution (or regular interpreted PHP).
I agree; anyone who runs a Go web service would probably concur. Switching to Go for my web applications has not only let me ship releases faster but has also improved capital efficiency through cost savings on hardware resources (because of that efficiency).
I'm running a moderately popular website with this stack on a $5 VPS. More precisely I'm running an ARNPCoW (Cloudflare or WireGuard) stack. That is, an ARNP stack that serves https from behind Cloudflare and drops packets that don't come from one of Cloudflare's advertised IPs unless they are from a WireGuard connection.
Our core business is served by an Apache module written in C++. I don't particularly like Apache, but writing an HTTP server from scratch is deceptively hard, especially in C. We took a quick look at nginx but found it hard to figure out. We also use libmicrohttpd to give us an entrypoint where we can pull stats from small programs.
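As an illustration of how small such a stats entrypoint can be with libmicrohttpd; this sketch targets the newer API where handlers return enum MHD_Result (older releases use plain int), and the port number and payload are made up:

    #include <microhttpd.h>
    #include <string.h>
    #include <unistd.h>

    /* Serve a fixed stats payload for every request; a real endpoint would
     * gather the numbers from the programs being monitored. */
    static enum MHD_Result
    stats_handler(void *cls, struct MHD_Connection *conn, const char *url,
                  const char *method, const char *version,
                  const char *upload_data, size_t *upload_data_size,
                  void **req_cls)
    {
        const char *body = "requests_total 1234\n";
        struct MHD_Response *resp = MHD_create_response_from_buffer(
            strlen(body), (void *)body, MHD_RESPMEM_PERSISTENT);
        enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
        MHD_destroy_response(resp);
        return ret;
    }

    int
    main(void)
    {
        struct MHD_Daemon *d = MHD_start_daemon(
            MHD_USE_INTERNAL_POLLING_THREAD, 8081, NULL, NULL,
            &stats_handler, NULL, MHD_OPTION_END);
        if (d == NULL)
            return 1;
        pause();               /* run until killed */
        MHD_stop_daemon(d);
        return 0;
    }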
I totally get the geeky appeal of such a web development stack (a few years ago I could have chosen that for my personal web page, for fun). But in practice I doubt that it ends up a win in terms of security, maintainability, and development time.
> Yes. But some folks confused humour with levity.
Time to fire up the Wayback machine ;)