Hacker News

That is pretty far-out, conceptually.

But in practice I assume there would be little need for that right? If you have the socket already and you get the same socket again from somebody else, would that give you any new capabilities?

If a socket is shared among multiple actors, can the same message be read by all of them? Or does the first reader "remove" the message from it?




One of my coworkers wrote a really cool bit of software to do this. I want him to open source it.

Basically, you can share a single socket amongst many servers. The OS ensures that just one process accepts each connection.

You can therefore have a manager process that owns the socket and passes it on to application processes.

To update, start new processes, then politely tell the old ones to go away.
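The manager-hands-off-the-socket pattern described above can be sketched with SCM_RIGHTS file-descriptor passing (which is what Python's `socket.send_fds`/`recv_fds` wrap, available since 3.9). This is a single-process sketch: the "manager" and "worker" are just the two ends of a Unix socketpair, and all names are illustrative.

```python
import socket

# The manager owns the listening socket.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen()
port = listener.getsockname()[1]

# A Unix-domain socketpair stands in for the manager<->worker control channel.
manager_end, worker_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Manager hands the listening socket's fd to the worker
# (SCM_RIGHTS ancillary data under the hood).
socket.send_fds(manager_end, [b"take it"], [listener.fileno()])

# Worker receives the fd and rebuilds a socket object around it.
msg, fds, _, _ = socket.recv_fds(worker_end, 1024, 1)
inherited = socket.socket(fileno=fds[0])

# Both objects now refer to the same kernel-level listener,
# so the worker can accept() connections the manager never sees.
assert inherited.getsockname()[1] == port
```

In a real deployment the two ends would live in separate processes connected by a named Unix socket, and the manager would keep the listener open so it can hand it to the next generation of workers.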


Yes, exactly, and if I remember correctly (my C memory is starting to fade these days), there is even a flag to reuse the same socket when the program opens again (SO_REUSEADDR?).
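For what it's worth, there are two related flags, and the distinction matters here: SO_REUSEADDR lets a restarted program rebind an address while old connections linger in TIME_WAIT, while SO_REUSEPORT (Linux 3.9+ and the BSDs, with differing semantics) lets several *live* sockets bind the same port, with the kernel spreading incoming connections across them. A minimal sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_REUSEADDR: allow rebinding right after a restart, even while
# old connections from the previous process sit in TIME_WAIT.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# SO_REUSEPORT: allow several live sockets to bind the same port;
# not available on every platform, hence the guard.
if hasattr(socket, "SO_REUSEPORT"):
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)

s.bind(("127.0.0.1", 0))
s.listen()
```

With SO_REUSEPORT set on each of them, multiple independent processes can bind the same port and the kernel load-balances `accept()`s, which gets you the "share a socket amongst many servers" behavior without passing descriptors around at all.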

Not in any sense that matters here, no.

There's no math equation for reading from a socket. Code is not math and math is not code. It's possible to sort of hide the fact by stacking enough abstractions on top, but in the end it's going to be the same old code that makes it happen.


Does that mean that I could have any node send a socket message to any client now?

Does it? You could implement a classic select-based reader and writer in a single goroutine, to handle many sockets at once.

The true evil approach is to send the socket around, not the message, so that there is no copying required no matter what ;)

IIRC, both the Linux and BSD socket implementations let you do this by passing the file descriptor over a Unix-domain socket with sendmsg() and SCM_RIGHTS ancillary data.

No, you can simply have memory-mapped ring buffers between processes, with everything not required for the specific application cut away. You don't need any extra context switches that way.

No need for a traditional socket API while still being able to access it from multiple applications.

I have no interest in building such a beast, but I'd be truly shocked if it couldn't at least beat a generic kernel-based stack.

It'll lose some performance compared to the single-application approach, but there might still be a niche for this kind of design.

Sure, you'll need to copy memory, but the data should almost always be in L3 cache anyway.
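The shared-memory ring buffer idea can be sketched roughly like this. It's a single-process toy (the consumer would normally attach to the same segment by name from another process), and a real implementation needs atomic index updates and memory barriers; the layout and indices here are illustrative only.

```python
from multiprocessing import shared_memory

CAP = 16  # ring capacity in bytes; head/tail fit in one byte each, so CAP < 256

# Layout: byte 0 = head (read index), byte 1 = tail (write index), then CAP data bytes.
shm = shared_memory.SharedMemory(create=True, size=2 + CAP)
buf = shm.buf
buf[0] = buf[1] = 0

def push(b: int) -> bool:
    head, tail = buf[0], buf[1]
    if (tail + 1) % CAP == head:      # full: one slot is sacrificed to tell full from empty
        return False
    buf[2 + tail] = b
    buf[1] = (tail + 1) % CAP         # publish the write by advancing tail last
    return True

def pop():
    head, tail = buf[0], buf[1]
    if head == tail:                  # empty
        return None
    b = buf[2 + head]
    buf[0] = (head + 1) % CAP
    return b

for byte in b"hi":
    push(byte)
out = bytes([pop(), pop()])

shm.close()
shm.unlink()
```

A second process would open the segment with `SharedMemory(name=shm.name)` and run the same `pop` loop, which is the "no socket API, no context switch" data path the comment describes.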


When we use socket.io, we are generally trying to publish messages to all the collaborators of a workspace, even though the origin of the message is one of those collaborators; this is especially common in collaborative applications.

In this scenario, where a server must convey a message to multiple users, a socket client creates a room and adds users to it. When any change (like an app update or a new comment) happens in that room, the change must be conveyed to all of the room's members.

The information that there is a room called xyz, with some collaborators (say a, b, c) in it, is stored by a socket instance on a particular server instance.

When a new instance of the application or server is created, a new socket instance is created along with it. When the changes for a room arrive at this new server, its socket instance knows nothing about the room created by the previous instance, nor about the room's members/collaborators. Therefore the new server instance can't send the changes to the clients connected to the existing server. I have tried to explain the problem visually here. Read more here - https://sosha.hashnode.dev/how-to-use-redis-pubsub-to-handle-socketio-sessions-across-multiple-instances
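The Redis pub/sub fix described in that post can be sketched language-agnostically: every instance publishes room events to a shared channel instead of only to its local sockets, and every instance subscribes and delivers to whichever room members it holds. This sketch replaces Redis with a trivial in-memory `Bus` so it's self-contained; all class and method names are made up for illustration.

```python
from collections import defaultdict

class Bus:
    """Stand-in for Redis pub/sub: every server instance subscribes to one channel."""
    def __init__(self):
        self.subscribers = []
    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

class ServerInstance:
    """One app/server instance: knows only the rooms its own clients joined."""
    def __init__(self, name, bus):
        self.name = name
        self.rooms = defaultdict(set)   # room -> locally connected clients
        self.delivered = []             # (client, message) pairs, for inspection
        self.bus = bus
        bus.subscribers.append(self.on_bus_event)
    def join(self, room, client):
        self.rooms[room].add(client)
    def emit_to_room(self, room, message):
        # Publish to the shared bus instead of only to local sockets, so
        # instances that never saw the room still deliver to their clients.
        self.bus.publish((room, message))
    def on_bus_event(self, event):
        room, message = event
        for client in self.rooms.get(room, ()):
            self.delivered.append((client, message))

bus = Bus()
a, b = ServerInstance("A", bus), ServerInstance("B", bus)
a.join("xyz", "alice")
b.join("xyz", "bob")            # bob is connected to a *different* instance
a.emit_to_room("xyz", "new comment")
```

Instance B never learned about room xyz from instance A directly; it only knows bob joined locally, yet bob still receives the event because it traveled over the shared channel.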


Oh, thanks! I didn't know that. I supposed it worked by inheriting the listening socket but I didn't check.

Sure... a packet/socket/port level interpretation works too.

The spirit of his point, though, appears to be creating decoupled systems from simple message passing protocols.


The problem you mention is solved by evented sockets.

You don't even need that. If the old server process exec()s the new one, it can pass on its file descriptors -- including the listening socket -- when that happens.
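The fd-inheritance-across-exec trick can be demonstrated with a child process: descriptors survive fork+exec, so the new server can rebuild a socket object from the bare fd number. A sketch using `subprocess` (whose `pass_fds` keeps the descriptor open, at the same number, across the exec):

```python
import socket
import subprocess
import sys

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]

# The "new server": it inherits the listening fd across exec and
# rebuilds a socket object from the bare descriptor number.
child_src = (
    "import socket, sys\n"
    "s = socket.socket(fileno=int(sys.argv[1]))\n"
    "print(s.getsockname()[1])\n"
)
fd = listener.fileno()
out = subprocess.run(
    [sys.executable, "-c", child_src, str(fd)],
    pass_fds=(fd,),           # keep the fd open across fork+exec
    capture_output=True, text=True, check=True,
).stdout.strip()

assert int(out) == port       # the child holds the very same listening socket
```

In the exec()-in-place scheme the comment describes there is no child at all: the old server marks the fd inheritable and replaces its own image, passing the fd number along (e.g. via argv or an environment variable).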

Just to check my own knowledge. Do you mean opening another socket for logs?

If not, what is a channel?


You don't just need to preserve the socket; you also need to keep both the new and the old server in memory long enough for the existing connections to die.

Perhaps there's a kernel-level API that could be added to allow sockets to be snatched or handed over to a new process. That is, honestly, probably the more apropos solution. The fact that sockets act as a kind of lock is an implementation detail.


Is there a reason this couldn't be done as a new socket type (initialized by socket()) instead of either a dedicated new system call and/or the device node they're doing now? I'm not sure it'd be important to do that instead, I'm just curious if there's an obvious rationale I'm missing.

Wow, it only includes Socket.

You could make this a bit simpler nowadays with some libs like Mojo and LWP.


At most, you need to track a couple of booleans per socket, one for read and one for write.

Depending on what you are doing, you might not even need to track these booleans. For example, on the read side you can ignore read events when you are not interested in reading. When you switch back to read interest, you can read the socket to see if data arrived while you ignored events. A similar strategy can be used on the write side.
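The "switch read interest off and on" strategy maps directly onto an event-loop interest mask. A sketch with Python's `selectors` module, using a socketpair so it's self-contained: data that arrives while read interest is off simply waits in the socket buffer until interest is re-enabled.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)

# Register with read interest only: this mask is the per-socket boolean.
sel.register(a, selectors.EVENT_READ)

# Temporarily lose interest in reading: drop EVENT_READ from the mask.
sel.modify(a, selectors.EVENT_WRITE)

b.sendall(b"ping")            # data arrives while we are not watching for it

# Switch read interest back on; the data is still queued in the kernel buffer.
sel.modify(a, selectors.EVENT_READ)
events = sel.select(timeout=1)
assert events and events[0][1] & selectors.EVENT_READ
data = a.recv(4)

sel.close()
a.close()
b.close()
```

This is the level-triggered case the comment relies on: nothing is lost by ignoring events, because readiness is re-reported as long as data remains buffered.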


> can the client assume that communication received on that socket is actually from the server?

No.
