using two English words as the name would already be an improvement for the common web-searcher... As an example, here are the names of two apps that do similar things: "Blue Iris" and "motion"... let me know which one you are able to find more quickly.
I may have misunderstood, but I think they're using the log as a write-ahead log, which is a similar but subtly different idea to event logging.
A Write Ahead Log is an implementation detail of a distributed system, whereby the system has a log of actions it applies against a state machine. Protocols, such as Raft, exist to keep the log in sync, and the state machine is kept in sync as a consequence. The log entries would be quite abstract and low-level, like "tuple inserted".
An Event Log, on the other hand, is an application level architecture technique. The log entries would be application-specific actions, like "user logged in".
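The distinction can be made concrete with a sketch. The entry shapes below are illustrative only (not taken from any real system): the same user action as it might appear at each level of logging.

```python
# Write-ahead-log entries are low-level storage operations; the state
# machine replays them to reconstruct its internal structures.
wal_entries = [
    {"op": "tuple_insert", "table": "sessions", "key": 9172, "bytes": b"\x01\x02"},
    {"op": "index_update", "index": "sessions_by_user", "key": "alice", "ref": 9172},
]

# Event-log entries are application-level facts; any consumer can
# interpret them without knowing the storage layout underneath.
event_entries = [
    {"event": "user_logged_in", "user": "alice", "at": "2019-05-07T10:32:00Z"},
]

# A single application-level event may expand into several WAL entries.
assert len(wal_entries) > len(event_entries)
```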
Out of the box, imo it'll be faffy and probably not worth it vs just using an existing solution. :digraph is great; having a graph structure available out of the box in the standard library is super nice for certain things. But it's 3 ETS tables, and ETS tables can't be replicated across nodes. Mnesia tables can (that's the point), so replicating how :digraph works using Mnesia is probably fairly doable (I assume this has been tried, but I haven't investigated it).
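To see why this seems doable: :digraph is backed by three ETS tables (roughly vertices, edges, and neighbours). Here's a rough sketch of that layout in Python, with dicts standing in for the tables; the names and record shapes are illustrative, not OTP's exact schema.

```python
vtab = {}  # vertex -> label
etab = {}  # edge id -> (from_vertex, to_vertex, label)
ntab = {}  # (direction, vertex) -> set of edge ids (the "neighbours" table)

def add_vertex(v, label=None):
    vtab[v] = label

def add_edge(eid, v1, v2, label=None):
    etab[eid] = (v1, v2, label)
    ntab.setdefault(("out", v1), set()).add(eid)
    ntab.setdefault(("in", v2), set()).add(eid)

def out_neighbours(v):
    # Follow outgoing edge ids from the neighbours table into the edge table.
    return [etab[e][1] for e in ntab.get(("out", v), ())]

add_vertex("a"); add_vertex("b")
add_edge("e1", "a", "b")
assert out_neighbours("a") == ["b"]
```

Swapping these three tables from ETS to replicated Mnesia tables is, in principle, the whole trick; the faff is in transactions and keeping the three tables consistent across nodes.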
The architecture is: materialized views from a central append log.
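A minimal sketch of that architecture, with illustrative names: consumers fold the central append-only log into whatever derived state (view) they need.

```python
log = []  # the central append-only log

def append(event):
    log.append(event)

def materialize_balances(entries):
    """One materialized view: fold transfer events into per-account balances."""
    balances = {}
    for e in entries:
        if e["type"] == "transfer":
            balances[e["from"]] = balances.get(e["from"], 0) - e["amount"]
            balances[e["to"]] = balances.get(e["to"], 0) + e["amount"]
    return balances

append({"type": "transfer", "from": "a", "to": "b", "amount": 5})
append({"type": "transfer", "from": "b", "to": "c", "amount": 2})

# The view can always be rebuilt from scratch by replaying the log.
assert materialize_balances(log) == {"a": -5, "b": 3, "c": 2}
```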
I don't see a reference to Apache BookKeeper[1][2] in the above. My recommendation to anyone who wants to build their own variant of this architecture is to build it on top of BookKeeper instead of rolling your own. It is designed for this precise workload [2] and addresses the write throughput limitations of the OP 'Beam'.
[edit: pull quote from [2]]:
"what makes BookKeeper unique is its ability to offer a short-tailed, low-latency, distributed scale-out storage solution. Although this is a CP system, its greater availability makes it almost a C(A)P system. It is an apt storage for immutable data."
"some functionality is lacking before Beam could be used as a critical production data store, including deletion of facts"
It sounds like this needs a second central log to enable eviction of sensitive data before it can be seriously considered usable. One log for transactions and another for the data itself. Or at least that is the conclusion we reached when designing and building our own knowledge graph store (with a similar use of Kafka): https://juxt.pro/crux/docs/index.html#_unbundled
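The two-log split can be sketched like this: an immutable transaction log that references content only by hash, plus a separate document log whose entries can be evicted. All names here are illustrative, not Crux's actual API.

```python
import hashlib
import json

tx_log = []    # append-only transaction log; never rewritten
doc_log = {}   # content-hash -> document; entries here may be deleted

def submit(doc):
    # Store the document under its content hash...
    h = hashlib.sha256(json.dumps(doc, sort_keys=True).encode()).hexdigest()
    doc_log[h] = doc
    # ...and record only the hash in the transaction log.
    tx_log.append({"op": "put", "doc_hash": h})
    return h

def evict(h):
    # Sensitive data disappears; the transaction history stays intact.
    doc_log.pop(h, None)

h = submit({"user": "alice", "email": "alice@example.com"})
evict(h)
assert h not in doc_log                 # the data itself is gone
assert tx_log[0]["doc_hash"] == h       # history still references the hash
```

Because the transaction log never contains the data itself, eviction doesn't break replayability: replaying a transaction whose document is gone simply yields a tombstone.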
https://en.m.wikipedia.org/wiki/BEAM_(Erlang_virtual_machine...