The current application is Ethereum L1 state and state history, but it has useful properties for other applications. It's particularly good at being small and fast, and compressing time-varying blockchain-like or graph data.
As it's a prototype I'm not committing to final figures, but measurement, theory and prototype tests project the method to be significantly smaller and faster than other implementations, or at least competitive with the state of the art being researched by other groups.
> Why did you reinvent the wheel?
Different kind of wheel. No storage engine I'm aware of combines the properties needed for both size (small) and speed (high IOPS, low read and write amplification) across every type of operation required. Size and I/O are the major bottlenecks for this type of application; in a way it's one of the worst cases for any kind of database or schema.
It's neither a B-tree nor an LSM-tree (nor a fractal tree), because all of those are algorithmically poor for some of the operations required. I found another structure after being willing to "go there": relating the application directly to low-level storage behaviour, and reading older academic papers.
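To illustrate why amplification matters here, a rough back-of-envelope comparison can be sketched in a few lines. The parameter values and the approximation formulas below are my own hypothetical assumptions (standard textbook-style estimates), not figures from the engine being described:

```python
# Back-of-envelope I/O amplification estimates for random small updates.
# All parameters are hypothetical illustrations, not measurements.

def btree_write_amp(page_size=4096, entry_size=64):
    # An in-place B-tree rewrites a whole page to update one entry
    # in the worst case (random updates, no write batching).
    return page_size / entry_size

def lsm_write_amp(levels=5, fanout=10):
    # A levelled LSM rewrites each entry roughly (fanout / 2) times
    # at each level as compaction pushes it downward.
    return levels * fanout / 2

print(btree_write_amp())  # 64.0 -> 64 bytes written cost ~4 KiB of I/O
print(lsm_write_amp())    # 25.0 -> each byte rewritten ~25x over its lifetime
```

Under these assumed parameters, both classic structures pay a large I/O multiple on small random writes, which is one way to see why a workload dominated by them can motivate a different structure.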
These data structures are not hard to understand or implement once you get used to them. As I've been working on and off for many years on storage structures as a hobby (yeah, it's fun!), it was only natural to consider a custom structure when faced with an unusual performance challenge.
It also let me leverage separate work I've done on raw Linux I/O performance (for filesystems, VMs, etc.), which is how random-access reads can reach millions per second on a single NVMe SSD.
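The access pattern behind that claim is many independent positional reads at random offsets. A minimal sketch using `os.pread` shows the pattern only; the actual throughput figures would require lower-level Linux interfaces (e.g. `io_uring` with high queue depth and `O_DIRECT`, which is my assumption — the text says only "raw Linux I/O"):

```python
import os
import random
import tempfile

# Sketch of the random positional-read pattern. This illustrates the
# access pattern only; real millions/s throughput needs kernel-bypassing
# techniques (high queue depth, io_uring, O_DIRECT -- assumed, not stated).

def random_reads(path, block=4096, count=8):
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        out = []
        for _ in range(count):
            off = random.randrange(0, size - block + 1)
            # pread reads at an explicit offset without moving a shared
            # file position, so many reads can be issued independently.
            out.append(os.pread(fd, block, off))
        return out
    finally:
        os.close(fd)

# Demo on a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 * 4096))
    path = f.name
blocks = random_reads(path)
print(len(blocks))  # 8
os.remove(path)
```

Because each `pread` carries its own offset, the reads have no ordering dependency on each other, which is what makes deep I/O queues on NVMe effective.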
> Is it as durable as Postgres?
Yes.
Modulo implementation bugs (because it won't have the scrutiny and many eyes/years of testing that Postgres does).
> Any links?
Not at this time, sorry!