Both OT and CRDT approaches have lots of obscure edge cases. In both cases, I couldn't write a correct algorithm without a fuzzer to save my life. (And I don't trust anyone else to, either.)
CRDTs aren't that complex on the surface - this messy file[1] implements 4 different list CRDT algorithms in a few hundred lines of code. And CRDTs are pretty simple for JSON structures, which is not at all true for OT algorithms.
But text CRDTs in particular need a whole lot more thought around optimization because of how locations work. Locations in list/text CRDTs generally work by giving every item a globally-unique ID and referencing that ID with every change. So, inserts work by saying "Insert between ID xxx and yyy". But, if you have a text document which needs to store a GUID for every keystroke which has ever happened, disk and memory usage becomes crazy bad. And if you don't have a clever indexing strategy, looking up IDs will bottleneck your program.
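To make that concrete, here's a minimal sketch of ID-based addressing (names and types are mine, not from any real library). Every item carries a unique ID, and an insert names the ID of its left neighbor instead of an integer index. Real algorithms (RGA, Yjs/YATA, Fugue) add careful tie-breaking rules for concurrent inserts at the same spot, which this toy version skips:

```typescript
type Id = string; // e.g. "agentA-0" - globally unique per inserted item

interface Item {
  id: Id;
  content: string;
  deleted: boolean; // deletes are tombstones; the ID must survive
}

interface InsertOp {
  id: Id;                 // ID of the new item
  originLeft: Id | null;  // "insert after item xxx" (null = document start)
  content: string;
}

// Naive O(n) integration: scan for the left origin and place the new
// item immediately after it. This linear scan is exactly the lookup
// that needs a clever index once documents get large.
function integrate(doc: Item[], op: InsertOp): void {
  let idx = 0;
  if (op.originLeft !== null) {
    idx = doc.findIndex(item => item.id === op.originLeft) + 1;
  }
  doc.splice(idx, 0, { id: op.id, content: op.content, deleted: false });
}

const doc: Item[] = [];
integrate(doc, { id: "a-0", originLeft: null, content: "H" });
integrate(doc, { id: "a-1", originLeft: "a-0", content: "i" });
console.log(doc.filter(i => !i.deleted).map(i => i.content).join("")); // "Hi"
```

Note that even this toy stores an ID object per keystroke - that's the per-character overhead that blows up disk and memory without the optimizations below.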
In diamond types (my CRDT), I solve that with a pancake stack of optimizations which all feed into each other. IDs are (agent, sequence number) pairs so I can run-length encode them. Internally those IDs get flattened into integers, and I use a special hand-written B-tree which internally run-length encodes all the items it stores. I've iterated on Yjs's file format to get the file size down to ~0.05 bytes of overhead per inserted character in some real-world data sets, which is wild given each inserted character needs to store about 8 different fields (insert position, ID (agent + seq), parents, the inserted character, and so on).
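The run-length encoding trick works because typing produces contiguous sequence numbers. A rough sketch of the idea (my own simplified types, not diamond types' actual internals):

```typescript
// A user typing "hello" generates IDs (alice,0)..(alice,4). Because the
// sequence numbers are contiguous, the whole run collapses to one entry.
interface IdRun {
  agent: string;
  seqStart: number;
  len: number;
}

function pushId(runs: IdRun[], agent: string, seq: number): void {
  const last = runs[runs.length - 1];
  // Extend the previous run when the new ID directly continues it.
  if (last && last.agent === agent && seq === last.seqStart + last.len) {
    last.len += 1;
  } else {
    runs.push({ agent, seqStart: seq, len: 1 });
  }
}

const runs: IdRun[] = [];
for (let seq = 0; seq < 5; seq++) pushId(runs, "alice", seq); // typing "hello"
pushId(runs, "bob", 0); // a concurrent edit from another agent breaks the run
console.log(runs.length); // 2 entries instead of 6
```

Since most editing traces are long runs of consecutive keystrokes from one user, this collapses per-keystroke metadata to near zero in the common case.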
CRDTs aren't that complex. Making them small and fast is the challenge. Automerge has been around for years and still takes hundreds of megabytes of RAM to process a simple editing trace for a 10 page text document - even in their Rust implementation. Diamond types is orders of magnitude faster, and uses ~1MB of RAM for the same test.
The current version of diamond types is many times faster than any OT algorithm I've written in the past thanks to all those optimizations. Now that I've been down this road, I could optimize an OT algorithm in the same way if necessary, but you don't really need to.
Other kinds of CRDTs don't need these optimizations (e.g. the sort you'd use for a normal database). There are two reasons: 1. When editing text, you make an edit with each keystroke, so you have a lot of edits. 2. Merging list items correctly is orders of magnitude more complex than merging a database entry. If you just want CRDTs for editing a database of records, that'd probably only take a few dozen lines of code.
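For comparison, here's roughly what the database case looks like - a last-writer-wins register, the kind of CRDT you might use per field in a record (a generic sketch, not any particular library's API). The entire merge rule fits in a few lines:

```typescript
// Last-writer-wins register: merging just keeps the write with the
// higher timestamp, with agent ID as a deterministic tie-breaker.
interface LWW<T> {
  value: T;
  timestamp: number; // e.g. a Lamport clock
  agent: string;     // tie-breaker so all replicas converge
}

function merge<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.agent > b.agent ? a : b; // deterministic regardless of merge order
}

const x: LWW<string> = { value: "draft", timestamp: 1, agent: "alice" };
const y: LWW<string> = { value: "final", timestamp: 2, agent: "bob" };
console.log(merge(x, y).value); // "final" - and merge(y, x) agrees
```

No tombstones, no origin tracking, no tie-breaking between interleaved sequences - which is why the text case is where all the engineering effort goes.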
[1] https://github.com/josephg/reference-crdts/blob/main/crdts.t...