(comments)

Original link: https://news.ycombinator.com/item?id=44053560

The Hacker News discussion centers on a collaborative text editing algorithm that avoids traditional CRDTs and OT. Its core idea is to assign a globally unique ID to every character and have a central server serialize "insert after" operations that reference those IDs. Deletions are handled by hiding characters, which are retained so they can still serve as position anchors. Some commenters argue that, because the central server resolves conflicts, this is a simplified or "degenerate" CRDT, while others hold that it differs from a CRDT because it lacks guaranteed convergence when clients go offline. There is debate about performance, scalability, and the potential of using client IDs or time-ordered UUIDs to break ties. The discussion also explores the complexity of undo/redo, and whether CRDTs or OT are truly necessary when a central server already provides an ordering. Extending the approach to other data structures (such as maps and arrays) is mentioned as well. Some commenters feel the approach is not novel, just a simple scheme, and that it carries a single-point-of-failure risk.


    Collaborative Text Editing Without CRDTs or OT (mattweidner.com)
    212 points by samwillis 12 hours ago | 57 comments










    That is very neat. The algorithm:

    - Label each text character with a globally unique ID (e.g., a UUID), so that we can refer to it in a consistent way across time - instead of using an array index that changes constantly.

    - Clients send the server “insert after” operations that reference an existing ID. The server looks up the target ID and inserts the new characters immediately after it.

    - Deletion hides a character for display purposes, but it is still kept for "insert after" position purposes.

    This might have potential outside text editing. Game world synchronization, maybe.
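
    A minimal sketch of how that could look, in TypeScript (the names and the in-memory model here are mine, not the article's):

       type CharId = string;                    // e.g. a UUID
       interface Char { id: CharId; value: string; deleted: boolean }

       // operations sent by clients, serialized by the server in arrival order
       interface InsertAfterOp { afterId: CharId | null; id: CharId; value: string }
       interface DeleteOp { id: CharId }

       class Doc {
         chars: Char[] = [];

         applyInsert(op: InsertAfterOp) {
           // afterId === null means "insert at the beginning"
           const i = op.afterId === null
             ? -1
             : this.chars.findIndex(c => c.id === op.afterId);
           this.chars.splice(i + 1, 0, { id: op.id, value: op.value, deleted: false });
         }

         applyDelete(op: DeleteOp) {
           // hide the character, but keep it around as an insert-after anchor
           const target = this.chars.find(c => c.id === op.id);
           if (target) target.deleted = true;
         }

         text(): string {
           return this.chars.filter(c => !c.deleted).map(c => c.value).join("");
         }
       }

    Clients converge because they all replay the ops in the single order the server chose.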



    This is literally a degenerate CRDT. Central server for tie-breaking goes back to Google Wave.


    Any algorithm that uses a central server as a tie-breaker could easily be replaced by one where client ids are used for the tie-breaker.


    If you used UUIDv7 you'd get time-ordered UUIDs and could use those as a tie breaker.
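
    Since UUIDv7 puts a 48-bit Unix-millisecond timestamp in its most significant bits, the canonical hex form sorts in rough time order, so the tie-break could be a plain string compare (a sketch, and it still assumes reasonably synced clocks):

       // between two concurrent inserts targeting the same position,
       // the lexicographically smaller (i.e. earlier) UUIDv7 wins
       const tieBreak = (a: string, b: string): number =>
         a < b ? -1 : a > b ? 1 : 0;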


    Don’t you have to be confident the clocks are sufficiently synced across the system?


    And you know.. simultaneity is not a thing in reality...


    I miss Wave a lot, very quirky in a good way imo. We ran a few D&D games over it. RIP


    What you describe is a CRDT, isn't it?


    No; there is no single consistent final state that the system must converge to if the parts go offline. If you have this document:

       a{uuid=1}
    
    and two clients send the following operations:

       b{uuid=2} insert-after{uuid=1}
       c{uuid=3} insert-after{uuid=1}
    
    then the following two documents are both valid final states:

       abc
       acb
    
    That's fine as long as you have an authoritative server that observes all events in a single order and a way to unwind misordered local state, but it means that it's not a CRDT.


    Like Raft is a "special case" of Paxos, this feels like a "special case" of CRDT.

    It has all the flavor of CRDT, but adds a leader and a different way of establishing the total ordering (basically using the leader's local Lamport clock to break ties).

    Throw in leader reelection and some ledger syncing and then give everything some other names, I bet you can have "collaborative text editing on one page".



    > No; there is no single consistent final state that the system must converge to if the parts go offline.

    Sounds like a naive implementation of delta state CRDT. I mean, what if the author has this major epiphany that it's nice to sync states to get convergence?



    What about ordering concurrent operations by id? Then "abc" is the only consistent final state.

    I get what you mean though, having a central authority greatly relaxes the requirement.
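
    One way to sketch that is to keep siblings under a common parent sorted by ID, so every replica places concurrent insertions identically (this is roughly what sequence CRDTs like RGA do, not the article's scheme):

       interface Node { id: string; value: string; deleted: boolean; children: Node[] }

       // keep children sorted by id so concurrent insert-after ops on the
       // same parent land in the same relative order on every replica
       function insertAfter(parent: Node, id: string, value: string) {
         const node: Node = { id, value, deleted: false, children: [] };
         const i = parent.children.findIndex(c => c.id > id);
         parent.children.splice(i === -1 ? parent.children.length : i, 0, node);
       }

       // document text = depth-first walk, skipping tombstoned characters
       function text(node: Node): string {
         return (node.deleted ? "" : node.value) + node.children.map(text).join("");
       }

    With the earlier example, inserting b{uuid=2} and c{uuid=3} under a{uuid=1} in either order yields "abc" on both replicas.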



    How do you determine which operations are concurrent?


    We could generalize having a tree of known authorities upstream with what a CRDT does, resulting in both being two special cases of a more general consistent event processing model. CRDT makes the order of events commutative, hence the "authority" becomes a property of the math itself, rather than a physical service.


    If you have the additional semantics that says "operation with lowest uuid gets applied first", don't you essentially get a CRDT?

    I mean, a uuid is kind of a poor man's Lamport clock, isn't it?



    Yes, you can extend the algorithm that was described with additional semantics, and you can turn it into a CRDT.


    Is this really that novel? I mean using a central process for serializing a distributed system is like a no brainer -- didn't we start off from here originally? -- until you have to worry about network partitions, and CAP and all that jazz. You also now have a single point of failure. Also I skimmed the thing but was performance discussed?


    I really want to believe you were trying to make a reference to cap'n jazz - https://en.m.wikipedia.org/wiki/Cap'n_Jazz

    :)



    Never heard of them

    :(



    Yeah I have the same question. I'm not familiar with the problem space but this seems like my naive first idea so I'm wondering what the catch is.


    As the author, same.

    My best guess is:

    - Central-server collaborative editing work focuses on Operational Transformation (OT), likely due to inertia (studied since 1989) and the perception that storing an ID per character is inefficient. In fairness, it is, absent the optimizations introduced by RGASplit and Yjs (~2015).

    - For decentralized editing, OT is very complicated, and CRDTs took over as the solution of interest (studied since 2005). Storing every operation permanently in a log - needed to use the linked approach without a server - feels inefficient, as does server reconciliation's undo/re-apply process. So CRDT research has focused on avoiding those inefficiencies, sacrificing simplicity along the way, instead of just embracing them as the easy way out.

    To me, the "inefficiencies" seem quite manageable. Storage is cheap, text is small, and you probably want a complete op log anyway for auditing and document histories (cf. git). Server reconciliation's undo/re-apply process can be batched aggressively, e.g., only do it a few times per second; that just makes remote ops take a little longer to show up.

    Granted, I have not built a complete app around server reconciliation or the linked approach, so perhaps there is a hidden catch. But I am encouraged by the success of Replicache (https://doc.replicache.dev/concepts/how-it-works), which is where I learned of server reconciliation.
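
    For concreteness, a rough sketch of that rewind-and-replay loop on the client (my own naming; Replicache and the post describe variations of this, and a real editor would batch it as mentioned above):

       interface Op { /* application-defined, e.g. insert-after / delete */ }
       interface DocState { apply(op: Op): void; clone(): DocState }

       class Client {
         pending: Op[] = [];                   // local ops not yet acked by the server
         view: DocState;                       // what the editor currently shows

         constructor(private confirmed: DocState) {   // last server-confirmed state
           this.view = confirmed.clone();
         }

         localEdit(op: Op) {
           this.pending.push(op);
           this.view.apply(op);                // optimistic local update; also send op to the server
         }

         onServerOp(op: Op, isOwnOp: boolean) {
           this.confirmed.apply(op);
           if (isOwnOp) this.pending.shift();  // our oldest pending op is now confirmed
           // rewind and replay: rebuild the view from the confirmed state,
           // then re-apply the still-pending local ops on top
           this.view = this.confirmed.clone();
           for (const p of this.pending) this.view.apply(p);
         }
       }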



    ctrl+a

    ctrl+x

    ctrl+v

    Good luck



    Isn’t this a NOP?

    I think you could batch these things though. You just need a slightly more advanced protocol. If B is the first id, and L is the last, create range(B, L) as R and insert R after L (assuming Ctrl-v a second time).



    Surprised to see no discussion of other data structures like dicts/maps, or arrays of arbitrary type. Hopefully they'd be a straightforward extension. IME, apps need collaborative data structures more often than they need pure collaborative text editing.

    The motivating examples (update validation, partial loading, higher-level operations) are interesting, but I don't see a strong case that the reason Yjs etc. lack these features is the underlying CRDT implementation, as opposed to these features being intrinsically difficult to build.



    > Surprised to see no discussion of other data structures like dicts/maps, or arrays of arbitrary type. Hopefully they'd be a straightforward extension. IME, apps need collaborative data structures more often than they need pure collaborative text editing.

    Totally agree. I guess an array of "atomic" objects, where the properties of the objects can't be changed, can be done just by replacing the string with your own type. Changes inside an object are probably trickier, but maybe it's just about efficiently storing/traversing the tree?

    I've also always thought it should be possible to create something where the consumer of the helper library (per OP terminology) can hook in their own lightweight "semantic model" logic, to prevent/manage invalid states. A todo item can't have both isDone: true and state: inProgress at the same time. Similar to the rich text formatting semantics mentioned in the linked article.



    Cool to see a write up on this! Discovered the same method years ago and also wondered why it doesn’t show up in academic literature.

    I implemented this in a decentralized context and as a CRDT though, so that the properties hold (commutative, idempotent and associative).



    If the idea is to have an alternative to CRDT, what did you gain from building it as CRDT?


    It wasn’t really that we wanted a CRDT per se, but since it was implemented on top of an op-based append-only log CRDT, it turned out to hold those properties, which makes it a CRDT. We wanted the edits to be able to arrive in any order, or after delays due to network partitions (this was for a p2p network).


    Is the take-home message of the post that the full complexity of CRDTs/OT is necessary only in the absence of a central server?


    Even in the absence of a central server, you can still avoid CRDT/OT complexity if you have a decentralized way to eventually total order operations & apply them in that order: https://mattweidner.com/2025/05/21/text-without-crdts.html#d...

    As others in the comments argue, this is technically a CRDT (though a fully general one); also, undoing/replaying ops is itself non-trivial to implement. However, I hope this is still simpler than using a traditional CRDT/OT for each data type.



    Did you perhaps mean to write, “though not a fully general one”?


    A decentralized, eventually consistent total order on operations is a fully general CRDT, in the sense that you can put whatever (deterministic) operations you want in the total order and clients will end up in eventually consistent states.

    Whether the converged result is at all reasonable is a different question.



    Ok, now I understand your reply. In retrospect, you were being perfectly clear all along and my confusion was due to me not appreciating the precise definition of a CRDT. Thanks!


    OT requires a centralized server.


    Some OTs do, some don't. OTs with the TP2 property do not require a central authority to order edits I believe.

    In my experience if you are persisting your edits or document state, you have something that creates an ordering anyways. That thing is commonly an OLTP database. OLTPs are optimized for this kind of write-heavy workload and there's a lot of existing work on how to optimize them further.

    But now even S3 has PUT-IF, so you could use that to create an ordering. https://docs.aws.amazon.com/AmazonS3/latest/userguide/condit...



    so, an unoptimized crdt? just set max set size to 1 and yolo?


    It's very appealing as a kind of irreducible complexity. It feels close to what is actually happening, and it's simple, haha, even if, like you say, not optimized.


    I'm not an expert on this, but the main difference with a CRDT like Automerge seems to be the server reconciliation. See for example this article [1]. Automerge handles concurrent insertions by using a sequence number and relying on an agreed ordering of agent ids when insertions are concurrent, while this scheme relies on the server to handle them in the order they come in.

    The article mentions this:

    > This contrasts with text-editing CRDTs, in which IDs are ordered for you by a fancy algorithm. That ordering algorithm is what differs between the numerous text-editing CRDTs, and it’s the complicated part of any CRDT paper; we get to avoid it entirely.

    I can buy the idea that many apps have a central server anyway, so you can avoid the "fancy algorithm", though the server reconciliation requires undoing and replaying of local edits, so it's not 100% clear to me if that's much simpler.

    [1] https://josephg.com/blog/crdts-go-brrr/



    Agreed, the undoing and replaying isn’t exactly non-fancy. A persistent B+Tree is also not very non-fancy.


    I believe that Automerge internally stores all operations in an eventually consistent total order, which you can use as a substitute for the server in server reconciliation (cf. https://mattweidner.com/2025/05/21/text-without-crdts.html#d...). However, Automerge does not actually do that - instead it processes text operations using a traditional CRDT, RGA, probably because (as you point out) undoing and replaying ops is not trivial to implement.


    Use of server reconciliation makes me think client-side reconciliation would be tricky… how do you preserve smooth editor UX while applying server updates as they arrive?

    For example, if your client-sent request to insert a character fails, do you just retry the request? What if an update arrived in the intervening time? (Edit: they acknowledge this case in the “Client-Side” section, the proposal is to rewind and replay, and a simpler proposal to block until the pending queue is flushed)

    From a frontend vantage I feel like there may be a long tail of underspecified UI/UX edge cases, such that CRDT would be simpler overall. And how does the editor feel to use while riding the NYC subway where coverage is spotty?



    Both ProseMirror and the newer version of CodeMirror have a pretty elegant solution to this: each modification to the document is modeled as a step that keeps track of indices instead of node/text identities, and uses a data structure called a "position map" that the buffered steps can be mapped through to create steps with updated positions, which are then applied to your document.

    In practice, it works quite well. Here's more info:

    https://marijnhaverbeke.nl/blog/collaborative-editing.html https://marijnhaverbeke.nl/blog/collaborative-editing-cm.htm...
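
    To illustrate the core idea (this is a simplification, not ProseMirror's actual API), a position map for a single change can be little more than:

       // a change that replaces `deleted` characters at `pos` with `inserted` ones
       interface Change { pos: number; deleted: number; inserted: number }

       // remap an index from before the change to after it, so buffered
       // steps can be shifted instead of referring to character identities
       function mapPos(p: number, ch: Change): number {
         if (p <= ch.pos) return p;                    // before the change: unaffected
         if (p >= ch.pos + ch.deleted)                 // after the change: shift
           return p - ch.deleted + ch.inserted;
         return ch.pos + ch.inserted;                  // inside the deleted range: clamp to its end
       }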



    Does this finally solve collaborative text editing and its friends?

    Such an awesome approach.



    Could it? If you and I are simultaneously editing a list of people and you are trying to order them by age and I'm ordering them alphabetically, we are still going to have to reconcile the result.


    By that definition, it's not a solvable problem, is it?


    I don’t think so.

    AFAIK, CRDTs don’t solve semantic conflict issues.



    Is collaborative text editing with offline sync a nerd snipe [0]? I work for a big tech company and write a lot, and usually, worst case, someone else edits at the same time and the server can figure it out. Yes, it needs some kind of algo, but most concurrent edits are on different parts of a huge doc.

    Compare this to Git workflows. Git already handles merging most changes seamlessly.

    [0] https://xkcd.com/356/



    wonder if there would be a perf gain with UUIDv7


    They do discuss optimization of the IDs at the bottom of the article.


    Why not just use an ever incrementing u64?


    Then you need central coordination, either a single central server containing the counter, or something like Snowflake where you have multiple counters, each assigned orthogonal blocks ahead of time (that need to coordinate with a central server).

    UUIDs/ULIDs/etc. are fully distributed; two clients can assign IDs without coordinating, with ~0% chance of collision.



    You could also split the u64 to have the first 24 bits be unique to the client, and the last 40 be unique to the produced character. This would still allow 1 TiB of data produced per user and session. The single mutex would be the user ID counter.


    An incrementing u64 requires either atomic increments between concurrent clients or recalculation logic to consistently find the newly incremented ID after conflicting information syncs. UUIDs just spit out a unique ID without any complexity or associations with other clients.


    There's a closely related idea that might work, though. Each device editing the text could be assigned a 32-bit ID by the server (perhaps auto-incrementing). Devices then maintain a separate 32-bit counter that they increment for each operation they perform. The ID used for each character is (device_id, edit_id), which should fit nicely in 8 bytes.
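
    As a sketch, that pair packs into a single 64-bit value (assuming BigInt / ES2020 is available):

       // pack a server-assigned 32-bit device id and a per-device, per-op
       // counter into one 8-byte character id
       function makeCharId(deviceId: number, counter: number): bigint {
         return (BigInt(deviceId >>> 0) << 32n) | BigInt(counter >>> 0);
       }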


    Indeed, this is close to what Yjs (popular CRDT library) does: each client instance (~ browser tab) chooses a random 32-bit clientId, and character IDs combine this clientId with local counters. https://github.com/yjs/yjs/blob/987c9ebb5ad0a2a89a0230f3a0c6...

    Any given collaborative document will probably only see ~1k clientIds in its lifetime, so the odds of a collision are fairly low, though I'd be more comfortable with a 64-bit ID.



    [flagged]



    How reliable is that?


    must be sarcasm I believe.





