I am persuaded by John Bywater that every domain can be exhaustively described with events. As Whitehead said, "there's no going behind the events to find something more real." See Events Software Process Reality. transcript
John's talk identifies a pivotal challenge with event sourcing: the need for both atomicity of a change and notification of that change.
Wiki enjoys inherently asynchronous notification: humans must pull notifications of changes into their own browsers.
Let the journal be the authoritative data source.
Write edits to browser storage first.
Sync the changes in the journal with the server, giving us a built-in offline mode.
We gain atomicity here because edits only happen in the browser.
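A minimal sketch of that browser-first write path, assuming journal actions shaped roughly like federated wiki's and a hypothetical `/page/{slug}/actions` endpoint (both the storage key and the endpoint are illustrative, not wiki's actual API):

```typescript
// Hypothetical shapes, loosely modeled on federated wiki's journal.
interface JournalAction {
  type: 'create' | 'add' | 'edit' | 'move' | 'remove' | 'fork';
  id?: string;        // item id the action applies to
  item?: unknown;     // item payload for add/edit
  date: number;       // epoch millis when the edit happened in the browser
  site?: string;      // origin site for forks and dragged paragraphs
}

const storageKey = (slug: string) => `wiki:pending:${slug}`;

// Atomic step: record the action in browser storage before any network I/O.
function recordLocally(slug: string, action: JournalAction): void {
  const pending: JournalAction[] =
    JSON.parse(localStorage.getItem(storageKey(slug)) ?? '[]');
  pending.push(action);
  localStorage.setItem(storageKey(slug), JSON.stringify(pending));
}

// Best-effort step: push pending actions to the server, keeping them if offline.
async function syncWithServer(slug: string): Promise<void> {
  const pending: JournalAction[] =
    JSON.parse(localStorage.getItem(storageKey(slug)) ?? '[]');
  if (pending.length === 0) return;
  const res = await fetch(`/page/${slug}/actions`, {   // hypothetical endpoint
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(pending),
  });
  if (res.ok) localStorage.removeItem(storageKey(slug));
}
```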
Perhaps this is powered by a service worker that simulates the live network connection and performs the sync when the network becomes available.
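One way to do that is the Background Sync API; this is a hypothetical sketch, and the `journal-sync` tag and `replayPendingActions` helper are assumptions, not part of wiki:

```typescript
// Service worker file (sketch).
declare const self: ServiceWorkerGlobalScope;

async function replayPendingActions(): Promise<void> {
  // Re-read pending journal actions from browser storage and PUT them to the
  // server, as in the sync sketch above. Left abstract here.
}

self.addEventListener('sync', (event: any) => {
  if (event.tag === 'journal-sync') {
    // waitUntil keeps the worker alive until the replay settles; if it rejects
    // (still offline, say) the browser retries the sync event later.
    event.waitUntil(replayPendingActions());
  }
});
```

The page registers the tag with something like `navigator.serviceWorker.ready.then(reg => (reg as any).sync.register('journal-sync'))`. Background Sync is not available in every browser, so a plain `online` event listener is a reasonable fallback.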
The events that sync with the server might be an opportunity for a base level of Journal Optimization.
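For instance, assuming the `JournalAction` shape sketched above, a compaction pass could collapse runs of consecutive edits to the same item before syncing, so the server records one event per burst of typing rather than one per keystroke:

```typescript
// Hypothetical optimization pass: a later edit of an item supersedes an
// immediately preceding edit of the same item.
function compactEdits(actions: JournalAction[]): JournalAction[] {
  const compacted: JournalAction[] = [];
  for (const action of actions) {
    const last = compacted[compacted.length - 1];
    if (action.type === 'edit' && last?.type === 'edit' && last.id === action.id) {
      compacted[compacted.length - 1] = action;  // keep only the latest edit
    } else {
      compacted.push(action);
    }
  }
  return compacted;
}
```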
Paul90 shared Paul Frazee's thread: thread
See also Cryptographic Wiki for specific suggestions about authors and public keys.
If the page journal is something like a Merkle tree, with or without full public-private keys, it should simplify the sync mechanics, as git demonstrates.
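A minimal sketch of the idea, using a hash chain rather than a full Merkle tree: each entry's hash covers its content plus the previous hash, git-style, so two copies of a journal can compare heads and walk back to their common prefix. The hashing scheme here is an assumption for illustration only.

```typescript
async function sha256Hex(text: string): Promise<string> {
  const bytes = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, '0')).join('');
}

// Running hash of every prefix of the journal; the last entry is the "head".
async function journalHashes(actions: JournalAction[]): Promise<string[]> {
  const hashes: string[] = [];
  let previous = '';
  for (const action of actions) {
    previous = await sha256Hex(previous + JSON.stringify(action));
    hashes.push(previous);
  }
  return hashes;
}
```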
I think the Page would be a DDD Aggregate Root. stackoverflow
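A sketch of what that could mean, assuming the item and action shapes above: all edits enter through the Page, which appends to the journal and folds the action into the story, so the journal stays the single authoritative record. The class and method names are hypothetical.

```typescript
interface StoryItem { id: string; type: string; text?: string; }

class Page {
  constructor(
    public title: string,
    public story: StoryItem[] = [],
    public journal: JournalAction[] = [],
  ) {}

  // Every change goes through the aggregate root.
  apply(action: JournalAction): void {
    this.journal.push(action);
    // Fold the action into the story; only 'add' and 'remove' shown here.
    if (action.type === 'add' && action.item) this.story.push(action.item as StoryItem);
    if (action.type === 'remove') this.story = this.story.filter(i => i.id !== action.id);
  }
}
```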
We could keep the same granular journal. To display provenance, filter the journal for provenance events (forks of pages, or drags of paragraphs from different authors).
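A small sketch of that filter, assuming fork and add actions carry a `site` field naming their origin (an assumption modeled on how federated wiki journals attribute foreign content):

```typescript
// Keep only the journal entries that say where content came from.
function provenanceEvents(journal: JournalAction[], ownSite: string): JournalAction[] {
  return journal.filter(action =>
    action.type === 'fork' ||                                            // whole-page forks
    (action.type === 'add' && !!action.site && action.site !== ownSite)  // paragraphs dragged in
  );
}
```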
See also postMessage Excursion for a sampling of JavaScript prior art for event handling between frames and intersections with functional reactive programming.