The design of a distributed variant of the Plato framework to support collaborative editing
A first-thought system architecture (pull mode) is one in which the server keeps no client information and the client takes the more active part, retrieving updates from the server.
Note that the previous model manager, which was responsible for both managing the entire model collection and provisioning models for the rendering department, is now split into two parts, each taking one of those responsibilities and communicating with the other locally or over the wire.
Server                                |  Client
                                      |       o IModelProvider
                                      |       |
ModelManager ------ Local/Remote -----+--> On-screen model provider ---> View/View Model department
[All models]                          |    [On-screen models]

  1. -> 'model changed' signal
     <- check current o/s async
     -> reply if it's updated
     <- current collection
     -> changes in collection
  2. <- sight change
     <- current collection
     -> changes in collection
     (any new signal invalidates the update in progress)

  <----------------------------------------- Handlers
                                             (reference local o/s first)
  1. <- send update
  2. -> acknowledge update
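The pull-mode exchange above can be sketched roughly as follows. This is a minimal illustration, not actual Plato code: the class names, the revision counters, and the generation number standing in for the 'model changed' signal are all assumptions made for the sketch, and the collection diff is simplified to cover all models rather than only the on-screen subset.

```python
class PullServer:
    """Keeps no per-client state; only the models and a change counter."""
    def __init__(self):
        self.models = {}      # model id -> revision number (illustrative)
        self.generation = 0   # bumped on every change; drives 'model changed'

    def change(self, mid):
        self.models[mid] = self.models.get(mid, 0) + 1
        self.generation += 1

    def diff(self, snapshot):
        # "changes in collection" relative to the client's snapshot
        return {mid: rev for mid, rev in self.models.items()
                if snapshot.get(mid) != rev}


class PullClient:
    """Takes the active part: reacts to the signal and pulls updates."""
    def __init__(self, server):
        self.server = server
        self.on_screen = {}   # the client's current collection snapshot

    def on_model_changed(self):
        gen = self.server.generation
        self.on_screen.update(self.server.diff(self.on_screen))
        if self.server.generation != gen:
            # A new signal arrived mid-update and invalidated it: retry.
            self.on_model_changed()
```

Even in this toy form the client has to worry about invalidated in-flight updates, which hints at the complexity noted below.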
For recording, a recording object is instantiated for each client request; it listens for the user's further requests while the server keeps working on the revision tree as usual.
For playback, the recording is sent to the client, which also holds a copy of the tree (which should only grow, never change or shrink) to play the recording against.
Undo and redo are likewise handled entirely by the server, and the client simply follows its latest update.
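A minimal sketch of the recording and playback idea, with illustrative names (none of these classes are from the actual framework) and operations reduced to opaque strings. The key property it relies on is the one stated above: the tree is append-only, so a client copy stays valid for replay.

```python
class RevisionTree:
    """Append-only: nodes are added, never changed or removed, so a
    client-side copy remains a valid base for playback."""
    def __init__(self):
        self.nodes = []

    def apply(self, op):
        self.nodes.append(op)


class Recording:
    """One instance per client request; listens to the user's further
    requests while still applying them to the server's tree as usual."""
    def __init__(self, tree):
        self.tree = tree
        self.ops = []

    def on_request(self, op):
        self.ops.append(op)   # record the operation
        self.tree.apply(op)   # and work on the revision tree as normal


def playback(recording, tree_copy):
    """On the client: replay the recording against the copy of the tree."""
    for op in recording.ops:
        tree_copy.apply(op)
    return tree_copy.nodes
```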
Obviously the communication between the client and the server for updating the models is not complete yet, and there are many cases still to be handled. This pattern therefore puts high complexity into the client design, which is error-prone, inflexible to change, and contrary to general app-design guidelines.
Last night before I fell asleep I came up with the idea of using a sort of push mode, which is so straightforward that it is surprising it had been missing from my mind for so long. In detail, it uses a more sophisticated server and the principles of matching design and minimal data exchange.
Take a look at the architecture first:

Server                                        |  Client
                                              |       o IModelProvider
                                              |       |
ModelManager --> Per-session On-screen --- Local/Remote ---> On-screen provider ---> View/View Model
[All models]     [On-screen models]           |
                 [Client view scope info]

  Handshakes:
     <- client view changes
     -> update of on-screen list
        and individual model data
        (EXCLUDES unnecessary self-
        initiated model changes)

  <----------------------------------------- Handlers
     model changes                           (reference local o/s first)
This design greatly simplifies the data exchange, and therefore makes it much more manageable, by minimising the model data updates to only the changes to the on-screen list and models. It is literally a split of the on-screen model provider, with each part sitting on either the server or the client and a minimum of communication between them. The cost is that the provider on the server needs to keep a session for each client, consisting of a mirror of the client's on-screen list and the client's current view scope, which is entirely reasonable and affordable.
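The per-session provider can be sketched as below. This is an illustration under assumptions, not the actual design: models are reduced to bounding boxes, the view scope to a rectangle, and visibility to a rectangle-intersection test (the "approximation" option discussed later). The point it demonstrates is that only the diff of the on-screen list crosses the wire.

```python
class Session:
    """Per-client state the server keeps: a mirror of the client's
    on-screen list plus the client's current view scope."""
    def __init__(self):
        self.view_scope = None   # visible rectangle (x0, y0, x1, y1)
        self.mirror = set()      # ids the client currently has on screen


def intersects(a, b):
    """Axis-aligned rectangle overlap test (the server-side approximation)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


class PushServer:
    def __init__(self, all_models):
        self.all_models = all_models   # model id -> bounding box
        self.sessions = {}             # client id -> Session

    def on_view_change(self, client_id, scope):
        sess = self.sessions.setdefault(client_id, Session())
        sess.view_scope = scope
        visible = {mid for mid, box in self.all_models.items()
                   if intersects(box, scope)}
        added, removed = visible - sess.mirror, sess.mirror - visible
        sess.mirror = visible
        # Only the diff (plus data for added models) goes over the wire.
        return {"add": sorted(added), "remove": sorted(removed)}
```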
The handlers check the local on-screen list as well (again this makes sense, as only on-screen models can be manipulated), immediately report the changes to the server, and leave the server to confirm and make the verified model update. (A mechanism for proper presentation/notification is needed here if the update is rejected or invalidated.)
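The handler flow above can be sketched as an optimistic local edit with server confirmation and rollback. The validation rule, class names, and notification string are all hypothetical, chosen only to make the confirm/reject round trip concrete.

```python
class VerifyingServer:
    def __init__(self):
        self.models = {"m1": 1}

    def submit(self, model_id, new_value):
        # Hypothetical validation rule; real rules live in the model layer.
        if model_id in self.models and new_value >= 0:
            self.models[model_id] = new_value
            return True   # confirm: the verified update is made
        return False      # reject


class EditingClient:
    def __init__(self, server):
        self.server = server
        self.on_screen = {"m1": 1}   # only on-screen models can be edited
        self.notifications = []

    def edit(self, model_id, new_value):
        if model_id not in self.on_screen:
            return                            # handler references local o/s first
        old = self.on_screen[model_id]
        self.on_screen[model_id] = new_value  # optimistic local apply
        if not self.server.submit(model_id, new_value):
            self.on_screen[model_id] = old    # roll back on rejection
            self.notifications.append(f"edit of {model_id} rejected")
```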
Another question that remains is how to determine whether a model is on screen when the shape and size of its representation are not known before it is rendered.
One simple solution is to let the server decide the on-screen status entirely, using approximation, in a C/S scenario, which is fine for most uses.
However, if a finer approach is really needed, it can be:
- canvas simulation on the server (subject to what the server's system supports)
- a distributed version of the posterior checking mechanism: initially flag every changed model whose size is undetermined as VisualToMeasure and get the clients to update its dimension information. In a distributed scenario the server may receive multiple client responses; in this application, however, the measurements are supposed to be close to each other, so the server just needs to average them out.
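The distributed posterior checking option can be sketched as below. The class and flag names are illustrative, and measurements are reduced to width/height pairs; the only substantive behaviour shown is the averaging of multiple client reports described above.

```python
class MeasurementCollector:
    """Server side of the distributed posterior check: models whose size
    cannot be known before rendering are flagged VisualToMeasure, clients
    report measured dimensions, and near-identical reports are averaged."""
    def __init__(self):
        self.pending = {}   # model id -> list of (width, height) reports

    def flag_visual_to_measure(self, model_id):
        self.pending[model_id] = []

    def report(self, model_id, width, height):
        # Called once per client response for the flagged model.
        self.pending[model_id].append((width, height))

    def resolve(self, model_id):
        # Measurements are expected to be close; just average them out.
        reports = self.pending[model_id]
        n = len(reports)
        return (sum(w for w, _ in reports) / n,
                sum(h for _, h in reports) / n)
```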