buffered value models vs. cloning (was Not updating form...)

  • buffered value models vs. cloning (was Not updating form...)

    I moved this conversation here from this thread:

    http://forum.springframework.org/showthread.php?t=16641

    Where Ollie said:

    Given a basic M-D form with one form model for the master form and one form model which allows for editing a single detail item at a time, and some logic for switching between which detail item is edited (along the lines of the M-D code you posted on the Wiki).

    Now a user comes in and makes some edits to detail item number 1 and then decides to start editing detail item number 2. If we're buffering, what do we do with the buffered edits for item 1 at this point? Well, at the moment, you must commit them back into the item 1 form object or lose them; however, once you've committed them back you lose any benefits you gained by using buffering - dirty checking no longer works, you can no longer revert the form, etc.
    This problem seems, to me anyway, to stem from the current implementation of AbstractForm and how it handles a set of editable objects. It simply "swaps out" the underlying domain object being referenced by the value models for the bound controls on the form. Thus, you have to commit one set of changes before you can work on a different object. In essence you have an N:1 mapping from domain objects to form model (inclusive of the value models in the bound controls). It would seem that this could be resolved by moving to an N:N model where you create one set of value holders for every object in the editable list. To be honest, I have no idea how hard this would be, but it would preserve the value model in use today and it would remove the "must commit" problem.

    That said, I have always used a cloning model in my middle tier implementations for web apps. Without the whole value model framework to build on, it was the simplest model that provided the ability to back out edits. So, I'm in no way opposed to cloning as the means to achieve the buffering.

    In the cloning case, you are moving to a "coarse-grained" buffer for the form model - the clone is the buffer for all the bindings on the form. So, if I may guess at the implementation you have in mind, you would:

    1. Implement the buffering FormModel to create a clone of all domain objects (multiple objects in the case of a M-D form).

    2. Bindings created for the form would then reference the currently selected clone directly, using the same access models as now but without buffering. The "current" clone could be swapped out just like it does now.

    3. The dirty tracking would then be centralized in the form model, triggering whenever a value is posted (via setValue on the value holder) to the cloned object. This would be necessary since, without buffering, the bound value models wouldn't have a meaningful dirty state.

    4. Revert operations would simply operate by creating a new clone and replacing the dirty one in the form model.
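The four steps above could be sketched roughly as follows. This is an illustrative sketch only, not the actual spring-rcp API: the class name is hypothetical and domain objects are modeled as property maps to keep the example self-contained.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical cloning-based form model: the clone is the coarse-grained
// buffer for all bindings, and dirty tracking lives in the form model.
class CloningFormModel {
    private final Map<String, Object> original;   // untouched domain state
    private Map<String, Object> clone;            // step 1: the buffer
    private boolean dirty;

    CloningFormModel(Map<String, Object> domainObject) {
        this.original = domainObject;
        this.clone = new HashMap<>(domainObject); // clone on install
    }

    // Steps 2 and 3: bindings write straight to the clone, and dirty
    // tracking triggers centrally whenever a value is posted.
    void setValue(String property, Object value) {
        clone.put(property, value);
        dirty = true;
    }

    Object getValue(String property) {
        return clone.get(property);
    }

    boolean isDirty() {
        return dirty;
    }

    // Step 4: revert by discarding the dirty clone and re-cloning.
    void revert() {
        clone = new HashMap<>(original);
        dirty = false;
    }

    // Commit copies the clone's state back over the original object.
    void commit() {
        original.clear();
        original.putAll(clone);
        dirty = false;
    }
}
```

Note that the commit here mutates the original in place, which sidesteps the "replacing the updated object" problem discussed below, but only because the example uses maps rather than real cloned domain objects.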

    All of that makes sense (and even seems simpler than the current system), but I see a couple of troublesome areas:

    1. How would the application deal with updates to the "topmost" domain object in this new system? Currently, since all updates happen through direct property modifications, there is no need for the application to do anything unique when the changes are committed. The domain object is updated "in situ." However, in the cloning model, there would need to be some method of "replacing" the updated object (since it is a clone that has been changed and not the original). What are your ideas on how to solve this problem? (or am I missing something obvious?)

    2. There might be a performance implication when dealing with large objects. Currently, I can edit a single property on an object of any size (number of properties) in the same time as I can edit one property on a tiny object. Moving to a cloning model would mean that to edit just a single property, I have to clone the entire object. This may not be a significant issue, but I think it bears considering.

    3. How drastic would the changes to the current code base (and code based on it) be? I know that compatibility isn't a prime factor given the youth of the platform, but it's also worth discussing.

    4. Probably a few others that I haven't thought of in the last hour of pondering this topic. :wink:

    Buffering also makes it impossible to use validation implementations that access the form object directly (implementations of Spring Validator for instance) rather than through a property access strategy.
    Agreed, being able to leverage the code in the spring core would be a nice benefit.

    All in all, this is an intriguing topic and I'm looking forward to the discussion.

    Thanks.
    Larry.
    Last edited by robyn; May 14th, 2006, 08:26 PM.

  • #2
    This problem seems, to me anyway, to stem from the current implementation of AbstractForm and how it handles a set of editable objects.
    First up, I'd like to say that any solution to this M-D buffering problem must be fully implemented in FormModels and not in the Form implementations. A Form should really be a simple façade in front of a FormModel and its GUI. IMHO, AbstractForm needs to be heavily refactored.

    It simply "swaps out" the underlying domain object being referenced by the value models for the bound controls on the form. Thus, you have to commit one set of changes before you can work on a different object. In essence you have an N:1 mapping from domain objects to form model (inclusive of the value models in the bound controls). It would seem that this could be resolved by moving to an N:N model where you create one set of value holders for every object in the editable list. To be honest, I have no idea how hard this would be, but it would preserve the value model in use today and it would remove the "must commit" problem.
    One solution is to have form models implement the memento pattern. So on each change of editable object, rather than committing, you just take a memento from the form model and then on commit you would go back and one by one apply any mementos that have previously been captured and commit to their relevant editable objects. Sound feasible?
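The memento idea might look something like this sketch. All names are hypothetical (this is not spring-rcp code), and the form model's buffered state is a flat property map for brevity:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical form model that captures a memento on each switch of
// editable object instead of committing, then replays all mementos on commit.
class MementoFormModel {
    private Object currentEditable;
    private final Map<String, Object> buffer = new LinkedHashMap<>();
    // one captured memento per editable object, in edit order
    private final Map<Object, Map<String, Object>> mementos = new LinkedHashMap<>();

    // switching editable objects takes a memento rather than committing
    void setEditable(Object editable) {
        if (currentEditable != null) {
            mementos.put(currentEditable, new LinkedHashMap<>(buffer));
        }
        currentEditable = editable;
        buffer.clear();
        Map<String, Object> saved = mementos.get(editable);
        if (saved != null) {
            buffer.putAll(saved);   // restore earlier uncommitted edits
        }
    }

    void setValue(String property, Object value) {
        buffer.put(property, value);
    }

    Object getValue(String property) {
        return buffer.get(property);
    }

    // commit applies every captured memento to its own editable object,
    // one by one; the applier stands in for the real commit logic
    void commit(BiConsumer<Object, Map<String, Object>> applier) {
        if (currentEditable != null) {
            mementos.put(currentEditable, new LinkedHashMap<>(buffer));
        }
        mementos.forEach(applier);
        mementos.clear();
        buffer.clear();
    }
}
```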

    In the cloning case, you are moving to a "coarse-grained" buffer for the form model - the clone is the buffer for all the bindings on the form. So, if I may guess at the implementation you have in mind, you would:

    1. Implement the buffering FormModel to create a clone of all domain objects (multiple objects in the case of a M-D form).

    2. Bindings created for the form would then reference the currently selected clone directly, using the same access models as now but without buffering. The "current" clone could be swapped out just like it does now.

    3. The dirty tracking would then be centralized in the form model, triggering whenever a value is posted (via setValue on the value holder) to the cloned object. This would be necessary since, without buffering, the bound value models wouldn't have a meaningful dirty state.

    4. Revert operations would simply operate by creating a new clone and replacing the dirty one in the form model.
    Exactly what I had in mind. I have a bad habit of expressing my ideas in a bare minimum of words, so thanks for this excellent expansion on my "using cloning as a replacement for buffered form models" comment ;-)

    How would the application deal with updates to the "topmost" domain object in this new system? Currently, since all updates happen through direct property modifications, there is no need for the application to do anything unique when the changes are committed. The domain object is updated "in situ." However, in the cloning model, there would need to be some method of "replacing" the updated object (since it is a clone that has been changed and not the original). What are your ideas on how to solve this problem? (or am I missing something obvious?)
    The simplest solution would be to return the updated clone, however I suspect there are valid cases where this is not appropriate.

    The memento idea above suggests another approach that would let us update the original. On commit, you take a memento of the master form model (I assume this would also take mementos of all child form models) and then switch in the original domain object and apply the state held by that memento back over the original. Make sense?

    There might be a performance implication when dealing with large objects. Currently, I can edit a single property on an object of any size (number of properties) in the same time as I can edit one property on a tiny object. Moving to a cloning model would mean that to edit just a single property, I have to clone the entire object. This may not be a significant issue, but I think it bears considering.
    My gut feeling is that this is a non-issue. In my system I implement the clone operation using serialization, which I know is very fast, and I imagine a custom clone would be faster still. You also have to wonder about the usability of a form that is so complex that the time it takes to clone the backing object becomes an issue.
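For reference, a serialization-based deep clone in plain Java could look like the sketch below. This is a general technique (not spring-rcp API, and the helper class name is made up); it requires every object in the graph to be Serializable, which ties into the complex-object-graph concern raised above.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative deep clone via serialization: write the whole object graph
// to a byte buffer, then read it back as a fully detached copy.
final class SerializationCloner {
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepClone(T source) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(source);           // serialize the graph
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject();        // deserialize a detached copy
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("clone failed", e);
        }
    }
}
```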

    How drastic would the changes to the current code base (and code based on it) be? I know that compatibility isn't a prime factor given the youth of the platform, but it's also worth discussing.
    I don't see this as an issue - we're largely talking about implementation details; the client-facing interfaces should not need to change much at all. I also think getting this right first up is quite important, as it's only going to get harder to fix after we make a release, and as you've already discovered we don't have a decent implementation of M-D forms at the moment.

    Ollie



    • #3
      First up, I'd like to say that any solution to this M-D buffering problem must be fully implemented in FormModels and not in the Form implementations. A Form should really be a simple façade in front of a FormModel and its GUI. IMHO, AbstractForm needs to be heavily refactored.
      I couldn't agree more.

      One solution is to have form models implement the memento pattern. So on each change of editable object, rather than committing, you just take a memento from the form model and then on commit you would go back and one by one apply any mementos that have previously been captured and commit to their relevant editable objects. Sound feasible?
      Actually, you and I were thinking along the same lines. As I thought more about this, I came to a similar plan. So, each time one of the editable objects is selected, the following would take place:

      1. If there is a current object, traverse the set of bound value models and obtain the memento. Store that collected state in a collection in some way keyed to the current object.

      2. Install the new object and let the bound controls update themselves (just as they do now).

      3. If there is a stored state for the new object, traverse each saved memento and hand it back to the bound value models so they can restore their previous state.

      And commit would get a little more complicated:

      1. Iterate over all the stored memento sets, and
      2. install the domain object and saved state to the bound controls
      3. tell the controls to commit

      Sidebar: Actually, that may be a bad solution for commit since it might leave the controls in a different state than at the time the commit command was executed. It's probably more efficient to implement a mechanism (in the BVM) to get the commit value directly from the saved state. Either that or you'd have to save and restore the state of the controls and the current editable object before and after the commit processing.
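The sidebar's "get the commit value directly from the saved state" idea could be sketched like so. This is a hypothetical bound value model, not the real BVM class; it merely shows how a commit could bypass the control's current state:

```java
// Illustrative bound value model that can commit either from the control's
// current value or directly from a previously saved memento value.
class BoundValueModel {
    private Object controlValue;     // what the widget currently shows
    private Object committedValue;   // what was last pushed to the domain object

    void setControlValue(Object v) {
        this.controlValue = v;
    }

    // normal commit path: push the control's value to the domain object
    void commit() {
        committedValue = controlValue;
    }

    // commit a saved memento value directly, leaving the control untouched,
    // so the GUI never has to be swapped to a different editable object
    void commitFromSavedState(Object savedValue) {
        committedValue = savedValue;
    }

    Object getCommittedValue() {
        return committedValue;
    }

    Object getControlValue() {
        return controlValue;
    }
}
```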


      Honestly, as I think through this, state saving feels like the correct solution.

      I'm leaning away from cloning (as compared to the above model) for the following reason:

      1. All applications will have to write additional code to deal with the commit of the top-level editable object. With the current (and proposed above) model, this is not necessary, as the commit updates the object directly.

      2. I still see a problem with cloning complex objects, especially those that exist in connected graphs. To your comment:

      you also have to wonder about the usability of a form which is so complex that the time it takes to clone the backing object becomes an issue.
      I wasn't talking about a complex form, I was talking about a complex object - one that is connected to a lot of other objects that shouldn't be involved in the edit operation. Admittedly, application developers could implement custom cloning code for these cases, but that seems like extra work that can be avoided if we use a state saving pattern instead of cloning.

      What do you think?

      Larry.



      • #4
        So, each time one of the editable objects is selected, the following would take place:

        1. If there is a current object, traverse the set of bound value models and obtain the memento. Store that collected state in a collection in some way keyed to the current object.

        2. Install the new object and let the bound controls update themselves (just as they do now).

        3. If there is a stored state for the new object, traverse each saved memento and hand it back to the bound value models so they can restore their previous state.

        And commit would get a little more complicated:

        1. Iterate over all the stored memento sets, and
        2. install the domain object and saved state to the bound controls
        3. tell the controls to commit
        Yep. Almost exactly what I had in mind.

        Sidebar: Actually, that may be a bad solution for commit since it might leave the controls in a different state than at the time the commit command was executed. It's probably more efficient to implement a mechanism (in the BVM) to get the commit value directly from the saved state. Either that or you'd have to save and restore the state of the controls and the current editable object before and after the commit processing.
        Actually, when I was making the changes I committed last night, I included some preliminary support for this case. I have introduced a new VM into all form models (FormModelMediatingValueModel) which has the responsibility for mediating between the bound controls and the actual property value models. I intend to add support to this value model for "disconnecting" controls from their property, which would provide a nice solution to the problem above - just disconnect all the controls before you go through and apply all of the mementos, then reconnect after commit. This would also speed up commit considerably, as there would be no GUI updates to deal with.
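The disconnecting behaviour described above might be sketched as follows. This is an illustrative stand-in, not the actual FormModelMediatingValueModel: while disconnected, value changes are still recorded but no control notifications fire, so a bulk memento replay avoids a storm of GUI updates.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical mediating value model with connect/disconnect support.
class MediatingValueModel {
    private Object value;
    private boolean connected = true;
    private final List<Consumer<Object>> controlListeners = new ArrayList<>();

    void addControlListener(Consumer<Object> listener) {
        controlListeners.add(listener);
    }

    // disconnect before applying mementos, reconnect after commit
    void setConnected(boolean connected) {
        this.connected = connected;
    }

    void setValue(Object v) {
        this.value = v;
        if (connected) {
            // only notify bound controls while connected
            controlListeners.forEach(l -> l.accept(v));
        }
    }

    Object getValue() {
        return value;
    }
}
```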

        Ollie



        • #5
          Originally posted by lstreepy
          1. If there is a current object, traverse the set of bound value models and obtain the memento. Store that collected state in a collection in some way keyed to the current object.

          2. Install the new object and let the bound controls update themselves (just as they do now).

          3. If there is a stored state for the new object, traverse each saved memento and hand it back to the bound value models so they can restore their previous state.

          And commit would get a little more complicated:

          1. Iterate over all the stored memento sets, and
          2. install the domain object and saved state to the bound controls
          3. tell the controls to commit

          Sidebar: Actually, that may be a bad solution for commit since it might leave the controls in a different state than at the time the commit command was executed. It's probably more efficient to implement a mechanism (in the BVM) to get the commit value directly from the saved state. Either that or you'd have to save and restore the state of the controls and the current editable object before and after the commit processing.
          All in all, buffering causes no end of headaches when considering how to implement FormModels. Consider what buffering vs. non-buffering achieves and implies for the underlying domain model:

          Buffering:
          1) The underlying domain object state represents the 'persistent' data (otherwise why buffer?)
          2) Committing to the underlying domain object immediately commits the data to 'persistent' storage (again, otherwise, why buffer?)
          3) No domain logic can be performed on the domain objects when dirty, so domain objects become mere 'value' objects whilst dirty (so why not use maps or RecordSets, why use rich domain objects at all?)

          Non-buffering:
          1) Domain objects are non-transactional data corresponding to 'persistent' records,
          2) Domain objects are their own buffer, because commit causes a secondary action (e.g. a database write) and revert causes a complementary action (e.g. a database reload of the original state).
          3) Domain logic can be performed on these objects, because they encapsulate their own dirty state.

          I guess what I'm trying to say here is that I don't really use buffering because I already have to handle database reads/commits secondarily anyway, and I can't see the usefulness of buffering already-stale data. And I find buffering to be incredibly difficult for an automated library like spring-rcp to handle in any but the most rudimentary way. Problems immediately arise in the areas of notification (of changes) and consistent reading back of buffered changes. These problems become readily apparent in a non-simple domain object structure (like a 2+ level object graph).

          I had a stab at implementing dirty tracking for more complex parent-child relationships a while back using OGNL. What I came up with had some of these features:

          1) Buffered state was not handled by individual ValueModels or collection ValueModels.
          2) A single BufferManager was instantiated by a top-level FormModel, and shared by all child FormModels.
          3) The PropertyAccessStrategies for all these FormModels (parent and children) were aware of the BufferManager, and all read/writes were channelled through it (including intermediate read/writes such as complex path properties).
          4) All mutating operations (ie. property writes, collection adds and removes) were added to the BufferedManager's internal cache, and stored in order,
          5) All accessing operations (ie. property reads, collection gets and iterations) first looked up the BufferManager's internal cache to see if a matching record could be found. If a match was found it was returned, otherwise the BufferManager passed the request through to the underlying bean-accessor to perform the access.
          6) Any read that returned a collection, would instead return a wrapper collection, that also channelled all read/writes through the BufferManager
          7) Upon revert, the BufferManager merely dumped its cache,
          8) Upon commit, the BufferManager passed each operation in its cache to the underlying bean-mutator to be performed in original order of operation.
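A stripped-down sketch of such a BufferManager is shown below. All names are illustrative; collection adds/removes, wrapper collections, and the OGNL path handling are omitted, leaving only the ordered write log and read-through behaviour of points 4, 5, 7, and 8:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Function;

// Hypothetical shared buffer: mutations are logged in order, reads consult
// the buffer first and fall through to the real bean accessor otherwise.
class BufferManager {
    // point 4: ordered log of buffered writes (property path -> value)
    private final List<Map.Entry<String, Object>> log = new ArrayList<>();
    // quick lookup of the latest buffered value per path
    private final Map<String, Object> latest = new LinkedHashMap<>();

    void write(String path, Object value) {
        log.add(Map.entry(path, value));
        latest.put(path, value);
    }

    // point 5: buffered value wins; otherwise pass through to the bean
    Object read(String path, Function<String, Object> beanReader) {
        return latest.containsKey(path) ? latest.get(path) : beanReader.apply(path);
    }

    // point 7: revert merely dumps the cache
    void revert() {
        log.clear();
        latest.clear();
    }

    // point 8: replay each operation against the real bean in original order
    void commit(BiConsumer<String, Object> beanWriter) {
        for (Map.Entry<String, Object> op : log) {
            beanWriter.accept(op.getKey(), op.getValue());
        }
        revert();
    }
}
```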

          Whew, sorry for the long-winded explanation. In the end, while the implementation above is designed to handle buffered modifications to complex object graphs, it still suffers from the problem that, since no domain logic gets triggered during buffered writes, it still represents an anaemic mimicry of the real thing. But I guess if buffering of deep object graphs is a desirable feature for some, then at least it would be able to handle it as correctly as possible. It's just that I can't see its usefulness.

          regards,
          Scott



          • #6
            All in all, buffering causes no end of headaches when considering how to implement FormModels. Consider what buffering vs. non-buffering achieves and implies for the underlying domain model:

            Buffering:
            1) The underlying domain object state represents the 'persistent' data (otherwise why buffer?)
            2) Committing to the underlying domain object immediately commits the data to 'persistent' storage (again, otherwise, why buffer?)
            3) No domain logic can be performed on the domain objects when dirty, so domain objects become mere 'value' objects whilst dirty (so why not use maps or RecordSets, why use rich domain objects at all?)

            Non-buffering:
            1) Domain objects are non-transactional data corresponding to 'persistent' records,
            2) Domain objects are their own buffer, because commit causes a secondary action (e.g. a database write) and revert causes a complementary action (e.g. a database reload of the original state).
            3) Domain logic can be performed on these objects, because they encapsulate their own dirty state.

            I guess what I'm trying to say here is that I don't really use buffering because I already have to handle database reads/commits secondarily anyway, and I can't see the usefulness of buffering already-stale data. And I find buffering to be incredibly difficult for an automated library like spring-rcp to handle in any but the most rudimentary way. Problems immediately arise in the areas of notification (of changes) and consistent reading back of buffered changes. These problems become readily apparent in a non-simple domain object structure (like a 2+ level object graph).
            I totally agree with everything you've stated, Scott - as someone who's had to actually try and make our buffering work, I can vouch for the headaches and a fair amount of gnashing of teeth to boot. One of the big problems with buffered value models is that it can get really hard to conceptually work out where the hell a value is actually coming from - does it come from the buffer, what happens to change notifications, how do changes to a parent object propagate to child objects, should they be buffered as well, etc...

            Have a look at AbstractFormModelCornerCaseTests#testBufferingMustCommitParentPropertiesBeforeChildProperties (which fails, by the way) for an example of the unexpected interactions that need to be taken into account when buffering.

            Why are we striving to implement this extremely complex buffering support when there's an extremely simple solution (cloning) we're not taking advantage of? It's also interesting that many of us have independently come up with home-grown cloning-based solutions (Larry's DeepCopyBCVM, I clone objects before setting them into the form, Scott is treating all client-side domain objects as clones of their server-side counterparts).

            Ollie



            • #7
              Just found this older thread, and wondered if there's anything new to say on the subject?

              I've been struggling with the "no domain logic gets triggered during buffered writes", because somehow I thought I really should use buffering. Cloning seems somehow so brute force (though I need it for other reasons anyway).

              I need dirty tracking, but looking at the RCP source, it seems like I get that from FormModelMediatingValueModel even in an unbuffered model, right?

              --kerry



              • #8
                The debate continued on the developer forum as well and I was more and more convinced that cloning has as many problems (especially when you're not dealing with a collection) as the current solution. Support for (better) handling collections of objects is still an open area of research.

                If you're working on a single domain object, the current model works very well. If you are having problems with business logic not firing, then I'm guessing you haven't wired in your listeners properly.

                If you'd like to post your problem to get some feedback, please start another thread with the details.

                Thanks,
                Larry.



                • #9
                  I would also prefer cloning, as it's more readable code than this:
                  Code:
        // derive the read-only state of "selectable" from the current value models
        boolean readOnly = !((Boolean) getValueModel("active").getValue());
        boolean selectable = ((Boolean) getValueModel("selectable").getValue());
        getFormModel().getPropertyMetadata("selectable").setReadOnly(readOnly);
                  Also, I think the form model should be created based on the class, not an instance. That way a form's formObject could be null.



                  • #10
                    Geoffrey,

                    I think both cloning and buffering have problems. Have you read through the thread on the developer mailing list as well? Many of the problems were discussed there (and no good solutions have been proposed).

                    As for FormModel being based on a class, the current approach, which allows any object that provides the properties referenced by a form, is very valuable. That polymorphism is critical in being able to reuse forms for multiple object types.

                    There is a JIRA issue posted to have the form model record (or maintain) the class of object to create in the case that the form object is set to null (instead of using the class of the last object to have been in the form model). I think this approach is better than basing the model on a single class.

                    Larry.

