[CLEAN DESIGN] Quest for best Architecture for a Rich-Client / Server application

  • #46
    Originally posted by adepue
    I can see that our time zone difference may drag this conversation out.
    Yeah ;P

    Originally posted by adepue
    It should help now that you can see how all the pieces fit together. If you want to employ your own authentication token mechanism or your own HTTP request authentication header encryption (or whatever), you would create a custom HttpInvokerRequestExecutor. You would most likely have it respond to AuthenticationAware so that it receives the user's Authentication when they log in to the rich client. You would also create a Servlet filter on the servlet side to pick up that header and handle it appropriately. Of course, Acegi already provides filters for the most popular types (BASIC, DIGEST, and I believe security certificates), so you would only be doing this if implementing something that Acegi doesn't already handle. Another approach would be to use one of the 3rd party single sign-on services that Acegi supports. These typically work by authenticating once and then passing around a temporary authentication token for subsequent invocations. Acegi provides server side support for most of these, so in most cases you would only have to develop the HttpInvokerRequestExecutor (which would make a great contribution back to Acegi).
    Thanks for the explanation. I will check the DIGEST authentication mechanism to see if it suits our security needs.

    Otherwise we will code a proper HASH+ONE_TIME_SEED authentication executor and submit it to the Spring community, if useful ^_^

    But honestly, since security is quite a big issue for us, I think I'd prefer to go with the most secure method: writing our own authenticator integrated with Acegi, one that makes use of random seeds and proper authentication patterns, instead of a simpler, less safe solution such as DIGEST.
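
    For reference, here is a minimal sketch of what such a custom executor could look like. It extends Spring's SimpleHttpInvokerRequestExecutor; the TokenStore type and the X-Auth-Token header name are made up for illustration, and the matching servlet filter on the server side is not shown.

    import java.io.IOException;
    import java.net.HttpURLConnection;

    import org.springframework.remoting.httpinvoker.SimpleHttpInvokerRequestExecutor;

    // Hypothetical holder for the one-time token obtained at login; in a real
    // client this would be populated when the user authenticates.
    interface TokenStore {
        String currentToken();
    }

    public class TokenHttpInvokerRequestExecutor extends SimpleHttpInvokerRequestExecutor {

        private final TokenStore tokenStore;

        public TokenHttpInvokerRequestExecutor(TokenStore tokenStore) {
            this.tokenStore = tokenStore;
        }

        protected void prepareConnection(HttpURLConnection con, int contentLength)
                throws IOException {
            super.prepareConnection(con, contentLength);
            // Every remote invocation carries the token; a servlet filter on
            // the server validates it before the invoker service executes.
            con.setRequestProperty("X-Auth-Token", tokenStore.currentToken());
        }
    }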

    Originally posted by adepue
    In our case, our service interfaces have become so close to the controller, that we did away with separate controller interfaces. There were a few cases where the controller needed to fire an event, so we created the ability to define add*Listener and remove*Listener methods at the service interface level (using ActiveMQ to deliver the events). We did this because it got to the point where it wasn't just the UI that wanted to listen in on certain events, but other services as well. For example, one service wanted to know whenever another service performed a certain action, so now both the UI and any other component can add themselves as a listener to a service.
    We simply have a client side proxy using the service interface itself that handles any interfacing between the remote service (and soon, offline cache).
    This is our approach too, but since we don't want this caching/client DB to be completely transparent (we want the user to understand when he's online or offline, when a synchronization is taking place, and which operations are permitted in each scenario), we preferred the controller to be more than just a smart caching proxy of the server: we added a Controller interface, so that the GUI and view can query the controller for its state, ask for a synchronization, and the like.
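
    To give an idea, here is a rough sketch of such a Controller contract (the names are illustrative, not our actual interface):

    public interface Controller {

        enum ConnectionState { ONLINE, OFFLINE, SYNCHRONIZING }

        // The GUI queries the state explicitly instead of talking to a
        // transparent caching proxy that hides it.
        ConnectionState getConnectionState();

        // Lets the view enable/disable actions that are not permitted offline.
        boolean isOperationPermitted(String operationName);

        // Explicitly triggers a synchronization with the server.
        void synchronize();

        void addStateListener(StateListener listener);
        void removeStateListener(StateListener listener);

        interface StateListener {
            void stateChanged(ConnectionState newState);
        }
    }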

    Originally posted by adepue
    Deleted records. I believe you mentioned this in the thread somewhere, but since the offline cache is populated by data as it is accessed from the service, there would be no automatic way for the service to indicate deleted data to the clients. So, now the service has to start keeping track of records in some way (keep track of deletions) so that it can somehow notify clients to delete those records from their offline cache.
    I was planning to keep an "objects modified" log table where, along with the timestamp, I record each operation performed on any of the objects, for example:

    TIMESTAMP - OBJECT_ID - OBJECT_TYPE - ACTION (Modified, Deleted, Inserted)

    This log would allow me to know which objects have been deleted and purge them from the client/server database in a simple and efficient way (select * from itemlog where ACTION = 'DELETED' AND timestamp > lastSyncStamp).
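
    A minimal sketch of how a client could apply that log during synchronization; plain JDBC is used purely for illustration (in the real app the log entries would travel over the remote service), and the mapping from OBJECT_TYPE to a local table name is assumed:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class DeletionSynchronizer {

        // Reads the itemlog and purges objects deleted since the last sync
        // from the local client database.
        public void purgeDeleted(Connection logDb, Connection clientDb,
                                 Timestamp lastSyncStamp) throws SQLException {
            PreparedStatement ps = logDb.prepareStatement(
                    "SELECT OBJECT_ID, OBJECT_TYPE FROM itemlog "
                    + "WHERE ACTION = 'DELETED' AND TIMESTAMP > ?");
            try {
                ps.setTimestamp(1, lastSyncStamp);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    deleteLocally(clientDb, rs.getLong("OBJECT_ID"),
                                  rs.getString("OBJECT_TYPE"));
                }
            } finally {
                ps.close();
            }
        }

        private void deleteLocally(Connection clientDb, long objectId,
                                   String objectType) throws SQLException {
            // Assumes each OBJECT_TYPE value maps directly to a local table name.
            PreparedStatement ps = clientDb.prepareStatement(
                    "DELETE FROM " + objectType + " WHERE id = ?");
            try {
                ps.setLong(1, objectId);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }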

    Originally posted by adepue
    Offline access of data not previously accessed by the user (or by the service interface). Our app will contain lots of data, and we must be able to support the case where someone is offline and they want to pull up some information never before accessed via the service interface. This suggests to me that we will not just be caching previously accessed data in the offline cache; instead, the offline cache will contain all data in the system accessible by the user. I've had the idea of making the user decide in advance what data they want accessible when offline, but this violates our ease-of-use ideals. My current thinking is that the application would do background synchronization of data using any available idle bandwidth. Of course, this means we would always have to keep all data synchronized (more bandwidth!), but based on projected information flow, the data changes will be quite manageable (bandwidth wise) once the data is fully synchronized (assuming we can transmit single object deltas instead of entire object graphs). There would have to be some mechanism in place for a client to load an entire data set (based on the user's secure "view" of the data) for a particular service. This could be as simple as a "load" method on the service interface. Or, it could happen via ActiveMQ, in which case the service would post messages to a queue containing load data. I've had some other questions as well, but they've slipped my mind for the moment... I'll bring them up later if I remember.
    In my case this is quite easy to achieve: I have a very simple way to get the entire set of objects the client needs for most offline operations (his collections and related items), and will always load/cache those.

    Of course, a client won't be allowed to search for new items he doesn't yet have in any collection (and hence in the db), but this is perfectly reasonable for our use cases and does not pose a problem.

    We are fine with the fact that offline work will deal ONLY with cached data, and we will always be sure that the client has in the db at least the items he needs for most operations, by mass-loading the entire set of his collections and related items at the first synchronization, as sketched below.
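
    A sketch of that first-synchronization mass load (all the service and cache types here are hypothetical stand-ins for our real interfaces):

    import java.util.List;

    // Hypothetical contracts standing in for the real remote service and client DB.
    interface CollectionService {
        List<ItemCollection> loadCollections(String userId);
        List<Item> loadItems(String collectionId);
    }

    interface OfflineCache {
        void store(Object entity);
        void markSynchronized(long timestampMillis);
    }

    class ItemCollection {
        private final String id;
        ItemCollection(String id) { this.id = id; }
        String getId() { return id; }
    }

    class Item { }

    public class FirstSyncLoader {

        private final CollectionService service; // remote proxy (e.g. HttpInvoker)
        private final OfflineCache cache;        // local client database

        public FirstSyncLoader(CollectionService service, OfflineCache cache) {
            this.service = service;
            this.cache = cache;
        }

        // One bulk load at first synchronization guarantees the offline cache
        // contains every collection and related item the user needs.
        public void initialLoad(String userId) {
            for (ItemCollection collection : service.loadCollections(userId)) {
                cache.store(collection);
                for (Item item : service.loadItems(collection.getId())) {
                    cache.store(item);
                }
            }
            cache.markSynchronized(System.currentTimeMillis());
        }
    }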

    • #47
      Originally posted by Andreas Senft
      Maybe I can add something to this point: if you have defined domain-specific ids for all "top-level" types (in contrast to dependent types), you could use these ids to track changes of all types.

      On any modification (insert, update, delete) you could store the domain-specific id, the type of modification and the timestamp of modification.
      This information can then be used for synchronization. As the domain-specific ids should be the same on the server and the client (which would not be the case for the technical database ids), you could even synchronize deletions.

      Regards,
      Andreas
      Exactly what I meant, wow Andreas, we always come to the same solution :P Let's hope it's right heehehe
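
      A tiny sketch of the kind of domain-specific id Andreas describes: a business key assigned once at creation (a UUID here, purely as an example), identical on server and client, unlike the technical id each database generates.

      import java.util.UUID;

      public class TopLevelEntity {

          // Technical id: assigned independently by each database, so it cannot
          // be used to correlate records between server and client.
          private Long databaseId;

          // Domain-specific id: generated once at creation time and shared by
          // both sides, so modifications and deletions can be tracked with it.
          private final String domainId = UUID.randomUUID().toString();

          public String getDomainId() { return domainId; }

          public Long getDatabaseId() { return databaseId; }

          public void setDatabaseId(Long databaseId) { this.databaseId = databaseId; }
      }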
