Distributed transactions involving web services, JMS, JDBC, ...

  • Distributed transactions involving web services, JMS, JDBC, ...

    I am writing a hub that will be used for legacy system integration. The hub has two functions:
    1. Synchronise data between several different sources
    2. Expose business logic functionality to several different sources
    I intend to use Spring Integration to bring these legacy systems together.

    The legacy systems will be integrated across several different mechanisms, using channel adapters and other SI tools to bridge the gap. In one case, we will synchronise data across several systems:
    1. Connect to a legacy database using a JDBC channel adapter
    2. Poll for changes to an update queue table
    3. Read any new rows
    4. Transform that data
    5. Push out updates to several legacy and non-legacy systems using appropriate channel adapters (WS, JDBC, filesystem, ...)
    These updates must conform to the ACID rules. If an update pushed out to one of the legacy or non-legacy systems fails, then all updates must be rolled back.
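    Roughly, I imagine the first leg looking something like the sketch below (minimal, assuming the Spring Integration Java DSL; the update_queue table, its columns, the poll interval, the outboundFanOutChannel name and the toCanonicalUpdates placeholder are all just placeholders, not real project names):

```java
// Sketch only: table, columns, channel name and transformer are illustrative.
import java.util.List;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.jdbc.dsl.Jdbc;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableIntegration
public class UpdateQueuePollingFlow {

    @Bean
    public IntegrationFlow updateQueueFlow(DataSource legacyDataSource,
                                           PlatformTransactionManager txManager) {
        return IntegrationFlows
                // Steps 1-3: poll the update queue table and read any new rows
                .from(Jdbc.inboundAdapter(legacyDataSource,
                                "SELECT id, entity, payload FROM update_queue WHERE processed = 0")
                        // mark the rows just read so the next poll skips them
                        .updateSql("UPDATE update_queue SET processed = 1 WHERE id IN (:id)"),
                      e -> e.poller(Pollers.fixedDelay(5000)
                              // the SELECT, the UPDATE and everything downstream on this
                              // thread run inside one local JDBC transaction
                              .transactional(txManager)))
                // Step 4: transform the raw rows into the hub's canonical format
                .transform(List.class, UpdateQueuePollingFlow::toCanonicalUpdates)
                // Step 5: hand off to the outbound adapters (WS, JDBC, file, ...)
                .channel("outboundFanOutChannel")
                .get();
    }

    // placeholder for the real transformation logic
    private static Object toCanonicalUpdates(List<?> rows) {
        return rows;
    }
}
```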

    We can make self-contained changes to the legacy systems, within the limits of their technologies.

    I am really stuck on how to enlist those different transports into a single transaction. If the multiple resources were all JDBC data sources then this would be easy, but we've got a real mix going on.

    Example: a user on a legacy system is updated. That update is recorded in a new database table. SI's JDBC channel adapter is configured to regularly poll for changes to the table and select any new records. The data is processed and new messages are pushed out to channel adapters for a web service and a database. If, say, the system exposing the web service encounters a problem executing the update, then the updates to both systems must be rolled back.
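    For the fan-out itself I was picturing something like the sketch below (again illustrative; the two MessageHandler beans stand in for a WS outbound gateway and a JDBC outbound adapter). Because the subscribers run on the poller's thread, an exception from either one propagates back and rolls back the local JDBC transaction from the previous sketch - but it obviously can't undo a web service call that has already succeeded on the remote side, which is exactly where I'm stuck.

```java
// Sketch of the fan-out; handler bean names are illustrative.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageHandler;

@Configuration
public class OutboundFanOutFlow {

    @Bean
    public IntegrationFlow fanOutFlow(MessageHandler legacyWebServiceHandler,
                                      MessageHandler targetDatabaseHandler) {
        return IntegrationFlows.from("outboundFanOutChannel")
                .publishSubscribeChannel(s -> s
                        // no executor: both subscribers run on the poller's thread, so an
                        // exception from either handler propagates back to the poller and
                        // rolls back the local JDBC transaction from the previous sketch
                        .subscribe(f -> f.handle(legacyWebServiceHandler))
                        .subscribe(f -> f.handle(targetDatabaseHandler)))
                .get();
    }
}
```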

  • #2
    Well, I doubt there is a Transaction Manager out there that can do what you are asking. By mentioning adapters (e.g., WS, HTTP, etc.) you are essentially saying that you will be talking to remote, non-transactional systems or communicating via a non-transactional protocol (e.g., HTTP). You may want to look at the Compensating Transaction paradigm: http://en.wikipedia.org/wiki/Compensating_transaction
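    The idea in a nutshell (plain Java, every name here is illustrative): each forward action is paired with an explicit compensating action, and when something fails you run the compensations for the steps that already completed, in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public final class CompensatingRunner {

    /** One unit of work plus the action that undoes it. */
    public interface Step {
        void execute();     // e.g. call the web service, insert the row, write the file
        void compensate();  // e.g. call a "cancel" operation, delete the row, remove the file
    }

    /** Runs steps in order; on any failure, compensates the completed ones in reverse. */
    public static void run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (RuntimeException failure) {
            while (!completed.isEmpty()) {
                try {
                    completed.pop().compensate();
                } catch (RuntimeException compensationFailure) {
                    // a failed compensation needs a retry queue or manual intervention;
                    // unlike a real rollback, there is no guarantee it succeeds
                    compensationFailure.printStackTrace();
                }
            }
            throw failure;
        }
    }
}
```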



    • #3
      What Oleg has mentioned is correct. We struggled with the same problem as you (the only difference being that we didn't have to deal with legacy systems). We wrote a central Orchestrator layer, which would interact with several components for a request (a typical high-value payment flow end to end, where you need to reduce the user's daily limits - system 1, apply approvals - system 2, submit the payment to a downstream system for further processing - system 3, do audits - system 4, etc.). Anything failing (except audits) had to be rolled back, so essentially every system provided an interface to roll back the previous action. We used SI's Splitter - Aggregator pattern. Every system is supposed to reply back with some unique identifier and a success / failure indicator.
      In the Aggregator, a failure from any system would result in another flow being started, to call the individual systems again to roll back based on the unique ids. If all is successful, the same flow sends a confirmation to all systems to commit based on the unique ids.
      This might not be what you are looking for (it's hard dealing with legacy systems), but this is the closest we could get. I would be interested to hear any alternatives.
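      Stripped of the SI plumbing, the decision our Aggregator makes boils down to something like the sketch below (plain Java; the type and method names are illustrative, not our actual code).

```java
import java.util.List;

public final class OrchestrationAggregator {

    /** Reply every downstream system sends back: its unique id plus a success/failure flag. */
    public record SystemReply(String system, String uniqueId, boolean success) {}

    /** The two follow-up flows: confirm everything, or ask every system to undo its action. */
    public interface OutcomeGateway {
        void commit(List<SystemReply> replies);
        void rollback(List<SystemReply> replies);
    }

    private final OutcomeGateway outcomeGateway;

    public OrchestrationAggregator(OutcomeGateway outcomeGateway) {
        this.outcomeGateway = outcomeGateway;
    }

    /** Invoked once one reply per system has been collected. */
    public void release(List<SystemReply> replies) {
        boolean allSucceeded = replies.stream().allMatch(SystemReply::success);
        if (allSucceeded) {
            // same flow, positive outcome: tell each system to commit against its unique id
            outcomeGateway.commit(replies);
        } else {
            // start the rollback flow: each system undoes its action based on its unique id
            outcomeGateway.rollback(replies);
        }
    }
}
```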



      • #4
        Thank you for the replies. I can see that the approach we're trying to take won't work with inherently non-transactional systems.

        We do have some scope to add additional features to the legacy systems that we own. We'll add a pre-validation stage that emulates the first phase of a two-phase commit. We'll still have to come up with something to handle a transaction that fails after a successful pre-validation stage.
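        Roughly, the shape I have in mind is something like the sketch below (illustrative only; the Participant contract and class names are placeholders). The gap is the second loop: a failure there happens after every pre-validation has succeeded, and that is the case we still need to design for.

```java
import java.util.ArrayList;
import java.util.List;

public final class PreValidationCoordinator {

    /** Contract each participating (owned) system would expose. */
    public interface Participant {
        boolean prepare();  // phase-one emulation: validate and reserve, but change nothing visible
        void commit();      // apply the update for real
        void abort();       // release whatever prepare() reserved
    }

    /** Returns true if every participant prepared and committed. */
    public boolean execute(List<Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (!p.prepare()) {
                prepared.forEach(Participant::abort);   // pre-validation failed: release reservations
                return false;
            }
            prepared.add(p);
        }
        // All pre-validations passed; commit everywhere. A failure in this loop is the
        // awkward case mentioned above: some systems are committed and others are not,
        // so it has to be handled by compensation, retries, or manual repair.
        for (Participant p : prepared) {
            p.commit();
        }
        return true;
    }
}
```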

        I'll update the thread with whatever idea we implement.



        • #5
          Hi, I am just curious how you defined your transaction boundaries. Can you please elaborate a little?
          Also, how did you ensure that you always read committed data?

          Thanks!

          Originally posted by GPS View Post
          What Oleg has mentioned is correct. We struggled with the same problem as you ... I would be interested to hear any alternatives.
