  • spring poller against jdbc backed queue channel - Clustered mode

    Will a message get processed twice (meaning in more than one cluster node) in the following scenario?
    1. A JDBC-message-store-backed queue channel containing pending messages
    2. A poller configured on the message consumer side to process messages

    Does the JDBC message store lock messages once they are picked for processing, to avoid a message being re-processed in another cluster node?
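
    The setup described above would look roughly like this in Spring Integration XML (bean names, handler, and timings are illustrative; the dataSource and transactionManager beans are assumed to be defined elsewhere):

    ```xml
    <!-- Queue channel whose pending messages live in the database -->
    <int:channel id="pendingMessages">
        <int:queue message-store="messageStore"/>
    </int:channel>

    <!-- JDBC-backed message store -->
    <bean id="messageStore"
          class="org.springframework.integration.jdbc.JdbcMessageStore">
        <constructor-arg ref="dataSource"/>
    </bean>

    <!-- Consumer side: a transactional poller drains the channel -->
    <int:service-activator input-channel="pendingMessages" ref="messageHandler">
        <int:poller fixed-rate="5000" max-messages-per-poll="10">
            <int:transactional transaction-manager="transactionManager"/>
        </int:poller>
    </int:service-activator>
    ```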

  • #2
    In order to accomplish something like this, Spring Integration would need to be aware of the cluster and its clustering model. Every vendor has its own, so it would be quite a task for us to maintain, especially as the world is moving away from traditional clustering toward cloud-based, stateless infrastructures (where each node is independent and unaware of the other nodes).
    Off the top of my head, the simplest and lightest way to solve your problem (especially if message order does not matter) is to let each process pick whatever message and announce that it is processing it via a record in some table, which would be locked for the duration of the announcement. This means everyone else, including another process that may have picked the same message, has to wait before making its own announcement. When the lock is finally released and the second process gets the chance to make (insert) the same announcement, it quickly realizes that someone else is already processing the message, so the message can be dropped. You could actually implement this as a custom filter.
    Also, to speed things up you can have more than one such table. For example, if the announcements are keyed on some integer-based ID of the record, you could have two tables (one for even and one for odd IDs), thus ensuring that more than one announcement can happen concurrently, etc.
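
    A minimal sketch of the "announcement" filter idea. In a real cluster the claim store would be a database table with a unique constraint on the message ID, so the INSERT fails for everyone but the first claimant; an in-memory set stands in here so the first-wins semantics are easy to see. All names (ClaimCheck, claim) are illustrative, not a Spring API.

    ```java
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // First-wins claim check: the node that claims a message ID first
    // processes it; every other node drops the message.
    public class ClaimCheck {

        // Stand-in for a DB table with a unique constraint on MESSAGE_ID.
        private final Set<String> claimed = ConcurrentHashMap.newKeySet();

        // Returns true only for the first caller to claim a given message id;
        // every later caller gets false and should drop the message.
        public boolean claim(String messageId) {
            return claimed.add(messageId);
        }
    }
    ```

    In Spring Integration this check would sit in a custom filter in front of the consumer, so duplicates are dropped before any real processing starts.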

    • #3
      Given that Spring does not support a clustered environment, what is your opinion on setting the transaction isolation level for poller transactions to SERIALIZABLE?

      • #4
        Once again, Spring or any other framework has no awareness of a clustered environment, since those are vendor/platform specific and frameworks like Spring are platform neutral.

        Yes, you can definitely configure the poller with a transaction configuration where you can set an isolation level such as SERIALIZABLE. Just keep in mind how long your transaction takes to complete and whether you will actually gain anything (in performance) with such a setup.
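
        A minimal sketch of such a poller configuration (channel, handler, and transaction-manager names are illustrative):

        ```xml
        <int:service-activator input-channel="pendingMessages" ref="messageHandler">
            <int:poller fixed-rate="5000">
                <!-- Run each poll in a SERIALIZABLE transaction -->
                <int:transactional transaction-manager="transactionManager"
                                   isolation="SERIALIZABLE"/>
            </int:poller>
        </int:service-activator>
        ```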

        • #5
          I am not able to appreciate the philosophy behind the 'no awareness of a clustered environment' aspect of Spring. I believe the framework should definitely keep messages from being processed by more than one consumer where that is not intended. Your thoughts on this, please, if you have the patience to explain it to me.

          Nevertheless, FYI, in our design the consumer only kick-starts an asynchronous process on a message, meaning the message processing itself is lightweight. The asynchronous process is another Spring Batch job. So we should be fine with this.

          • #6
            oleg, I am coming back to you for help.
            SERIALIZABLE transactions didn't help. The framework only deletes the message record, never updates it, and deletes don't fail even if the record is missing.

            I tried a couple of options, both variations on your idea, but nothing seems elegant. Could you please advise?

            Experiment 1 - Rely on read uncommitted

            1. Read the db record
            2. Lock the record by id in another table, as part of the global JTA transaction
            3. Process the record
            A second transaction that tries to lock the same record will fail and will drop the record. But for this to work the RDBMS must allow dirty reads, and unfortunately Oracle does not support the read uncommitted isolation level.

            Experiment 2 - Lock record in local transaction

            1. Read the db record
            2. Lock the record by id in another table, as a separate local transaction
            3. Process the record and delete it when the transaction commits successfully
            A second transaction that tries to lock the same record will fail and will drop the record. This approach is based on committed data and should work fine. Here is the problem: since the lock transaction and the global parent transaction are different, if the processing fails and rolls back the main transaction, I should compensate by rolling back the lock transaction, which I do not know how to do. Need help here.

            If I am not able to roll back the record-locking transaction, I would have to write some dirty logic around the record-locking code, which I would prefer to avoid.

            Does Oracle support, in any way, making uncommitted updates visible to all transactions?

            Thanks in advance
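
            The compensation needed in Experiment 2 can be sketched generically: because the lock is taken in its own local transaction, a failure of the main (global) transaction must explicitly undo it. In Spring, the idiomatic hook for this is registering a callback via TransactionSynchronizationManager so the release runs after a rollback; the plain-Java helper below is an illustrative stand-in, and all names in it are hypothetical.

            ```java
            // Take the lock (local tx), run the work (global tx), and
            // compensate by releasing the lock if the work throws.
            public class Compensator {

                public static void runWithCompensation(Runnable acquireLock,
                                                       Runnable work,
                                                       Runnable releaseLock) {
                    acquireLock.run();     // e.g. INSERT into the lock table (local tx)
                    try {
                        work.run();        // the message processing (global tx)
                    } catch (RuntimeException e) {
                        releaseLock.run(); // compensate: DELETE the lock record
                        throw e;           // rethrow so the global tx still rolls back
                    }
                }
            }
            ```

            On success the lock record is left in place (it is what prevents re-processing); only a failed run releases it so the message can be retried.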

            • #7
              I am not sure; I am not an expert on Oracle at all, sorry. Are there any Oracle forums? This has kind of become an Oracle-specific question.
              Last edited by oleg.zhurakousky; Sep 12th, 2011, 12:43 PM.
