Deadlock with AuditLog when in a distributed transaction

  • Deadlock with AuditLog when in a distributed transaction

    Hi folks,

    I created an Audit Log interceptor that basically intercepts the postFlush() and records entries in a table via Hibernate to say what was done. For instance, if two table rows were inserted, there would be two entries in the audit log for that transaction.

    Because I'm in an interceptor and at the post flush state, it's illegal for me to use the original session, so what I do is create a new session using a different session factory that points to the same database (see appctx.xml below). Doing this works fine for local transactions, but when my module is used as part of a global/distributed transaction, I am seeing a deadlock occur between the session that is INSERTing into the main table and the INSERT to the log table. Seeing that I want both these INSERTs as part of the transaction, could anyone shed some light as to how I could prevent what seems to be a race condition between the two session INSERTs?

    TIA,
    Lou

    Code:
    	<bean id="mySessionFactory" 
    		class="org.springframework.orm.hibernate.LocalSessionFactoryBean">
    		<property name="dataSource">
    			<ref local="clmDataSource"/>
    		</property>
    		<property name="configLocation">
    		<value>classpath:hibernate.cfg.xml</value>
    		</property>
    	        <property name="entityInterceptor">
    	           <ref local="auditLogInterceptor"/>
    	        </property>
    	</bean>
    	
    	<bean id="logSessionFactory" 
    		class="org.springframework.orm.hibernate.LocalSessionFactoryBean">
    		<property name="dataSource">
    			<ref local="clmDataSource"/>
    		</property>
    		<property name="configLocation">
    		<value>classpath:hibernate.cfg.xml</value>
    		</property>
    	</bean>
    	
    		
    	<!--Use an AOP interceptor to attach the Hibernate session to CMT for session-per-
    	  - transaction scoping.  This way one hibernate session will live with the transaction.-->	
    	<bean id="myHibernateInterceptor" 
    		class="org.springframework.orm.hibernate.HibernateInterceptor">
    		<property name="sessionFactory">
    			<ref bean="mySessionFactory"/>
    		</property>
    	</bean>
    	
    	<!-- Entity Interceptor for tracking changes in Hibernate VOs.  Must be attached to
    	   - a second SessionFactory in order to prevent calling itself.-->
    	<bean id="auditLogInterceptor"  
    		class="com.mitchell.services.technical.claim.util.AuditLogInterceptor">		
    		<property name="userId">
    			<value>Hibernate AuditLogInterceptor</value>
    		</property>
    	</bean>
    	
    	<!-- AOP interceptor for trapping Spring runtime exceptions and to rethrow that
    	   - as a checked user-defined exceptions. -->
    	<bean id="exceptionInterceptor"  class="com.mitchell.services.technical.claim.exception.ExceptionInterceptor" > 		
    		<property name="auditLogInterceptor">
    			<ref bean="auditLogInterceptor"/>
    		</property>
    	</bean>
    		
    	
    	<bean id="auditLog" 
    		class="com.mitchell.services.technical.claim.util.AuditLog">
    		<property name="sessionFactory">
    			<ref local="logSessionFactory"/>
    		</property>
    	</bean>

  • #2
    I don't know if this would do the trick, but you can open a new session on the very same JDBC connection used by the original session. SessionFactory has a factory method that takes a Connection, and you can grab the one used by the original session via its connection() method.
    I think this is the recommended way of achieving this.

    HTH



    • #3
      Originally posted by ojolly
      use a new session based on the very same data source used in the original session. The SessionFactory has a factory method taking a connection and you can grab the one used by the original session with the method named connection().
      Thanks Oliver for the response. I finally had a chance to test this out. I recall trying it before, but I couldn't remember why it didn't work. Now I do: an infinite loop occurs when I attempt the save in the audit log, because the flush calls back into postFlush() on the originating interceptor. That makes sense, since postFlush() is a listener for flush events.

      Have a look at what I'm doing:

      Code:
    public Long save(final ActivityLog actLog, final String className)
            throws CallbackException, HibernateException {
        log.entering(thisClassName, "save", actLog);
        Long rslt = null;
        Session session = null;
        try {
            SessionFactory fact = getSessionFactory();

            // Use Spring IoC to attach the SessionFactory to this class, then get the original session.
            Session sessionOrig = SessionFactoryUtils.getSession(fact, false);
            session = fact.openSession(sessionOrig.connection());

            rslt = (Long) this.getHibernateTemplate().execute(
                    new HibernateCallback() {
                        public Object doInHibernate(Session session)
                                throws HibernateException {
                            Object obj = session.save(actLog);
                            session.flush();  // <<<<<< this is the line going back to the post flush!
                            return obj;
                        }
                    });
        } catch (HibernateException he) {
            he.printStackTrace();
            throw he;
        } finally {
            SessionFactoryUtils.closeSessionIfNecessary(session,
                    getSessionFactory());
            log.exiting(thisClassName, "save");
        }
        return rslt;
    }

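      One common way to break this kind of recursion (a sketch in current Java, independent of Hibernate; the class and method names here are illustrative, not from the thread) is a thread-local re-entrancy guard: while the audit write is in progress, any callback it triggers into itself is silently skipped. The same idea would apply inside the real interceptor's postFlush() path.

      ```java
      // Hypothetical sketch: a ThreadLocal flag that suppresses re-entrant
      // audit writes, so a flush triggered by the audit write itself cannot
      // loop back and write another audit entry.
      public class AuditWriter {
          // true while this thread is already writing an audit entry
          private static final ThreadLocal<Boolean> inAudit =
                  ThreadLocal.withInitial(() -> Boolean.FALSE);

          private final java.util.List<String> log = new java.util.ArrayList<>();

          public void writeEntry(String entry) {
              if (inAudit.get()) {
                  return; // re-entrant call from our own flush: skip it
              }
              inAudit.set(Boolean.TRUE);
              try {
                  log.add(entry);
                  // simulate the flush that would call back into postFlush():
                  writeEntry("audit-of-" + entry); // swallowed by the guard
              } finally {
                  inAudit.set(Boolean.FALSE);
              }
          }

          public java.util.List<String> entries() {
              return log;
          }
      }
      ```

      The guard must be thread-local (not a plain boolean field) because the interceptor is shared across sessions and threads.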


      • #4
        Loumeister,
        Code:
           <bean id="mySessionFactory" 
              class="org.springframework.orm.hibernate.LocalSessionFactoryBean"> 
              <property name="dataSource"> 
                 <ref local="clmDataSource"/> 
              </property> 
              <property name="configLocation"> 
                 <value>classpath:hibernate.cfg.xml</value>
              </property> 
                   <property name="entityInterceptor"> 
                      <ref local="auditLogInterceptor"/> 
                   </property> 
           </bean>
        You should pay attention to the fact that auditLogInterceptor is configured at the SessionFactory level. That means it will be invoked every time flush() is called on a session created by your sessionFactory!
        A workaround, and the cleaner way to achieve what you need, is to use two SessionFactories: one for business processing and the other for auditing. You could also use Spring's JDBC abstraction layer for the auditing.
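        The JDBC route has the nice property that the audit write never raises a Hibernate flush event, so the interceptor cannot call back into itself. A rough sketch of how the auditLog bean from the original config might be rewired (JdbcAuditLog is a hypothetical class that would use a JdbcTemplate internally; clmDataSource is the existing DataSource bean):

        ```xml
        <!-- Hypothetical: audit via Spring JDBC instead of a second
             Hibernate SessionFactory. JdbcAuditLog is an illustrative
             class name, not from this thread. -->
        <bean id="auditLog"
            class="com.mitchell.services.technical.claim.util.JdbcAuditLog">
            <property name="dataSource">
                <ref local="clmDataSource"/>
            </property>
        </bean>
        ```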
        HTH



        • #5
          Yup, that's exactly what I'm doing, and it's really the point of this thread (see my original post: I have two factories). My concern is that these two sessions, each holding its own connection to the database, are deadlocking during the distributed transaction because of the relationship between the audit table and one of the tables being INSERTed into (there's a one-to-many relationship between these two tables).

          What do you think? Does this make sense? Otherwise, I can provide more info.

          Thanks,
          Lou



          • #6
            Are you using the same transaction manager for both the business processing and the audit interceptor? Using the same transaction manager ensures that all database operations run on the same JDBC connection, so no deadlock can occur between them. The drawback is that if one database operation fails, the other fails too. That is not something you would want to experience, especially while auditing database access.
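            For reference, sharing one transaction manager would look roughly like this (a sketch reusing the mySessionFactory bean from the original config; not a drop-in fix for the distributed case):

            ```xml
            <!-- Sketch: one transaction manager shared by business and
                 audit code, so both run on the same JDBC connection. -->
            <bean id="transactionManager"
                class="org.springframework.orm.hibernate.HibernateTransactionManager">
                <property name="sessionFactory">
                    <ref bean="mySessionFactory"/>
                </property>
            </bean>
            ```

            In the global/distributed scenario described in this thread, a org.springframework.transaction.jta.JtaTransactionManager would play the same role, delegating to the container's JTA transaction.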
