Hibernate Transactions - Retrying Optimistic Locks

  • Hibernate Transactions - Retrying Optimistic Locks


    I have been battling with optimistic locks in Hibernate for the past couple of days and seem to be going round in circles. If anybody could point me in the right direction it would be much appreciated.

    Basically my application uses optimistic locking on some objects via the @Version annotation. This all seems to be working fine. However, I would like to retry the transaction automatically if it fails because of the lock, i.e. when another user has updated the object.

    This area seems to be somewhat lacking in the documentation for both Spring and Hibernate, but I have come to the conclusion that I should write an AOP aspect that runs on a custom annotation (e.g. @RetriableTransaction). (I am also using Spring's @Transactional, which meant I had to adjust the precedence of the advice so that mine runs outside @Transactional - is this advisable?)
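    Stripped of the Spring machinery, the retry-around-advice idea can be sketched as a plain retry loop around a callable unit of work. All names below are illustrative stand-ins, not Spring's or Hibernate's API; in the real aspect, work.call() corresponds to pjp.proceed():

```java
import java.util.concurrent.Callable;

public class RetryDemo {

    // Hypothetical stand-in for Hibernate/Spring's optimistic-lock failure.
    static class OptimisticLockFailure extends RuntimeException {}

    // Retry the unit of work until it succeeds or the retry budget is exhausted.
    // In the real aspect, work.call() corresponds to pjp.proceed().
    static <T> T retry(Callable<T> work, int maxRetries) throws Exception {
        int tries = 0;
        while (true) {
            try {
                return work.call();
            } catch (OptimisticLockFailure e) {
                if (++tries >= maxRetries) {
                    throw e;  // give up: rethrow to the caller, as the aspect would
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        java.util.concurrent.atomic.AtomicInteger attempts =
                new java.util.concurrent.atomic.AtomicInteger();
        // Simulate two concurrent-update collisions before a successful save.
        String result = retry(() -> {
            if (attempts.incrementAndGet() < 3) {
                throw new OptimisticLockFailure();
            }
            return "saved";
        }, 5);
        System.out.println(result + " after " + attempts.get() + " attempts");
    }
}
```

    The loop itself is straightforward; the hard part (as the rest of this thread shows) is what state the Hibernate session is in on the second attempt.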

    At first this seems to work correctly - however, when I force an optimistic lock failure (by modifying the version number in the database while the application is paused) things start to fall apart. The retry itself does appear to complete successfully, but other persistent objects in the application then start to fail with LazyInitializationException or "session not open" errors. It appears that once the optimistic lock exception has been thrown, the session becomes null.

    After some more reading it is suggested that all Hibernate exceptions are non-recoverable? If that is the case, how do I achieve a retry?
    I am convinced I am doing something wrong, as this functionality is surely fairly basic for any scalable enterprise system - transactions that can safely be re-applied should never fall back to the user.

    Any advice or suggestions are most welcome.

    Retry Advice method
        public Object retry(ProceedingJoinPoint pjp) throws Throwable {
            boolean success = false;
            int tries = 0;
            Object o = null;
            while (!success) {
                try {
                    o = pjp.proceed();  // run the advised (transactional) method
                    success = true;
                } catch (HibernateOptimisticLockingFailureException e) {
                    System.err.println("Collision - optlock");
                    if (++tries >= maxRetries) throw e;
                } catch (StaleStateException se) {
                    System.err.println("Collision - stalestate");
                    if (++tries >= maxRetries) throw se;
                }
            }
            return o;
        }
    Service Layer annotated method (User and Role objects have a version field annotated with @Version)
        public User addRoleToUser(User user, Role role) {
            // ... (body omitted in the original post: adds the role to the user)
            return user;
        }
    Another non-transactional service layer method
        public void anotherServiceLayerMethod() {
            MyPersistentObject o = (MyPersistentObject) dao.load(MyPersistentObject.class, new Long(1));
            User u = userService.getUserById(new Long(1));
            Role r = userService.getRoleById(new Long(2));
            u = userService.addRoleToUser(u, r);
            o.getMyCollection();  // <-- EXCEPTION thrown here (LazyInitializationException)
        }

  • #2
    What you are trying seems to be incorrect. In principle, HibernateOptimisticLockingFailureException, which wraps StaleObjectStateException, should not be retried automatically. The exception is thrown because the data initially loaded in the transaction is stale: the update was made against data that has since been changed by another transaction. In this case the user should be shown the changed data and then decide whether to proceed with the update.


    • #3
      Thank you for your reply.

      I thought I was doing something wrong.

      I understand optimistic locks work for the scenario you describe (the user should be informed of the failure), however there are surely many operations that can be repeated automatically (e.g. adding a Role to a User).

      The only other option I can think of is to use pessimistic locking. Surely for a write operation I would need to do the following:
      1) Read/Write-Lock the object / entire object graph (to ensure other write operations do not load stale data)
      2) Read data
      3) Do Logic
      4) Write data
      5) Release lock
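      As a plain-Java analogy (not Hibernate's locking API; database row locks behave similarly), the five steps amount to holding an exclusive write lock that excludes every reader for the whole read-logic-write sequence:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PessimisticDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int balance = 100;

    // Steps 1-5 from the post: acquire write lock, read, apply logic, write, release.
    public void update(int delta) {
        lock.writeLock().lock();          // 1) lock - all readers now block too
        try {
            int current = balance;        // 2) read data
            int next = current + delta;   // 3) do logic
            balance = next;               // 4) write data
        } finally {
            lock.writeLock().unlock();    // 5) release lock
        }
    }

    public int read() {
        lock.readLock().lock();           // blocks while any write is in progress
        try {
            return balance;
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        PessimisticDemo account = new PessimisticDemo();
        account.update(50);
        System.out.println(account.read());  // 150
    }
}
```

      The scalability concern below follows directly: while update() holds the write lock, every read() on the same object waits.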

      This seems like it would scale very badly - every read operation on the object would block while a write was in progress. Is this standard practice in large scalable systems?

      I am concerned about this as I have objects which will be read very frequently and written semi-frequently. It seems that reads on this application will be crippled by even occasional write operations.
      Is it possible in spring/hibernate to allow certain (write) operations to wait for the lock but other (read-only) operations to view the data? (I doubt this since the locking is done at the SQL level)


      • #4
        I agree that pessimistic locking is not an option. It will have a very negative impact on scalability. Also, as you mention, Spring/Hibernate can't do much about it since the locking is handled by the underlying database.

        In case you want to retry after a HibernateOptimisticLockingFailureException you could try this:
        1.) Reload the data from the database, so that the object carries the current version number
        2.) Apply the changed values to the reloaded object
        3.) Update back to the database
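        The three steps above can be sketched with a toy versioned entity (illustrative plain Java, not Hibernate-mapped classes; in a real retry the "reload" would be a fresh session load and the version bump would happen on flush):

```java
import java.util.HashSet;
import java.util.Set;

public class ReapplyDemo {

    // Toy versioned entity (illustrative only, not a Hibernate-mapped class).
    static class User {
        int version;
        Set<String> roles = new HashSet<>();
    }

    // 1) Reload: a fresh copy carrying the current version number.
    static User reload(User dbCopy) {
        User fresh = new User();
        fresh.version = dbCopy.version;
        fresh.roles.addAll(dbCopy.roles);
        return fresh;
    }

    // Steps 1-3: reload, re-apply the tracked change, update.
    static User retryAddRole(User dbCopy, String role) {
        User fresh = reload(dbCopy);  // 1) reload with current version
        fresh.roles.add(role);        // 2) re-apply the changed value
        fresh.version++;              // 3) update back (Hibernate bumps @Version on flush)
        return fresh;
    }

    public static void main(String[] args) {
        User db = new User();
        db.version = 5;
        db.roles.add("admin");
        User saved = retryAddRole(db, "editor");
        System.out.println(saved.version + " " + saved.roles);
    }
}
```

        Note that only the tracked change ("add this role") is re-applied; any other property copied blindly from the stale object would overwrite the other transaction's work, which is exactly the caveat below.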

        This is not a very clean approach. Firstly, you will need to keep track of the changed properties, as you must re-apply them to the reloaded object. Secondly, it also means you can potentially overwrite changes made by another transaction when applying those properties to the reloaded object.

        I feel that you should let this transaction fail and let the user retry. There are some types of exceptions that simply cannot be retried.


        • #5
          Yes, neither of these approaches is ideal. I am still confused, as I have never seen a real application fail because of an obviously retryable transaction.

          For example, in a banking application a Bank object might hold:
          - List<Customer>
          - List<Employee>
          If the bank were to add a new customer, the code might create a new Customer, add it to the List<Customer> on the Bank object, and then save. Of course, this application would be highly concurrent, and you would not expect other operations on the bank, e.g. addNewEmployee, to fail because of this.

          I understand that in this example it is not critical that the customer be created immediately and could be added to a queue however this just illustrates the problem. My application is request-response and the response times are critical therefore using a queue would have a similar impact to pessimistic locking.

          I think I am able to keep track of changes with the method mentioned in the first post (just re-run the service layer method with fresh data). However, once the transaction fails the Hibernate session is closed. This approach would need very careful consideration to ensure that any method marked as a 'RetryableTransaction' is never called from a non-retryable transaction (e.g. anotherServiceLayerMethod() in the first post), as those persistent objects become unusable once their session has gone.
          This seems overly complicated and therefore potentially dangerous.

          From this point the only solid way forward I can see is to make any potentially 'busy' objects receive fewer updates. In the Bank example this would mean changing the domain structure to:
          Customer 1->1 Bank
          Employee 1->1 Bank
          This would mean a big performance loss on operations such as finding all Customers of the Bank, but the Bank object would no longer be updated every time a new Employee or Customer is added.
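          The restructuring can be sketched in plain Java (hypothetical classes, not the poster's actual domain): each Customer holds the reference to its Bank, so creating a customer never touches, and never re-versions, the Bank itself, while finding all customers now requires a query instead of walking a collection on Bank:

```java
import java.util.ArrayList;
import java.util.List;

public class DomainDemo {

    // Hypothetical classes for illustration only.
    static class Bank {
        int version = 0;  // stands in for the @Version field; new customers never touch it
    }

    // Restructured: Customer points at Bank, so Bank holds no collection to update.
    static class Customer {
        final Bank bank;
        Customer(Bank bank) { this.bank = bank; }
    }

    // Finding all customers of a bank now needs a scan/query over customers
    // instead of walking a collection on Bank: the performance trade-off.
    static List<Customer> customersOf(Bank bank, List<Customer> all) {
        List<Customer> result = new ArrayList<>();
        for (Customer c : all) {
            if (c.bank == bank) {
                result.add(c);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Bank bank = new Bank();
        List<Customer> customers = new ArrayList<>();
        customers.add(new Customer(bank));  // adding customers...
        customers.add(new Customer(bank));
        System.out.println(customersOf(bank, customers).size());  // 2
        System.out.println(bank.version);  // still 0: Bank was never modified
    }
}
```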

          Is this a sensible approach or am I now compromising the domain model because of unrelated concerns?


          • #6
            I think before compromising your domain model for this you need to evaluate how big the problem is. What I mean is: how many times are you actually going to encounter the optimistic lock exception? Is that number big enough to justify such changes to the domain model, which, as you mentioned, have downsides in terms of performance and data not being up to date? Basically, does the cost/benefit work out - is the cost worth the perceived benefit?