How to optimize ACL
This topic is closed

  • #1


    I'm new to Acegi Security and I'm evaluating it.
    I got the Contacts sample from CVS and changed DataSourcePopulator to
    populate 1,000 contacts and 10,000 ACL_OBJECT_IDENTITY rows.

    When I time this code:

    long iTime = System.currentTimeMillis();
    List myContactsList = contactManager.getAll();
    Contact[] myContacts;
    if (myContactsList.size() == 0) {
        myContacts = null;
    } else {
        myContacts = (Contact[]) myContactsList.toArray(new Contact[] {});
    }
    Map model = new HashMap();
    model.put("contacts", myContacts);
    long elapsed = System.currentTimeMillis() - iTime;

    it takes 2 to 3 seconds for one user, and 16 seconds for 15 users.

    To see what was causing the delay, I removed AFTER_ACL_COLLECTION_READ from applicationContext-common-authorization.xml
    and the time dropped to 62 milliseconds.

    How can I optimize the ACL checks to reduce the time it takes to get the contacts?


    My computer is:
    Pentium IV 2.8 GHz
    1 GB RAM
    Windows XP

  • #2
    I would not recommend using a getAll()-style method to retrieve 1,000 objects and then expecting to perform Java-level filtering, irrespective of the framework/system helping you out in the Java tier. Heavy-duty filtering is the responsibility of your database tier, not the Java tier.

    Even if you did execute a DB query that returned 1,000 objects, I'd encourage using pagination so that the ACL services are only dealing with 10-50 objects at a time. This is all your users will expect to see on a results page at a given time.

    If it's not a user-interaction use case - say it's a services-layer method updating the prices of all your inventory - you will probably want to use batch updating, and principal-level filtering is unlikely to be important.
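    The pagination point can be sketched in plain Java. Note this is a minimal illustration, not code from the contacts sample: fetchPage is a hypothetical stand-in for a DAO method that would push a LIMIT/OFFSET (or equivalent) down to the database, so the ACL layer only ever filters one small page instead of all 1,000 contacts.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class PagedContacts {

        // Hypothetical stand-in for a DAO method that would issue a query
        // like "SELECT ... LIMIT ? OFFSET ?" against the contacts table.
        // Here it just slices an in-memory list to show the shape of the API.
        static List<String> fetchPage(List<String> all, int pageNumber, int pageSize) {
            int from = pageNumber * pageSize;
            if (from >= all.size()) {
                return new ArrayList<String>();   // past the last page
            }
            int to = Math.min(from + pageSize, all.size());
            return new ArrayList<String>(all.subList(from, to));
        }

        public static void main(String[] args) {
            List<String> contacts = new ArrayList<String>();
            for (int i = 0; i < 1000; i++) {
                contacts.add("contact-" + i);
            }
            // The ACL services now filter 25 objects per request, not 1,000.
            List<String> page0 = fetchPage(contacts, 0, 25);
            System.out.println(page0.size());       // 25
            System.out.println(page0.get(0));       // contact-0
            List<String> lastPage = fetchPage(contacts, 39, 25);
            System.out.println(lastPage.get(24));   // contact-999
        }
    }
    ```

    With a page size of 25, each request does roughly 1/40th of the ACL work of the original getAll() call.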

    The other issue is that the ACL services perform caching. If you ran the operation a second time without restarting the application context, you would find performance considerably better.
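    The caching effect can be demonstrated with a toy harness. This is not the actual Acegi ACL cache; lookup below simulates an ACL entry lookup where only the first call per object does the slow work, which is why a second benchmark run against a warm context is much faster.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class CacheWarmup {
        final Map<Integer, String> cache = new HashMap<Integer, String>();
        int misses = 0;   // counts how many lookups had to do the slow work

        // Hypothetical stand-in for an ACL entry lookup: a miss simulates
        // hitting the database, a hit returns the cached entry immediately.
        String lookup(int id) {
            String cached = cache.get(id);
            if (cached != null) {
                return cached;
            }
            misses++;
            String value = "acl-entry-" + id;   // pretend this came from the DB
            cache.put(id, value);
            return value;
        }

        public static void main(String[] args) {
            CacheWarmup acl = new CacheWarmup();
            // First pass: every lookup misses, like the first benchmark run.
            for (int i = 0; i < 1000; i++) acl.lookup(i);
            System.out.println(acl.misses);   // 1000
            // Second pass: the cache is warm, so no new misses occur.
            for (int i = 0; i < 1000; i++) acl.lookup(i);
            System.out.println(acl.misses);   // still 1000
        }
    }
    ```

    The same reasoning explains why a benchmark that restarts the application context between runs always pays the full cold-cache cost.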

    Overall, I would anticipate that the type of use case your benchmark seeks to demonstrate is not illustrative of what a properly designed application would usually be doing.