  • Dynamic Datasources

    I am working on an application that will connect to DB2 on the iSeries (IS) via JT400. I'm not too familiar with the IS jargon, so I'm a little lost on what exactly I need to do. The premise of the application is that a user will enter their IS credentials, authenticate, and then select an "environment"/library list (e.g. Testing, Production). From that point on, all of that user's connections are to be associated with that environment. Since this is a multi-user app, two users could be operating on two different data sets. I'd like to use Hibernate, but it seems that I would have to create a SessionFactory per user or environment, and that doesn't seem right, so for now I'm taking it out of the equation. Should the datasources be associated with the authenticated user, or should some sort of generic user be created, or none of the above? Any help to get me pointed in the right direction would be much appreciated. Thanks!
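    One common way to tie a connection to an environment with JT400 is to put the library list in the JDBC URL using the Toolbox driver's `libraries` property (with `naming=system` so unqualified names resolve through the library list). The host and library names below are made-up placeholders, and the environment-to-libraries map is an assumption about how you might model it:

    ```java
    import java.util.Map;

    // Sketch: build a per-environment JDBC URL for the IBM Toolbox (JT400) driver.
    // "libraries" and "naming" are standard Toolbox JDBC properties; the host
    // and library names here are hypothetical.
    public class EnvironmentUrls {

        // Hypothetical environment -> library list mapping.
        private static final Map<String, String> LIBRARY_LISTS = Map.of(
                "Testing",    "TESTLIB,TESTDATA",
                "Production", "PRODLIB,PRODDATA");

        // Connections made from this URL start with the environment's
        // library list already in place.
        public static String urlFor(String host, String environment) {
            String libs = LIBRARY_LISTS.get(environment);
            if (libs == null) {
                throw new IllegalArgumentException("Unknown environment: " + environment);
            }
            return "jdbc:as400://" + host + ";libraries=" + libs + ";naming=system";
        }
    }
    ```

    Each user's own credentials would then be passed to `DriverManager.getConnection(url, user, password)`, so authentication stays per user while the library list follows the chosen environment.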

  • #2
    I have had to deal with a very similar situation, in which each user needed to be authenticated to the iSeries and their username had to be the authenticated user for the datasource. This had to do with a legacy application that tracked changes by user. It was a nightmare. You lose all the advantages of pooling your datasources and caching information, and you take on the added time and complexity of managing the user information and data connections. Since the legacy app ran on the same machine, the number of connections was not an issue at first; it became one when those became TCP/IP connections across the network.

    In our scenario, each user, and therefore each connection, had its own library and its own permissions. If the app tried to give them access to a table or function they shouldn't have access to, the transaction failed. That was nice because authorization was enforced at the database level in addition to the application level. This is the only real advantage of the scenario, and it only existed because the DB administrators had a very rigorous process for enforcing permissions. The same rigor can be applied at the application level to achieve the same results.

    In addition, in this legacy app, almost every insert and update, as well as many reads, went through stored procedures. This was difficult to do in Hibernate (or at least it was about four years ago). You have to use the CRUD factory thing (the details are escaping me) to define explicit actions in your mappings. I do not know if you have this issue or not, but it is a pretty common model in old COBOL/AS400 applications.
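    The half-remembered "CRUD factory thing" is likely Hibernate's custom CRUD SQL support: the mapping file can override the generated insert/update/delete statements, with `callable="true"` to route them through stored procedures. A minimal, hypothetical hbm.xml fragment (entity, column, and procedure names are all invented):

    ```xml
    <!-- Hypothetical mapping: route CRUD through stored procedures.
         callable="true" makes Hibernate use a CallableStatement. -->
    <class name="Customer" table="CUSTOMER">
        <id name="id" column="ID"/>
        <property name="name" column="NAME"/>
        <!-- Parameter order must match what Hibernate binds;
             check the generated SQL in the logs to confirm. -->
        <sql-insert callable="true">{call INSERT_CUSTOMER(?, ?)}</sql-insert>
        <sql-update callable="true">{call UPDATE_CUSTOMER(?, ?)}</sql-update>
        <sql-delete callable="true">{call DELETE_CUSTOMER(?)}</sql-delete>
    </class>
    ```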

    One big difference is that we had one app installation per environment. We never let a user reach different environments from one installation, because of the ramifications of forgetting which environment you are running in. This also let us control other indicators about the environment to remind the user where they are making changes or getting data.

    From a data and connectivity point of view, it is easier to use a shared datasource. That simplifies the logic needed to connect to the database, allows connection pooling, and allows for caching of data. It does come at a cost: much more time needs to be spent on authorization. That means more complex joins and views to add record-level management to tables, and it means creating filters to limit access. If your application runs in conjunction with another application that accesses the data, then that authorization needs to be duplicated across each environment.
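    With a shared datasource, the record-level filtering described above has to be pushed into the queries themselves. A rough sketch of one way to do that, assuming a hypothetical `USER_GROUP_ACCESS` permission table (all table and column names here are invented for illustration):

    ```java
    // Sketch: scope any base query to rows the authenticated user may see,
    // when all users share one pooled datasource.
    public class AuthorizedQueries {

        // Wrap the base query in a join against a (hypothetical)
        // record-level permission table. The caller binds the
        // authenticated user's id to the ? placeholder.
        public static String scopeToUser(String baseQuery) {
            return "SELECT q.* FROM (" + baseQuery + ") q "
                 + "JOIN USER_GROUP_ACCESS a ON a.GROUP_ID = q.GROUP_ID "
                 + "WHERE a.USER_ID = ?";
        }
    }
    ```

    Keeping the user id as a bound `?` parameter (rather than concatenating it in) keeps the wrapped query safe to use with a `PreparedStatement`.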

    I know I have rambled a lot, but this isn't a clear-cut problem. You have to weigh security against ease of development. Who are the users? How many users will access the system? How do they access it? How much legacy work exists on the AS400... err, iSeries? Is it worth leveraging that legacy work, or is the goal to rewrite and move away from it? Where is your web application going to run? (You can avoid many TCP/IP connections if the apps run on different partitions of the same box.) Is it distributed or on a single box? (Datasources do not serialize and share well between multiple instances of application servers, IIRC.)

    One pitfall I have seen many younger developers/engineers fall into is picking the tools before solving the problem. Hibernate is great and may provide a benefit, but it is wise to set it aside until you actually get to the ORM question. It is likely to fall into place and make sense then, but it is not core to this question.