Remoting and Anemic Anti Pattern

  • #16
    Originally posted by rgeorgiev
    The problem with rich domains is how to handle serialization. You need to somehow inject repositories, etc., there. One solution is to do it with AspectJ on getter calls, so they will be available after deserializing.
    But for now I prefer to use DTOs (value objects) when dealing with remote services.
    The term "Rich Domain Model" does not imply dependency injection. There's no reason why you couldn't create rich domain models without injections.

    There may be reasons to depend on injection, but in many cases it can be avoided.
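
    As a minimal sketch of that point (all class and field names here are illustrative, not from the thread): a rich domain object can carry its own behavior without any injected collaborators, which also keeps it trivially serializable.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a rich domain object whose behavior (the total
// calculation and the quantity check) lives on the object itself, with
// no repository or other injected dependency in sight.
class OrderLine {
    final String sku;
    final int quantity;
    final double unitPrice;

    OrderLine(String sku, int quantity, double unitPrice) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        this.sku = sku;
        this.quantity = quantity;
        this.unitPrice = unitPrice;
    }

    double subtotal() {
        return quantity * unitPrice;
    }
}

class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    void addLine(OrderLine line) {
        lines.add(line);
    }

    // Rich behavior, but no injected services needed after deserialization.
    double total() {
        double sum = 0;
        for (OrderLine line : lines) {
            sum += line.subtotal();
        }
        return sum;
    }
}
```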

    • #17
      In a rich client environment you will sometimes want the domain objects to be used directly by the client. You will want to perform CRUD operations on them and then simply tell a local persistence manager to 'commit', all from the client. I don't know of anything open source that delivers this; you might call it 'heavyweight remote proxying'. In the JDO world JDOGenie used to have it, and now I only know of BEA's Kodo (http://edocs.bea.com/kodo/docs40/ful..._remotepm.html).

      At other times you want to have a services layer running on the server and use it to invoke methods that probably do intensive things with your domain objects, yet return relatively simple results. Here Spring remoting would seem to be the ideal thing to use.
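
      A sketch of that second shape of service (hypothetical names throughout; in Spring, the interface could then be exposed remotely, e.g. with an HttpInvokerServiceExporter): the intensive work over domain data happens server-side, and only a small serializable result crosses the wire.

```java
import java.io.Serializable;
import java.util.List;

// Hypothetical example of a coarse-grained service: intensive work on
// the server, a small serializable result over the wire. The client
// programs against ReportService and never sees the domain graph.
interface ReportService {
    ReportSummary summarize(List<Double> orderTotals);
}

class ReportSummary implements Serializable {
    final int orderCount;
    final double revenue;

    ReportSummary(int orderCount, double revenue) {
        this.orderCount = orderCount;
        this.revenue = revenue;
    }
}

class SimpleReportService implements ReportService {
    public ReportSummary summarize(List<Double> orderTotals) {
        // Imagine this touching many domain objects server-side;
        // only the aggregate is returned to the caller.
        double revenue = 0;
        for (double t : orderTotals) {
            revenue += t;
        }
        return new ReportSummary(orderTotals.size(), revenue);
    }
}
```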

      • #18
        Originally posted by cjmurphy
        ...and now I only know of BEA's Kodo
        I just had a chance to look into Kodo's remoting/offline capabilities. It does seem like a good solution in cases where tight coupling between the server and client models is needed. In particular, I liked the idea of commit notifications and the handling of transaction issues.

        Has anyone had a chance to use it on actual projects? Any feedback?

        • #19
          Kodo/Spring in production

          We are using Kodo/Spring in production:

          http://www.strandz.org/mvnforum/mvnf...23&offset=0#97

          Another issue we have had is documented under Case #: 687756 with Kodo. Unfortunately this case is not accessible, so I'll copy some of it here:

          [[
          My Swing client is accessing two Kodo servers (i.e. two separate JVMs):

          1./ PersistenceServerServlet (HTTP Persistence Server)
          2./ Spring container

          The idea is that database intensive batch/report type tasks are done by 2, and 1 is used where the Data Objects (DOs) are worked on interactively by the user.

          All this is working fine, but I have had to do something quite odd for changes made by 1 to be picked up by 2. What I expected was that as long as a commit was done on the remote PM (attached to 1), the queries invoked by a service call to 2 would pick up those changes. I have managed to get this behaviour working, but only by re-creating the PM at the beginning of each service call.

          The properties file that 2 (Spring) uses is:
          <<
          kodo.LicenseKey: XXXX
          javax.jdo.PersistenceManagerFactoryClass: kodo.jdbc.runtime.JDBCPersistenceManagerFactory
          javax.jdo.option.ConnectionDriverName=com.mysql.jdbc.Driver
          javax.jdo.option.ConnectionURL=jdbc:mysql://localhost:3306/kodo
          javax.jdo.option.ConnectionUserName=root
          kodo.jdbc.EagerFetchMode: join
          kodo.jdbc.SubclassFetchMode: parallel
          kodo.Log: DefaultLevel=TRACE, Runtime=INFO, Tool=INFO
          kodo.MetaDataFactory: kodo3
          kodo.Sequence: Table=JDO_SEQUENCE
          kodo.jdbc.MappingFactory: file
          kodo.jdbc.SchemaFactory: native(ForeignKeys=true)
          kodo.IgnoreChanges: true
          >>

          The properties file that 1 (servlet) uses is very similar - it just does not have the kodo.IgnoreChanges line.

          That just leaves the properties file that the Swing client uses:
          <<
          kodo.LicenseKey: XXXX
          kodo.BrokerFactory: remote
          kodo.PersistenceServer: http(URL=http://localhost:8080/teresaRemoteService/kodoservice)
          kodo.ConnectionRetainMode: transaction
          kodo.ConnectionFactoryProperties: MaxIdle=3, ValidationTimeout=60000
          kodo.remote.ClientPersistenceManagerFactory
          http(URL=http://localhost:8080/teresaRemoteService/kodoservice)
          kodo.Log: DefaultLevel=DEBUG, Runtime=INFO, Tool=INFO
          javax.jdo.PersistenceManagerFactoryClass: kodo.jdbc.runtime.JDBCPersistenceManagerFactory
          >>

          To restate the problem: I can only pick up committed changes made by a remote PM from a different PM/JVM if, for each service call on the different PM/JVM, I force a re-connect. It is almost as if, unless I do this step, any queries will just ignore changes that have been made and committed to the DB.

          So does the service really need to re-connect each time a call is made upon it?
          ]]

          The settings above are against a local tomcat. In production the settings are similar, however we use a filter to zip the data before it goes across the wire.
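
          The re-connect workaround described above can be modeled in miniature like this. The Pm interface and the supplier are stand-ins for JDO's PersistenceManagerFactory.getPersistenceManager(), purely for illustration:

```java
import java.util.function.Supplier;

// Miniature model of the workaround: the service obtains a fresh
// "persistence manager" for every call instead of caching one, so each
// call observes data committed by the other JVM in the meantime.
// Pm and the supplier are illustrative stand-ins for the JDO API.
class FreshPmPerCallService {
    interface Pm extends AutoCloseable {
        int readValue();
        @Override
        void close(); // narrowed: no checked exception
    }

    private final Supplier<Pm> pmFactory;

    FreshPmPerCallService(Supplier<Pm> pmFactory) {
        this.pmFactory = pmFactory;
    }

    int serviceCall() {
        // Re-create (re-connect) the PM at the beginning of each call.
        try (Pm pm = pmFactory.get()) {
            return pm.readValue();
        }
    }
}
```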

          • #20
            False sense of loose coupling

            Originally posted by debasishg
            Can't agree more! The domain model is never an implementation-only artifact which should be hidden from the presentation tier. As usual, the domain model should also be contract based, presenting the relationships between the domain objects to its clients - here the presentation layer. Any change in contract is bound to change the client, while any change in implementation should not have any impact on the client. With such a controlled, contract-based protocol, we can do away with the DTOs, which are, after all, an extra layer - and as someone said, layers are only for wedding cakes. DTOs are pure glue code which needs to be developed and managed - the fewer of them there are, the better for the application.

            The other issue regards serializability when we consider that the UI layer and the domain layer may not be colocated. Domain objects may be heavy, and serialization of entire object graphs may be expensive. Only for such cases should the TO layer be considered, selectively.

            I think that instead of totally banning DTOs, we need to be flexible in design and apply layering with DTOs only as a performance optimization measure.

            Thanks.

            Indeed, I couldn't agree more with Debasishg and Manifoldronin. The domain layer will only expose certain methods to the presentation tier and is therefore contract based. The 'aggregate pattern' as described by Evans in DDD seems a good candidate for implementing an interface. This way the objects contained in the aggregate will not be exposed to the presentation tier.
            DTOs are mostly a behaviorless copy of the domain objects. When a field in a DO gets renamed, there is probably a semantic reason for it, so most of the time you want it to be renamed in the DTO as well (to keep things transparent and semantically correct). IMO DTOs give you a false sense of protection, and the so-called 'loose coupling' is not as loose as most people think. As Manifoldronin says: "But is it really loose coupling, or is it actually "semantics lost in translation" which will eventually come back and get you?" Wise words, very wise words.

            • #21
              Originally posted by JimmyK
              IMO DTOs give you a false sense of protection, and the so-called 'loose coupling' is not as loose as most people think. As Manifoldronin says: "But is it really loose coupling, or is it actually "semantics lost in translation" which will eventually come back and get you?" Wise words, very wise words.
              I've seen DTOs working just fine in many cases. Web services themselves operate on DTOs (i.e. the same behaviorless domain copies/fragments). Sometimes the protection provided by DTOs is quite sufficient. But in many cases, when tight coupling between server and client is required, it definitely looks like a waste of time and resources. I don't know what the proportion between the two cases is; in our applications it seems to be close to 50/50, so saving the bulk of that time with these approaches looks very beneficial.

              • #22
                Originally posted by dvoytenko
                I've seen DTOs working just fine in many cases.
                A lot of decisions depend upon the nature of the application, the performance guarantees, the throughput requirements, etc. Obviously DTOs introduce an extra level of behaviorless abstraction, and thereby an extra level of comfort in the design. Of course this means more classes (or value objects) that do not add to the richness of the domain model. But, yeah, it gives you a decoupling between the domain layer and the presentation layer. IMHO DTOs offer a valid paradigm in remoting use cases, as has been mentioned earlier in this thread.

                Direct usage of domain objects throughout the layers is not an easy option either. There you are making your domain objects visible across the layers and incurring the risk of polluting the model. The presentation layer, by invoking domain methods directly, can easily mess with the implementation. Hence we need to adopt some strategies to prevent that. Have a look at this thread, which discusses some of these issues.

                Cheers.
                - Debasish

                • #23
                  I'm going to chime in here with my own personal pipe dream. I would love to remove the service layer from between the presentation layer and the domain model. Make the service layer be nothing more than a "presentation" of the domain model to other computers/systems, just as the UI is a presentation of the domain model to the user. You think this is a stupid idea? I can understand your objections up front, but hear me out. Read the entire message below to see where I'm going with this. I probably answer some of your concerns below.

                  First, imagine a domain model where object instances within this model transcend the VM and become shared amongst all VMs interested in those instances. If I happen to have a reference to a domain object, then I reference the actual domain object instance. Mutations to the model are transactional, and when one VM commits a change, all VMs see this change. The domain is also automatically persisted with full ACID compliance. Terracotta happens to give us this today (mostly) and is open source; see http://www.terracotta.org/ for details. Terracotta tracks changes to objects and sends only deltas across the network. Those deltas are sent to a Terracotta server which then takes care of updating the distributed graph, persisting changes (if you enable persistence), and transmitting those deltas to only those clients who currently hold a reference to the distributed object. It also transparently handles windowing/paging of datasets/object graphs too large for memory.
                  Now comes the controversial part. With this foundation set, I'd like to move away from the whole service idea. Not completely, mind you, but enough to remove its stranglehold on our thinking. Think about it: a service tends to be a conglomeration of different concerns: use cases, security, (sometimes) CRUD, business logic, validation, (sometimes) utility methods, etc. With the above-mentioned distributed domain model, you can begin to separate those concerns out, which makes them more flexible and reusable.
                  I can put some of those business methods directly on my business objects. Some concerns become sort of services in their own right. I may decide to use a workflow engine to implement all my use cases. In this case, the "use case" service is what coordinates various domain objects toward a particular end (and it happens to use that workflow engine behind the scenes). Some concerns may become aspects. Security, for example. I may decide that this concern should be declarative, and so implement a listener in my server cluster that intercepts all domain mutations and method invocations and applies declarative security constraints. If a particular mutation or method invocation violates a security constraint, then it propagates an error or conflict back to the client (Terracotta currently doesn't allow one to intercept mutations in this way that I know of, but it is open source, so if the will was present...). Validation could become an aspect mixin so that I can declaratively define validation and have it applied at runtime. The mixin would not allow invalid mutations, and it would also provide methods to query validation. Thus, I can reuse the same validation in any "layer" that I want (presentation, server, etc). Of course, if you keep going down this path, layers begin to vanish. I realize that you can already apply declarative Security aspects to the service model, but not with the granularity and reusability of this domain oriented model. Once everything happens at the domain level, then these various concerns become easily reusable in every layer of your system.
                  This would make rich clients extremely easy to code for. I no longer have to worry about the size of the object graph or lazy init errors. I worry much less about network usage for each service method as Terracotta is extremely efficient. I basically become more focused on doing presentation as I spend less time managing limitations of the service approach. Furthermore, if Terracotta were modified to allow clients to asynchronously update with the server (with conflict resolution, etc) and locally persist changes, then rich clients based on this technology would almost automatically get offline mode support.
                  Am I saying that we should do away with services? No, not at all. They still have their place. Sometimes a simple facade is the easiest approach in certain scenarios. Services will just become much more focused, and will tie together the well separated concerns into a cohesive whole. Instead of the service having its own private business logic that the presentation layer cannot access, the service will be a very thin facade that ties the well separated business logic together with the other well separated parts, for example. With all the concerns separated like this, one could almost declaratively define the service contract and have the system automatically wire the pieces together with no actual code necessary. As I said before, I'm not doing away with services, only trying to get to a place where we can break their mental stranglehold and more easily think outside the box.
                  But really, for those of us doing presentation, we rarely even need the facade. This is because we tend to use other frameworks to build our UIs: I may use a binding framework to bind a view to a domain object. In this case, the binding framework will directly access the validation mixin and other concerns of the system in order to dynamically do its thing. The binding framework won't make use of the service facade. I may use a controller framework for flow through the UI, which will most likely not need to use a service facade, but instead work directly with the domain objects, and maybe some sort of separated ModelQuery service. It might even use the "use case" service to automatically flesh out the flow of the UI. In the end, I'm assembling my final application using solutions that don't really need a service facade. They only need access to the various concerns: validation, security, use cases, business logic, index/query, etc.
                  This all means that the real use for a service facade is really as a "presentation layer" for other systems. In the same way that the UI provides a human with a facade for interacting with the domain model, a service provides another computer/system with a facade for interacting with the domain model. The big move here is that the service layer no longer sits between the domain and the human presentation layer. This was necessary before, because it was the only way people could see to "reuse" all the business logic and use case code that resided in the service itself. The presentation layer would just be a client of the service like any other system. The pain here is that the presentation layer often wanted access to the pieces hidden in the black box of the service (validation, security, etc) in order to avoid duplicating these things in the presentation layer (thereby defeating reuse). In reality, what we've been trying to do all along has been to map the UI presentation onto the computer/system presentation (much like Hibernate tries to map Objects onto the relational model). The impedance mismatch is proving to be too great.
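
                  One way to make the "validation as a reusable, declarative concern" idea concrete, without Terracotta or AspectJ, is a plain JDK dynamic proxy that intercepts mutations. This is only an illustrative sketch under that substitution, not what the post proposes verbatim; all names are made up.

```java
import java.lang.reflect.Proxy;
import java.util.function.Predicate;

// Illustrative sketch: declaratively attach a validation rule to a
// domain interface via a JDK dynamic proxy. The wrapped object can be
// handed to any layer (UI binding, services), so the rule is written
// once and reused everywhere instead of living inside a service.
interface Account {
    void setBalance(double balance);
    double getBalance();
}

class AccountImpl implements Account {
    private double balance;
    public void setBalance(double balance) { this.balance = balance; }
    public double getBalance() { return balance; }
}

class ValidatingProxy {
    static Account wrap(Account target, Predicate<Double> balanceRule) {
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class },
                (proxy, method, args) -> {
                    // Reject invalid mutations before they reach the object.
                    if ("setBalance".equals(method.getName())
                            && !balanceRule.test((Double) args[0])) {
                        throw new IllegalArgumentException(
                                "rejected mutation: " + args[0]);
                    }
                    return method.invoke(target, args);
                });
    }
}
```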

                  • #24
                    I agree that this could be very attractive in many scenarios and not just for RCP.

                    Here's the discussion I had with Hibernate's team about potential foundation for such features: http://opensource.atlassian.com/proj...browse/EJB-255

                    • #25
                      Maps as DTOs

                      I am designing an application which will have web services on one server being invoked by JSF backing beans on another server. I intend to use DTOs to limit the amount of data being passed around. Why pass a domain model with 50 attributes when the caller only needs 2 of them? Domain models are especially problematic when the client needs 2 attributes from 1 model and 3 from another one. I don't think domain models should be designed with the UI in mind, but that is what happens if you don't use DTOs. You design your domain models to have the attributes the screens need to display.

                      I understand people's objections to using DTOs when your DTOs are nothing more than copies of your domain objects. But what is wrong with having your DTOs be HashMaps? That eliminates redundant DTO / domain model classes, and works very well if the data is being passed around as XML documents. You simply dump the XML elements into maps and return the maps to the caller.

                      jh
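
                      A tiny sketch of the map-as-DTO projection described above (class and field names are made up for illustration): only the attributes the caller asked for are copied out of the domain object.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: project just the attributes a caller needs from a
// domain object into a Map, instead of maintaining a parallel DTO class.
class Customer {
    final String id;
    final String name;
    final String email; // not needed by this particular screen

    Customer(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    // The "DTO" is just a map holding the requested subset.
    Map<String, Object> toDto() {
        Map<String, Object> dto = new HashMap<>();
        dto.put("id", id);
        dto.put("name", name);
        return dto;
    }
}
```

The trade-off debated elsewhere in this thread applies: the keys are strings, so a misspelled key is only discovered at runtime.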

                      • #26
                        Originally posted by jhh09
                        I am designing an application which will have web services on one server being invoked by JSF backing beans on another server. I intend to use DTOs to limit the amount of data being passed around. Why pass a domain model with 50 attributes when the caller only needs 2 of them?
                        jh
                        +1 on your decision. As I have mentioned in an earlier post on this thread, DTOs are definitely useful in many cases, and yours is one of them. For remoting and distribution use cases, it is always preferable to use DTOs to limit the data being passed around. The point to take care of is NOT to make this a general practice in your application. Use DTOs only when you require them - try to reuse the domain model as much as possible.

                        Cheers.
                        - Debasish

                        • #27
                          I liked adepue's term 'some sort of separated ModelQuery service'. Is this paragraph referring to the same thing (it is talking about an ideal stack for a rich client):
                          "The use of ORMs is supposed to preclude the use of DAOs, and in our stack we indeed do not use them. With an ideal ORM API you would be able to use dependency injection to specify the queries that are to be used. Fetching data would thus be done with reference to a query. From then on there would be no contact with the ORM API until the transaction is to be committed or rolled back, which would be a command issued without reference to particular POJOs."
                          ?
                          Applications can work with such a service when they subclass org.strandz.lgpl.store.DataStore.
                          I also note that both the 'controller framework' and the 'binding framework' are implemented by http://www.strandz.org.
                          Last edited by cjmurphy; Jan 4th, 2007, 05:06 AM.

                          • #28
                            Originally posted by cjmurphy
                            ...
                            I also note that both 'controller framework' and 'binding framework' are both implemented by http://www.strandz.org.
                            I had not seen Strandz before, thanks for the link! Do you have any idea how active the project is?

                            • #29
                              Originally posted by jhh09
                              I am designing an application which will have web services on one server being invoked by JSF backing beans on another server. I intend to use DTOs to limit the amount of data being passed around. Why pass a domain model with 50 attributes when the caller only needs 2 of them? Domain models are especially problematic when the client needs 2 attributes from 1 model and 3 from another one. I don't think domain models should be designed with the UI in mind, but that is what happens if you don't use DTOs. You design your domain models to have the attributes the screens need to display.
                              Well, if a domain class has 50 attributes, and there are use cases that consistently concern only a certain subset of them, the first thing I would look at is the class itself. Chances are that, even from the domain modeling perspective, it should be broken down into finer pieces.

                              The problem (I have) with introducing a DTO to capture only the "needed" subset of attributes is this: while your domain model is spared the impact from the UI, somebody else is paying that price. Either the service API or the DTO becomes a lot more sensitive to UI changes - if the UI needs to display some additional attributes, the DTO will have to be changed to include them; if a new UI is added that needs to display a different subset of attributes, a new service API will have to be defined.

                              And to think that, ironically, this whole DTO idea was supposed to achieve "loose coupling"?

                              Originally posted by jhh09
                              I understand people's objections to using DTOs when your DTOs are nothing more than copies of your domain objects. But what is wrong with having your DTOs be HashMaps? That eliminates redundant DTO / domain model classes, and works very well if the data is being passed around as XML documents. You simply dump the XML elements into maps and return the maps to the caller.
                              Thanks, but no thanks. I realize this is more of a personal-taste thing - I always prefer strongly typed constructs over weakly typed ones. Using a dynamic scripting language for some quick and easy test setup is one thing; passing an essentially typeless map across a service boundary is a whole different one. There would be no way to clearly define or enforce any contract, which I believe is pretty much all a service layer is about.
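
                              To make that typed-versus-typeless contrast concrete (hypothetical names): with a typed DTO a misspelled attribute is a compile error, while with a map the same mistake compiles fine and just yields null at runtime.

```java
import java.io.Serializable;
import java.util.Map;

// Hypothetical sketch of the contrast: the typed DTO class *is* the
// contract; the map-based variant carries no enforceable contract.
class CustomerSummaryDto implements Serializable {
    final String id;
    final String name;

    CustomerSummaryDto(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

class Contrast {
    static String typed(CustomerSummaryDto dto) {
        return dto.name; // a typo here would not compile
    }

    static Object typeless(Map<String, Object> dto, String key) {
        return dto.get(key); // a typo in the key fails silently
    }
}
```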

                              • #30
                                How active is Strandz

                                I had not seen Strandz before, thanks for the link! Do you have any idea how active the project is?
                                At the moment there is one person fully occupied with the Strandz framework, and there are two production Strandz applications. One of these applications is getting support requests, as you can see at http://www.strandz.org/mvnforum/mvnf...hreads?forum=2
