
  • Spring Integration Testing Questions

    I have a few questions regarding unit/mock/end-to-end testing with Spring Integration.

    As some background, the context is configured right now with bridges in a "default" profile connecting different units of work within the flow as well as the components that integrate with external systems. For unit testing, a "test" profile is used that just defines queues for the different units of work within the flow, so the messages can be intercepted and interrogated. The "test" profile also contains service activators that act as mock components.
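To make the mock-component idea above concrete, here is a minimal plain-Java sketch (no Spring dependencies, hypothetical class and method names) of the kind of POJO that the "test" profile's service activators could delegate to: it records every payload that reaches it and returns a canned reply, so a test can interrogate what arrived at that point in the flow. In a real configuration this class would be referenced by a `<service-activator>` element in the "test" profile.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a mock component: records received payloads
// and answers with a fixed reply so tests can assert on the traffic.
public class RecordingMockActivator {
    private final List<String> received = new ArrayList<>();
    private final String cannedReply;

    public RecordingMockActivator(String cannedReply) {
        this.cannedReply = cannedReply;
    }

    // In Spring Integration, a method like this would be the
    // ref/method target of a <service-activator> in the test profile.
    public String handle(String payload) {
        received.add(payload);
        return cannedReply;
    }

    public List<String> getReceived() {
        return received;
    }

    public static void main(String[] args) {
        RecordingMockActivator mock = new RecordingMockActivator("OK");
        mock.handle("order-123");
        System.out.println(mock.getReceived()); // [order-123]
    }
}
```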

    1) What is the best way to mock various Spring Integration components since many of the gateways are actually made up of two different components? I solved this by creating service activators for each component I wanted to mock. Is this a good/recommended approach? Is there a better way?

    2) How can I mock components that are in a chain and easily swap them in and out depending on whether I'm doing unit/integration testing? I found this challenging because components in a chain do not have an input/output channel. If I wanted to mock a component in a chain, it seems like I would have to mock the entire chain. There can potentially be many steps in the chain, so a lot of the configuration would go untested. Does anyone have any recommendations?

    3) Is there a good way to add unit integration tests into this approach? In order to add unit tests that integrate with external systems, I would have to add yet another profile that defines queues for intercepting messages, as well as all the components that integrate with external systems. Taking this approach, there is a lot of duplication in the configuration. Is there any better/recommended way to tackle this?


  • #2
    Well, it depends on what exactly you are testing.
    For example, in question #1 you are talking about creating special service activators to mock the endpoint of some flow. If I understand correctly, you may have a different endpoint in production, but for tests you use a dummy service activator. That's fine and a good approach, but for what purpose? When we develop Spring Integration we use this approach a lot, but that is because we are testing and ensuring that our Message dispatching strategies work, by validating that the Message reaches a particular endpoint. Once that is tested and validated, we present it to you (the user) as a contract that we expect you to rely on. Of course you can re-test it, but my point is that you would essentially be re-testing what has already been tested by the framework.

    Remember that each EIP component has a contract of its own. For example, if you are implementing a Message Filter, the contract is that it MUST return a boolean; if you are implementing a Message Transformer, it MUST return a non-null value; and so on. So what you should be testing is that your custom components comply with that contract. Once you've validated that they do, you are at the mercy of the framework, since Message dispatching, conversion, etc. are performed by the framework, and testing it would be overkill, unless you find that the framework does NOT do what it promises within the scope of a particular EIP contract. A test that proves such a failure would also constitute a missing scenario in our test suites, which we would gladly accept into the framework.
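The two contracts mentioned above can be illustrated with a minimal plain-Java sketch (hypothetical class and method names, no Spring types): a filter answers only true/false about a payload, and a transformer must never hand back null.

```java
// Minimal sketch of the two EIP contracts discussed above.
public class EipContracts {
    // Filter contract: decide pass/reject, return a boolean, nothing else.
    static boolean acceptsNonEmpty(String payload) {
        return payload != null && !payload.isEmpty();
    }

    // Transformer contract: always return a non-null value.
    static String toUpperCase(String payload) {
        if (payload == null) {
            throw new IllegalArgumentException("payload must not be null");
        }
        return payload.toUpperCase(); // never returns null for non-null input
    }

    public static void main(String[] args) {
        System.out.println(acceptsNonEmpty(""));       // false
        System.out.println(toUpperCase("order-123"));  // ORDER-123
    }
}
```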

    So in the end I would not unit test the entire flows. What I would recommend is this:
    1. Whenever you write code for a particular endpoint, use unit tests to make sure it returns what the EIP pattern contract requires.
    2. Keep a separate integration test suite, hooked up to the real environment, to validate that your message flows perform the business function you expect.
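Step 1 of the recommendation above could be sketched as follows (plain Java, hypothetical component name, no Spring context): the custom transformer is exercised in isolation and only its EIP contract is asserted. Step 2, validating the end-to-end business flow, would live in a separate integration suite against the real environment.

```java
// Contract-style unit test for a hypothetical custom transformer,
// run without any Spring context: only the EIP contract is checked.
public class OrderIdTransformerTest {
    // Hypothetical custom component under test.
    static class OrderIdTransformer {
        String transform(String rawOrder) {
            // transformer contract: must return a non-null value
            return "ORDER-" + rawOrder.trim();
        }
    }

    public static void main(String[] args) {
        OrderIdTransformer t = new OrderIdTransformer();
        String out = t.transform("  123 ");
        if (out == null) throw new AssertionError("transformer returned null");
        if (!out.equals("ORDER-123")) throw new AssertionError("unexpected: " + out);
        System.out.println("contract satisfied: " + out);
    }
}
```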

    Basically, think in terms of separation of responsibilities. If you are using the framework, you have to assume that the promised functionality has been tested and therefore you can rely on it. And we do have very good test coverage, I'd say 85-95% on average. But then again, bugs do happen, and that is when a mock test from you (the user) would help us quickly determine where the bug is and fix it.

    Hope that helps