Prefetch and txSize optimum values

  • Prefetch and txSize optimum values

    What is the optimum value for prefetch and txSize?

    I set it to 10000, and Dave commented on another post that this value is too high.

    Dave had replied to one of my earlier queries:

    "AMQP has a prefetch limit ("Quality of Service") that you can set on the channel before you consume any messages. Spring AMQP exposes that as a prefetchCount property in the message listener container. But if you are auto-acking my reading is that this has no effect (did you try it?). Maybe you have to consider acking the messages? You can set the txSize (in the current master version of Spring AMQP, or nightly snapshots) in the message listener container quite high to make it more efficient. If it works, but isn't as fast as you like we could consider optimisations. "

    So I set the txSize and prefetch to 10000. I am still unclear on how exactly the two are related.

    From the docs:

    /**
     * Tells the broker how many messages to send to each consumer in a single request. Often this can be set quite high
     * to improve throughput. It should be greater than or equal to {@link #setTxSize(int) the transaction size}.
     */
    public void setPrefetchCount(int prefetchCount)

    /**
     * Tells the container how many messages to process in a single transaction (if the channel is transactional). For
     * best results it should be less than or equal to {@link #setPrefetchCount(int) the prefetch count}.
     * @param txSize the transaction size
     */
    public void setTxSize(int txSize)

    In my case I don't really care about acks; I am only setting txSize so that I can use the prefetch count. For txSize, what the docs suggest and what Dave suggested seem contradictory: Dave suggested setting it to a very high value to make it more efficient, while the docs say it should be <= prefetchCount. Can someone throw some light on this?
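    For context, here is a minimal wiring sketch of the two settings on a Spring AMQP listener container. The names connectionFactory, listener, and the queue name are assumptions, and the values are illustrative, not recommendations:

    ```java
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

    // Configuration fragment: assumes connectionFactory and listener
    // are already defined elsewhere in the application.
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("my.queue");     // hypothetical queue name
    container.setPrefetchCount(200);         // broker-side window of unacked messages
    container.setTxSize(100);                // ack/commit batch; keep <= prefetchCount
    container.setMessageListener(listener);
    container.start();
    ```

    Per the javadocs quoted above, the constraint is txSize <= prefetchCount, so the container never tries to commit a batch larger than the broker has delivered.
    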

    Last edited by avyaktha; May 23rd, 2011, 12:32 PM.

  • #2
    The broker sends messages to the client in batches, and then they are consumed and acknowledged in groups that might be of a different size. The two settings (prefetch count and tx size) control those two "loops". If there is a rollback, only the current txSize worth of messages is returned to the broker. The higher the prefetch size, the bigger the protocol frame needed to send the messages over the wire, so the longer it takes. Depending on the message size there is a limit to how much you can improve throughput by increasing prefetch, and I have never heard of a case where a number higher than 100-200 was optimum (but as usual YMMV).
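    The two "loops" described above can be sketched as a toy simulation in plain Java (this is not Spring AMQP code; the names and the tiny message counts are illustrative only). The broker tops the client up to the prefetch window, while the container commits in txSize batches:

    ```java
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.List;

    // Toy model of the two "loops": the broker pushes up to PREFETCH
    // unacked messages to the client; the container commits (acks) them
    // in batches of TX_SIZE. On a rollback, only the current uncommitted
    // batch (at most TX_SIZE messages) would go back to the broker.
    public class PrefetchTxSizeDemo {
        static final int PREFETCH = 4; // broker-side window of unacked messages
        static final int TX_SIZE = 2;  // container-side commit batch

        public static void main(String[] args) {
            ArrayDeque<Integer> broker = new ArrayDeque<>();
            for (int i = 1; i <= 10; i++) broker.add(i);

            List<Integer> clientBuffer = new ArrayList<>(); // prefetched, unacked
            List<Integer> committed = new ArrayList<>();
            int commits = 0;

            while (!broker.isEmpty() || !clientBuffer.isEmpty()) {
                // Loop 1: broker tops the client up to the prefetch window.
                while (clientBuffer.size() < PREFETCH && !broker.isEmpty()) {
                    clientBuffer.add(broker.poll());
                }
                // Loop 2: container processes and commits one txSize batch.
                int batch = Math.min(TX_SIZE, clientBuffer.size());
                for (int i = 0; i < batch; i++) {
                    committed.add(clientBuffer.remove(0));
                }
                commits++;
            }
            System.out.println(committed.size()); // 10 (all messages delivered)
            System.out.println(commits);          // 5  (10 messages / txSize 2)
        }
    }
    ```

    With 10 messages, a prefetch window of 4, and a txSize of 2, all 10 messages arrive in 5 commit batches, which is why the docs want txSize <= prefetchCount: the commit loop can never drain more than the prefetch loop has delivered.
    
    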

    If you don't care about acks, then set the acknowledgeMode to NONE and the txSize and prefetch count will be ignored. The broker just sends the messages as quickly as it can in batch sizes that it chooses itself (and therefore you probably shouldn't care about) - I think it uses 100 by default.
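    A sketch of that no-ack configuration (again a fragment; connectionFactory is assumed to exist elsewhere):

    ```java
    import org.springframework.amqp.core.AcknowledgeMode;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

    // Configuration fragment: with NONE, the broker fires messages at the
    // client as fast as it can, and prefetchCount/txSize are ignored.
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(connectionFactory);
    container.setAcknowledgeMode(AcknowledgeMode.NONE);
    ```
    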

    I don't quite see what is unclear about the javadocs. Maybe you can help to improve them.