PDX read-serialized, SDG repositories and ClassCastExceptions

  • PDX read-serialized, SDG repositories and ClassCastExceptions

    Given a Spring Data Gemfire Repository:
    Code:
    public interface CustomerRepository extends CrudRepository<Customer, Long> {
    
        Customer findByAccountOid(Long accountOid);
    
    }
    for domain object com.someco.domain.Customer:
    Code:
    @Region("Customer")
    public class Customer {
        
        /**
         * The object identifier of an {@link Account}.
         */
        private Long accountOid;
    
        ...etc...
    }
    and cache configuration:
    Code:
    <?xml version="1.0"?>
    <!DOCTYPE cache PUBLIC 
        "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN" 
        "http://www.gemstone.com/dtd/cache7_0.dtd">
    <cache>
        
        <pdx read-serialized="true" />
        
        ...etc...
        
        <region name="Customer" refid="PARTITION">
            <region-attributes>
                <key-constraint>java.lang.Long</key-constraint>
                <value-constraint>com.someco.domain.Customer</value-constraint>
                <partition-attributes redundant-copies="1" />
            </region-attributes>
        </region>
        
        ...etc...
        
    </cache>
    When I execute the CustomerRepository.findByAccountOid method, I get the following exception, which is thrown out of the dynamic proxy implementation of the findByAccountOid repo method. The dynamic proxy apparently executes the query, receives back a PdxInstanceImpl (presumably because read-serialized is true) and just returns it, regardless of whether its type matches the return type of the repo find method.

    Code:
    java.lang.ClassCastException: com.gemstone.gemfire.pdx.internal.PdxInstanceImpl cannot be cast to com.someco.domain.Customer
        at com.sun.proxy.$Proxy33.findByAccountOid(Unknown Source)
        at com.someco.repository.CustomerRepositoryIntegrationTest.testFindByAccountOid(CustomerRepositoryIntegrationTest.java:112)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
    I understand that, in a non-SDG world, using read-serialized obligates the programmer to check the type of the objects returned by queries and possibly manually induce de-serialization via PdxInstance.getObject().
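
    In code, the manual check I mean looks something like this (the helper class and untyped Region value here are just for illustration):
    Code:
    import com.gemstone.gemfire.cache.Region;
    import com.gemstone.gemfire.pdx.PdxInstance;

    public class PdxAwareLookup {

        // Resolve a possibly PDX-serialized Region value back to the Customer
        // domain type from above.
        public static Customer getCustomer(Region<Long, Object> customerRegion, Long key) {
            Object result = customerRegion.get(key); // a PdxInstance when read-serialized is true

            if (result instanceof PdxInstance) {
                // manually induce deserialization
                result = ((PdxInstance) result).getObject();
            }

            return (Customer) result;
        }
    }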

    In SDG, does that also imply that repository methods like my hypothetical findByAccountOid finder above need to have a return type of Object and that the client code calling the repo method is responsible for checking the type of the object?

    It does seem like the SDG dynamic proxy could reflect on the return type of the find method declared in the repository interface and attempt to convert the object returned from the query to that return type. After all, it's already doing all kinds of magic proxy stuff :-)

    For example, if the repository method had a return type of Object then it could just return whatever the query returned. If the return was a domain type (e.g., Customer) then it could attempt to convert the query result to a Customer (in this case by sending the message getObject() to the PdxInstanceImpl and casting it to a Customer).

    Thanks in advance for any comments, suggestions.

  • #2
    Just a couple of observations/recommendations... First, your CustomerRepository interface could extend SDG's GemfireRepository interface instead of Spring Data Commons' CrudRepository directly (GemfireRepository itself extends CrudRepository).
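
    For example (same finder, just a different base interface):
    Code:
    import org.springframework.data.gemfire.repository.GemfireRepository;

    public interface CustomerRepository extends GemfireRepository<Customer, Long> {

        Customer findByAccountOid(Long accountOid);

    }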

    Second, I am not sure about your environment, setup, or requirements, but I was curious why you were using GemFire's native cache.xml instead of SDG's XML namespace to configure your GemFire Peer Cache member.
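
    For reference, a rough equivalent of your cache.xml using the SDG XML namespace would look something like the following. This is only a sketch; double-check attribute names such as pdx-read-serialized and copies against the spring-gemfire XSD for your version.
    Code:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:gfe="http://www.springframework.org/schema/gemfire"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
               http://www.springframework.org/schema/gemfire
               http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

        <!-- Peer Cache with PDX read-serialized enabled -->
        <gfe:cache pdx-read-serialized="true"/>

        <!-- PARTITION Region with one redundant copy -->
        <gfe:partitioned-region id="Customer" copies="1"/>

    </beans>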

    Anyway, no matter; neither of these solves your problem.

    You are correct in that Spring Data GemFire's current implementation and extension of the SDC Repository abstraction, as it applies to GemFire, does not automagically introspect the value returned from the Cache Region and handle PDX instances appropriately in the context of Repositories. Perhaps more could be done to provide this support, especially as SDG already has a custom PdxSerializer (the MappingPdxSerializer) utilizing GemfirePersistentEntities and SDG's GemfireMappingContext. However, it may not be as trivial as it seems; I need to think on it more, especially in the case of Query methods.

    You do have a few options though...

    1. And the most obvious one, set the PDX read-serialized attribute to false.

    Keep in mind, if you are using Repositories on the Server (as in, your Spring-based app is a Peer Cache, cluster member), then what goes in is what comes out, meaning... even if you use the CustomerRepository to store Customer objects, an actual Customer object instance is stored, regardless of whether you have a PdxSerializer (e.g. GemFire's own ReflectionBasedAutoSerializer) configured for the Cache or not.

    Domain objects are only serialized when they need to be serialized, such as between client and server or between peer members for distribution and replication, when writing to a persistent Disk Store, or when overflowing data, etc. However, if you just perform a...

    Code:
     customersRegion.put("id", customerObject);
    Which would be the case if doing...

    Code:
     customerRepository.save(customer);
    Then what is stored in the Region is a Customer object, not the serialized form of the Customer object. It is possible for a Region to contain both serialized and unserialized data. GemFire tends to keep a value in the most recent form in which it was received, un-manipulated.

    However, certain Function operations, or even certain OQL-based Queries (e.g. Queries with method invocations on the Objects contained in the Cache Region), can cause an object to be fully deserialized back into actual Java object form, and then GemFire will continue to store the value as an object.
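
    For example, an OQL query that invokes a method not backed by a PDX field, such as toString(), forces each Customer to be deserialized just to evaluate the predicate (illustrative query):
    Code:
    SELECT * FROM /Customer c WHERE c.toString() LIKE '%Smith%'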

    2. You can always implement a custom Repository implementation that handles PDX instance conversion using...

    Code:
     pdxInstance.getObject();
    You don't really need to know the actual domain object/entity type. If you are using Region key/value constraints corresponding to the types in the Repository definition (and I highly recommend this), then given type erasure it does not really matter what you cast to, and the getObject() call will return the type of object you expect anyway.

    Note, also, that providing a "custom" Repository implementation can be used to override any existing Repository method provided by the framework, as sketched below.
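
    A sketch of what that might look like, using the standard Spring Data custom-implementation ("Impl" suffix) convention; the fragment interface and the GemfireTemplate-based query here are illustrative, not the only way to do it:
    Code:
    import org.springframework.data.gemfire.GemfireTemplate;
    import com.gemstone.gemfire.pdx.PdxInstance;

    // Fragment interface; have CustomerRepository extend it so this
    // implementation takes precedence over the derived query method.
    interface CustomerRepositoryCustom {
        Customer findByAccountOid(Long accountOid);
    }

    public class CustomerRepositoryImpl implements CustomerRepositoryCustom {

        private final GemfireTemplate customerTemplate; // template for the Customer Region

        public CustomerRepositoryImpl(GemfireTemplate customerTemplate) {
            this.customerTemplate = customerTemplate;
        }

        @Override
        public Customer findByAccountOid(Long accountOid) {
            // May return a PdxInstance when read-serialized is true
            Object result = customerTemplate.findUnique(
                "SELECT * FROM /Customer c WHERE c.accountOid = $1", accountOid);

            return (Customer) (result instanceof PdxInstance
                ? ((PdxInstance) result).getObject() : result);
        }
    }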

    3. You might, although I have not tried it, add a RepositoryProxyPostProcessor to the GemfireRepositoryFactory. This gives you access to the underlying proxy created for the Repository interface, which is backed by SDG's SimpleGemfireRepository class (the default). With the proxy, you can add your own AOP Advice, such as handling for the PdxInstance type by calling getObject() appropriately.
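
    Untested, but the Advice might look roughly like the following; the class name is made up, and the postProcess signature has varied across Spring Data Commons versions, so adjust to yours.
    Code:
    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;
    import org.springframework.aop.framework.ProxyFactory;
    import org.springframework.data.repository.core.RepositoryInformation;
    import org.springframework.data.repository.core.support.RepositoryProxyPostProcessor;
    import com.gemstone.gemfire.pdx.PdxInstance;

    public class PdxConvertingProxyPostProcessor implements RepositoryProxyPostProcessor {

        @Override
        public void postProcess(ProxyFactory factory, RepositoryInformation information) {
            factory.addAdvice(new MethodInterceptor() {
                @Override
                public Object invoke(MethodInvocation invocation) throws Throwable {
                    Object result = invocation.proceed();

                    // If the query returned a PdxInstance but the Repository method
                    // declares a concrete domain type, deserialize before returning.
                    if (result instanceof PdxInstance
                            && !invocation.getMethod().getReturnType().isInstance(result)) {
                        return ((PdxInstance) result).getObject();
                    }

                    return result;
                }
            });
        }
    }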

    4. Etc, Etc...

    With SDG and the power of the Spring Framework behind you, you have many options.

    I will think on this more and follow up on this posting after a bit.

    Hope this helps.

    Cheers!
    Last edited by John Blum; May 24th, 2014, 02:03 AM.

    • #3
      Hi John -

      Thanks very much for your reply. Very helpful. Without providing unnecessary detail, the example was pulled from an app with a client-server topology that doesn't use SDG on the server side (cache servers, locator, etc. are configured using cache.xml), which accounts for the cache.xml I provided.

      I didn't realize that it was possible to start an SDG-configured Spring application context by way of gfsh until I read Chapter 11 of the SDG documentation over the weekend. We really like using gfsh and had assumed that using SDG and starting/managing clusters via gfsh were incompatible, but I see now that that is an incorrect assumption.

      At any rate, I understand your answers above and appreciate the direction. I'll come back with additional questions as needed.

      Thanks very much!


      • #4
        Right, starting a GemFire Server from Gfsh with SDG XML config is new as of SDG 1.4.0.RELEASE. Unfortunately, you still need a small snippet of cache.xml to bootstrap the Spring application context using the new SpringContextBootstrappingInitializer, as shown below.
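
        For reference, the bootstrap snippet is on the order of the following (contextConfigLocations is the parameter name as I recall it from the SDG docs; spring-config.xml is a placeholder for your actual Spring XML config):
        Code:
        <?xml version="1.0"?>
        <!DOCTYPE cache PUBLIC
            "-//GemStone Systems, Inc.//GemFire Declarative Caching 7.0//EN"
            "http://www.gemstone.com/dtd/cache7_0.dtd">
        <cache>
            <initializer>
                <class-name>org.springframework.data.gemfire.support.SpringContextBootstrappingInitializer</class-name>
                <parameter name="contextConfigLocations">
                    <string>classpath:spring-config.xml</string>
                </parameter>
            </initializer>
        </cache>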

        For GemFire 7.5, I am planning an extension to the 'start server' Gfsh command, adding a new command-line option, 'spring-xml-file', that will take a pure Spring XML config file (one that uses the SDG XML namespace) to initialize a GemFire Server without requiring a cache.xml to get things started. Then you will be able to avoid cache.xml altogether and use a pure Spring approach to configuring and initializing a GemFire Server.
