In a cluster it can be beneficial not to call .destroy() on each CacheProvider, since that clears out the complete cache, which may not be what you want.
Not so sure this is safe in terms of proper classloader release, since both Hibernate classes and user classes can be stored in the second level cache. Unless, of course, you are talking about releasing the local node while making sure we are not forcing all nodes to clear.
Either way, I don't think a config param is necessarily needed here. Seems each CacheProvider should know when it is safe to clear itself.
For example, TreeCacheProvider could add checks against the underlying getCacheMode() and properly determine a course of action based on the outcome (LOCAL/REPL_ASYNC/REPL_SYNC).
There must be something similar for the other clusterable caches...
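To make the suggestion concrete, here is a minimal sketch of the decision logic: clearing on shutdown is harmless only when the cache is private to the node. The class and method names are hypothetical (the real TreeCacheProvider API differs); only the LOCAL/REPL_SYNC/REPL_ASYNC mode names come from JBossCache.

```java
// Sketch only: models the "check getCacheMode() before clearing" idea.
// CacheStopPolicy and its methods are illustrative, not real Hibernate API.
public class CacheStopPolicy {

    public enum CacheMode { LOCAL, REPL_SYNC, REPL_ASYNC }

    /**
     * A LOCAL cache belongs to this node alone, so clearing it on
     * shutdown affects nobody else. A replicated cache is shared: a
     * full clear would be broadcast and wipe every node's copy, so
     * only a local-only evict is safe there.
     */
    public static boolean safeToClearOnStop(CacheMode mode) {
        return mode == CacheMode.LOCAL;
    }

    public static String stopAction(CacheMode mode) {
        return safeToClearOnStop(mode) ? "clear" : "evict-local-only";
    }

    public static void main(String[] args) {
        for (CacheMode m : CacheMode.values()) {
            System.out.println(m + " -> " + stopAction(m));
        }
    }
}
```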
This issue was originally opened on my behalf, but I don't think the description really describes the problem that well. Once my application is in production, this will be a very big problem for me.
We are using Hibernate with JBoss Cache for the 2nd level cache. The cache is distributed across a dozen or more nodes. If one of the nodes is shut down, the distributed cache is completely removed when the Hibernate session factory is closed. What this means, of course, is that our applications running on those nodes will immediately hit a wall, because none of the data is cached and it will have to be retrieved from the database again.
After discussions with the JBossCache developers, the only way around this prior to JBossCache-1.3 is to use TreeCache.evict(). evict() explicitly operates locally. However, it does have some other limitations, such as the fact that it cannot be used within a transaction.
The safer bet starting with JBossCache-1.3 is to use the new Options API, where you can specify that certain operations be performed in local mode only (amongst other stuff). Options is also the basis for the new optimistic locking model, which is supported in Hibernate 3.2.
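A rough, self-contained model of what "local mode only" buys you on shutdown: when the option is set, a removal touches only the node's own store and skips replication, so the other nodes keep their cached data. This is not the actual JBossCache 1.3 API (which, as noted above, exposes local-only behavior through per-operation Options); every class and method name below is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of per-operation local mode; not JBossCache's API.
public class LocalModeSketch {

    /** Stand-in for a per-operation option carrying a local-only flag. */
    public static class Option {
        public boolean cacheModeLocal;
    }

    private final Map<String, Object> localStore = new HashMap<>();
    private int replicationCalls; // how often we would have broadcast

    public void put(String fqn, Object value) {
        localStore.put(fqn, value);
    }

    /** Removes locally; replicates to the cluster unless local mode is set. */
    public void removeNode(String fqn, Option opt) {
        localStore.remove(fqn);
        if (opt == null || !opt.cacheModeLocal) {
            replicate(fqn); // a real cache would push this to peer nodes
        }
    }

    private void replicate(String fqn) {
        replicationCalls++;
    }

    public int replicationCalls() { return replicationCalls; }
    public boolean contains(String fqn) { return localStore.containsKey(fqn); }
}
```

On session factory close, removing entries with the local-only flag set would free this node's memory without wiping the copies held by the rest of the cluster, which is exactly the behavior the node-shutdown scenario above needs.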
Closing stale resolved issues