hibernate.search.worker.batch_size to prevent OutOfMemoryException while inserting many objects

Description

While inserting many objects in a single batch, the application runs out of memory because the FullTextIndexEventListener for Lucene accumulates the pending index updates in memory until the transaction completes.

This is a tough problem. Today the only solution is to apply the work in n transactions rather than one. Introducing a batch_size setting that forces a "flush" of the index queue once it grows past a threshold could help, but it would sacrifice the transactional semantics.

hibernate.cfg.xml

<event type="post-insert">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
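
If the proposed setting existed, it could be enabled next to the listener registration. A hypothetical snippet, using the property name from this issue's title; the value of 100 is illustrative, and no such property exists in 3.0.0.beta1:

<!-- Hypothetical: flush the index work queue every 100 operations -->
<property name="hibernate.search.worker.batch_size">100</property>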

The inserting code looks something like this:

Session session = sessionFactory.openSession();
session.setCacheMode(CacheMode.IGNORE);
Transaction tx = session.beginTransaction();
for ( int i = 0; i < 100000; i++ ) {
    Item item = new Item(...);
    session.save(item);
    if ( i % 100 == 0 ) {
        // Flushing and clearing frees the Session's first-level cache,
        // but the listener's index work queue keeps growing until commit.
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();
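
Until then, the workaround mentioned in the description is to split the work across n transactions instead of one. A minimal sketch mirroring the loop above; the commit interval of 100 is illustrative, and new Item(...) stands in for the elided constructor call from the report:

Session session = sessionFactory.openSession();
session.setCacheMode(CacheMode.IGNORE);
Transaction tx = session.beginTransaction();
for ( int i = 0; i < 100000; i++ ) {
    Item item = new Item(...);
    session.save(item);
    if ( i % 100 == 0 ) {
        // Committing ends the transaction, letting the listener
        // apply and release its queued index work before continuing.
        tx.commit();
        session.clear();
        tx = session.beginTransaction();
    }
}
tx.commit();
session.close();

This keeps memory bounded, but it trades the single atomic transaction for n smaller ones, which is exactly the sacrifice of transactional semantics noted above.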

Environment

None

Status

Assignee

Unassigned

Reporter

Stefan

Labels

None

Suitable for new contributors

None

Pull Request

None

Feedback Requested

None

Components

Fix versions

Affects versions

3.0.0.beta1

Priority

Major