Hi,
I think I've found a bug in Derby. I'm following the guidelines from the website and posting it here before reporting it, just to make sure it *is* a bug!

I'm running a set of ~50,000 queries on one table, using inserts and updates, and I want to be able to roll them back, so I turned off autocommit using setAutoCommit(false). As the update runs, the memory used by the JVM increases continually until I get the following exception about 20% of the way through:

ERROR 40XT0: An internal error was identified by RawStore module.
        at org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
        at org.apache.derby.impl.store.raw.xact.Xact.setActiveState(Xact.java)
        at org.apache.derby.impl.store.raw.xact.Xact.openContainer(Xact.java)
        at org.apache.derby.impl.store.access.conglomerate.OpenConglomerate.init(OpenConglomerate.java)
        at org.apache.derby.impl.store.access.heap.Heap.open(Heap.java)
        at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
        at org.apache.derby.impl.store.access.RAMTransaction.openConglomerate(RAMTransaction.java)
        at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getDescriptorViaIndex(DataDictionaryImpl.java)
        at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.locateSchemaRow(DataDictionaryImpl.java)
        at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.getSchemaDescriptor(DataDictionaryImpl.java)
        at org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
        at org.apache.derby.impl.sql.compile.QueryTreeNode.getSchemaDescriptor(QueryTreeNode.java)
        at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(FromBaseTable.java)
        at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(FromBaseTable.java)
        at org.apache.derby.impl.sql.compile.FromList.bindTables(FromList.java)
        at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(SelectNode.java)
        at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(DMLStatementNode.java)
        at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(DMLStatementNode.java)
        at org.apache.derby.impl.sql.compile.ReadCursorNode.bind(ReadCursorNode.java)
        at org.apache.derby.impl.sql.compile.CursorNode.bind(CursorNode.java)
        at org.apache.derby.impl.sql.GenericStatement.prepMinion(GenericStatement.java)
        at org.apache.derby.impl.sql.GenericStatement.prepare(GenericStatement.java)
        at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(GenericLanguageConnectionContext.java)
        at org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java)
        at org.apache.derby.impl.jdbc.EmbedStatement.executeQuery(EmbedStatement.java)
        at vi.hotspot.database.DataInterface._query(DataInterface.java:181)
        at vi.hotspot.database.DataInterface.query(DataInterface.java:160)
        at vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:518)
        at vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
        at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
        at java.lang.Thread.run(Thread.java:534)
vi.hotspot.exception.ServerTransactionException
        at vi.hotspot.database.UpdateManager.updatePartialTable(UpdateManager.java:555)
        at vi.hotspot.database.UpdateManager.updatePartialTables(UpdateManager.java:619)
        at vi.hotspot.database.UpdateManager.run(UpdateManager.java:924)
        at java.lang.Thread.run(Thread.java:534)

Derby is running in standalone mode.

Cheers,

Chris
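P.S. In case it helps, the shape of my update loop is roughly the sketch below. The database URL, table name, and values are placeholders for this post, not my real schema (the real statements are built dynamically), but the autocommit-off / rollback structure is the same:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class UpdateLoop {
    public static void main(String[] args) throws Exception {
        // Embedded ("standalone") Derby; the database name is made up.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:scratchdb;create=true");
        conn.setAutoCommit(false);   // keep the whole run in one transaction so it can be rolled back
        try {
            Statement setup = conn.createStatement();
            setup.execute("CREATE TABLE SAMPLES (ID INT PRIMARY KEY, VAL DOUBLE)");
            setup.close();
            for (int i = 0; i < 50000; i++) {
                // a fresh Statement and a freshly built SQL string for every row
                Statement s = conn.createStatement();
                s.executeUpdate("INSERT INTO SAMPLES VALUES (" + i + ", " + Math.random() + ")");
                s.close();
                // JVM memory climbs steadily while this loop runs;
                // the 40XT0 error shows up around 20% of the way through
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();         // the reason autocommit is turned off
            throw e;
        } finally {
            conn.close();
        }
    }
}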
Have you looked in the derby.log file? Often this particular error is not the actual problem, but rather a consequence of an error that preceded it. I often execute 100,000 repeated executions of a single prepared statement in a single transaction, so 50,000 is doable - but it may depend on what those queries do. Attaching the whole derby.log may give a better clue as to what is going on. If the JVM does run out of memory, then all sorts of errors can result; did you limit the memory of the JVM with a startup parameter?

Chris wrote:
> I'm running a set of ~50,000 queries on one table, using inserts and
> updates, and I want to be able to roll them back, so I turned off
> autocommit using setAutoCommit(false). As the update runs, the memory
> used by the JVM increases continually until I get the following
> exception about 20% of the way through:
>
> ERROR 40XT0: An internal error was identified by RawStore module.
> [...]
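For reference, the "single prepared statement, many executions, one transaction" pattern I mean is roughly the sketch below. The database and table names are invented for the example, and it assumes the table already exists:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RepeatedExecution {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");   // embedded Derby
        Connection conn = DriverManager.getConnection("jdbc:derby:scratchdb;create=true");
        conn.setAutoCommit(false);
        // One PreparedStatement, compiled once and reused for every row.
        // Assumes a SAMPLES(ID INT, VAL DOUBLE) table already exists.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SAMPLES (ID, VAL) VALUES (?, ?)");
        try {
            for (int i = 0; i < 100000; i++) {
                ps.setInt(1, i);
                ps.setDouble(2, Math.random());
                ps.executeUpdate();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        } finally {
            ps.close();
            conn.close();
        }
    }
}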
Before you call it a bug... I wasn't sure about your message. Did you mean to say that you had a single long transaction containing 50K statements? I'm just curious whether you were running out of memory or some other resource. A better question might be why you would want such a long transaction in the first place; there may be a better way of solving your problem. Please forgive me, I'm new here ... ;-)

-Gumby
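To make that concrete: if the whole 50K-statement run doesn't actually have to be atomic, one alternative is to commit periodically so that no single transaction grows that large. A rough sketch, with an arbitrary batch size of 1,000 and an invented table name that is assumed to already exist:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ChunkedCommits {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:scratchdb;create=true");
        conn.setAutoCommit(false);
        // Assumes a SAMPLES(ID INT, VAL DOUBLE) table already exists.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO SAMPLES (ID, VAL) VALUES (?, ?)");
        try {
            for (int i = 0; i < 50000; i++) {
                ps.setInt(1, i);
                ps.setDouble(2, Math.random());
                ps.executeUpdate();
                if ((i + 1) % 1000 == 0) {
                    conn.commit();    // keep each transaction small instead of one 50K-row transaction
                }
            }
            conn.commit();            // commit whatever is left in the final partial chunk
        } finally {
            ps.close();
            conn.close();
        }
    }
}

The trade-off, of course, is that you give up the ability to roll the whole run back in one go.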