Indexes grow insanely big over time and can't be shrunk


Indexes grow insanely big over time and can't be shrunk

Hohl, Gerrit

Hello everyone,

 

We use Apache Derby v10.14.2.0 in our software and now have a problem with one installation.

 

Every night we compact all tables – at least those that can be shrunk (using SYSCS_DIAG.SPACE_TABLE() and SYSCS_UTIL.SYSCS_COMPRESS_TABLE(String, String, Smallint)).
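In SQL terms, for a single table the nightly job boils down to roughly the following (a sketch – the schema and table name here are only placeholders, the real job iterates over all user tables):

SELECT CONGLOMERATENAME, NUMALLOCATEDPAGES, NUMFREEPAGES, ESTIMSPACESAVING
  FROM TABLE(SYSCS_DIAG.SPACE_TABLE('APP', 'SOME_TABLE')) s;
-- compress only if the numbers above show reclaimable space; the third argument is
-- SEQUENTIAL: non-zero runs in sequential mode, which needs less temporary space but is slower
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOME_TABLE', 1);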

Normally that setup works fine.

But now we faced an installation which ran out of free disk space for no apparent reason.

 

After some analysis we narrowed it down to one table which is used for logging.

Records are inserted and deleted often (its size is limited to 10,000 entries by the software).

The maximum – based on the structure – should be around 40 MB. Not really much.

So we were very surprised to see that this thing took around 14 GB.

 

I realized that I had summed up the table and its indexes to get that value.

After splitting it up, I saw that the table itself really only took 40 MB.

But two of the 6 indexes – one was the primary key index – took more than 13 GB of space.

Apache Derby also reported no NUMFREEPAGES and no ESTIMSPACESAVING for them.

 

My solution then was to shut down the service (it is an in-process database), connect to the database, drop the indexes, create a new column to temporarily store the primary key values, copy the values over, drop the primary key column, recreate it, copy the values back, drop the temporary column, and finally create all the indexes again.
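In SQL terms the index part of that procedure looks roughly like this (a sketch with placeholder names; the statements for recreating the primary key column through the temporary copy column are left out here, as they depend on the exact column definition):

DROP INDEX my_table_some_index;
-- ... repeated for each index, then the primary key column is rebuilt via the temporary column ...
CREATE INDEX my_table_some_index ON my_table (some_column);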

Now everything is back to normal and the table *including* its indexes is 40 MB.

 

Any idea on this? Why did the indexes grow that big? What can I do to prevent it?

It seems reorganising isn't an option, as Apache Derby itself doesn't realize that it can free the space.

 

Regards,

Gerrit

 


Re: Indexes grow insanely big over time and can't be shrunk

Rick Hillegas-3
Hi Gerrit,

I don't have a theory about what caused this problem. Maybe
COMPRESS_TABLE() has a serious bug. A couple questions:

Q1) Do you have the results of SPACE_TABLE() for this situation?

Q2) What value did you specify for the SEQUENTIAL argument of
COMPRESS_TABLE()?

Q3) Other than COMPRESS_TABLE(), what is the pattern of activity on this
table?

If this happens again, take a look at the ALTER TABLE DROP PRIMARY KEY
and ALTER TABLE ADD CONSTRAINT commands. They may make it simpler to
repair the damage.
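For a table with a primary key on column ID, that would look roughly like this (the table, column, and constraint names here are just placeholders):

ALTER TABLE t DROP PRIMARY KEY;
ALTER TABLE t ADD CONSTRAINT t_pk PRIMARY KEY (id);

Dropping and re-adding the constraint rebuilds the backing index without having to recreate the column itself.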

Thanks,
-Rick



AW: Indexes grow insanely big over time and can't be shrunk

Hohl, Gerrit
Hello everyone,

I'm sorry, I noticed that I only replied to Rick. 😅


Regards,
Gerrit


-----Original Message-----
From: Hohl, Gerrit
Sent: Wednesday, 16 September 2020 09:21
To: Rick Hillegas <[hidden email]>
Subject: AW: Indexes grow insanely big over time and can't be shrunk

Hello Rick,


thanks for your reply.


A1) SELECT * FROM TABLE(SYSCS_DIAG.SPACE_TABLE()) AS x WHERE (x.TABLEID = '1b30f0f5-0168-60ad-cb8f-0000366e0651');

CONGLOMERATENAME                   |ISIND&|NUMALLOCATEDPAGES   |NUMFREEPAGES        |NUMUNFILLEDPAGES    |PAGESIZE   |ESTIMSPACESAVING    |TABLEID
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EVENT_LOG_ENTRY                    |0     |1186                |115                 |1                   |32768      |3768320             |1b30f0f5-0168-60ad-cb8f-0000366e0651
SQL190118123711680                 |1     |1220394             |0                   |1190424             |4096       |0                   |1b30f0f5-0168-60ad-cb8f-0000366e0651
EVENT_LOG_ENTRY_LOG_NAME_INDEX     |1     |245                 |151                 |115                 |4096       |618496              |1b30f0f5-0168-60ad-cb8f-0000366e0651
EVENT_LOG_ENTRY_SOURCE_INDEX       |1     |437                 |177                 |241                 |4096       |724992              |1b30f0f5-0168-60ad-cb8f-0000366e0651
EVENT_LOG_ENTRY_TIME_CREATED_INDEX |1     |2032376             |0                   |1965938             |4096       |0                   |1b30f0f5-0168-60ad-cb8f-0000366e0651
EVENT_LOG_ENTRY_LOG_LEVEL_INDEX    |1     |181                 |134                 |76                  |4096       |548864              |1b30f0f5-0168-60ad-cb8f-0000366e0651
EVENT_LOG_ENTRY_USER_NAME_INDEX    |1     |258                 |208                 |113                 |4096       |851968              |1b30f0f5-0168-60ad-cb8f-0000366e0651
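(For reference, those two conglomerates alone account for the space we saw on disk: 1,220,394 pages × 4,096 bytes ≈ 5.0 GB for SQL190118123711680 and 2,032,376 pages × 4,096 bytes ≈ 8.3 GB for EVENT_LOG_ENTRY_TIME_CREATED_INDEX, i.e. roughly 13.3 GB together.)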


A2) CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('app', 'event_log_entry', 1);

That command is only issued if the cumulated ESTIMSPACESAVING for the table and its indexes is greater than 0.
But I'm not sure whether Derby really reorganised both problem indexes - namely SQL190118123711680 (the primary key) and EVENT_LOG_ENTRY_TIME_CREATED_INDEX - because according to those numbers there was nothing to free.
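For reference, the criterion mentioned above corresponds to a query along these lines (a sketch – our actual check is done from Java):

SELECT t.TABLENAME, SUM(s.ESTIMSPACESAVING) AS TOTAL_ESTIMSPACESAVING
  FROM TABLE(SYSCS_DIAG.SPACE_TABLE()) s
  JOIN SYS.SYSTABLES t ON t.TABLEID = s.TABLEID
 WHERE t.TABLETYPE = 'T'
 GROUP BY t.TABLENAME
HAVING SUM(s.ESTIMSPACESAVING) > 0;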

At the point I collected that information, the compress command may also have failed because there wasn't enough disk space left.
But I don't think that matters, because the indexes were already growing earlier, while there was still enough disk space.
That means none of the previous executions was able to reorganize those two indexes, even though there was enough space back then.


A3) I'm not sure what you mean by "pattern of activity", but we simply have a lot of "INSERT INTO event_log_entry (...) VALUES (...)" statements.
At the start of the application there is also a "SELECT COUNT(*) FROM event_log_entry" to get the current number of entries.
From then on the application keeps track of it internally.

Once 10,000 entries are reached, for each new entry that exceeds that number another entry is deleted (the oldest one). Those actions can also overlap.
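The deletion itself is a simple statement along these lines (a sketch of what the application issues for each excess entry; the oldest entry is the one with the smallest identity value):

DELETE FROM event_log_entry
 WHERE id = (SELECT MIN(id) FROM event_log_entry);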


Here is the DDL of the table and its indexes:

CREATE TABLE event_log_entry (
  id INTEGER NOT NULL PRIMARY KEY GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
  flags INT,
  changecount INT,
  checksum BIGINT,
  log_name VARCHAR(256),
  source VARCHAR(256),
  time_created VARCHAR(17),
  log_level VARCHAR(3),
  user_name VARCHAR(256),
  data BLOB(65536)
);
CREATE INDEX event_log_entry_log_name_index ON event_log_entry (log_name);
CREATE INDEX event_log_entry_source_index ON event_log_entry (source);
CREATE INDEX event_log_entry_time_created_index ON event_log_entry (time_created);
CREATE INDEX event_log_entry_log_level_index ON event_log_entry (log_level);
CREATE INDEX event_log_entry_user_name_index ON event_log_entry (user_name);


I already repaired it by dropping the indexes and recreating them (as I wrote in my mail).
Now everything is working perfectly again.
But that doesn't mean that it won't happen again, of course. 😉


Regards
Gerrit



Re: AW: Indexes grow insanely big over time and can't be shrunk

Rick Hillegas-3
Thanks for the extra information, Gerrit. I don't have any theories
about why the primary key and EVENT_LOG_ENTRY_TIME_CREATED_INDEX are
failing to compress. I don't have any better suggestion than the following:

1) Run SYSCS_DIAG.SPACE_TABLE() before and after
SYSCS_UTIL.SYSCS_COMPRESS_TABLE().

2) If you notice that Derby is failing to compress those two indexes
again, look in derby.log for errors or other diagnostics which may
provide some clues.

Thanks,
-Rick



AW: AW: Indexes grow insanely big over time and can't be shrunk

Hohl, Gerrit
Hello Rick,


Today I was able to have a look at the derby.log of that installation, but unfortunately I had to realize that the property "derby.infolog.append=true" wasn't set.
That means the log was more or less empty.
I'll activate it and keep an eye on it.
Hopefully I'll see something there next time.
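For the record, the line in question belongs in the derby.properties file of the Derby system directory (derby.system.home) – where exactly that lives depends on how the engine is started:

# keep appending to derby.log across restarts instead of overwriting it on every boot
derby.infolog.append=true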


Regards,
Gerrit
