Once upon a time there was a very experienced database administrator who accidentally dropped a 12 TB index from a 55 TB table (you can read the sob story here).

The question I had was “How did he fix it?” but I did not have the details until today, when they were revealed on the Oracle-L mailing list.

From: oracle-l-bounce@freelists.org <oracle-l-bounce@freelists.org> on behalf of Tim Gorman <tim@evdbt.com>
Sent: Thursday, September 22, 2016 4:32 PM
To: vkeylis2009@gmail.com; oracle-l@freelists.org
Subject: Re: Creating unique index on huge table.


Several years ago, I accidentally dropped a unique index on a 55TB table, which represented about 12TB of index. There are several on this list who know exactly the circumstances, and one person on this list who was the recipient of the tearful phone call I made when I realized what I had done. :)

High-level description of what worked to rebuild it...

1. Run "create partitioned index ... unusable" to create the partitioned index with all partitions empty.
2. Create a shell-script to run NN SQL*Plus processes simultaneously, where "NN" is a number of your choice, each process doing the following:

   alter index <index-name> partition <partition-name> parallel <degree> nologging compute statistics

We ordered the SQL*Plus calls inside the shell-script so that the most-recent partitions (the table was partitioned by a DATE column) were populated first, and then let the builds progress back in time. Depending on the application, you can be doing some or all of the normal activities on the table. Our assumption (which proved correct) was that all DML occurs against the newest partitions, so those were the partitions that needed to be made "usable" first.
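The driver script described above can be sketched roughly as follows. This is only an illustration of the technique, not the actual script: the index name, partition names, and parallel degree are hypothetical placeholders, and the SQL*Plus invocation is left as a comment since credentials and connect strings are site-specific. The sketch deals the rebuild statements, newest partition first, round-robin across NN worker files, each of which would then be fed to its own background SQL*Plus session.

```shell
#!/bin/bash
# Hypothetical sketch of the parallel rebuild driver. Names below
# (index, partitions, degree) are placeholders, not from the post.

INDEX_NAME="big_table_idx"   # hypothetical index name
DEGREE=8                     # per-partition rebuild parallel degree
NN=4                         # number of simultaneous SQL*Plus workers

# Partitions listed newest-first, so the DML-active date ranges
# become usable before the historical ones.
PARTITIONS=(p_2016_09 p_2016_08 p_2016_07 p_2016_06 p_2016_05 p_2016_04)

# Start from clean worker files on each run.
rm -f worker_*.sql

# Deal the rebuild statements round-robin into NN worker scripts.
for i in "${!PARTITIONS[@]}"; do
  w=$(( i % NN ))
  echo "alter index ${INDEX_NAME} rebuild partition ${PARTITIONS[$i]} parallel ${DEGREE} nologging compute statistics;" \
    >> "worker_${w}.sql"
done

# In the real script, each worker file would run in its own
# background SQL*Plus session, e.g.:
#   for w in $(seq 0 $((NN - 1))); do
#     sqlplus -s user/pass @"worker_${w}.sql" &
#   done
#   wait
```

Note that the generated statements use the corrected syntax (with the rebuild keyword); worker_0.sql ends up with the newest partition, so it is the first one a worker would make usable.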

This approach won't eliminate downtime or performance problems, but it will likely minimize them.

I hope this makes sense?



Note: There is a typo in the above email message. The correct rebuild instruction is: alter index <index-name> rebuild partition <partition-name> parallel <degree> nologging compute statistics.

All I have to say is: luckily somebody had the foresight to partition such a large table. I remember when I was a junior database administrator many years ago, I was appalled when I first encountered a 5 GB table. That’s GB, not TB. The biggest table I had managed until then was a few hundred MB in size, and it was kept that way through regular purging and archiving.