For years now, many customers have been using export/import or Data Pump as a backup solution in case they lose a table or some of its content. But as the database grows, the size of the export – and, more problematically, the time needed to create it – grows as well.
Recently one of my customers faced the problem that the time window reserved for the regular backup, the export, and some batch jobs was no longer sufficient: the batch run had not finished by the time the users came back to work in the morning.
So the question is: how can the export time be reduced?
The answer is pretty simple: don’t do an export – that saves a lot of time!
“No way!” said the customer. “A while ago we had a problem with a batch process, so we had to restore some tables with the data from the export.”
Examples of how to recover one or more tables
Use Flashback Database
If Flashback Database is enabled you can simply rewind your database to an older point in time.
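A minimal sketch of such a rewind (the timestamp is just an example; the database must be mounted, not open):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO TIMESTAMP
       to_date('31.05.2013 09:00:00','DD.MM.YYYY HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;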
This solution helps if it is acceptable to rewind the entire database. If multiple applications run on the same database, this might not be feasible, and the space required for the flashback logs needs to be considered as well. For some situations it might be sufficient to use a guaranteed restore point. This feature is available even if Flashback Database is not enabled and can help when new applications or versions are rolled out: for a certain amount of time you can rewind the database to that specific point in time, and once the application runs fine you simply remove the restore point and no longer have to care about the flashback logs.
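A sketch of that workflow (the restore point name before_rollout is an arbitrary example):

SQL> CREATE RESTORE POINT before_rollout GUARANTEE FLASHBACK DATABASE;

-- if the rollout misbehaves, rewind to the restore point:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT before_rollout;
SQL> ALTER DATABASE OPEN RESETLOGS;

-- if everything runs fine, simply drop it:
SQL> DROP RESTORE POINT before_rollout;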
Use a Standby database
If Data Guard or another disaster recovery solution is available, you can simply stop the redo apply on the standby database. Then you can open the standby read-only (or, with a guaranteed restore point, even read-write) and unload the table or have a look at its data. You only need to verify that the standby has not already applied the faulty batch – which can be ensured by stopping the apply right before the batch run. After the run has completed successfully, you start the apply again. In the rare event that a disaster occurs right at that time you can still switch over to the standby; it will just take a few minutes longer until the database has recovered all archived logs.
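With a physical standby managed by Data Guard, that sequence could look roughly like this (a sketch, assuming SQL*Plus management rather than the Data Guard broker):

-- on the standby, right before the batch run:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;

-- inspect or unload the table data, then resume the apply:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;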
Use an RMAN backup
With Oracle 12c, table recovery is a “new” RMAN feature. That is indeed helpful and can eliminate the need for an export in many cases. But how does it work?
Unfortunately there is still no way to directly grab an object out of the RMAN backup. Instead, the table restore creates an auxiliary database with the basic tablespaces SYSTEM, SYSAUX, and UNDO plus the tablespace holding the table to be recovered.
Example for RMAN table restore:
RMAN> RECOVER TABLE demo.personen_drop
        UNTIL TIME "to_date('31.05.2013 09:30:00','DD.MM.YYYY HH24:MI:SS')"
        AUXILIARY DESTINATION '/u03/orabackup/recover';
The auxiliary destination is mandatory because that is the path for the auxiliary data files. By default the table will be re-created, so if the table still exists the recovery will fail. To avoid this failure – maybe you just want to compare both tables – the following options can be specified:
- NOTABLEIMPORT: in this case only the Data Pump dump is created in the DP_DUMP_DIR directory; the table itself is not imported.
- REMAP TABLE: the original table stays where it is and the “recovered” table will be imported with a new name.
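For example, to recover the table from the command above under a new name (the target name personen_recovered is an arbitrary example):

RMAN> RECOVER TABLE demo.personen_drop
        UNTIL TIME "to_date('31.05.2013 09:30:00','DD.MM.YYYY HH24:MI:SS')"
        AUXILIARY DESTINATION '/u03/orabackup/recover'
        REMAP TABLE demo.personen_drop:personen_recovered;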
Here’s the output for the recovery without importing the table:
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/u03/orabackup/recover''
Performing export of tables…
EXPDP> Starting "SYS"."TSPITR_EXP_CeFE_accc":
EXPDP> Estimate in progress using BLOCKS method…
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 256 KB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "DEMO"."PERSONEN_DROP" 26.34 KB 576 rows
EXPDP> Master table "SYS"."TSPITR_EXP_CeFE_accc" successfully loaded/unloaded
EXPDP> Dump file set for SYS.TSPITR_EXP_CeFE_accc is:
EXPDP> Job "SYS"."TSPITR_EXP_CeFE_accc" successfully completed at Fri May 31 10:04:17 2013 elapsed 0 00:00:38
Not performing table import after point-in-time recovery
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
As you probably know, the success and the time needed depend on the size of the tablespaces to be created. If you have a 20 GB SYSTEM tablespace because of auditing, you should change your strategy. And keep in mind that the auxiliary database runs its own instance with a certain memory footprint; you need to check whether that fits on your server.
But if you compare this with the effort of exporting every night for the rare event that a batch fails, it might well be the far better solution. And even if you are not on 12c, the procedure to recover a table can be used with an 11g database as well: database duplication is simple with RMAN – as RMAN is the best and simplest solution for Oracle backups!