Flashback Archive is becoming more and more my top Oracle feature, as it is easy to implement and offers huge advantages for all kinds of applications, and for compliance of course.
In my blog post from January 2014 I already explained the idea of Flashback Data Archive. Some of you might have found the time to test it and are now wondering about additional features or behaviors you wouldn't have expected.
Let’s go through some additional features:
Flashback Data Archive and Compression
As you probably know, Flashback Archive is no longer limited to the Enterprise Edition or a separate license but can be used in Standard Edition and Standard Edition One as well. But either way, the Flashback Archive history tables are partitioned, and a new partition is created every day! As the initial extent size of a single partition is 8 MB, you can easily end up with gigabytes of wasted storage within a single month. On the other hand: if you archive all your DML operations, the size of the individual partitions might even exceed the 8 MB, as it will if you drop or rename a column, because ALL old values will be stored in the corresponding archive table immediately.
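To get an impression of how much space those daily partitions actually consume, you can sum up the segments of the archive. This is just a sketch: it assumes you are connected as the schema owner and that the internal history segments follow the usual SYS_FBA naming.

```sql
-- Rough space check for Flashback Archive segments
-- (sketch; run as the owning schema, adjust to dba_segments plus owner filter if needed)
SELECT segment_name,
       ROUND(SUM(bytes)/1024/1024) AS mb
  FROM user_segments
 WHERE segment_name LIKE 'SYS_FBA%'
 GROUP BY segment_name
 ORDER BY mb DESC;
```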
With the option "OPTIMIZE DATA", several levels of compression will be added to the flashback archive, depending on the data type or machine (Exadata). Those are:
- Advanced Row Compression
- Advanced Lob Compression
- Advanced LOB Deduplication
- Segment-Level Compression Tiering
CREATE FLASHBACK ARCHIVE fda_opt TABLESPACE fda_tablespace RETENTION 2 YEAR OPTIMIZE DATA;
You don't have to take care of the compression yourself (as long as you are using Enterprise Edition and have a valid license for the Advanced Compression option); the FBA process will choose the best compression approach for you.
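Once the optimized archive exists, a table is associated with it in the usual way. The table name below is just an example, not from the original setup:

```sql
-- Assign a table to the optimized archive created above
-- (table name "customers" is a placeholder)
ALTER TABLE customers FLASHBACK ARCHIVE fda_opt;
```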
Flashback Archive Application
Tables do not stand on their own; in fact, there is probably one application, or more than one, associated with the tables of your schema. So the idea behind a Flashback Archive application is that a set of tables belongs to an application and is no longer managed individually.
First of all you have to create a Flashback Data Archive application:
BEGIN
  dbms_flashback_archive.register_application (
    application_name       => 'CUSTOMER',
    flashback_archive_name => 'FBA_OPT');
END;
/
Now the individual tables can be added to this application. In this example I will add all current tables of the schema to the application:
BEGIN
  FOR c_tab IN ( SELECT table_name FROM user_tables ) LOOP
    dbms_flashback_archive.add_table_to_application (
      application_name => 'CUSTOMER',
      schema_name      => 'FBA2',
      table_name       => c_tab.table_name);
    dbms_output.put_line ('Table added '||c_tab.table_name);
  END LOOP; -- c_tab
  COMMIT;
END;
/
The tables are now added to the application, and the application is registered to be used with Flashback Archive, but the archiving itself is not yet activated. And this is where the main difference between the individual management of tables and the application approach comes into play: while individual tables must be activated or deactivated one by one, the application can be managed with one single command:
EXECUTE dbms_flashback_archive.enable_application (application_name => 'CUSTOMER');
With the corresponding command the archiving can be disabled (dbms_flashback_archive.disable_application). The association even survives dropping the flashback data archive: when you recreate it, you can simply enable your application again, which makes management much easier.
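For completeness, the disable call mirrors the enable call shown above:

```sql
-- Suspend archiving for all tables of the application at once
EXECUTE dbms_flashback_archive.disable_application (application_name => 'CUSTOMER');
```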
Management of Table DDL
What happens if you want to make changes to the table layout, e.g. rename a column or add or remove columns? This was not possible with older releases, but with the current version you can simply add columns and remove them if necessary. One additional flashback table, named sys_fba_ddl_colmap_<object number>, is maintained for every ordinary table. This table lists all changes to the structure of the table.
SQL> ALTER TABLE personen DROP COLUMN bemerkung;

Table altered.

SQL> SELECT startscn, endscn, column_name, type, historical_column_name
       FROM sys_fba_ddl_colmap_25308;

STARTSCN   ENDSCN COLUMN_NAME          TYPE         HISTORICAL_COLUMN_NAME
-------- -------- -------------------- ------------ ----------------------
11156386          PERSID               NUMBER(10)   PERSID
11156386          ANREDE               VARCHAR2(5)  ANREDE
11156386          VORNAME              VARCHAR2(50) VORNAME
11156386          NACHNAME             VARCHAR2(50) NACHNAME
11156386          GEBURTSTAG           DATE         GEBURTSTAG
11503985 11504152 D_11504152_BEMERKUNG VARCHAR2(20) BEMERKUNG
As you can see, the dropped column has been renamed to D_11504152_BEMERKUNG to indicate that it was dropped at SCN 11504152.
But be aware that such changes are archived as well: in case you drop a column, the entire content of the table is stored in the flashback archive table.
Historical Data Movement
But what if you want to migrate your database? Data Pump simply ignores all flashback archive tables. So how can we move the archived information to a new location? The solution: via a temporary history table.
As it isn't possible to export the flashback archive tables directly, you must first create interim tables, e.g. with the following command:
CREATE TABLE adressen_archive AS SELECT * FROM SYS_FBA_HIST_25389;
To ensure that you get all data, regardless of whether the query is limited to current data or not, you should execute the following command on both the source and the target database:
Now you can use Data Pump for the export and import of all data, even though "adressen_archive" is holding the content of the flashback archive.
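A minimal Data Pump call for the base table and the interim table could look like this. Credentials, directory object, and dump file name are placeholders, not from the original setup:

```shell
# Export base table plus interim history table (all names except the schema are placeholders)
expdp fba3/password directory=DATA_PUMP_DIR dumpfile=adressen_archive.dmp \
      tables=ADRESSEN,ADRESSEN_ARCHIVE

# Import on the target database
impdp fba3/password directory=DATA_PUMP_DIR dumpfile=adressen_archive.dmp
```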
On the target you need to enable Flashback Archive for your tables again, as this information gets lost during the export and import procedure:
ALTER TABLE adressen FLASHBACK ARCHIVE;
And as the last step, you import your history data into the corresponding flashback archive table again using the following command:
BEGIN
  dbms_flashback_archive.import_history (
    owner_name1       => 'FBA3',
    table_name1       => 'ADRESSEN',
    temp_history_name => 'ADRESSEN_ARCHIVE');
END;
/
The result can be verified by querying the corresponding Flashback Archive history table again:
SELECT * FROM SYS_FBA_HIST_25373;
Keep in mind that the object number is different from the one on the source database.
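If you are unsure which history table belongs to which base table on the target, the mapping can be looked up in the data dictionary:

```sql
-- Map archived base tables to their history tables
-- (the archive table name, and thus the object number, differs per database)
SELECT table_name, archive_table_name
  FROM user_flashback_archive_tables;
```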