
Tuesday, August 18, 2020

How to Decide Undo Size and Retention?

 

Optimal and Needed UNDO Size 

From Oracle 9i onwards, undo is managed in undo segments within an undo tablespace, replacing manually managed rollback segments; traditionally, undo data was stored in rollback segments until a COMMIT or ROLLBACK was issued.

Automatic undo management lets the DBA control how long undo information should be retained after commit. The larger your undo tablespace, the more undo you can hold for long-running DML operations (preventing "snapshot too old" errors on long-running queries).

You can choose to allocate a specific size for the UNDO tablespace and then set the optimal UNDO_RETENTION according to that size. This is especially useful when disk space is limited and you do not want to allocate more space than the required UNDO size.

The formula to calculate the Optimal undo retention is,

OPTIMAL UNDO RETENTION = ACTUAL UNDO SIZE / (DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC)

Get the values for the above formula using the queries below and substitute them to get the optimal undo retention value.

Find Actual Undo Size:

SELECT SUM(a.bytes) "UNDO_SIZE"
  FROM v$datafile a, v$tablespace b, dba_tablespaces c
 WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
   AND b.name = c.tablespace_name AND a.ts# = b.ts#;

UNDO_SIZE
----------
7948206080

Find Undo Blocks per Second:

SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
      "UNDO_BLOCK_PER_SEC"
  FROM v$undostat;

UNDO_BLOCK_PER_SEC
------------------
6.47

Find Undo Block Size:

SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
 FROM v$parameter
WHERE name = 'db_block_size';

DB_BLOCK_SIZE [Byte]
--------------------
8192

Therefore,
OPTIMAL UNDO RETENTION = 7948206080 /(8192 * 6.47)=149959.8145 [sec]
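
Once the optimal retention is known, it can be applied to the instance. A minimal sketch, assuming an spfile is in use (so SCOPE=BOTH is valid) and using the value computed above:

SQL> ALTER SYSTEM SET UNDO_RETENTION = 149960 SCOPE=BOTH;

System altered.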

Using the query below you can find all of this information collectively:

Script to find required Optimal Undo Retention:

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       ROUND((d.undo_size / (to_number(f.value) *
       g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
  FROM (
       SELECT SUM(a.bytes) undo_size
          FROM v$datafile a, v$tablespace b, dba_tablespaces c
         WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
           AND b.name = c.tablespace_name AND a.ts# = b.ts#
       ) d,
       v$parameter e, v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
              undo_block_per_sec
         FROM v$undostat
       ) g
WHERE e.name = 'undo_retention' AND f.name = 'db_block_size';

ACTUAL UNDO SIZE [MByte] UNDO RETENTION [Sec] OPTIMAL UNDO RETENTION [Sec]
------------------------ -------------------- ----------------------------
                    7580                10800                       149960

Script to Calculate the Needed Undo Size for the Database:
In Oracle, UNDO size can be controlled with the undo_retention parameter and the size of the UNDO tablespace, so the settings for these are determined by the level of DML activity in the database:
1. If you run heavy DML operations, make sure the undo/rollback segments are adequately sized.
2. If you expect a heavy DML load, have multiple undo tablespaces available.
3. Try to limit the number of simultaneous users per undo segment to four.
4. For large batch transactions, create special large-extent rollback segments in a separate tablespace from the other rollback segments, bring them online only when needed, and use the SET TRANSACTION command to assign them to specific transactions (see the sketch after this list).
5. Try to avoid running large batch transactions simultaneously with OLTP or smaller transactions.
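
The rollback segment assignment mentioned in point 4 would look roughly like this. This is a sketch for manually managed rollback segments only (it does not apply under automatic undo management); big_rbs is a placeholder segment name:

SQL> ALTER ROLLBACK SEGMENT big_rbs ONLINE;
SQL> SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;
-- run the large batch DML here, then COMMIT (the assignment lasts for one transaction)
SQL> ALTER ROLLBACK SEGMENT big_rbs OFFLINE;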

UNDO_SIZE = UNDO_RETENTION * DB_BLOCK_SIZE * UNDO_BLOCK_PER_SEC

  SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
       SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
       (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
       g.undo_block_per_sec) / (1024*1024)
      "NEEDED UNDO SIZE [MByte]"
  FROM (
       SELECT SUM(a.bytes) undo_size
         FROM v$datafile a, v$tablespace b, dba_tablespaces c
        WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
          AND b.name = c.tablespace_name AND a.ts# = b.ts#
       ) d,
      v$parameter e, v$parameter f,
       (
       SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
         undo_block_per_sec
         FROM v$undostat
       ) g
 WHERE e.name = 'undo_retention' AND f.name = 'db_block_size';

ACTUAL UNDO SIZE [MByte] UNDO RETENTION [Sec] NEEDED UNDO SIZE [MByte]
------------------------ -------------------- ------------------------
                    7580                10800                   545.90
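
If the needed size is larger than the currently allocated undo size, the undo tablespace can be grown. A minimal sketch; the tablespace name, datafile paths and sizes are placeholders that must be adapted to your environment:

-- Resize an existing undo datafile
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 8G;

-- Or add another datafile to the undo tablespace
ALTER TABLESPACE UNDOTBS1 ADD DATAFILE '/u01/oradata/ORCL/undotbs02.dbf'
  SIZE 4G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;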

Tuesday, April 28, 2020

Query To check Table Growth.

SELECT ds.tablespace_name
,ds.segment_name
,dobj.object_type
,ROUND(SUM(dhss.space_used_delta) / 1024 / 1024 /1024 ,2) "Growth (GB)"
FROM dba_hist_snapshot snpshot
,dba_hist_seg_stat dhss
,dba_objects dobj
,dba_segments ds
WHERE begin_interval_time > TRUNC(SYSDATE)-360
AND snpshot.snap_id = dhss.snap_id
AND dobj.object_id = dhss.obj#
AND dobj.owner = ds.owner
AND dobj.object_name = ds.segment_name
AND ds.owner ='&owner' AND ds.SEGMENT_NAME IN ('&table_name') AND dobj.object_type='TABLE'
GROUP BY ds.tablespace_name,ds.segment_name,dobj.object_type
ORDER BY 3 ASC;

Thursday, August 22, 2019

Known Ora Errors


Ora- Errors

1) Error while creating an index
ORA-01450: maximum key length (3215) exceeded
Solution: Create the index without the ONLINE keyword.
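
A minimal sketch of that workaround; the schema, table, index and column names below are hypothetical. The ONLINE build maintains a journal table whose key adds overhead on top of the index key, which can push it past the limit, so the plain (offline) build avoids the error:

SQL> CREATE INDEX my_schema.idx_big_col ON my_schema.my_table (big_col);  -- no ONLINE keyword

Index created.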

Query to get exact Table Size = Table Size + Lob Size + Index Size

with segment_rollup as (
  select owner, table_name, owner segment_owner, table_name segment_name from dba_tables
    union all
  select table_owner, table_name, owner segment_owner, index_name segment_name from dba_indexes
    union all
  select owner, table_name, owner segment_owner, segment_name from dba_lobs
    union all
  select owner, table_name, owner segment_owner, index_name segment_name from dba_lobs
), ranked_tables as (
  select rank() over (order by sum(blocks) desc) rank, sum(blocks) blocks, r.owner, r.table_name
  from segment_rollup r, dba_segments s
  where s.owner=r.segment_owner and s.segment_name=r.segment_name
   -- and r.owner=upper('&schema_name')
  group by r.owner, r.table_name
)
select rank, round(blocks*8/1024/1024/1024) tb, table_name,owner
from ranked_tables
where rank<=20;

Monday, September 25, 2017

Table Fragmentation

Table Fragmentation in Oracle Database - to get performance benefit

In an Oracle schema there are tables that show a huge difference between actual size (size from user_segments) and expected size from user_tables (num_rows * avg_row_len, in bytes). This is due either to fragmentation in the table or to the stats for the table not being updated in user_tables.

We will discuss:
1) What is Table Fragmentation?
2) How to understand HWM (High Water Mark) in table?
3) What are the reasons for reorganization of a table?
4) How to find most fragmented tables?
5) How to reset HWM / remove fragmentation?
6) How to get more performance benefits from most fragmented tables?
7) Demo

1) What is Table Fragmentation?

If a table is only ever subject to inserts, there will not be any fragmentation. Fragmentation comes in when we update/delete data in the table. The space that is freed up during non-insert DML operations is not immediately re-used (and sometimes may never be reused at all). This leaves behind holes in the table, which results in table fragmentation.

To understand this more clearly, we need to be clear on how Oracle manages space for tables.

When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.

Note that table fragmentation is different from file fragmentation. When a lot of DML operations are applied to a table, the table becomes fragmented because DML does not release free space from the table below the HWM (High Water Mark).

2) How to understand HWM (High Water Mark) in table?

HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data. This data might have been deleted. Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark when doing a full table scan.

DDL statements such as TRUNCATE (and the reorganization methods discussed below) reset the HWM.
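
To see where the HWM sits for a given table, the blocks allocated can be compared with the blocks above the HWM. A minimal sketch using DBMS_SPACE.UNUSED_SPACE; the schema and table substitution variables are placeholders:

SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;
  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;
  l_unused_bytes  NUMBER;
  l_file_id       NUMBER;
  l_block_id      NUMBER;
  l_last_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => UPPER('&schema_name'),
    segment_name              => UPPER('&table_name'),
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('Allocated blocks                  : ' || l_total_blocks);
  DBMS_OUTPUT.PUT_LINE('Blocks above the HWM (never used) : ' || l_unused_blocks);
  DBMS_OUTPUT.PUT_LINE('Blocks below the HWM              : ' || (l_total_blocks - l_unused_blocks));
END;
/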

3) What are the reasons for reorganization of a table?

a) Slower response time (from that table)
b) High number of chained (actually migrated) rows; see the query sketched after this note.
c) The table has grown many fold and the old space is not getting reused.

Note: Index-based queries may not benefit as much from a reorg as queries that do a full table scan.
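
To spot candidate tables with chained/migrated rows (point b above), the chain_cnt column of dba_tables can be checked. A minimal sketch; note that chain_cnt is only populated by ANALYZE TABLE ... COMPUTE STATISTICS, not by DBMS_STATS:

SELECT owner, table_name, chain_cnt, num_rows
  FROM dba_tables
 WHERE owner = UPPER('&schema_name')
   AND chain_cnt > 0
 ORDER BY chain_cnt DESC;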

4) How to find most fragmented tables?

In an Oracle schema there are tables that show a huge difference between actual size (from user_segments) and expected size from user_tables (num_rows * avg_row_len, in bytes). This is due either to fragmentation in the table or to the stats for the table not being updated in dba_tables.

What actions should be taken on the most fragmented tables?

Steps to Check and Remove Table Fragmentation:- 

i)  Gather table stats:

To check the exact difference between the table's actual size (dba_segments) and its stats-based size (dba_tables), the stats on the table stored in dba_tables must be up to date; the difference between these values reports the actual fragmentation to the DBA. Check the LAST_ANALYZED value for the table in dba_tables. If this value is recent you can skip this step; otherwise I would suggest gathering table stats to get updated stats.

exec dbms_stats.gather_table_stats('&schema_name','&table_name');

ii) Check table size:

Check the current table size and keep a note of it so you can compare after removing the fragmentation.

select segment_name,bytes/(1024*1024*1024) "Size GB" from dba_segments where segment_name='&table_name';  -- keep a track to compare after defragmentation

iii) Check for Fragmentation in table:

The below query will show the total size of the table with fragmentation, the expected size without fragmentation, and what % of space can be reclaimed after removing the table fragmentation. The Database Administrator has to provide table_name and schema_name as input to this query.

SQL>
set pages 50000 lines 32767;
select owner,
       table_name,
       round((blocks * 8), 2) || 'kb' "Fragmented size",
       round((num_rows * avg_row_len / 1024), 2) || 'kb' "Actual size",
       round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2) || 'kb' "Reclaimable space",
       ((round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2)) /
       round((blocks * 8), 2)) * 100 - 10 "reclaimable space % "
  from dba_tables
 where table_name = '&table_Name'
   AND OWNER LIKE '&schema_name';

Note: This query fetches data from dba_tables, so the accuracy of the result depends on the dba_tables stats.
To find the top 10 fragmented tables:
SQL>
select *
      from (select table_name,
               round((blocks * 8), 2) "size (kb)",
               round((num_rows * avg_row_len / 1024), 2) "actual_data (kb)",
               (round((blocks * 8), 2) -
               round((num_rows * avg_row_len / 1024), 2)) "wasted_space (kb)"
          from dba_tables
         where (round((blocks * 8), 2) >
               round((num_rows * avg_row_len / 1024), 2))
         order by 4 desc)
 WHERE ROWNUM <= 10;

If you find the reclaimable space % value is more than 20%, then we can expect fragmentation in the table.

Suppose the DBA finds 50% reclaimable space with the above query; he can then proceed with removing the fragmentation.

5) How to reset HWM / remove fragmentation?

We have three options to reorganize fragmented tables:

1. Alter table move (to another tablespace, or the same tablespace) and rebuild indexes
   (depends on the free space available in the tablespace)
2. Export and import the table (difficult to implement in a production environment)
3. Shrink command (from Oracle 10g onwards)
   (the shrink command is only applicable to tables in tablespaces with automatic segment space management)

Here, I am following options 1 and 3, keeping table availability in mind.

Option: 1 

Alter table move (to another tablespace, or the same tablespace) and rebuild indexes:-
Collect the status of all the indexes on the table:-

We will record the index status in one place, so that we can verify them after completing this exercise.

SQL> select index_name,status from dba_indexes 
where table_name like '&table_name';

Move the table into the same or a new tablespace:
---------------------------------------
In this step we move the fragmented table within the same tablespace, or from one tablespace to another, to reclaim the fragmented space. Find the current size of your table from dba_segments and check whether the same or any other tablespace has that much free space available, so that the table can be moved there (a pre-check is sketched below).
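
A sketch of that pre-check; the substitution variables are placeholders:

-- Current table size
SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) "Table GB"
  FROM dba_segments
 WHERE owner = UPPER('&schema_name') AND segment_name = UPPER('&table_name');

-- Free space in the candidate target tablespace
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 2) "Free GB"
  FROM dba_free_space
 WHERE tablespace_name = UPPER('&target_tablespace')
 GROUP BY tablespace_name;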

Steps to Move table in to same tablespace:

-----------------------------------------
alter table <table_name> move;   ------> Move to same tablespace

OR

Steps to Move table in to new tablespace:
----------------------------------------
alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace_name>;

Now, if required, move the table back to the old tablespace using the command below:

alter table table_name move tablespace old_tablespace_name;

Now, rebuild all indexes:
-----------------------
We need to rebuild all the indexes on the table, because the move command leaves all the indexes in an UNUSABLE state.

SQL> select status,index_name from dba_indexes where table_name = '&table_name';

STATUS INDEX_NAME
-------- ------------------------------
UNUSABLE INDEX_NAME                            -------> Here, value in status field may be valid or unusable.

SQL> alter index <INDEX_NAME> rebuild online;  -------> Use this command for each index
Index altered.

SQL> select status,index_name from dba_indexes where table_name = '&table_name';

STATUS INDEX_NAME
-------- ------------------------------
VALID INDEX_NAME                               -------> Here, value in status field must be valid.

After completing these steps, table statistics must be gathered.

Option: 2 Export and import the table:-

(This is covered in a separate post on this blog.)

Option: 3 Shrink command (from Oracle 10g onwards):-
------------------------------------------

Shrink command: 
--------------
It's a new 10g feature to shrink (reorganize) tables (almost) online, which can be used with automatic segment space management.

This command is only applicable to tables in tablespaces with automatic segment space management.

Before using this command, you should have row movement enabled.

SQL> alter table <table_name> enable row movement;
Table altered.

There are 2 ways of using this command.

1. Rearrange rows and reset the HWM:
-----------------------------------
Part 1: Rearrange (All DML's can happen during this time)

SQL> alter table <table_name> shrink space compact;
Table altered.

Part 2: Reset HWM (No DML can happen, but this is fairly quick; in fact it usually goes unnoticed.)

SQL> alter table <table_name> shrink space;
Table altered.

2. Directly reset the HWM:
-------------------------
(Both the rearranging and the resetting of the HWM happen in one statement)
SQL> alter table <table_name> shrink space; 
Table altered.

Advantages over the conventional methods are:
--------------------------------------------
1. Unlike "alter table move ..", indexes are not left in an UNUSABLE state. After the shrink command, the indexes are updated as well.
2. It is an online operation, so you don't need downtime to do this reorg.
3. It does not require any extra space for the process to complete.

After completing these steps, table statistics must be gathered.

6) How to get more performance benefits from most fragmented tables?

After doing the above steps, you must gather statistics so that the optimizer can create the best execution plan for better performance during query execution. Here I have given some auto-sampling methods to gather stats. In most cases I got performance benefits when I used the auto-sampling method.

Gather table stats:
------------------
SQL> exec dbms_stats.gather_table_stats('&owner_name','&table_name');
PL/SQL procedure successfully completed.
OR
SQL> exec dbms_stats.gather_table_stats('&owner_name', '&table_name', estimate_percent => dbms_stats.auto_sample_size);
OR 
SQL> exec dbms_stats.gather_table_stats(ownname=>'&owner_name',
tabname=>'&table_name',estimate_percent => 100, cascade=>true, method_opt=>'for all columns size AUTO'); 

-- For the entire schema
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('&schema_name',DBMS_STATS.AUTO_SAMPLE_SIZE);

Check Table size:
-----------------
Now check the table size again; you will find the size of the table has reduced.

SQL> select segment_name,bytes/(1024*1024*1024) "Size GB" from dba_segments where segment_name='&table_name';

Now compare with the size you noted earlier; you should see the benefit, and performance will improve.

7) Demonstration:

Here I will show one demo activity. When you do this yourself, first complete it in your pre-production database and collect performance statistics before and after; then, based on the benefit, you can plan it for production.

Demo:
1) Take counts of all invalid objects for the whole database as well as for the schema concerned

select count(1) from dba_objects where status='INVALID' -- 2386

select count(1) from dba_objects where status='INVALID' and owner='CUSTOMER' -- 0

2) Take the top list (preferably 10) of tables by fragmentation

select *
  from (select table_name,
               round((blocks * 8), 2) "size (kb)",
               round((num_rows * avg_row_len / 1024), 2) "actual_data (kb)",
               (round((blocks * 8), 2) -
               round((num_rows * avg_row_len / 1024), 2)) "wasted_space (kb)"
          from dba_tables
         where (round((blocks * 8), 2) >
               round((num_rows * avg_row_len / 1024), 2))
         order by 4 desc)
 WHERE ROWNUM <= 10;

Output:

TABLE_NAME                      size (kb) actual_data (kb) wasted_space (kb)
------------------------------ ---------- ---------------- -----------------
CUSTOMER_SERVICES_DTLS       12382432      10341757.49       2040674.51
PKG_ACTUAL_AVAILABLE           7291976       5736686.1         1555289.9
PROCESSED_TRAN                  1601072       367932.44         1233139.56
PROCESSED_CUURENCY              1314672       145479.1          1169192.9
ACTUAL_SERVICES_DTLS            7452568       6332113.25        1120454.75
SERVICEREQUESTDETAILS           3037840       1932758.36        1105081.64
PKG_RESULTREPORTDTLS            1436632       440030.4          996601.6
BATCH_TXN_SERIALITEM            2621128       1820127.37        801000.63
CUSTOMER_BILLDETAILS            233616        1451156.52        782459.48


10 rows selected

-- Find size
SQL> select segment_name, bytes/(1024*1024*1024) "Size GB" from
  2  dba_segments where segment_name='CUSTOMER_SERVICES_DTLS';
  

SEGMENT_NAME                           Size GB
-------------------------------------- ----------
CUSTOMER_SERVICES_DTLS             11.828125

SQL> 

3) Take one sample table. Here we will take "CUSTOMER_SERVICES_DTLS". Find the owner.

SQL>
select owner,table_name,tablespace_name,num_rows,blocks
from dba_tables where table_name='CUSTOMER_SERVICES_DTLS';

output:

OWNER      TABLE_NAME                     TABLESPACE_NAME    NUM_ROWS     BLOCKS
---------------------------- ------------------------------ ---------- ----------
CUSTOMER      CUSTOMER_SERVICES_DTLS     CUSTOMER              74055662    1542825

4) Do the below activities as a safety measure:

a) Take DDL

-- Create table
create table CUSTOMER.CUSTOMER_SERVICES_DTLS
(
 xxxxxx
) tablespace CUSTOMER;
--Create/Recreate indexes 
create index CUSTOMER.INDX_TXNID on CUSTOMER.CUSTOMER_SERVICES_DTLS (TXNID)
  tablespace CUSTOMER;
create index CUSTOMER.INDX_SYSTEMUPDATEDDATE on CUSTOMER.CUSTOMER_SERVICES_DTLS (SYSTEMUPDATEDDATE)
  tablespace CUSTOMER;

b) Take a logical backup using expdp:

expdp directory=data_pump dumpfile=CUSTOMER_SERVICES_DTLS.dmp logfile=CUSTOMER_SERVICES_DTLS.log tables=CUSTOMER.CUSTOMER_SERVICES_DTLS exclude=statistics

SQL> 

5) Verify all index status

SQL> select index_name,status
  2  from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';

INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             VALID
INDX_SYSTEMUPDATEDDATE          VALID

SQL> 

6) Now move the table:
SQL> connect / as sysdba
SQL> set timing on;
SQL> alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move;

Table altered.

Elapsed: 00:11:12.18
SQL> 

(Note: Based on the table size, more archive logs will be generated. You must have sufficient free space in the required tablespace/datafiles, including the TEMP tablespace; a quick check is sketched below.)
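
A sketch of that pre-check; the tablespace name is a placeholder, and the dba_temp_free_space view assumes 11g or later:

-- Free space in the tablespace the table will be rebuilt in
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 2) "Free GB"
  FROM dba_free_space
 WHERE tablespace_name = 'CUSTOMER'
 GROUP BY tablespace_name;

-- Free space in the TEMP tablespace
SELECT tablespace_name, ROUND(free_space/1024/1024/1024, 2) "Free GB"
  FROM dba_temp_free_space;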

7) Now again verify these:
a) No. of records
SQL> select count(rowid) from CUSTOMER.CUSTOMER_SERVICES_DTLS;

COUNT(ROWID)
------------
    74055662

SQL> 

b) Index statistics

SQL> select index_name,status
  2  from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';

INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             UNUSABLE
INDX_SYSTEMUPDATEDDATE          UNUSABLE

SQL> 

Here the indexes are in "UNUSABLE" status, so they must be rebuilt.

8) Rebuild the Indexes

SQL> alter index CUSTOMER.INDX_TXNID rebuild online;

Index altered.
SQL> alter index CUSTOMER.INDX_SYSTEMUPDATEDDATE rebuild online;

Index altered.
SQL> 

Now check the index stats

SQL> select index_name,status from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';
INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             VALID
INDX_SYSTEMUPDATEDDATE          VALID

SQL> 

Now all are valid.

9) Now check the no. of rows and blocks

SQL>
select owner,table_name,tablespace_name,num_rows,blocks
from dba_tables where table_name='CUSTOMER_SERVICES_DTLS';

output:

OWNER      TABLE_NAME                     TABLESPACE_NAME    NUM_ROWS     BLOCKS
---------------------------- ------------------------------ ---------- ----------
CUSTOMER      CUSTOMER_SERVICES_DTLS     CUSTOMER              74055662    151033

See here that the number of blocks has reduced.

10) Now Gather table stats:

SQL> exec dbms_stats.gather_table_stats(ownname=>'CUSTOMER',tabname=>'CUSTOMER_SERVICES_DTLS',estimate_percent => 100, cascade=>true, method_opt=>'for all columns size AUTO'); 


11) Check Table size:

Now check the table size again; you will find the size of the table has reduced.

SQL> 
SQL> select segment_name, bytes/(1024*1024*1024) "Size GB" from
  2  dba_segments where segment_name='CUSTOMER_SERVICES_DTLS';
  

SEGMENT_NAME                           Size GB
-------------------------------------- ----------
CUSTOMER_SERVICES_DTLS             10.02131

SQL> 


The table size has also reduced.

12) Now cross-check all valid/invalid object counts and confirm they match. You can release the database if you had taken downtime.


Issues may come during the above activity:

SQL> alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move; 
alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 8192 in tablespace CUSTOMER


This means you don't have sufficient space in the required tablespace (and possibly the TEMP tablespace as well).

So add more datafiles and tempfiles if your existing datafiles and tempfiles have reached 32G (a sketch follows below).
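
A sketch of adding that space; the file paths and sizes are placeholders for your environment:

-- Add a datafile to the tablespace the table is being moved in
ALTER TABLESPACE CUSTOMER ADD DATAFILE '/u01/oradata/ORCL/customer02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;

-- Add a tempfile to the TEMP tablespace
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;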



Monday, January 30, 2017

User managed recovery of sysaux datafile

User managed recovery of sysaux datafile
******************************************

1)Take bkp of database by putting db in begin backup mode
********************

SQL> alter database begin backup;

Database altered.

SQL> select FILE_NAME,TABLESPACE_NAME,BYTES/1024/1024 from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME                BYTES/1024/1024
------------------------------ ---------------
/home/cybage/app/cybage/oradata/temp/users01.dbf
USERS                                        5

/home/cybage/app/cybage/oradata/temp/undotbs01.dbf
UNDOTBS1                                    40

/home/cybage/app/cybage/oradata/temp/sysaux01.dbf
SYSAUX                                     500
                            ....etc

2)copy to bkp location and end backup
**************
cp * /home/cybage/bkp/

SQL> alter database end backup;

Database altered.


3)To simulate the failure, delete the datafile from the OS
**************
[cybage@oracle bkp]$ rm /home/cybage/app/cybage/oradata/temp/sysaux01.dbf

4)go to mount state
*******************
SQL> startup nomount;
ORACLE instance started.

Total System Global Area  584568832 bytes
Fixed Size                  2215544 bytes
Variable Size             251658632 bytes
Database Buffers          327155712 bytes
Redo Buffers                3538944 bytes
SQL> alter database mount;

Database altered.


5)Restore the database by copying the file back to the original location
*************
[cybage@oracle bkp]$ cp sysaux01.dbf /home/cybage/app/cybage/oradata/temp/

SQL> recover database ;
Media recovery complete.

6)open database.
*************
SQL> alter database open;

Database altered.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
TEMP      READ WRITE

Transportable Tablespace



Transportable Tablespaces
1)Create tablespace
CREATE TABLESPACE test_data
  DATAFILE '/home/cybage/app/cybage/oradata/orcl/test_data01.dbf'
  SIZE 1M AUTOEXTEND ON NEXT 1M;

2)Create a user and assign the created tablespace as the default
CREATE USER test_user IDENTIFIED BY test_user
  DEFAULT TABLESPACE test_data
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON test_data;

3)Provide Grants
grant connect,resource,CREATE SESSION, CREATE TABLE to test_user;

4)Create table from that user and add the data
CREATE TABLE test_tab (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT test_tab_pk PRIMARY KEY (id)
);
 
INSERT /*+ APPEND */ INTO test_tab (id, description)
SELECT level,
       'Description for ' || level
FROM   dual
CONNECT BY level <= 10000;
 
5)To check whether the tablespace set is self-contained
 
CONN / AS SYSDBA
EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'TEST_DATA', incl_constraints => TRUE);
 
6) ALTER TABLESPACE test_data READ ONLY;
 
7)Take export of the tablespace
 expdp directory=data_pump_dir transport_tablespaces=test_data dumpfile=test_data.dmp logfile=test_data_exp.log

8)Transport the dump file and data file to the destination directory

Destination Database
9)Create user and provide grants
CREATE USER test_user IDENTIFIED BY test_user;
GRANT CREATE SESSION, CREATE TABLE TO test_user;

10)Import with transportable datafiles
impdp directory=data_pump_dir dumpfile=test_data.dmp logfile=test_data_imp.log transport_datafiles='/home/cybage/app/cybage/oradata/temp/test_data01.dbf'
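
After the import, the tablespace is typically switched back to read/write. A minimal sketch (run on the destination, and also on the source if it should become writable again):

ALTER TABLESPACE test_data READ WRITE;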