Migrating an Apache Derby Table to Oracle Database

Apache Derby is an open source, lightweight relational database that may be embedded in a Java application or run in client/server mode. Java DB is Oracle’s supported distribution of Apache Derby. Oracle SQL Developer may be used to migrate Apache Derby to Oracle Database, but for large quantities of data the Sqoop tool is preferable. In this tutorial we shall migrate an Apache Derby table to Oracle Database with Sqoop, first importing the Derby table data into HDFS and subsequently exporting it from HDFS to Oracle Database. This tutorial has the following sections.

- Setting the Environment
- Creating an Oracle Database Table
- Creating an Apache Derby Table
- Starting HDFS
- Importing Apache Derby Table Data into HDFS
- Exporting HDFS Data into Oracle Database
- Selecting Data in Oracle Database Table

Setting the Environment

 
We have used Oracle Database and Apache Derby installed on Oracle Linux 6.5.
The following software is required for this tutorial.
 
- Oracle Database (11g used)
- Apache Derby (10.11 used)
- Hadoop 2.0.0 (CDH 4.6)
- Sqoop 1.4
- Java 7
 
Create a directory /derby to install the software and set its permissions to global (777).
 
mkdir /derby
chmod -R 777 /derby
cd /derby
 
Download and install Java 7.
 
tar zxvf jdk-7u55-linux-i586.tar.gz
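Optionally, verify the JDK extracted as expected; the archive unpacks to jdk1.7.0_55, the directory used for JAVA_HOME later in this section.
 
/derby/jdk1.7.0_55/bin/java -version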
 
Download and install Hadoop 2.0.0 (CDH 4.6).
 
wget http://archive.cloudera.com/cdh4/cdh/4/hadoop-2.0.0-cdh4.6.0.tar.gz
tar -xvf hadoop-2.0.0-cdh4.6.0.tar.gz
 
Create symlinks for the Hadoop conf and bin directories.
 
ln -s /derby/hadoop-2.0.0-cdh4.6.0/bin-mapreduce1 /derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1/bin
ln -s /derby/hadoop-2.0.0-cdh4.6.0/etc/hadoop /derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1/conf
 
Download and install Sqoop 1.4.
 
wget http://archive-primary.cloudera.com/cdh4/cdh/4/sqoop-1.4.3-cdh4.6.0.tar.gz
tar -xvf sqoop-1.4.3-cdh4.6.0.tar.gz
 
Download and install Derby 10.11.
 
wget http://apache.mirror.iweb.ca//db/derby/db-derby-10.11.1.1/db-derby-10.11.1.1-bin.tar.gz
tar -xvf db-derby-10.11.1.1-bin.tar.gz
 
Copy the Oracle Database JDBC driver jar file ojdbc6.jar to the Sqoop lib directory.
 
cp ojdbc6.jar /derby/sqoop-1.4.3-cdh4.6.0/lib
 
Add a user called derby, which is used to start the NameNode and DataNode, to the hadoop group.
 
useradd -g hadoop derby
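If the useradd command fails because the hadoop group does not yet exist on the system, create the group first and rerun the command.
 
groupadd hadoop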
 
Set environment variables for Oracle Database, Apache Derby, Sqoop, Hadoop, and Java in the bash shell file. Also set the HADOOP_NAMENODE_USER and HADOOP_DATANODE_USER environment variables.
 
vi ~/.bashrc
export HADOOP_PREFIX=/derby/hadoop-2.0.0-cdh4.6.0
export HADOOP_CONF=$HADOOP_PREFIX/etc/hadoop
export SQOOP_HOME=/derby/sqoop-1.4.3-cdh4.6.0
export JAVA_HOME=/derby/jdk1.7.0_55
export DERBY_INSTALL=/derby/db-derby-10.11.1.1-bin
export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ORCL
export HADOOP_MAPRED_HOME=/derby/hadoop-2.0.0-cdh4.6.0
export HADOOP_HOME=/derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1
export HADOOP_CLASSPATH=$HADOOP_HOME/*:$HADOOP_HOME/lib/*:$SQOOP_HOME/lib/*:$DERBY_INSTALL/lib/*
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_MAPRED_HOME/bin:$ORACLE_HOME/bin:$DERBY_INSTALL/bin:$SQOOP_HOME/bin
export CLASSPATH=$HADOOP_CLASSPATH
export HADOOP_NAMENODE_USER=derby
export HADOOP_DATANODE_USER=derby
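Apply the new settings to the current shell session and, as a quick sanity check, echo one of the variables.
 
source ~/.bashrc
echo $HADOOP_HOME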
 

Creating an Oracle Database Table

 
Create an Oracle Database table OE.WLSLOG using the following SQL script.
 
CREATE TABLE OE.wlslog (
  time_stamp VARCHAR2(4000),
  category VARCHAR2(4000),
  type VARCHAR2(4000),
  servername VARCHAR2(4000),
  code VARCHAR2(4000),
  msg VARCHAR2(4000)
);
 
Run the SQL script in SQL*Plus to create the database table.
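Because ORACLE_SID is exported in the environment, a local SQL*Plus connection suffices; substitute the OE schema password for the placeholder and paste the CREATE TABLE statement at the SQL prompt.
 
sqlplus OE/<password>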
 
 

Creating an Apache Derby Table

 
Next, create the Derby database table that is to be migrated to Oracle Database. Start the Derby Network server with the following command.
 
java -jar $DERBY_INSTALL/lib/derbyrun.jar server start
 
The Derby server gets started on port 1527.
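To verify that the server is listening, ping it; derbyrun.jar dispatches the ping subcommand to NetworkServerControl.
 
java -jar $DERBY_INSTALL/lib/derbyrun.jar server ping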
 
 
Start the interactive SQL scripting tool ij, which we shall use to create a table. The Derby jars are already on the classpath via the CLASSPATH variable set in the .bashrc file.
 
java org.apache.derby.tools.ij
 
Connect to the Derby Network Server, specifying the database as dbderby in the connection URL. The create=true attribute creates the dbderby database if it does not already exist.
 
connect 'jdbc:derby://localhost:1527/dbderby;create=true';
 
Run the following SQL script in ij to create a table.
 
CREATE TABLE wlslog (
  time_stamp VARCHAR(255) PRIMARY KEY,
  category VARCHAR(255),
  type VARCHAR(255),
  servername VARCHAR(255),
  code VARCHAR(255),
  msg VARCHAR(255)
);
 
A Derby table gets created.
 
 
Add data to the wlslog table using the following SQL script.
 
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:17-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STARTING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:18-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to ADMIN');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:19-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RESUMING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:20-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000361','Started WebLogic AdminServer');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:21-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to RUNNING');
INSERT INTO wlslog(time_stamp,category,type,servername,code,msg) VALUES('Apr-8-2014-7:06:22-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000360','Server started in RUNNING mode');
 
Table data gets added.
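As a quick check, query the row count from the same ij session; it should report 7 rows.
 
SELECT COUNT(*) FROM wlslog;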
 
 

Starting HDFS

 
In this section we configure Hadoop and HDFS properties and start HDFS. Set the fs.defaultFS and hadoop.tmp.dir properties in the /derby/hadoop-2.0.0-cdh4.6.0/etc/hadoop/core-site.xml file.
 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://10.0.2.15:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:///var/lib/hadoop-0.20/cache</value>
</property>
</configuration>
 
Create the Hadoop temp directory and set its permissions to global (777).
 
rm -rf /var/lib/hadoop-0.20/cache
mkdir -p /var/lib/hadoop-0.20/cache
chmod -R 777 /var/lib/hadoop-0.20/cache
 
Set the dfs.permissions.superusergroup, dfs.namenode.name.dir, dfs.replication and dfs.permissions properties in the /derby/hadoop-2.0.0-cdh4.6.0/etc/hadoop/hdfs-site.xml file.
 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
<property>
<name>dfs.permissions.superusergroup</name>
<value>hadoop</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/1/dfs/nn</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
 
Create the NameNode storage directory and set its permissions to global (777).
 
rm -rf /data/1/dfs/nn
mkdir -p /data/1/dfs/nn
chmod -R 777 /data/1/dfs/nn
 
Format NameNode and start the NameNode and DataNode.
 
hadoop namenode -format
hadoop namenode
hadoop datanode
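The namenode and datanode commands run in the foreground, so start each in a separate terminal as the derby user. Once both are up, cluster status may be checked with dfsadmin.
 
hadoop dfsadmin -report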
 
Create a directory in HDFS for the Sqoop lib jars and put the jars into HDFS. The Sqoop lib jars are used at runtime.
 
hadoop dfs -mkdir /derby/sqoop-1.4.3-cdh4.6.0/lib
hadoop dfs -chmod -R 777 /derby/sqoop-1.4.3-cdh4.6.0/lib
hdfs dfs -put /derby/sqoop-1.4.3-cdh4.6.0/lib/* hdfs://10.0.2.15:8020/derby/sqoop-1.4.3-cdh4.6.0/lib
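Optionally, list the directory to confirm the jars were copied into HDFS.
 
hadoop dfs -ls /derby/sqoop-1.4.3-cdh4.6.0/lib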
 

Importing Apache Derby Table Data into HDFS

 
We shall use the Sqoop import tool to import the Derby table data into HDFS. The following parameters must be specified in the sqoop import command.
 
Sqoop command parameter | Description | Value
-libjars | Specifies the Derby client jar to be added to the Sqoop classpath | $DERBY_INSTALL/lib/derbyclient.jar
--connect | Specifies the connection URL for the Derby database | "jdbc:derby://localhost:1527/dbderby"
--driver | Specifies the Derby JDBC driver class | "org.apache.derby.jdbc.ClientDriver"
--username | Specifies the username | "app"
--password | Specifies the password | "app"
--table | Specifies the Derby table | "wlslog"
--columns | Specifies the Derby table columns | "time_stamp,category,type,servername,code,msg"
--target-dir | Specifies the target directory in HDFS | "/derby/import"
--split-by | Specifies the column used as the primary key column | time_stamp
--verbose | Verbose output |
 
 
Run the following sqoop import command.
 
sqoop import -libjars $DERBY_INSTALL/lib/derbyclient.jar --connect "jdbc:derby://localhost:1527/dbderby" --driver "org.apache.derby.jdbc.ClientDriver" --password "app" --username "app" --table "wlslog" --columns "time_stamp,category,type,servername,code,msg" --target-dir "/derby/import" --split-by time_stamp --verbose
 
A MapReduce job runs to import the Derby table data into HDFS. As indicated by the output, the 7 records that we added to the Derby table wlslog get imported.
 
 
A detailed output from the sqoop import command is as follows.
 
14/12/14 20:41:27 INFO mapred.JobClient: Running job: job_local157238451_0001
14/12/14 20:41:27 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/12/14 20:41:28 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/12/14 20:41:28 INFO mapred.JobClient: map 0% reduce 0%
14/12/14 20:41:29 INFO mapred.LocalJobRunner: Waiting for map tasks
14/12/14 20:41:29 INFO mapred.LocalJobRunner: Starting task: attempt_local157238451_0001_m_000000_0
14/12/14 20:41:29 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/12/14 20:41:29 INFO util.ProcessTree: setsid exited with exit code 0
14/12/14 20:41:29 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@db7fe1
14/12/14 20:41:29 DEBUG db.DBConfiguration: Fetching password from job credentials store
14/12/14 20:41:29 INFO mapred.MapTask: Processing split: 1=1 AND 1=1
14/12/14 20:41:30 DEBUG db.DataDrivenDBInputFormat: Creating db record reader for db product: APACHE DERBY
14/12/14 20:41:30 INFO db.DBRecordReader: Working on split: 1=1 AND 1=1
14/12/14 20:41:30 DEBUG db.DataDrivenDBRecordReader: Using query: SELECT time_stamp, category, type, servername, code, msg FROM wlslog AS wlslog WHERE ( 1=1 ) AND ( 1=1 )
14/12/14 20:41:30 DEBUG db.DBRecordReader: Using fetchSize for next query: 1000
14/12/14 20:41:30 INFO db.DBRecordReader: Executing query: SELECT time_stamp, category, type, servername, code, msg FROM wlslog AS wlslog WHERE ( 1=1 ) AND ( 1=1 )
14/12/14 20:41:31 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
14/12/14 20:41:31 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
14/12/14 20:41:31 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
14/12/14 20:41:31 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
14/12/14 20:41:31 INFO mapred.LocalJobRunner:
14/12/14 20:41:32 INFO mapred.Task: Task:attempt_local157238451_0001_m_000000_0 is done. And is in the process of commiting
14/12/14 20:41:32 INFO mapred.LocalJobRunner:
14/12/14 20:41:32 INFO mapred.Task: Task attempt_local157238451_0001_m_000000_0 is allowed to commit now
14/12/14 20:41:33 INFO output.FileOutputCommitter: Saved output of task 'attempt_local157238451_0001_m_000000_0' to /derby/import
14/12/14 20:41:33 INFO mapred.LocalJobRunner:
14/12/14 20:41:33 INFO mapred.Task: Task 'attempt_local157238451_0001_m_000000_0' done.
14/12/14 20:41:33 INFO mapred.LocalJobRunner: Finishing task: attempt_local157238451_0001_m_000000_0
14/12/14 20:41:33 INFO mapred.LocalJobRunner: Map task executor complete.
14/12/14 20:41:33 INFO mapred.JobClient: map 100% reduce 0%
14/12/14 20:41:33 INFO mapred.JobClient: Job complete: job_local157238451_0001
14/12/14 20:41:34 INFO mapred.JobClient: Counters: 18
14/12/14 20:41:34 INFO mapred.JobClient: File System Counters
14/12/14 20:41:34 INFO mapred.JobClient: FILE: Number of bytes read=6718428
14/12/14 20:41:34 INFO mapred.JobClient: FILE: Number of bytes written=6880331
14/12/14 20:41:34 INFO mapred.JobClient: FILE: Number of read operations=0
14/12/14 20:41:34 INFO mapred.JobClient: FILE: Number of large read operations=0
14/12/14 20:41:34 INFO mapred.JobClient: FILE: Number of write operations=0
14/12/14 20:41:34 INFO mapred.JobClient: HDFS: Number of bytes read=0
14/12/14 20:41:34 INFO mapred.JobClient: HDFS: Number of bytes written=717
14/12/14 20:41:34 INFO mapred.JobClient: HDFS: Number of read operations=1
14/12/14 20:41:34 INFO mapred.JobClient: HDFS: Number of large read operations=0
14/12/14 20:41:34 INFO mapred.JobClient: HDFS: Number of write operations=2
14/12/14 20:41:34 INFO mapred.JobClient: Map-Reduce Framework
14/12/14 20:41:34 INFO mapred.JobClient: Map input records=7
14/12/14 20:41:34 INFO mapred.JobClient: Map output records=7
14/12/14 20:41:34 INFO mapred.JobClient: Input split bytes=87
14/12/14 20:41:34 INFO mapred.JobClient: Spilled Records=0
14/12/14 20:41:34 INFO mapred.JobClient: CPU time spent (ms)=0
14/12/14 20:41:34 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/12/14 20:41:34 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/12/14 20:41:34 INFO mapred.JobClient: Total committed heap usage (bytes)=47144960
14/12/14 20:41:34 INFO mapreduce.ImportJobBase: Transferred 717 bytes in 20.3487 seconds (35.2356 bytes/sec)
14/12/14 20:41:34 INFO mapreduce.ImportJobBase: Retrieved 7 records.
14/12/14 20:41:34 DEBUG util.ClassLoaderStack: Restoring classloader: java.net.URLClassLoader@15aa488
[root@localhost derby]#
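Before exporting, the imported data may be inspected in HDFS; the part file name below follows the standard MapReduce output naming and also appears in the export log in the next section.
 
hadoop dfs -cat /derby/import/part-m-00000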
 
 

Exporting HDFS Data into Oracle Database

 
In this section we shall use the Sqoop export tool to export the HDFS data that was imported from Derby into Oracle Database. The following parameters are required in the sqoop export command.
 
Sqoop export parameter | Description | Value
--connect | The connection URL for Oracle Database | "jdbc:oracle:thin:@127.0.0.1:1521:ORCL"
--hadoop-home | The Hadoop home directory | "/derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1"
--username | Username for Oracle Database | "OE"
--password | Password for Oracle Database | "OE"
--export-dir | The HDFS directory to export; the same directory as the --target-dir of the sqoop import command | "/derby/import"
--table | Oracle Database table to export HDFS data to | "OE.WLSLOG"
--verbose | Verbose output |
 
 
Run the following sqoop export command to export HDFS data to Oracle Database.
 
sqoop export --connect "jdbc:oracle:thin:@127.0.0.1:1521:ORCL" --hadoop-home "/derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/derby/import" --table "OE.WLSLOG" --verbose
 
A MapReduce job runs to export the HDFS data to Oracle Database. As indicated by the output, 7 records get exported.
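Sqoop export runs parallel INSERT transactions, so a failed task can leave the target table partially loaded. Sqoop also supports staging rows in an intermediate table via the --staging-table option (with --clear-staging-table to empty it between runs); the following sketch assumes a staging table OE.WLSLOG_STAGE with the same structure as OE.WLSLOG has been created beforehand.
 
sqoop export --connect "jdbc:oracle:thin:@127.0.0.1:1521:ORCL" --hadoop-home "/derby/hadoop-2.0.0-cdh4.6.0/share/hadoop/mapreduce1" --password "OE" --username "OE" --export-dir "/derby/import" --table "OE.WLSLOG" --staging-table "OE.WLSLOG_STAGE" --clear-staging-table --verbose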
 
 
A detailed output from the Sqoop export command is as follows.
 
14/12/14 20:44:11 INFO mapred.JobClient: Running job: job_local1874327070_0001
14/12/14 20:44:11 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/12/14 20:44:11 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.sqoop.mapreduce.NullOutputCommitter
14/12/14 20:44:11 INFO mapred.LocalJobRunner: Waiting for map tasks
14/12/14 20:44:11 INFO mapred.LocalJobRunner: Starting task: attempt_local1874327070_0001_m_000000_0
14/12/14 20:44:11 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/12/14 20:44:11 INFO util.ProcessTree: setsid exited with exit code 0
14/12/14 20:44:12 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@74bba0
14/12/14 20:44:12 INFO mapred.MapTask: Processing split: Paths:/derby/import/part-m-00000:537+90,/derby/import/part-m-00000:627+90
14/12/14 20:44:12 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/derby/import/part-m-00000
14/12/14 20:44:12 DEBUG db.DBConfiguration: Fetching password from job credentials store
14/12/14 20:44:12 INFO mapred.JobClient: map 0% reduce 0%
14/12/14 20:44:52 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/derby/import/part-m-00000
14/12/14 20:44:52 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
14/12/14 20:44:53 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
14/12/14 20:44:52 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
14/12/14 20:44:53 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
14/12/14 20:44:53 INFO mapred.LocalJobRunner:
14/12/14 20:44:53 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
14/12/14 20:44:53 INFO mapred.Task: Task:attempt_local1874327070_0001_m_000000_0 is done. And is in the process of commiting
14/12/14 20:44:53 INFO mapred.LocalJobRunner:
14/12/14 20:44:53 INFO mapred.Task: Task 'attempt_local1874327070_0001_m_000000_0' done.
14/12/14 20:44:53 INFO mapred.LocalJobRunner: Finishing task: attempt_local1874327070_0001_m_000000_0
14/12/14 20:44:53 INFO mapred.LocalJobRunner: Starting task: attempt_local1874327070_0001_m_000001_0
14/12/14 20:44:53 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/12/14 20:44:53 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@f28feb
14/12/14 20:44:53 INFO mapred.MapTask: Processing split: Paths:/derby/import/part-m-00000:0+179
14/12/14 20:44:53 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/derby/import/part-m-00000
14/12/14 20:44:53 INFO mapred.JobClient: map 25% reduce 0%
14/12/14 20:44:53 DEBUG db.DBConfiguration: Fetching password from job credentials store
14/12/14 20:45:34 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
14/12/14 20:45:34 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
14/12/14 20:45:34 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
14/12/14 20:45:34 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
14/12/14 20:45:34 INFO mapred.LocalJobRunner:
14/12/14 20:45:34 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
14/12/14 20:45:34 INFO mapred.Task: Task:attempt_local1874327070_0001_m_000001_0 is done. And is in the process of commiting
14/12/14 20:45:34 INFO mapred.LocalJobRunner:
14/12/14 20:45:34 INFO mapred.Task: Task 'attempt_local1874327070_0001_m_000001_0' done.
14/12/14 20:45:34 INFO mapred.LocalJobRunner: Finishing task: attempt_local1874327070_0001_m_000001_0
14/12/14 20:45:34 INFO mapred.LocalJobRunner: Starting task: attempt_local1874327070_0001_m_000002_0
14/12/14 20:45:34 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/12/14 20:45:34 INFO mapred.JobClient: map 50% reduce 0%
14/12/14 20:45:34 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7ba8d4
14/12/14 20:45:34 INFO mapred.MapTask: Processing split: Paths:/derby/import/part-m-00000:179+179
14/12/14 20:45:34 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/derby/import/part-m-00000
14/12/14 20:45:34 DEBUG db.DBConfiguration: Fetching password from job credentials store
14/12/14 20:46:15 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
14/12/14 20:46:15 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
14/12/14 20:46:15 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
14/12/14 20:46:15 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
14/12/14 20:46:15 INFO mapred.LocalJobRunner:
14/12/14 20:46:15 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
14/12/14 20:46:16 INFO mapred.Task: Task:attempt_local1874327070_0001_m_000002_0 is done. And is in the process of commiting
14/12/14 20:46:16 INFO mapred.LocalJobRunner:
14/12/14 20:46:16 INFO mapred.Task: Task 'attempt_local1874327070_0001_m_000002_0' done.
14/12/14 20:46:16 INFO mapred.LocalJobRunner: Finishing task: attempt_local1874327070_0001_m_000002_0
14/12/14 20:46:16 INFO mapred.LocalJobRunner: Starting task: attempt_local1874327070_0001_m_000003_0
14/12/14 20:46:16 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/12/14 20:46:16 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@fc4de4
14/12/14 20:46:16 INFO mapred.MapTask: Processing split: Paths:/derby/import/part-m-00000:358+179
14/12/14 20:46:16 DEBUG mapreduce.CombineShimRecordReader: ChildSplit operates on: hdfs://10.0.2.15:8020/derby/import/part-m-00000
14/12/14 20:46:16 DEBUG db.DBConfiguration: Fetching password from job credentials store
14/12/14 20:46:16 INFO mapred.JobClient: map 75% reduce 0%
14/12/14 20:46:57 DEBUG mapreduce.AutoProgressMapper: Instructing auto-progress thread to quit.
14/12/14 20:46:57 DEBUG mapreduce.AutoProgressMapper: Waiting for progress thread shutdown...
14/12/14 20:46:57 INFO mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
14/12/14 20:46:57 DEBUG mapreduce.AutoProgressMapper: Progress thread shutdown detected.
14/12/14 20:46:57 INFO mapred.LocalJobRunner:
14/12/14 20:46:57 DEBUG mapreduce.AsyncSqlOutputFormat: Committing transaction of 1 statements
14/12/14 20:46:57 INFO mapred.Task: Task:attempt_local1874327070_0001_m_000003_0 is done. And is in the process of commiting
14/12/14 20:46:57 INFO mapred.LocalJobRunner:
14/12/14 20:46:57 INFO mapred.Task: Task 'attempt_local1874327070_0001_m_000003_0' done.
14/12/14 20:46:57 INFO mapred.LocalJobRunner: Finishing task: attempt_local1874327070_0001_m_000003_0
14/12/14 20:46:57 INFO mapred.LocalJobRunner: Map task executor complete.
14/12/14 20:46:57 INFO mapred.JobClient: map 100% reduce 0%
14/12/14 20:46:57 INFO mapred.JobClient: Job complete: job_local1874327070_0001
14/12/14 20:46:57 INFO mapred.JobClient: Counters: 18
14/12/14 20:46:58 INFO mapred.JobClient: File System Counters
14/12/14 20:46:58 INFO mapred.JobClient: FILE: Number of bytes read=24544690
14/12/14 20:46:58 INFO mapred.JobClient: FILE: Number of bytes written=25163140
14/12/14 20:46:58 INFO mapred.JobClient: FILE: Number of read operations=0
14/12/14 20:46:58 INFO mapred.JobClient: FILE: Number of large read operations=0
14/12/14 20:46:58 INFO mapred.JobClient: FILE: Number of write operations=0
14/12/14 20:46:58 INFO mapred.JobClient: HDFS: Number of bytes read=4720
14/12/14 20:46:58 INFO mapred.JobClient: HDFS: Number of bytes written=0
14/12/14 20:46:58 INFO mapred.JobClient: HDFS: Number of read operations=78
14/12/14 20:46:58 INFO mapred.JobClient: HDFS: Number of large read operations=0
14/12/14 20:46:58 INFO mapred.JobClient: HDFS: Number of write operations=0
14/12/14 20:46:58 INFO mapred.JobClient: Map-Reduce Framework
14/12/14 20:46:58 INFO mapred.JobClient: Map input records=7
14/12/14 20:46:58 INFO mapred.JobClient: Map output records=7
14/12/14 20:46:58 INFO mapred.JobClient: Input split bytes=576
14/12/14 20:46:58 INFO mapred.JobClient: Spilled Records=0
14/12/14 20:46:58 INFO mapred.JobClient: CPU time spent (ms)=0
14/12/14 20:46:58 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/12/14 20:46:58 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/12/14 20:46:58 INFO mapred.JobClient: Total committed heap usage (bytes)=196214784
14/12/14 20:46:58 INFO mapreduce.ExportJobBase: Transferred 4.6094 KB in 174.6882 seconds (27.0196 bytes/sec)
14/12/14 20:46:58 INFO mapreduce.ExportJobBase: Exported 7 records.
14/12/14 20:46:58 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@10f243b
[root@localhost derby]#
 
 

Selecting Data in Oracle Database Table

 
Having migrated the Derby table data to Oracle Database, run a SQL query in SQL*Plus to select the migrated data.
 
The 7 records migrated from Derby get listed.
 
The output from the SELECT query is as follows.
 
SQL> SELECT * FROM OE.WLSLOG;
 
TIME_STAMP
--------------------------------------------------------------------------------
CATEGORY
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
SERVERNAME
--------------------------------------------------------------------------------
CODE
--------------------------------------------------------------------------------
MSG
--------------------------------------------------------------------------------
Apr-8-2014-7:06:22-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000360
Server started in RUNNING mode
 
Apr-8-2014-7:06:16-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to STANDBY
 
Apr-8-2014-7:06:17-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to STARTING
 
Apr-8-2014-7:06:18-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to ADMIN
 
Apr-8-2014-7:06:19-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to RESUMING
 
Apr-8-2014-7:06:20-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000361
Started WebLogic AdminServer
 
Apr-8-2014-7:06:21-PM-PDT
Notice
WebLogicServer
AdminServer
BEA-000365
Server state changed to RUNNING
 
 
7 rows selected.
 
SQL>
 
In this tutorial we migrated an Apache Derby table to Oracle Database, using Sqoop first to import the Derby table data into HDFS and then to export it from HDFS to Oracle Database.