Channel: csv | ThatJeffSmith

Formatting Query Results to CSV in Oracle SQL Developer


I find that programs developed by the same folks who use them in production are some of the best applications out there. The developer is the user. They have to eat their own dog food – an unpleasant metaphor, but one that’s pretty well understood. Two great examples of that here are APEX and of course SQL Developer.

Of course not every dog gets to add his own secret ingredients to his dinner!

I also find that lazy developers make the best developers. They are so lazy that they will spend a few extra minutes to write a program that writes their programs for them. And so you’ll find this kind of cool stuff all over the application.

Here’s an example of that in SQL Developer -

Quick ResultSet Exports as Script Output

I’m too lazy to hit execute > SaveAs > Open File. I just want to get my delimited text output RIGHT NOW!

The ‘old’ way -

Our old friend, the Export Dialog

And the ‘new’ way (well, new to me!) -

Have the query results pre-formatted in the format of your choice!

The Code

SELECT /*csv*/ * FROM scott.emp;
SELECT /*xml*/ * FROM scott.emp;
SELECT /*html*/ * FROM scott.emp;
SELECT /*delimited*/ * FROM scott.emp;
SELECT /*insert*/ * FROM scott.emp;
SELECT /*loader*/ * FROM scott.emp;
SELECT /*fixed*/ * FROM scott.emp;
SELECT /*text*/ * FROM scott.emp;

You need to execute your statement(s) as a script using F5 or the 2nd execution button on the worksheet toolbar. You’ll notice the hint name matches the available output types on the Export wizard.

You can try XLSX if you want, but I’m not sure how useful the output will be.

Here’s the raw output from the previous examples in case you’re not sitting at your work desk when you read this (click to expand):


> SELECT /*csv*/ * FROM scott.emp
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,17-DEC-80 12.00.00,800,,20
7499,"ALLEN","SALESMAN",7698,20-FEB-81 12.00.00,1600,300,30
7521,"WARD","SALESMAN",7698,22-FEB-81 12.00.00,1250,500,30
7566,"JONES","MANAGER",7839,02-APR-81 12.00.00,2975,,20
7654,"MARTIN","SALESMAN",7698,28-SEP-81 12.00.00,1250,1400,30
7698,"BLAKE","MANAGER",7839,01-MAY-81 12.00.00,2850,,30
7782,"CLARK","MANAGER",7839,09-JUN-81 12.00.00,2450,,10
7788,"SCOTT","ANALYST",7566,19-APR-87 12.00.00,3000,,20
7839,"KING","PRESIDENT",,17-NOV-81 12.00.00,5000,,10
7844,"TURNER","SALESMAN",7698,08-SEP-81 12.00.00,1500,0,30
7876,"ADAMS","CLERK",7788,23-MAY-87 12.00.00,1100,,20
7900,"JAMES","CLERK",7698,03-DEC-81 12.00.00,950,,30
7902,"FORD","ANALYST",7566,03-DEC-81 12.00.00,3000,,20
7934,"MILLER","CLERK",7782,23-JAN-82 12.00.00,1300,,10

> SELECT /*xml*/ * FROM scott.emp
<?xml version='1.0'  encoding='UTF8' ?>
<RESULTS>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7369]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[SMITH]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[CLERK]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7902]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[17-DEC-80 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[800]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[20]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7499]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[ALLEN]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[SALESMAN]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[20-FEB-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1600]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[300]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7521]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[WARD]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[SALESMAN]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[22-FEB-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1250]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[500]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7566]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[JONES]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[MANAGER]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7839]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[02-APR-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[2975]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[20]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7654]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[MARTIN]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[SALESMAN]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[28-SEP-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1250]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[1400]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[BLAKE]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[MANAGER]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7839]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[01-MAY-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[2850]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7782]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[CLARK]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[MANAGER]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7839]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[09-JUN-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[2450]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[10]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7788]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[SCOTT]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[ANALYST]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7566]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[19-APR-87 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[3000]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[20]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7839]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[KING]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[PRESIDENT]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[17-NOV-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[5000]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[10]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7844]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[TURNER]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[SALESMAN]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[08-SEP-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1500]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[0]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7876]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[ADAMS]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[CLERK]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7788]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[23-MAY-87 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1100]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[20]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7900]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[JAMES]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[CLERK]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7698]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[03-DEC-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[950]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[30]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7902]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[FORD]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[ANALYST]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7566]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[03-DEC-81 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[3000]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[20]]></COLUMN>
	</ROW>
	<ROW>
		<COLUMN NAME="EMPNO"><![CDATA[7934]]></COLUMN>
		<COLUMN NAME="ENAME"><![CDATA[MILLER]]></COLUMN>
		<COLUMN NAME="JOB"><![CDATA[CLERK]]></COLUMN>
		<COLUMN NAME="MGR"><![CDATA[7782]]></COLUMN>
		<COLUMN NAME="HIREDATE"><![CDATA[23-JAN-82 12.00.00]]></COLUMN>
		<COLUMN NAME="SAL"><![CDATA[1300]]></COLUMN>
		<COLUMN NAME="COMM"><![CDATA[]]></COLUMN>
		<COLUMN NAME="DEPTNO"><![CDATA[10]]></COLUMN>
	</ROW>
</RESULTS>
> SELECT /*html*/ * FROM scott.emp
<html><head>
<meta http-equiv="content-type" content="text/html; charset=UTF8">
<!-- base href="http://apexdev.us.oracle.com:7778/pls/apx11w/" -->
<style type="text/css">
table {
background-color:#F2F2F5;
border-width:1px 1px 0px 1px;
border-color:#C9CBD3;
border-style:solid;
}
td {
color:#000000;
font-family:Tahoma,Arial,Helvetica,Geneva,sans-serif;
font-size:9pt;
background-color:#EAEFF5;
padding:8px;
background-color:#F2F2F5;
border-color:#ffffff #ffffff #cccccc #ffffff;
border-style:solid solid solid solid;
border-width:1px 0px 1px 0px;
}
th {
font-family:Tahoma,Arial,Helvetica,Geneva,sans-serif;
font-size:9pt;
padding:8px;
background-color:#CFE0F1;
border-color:#ffffff #ffffff #cccccc #ffffff;
border-style:solid solid solid none;
border-width:1px 0px 1px 0px;
white-space:nowrap;
}
</style>
<script type="text/javascript">
window.apex_search = {};
apex_search.init = function (){
	this.rows = document.getElementById('data').getElementsByTagName('TR');
	this.rows_length = apex_search.rows.length;
	this.rows_text =  [];
	for (var i=0;i<apex_search.rows_length;i++){
        this.rows_text[i] = (apex_search.rows[i].innerText)?apex_search.rows[i].innerText.toUpperCase():apex_search.rows[i].textContent.toUpperCase();
	}
	this.time = false;
}

apex_search.lsearch = function(){
	this.term = document.getElementById('S').value.toUpperCase();
	for(var i=0,row;row = this.rows[i],row_text = this.rows_text[i];i++){
		row.style.display = ((row_text.indexOf(this.term) != -1) || this.term  === '')?'':'none';
	}
	this.time = false;
}

apex_search.search = function(e){
    var keycode;
    if(window.event){keycode = window.event.keyCode;}
    else if (e){keycode = e.which;}
    else {return false;}
    if(keycode == 13){
		apex_search.lsearch();
	}
    else{return false;}
}</script>
</head><body onload="apex_search.init();">
<table border="0" cellpadding="0" cellspacing="0">
<tbody><tr><td><input type="text" size="30" maxlength="1000" value="" id="S" onkeyup="apex_search.search(event);" /><input type="button" value="Search" onclick="apex_search.lsearch();"/> 
</td></tr>
</tbody></table>
<br>
<table border="0" cellpadding="0" cellspacing="0">
<tr>	<th>EMPNO</th>
	<th>ENAME</th>
	<th>JOB</th>
	<th>MGR</th>
	<th>HIREDATE</th>
	<th>SAL</th>
	<th>COMM</th>
	<th>DEPTNO</th>
</tr>
<tbody id="data">

	<tr>
<td align="right">7369</td>
<td>SMITH</td>
<td>CLERK</td>
<td align="right">7902</td>
<td>17-DEC-80 12.00.00</td>
<td align="right">800</td>
<td align="right">&nbsp;</td>
<td align="right">20</td>
	</tr>
	<tr>
<td align="right">7499</td>
<td>ALLEN</td>
<td>SALESMAN</td>
<td align="right">7698</td>
<td>20-FEB-81 12.00.00</td>
<td align="right">1600</td>
<td align="right">300</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7521</td>
<td>WARD</td>
<td>SALESMAN</td>
<td align="right">7698</td>
<td>22-FEB-81 12.00.00</td>
<td align="right">1250</td>
<td align="right">500</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7566</td>
<td>JONES</td>
<td>MANAGER</td>
<td align="right">7839</td>
<td>02-APR-81 12.00.00</td>
<td align="right">2975</td>
<td align="right">&nbsp;</td>
<td align="right">20</td>
	</tr>
	<tr>
<td align="right">7654</td>
<td>MARTIN</td>
<td>SALESMAN</td>
<td align="right">7698</td>
<td>28-SEP-81 12.00.00</td>
<td align="right">1250</td>
<td align="right">1400</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7698</td>
<td>BLAKE</td>
<td>MANAGER</td>
<td align="right">7839</td>
<td>01-MAY-81 12.00.00</td>
<td align="right">2850</td>
<td align="right">&nbsp;</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7782</td>
<td>CLARK</td>
<td>MANAGER</td>
<td align="right">7839</td>
<td>09-JUN-81 12.00.00</td>
<td align="right">2450</td>
<td align="right">&nbsp;</td>
<td align="right">10</td>
	</tr>
	<tr>
<td align="right">7788</td>
<td>SCOTT</td>
<td>ANALYST</td>
<td align="right">7566</td>
<td>19-APR-87 12.00.00</td>
<td align="right">3000</td>
<td align="right">&nbsp;</td>
<td align="right">20</td>
	</tr>
	<tr>
<td align="right">7839</td>
<td>KING</td>
<td>PRESIDENT</td>
<td align="right">&nbsp;</td>
<td>17-NOV-81 12.00.00</td>
<td align="right">5000</td>
<td align="right">&nbsp;</td>
<td align="right">10</td>
	</tr>
	<tr>
<td align="right">7844</td>
<td>TURNER</td>
<td>SALESMAN</td>
<td align="right">7698</td>
<td>08-SEP-81 12.00.00</td>
<td align="right">1500</td>
<td align="right">0</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7876</td>
<td>ADAMS</td>
<td>CLERK</td>
<td align="right">7788</td>
<td>23-MAY-87 12.00.00</td>
<td align="right">1100</td>
<td align="right">&nbsp;</td>
<td align="right">20</td>
	</tr>
	<tr>
<td align="right">7900</td>
<td>JAMES</td>
<td>CLERK</td>
<td align="right">7698</td>
<td>03-DEC-81 12.00.00</td>
<td align="right">950</td>
<td align="right">&nbsp;</td>
<td align="right">30</td>
	</tr>
	<tr>
<td align="right">7902</td>
<td>FORD</td>
<td>ANALYST</td>
<td align="right">7566</td>
<td>03-DEC-81 12.00.00</td>
<td align="right">3000</td>
<td align="right">&nbsp;</td>
<td align="right">20</td>
	</tr>
	<tr>
<td align="right">7934</td>
<td>MILLER</td>
<td>CLERK</td>
<td align="right">7782</td>
<td>23-JAN-82 12.00.00</td>
<td align="right">1300</td>
<td align="right">&nbsp;</td>
<td align="right">10</td>
	</tr>
</tbody></table><!-- SQL:
null--></body></html>
> SELECT /*delimited*/ * FROM scott.emp
"EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
7369,"SMITH","CLERK",7902,17-DEC-80 12.00.00,800,,20
7499,"ALLEN","SALESMAN",7698,20-FEB-81 12.00.00,1600,300,30
7521,"WARD","SALESMAN",7698,22-FEB-81 12.00.00,1250,500,30
7566,"JONES","MANAGER",7839,02-APR-81 12.00.00,2975,,20
7654,"MARTIN","SALESMAN",7698,28-SEP-81 12.00.00,1250,1400,30
7698,"BLAKE","MANAGER",7839,01-MAY-81 12.00.00,2850,,30
7782,"CLARK","MANAGER",7839,09-JUN-81 12.00.00,2450,,10
7788,"SCOTT","ANALYST",7566,19-APR-87 12.00.00,3000,,20
7839,"KING","PRESIDENT",,17-NOV-81 12.00.00,5000,,10
7844,"TURNER","SALESMAN",7698,08-SEP-81 12.00.00,1500,0,30
7876,"ADAMS","CLERK",7788,23-MAY-87 12.00.00,1100,,20
7900,"JAMES","CLERK",7698,03-DEC-81 12.00.00,950,,30
7902,"FORD","ANALYST",7566,03-DEC-81 12.00.00,3000,,20
7934,"MILLER","CLERK",7782,23-JAN-82 12.00.00,1300,,10

> SELECT /*insert*/ * FROM scott.emp
REM INSERTING into scott.emp
SET DEFINE OFF;
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7369,'SMITH','CLERK',7902,to_date('17-DEC-80 12.00.00','DD-MON-RR HH.MI.SS'),800,null,20);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7499,'ALLEN','SALESMAN',7698,to_date('20-FEB-81 12.00.00','DD-MON-RR HH.MI.SS'),1600,300,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7521,'WARD','SALESMAN',7698,to_date('22-FEB-81 12.00.00','DD-MON-RR HH.MI.SS'),1250,500,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7566,'JONES','MANAGER',7839,to_date('02-APR-81 12.00.00','DD-MON-RR HH.MI.SS'),2975,null,20);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7654,'MARTIN','SALESMAN',7698,to_date('28-SEP-81 12.00.00','DD-MON-RR HH.MI.SS'),1250,1400,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7698,'BLAKE','MANAGER',7839,to_date('01-MAY-81 12.00.00','DD-MON-RR HH.MI.SS'),2850,null,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7782,'CLARK','MANAGER',7839,to_date('09-JUN-81 12.00.00','DD-MON-RR HH.MI.SS'),2450,null,10);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7788,'SCOTT','ANALYST',7566,to_date('19-APR-87 12.00.00','DD-MON-RR HH.MI.SS'),3000,null,20);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7839,'KING','PRESIDENT',null,to_date('17-NOV-81 12.00.00','DD-MON-RR HH.MI.SS'),5000,null,10);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7844,'TURNER','SALESMAN',7698,to_date('08-SEP-81 12.00.00','DD-MON-RR HH.MI.SS'),1500,0,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7876,'ADAMS','CLERK',7788,to_date('23-MAY-87 12.00.00','DD-MON-RR HH.MI.SS'),1100,null,20);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7900,'JAMES','CLERK',7698,to_date('03-DEC-81 12.00.00','DD-MON-RR HH.MI.SS'),950,null,30);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7902,'FORD','ANALYST',7566,to_date('03-DEC-81 12.00.00','DD-MON-RR HH.MI.SS'),3000,null,20);
Insert into scott.emp (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7934,'MILLER','CLERK',7782,to_date('23-JAN-82 12.00.00','DD-MON-RR HH.MI.SS'),1300,null,10);

> SELECT /*loader*/ * FROM scott.emp
 7369|"SMITH"|"CLERK"|7902|17-DEC-80 12.00.00|800||20|
 7499|"ALLEN"|"SALESMAN"|7698|20-FEB-81 12.00.00|1600|300|30|
 7521|"WARD"|"SALESMAN"|7698|22-FEB-81 12.00.00|1250|500|30|
 7566|"JONES"|"MANAGER"|7839|02-APR-81 12.00.00|2975||20|
 7654|"MARTIN"|"SALESMAN"|7698|28-SEP-81 12.00.00|1250|1400|30|
 7698|"BLAKE"|"MANAGER"|7839|01-MAY-81 12.00.00|2850||30|
 7782|"CLARK"|"MANAGER"|7839|09-JUN-81 12.00.00|2450||10|
 7788|"SCOTT"|"ANALYST"|7566|19-APR-87 12.00.00|3000||20|
 7839|"KING"|"PRESIDENT"||17-NOV-81 12.00.00|5000||10|
 7844|"TURNER"|"SALESMAN"|7698|08-SEP-81 12.00.00|1500|0|30|
 7876|"ADAMS"|"CLERK"|7788|23-MAY-87 12.00.00|1100||20|
 7900|"JAMES"|"CLERK"|7698|03-DEC-81 12.00.00|950||30|
 7902|"FORD"|"ANALYST"|7566|03-DEC-81 12.00.00|3000||20|
 7934|"MILLER"|"CLERK"|7782|23-JAN-82 12.00.00|1300||10|

> SELECT /*fixed*/ * FROM scott.emp
"EMPNO"                       "ENAME"                       "JOB"                         "MGR"                         "HIREDATE"                    "SAL"                         "COMM"                        "DEPTNO"                      
"7369"                        "SMITH"                       "CLERK"                       "7902"                        "17-DEC-80 12.00.00"          "800"                         ""                            "20"                          
"7499"                        "ALLEN"                       "SALESMAN"                    "7698"                        "20-FEB-81 12.00.00"          "1600"                        "300"                         "30"                          
"7521"                        "WARD"                        "SALESMAN"                    "7698"                        "22-FEB-81 12.00.00"          "1250"                        "500"                         "30"                          
"7566"                        "JONES"                       "MANAGER"                     "7839"                        "02-APR-81 12.00.00"          "2975"                        ""                            "20"                          
"7654"                        "MARTIN"                      "SALESMAN"                    "7698"                        "28-SEP-81 12.00.00"          "1250"                        "1400"                        "30"                          
"7698"                        "BLAKE"                       "MANAGER"                     "7839"                        "01-MAY-81 12.00.00"          "2850"                        ""                            "30"                          
"7782"                        "CLARK"                       "MANAGER"                     "7839"                        "09-JUN-81 12.00.00"          "2450"                        ""                            "10"                          
"7788"                        "SCOTT"                       "ANALYST"                     "7566"                        "19-APR-87 12.00.00"          "3000"                        ""                            "20"                          
"7839"                        "KING"                        "PRESIDENT"                   ""                            "17-NOV-81 12.00.00"          "5000"                        ""                            "10"                          
"7844"                        "TURNER"                      "SALESMAN"                    "7698"                        "08-SEP-81 12.00.00"          "1500"                        "0"                           "30"                          
"7876"                        "ADAMS"                       "CLERK"                       "7788"                        "23-MAY-87 12.00.00"          "1100"                        ""                            "20"                          
"7900"                        "JAMES"                       "CLERK"                       "7698"                        "03-DEC-81 12.00.00"          "950"                         ""                            "30"                          
"7902"                        "FORD"                        "ANALYST"                     "7566"                        "03-DEC-81 12.00.00"          "3000"                        ""                            "20"                          
"7934"                        "MILLER"                      "CLERK"                       "7782"                        "23-JAN-82 12.00.00"          "1300"                        ""                            "10"                          

> SELECT /*text*/ * FROM scott.emp
"EMPNO"null"ENAME"null"JOB"null"MGR"null"HIREDATE"null"SAL"null"COMM"null"DEPTNO"
7369null"SMITH"null"CLERK"null7902null17-DEC-80 12.00.00null800nullnull20
7499null"ALLEN"null"SALESMAN"null7698null20-FEB-81 12.00.00null1600null300null30
7521null"WARD"null"SALESMAN"null7698null22-FEB-81 12.00.00null1250null500null30
7566null"JONES"null"MANAGER"null7839null02-APR-81 12.00.00null2975nullnull20
7654null"MARTIN"null"SALESMAN"null7698null28-SEP-81 12.00.00null1250null1400null30
7698null"BLAKE"null"MANAGER"null7839null01-MAY-81 12.00.00null2850nullnull30
7782null"CLARK"null"MANAGER"null7839null09-JUN-81 12.00.00null2450nullnull10
7788null"SCOTT"null"ANALYST"null7566null19-APR-87 12.00.00null3000nullnull20
7839null"KING"null"PRESIDENT"nullnull17-NOV-81 12.00.00null5000nullnull10
7844null"TURNER"null"SALESMAN"null7698null08-SEP-81 12.00.00null1500null0null30
7876null"ADAMS"null"CLERK"null7788null23-MAY-87 12.00.00null1100nullnull20
7900null"JAMES"null"CLERK"null7698null03-DEC-81 12.00.00null950nullnull30
7902null"FORD"null"ANALYST"null7566null03-DEC-81 12.00.00null3000nullnull20
7934null"MILLER"null"CLERK"null7782null23-JAN-82 12.00.00null1300nullnull10

So that was kind of a ‘trick’ – I’m not sure it’s a documented feature, although Kris did talk about it WAAAAAAAY back in 2007.

Now you can just Run > Copy > Paste!


Defaults for Exporting Data in Oracle SQL Developer


I was testing a reported bug in SQL Developer today – so the bug I was looking for wasn’t there (YES!) but I found a different one (NO!) – and I was getting frustrated by having to check the same boxes over and over again.

What I wanted was INSERT STATEMENTS to the CLIPBOARD.

Not what I want!

I’m always doing the same thing, over and over again. And I never go to FILE – that’s too permanent for my type of work. I either want stuff to the clipboard or to the worksheet. Surely there’s a way to tell SQL Developer how to behave?

Oh yeah, check the preferences

So you can set the defaults for this dialog. Go to:
Tools – Preferences – Database – Utilities – Export

Now I will always start with ‘INSERT’ and ‘Clipboard’ – woohoo!

Now, I can also go INTO the preferences for each of the different formats to save me a few more clicks.

I prefer pointy hats (^) for my delimiters, don’t you?

So, spend a few minutes and set each of these to what you’re normally doing and save yourself a bunch of time going forward.

Oracle SQL Developer version 4.1 Early Adopter Now Available


Go get it.

Then come back here and read a couple of things.

  • it REQUIRES Java 8
  • it’s not production ready – send feedback to the Forum not to Support

Two new features to help you get started.

SQL Formatting in the Worksheet

Like the /*csv*/ formatting ‘hints’ you can add to your statements? Well now, you can just toggle that output mode for an entire session.

SET SQLFORMAT …

You ‘unset’ the formatting by using the ‘set sqlformat’ command with no arguments. So if you want to spool 3 queries out to files, now you can set the format once, and not muck with your queries at all. And if you want to automate that via an OS script, keep reading :)
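For example, here’s a minimal sketch of that (using the HR sample tables; the file names are just placeholders):

SET SQLFORMAT csv
SPOOL regions.csv
SELECT * FROM hr.regions;
SPOOL OFF
SPOOL locations.csv
SELECT * FROM hr.locations;
SPOOL OFF
SET SQLFORMAT

That last SET SQLFORMAT with no argument puts the script output back to normal.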

File Dialogs

Exporting the same file to Excel, over and over again?

Most of the file open and save dialogs will allow you to re-open or re-save file names/directories. This is going to save you a LOT of time – we hope!

Note the drop-down control on the file-picker

There’s More, Much More

Be sure to also follow Kris’ blog. We’ll have a lot to talk about over the next few months. He’s quite excited about this font stuff, and pretty colors you can do up in a command prompt now. So expect lots on that from him.

Next I’ll talk about one of the most popular features in the tool, and how we enhanced the workflow to make it faster and automatable – loading Excel and delimited data to Oracle using SQL Developer.

SQL Developer 4.1: Easier Excel Imports


The most read post on this site? ‘How to Import from Excel to Oracle with SQL Developer‘ – and it has been since I published it almost 3 years ago.

People really like Excel. Or I should put it this way – Excel is the way information is shared in an organization. It doesn’t matter how much money was invested in BI, reporting, and databases – at the end of the day, it’s going into a spreadsheet.

That being said, sometimes you need to take data in a spreadsheet, or text delimited file, and move it back into Oracle. Either as a new table, or into an existing table.

For version 4.1, we tweaked the existing wizard to save you a bunch of clicks. The wizard works as it always has, so don’t freak out that we changed stuff just for the sake of change.

But something bothered us, and we figured it bothered you too.

You couldn’t see the data you were importing once it came to mapping the columns. So you’d have to go back, back in the wizard to the preview window, and then forward, forward to the column definitions.

Or maybe you had the file open in another editor so you could review it as necessary.

Well, that’s nonsense. So let’s make it so you can ALWAYS see the data.

Oh, and we tweaked a bunch of other things too :)

How it works in version 4.1

Get to the data FASTER. Now you can define the input file and see it on the same screen. Also, if this is a file you’ve just created with a SQL Developer export, or it’s a file you’ve recently imported – it’s going to show up on the file-history-drop-down-thingy.

Look for this in SQL Developer whenever you’re prompted to save or open a file.

Next we need to know how you want to bring the data in…

Note the preview window remains, no more forgetting what the data looks like.

Now let’s map the columns.

Yup, data is still there.

For this example, I’m creating a new table, so for each column I need to define the datatype.

We now take a ‘best guess’ on the datatype – if it looks like a number, we put in number. And, I can see the values that are being proposed for this column.

But what if it’s NOT number?

OK, we THINK this is a date, but we’re not able to guess the date format.

We do give you a drop down list of some date formats. You can choose one, or enter your own. AND, we grab the date format from your NLS preferences and make it the first choice. Now, I happened to build this excel file WITH SQL Developer, so picking that date format just happens to be right for me :)

Note a few other things going on in this screen:

  • On-screen validation: we used to ASK if you wanted to validate your import at the very end of the wizard. Now we validate for you automatically as you go through each column. Problems are highlighted with the warning or error images in the data preview window. We also add a message to explain to you why we think there is a problem.
  • Reviewed columns are marked: each column is italicized until you’ve actually looked at it. So if you’re reviewing 200 columns, you can tell right away which ones you’ve looked at – or not.
  • Sizing: if it’s a number or a string, we look at the rows in the preview window and best-guess the column sizing and precision. We’re tweaking this for the next update, say +2 on scale based on the ‘biggest’ number found. We’re also looking at some pre-defined text sizes for columns, say 10, 100, 256, 4000, 32k… Remember, this is only for NEW tables.
Ok, I picked the right date format, all is well now.

Last step…

…review and go!

But wait, there’s more!

We’ve made it easier to do the same imports over and over again. And you can now run these via the command-line interface. But those topics will have to wait for another post. Stay tuned!

A Quick 4.1 Trick: SET SQLFORMAT


One of your favorite SQL Developer ‘tricks’ is the ability to pre-format query output. So instead of getting standard output back, maybe you want query results to come back as CSV.

But using that requires you to add code to your existing SQLs. Maybe instead it would be cool to set the overall script output format?

Now that we have our own SQL*Plus command line interface (AKA SQLcl), the commands that are available there are now also available in SQL Developer proper.

For example: instead of hacking up your individual statements to get your query results to be formatted to CSV, HTML, XML, etc., you can use the SET SQLFORMAT command to set the desired script output format for your SQL queries.

For example.

Instead of running: SELECT /*CSV*/ * from HR.EMPLOYEES;

I can run

SET SQLFORMAT CSV
SELECT * FROM HR.EMPLOYEES;

When I’m done with getting the output in that format, I can ‘UNSET’ it…and get the standard output back.
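That ‘unset’ is just the SET SQLFORMAT command again with no format argument – a quick sketch:

SET SQLFORMAT CSV
SELECT * FROM HR.EMPLOYEES;
SET SQLFORMAT
SELECT * FROM HR.EMPLOYEES;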

No need to hack up your queries anymore with the format you want – you can now set it as a default for ALL of your script output.

In addition to the normal formats, we now have a new one, ANSICONSOLE. One of the benefits: we bring the results back all at once, check the column widths, and then resize the output so it’s easier to read. No need to set column widths with various SQL*Plus formatting commands.
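Turning it on works just like the other formats – for example:

SET SQLFORMAT ansiconsole
SELECT * FROM hr.employees;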

we try to make the query output as ‘pretty’ as possible

Loading Data via External Tables = Fast


Do as I say, not as I do.

Because I am like most of you, I am very lazy.

Case in point: loading some data from a CSV into an Oracle table. I can use a wizard in SQL Developer and in a few clicks, have it loaded. Usually I’m playing with a hundred rows. Or maybe a few thousand.

But this time I needed to load up about 150MB of CSV, which isn’t very much. But it’s 750k rows, and it was taking more than 5 minutes to run the INSERTs against the table. And I thought that was pretty good, considering. My entire setup is running on a MacBook Air and our OTN VirtualBox Database image.

I’m setting up a scenario that others can run, and the entire lab is only allotted 30 minutes. So I can’t reserve 10 minutes of that just to do the data load.

The Solution: EXTERNAL TABLE

If you have access to the database server via a DIRECTORY object, then you are good to go. This means I can put the CSV (or CSVs) onto the server, in a directory that the database can access.

If you don’t have access to the server directly, then SQL*Loader is your next best bet.

This wizard is pretty much exactly the same as I’ve shown you before. There’s an additional dialog, and the output is a script that you run.

You need to give us the database directory name, the name of your file, and, if you want error and log files, what you want to call them as well.

But when we’re done, we get a script.

The script will create the directory (if you need it), grant privs (if you need them), and drop your staging table (if you want to). That’s why those steps are commented out.

And I tweaked my script even further, changing out the INSERT script to include a function call…but setting up the table from the CSV file was a lot easier using the wizard.

SET DEFINE OFF
--CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/Users/oracle/data_loads';
--GRANT READ ON DIRECTORY DATA_PUMP_DIR TO hr;
--GRANT WRITE ON DIRECTORY DATA_PUMP_DIR TO hr;
--drop table OPENDATA_TEST_STAGE;
CREATE
  TABLE OPENDATA_TEST_STAGE
  (
    NAME         VARCHAR2(256),
    AMENITY      VARCHAR2(256),
    ID           NUMBER(11),
    WHO          VARCHAR2(256),
    VISIBLE      VARCHAR2(26),
    SOURCE       VARCHAR2(512),
    OTHER_TAGS   VARCHAR2(4000),
    WHEN         VARCHAR2(40),
    GEO_POINT_2D VARCHAR2(26)
  )
  ORGANIZATION EXTERNAL
  (
    TYPE ORACLE_LOADER DEFAULT DIRECTORY ORDER_ENTRY ACCESS PARAMETERS
    (records delimited BY '\r\n' CHARACTERSET AL32UTF8
    BADFILE ORDER_ENTRY:' openstreetmap-pois-usa.bad'
    DISCARDFILE ORDER_ENTRY:' openstreetmap-pois-usa.discard'
    LOGFILE ORDER_ENTRY:' openstreetmap-pois-usa.log'
    skip 1
    FIELDS TERMINATED BY ';'
    OPTIONALLY ENCLOSED BY '"'
    AND '"'
    lrtrim
    missing FIELD VALUES are NULL
    ( NAME       CHAR(4000),
    AMENITY      CHAR(4000),
    ID           CHAR(4000),
    WHO          CHAR(4000),
    VISIBLE      CHAR(4000),
    SOURCE       CHAR(4000),
    OTHER_TAGS   CHAR(4000),
    WHEN         CHAR(4000),
    GEO_POINT_2D CHAR(4000)
    )
    ) LOCATION ('openstreetmap-pois-usa.csv')
  )
  REJECT LIMIT UNLIMITED;
 
 
SELECT * FROM OPENDATA_TEST_STAGE WHERE ROWNUM <= 100;
 
 
INSERT
INTO
  OPENDATA_TEST
  (
    NAME,
    AMENITY,
    ID,
    WHO,
    VISIBLE,
    SOURCE,
    OTHER_TAGS,
    WHEN,
    GEO_POINT_2D
  )
SELECT
  NAME,
  AMENITY,
  ID,
  WHO,
  VISIBLE,
  SOURCE,
  OTHER_TAGS,
  to_timestamp_tz(COL_TIMES, 'YYYY-MM-DD"T"HH24:MI:SSTZR'),
  GEO_POINT_2D
FROM
  OPENDATA_TEST_STAGE3 ;

A Small Tweak

My TABLE has a timestamp column. I REFUSE to store DATES as strings. It bites me in the butt EVERY SINGLE TIME. So what I did here, because I’m lazy, is I loaded up the EXTERNAL TABLE column containing the TIMESTAMP as a VARCHAR2. But in my INSERT..SELECT, I throw in a TO_TIMESTAMP to do the conversion.

EXTERNAL TABLES are marked in the navigator with the GREEN ARROW decorators. In the external table, my timestamps have a ‘T’ text column to mark the ‘time’ portion.

The hardest part, for me, was figuring out the format that represented the timestamp data. After a little trial and error I worked out that

2009-03-08T19:25:16-04:00 equates to YYYY-MM-DD”T”HH24:MI:SSTZR. I got tripped up because I was single-quote escaping the ‘T’ instead of double quoting it “T”. And then I got tripped up again because I was using TO_TIMESTAMP() vs TO_TIMESTAMP_TZ().
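If you want to sanity-check a mask like that before wiring it into the INSERT, a quick one-liner does the trick (the literal value here is the one from above):

SELECT to_timestamp_tz('2009-03-08T19:25:16-04:00', 'YYYY-MM-DD"T"HH24:MI:SSTZR') AS ts FROM dual;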

With my boo-boos fixed, instead of taking 5, almost 6, minutes to run:

747,973 ROWS inserted.
 
Elapsed: 00:00:27.987
Commit complete.
 
Elapsed: 00:00:00.156

Not too shabby. And the CREATE TABLE…ORGANIZATION EXTERNAL itself is instantaneous. The data isn’t read in until you need it.

Last time I checked, 28 seconds vs 5 minutes is a lot better. Even on my VirtualBox database running on my MacBook Air.

More SET SQLFORMAT fun in SQLcl


DELIMITED text files are popular ways of passing data around.

CSV anyone? The C stands for ‘Comma’ – regardless of what your smug European friends may have told you 😉 #TonguePlantedFIRMLYInCheek

Anyways, in SQL Developer, when using the export dialog to get a DELIMITED export for your dataset, you can set the delimiter and the string enclosure for your columns.

Don’t like commas as delimiters? Set your own.

So in the command line interface AKA SQLcl:

The first argument defines the delimiter, the 2nd defines the left enclosure, and the 3rd defines the right enclosure.
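So, for example, something along these lines gives you pipe-delimited output with angle brackets as the enclosures – a sketch, assuming the three arguments are passed as plain characters; swap in whatever you like:

SET SQLFORMAT delimited | < >
SELECT * FROM hr.regions;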

So you could have BEER emoji separated values files…

Speaking of SET SQLFORMAT…

Back in October, we made a tweak to the ANSICONSOLE. It’s VERY configurable now in terms of how you want numbers displayed. Don’t miss this awesome post from @krisrice.

Shopping for CSV with SQL Developer’s Cart


A friend of mine asked how he could generate CSV for 40 Oracle tables in Oracle SQL Developer.

He could of course use Tools > Database Export to accomplish this.

But, it was always the same 40 tables. And using the object picker in the wizard can get tedious, especially if it’s the same 40 tables every day/week/month/epoch.

So, I told him to go shopping.

The Cart, Again

I’ve talked about the Cart a few times. It was even one of my 30 Tricks in 30 Days posts a while back. But in that post I talked about building deployment scripts, not table exports.

So let’s do this.

Open the Cart. View > Cart.

Go shopping. Literally, select one or more tables and drag them over into the cart.

Then check the options you want. In this case, no DDL, just Data.

Save your cart. Give it a good name. Then you can easily re-use it later.

Hit the Export button.

Set your options. In this case, I want a file per table in a single directory. And I want the data format to be CSV.

Lots and lots of choices here.

Say ‘Apply’ and SQL Developer will start generating the files.

I always pick the wrong line at check-out.

You can run the process in the background if you’d like…

So it’s done, now let’s go take a look.

Ding, ding, ding. We’re good to go.

But Jeff, GUIs are so Yesterday

Sure. So use the SQL Developer CLI – not to be confused with SQLcl.

This would be sdcli. It’s the full SQL Developer sans the graphics. You can use it to export carts. Just set all of your cart options and save them to files. So you need to save your cart. And you need to save your database export options to a file.

remember, use good names

And then feed that cart filename and database export config filename to sdcli.

Raw text below in case the print’s too small to read here.

┌─[12:55:28]─[wvu1999]─[MacBook-Air-Smith]:/Applications/SQLDeveloper.app/Contents/Resources/sqldeveloper/sqldeveloper/bin$
└─>./sdcli cart help
 
 Oracle SQL Developer
 Copyright (c) 1997, 2015, Oracle and/or its affiliates. All rights reserved.
 
Invalid CART command: help
CART Usage:
cart <command> <command arguments>
cart <command> -help|h
Supported commands:
export -cart <savedcart.xml> -config|cfg <exportconfig.xml> [-target|tgt <dirorfilename>] [-logfile <filenameorstderr>] [-deffile <exportdefinitionfile>]
cloud -cart <savedcart.xml> -config|cfg <deploycloudconfig.xml> [-target|tgt <filename>] [-logfile <filenameorstderr>] [<clouddefinitionfile>]
copy -cart <savedcart.xml> -config|cfg <copyconfig.xml> [-logfile|log <filenameorstderr>] [-deffile <copydefinitionfile>]
Examples:
cart export -cart /home/carts/cart.xml -cfg /home/carts/exporttools.xml
Export the objects included in cart.xml using the options saved in exporttools.xml
cart cloud -cart /home/carts/cart.xml -cfg /home/carts/cloudtools.xml
Deploy the objects included in cart.xml using the options saved in cloudtools.xml.
cart copy -cart /home/carts/cart.xml -cfg /home/carts/copytools.xml
Copy the objects included in cart.xml using the options saved in copytools.xml
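Putting it together, a run for this use case might look something like this (the cart and config file paths are hypothetical; -tgt is the optional target directory from the usage above):

./sdcli cart export -cart /home/carts/csv_tables.xml -cfg /home/carts/csv_export.xml -tgt /home/exports/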

Don’t Forget the Cloud!

I talk about using the Cart to batch automate uploads to our Database Schema service here…and it goes over the syntax for the CLI some more in case you need help.


Bulk Load an Oracle Table from CSV via REST


I have 1,500 rows I need to shove into a table. I don’t have direct access to the database.

But my DBA is happy to give me an HTTPS entry point to my data.

What do I do?

Let’s look at a Low-Code solution:

Oracle REST Data Services & Auto REST for tables.

With this feature, you can, for a given table, make a REST API available for:

  • querying the table
  • inserting a row
  • updating a row
  • deleting a row
  • getting the metadata for a table (DESC)
  • bulk loading the table

Now I’ve previously shown how to INSERT a record to a table with ORDS via POST.

But that’s just one row at a time.

Let’s do it for 1500 rows. And I don’t have the data in JSON. No, my data nerd has given me a CSV ‘dump.’

How do I get it in?

If you want to consult the ORDS Docs, this is what we’re going to be using (DOCS).

For the POST to be received happily by ORDS, we ASSUME:

  • the URI is avail, as the table has been REST enabled
  • the first row will be the column names
  • the rest of the rows are your data

You have lots of options you can pass as parameters on the call. See the DOCS link above.

Ok, let’s do it.

Build Your Table

I’m going to run this code in SQL Developer.

CREATE TABLE stuff AS SELECT OWNER, object_name, object_id, object_type FROM all_objects WHERE 1=2;
 
CLEAR SCREEN
SELECT /*csv*/ OWNER, object_name, object_id, object_type FROM all_objects 
fetch FIRST 1500 ROWS ONLY;

That spits out a new table:

Empty table, 4 columns.

…and some CSV that looks like this:

"OWNER","OBJECT_NAME","OBJECT_ID","OBJECT_TYPE"
"SYS","I_FILE#_BLOCK#",9,"INDEX"
"SYS","I_OBJ3",38,"INDEX"
"SYS","I_TS1",45,"INDEX"
"SYS","I_CON1",51,"INDEX"
"SYS","IND$",19,"TABLE"
"SYS","CDEF$",31,"TABLE"
"SYS","C_TS#",6,"CLUSTER"
"SYS","I_CCOL2",58,"INDEX"
"SYS","I_PROXY_DATA$",24,"INDEX"
"SYS","I_CDEF4",56,"INDEX"
"SYS","I_TAB1",33,"INDEX"
"SYS","CLU$",5,"TABLE"
"SYS","I_PROXY_ROLE_DATA$_1",26,"INDEX"
...

REST Enable the Table
You’ve already got ORDS going. You’ve already got your schema REST enabled, now you just need to do this bit to get your GET, POST, PUT, & DELETE HTTPS methods available for the Auto Table bits.

Alias the table, always secure the table.
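If you’d rather do that enablement in code than through the right-click menu, a rough sketch with the ORDS PL/SQL API looks like this (the schema and alias here just match this example – adjust as needed):

BEGIN
  ORDS.ENABLE_OBJECT(
    p_enabled        => TRUE,
    p_schema         => 'HR',
    p_object         => 'STUFF',
    p_object_type    => 'TABLE',
    p_object_alias   => 'stuff',
    p_auto_rest_auth => TRUE); -- TRUE = require authorization, i.e. 'always secure the table'
  COMMIT;
END;
/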

Now we can make the call.

We POST to the endpoint, it’s going to follow this structure:
/ords/schema/table/batchload

The CSV will go in the POST body.

POST /ords/hr/stuff/batchload?batchRows=500 – I’m asking ORDS to do the inserts in batches of 500 records, which means fewer commits than, say, batches of 25 records.

The CURL would look like this:

curl -X POST \
  'http://localhost:8888/ords/hr/stuff/batchload?batchRows=500' \
  -H 'cache-control: no-cache' \
  -H 'postman-token: 1eb3f365-f83d-c423-176d-7e8cd08c3eab' \
  -d '"OWNER","OBJECT_NAME","OBJECT_ID","OBJECT_TYPE"
"SYS","I_FILE#_BLOCK#",9,"INDEX"
"SYS","I_OBJ3",38,"INDEX"
"SYS","I_TS1",45,"INDEX"
"SYS","I_CON1",51,"INDEX"
"SYS","IND$",19,"TABLE"
"SYS","CDEF$",31,"TABLE"
"SYS","C_TS#",6,"CLUSTER"
"SYS","I_CCOL2",58,"INDEX"
"SYS","I_PROXY_DATA$",24,"INDEX"
"SYS","I_CDEF4",56,"INDEX"
"SYS","I_TAB1",33,"INDEX"
"SYS","CLU$",5,"TABLE"
"SYS","I_PROXY_ROLE_DATA$_1",26,"INDEX"
...

And the results…about 6 seconds later.

1500 rows loaded, no errors.

And just because I like to double check…

Bingo!

The first time I tried this, it was with ?batchRows=100, so 15 COMMITs for the load, and it took 12 seconds. So I cut the time in half by doing batches of 500 rows at a time. You’ll want to experiment for yourself to find an acceptable outcome.

Trivia, caveats, etc.

The ORDS code that takes the CSV in and INSERTs it to the table is the SAME code SQL Developer uses here:

SQLDev can create and populate your table from CSV, XLS, etc

And it’s the same code SQLcl uses here:

CSV to rows in your table, ez-pz

This is not the ideal way to load a LOT of data.

ORDS employs INSERT statements to insert your data using the AUTO route. An external table and CTAS will always be faster. And of course you have SQL*Loader and DataPump. But those require database access. This does NOT.

Or, you could always roll your own code and build your own RESTful Service. And perhaps you should. But if quick and dirty are good enough for you, we won’t tell anyone.

Loading data from OSS to Oracle Autonomous Cloud Services with SQL Developer


Ok that title has a TON of buzz and marketing words in it.

But, we have the Oracle Cloud. And available there is a service where we take care of your database for you – Autonomous. We make sure it’s up, and that it’s fast.

We have one autonomous cloud service today, with a second one coming SOON.

These services come with an S3-compatible object store, OSS (Oracle Object Storage, complete with S3 API support), so you can move data/files around.

For the new Autonomous Transaction Processing (ATP) service, this feature will be available in SQL Developer version 18.3.

In SQL Developer version 18.1 and higher, we make it pretty easy to take data files you may have uploaded to your OSS and load that data to new or existing tables in your Autonomous DB Service.

We basically make the files available to the database, map the columns just right, and then make calls to DBMS_CLOUD (Docs) for getting the data in there. DBMS_CLOUD is an Oracle Cloud ONLY package that makes it easy to load and copy data from OSS.
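Under the covers the generated code is a call along these lines – a rough sketch only, with a made-up credential name, object storage URL, and target table; the wizard builds the real call for you:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'EMPLOYEES',
    credential_name => 'MY_OSS_CRED', -- hypothetical credential
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/mybucket/o/employees.csv', -- hypothetical file
    format          => json_object('type' VALUE 'CSV', 'skipheaders' VALUE '1'));
END;
/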

We also make it very easy to TEST your scenarios – find bad records, fix them, and run again if necessary.

This is all done with a very familiar interface, the Import Data dialog.

If your existing database connection is of type ‘Cloud PDB’, then when you get to the first page, you’ll see this available in the source data drop down.

Pick THIS one.

Then you need to select your proper credentials and tell us which file you want (we do NOT have an OSS file browser TODAY, but we do want one). So you need to have the URL handy.

Paste it in, and hit the preview button.

THAT’S RIGHT, WE’RE PULLING THE DATA DOWN FROM OSS TO LET YOU PREVIEW AND SETUP THE FILE LOAD SCENARIO.

This will work for NEW or Existing tables. For this scenario I’m going with an Existing table.

The next page is almost the same as you’re used to, but a few important differences:

If we were building a NEW table, we could tell it to JUST create the external table, or to also load the data over from the external table to the new one.

Once you have your options in a happy place, the rest of the wizard is pretty much the same… until you get to the Test dialog.

This is where it gets FUN

Let’s imagine you are a genius who never makes mistakes. You’ll get to witness yourself in all your glory when you run the test and see SUCCESS, a populated External Table Data panel and an EMPTY bad file contents panel.

So what we’re trying to achieve here is saving you a LOT of wasted time. We want to make sure the scenario works for say the first 1,000 records before we go to move the ENTIRE file over and process it. If there IS a problem, you can fix it now.

The test will simply create the External table and show the results of trying to query it via your load parameters as defined in the previous screens.

Yes, we’re basically just making calls to DBMS_CLOUD for you.

So it looks like it’s worked. Let’s go preview the data.

That looks like employees to me.

And just to make sure, let’s go peek at the rejected (Bad File Contents) panel.

Sweet!

But Jeff, I’m not perfect, I made a boo-boo.

No worries, let’s see what it looks like when there is a problem with the definition of the external table, or with the data, or both.

Oh, it’s not liking my date format, or something?

Man, it sure would be nice to SEE what that rejected row looks like.

Just click on the Bad File Contents!

Looks like I have my column mapping reversed for employee_id and hire_date, oops.

So instead of starting over, just go back in the wizard, re-map the columns, test again.

And THEN click ‘Finish’ to actually run the full scenario. And when we’re done, we’ll have a log of the complete scenario and we can browse the table.

The Table!

We open the log of the scenario for you, and then you can manually browse the table like you’re used to. Or get to doing your cool reports, graphs, and SQL stuff.

When the wizard is DONE, you’ll have the log of the entire operation, and you can then go browse your table.

What are these tables?

These seem to keep popping up…

You can nuke/drop these as needed, but they’re basically just a collection of CLOBs that show the contents of the logs and bad file from your SQL Dev DBMS_CLOUD runs.

What’s Coming Next?

We’re enhancing the Cart so you can create deployments of multiple files and tables in a single scenario. And then run those as often as necessary.

We’re also working on a data loading facility in SQL Developer Web, so you can get rocking and rolling right away without even having to pull up the desktop tool.

More news here later this year.

Quick Tip: Spooling to ‘Excel’


Problem: I have 3 queries I want to run. I want the end result to be a single spreadsheet that has all the query results included.

This problem may sound familiar to you; I have talked about how to do this with the GUI here – but you get one workbook per exported TABLE.

The problem with my previous ‘solution’ is that you would need to code your queries to database VIEWs and then export those.

Here’s a quick and dirty way to get everything you want to an ‘excel’ file. You’ll get a CSV file, which you can then open in Excel and convert if you’d like.

My queries are simple and the row counts are small – to keep the post simple – but you can substitute your own stuff and should be A-OK.

cd c:\users\jdsmith
SET sqlformat csv
SET feedback off
SET echo off
spool blog_queries_excel.csv
SELECT * FROM regions;
SELECT * FROM locations;
SELECT * FROM departments;
spool off

Execute this code in SQL Developer with F5 or in SQLcl… and your output will look like so, when opened in Excel:

Ta-da.

What does the code do?

cd tells us where to read and write for working with files (it changes the SQLPath essentially)

set sqlformat csv tells us to take the output and put it through our formatter, using the csv style.

set feedback off tells us to suppress messages like ’27 rows selected’ or ‘table created’

set echo off tells us not to include the query that was executed in the output

spool tells us to copy the output to a file

If you want to suppress the output in the console or script output panel and JUST write to the file, then do this
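Here’s a minimal sketch of that, assuming the standard SET TERMOUT OFF setting (note that TERMOUT only suppresses display when the script is run from a file with @, not when the commands are typed interactively):

SET TERMOUT OFF
SET sqlformat csv
SPOOL blog_queries_excel.csv
SELECT * FROM regions;
SELECT * FROM locations;
SELECT * FROM departments;
SPOOL OFF
SET TERMOUT ON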

But Jeff, I want lines between my tables…

Then change the script…turn feedback back on, or use PROMPTs or simply select the whitespace as desired into the output.

I hope this helps!

End of day, when you’re done with your file, and you’re in Excel, you’ll start cleaning it up IN EXCEL. All of this I’ve just shown you is just a kickstarter to get the data into the file that much faster.

SQLcl and the Load (CSV) command


I was going to refer someone on StackOverflow to my post on the LOAD command in SQLcl, but then I realized I hadn’t written one yet. Oops. So here’s that post.

One of the new (that is, a command in SQLcl that is NOT in SQL*Plus) commands is ‘LOAD.’

You can find all the new commands highlighted if you run ‘help’

Guess what this does…

No need to guess what LOAD does, just consult the help.

SQL> help load
LOAD
-----
 
Loads a comma separated value (csv) file into a table.
The first row of the file must be a header row.  The columns in the header row must match the columns defined on the table.
 
The columns must be delimited by a comma and may optionally be enclosed in double quotes.
Lines can be terminated with standard line terminators for windows, unix or mac.
File must be encoded UTF8.
 
The load is processed with 50 rows per batch.
If AUTOCOMMIT is set in SQLCL, a commit is done every 10 batches.
The load is terminated if more than 50 errors are found.
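Basic usage boils down to the table name followed by the file name – a minimal sketch (the table and file names here are hypothetical; check HELP LOAD in your SQLcl build for the exact syntax):

SQL> load hr.objects_stage objects.csv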

A quick demo

Let’s SPOOL some CSV to a file, then use the LOAD command to put that data into a new table.

SQL> SET sqlformat csv
SQL> cd /Users/thatjeffsmith
SQL> spool objects.csv
SQL> SELECT * FROM all_objects fetch FIRST 100 ROWS ONLY;
"OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT_ID","OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","GENERATED","SECONDARY","NAMESPACE","EDITION_NAME","SHARING","EDITIONABLE","ORACLE_MAINTAINED","APPLICATION","DEFAULT_COLLATION","DUPLICATED","SHARDED","CREATED_APPID","CREATED_VSNID","MODIFIED_APPID","MODIFIED_VSNID"
"SYS","I_FILE#_BLOCK#","",9,9,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ3","",38,38,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TS1","",45,45,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CON1","",51,51,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","IND$","",19,2,"TABLE",07-FEB-18,21-NOV-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","CDEF$","",31,29,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","C_TS#","",6,6,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_CCOL2","",58,58,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_PROXY_DATA$","",24,24,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CDEF4","",56,56,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TAB1","",33,33,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","CLU$","",5,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PROXY_ROLE_DATA$_1","",26,26,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ1","",36,36,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UNDO$","",15,15,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UNDO2","",35,35,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_TS#","",7,7,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_FILE1","",43,43,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_COL2","",49,49,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ#","",3,3,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_OBJ#","",2,2,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_CDEF3","",55,55,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_COBJ#","",29,29,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","CCOL$","",32,29,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_OBJ5","",40,40,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","PROXY_ROLE_DATA$","",25,25,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CDEF1","",53,53,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_USER#","",10,10,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","C_FILE#_BLOCK#","",8,8,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","FET$","",12,6,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CON2","",52,52,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ4","",39,39,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","CON$","",28,28,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_CDEF2","",54,54,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","ICOL$","",20,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COL3","",50,50,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_CCOL1","",57,57,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","COL$","",21,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_ICOL1","",42,42,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UET$","",13,8,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","PROXY_DATA$","",23,23,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","USER$","",22,10,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PROXY_ROLE_DATA$_2","",27,27,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJ2","",37,37,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TAB$","",4,2,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COBJ#","",30,30,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER#","",11,11,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","FILE$","",17,17,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJ$","",18,18,"TABLE",07-FEB-18,15-OCT-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","TS$","",16,6,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UNDO1","",34,34,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","BOOTSTRAP$","",59,59,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_COL1","",48,48,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_FILE2","",44,44,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_IND1","",41,41,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER2","",47,47,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_USER1","",46,46,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","SEG$","",14,8,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:25","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJERROR$","",60,60,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","OBJAUTH$","",61,61,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_OBJAUTH1","",62,62,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:26","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_OBJAUTH2","",63,63,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","C_OBJ#_INTCOL#","",64,64,"CLUSTER",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",5,"","METADATA LINK","","Y","N","","N","N",,,,
"SYS","I_OBJ#_INTCOL#","",65,65,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","HISTGRM$","",66,64,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_H_OBJ#_COL#","",67,67,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","HIST_HEAD$","",68,68,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_HH_OBJ#_COL#","",69,69,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_HH_OBJ#_INTCOL#","",70,70,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","FIXED_OBJ$","",71,71,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_FIXED_OBJ$_OBJ#","",72,72,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:27","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TAB_STATS$","",73,73,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_TAB_STATS$_OBJ#","",74,74,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","IND_STATS$","",75,75,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_IND_STATS$_OBJ#","",76,76,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","OBJECT_USAGE","",77,77,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_STATS_OBJ#","",78,78,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","PARTOBJ$","",79,79,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_PARTOBJ$","",80,80,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","DEFERRED_STG$","",81,81,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_DEFERRED_STG1","",82,82,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","DEPENDENCY$","",83,83,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:28","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","ACCESS$","",84,84,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_DEPENDENCY1","",85,85,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_DEPENDENCY2","",86,86,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_ACCESS1","",87,87,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","USERAUTH$","",88,88,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_USERAUTH1","",89,89,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","UGROUP$","",90,90,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_UGROUP1","",91,91,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_UGROUP2","",92,92,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","TSQ$","",93,10,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","SYN$","",94,94,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","VIEW$","",95,95,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","TYPED_VIEW$","",96,96,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","SUPEROBJ$","",97,97,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_SUPEROBJ1","",98,98,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","I_SUPEROBJ2","",99,99,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
"SYS","SEQ$","",100,100,"TABLE",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",1,"","METADATA LINK","","Y","N","USING_NLS_COMP","N","N",,,,
"SYS","I_VIEW1","",101,101,"INDEX",07-FEB-18,07-FEB-18,"2018-02-07:19:20:29","VALID","N","N","N",4,"","NONE","","Y","N","","N","N",,,,
 
100 ROWS selected. 
 
SQL> spool off
SQL> CREATE TABLE demo_load AS SELECT * FROM all_objects WHERE 1=2;
 
TABLE DEMO_LOAD created.
 
SQL> LOAD demo_load objects.csv
--Insert failed in batch rows  101  through  103 
--ORA-01400: cannot insert NULL into ("HR"."DEMO_LOAD"."OWNER")
--Row 101 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Row 102 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('100 rows selected. ','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Row 103 data follows:
INSERT INTO DEMO_LOAD(OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,SHARING,EDITIONABLE,ORACLE_MAINTAINED,APPLICATION,DEFAULT_COLLATION,DUPLICATED,SHARDED,CREATED_APPID,CREATED_VSNID,MODIFIED_APPID,MODIFIED_VSNID)
VALUES ('','','',NULL,NULL,'',to_date(''),to_date(''),'','','','','',NULL,'','','','','','','','',NULL,NULL,NULL,NULL);
--Number of rows processed: 103
--Number of rows in error: 3
1 - WARNING: LOAD processed WITH errors
SQL> commit;

Wait, what’s with the 3 failed rows at the end?

If I tail the csv file I created, there are a few extra lines due to feedback…hence the 3 rows that failed to load – which is good 🙂

Browsing the TABLE in SQL Developer it looks like it ran just as it should.

DATEs and TIMESTAMPs came in just A-OK as well 🙂

If I go in and remove those 3 lines, truncate the table, and run the LOAD again…

Cleaner 🙂
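If you'd rather not edit the spool file by hand at all, another option (a small sketch on my part, not something from the demo above) is to turn feedback off before spooling, so those trailing lines never make it into the CSV:

SQL> SET feedback off
SQL> SET sqlformat csv
SQL> spool objects.csv
SQL> SELECT * FROM all_objects FETCH FIRST 100 ROWS ONLY;
SQL> spool off
SQL> SET feedback on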

Is this the BEST way to load CSV?

Probably not – I would still advise folks it’s much faster to use sqlldr or External TABLEs, but it would be hard to argue this isn’t simpler, especially when you’re dealing with reasonable amounts of data.

Reasonable being a number of rows that lend themselves to being INSERTed one at a time.

Batch Loading CSV to a TABLE in Oracle Autonomous Database using AUTOREST API


Signing up for an Autonomous Database is easy. You can have an instance up and running in just a few minutes. And now, you can even have one for FREE.

But one of the first things you’re going to want to do is shove some TABLEs into your schema, or just load some data.

We’re working on making this even easier, but let’s quickly recap what you can already do with our tools.

A Saucerful of “Secrets” Ways to Load Data to the Cloud with our DB Tools

Taking Advantage of AUTO TABLE and ORDS

If you already have a TABLE in your schema, and you want to create a REST API for accessing said table, we make that easy.

It’s a right-click in SQL Developer (Desktop)

This UI is coming to SQL Developer Web, soon.

Or you could of course just run this very simple PL/SQL block –

BEGIN
    ORDS.ENABLE_OBJECT(p_enabled => TRUE,
                       p_schema => 'JEFF',
                       p_object => 'HOCKEY_STATS',
                       p_object_type => 'TABLE',
                       p_object_alias => 'hockey_stats',
                       p_auto_rest_auth => TRUE);
    COMMIT;
END;

Another quick aside, if you need to catch up on these topics, I’ve talked about creating your application SCHEMA and REST Enabling it for SQL Developer Web access.

And, I’ve talked about using the CSV Load feature available with the ORDS AUTO Table mechanism.

My TABLE

I have a HOCKEY_STATS table, that I want to load from some CSV data I have on my PC. It’s an 8MB file (35000 rows, 70 columns).

Now, I could use the Import from CSV feature in SQL Developer (Desktop) to populate the TABLE…

Took approximately 16 seconds to batch load 35,000 records to my Autonomous Database service running in our Ashburn Data Center from Cary, NC – using the MEDIUM Service.

That’s not super quick, but it was super easy.

But what if I need a process that can be automated? And my API du jour is HTTP and REST?

Let’s POST up my CSV to the TABLE API

Let’s find the URI first. Go into your Development Console page for your service – you’ll see we show you what all of your ORDS API calls will start with:

You don’t have to guess or reverse engineer your ORDS calls from the SQL Developer Web or APEX links anymore.

After the ‘/ords/’ I’m going to include my REST Enabled SCHEMA alias, which I have specified as ‘tjs’ in place of ‘JEFF’, and then my TABLE alias, which I’ve just left as ‘hockey_stats’.

So if I want to do a CSV load, I need to HTTPS POST to

https://ABCDEFGHIJK0l-somethingash.adb.us-ashburn-1.oraclecloudapps.com/ords/tjs/hockey_stats/batchload?batchRows=1000

The ‘/batchload?batchRows=1000’ at the end tells ORDS what we’re doing with the TABLE, and how to do it. This is documented here – and you’ll see there are quite a few options you can tweak.

Before I can exercise the API, I need to assign the ORDS Privilege for the TABLE API to the ‘SQL Developer’ ORDS Role. That will let me authenticate and authorize via my ‘JEFF’ Oracle database user account.

There’s a PL/SQL API for this as well as an interface in APEX.

If that sounds ‘icky’ then you can also take advantage of our built-in OAUTH2 Client (example).

Now, let’s make our call. I’m going to use a REST Client (Insomnia) but I could easily just use cURL.
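If you do go the cURL route, the call looks something like this – just a sketch, with a placeholder URL, password, and file name, and plain Basic auth against my REST Enabled schema user:

curl -X POST \
  --user JEFF:my_password \
  --header "Content-Type: text/csv" \
  --data-binary @hockey_stats.csv \
  "https://your-instance.adb.us-ashburn-1.oraclecloudapps.com/ords/tjs/hockey_stats/batchload?batchRows=1000"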

Almost 10 seconds…not blazing fast, but again, very easy (no code!) and it’s sending 8MB over HTTP….

I could tweak the batchRows parameter and see if I could get faster loads – I’m sure I could. But the whims of public internet latency and the nature of the data I’m sending up in 16 KB chunks will make this a fun ‘it depends’ tech scenario.

Excel/CSV Import to a new TABLE in SQL Developer Web


You have some data, but not in your trusty Oracle Database yet.

How to get it there?

You have many options of course, but today I want to talk about a brand new one, Oracle SQL Developer Web (SDW).

Version 19.4 launched for all Oracle customers via our Oracle REST Data Services product. You can download and run this today for any of your on-premises Oracle databases.

Quick shout-out to ORACLE-BASE: he ALREADY has a nice suite of videos and technical posts about getting started with SDW, so don’t miss those!

What I want to show you today is how to quickly take an existing CSV or Excel file and use it to create a NEW TABLE, and insert that data.

Getting started is as simple as a drag and drop

In the SQL Worksheet area of SDW, there’s a prominent panel on the bottom that practically BEGS you to give us some data to add to your database.

Once you start using this feature, you’ll see a log of imports listed here, but you can still drag and drop files.

Do what it says with your mouse, or you can click the little Cloud button in the toolbar next to the trashcan.

You’ll either get prompted for the file, or if you’ve dropped a file there, you’ll see a popup dialog with your data.

Does this look like the right data? If YES, click Next.

The little gear button allows you to make a few tweaks as to how we read and interpret the data in your file –

Click the Gear button to access these properties.

The ‘Preview’ window defaults to 100 rows. That’s the amount of data we’ll take a peek at in order to ‘best guess’ how to shape your table on the next set of screens.

Hopefully you have a data model of sorts (in your head, at least!), and you KNOW what your column definitions should be.

The Most Important Part

You need to tell us how to store this data in your new table, which I’m calling ‘ACTIVITIES2.’ We’ll default the table name to the name of the input file.

We’ll default the column names in the table to the column names in the input file, taking care to add underscores where needed. You can override anything you don’t like.

Don’t forget the Format Mask for your DATEs and TIMESTAMPs!

We’ll best guess the Format Mask for the temporal data types for you, but if we can’t manage it, you can enter your own. And I’ve had to do this for my Strava data here, actually.
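For example – and this is just a hypothetical value, not necessarily what Strava exports – if a column holds text like 2020-01-04 14:32:13, the mask to supply is YYYY-MM-DD HH24:MI:SS, which you can sanity-check with a quick query:

SELECT TO_DATE('2020-01-04 14:32:13', 'YYYY-MM-DD HH24:MI:SS') FROM dual;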

If you switch over your VARCHAR2 data type to NUMBER, be sure to update the Precision. There’s a bug in 19.4 where we leave it as 4000 – that won’t work, so I’ve switched mine back to 38.

You can scroll right to see more of the 100 rows available in the preview window to make sure things look ‘right.’

Click ‘NEXT’ to see what we’re going to do, before we actually do it.

Click Finish if you’re happy!

If you see something amiss, you can always go ‘Back’ and adjust. But we’re going to click Finish and see what happens.

It doesn’t take more than a second to actually array batch insert 500 rows, so it’s done by the time we print the dialog 🙂

A failed row, boo. Let’s see why.

If I click on that entry in the Data Loading log, I can get the details (as stored in the SDW$ERR$_ACTIVITIES2 TABLE)

Yeah, the database didn’t like that COMMA in the number value.

I can easily fix that data and do the IMPORT again manually, or I could do an INSERT as SELECT from that SDW$ table.

But I’m going to go LOOK at my data!

Notice that we automatically refresh the Worksheet browser list of TABLES, I can now see my ‘ACTIVITIES2’ table!

I’m trying to get back into shape this year. Sigh, don’t EVER get old/fat if you can help it!

SQL Developer Web in Autonomous

This IMPORT feature is only available for existing tables in our Autonomous Database Cloud Services, as SQL Developer Web hasn’t been upgraded to version 19.4 yet. That’s scheduled to happen ‘soon,’ so you’ll see this new feature appear there shortly.

SQL Developer Web, The Movie!

There’s a SQL Developer Web playlist with 4 videos and growing!

Loading Data into Oracle with SQLcl


When it comes to loading data, especially very large amounts of data – if Data Pump is available, use that. If you can create an External Table, do that. If you have access to SQL*Loader, use that.

But.

Not only is that a lot of IF’s, there’s also the question as to how developer-friendly those interfaces can be, especially if you don’t live in an Oracle Database each and every day.

As a quick aside, I recommend you read Tim’s latest rant. By the way, he calls his posts rants – that’s not me ascribing a pejorative to the good Doctor. His post title pretty much says it all, The Problem With Oracle : If a developer/user can’t do it, it doesn’t exist.

The TL;DR take is pretty simple – if the interface isn’t readily available AND intuitive, a developer isn’t very likely to use it. Or bother to take the time to learn it.

If you’re still with me, what I wanted to talk about today is the LOAD command in SQLcl.

Yes, I know it’s not as fast as SQL*Loader. And I’ve extolled the virtues of SQL*Loader before! But maybe you don’t have an Oracle Client on your machine, and maybe you lack the patience to learn the CTL file syntax and yet another CLI.

So instead, if you’re already in SQLcl and you simply want to batch load some data to a table, I invite you to check out the latest and greatest we’ve introduced in version 20.2 of SQLcl.

New for 20.2

  • set load – # of rows per batch, # of errors to allow, date format, truncate before we go?
  • set loadformat – CSV? are there column names in line 0/1? what’s the delimiter? “strings” or ‘strings?’

So while the LOAD command isn’t new, the amount of flexibility you have now is very much new. You’re no longer ‘stuck with the defaults.’

Let’s load some funky data.

We’ll throw in a date column, some weird string enclosures, no column headers, a blank line to start things…here’s what one row looks like

4|^|"US"|^|"Much like the regular bottling from 2012, this comes across as rather rough and tannic, with rustic, earthy, herbal characteristics. Nonetheless, if you think of it as a pleasantly unfussy country wine, it's a good companion to a hearty winter stew."|^|"Vintner's Reserve Wild Child Block"|^|87|^|65|^|"Oregon"|^|"Willamette Valley"|^|"Willamette Valley"|^|"Paul Gregutt"|^|"@paulgwine?ÿ"|^|"Sweet Cheeks 2012 Vintner's Reserve Wild Child Block Pinot Noir (Willamette Valley)"|^|"Sweet Cheeks"|^|436|^|08-JUL-2020 13.54.37

Yes, I have some wine review data – thanks Charlie for the heads-up!

So let’s set up our config for the run, starting with the profile of the data itself.

Our defaults aren’t going to cut it.

Column_names, delimiter, enclosure, and skip_rows all need to be tweaked.
Setting enclosure_left, without providing something for the right, defaults to both.

Now let’s look at the overall load settings.

Let’s turn truncate on, set the date_format, and bump up the batch_rows.
We’ll play with a few different batch_row sizes and commit levels, so the TRUNCATE will come in handy.

Let’s do this!

The command syntax is simply, load <table_name> <file_name>…and that’s it.
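Put together, a session looks roughly like this – a sketch of the settings described above, with a made-up table/file name; the exact option spellings (and whether you stack them on one line) can vary by SQLcl version, so lean on help set loadformat and help set load:

SQL> set loadformat delimited delimiter |^| enclosure_left " column_names off skip_rows 1
SQL> set load truncate on date_format DD-MON-YYYY HH24.MI.SS batch_rows 1000
SQL> load wine_reviews wine_reviews.csv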

I actually got this to load the first time I tried! I’ll be having a cookie/beer later as a reward 🙂

Now, what if we did something stupid, like set batch_rows to 5? There may be a case where that makes sense, but not here, not for this scenario.

I know someone will say that 4 seconds is slow compared to SQL*Loader, and you’d be right.

For quick and dirty data work, 50k rows in 4 seconds will do just fine. And nothing to install or configure, just unzip SQLcl and go. I’m pretty sure your average developer could use this interface and feature.

Trivia: This is the same code we use in…

…SQL Developer Desktop and ORDS. When you REST enable a table and use the batch loading POST endpoint, that’s running the same code SQLcl is using!


DBMS_CLOUD and USER_LOAD_OPERATIONS with Oracle Autonomous Database


Loading data is a hot topic when it comes to databases, and it always has been. INSERTs, Data Pump, SQL*Loader, External Tables, IMP, RMAN, CREATE/INSERT as SELECTs, using ORDS and AutoREST, importing from Excel…and that’s maybe half of your options for Oracle Database.

One of the PL/SQL packages in Oracle Autonomous is DBMS_CLOUD (Docs) – and it allows you to access files in an Object Store, including the one you get in the Oracle Cloud (OSS).

I can read these files, create new ones, delete them – from a database session. So a very common use case for this package is to be able to read data from one of these files and put that data into a table.

ORACLE-BASE has a nice tutorial, and I don’t want to re-hash covered ground, but I did want to do a quick example, and give a shout-out to a logging view that DBMS_CLOUD uses.

Pre-Authenticated Requests

Objects (files, directories, buckets…) in the Object Store require you to be authenticated and authorized in order to be able to read or write or even get a listing of what’s in your Object Store.

But…what if you had a FILE that you wanted to make available to ANYONE who had its address? The Oracle Cloud and the Object Store allow you to create a ‘pre-authenticated request’ – that is, everything you need to access the resource is included in the URI for said resource.

Warning: be VERY careful with these.

Loading the data

I have an EXISTING table :

CREATE TABLE CHANNELS
   (channel_id CHAR(1),
    channel_desc VARCHAR2(20),
    channel_class VARCHAR2(20)
   );

I have my file in the Object Store:

S,Direct Sales,Direct
T,Tele Sales,Direct
C,Catalog,Indirect
I,Internet,Indirect
P,Partners,Others
J,thatJeffSmith,Direct

I need to create my pre-authenticated request…and copy that generated URL, then feed that to a very simplified call to DBMS_CLOUD.COPY_DATA:

BEGIN
  DBMS_CLOUD.COPY_DATA(
     table_name =>'CHANNELS',
     file_uri_list =>'https://objectstorage.us-ashburn-1.oraclecloud.com/p/b/something/o/channels.txt',
     format => json_object('delimiter' VALUE ',') );
END;
/

Don’t bother with that URI – you’ll need to upload and create your own. But that’s a VERY simple call to load data.

Here’s my TABLE. Here’s my FILE. Here’s how to PARSE the data in my file.
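And if your file had a header row, or you wanted to cap the number of bad records, the same call takes a richer format document. This is only a sketch – skipheaders and rejectlimit are option names I believe DBMS_CLOUD accepts, so double-check the Docs before leaning on them:

BEGIN
  DBMS_CLOUD.COPY_DATA(
     table_name    => 'CHANNELS',
     file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/p/b/something/o/channels.txt',
     -- skip one header line, allow up to 10 rejected rows (assumed option names)
     format        => json_object('delimiter'   VALUE ',',
                                  'skipheaders' VALUE '1',
                                  'rejectlimit' VALUE '10') );
END;
/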

Cue SQL Developer Web

I can run my PL/SQL call directly in the SQL worksheet in SQL Developer Web. Just login as the user, and use the Execute as Script button (2nd green button in toolbar).

That’s very simple – just imagine some SERIOUS files, not a 6 or 7 line CSV.

USER_LOAD_OPERATIONS

Your schema has a VIEW that tracks all data load operations you’ve attempted with the DBMS_CLOUD package (Docs).

If we query our new table and our view, we can see what we’ve got going on –

I’m not very smart, it took me 2 tries to get it right.
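Here’s roughly what that check looks like – a sketch, and the column list is from memory of the view’s Docs (ROWS_LOADED, LOGFILE_TABLE, and BADFILE_TABLE are the handy ones when a load misbehaves):

SELECT * FROM channels;

SELECT id, type, status, table_name, rows_loaded, logfile_table, badfile_table
  FROM user_load_operations
 ORDER BY id;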

Takeaways

  • if you’re going to be using Oracle Autonomous – get comfortable with the DBMS_CLOUD package
  • the USER/DBA_LOAD_OPERATIONS views are handy for tracking what you’ve been doing
  • you can use pre-authenticated requests to make access to your files in the object store – PUBLIC

We (the Database Tools Team) have built other interfaces to take advantage of the Object Store and DBMS_CLOUD. And we continue to build more!

Some related articles.

The post DBMS_CLOUD and USER_LOAD_OPERATIONS with Oracle Autonomous Database first appeared on ThatJeffSmith.

Tips on Importing Excel to your Database & ‘Data Modeling’

From Excel to Oracle…only to have it exported back to Excel. Poor Sisyphus!

Why did I put Data Modeling in quotes? Because ingesting an Excel file into your database and having a table created ‘as is’ IS NOT data modeling! Unless that Excel spreadsheet was the end result of a data modeling exercise, but let’s not kid ourselves.

But this post isn’t meant to solely be a rant, I want to HELP YOU be successful!

I’ve found certain problems arise more often than others when I have an occasion to grab some data off the innerwebs or someone sends me their ‘problematic’ Excel file that they want ‘sucked into the database.’

Today’s post was largely inspired by @krisrice sending me this link. A redditor did a survey of 5,000+ developers, and I decided to put that into my DB for giggles, and to also do some regression testing on SQLDev, Database Actions, ORDS, and SQLcl.

The cruelest joke is that after you jump through all these hoops to get your data into Oracle, know that at the end of the day someone else will suck it right back out to Excel.

Anyways, I figured after fixing up this spreadsheet, you might enjoy seeing all of the different things I pay attention to so that I don’t waste time trying to get data loaded. Helping 200,000+ people import their data from Excel to Oracle Database has trained my brain to be very particular about certain things…

Note that these tips may or may NOT be specific to SQL Developer or the tools.

1. Use the Right Data Types

If you’re importing data to a NEW table, that is, you’re using the CSV or Excel file to define the table properties, bring in DATEs as DATE columns. Or TIMESTAMPs.

Our tooling will try to help you interrogate the text that represents a DATE so it can be correctly inserted into a DATE field.

If we’re not able to decipher it, you’ll need to provide it. And you can get quite creative in supplying the Date/Time formats fed to the TO_DATE() function. I will spend more than a few minutes experimenting and checking out the Docs if it’s an odd pairing of whitespace, delimiters, and date jumbling…

The FORMAT tells the database how to interpret your numbers and letters as a DATE.

Not sure if you should go CLOB or just ‘big’ VARCHAR2s? Use VARCHAR2 until you can’t.

2. Tip on Dates: VALIDATE_CONVERSION

You can use a simple query through the DUAL table using this function to see if you’ve got it just right for the data you’re about to import.

Here’s that Docs link.
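Something along these lines – the date strings and format mask are purely illustrative – and it returns 1 when the text would convert cleanly, 0 when it wouldn’t:

SELECT VALIDATE_CONVERSION('29-Jan-2022 13:45' AS DATE, 'DD-Mon-YYYY HH24:MI') good_date,
       VALIDATE_CONVERSION('2022/01/29'        AS DATE, 'DD-Mon-YYYY HH24:MI') bad_date
  FROM dual;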

3. Or Don’t Worry about it, bring everything in as VARCHAR(4000/32000)

The quickest way to success is don’t define anything really other than your table and column names. Then once the data is in the database, use SQL or more DDL to ‘fix it up, proper.’

I will tell you that I’ve personally found it easier to do the ‘futzing around’ on the client side. Once it comes into the database, I’m likely to forget about making it ‘good.’

If it’s really-real data, I’ll do actual data modeling. That is, ask critical questions – what does this data represent, how will it be used, is it related to anything else, should it be normalized – broken down into separate tables, etc.

I’ll then go CREATE that schema, and then use the tools to simply import the data into the proper places.

Are you the ONLY one who will ever see this code? Is performance not a big deal because it’s a few measly thousand rows? Then maybe this is OK. But don’t let this shortcut make its way into anything close to your code and regular processes.

4. Set your preview window to ALL THE ROWS if…

…you’re not sure how wide to make the text columns.

By default our tools scan the first 100 rows of your data, looking to the width of the strings, and trying to determine if any of those strings are actually dates.

If you leave this at 100, know that (per Murphy’s Law) row 101 will have a field one character longer than anything in the previous 100.

So when you go to run the scenario, some of your rows will fail. We’ll be nice and tell you which ones, so you can re-size, and run those failed INSERTs to bring in the ‘bad’ records.

But, it gets annoying.

So my advice?

Do the proper modeling – how wide should a zipcode be? This is a trickier answer than it appears, and if you think ZipCodes should be numbers, boy are you in for a surprise!

If you have no idea how wide they should be, or if there is no predetermined, logical reason to restrict the size, then go with the maximum (4000 or 32k). Just know that if you’re going to be indexing these columns, you may run into some restrictions later.

If I’m importing 5,000 records, then I’ll set the preview window to 5000, and let the tool pick my widths for me and be done with it.

If I’m importing 10,000,000 rows from CSV..well then, I don’t set the preview window to 10,000,000. I do some thinking or I set everything to the max.

5. Beware Excel Macros!

Just remove them all. Easiest way to do this? Select everything in the workbook, copy to clipboard, and then paste back in using the Paste Values option – macros don’t get copied to the Clipboard. Then save to a new file and move on.

6. Beware HUGE Excel Files!

I mean like tens or hundreds of megabytes. An Excel file is actually a collection of archived (zipped!) XML files. Opening one and parsing it is a pain, and NOT cheap! So if you’re wondering why SQLDev is taking ‘forever’ to chunk through your Excel file, it’s because there’s a LOT of work to be done.

If it’s a HUGE Excel file, save it as a CSV, and import THAT. CSVs are plain text files…MUCH easier to parse, scan, and read into memory. And easier = faster.

If you go this route, be sure you’re not losing things like leading 0’s on fields that look like numbers but should actually be strings, and make sure you don’t have strings that line-wrap, as a CR/LF generally indicates a new record in CSV.

Fighting strings with multiple lines in a field is the biggest pain with CSV…

7. Mind the NULLS and Blank Lines

If your table doesn’t have a PRIMARY KEY defined or any columns defined as NOT NULL, then if your incoming CSV/Excel has blank lines/rows, you’ll see those same empty rows (ALL NULLS) when you’ve imported your data.

The fix is easy – add a Primary Key constraint on a ‘natural key’ (something unique in your incoming data set), or use an IDENTITY column with the option to generate the value by default ON NULL.

If I want a new column added to my table, I’ll often put that in there and give it a DEFAULT value; then my import will run and the row will have what I want it to, even if the data’s not included in the originating CSV/Excel.
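A sketch of all three ideas against a hypothetical import table (the table and column names are made up for illustration):

-- reject all-NULL rows by putting a primary key on a natural key
ALTER TABLE activities ADD CONSTRAINT activities_pk PRIMARY KEY (activity_date);

-- or let the database hand out a key, even when the file has no value for it
ALTER TABLE activities ADD (
  activity_id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
);

-- a new column with a DEFAULT gets populated even though it's not in the CSV/Excel
ALTER TABLE activities ADD (load_source VARCHAR2(30) DEFAULT 'EXCEL_IMPORT');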

8. Detecting patterns…

Are you doing this on a regular basis? That should be screaming to the developer brain inside of each of us that there’s probably a better way than this ad hoc point and click stuff.

Almost anything can be scripted. Including bringing in data from Excel and CSV. We’re putting ALL of this GUI power into SQLcl as commands. I’ll have more to say on this later this year.

9. Don’t forget the RESTful Path

The ?batchload POST APIs on REST Enabled tables let you do the same CSV batch loading that SQLcl or SQL Developer offers in their CLI and GUIs. And it’s not terribly slow.

With POST /ords/admin/huge_csv/batchload?batchRows=5000 I’m asking ORDS to do inserts in batches of 5,000 records – fewer commits than, say, batches of 25 records.

10. We’re doing this for a good reason

Once the data is in our database, we have a single source of truth, not 10 versions of an Excel file floating around. This is an EXCELLENT reason to build an APEX app around the Interactive Grid. But also, our data will be included in our backups, it’ll be easily accessed via SQL or any standard database tool.

Folks will always demand Excel, but at least make sure the data they’re grabbing is good at the moment it’s leaving home.

11. Almost forgot… “stupid column names”

DO NOT DO THIS. Do not punish your developers, your end users, or your applications by forcing them to support horrible object and column names just because they’re in your Excel file.

Just because you can do things like CREATE TABLE “TaBLE” (“COLUMN” integer); … doesn’t mean you should. And in fact, almost never. If you’re migrating from say SQL Server and you don’t want to rewrite a ton of code, I’m happy to give you an exemption on this rule.

The post Tips on Importing Excel to your Database & ‘Data Modeling’ first appeared on ThatJeffSmith.

Using SQLcl to load CSV to a table, without COLUMN HEADERS


I don’t have SQLcl yet, DOWNLOAD it!

Question from the ‘comments’ today:

I would like to have a parameter for dataload WITHOUT column names having to be in the 1st line.

Simply load the columns in the order they are in the txt/csv file.

Often delivered or generated txt files have no header information at all and you still know how to handle them (you know the target table AND the source file structures and that they must match.)

No joke, I really have such use cases and have (yet) to “construct” the first line (column names) by concat a “default” file and the data file. ugly…

a customer…

When I get these questions, I just love being able to say, “we can in fact do just that!”

Our TABLE

If we are going to load data, we’re going to need a table. Let’s build a quick-n-dirty copy of HR.EMPLOYEES.

CREATE TABLE emps_no_headers2
    AS
        SELECT
            employee_id,
            first_name,
            last_name,
            salary
        FROM
            employees
        WHERE
            1 = 2;

Our new table looks like this –

I truncated the table after I did the CTAS…accidentally w/o the WHERE clause, hence the STATS.

To load CSV, we’re going to need CSV. Let’s generate the CSV from our existing EMPLOYEES table.

CSV export, no column headers!

Now the default behavior when using our command to load delimited data is to treat the first line of the incoming data as the list of COLUMN headers for the table, and use that to map which items in the data being streamed go to which columns in the table being populated by SQLcl.

The command is called LOAD.

LOAD has two sets of options:

  • set loadformat – how the data being imported will be processed
  • set load – what and how the load operation will actually take place

We want to tell SQLcl to NOT expect column headers in our CSV, so we’re going to use ‘set loadformat’.

Your Load Options

You can use help set loadformat to get help…setting your load formats.

Hi friends in Europe! I know what you are thinking. You can change the delimiter(;) !

Want some help? Just ask for it…

Yes Virginia, there is an UNLOAD command too.

Loading the table

No mess, no fuss. Just works.
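In case the screenshot isn’t handy, the shape of it is simply this (the file name is made up, and help set loadformat will confirm the option names for your SQLcl version):

SQL> set loadformat csv column_names off
SQL> load emps_no_headers2 emps_no_headers.csv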

Let’s do a trick! Let’s take a CSV and get DDL for it.

Maybe you don’t have a table yet, you JUST have a CSV file. And THIS CSV file does have column headers. Let’s see what SQLcl 21.3 can do with that.

So I have a CSV. And I WANT a table from that. I can use the GUI of course. But we’re DEVS, no mouse, no mouse!

load table tablename filename show_ddl

We’ll scan the data, look for max column widths, date formats, etc, and rename columns that aren’t legal/valid for an Oracle schema. That ‘show_ddl’ bit is new for 21.3, and says, just show me what you WOULD do, without actually doing it.

The post Using SQLcl to load CSV to a table, without COLUMN HEADERS first appeared on ThatJeffSmith.

SQLcl LOAD CSV, create new tables or just generate the DDL


Subscribers and my mom will probably remember that I’ve briefly talked about this feature before, but it was really just a tease. I wanted to go into a bit more detail today.

In version 21.3, SQLcl got a wicked cool new feature. And yes, it sounds pandering of me to say that, but every now and then I see new features come out that turn out to be JUST as handy as we imagined them when we set out to build them.

This is one of those features I’ll probably toss out in every Tips & Tricks talk I do going forward.

You have: a delimited text file or simply a CSV.

This is my personal data dump from the social media app known as Untappd.

Putting that data into a table.

The Old-School Way

This still works, of course.

But, it’s a wizard, has multiple steps, and I don’t have SQLDev started, and I’m already at my prompt, ready to go, NOW.

The New-School Way

The LOAD command has a new parameter you can toss onto a job, ‘NEW’.

This looks promising!

So what does this command do? Well, we scan your data from the CSV, we look at the column headers to come up with new column names, and then we look at the data itself – how wide are the strings, is that a DATE format we recognize, etc.

Then we show you that DDL, execute it, and then load the data from the CSV into the new table.

Which looks a LITTLE something like this –

The output keeps going…this is just the first bit showing me the load options I have going.

The most interesting thing of note here is this:

scan_rows 1000

That’s a LOAD command option telling SQLcl to look at the first 1,000 rows of my CSV to ‘measure’ the column widths and dates to build the DDL/INSERTs around.

If you have wider data past the scanned rows (say, past the first 50 or 100), you’ll get a lot of REJECTED INSERTs.

SET LOAD SCAN_ROWS 1000

Note that the higher you set this, the more resources you’ll burn reading the data and doing the number crunching.

So, I don’t need to do anything really, I can just toss my CSV at SQLcl, and let it put it into a table for me.
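In its simplest form that’s one command – here with placeholder names for my Untappd export:

SQL> set load scan_rows 1000
SQL> load untappd untappd_export.csv NEW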

Here’s a quick animation…

Create and Load the table, collect stats, INFO+, and a little reporting query of my new data to play with.

Or maybe I just want the proposed table DDL…

SHOW DDL, what’s that?

So go through the 1000 rows, and figure out the DDL, but just SHOW it to me, don’t execute it.
load newtable file.csv SHOW_DDL 
If you add the ‘NEW’ onto the command, it will actually execute the scenario.

The post SQLcl LOAD CSV, create new tables or just generate the DDL first appeared on ThatJeffSmith.

Database Tools Service Deep Dive: The SQL Worksheet


Remember that we launched a brand new OCI service for developers and database users last week? One of the features I briefly touched on was the SQL Worksheet.

You can find the official Docs on the DBTools Worksheet feature here.

It’s a huge stepping-stone for migrating the entirety of SQL Developer Web/Database Actions – as seen in your Autonomous Database Cloud Services or in your own ORDS deployments and configured databases – to an OCI Cloud Native application and set of pages.

While it may be a bit spartan in look and feel, it does offer quite a few features!

Let’s demo a few, in the form of Animated GIFs.

Opening a Session from an Existing Connection

By ‘Existing Connection’ I mean a connection resource you’ve defined. From there we can create an actual database connection and do something like…open a SQL Worksheet.

This is in real time…when the connection name pops up, that means it’s available for use.

Run Query, See Results in Grid

It can run anything that SQLcl or SQL Developer could run – thanks to ORDS and REST SQL.

Help with Object/Column Names

Insight works much like in the rest of our tools. If you let up on the keyboard long enough, we’ll automatically suggest help, or if you want to ask for it directly, use Ctrl+Spacebar.

It’s very sad, this table is completely empty.

Help with Code: That looks broken to me, Jeff.

Our tools have more than just an object look-up feature. They also have the entirety of the SQL/PL/SQL language spec burned into a parser. That means if I type something ‘bad,’ we can paint it on the screen as problematic.

So we can suggest data types, but also let me know when I’ve borked my VARCHAR2() definition.

Mousing over the ‘squiggle’ will give me the text our parser is expecting to see next – just like in SQLDev Web/Desktop.

Load a Script from the Object Store, and Run it

Navigate your object store bucket, find and load your file. Run it, see the results.

Each item in the output can be clicked to see its specific output/response. You’ll also notice that the worksheet has syntax highlighting, and a minimap on the right, so you can more easily scroll through your code and find what you’re looking for.

Query my new table, page the results.

We grab 100 rows at a time, and I’ve loaded 200 rows.

Please don’t judge me…harshly.

Just give me my Excel already.

Yes, yes, I know. You want your data back out in a file. We have CSV available today – and we’re working on getting you native Excel integration, so stay tuned on that topic.

Run query, click, Download, pick your format.

The very best Feature – select your database!

If you have multiple databases defined in the DBTools service via connections, you can easily stay ‘in page’, and simply switch over to your other database and run your queries/scripts.

Going from Jeff user on my 19c database over to ADMIN on my 21c instance.

One last thing – History

It’s stored locally in your browser –

This is nice in that you can see everything you’ve done over time – not just in this session – from THIS browser.

And you can browse those entries, pick one or more of them, move them up to the worksheet, and run them like you just typed them.

And that’s all folks, today, anyway.

1,000 Posts…

It’s been a good decade or so. I’ve had no shortage of material to share with everyone. Thanks for joining me on this wild ride!

The post Database Tools Service Deep Dive: The SQL Worksheet first appeared on ThatJeffSmith.
