
Oracle Data Loading Tips & Tricks Video and Slides


Slides

Most of the goodness is in the video, below.

Video

It’s 53 minutes long; I’ll be adding bookmarks for the different topics ASAP.

Don’t forget to like, subscribe, and click that notification bell…

11 Tips for Loading Data from Excel

I wrote this post not too long ago. But long enough ago that I completely forgot I wrote it. And then I put together these slides and video, and ended up saying many of the things I said back in 2021.

Read more here on data typing, date formats, preview windows, REST, & more.



Batch Loading CSV to Oracle Database, Again, via REST APIs


Do you know what’s sad? An empty table, that’s really, really sad. So in this blog post, I’ll demonstrate batch loading CSV to Oracle…using ORDS and REST APIs for AUTOREST enabled TABLEs.

I have structure, but no data.

Create your Database

If you already have a database with ORDS, you can skip to the next section. But, I’m going to give you everything you need to get started from ZERO.

Environment: Always Free Autonomous Transaction Processing (ATP) Cloud Oracle Database

This will only take about 5 minutes if you don’t already have one. And I know you folks love the YouTubes. So here’s a new channel from someone you may recognize, our former and future intern, Layla!

It’s less than a 4 minute video, go watch, like, and subscribe!

Now, after you’ve created your database, you’re going to want a fresh, new application schema. DO NOT USE ADMIN.

Create a new USER

Users are basically synonymous with schemas in Oracle. Think of a schema as the collection of objects owned by a USER. We’re going to create an EMPLOYEES table in a new schema, called HR.

Logged in as the ADMIN (or another DBA) user, we’re going to go to the Administration and Users section of SQL Developer Web.

Click on THIS box.

You can see your existing users, edit them (reset passwords!), and create new ones.
Click THIS BUTTON.

Ok, now this next part is VERY important. We’re going to create a new user, and we want to make sure they can do the stuff we want them to be able to do, but nothing more. This is also known as the principle of least privilege.

DO NOT FORGET TO GIVE YOUR NEW USER QUOTA ON THE DATA TABLESPACE.

Use a strong password, obviously.

DO NOT LEAVE THIS AS DEFAULT

You need the ‘Web Access’ option so you can REST enable the table. The EMPLOYEES table REST APIs will actually be executed as the HR user in the database.

Upper right corner, assign some QUOTA on the tablespace, which in Autonomous will almost ALWAYS be ‘DATA’. Our employees table only has like 100 rows in it, so ’25M’ is more than enough.

And that’s basically it, click ‘Create User.’

Now you could grant MORE roles, but we have ‘CONNECT’ and ‘RESOURCE’ selected out-of-the box.
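
For the script-inclined, here’s roughly the SQL that dialog runs under the covers (the password is a placeholder, obviously; pick your own):

CREATE USER HR IDENTIFIED BY "SuperSecret#1234";
GRANT CONNECT, RESOURCE TO HR;
ALTER USER HR QUOTA 25M ON DATA;

And the ‘Web Access’ switch REST enables the schema via the ORDS_ADMIN package:

BEGIN
    ORDS_ADMIN.ENABLE_SCHEMA(
        P_ENABLED => TRUE,
        P_SCHEMA  => 'HR'
    );
    COMMIT;
END;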

Once the user has been created, you can now login as that user. Do that now.

On the USERS page, you’ll see your new user, and there’s a handy link you can use to get directly to their login page.

Click that ‘go’ button.

Creating our TABLE, REST Enable it

I’m going to test my account, make sure I can actually create objects AND store data in my schema.

Check and check!

Ok, onto our actual table, EMPLOYEES.

TABLE: EMPLOYEES (sans foreign keys and triggers)

CREATE TABLE EMPLOYEES(
    EMPLOYEE_ID NUMBER(6,0), 
    FIRST_NAME VARCHAR2(20) , 
    LAST_NAME VARCHAR2(25)  CONSTRAINT EMP_LAST_NAME_NN NOT NULL ENABLE, 
    EMAIL VARCHAR2(25)  CONSTRAINT EMP_EMAIL_NN NOT NULL ENABLE, 
    PHONE_NUMBER VARCHAR2(20) , 
    HIRE_DATE DATE CONSTRAINT EMP_HIRE_DATE_NN NOT NULL ENABLE, 
    JOB_ID VARCHAR2(10)  CONSTRAINT EMP_JOB_NN NOT NULL ENABLE, 
    SALARY NUMBER(8,2), 
    COMMISSION_PCT NUMBER(2,2), 
    MANAGER_ID NUMBER(6,0), 
    DEPARTMENT_ID NUMBER(4,0)
);

You can copy/paste this text into your SQL worksheet, and hit ctrl+enter to execute the code immediately.

Refresh your table list –

My table is there, and I can REST Enable it!

Signed in as the object owner (HR), I can simply right-click and REST enable my table. Clicking ‘Ok’ would run this code (ORDS here is the ORDS PL/SQL package, the API for managing ORDS itself):

BEGIN
    ORDS.ENABLE_OBJECT(
        P_ENABLED        => TRUE,
        P_SCHEMA         => 'HR',
        P_OBJECT         => 'EMPLOYEES',
        P_OBJECT_TYPE    => 'TABLE',
        P_OBJECT_ALIAS   => 'employees',
        P_AUTO_REST_AUTH => TRUE
    );
    COMMIT;
END;

Note that I’m choosing NOT to alias the EMPLOYEES table on the URI. I however DO recommend you do just that. So switch P_OBJECT_ALIAS to something like ‘not_employees.’

On the REST Enable dialog, we also have the option to require AUTH. We want to ALWAYS say ‘Yes!’

Any request will need to be Authorized via this role or privilege.

Click ‘Enable!’

If that worked (it will!), you’ll see the table list refresh, with a little connection plug icon next to the table.

Right-click on the table, again.

I now have a cURL option! Click that!
We try to make this easy for you: you want ‘BATCH LOAD,’ and then you can pick your environment.

So, we’re good to go, we can load our table now, right?

ALMOST.

Our data (CSV)

We’re going to need some data, and so will you. I’ll share 🙂

"EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID"
100,"Steven","King","SKING","515.123.4567",18-06-1987,"AD_PRES",240000,,,90
101,"Neena","Kochhar","NKOCHHAR","515.123.4568",22-09-1989,"AD_VP",17000,,100,90
102,"Lex","De Haan","LDEHAAN","515.123.4569",14-01-1993,"AD_VP",17000,,100,90
103,"Alexander","Hunold","AHUNOLD","590.423.4567",03-01-1990,"IT_PROG",9000,,102,60
104,"Bruce","Ernst","BERNST","590.423.4568",20-05-1991,"IT_PROG",6000,,103,60
105,"David","Austin","DAUSTIN","590.423.4569",25-06-1997,"IT_PROG",4800,,103,60
106,"Valli","Pataballa","VPATABAL","590.423.4560",05-02-1998,"IT_PROG",4800,,103,60
107,"Diana","Lorentz","DLORENTZ","590.423.5567",07-02-1999,"IT_PROG",4200,,103,60
108,"Nancy","Greenberg","NGREENBE","515.124.4569",18-08-1994,"FI_MGR",12000,,101,100
109,"Daniel","Faviet","DFAVIET","515.124.4169",16-08-1994,"FI_ACCOUNT",9000,,108,100
110,"John","Chen","JCHEN","515.124.4269",28-09-1997,"FI_ACCOUNT",8200,,108,100
111,"Ismael","Sciarra","ISCIARRA","515.124.4369",30-09-1997,"FI_ACCOUNT",7700,,108,100
112,"Jose Manuel","Urman","JMURMAN","515.124.4469",07-03-1998,"FI_ACCOUNT",7800,,108,100
113,"Luis","Popp","LPOPP","515.124.4567",07-12-1999,"FI_ACCOUNT",6900,,108,100
114,"Den","Raphaely","DRAPHEAL","515.127.4561",08-12-1994,"PU_MAN",11000,,100,30
115,"Alexander","Khoo","AKHOO","515.127.4562",18-05-1995,"PU_CLERK",3100,,114,30
116,"Shelli","Baida","SBAIDA","515.127.4563",24-12-1997,"PU_CLERK",2900,,114,30
117,"Sigal","Tobias","STOBIAS","515.127.4564",24-07-1997,"PU_CLERK",2800,,114,30
118,"Guy","Himuro","GHIMURO","515.127.4565",31-10-1998,"PU_CLERK",2600,,114,30
119,"Karen","Colmenares","KCOLMENA","515.127.4566",10-08-1999,"PU_CLERK",2500,,114,30
120,"Matthew","Weiss","MWEISS","650.123.1234",18-07-1996,"ST_MAN",8000,,100,50
121,"Adam","Fripp","AFRIPP","650.123.2234",10-04-1997,"ST_MAN",8200,,100,50
122,"Payam","Kaufling","PKAUFLIN","650.123.3234",01-05-1995,"ST_MAN",7900,,100,50
123,"Shanta","Vollman","SVOLLMAN","650.123.4234",10-10-1997,"ST_MAN",6500,,100,50
124,"Kevin","Mourgos","KMOURGOS","650.123.5234",16-11-1999,"ST_MAN",5800,,100,50
125,"Julia","Nayer","JNAYER","650.124.1214",16-07-1997,"ST_CLERK",3200,,120,50
126,"Irene","Mikkilineni","IMIKKILI","650.124.1224",28-09-1998,"ST_CLERK",2700,,120,50
127,"James","Landry","JLANDRY","650.124.1334",14-01-1999,"ST_CLERK",2400,,120,50
128,"Steven","Markle","SMARKLE","650.124.1434",08-03-2000,"ST_CLERK",2200,,120,50
129,"Laura","Bissot","LBISSOT","650.124.5234",20-08-1997,"ST_CLERK",3300,,121,50
130,"Mozhe","Atkinson","MATKINSO","650.124.6234",30-10-1997,"ST_CLERK",2800,,121,50
131,"James","Marlow","JAMRLOW","650.124.7234",16-02-1997,"ST_CLERK",2500,,121,50
132,"TJ","Olson","TJOLSON","650.124.8234",10-04-1999,"ST_CLERK",2100,,121,50
133,"Jason","Mallin","JMALLIN","650.127.1934",14-06-1996,"ST_CLERK",3300,,122,50
134,"Michael","Rogers","MROGERS","650.127.1834",26-08-1998,"ST_CLERK",2900,,122,50
135,"Ki","Gee","KGEE","650.127.1734",12-12-1999,"ST_CLERK",2400,,122,50
136,"Hazel","Philtanker","HPHILTAN","650.127.1634",06-02-2000,"ST_CLERK",2200,,122,50
137,"Renske","Ladwig","RLADWIG","650.121.1234",14-07-1995,"ST_CLERK",3600,,123,50
138,"Stephen","Stiles","SSTILES","650.121.2034",26-10-1997,"ST_CLERK",3200,,123,50
139,"John","Seo","JSEO","650.121.2019",12-02-1998,"ST_CLERK",2700,,123,50
140,"Joshua","Patel","JPATEL","650.121.1834",06-04-1998,"ST_CLERK",2500,,123,50
141,"Trenna","Rajs","TRAJS","650.121.8009",17-10-1995,"ST_CLERK",3500,,124,50
142,"Curtis","Davies","CDAVIES","650.121.2994",29-01-1997,"ST_CLERK",3100,,124,50
143,"Randall","Matos","RMATOS","650.121.2874",15-03-1998,"ST_CLERK",2600,,124,50
144,"Peter","Vargas","PVARGAS","650.121.2004",09-07-1998,"ST_CLERK",2500,,124,50
145,"John","Russell","JRUSSEL","011.44.1344.429268",02-10-1996,"SA_MAN",14000,0.4,100,80
146,"Karen","Partners","KPARTNER","011.44.1344.467268",06-01-1997,"SA_MAN",13500,0.3,100,80
147,"Alberto","Errazuriz","AERRAZUR","011.44.1344.429278",11-03-1997,"SA_MAN",12000,0.3,100,80
148,"Gerald","Cambrault","GCAMBRAU","011.44.1344.619268",16-10-1999,"SA_MAN",11000,0.3,100,80
149,"Eleni","Zlotkey","EZLOTKEY","011.44.1344.429018",30-01-2000,"SA_MAN",10500,0.2,100,80
150,"Peter","Tucker","PTUCKER","011.44.1344.129268",30-01-1997,"SA_REP",10000,0.3,145,80
151,"David","Bernstein","DBERNSTE","011.44.1344.345268",24-03-1997,"SA_REP",9500,0.25,145,80
152,"Peter","Hall","PHALL","011.44.1344.478968",20-08-1997,"SA_REP",9000,0.25,145,80
153,"Christopher","Olsen","COLSEN","011.44.1344.498718",30-03-1998,"SA_REP",8000,0.2,145,80
154,"Nanette","Cambrault","NCAMBRAU","011.44.1344.987668",09-12-1998,"SA_REP",7500,0.2,145,80
155,"Oliver","Tuvault","OTUVAULT","011.44.1344.486508",23-11-1999,"SA_REP",7000,0.15,145,80
156,"Janette","King","JKING","011.44.1345.429268",30-01-1996,"SA_REP",10000,0.35,146,80
157,"Patrick","Sully","PSULLY","011.44.1345.929268",04-03-1996,"SA_REP",9500,0.35,146,80
158,"Allan","McEwen","AMCEWEN","011.44.1345.829268",01-08-1996,"SA_REP",9000,0.35,146,80
159,"Lindsey","Smith","LSMITH","011.44.1345.729268",10-03-1997,"SA_REP",8000,0.3,146,80
160,"Louise","Doran","LDORAN","011.44.1345.629268",15-12-1997,"SA_REP",7500,0.3,146,80
161,"Sarath","Sewall","SSEWALL","011.44.1345.529268",03-11-1998,"SA_REP",7000,0.25,146,80
162,"Clara","Vishney","CVISHNEY","011.44.1346.129268",12-11-1997,"SA_REP",10500,0.25,147,80
163,"Danielle","Greene","DGREENE","011.44.1346.229268",19-03-1999,"SA_REP",9500,0.15,147,80
164,"Mattea","Marvins","MMARVINS","011.44.1346.329268",24-01-2000,"SA_REP",7200,0.1,147,80
165,"David","Lee","DLEE","011.44.1346.529268",23-02-2000,"SA_REP",6800,0.1,147,80
166,"Sundar","Ande","SANDE","011.44.1346.629268",24-03-2000,"SA_REP",6400,0.1,147,80
167,"Amit","Banda","ABANDA","011.44.1346.729268",21-04-2000,"SA_REP",6200,0.1,147,80
168,"Lisa","Ozer","LOZER","011.44.1343.929268",12-03-1997,"SA_REP",11500,0.25,148,80
169,"Harrison","Bloom","HBLOOM","011.44.1343.829268",23-03-1998,"SA_REP",10000,0.2,148,80
170,"Tayler","Fox","TFOX","011.44.1343.729268",24-01-1998,"SA_REP",9600,0.2,148,80
171,"William","Smith","WSMITH","011.44.1343.629268",23-02-1999,"SA_REP",7400,0.15,148,80
172,"Elizabeth","Bates","EBATES","011.44.1343.529268",24-03-1999,"SA_REP",7300,0.15,148,80
173,"Sundita","Kumar","SKUMAR","011.44.1343.329268",21-04-2000,"SA_REP",6100,0.1,148,80
174,"Ellen","Abel","EABEL","011.44.1644.429267",12-05-1996,"SA_REP",11000,0.3,149,80
175,"Alyssa","Hutton","AHUTTON","011.44.1644.429266",19-03-1997,"SA_REP",8800,0.25,149,80
176,"Jonathon","Taylor","JTAYLOR","011.44.1644.429265",24-03-1998,"SA_REP",8600,0.2,149,80
177,"Jack","Livingston","JLIVINGS","011.44.1644.429264",23-04-1998,"SA_REP",8400,0.2,149,80
178,"Kimberely","Grant","KGRANT","011.44.1644.429263",24-05-1999,"SA_REP",7000,0.15,149,
179,"Charles","Johnson","CJOHNSON","011.44.1644.429262",04-01-2000,"SA_REP",6200,0.1,149,80
180,"Winston","Taylor","WTAYLOR","650.507.9876",24-01-1998,"SH_CLERK",3200,,120,50
181,"Jean","Fleaur","JFLEAUR","650.507.9877",23-02-1998,"SH_CLERK",3100,,120,50
182,"Martha","Sullivan","MSULLIVA","650.507.9878",21-06-1999,"SH_CLERK",2500,,120,50
183,"Girard","Geoni","GGEONI","650.507.9879",03-02-2000,"SH_CLERK",2800,,120,50
184,"Nandita","Sarchand","NSARCHAN","650.509.1876",27-01-1996,"SH_CLERK",4200,,121,50
185,"Alexis","Bull","ABULL","650.509.2876",20-02-1997,"SH_CLERK",4100,,121,50
186,"Julia","Dellinger","JDELLING","650.509.3876",24-06-1998,"SH_CLERK",3400,,121,50
187,"Anthony","Cabrio","ACABRIO","650.509.4876",07-02-1999,"SH_CLERK",3000,,121,50
188,"Kelly","Chung","KCHUNG","650.505.1876",14-06-1997,"SH_CLERK",3800,,122,50
189,"Jennifer","Dilly","JDILLY","650.505.2876",13-08-1997,"SH_CLERK",3600,,122,50
190,"Timothy","Gates","TGATES","650.505.3876",11-07-1998,"SH_CLERK",2900,,122,50
191,"Randall","Perkins","RPERKINS","650.505.4876",19-12-1999,"SH_CLERK",2500,,122,50
192,"Sarah","Bell","SBELL","650.501.1876",04-02-1996,"SH_CLERK",4000,,123,50
193,"Britney","Everett","BEVERETT","650.501.2876",03-03-1997,"SH_CLERK",3900,,123,50
194,"Samuel","McCain","SMCCAIN","650.501.3876",01-07-1998,"SH_CLERK",3200,,123,50
195,"Vance","Jones","VJONES","650.501.4876",17-03-1999,"SH_CLERK",2800,,123,50
196,"Alana","Walsh","AWALSH","650.507.9811",24-04-1998,"SH_CLERK",3100,,124,50
197,"Kevin","Feeney","KFEENEY","650.507.9822",23-05-1998,"SH_CLERK",3000,,124,50
198,"Donald","OConnell","DOCONNEL","650.507.9833",21-06-1999,"SH_CLERK",2600,,124,50
199,"Douglas","Grant","DGRANT","650.507.9844",13-01-2000,"SH_CLERK",2600,,124,50
200,"Jennifer","Whalen","JWHALEN","515.123.4444",17-09-1987,"AD_ASST",4400,,101,10
201,"Michael","Hartstein","MHARTSTE","515.123.5555",18-02-1996,"MK_MAN",13000,,100,20
202,"Pat","Fay","PFAY","603.123.6666",17-08-1997,"MK_REP",6000,,201,20
203,"Susan","Mavris","SMAVRIS","515.123.7777",07-06-1994,"HR_REP",6500,,101,40
204,"Hermann","Baer","HBAER","515.123.8888",07-06-1994,"PR_REP",10000,,101,70
205,"Shelley","Higgins","SHIGGINS","515.123.8080",08-06-1994,"AC_MGR",12000,,101,110
206,"William","Gietz","WGIETZ","515.123.8181",07-06-1994,"AC_ACCOUNT",8300,,205,110

A few important things: this needs to be CSV, with column headers. It’s easier if the strings are double-quoted. And our employees have a HIRE_DATE, defined as a DATE, so we’re going to need a DATE FORMAT we can use to tell ORDS what to expect when it sees those values. More on that in a few paragraphs.

You CAN deviate from those ‘rules,’ but if you do, you’ll need to use some of the optional parameters when calling the POST API on our EMPLOYEES table.

Are we ready to try?

Authentication & Authorization

We’re going to try to access the API now, but it’s not going to work. I want you to see what happens when we’re authenticated but not authorized.

Instead of using cURL (I hate it), let’s try a REST API client/GUI like Insomnia or Postman.

I’ve talked here before about how we should be using the OAuth2 workflow to authenticate to our REST APIs vs using Database Authentication in Autonomous. But…we’ll keep it simpler here in this use case. And by simple, I mean Basic Auth.

Using our HR username and password on the POST request, we get something perhaps a bit unexpected?

Whut, 401?

We’re authenticated, but NOT authorized. Our session doesn’t have the privilege required to access the REST API we published on our table. Remember we clicked the ‘protect’ switch when enabling the table, and it showed us a ROLE and PRIV? We need to grant the privilege to the ‘SQL Developer’ role if we want to use database authentication.

To remedy that, head to the REST panel, then click on ‘Security’ and ‘Privileges.’

Find your privilege and click ‘the kebab button’ on the upper right-hand corner of the card, and select Edit.

Go ahead and give our Privilege a label and description.

The important part is up there where it says ‘Roles.’

We’re going to move ‘SQL Developer’ over to the Roles list for this privilege, and hit ‘Save.’

The ‘SQL Developer’ role is inherited by any authenticated request that uses a database username and password. So once we have that, we won’t get the HTTP 401 (DENIED!) error.

Loading CSV to Oracle via REST. Look ma, no code!

You can basically just hit the ‘go’ button again now, but not quite. Remember we were talking about the DATE formats? Yeah, we need to account for that.

We’re going to tell ORDS what our date format is in the CSV text file by using a parameter in the POST URI.

Bookmark these Docs

The parameter we want to use here is ‘dateFormat’ –

There’s a whole bunch we COULD use, but the one we MUST use with DATES is dateFormat.

So let’s go look at our data and our REST request.

Our date format is pretty simple, it’s just DD-MM-YYYY.
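
And if you do want to stick with cURL after all, the request is simply a POST of the CSV file to the table’s batchload endpoint, with dateFormat appended to the URI. Something like this (the hostname and password are placeholders for yours):

curl -X POST \
  -u HR:YourPasswordHere \
  -H "Content-Type: text/csv" \
  --data-binary @employees.csv \
  "https://your-instance.adb.us-ashburn-1.oraclecloudapps.com/ords/hr/employees/batchload?dateFormat=DD-MM-YYYY"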

Hitting ‘Send,’ and we get magic!

There, we had 107 rows processed, and all loaded without errors.

And I can test this by querying my table, again.

That run above says ‘1.91’ seconds, but I’ve just tried it again and got 548ms. But remember, it can be even faster! Like, loading 10,000,000 records in less than 28 seconds. And I’ll show you how to do it with cURL, even.


Hours wasted watching Grey’s Anatomy? Finding out with SQL!


I have kids. Our family has had a family Netflix account since 2017. My daughter decided to jump into 18 Seasons of Grey’s Anatomy last year, and has been binging it ever since.

So when I put together a post last week talking about using SQL to analyze Spotify streaming data, my brain quietly whispered to itself, I bet Netflix data would be more fun.

So let’s do it. Let’s move this data into a database, and use SQL to figure out how much time she’s invested in Dr. Meredith Grey, and her friends.

More fun examples, using your own data to learn SQL, build REST APIs, etc.

Untappd || iTunes || Twitter || Strava || Netflix || Spotify


Step 0: Download your Netflix data.

You request it from your online account. Do that in your browser, not on your Netflix app.

If you know where to look, it’s easy to find.

You’ll have to confirm the request. They say it will take up to 30 days. It took only 1 day for me.

Then you’ll get an email saying it’s ready. You’ll follow the link, provide your password (again), and download a Zip file.

They provide A LOT of your personal data. From what I can tell, ALL of it. Kudos, Netflix!

We’re going to import that ‘ViewingActivity.csv’ file as a new table in an Oracle Database, so you’ll need one of those. I’ve talked about that before, a lot.

If you don’t have SQL Developer Web, but DO have an Oracle Database and a copy of Oracle SQL Developer, I’ve shown how to do this same table import here.

Step 1: Import your data

I’m going to use my Always Free Oracle Autonomous Database, and the Database Actions (SQL Developer Web!) interface in my browser to do it.

This will open a wizard, and on the 2nd page, you can define the columns. I’ve left the data type defaults, but have changed the precision of the text fields.

We’re going to treat these ‘duration’ bits of data as INTERVAL types, later.

Note that the wizard has recognized that ‘START_TIME’ is a date, and it’s recommending we bring it in as a TIMESTAMP, and has even recognized the proper format mask to read those values in.

We can click to the end of the wizard, and hit ‘Finish.’

When it’s done processing my 2.5MB of data, I can see it’s imported more than 14,000 rows. Is that sad? I don’t know, the four of us watch a TON of online content.

I simply added this data as a table called ‘NETFLIX’ –

It’s the entire history, not just the last 365 days that Spotify gave me for my request.

Step 2: Start doing SQL

Again, kudos to Netflix for including a ‘data dictionary’ in their personal data package. It’s quite nice –

I’ve highlighted the bits of data I’m going to need in my SQL to answer my ‘question.’
  • Profile – my daughter
  • Duration – how long did she spend watching it
  • Title – anything Grey’s Anatomy
  • Supplemental Video Type – just episodes, not teasers, trailers, bonus content, etc

Starting Simpler, what have I seen recently?

I know MY data, so let’s just make sure this CSV dump is accurate. So I’m going to ask what I’ve watched recently.

Yup, this sounds about right.
SELECT
    START_TIME,
    DURATION,
    TITLE
FROM
    NETFLIX
WHERE
        PROFILE_NAME = 'Jefferson'
    AND SUPPLEMENTAL_VIDEO_TYPE IS NULL
ORDER BY
    START_TIME DESC;

DURATION was brought in as a VARCHAR2, or a string. So doing MATHs on this column later will get a bit tricksy. However, the results seem ‘right’ to me. I gave up on re-watching the original Fletch about 30 minutes into it. I just wasn’t feeling it. So yeah, it looks good.

Let’s find my daughter’s ‘raw’ data for Grey’s

This is already really, really fun data. Disclaimer: this is totally a coincidence.
SELECT 
      START_TIME,
      DURATION,
      TITLE
FROM
    NETFLIX
WHERE
        PROFILE_NAME = 'Daughter'
    AND SUPPLEMENTAL_VIDEO_TYPE IS NULL
    AND TITLE LIKE 'Grey''s%';

The only tricky thing here is the TITLE predicate clause. Grey’s Anatomy has an apostrophe, and that’s also the character we use to enclose strings in Oracle SQL. So the ‘trick’ is you have to escape it. One way is to just use an extra quote.

Tip: Perhaps a better way to deal with this is the built-in alternative quoting mechanism, q'[…]'.
Docs Link.
LiveLabs Tutorial.

This is very handy, especially when you have a mix of single and double quotes.
SELECT 
      START_TIME,
      DURATION,
      TITLE
FROM
    NETFLIX
WHERE
        PROFILE_NAME = 'Daughter'
    AND SUPPLEMENTAL_VIDEO_TYPE IS NULL
    AND TITLE LIKE q'[Grey's Anatomy%]';

OK, now how do I sum up the duration?

That string is representing what Oracle basically considers an INTERVAL DAY TO SECOND piece of data. Except in this case it’s only HOURS:MINUTES:SECONDS.

So what I’m going to do is create a VIRTUAL COLUMN that uses this data to create an actual INTERVAL interpretation of this data.

Let’s walk and chew some gum…add the VIRTUAL COLUMN and query the ‘new’ data.

We’re not physically storing new information here. VIRTUAL COLUMNs in an Oracle table allow for that data to be derived from other values in the row.

In this case we have

DURATION 0:13:3

and we want

DURATION 00 0:13:3

ALTER TABLE NETFLIX ADD 
    ( 
     DURATION_INTERVAL INTERVAL DAY TO SECOND
        generated always AS
          (
              TO_DSINTERVAL('00 ' || duration)
          ) virtual
    );

The TYPE of the new VIRTUAL COLUMN is ‘INTERVAL DAY TO SECOND.’ And the way it’s being derived is taking the value of the DURATION column and prefixing it with '00 ' (zero days).

With that string computed, I’m sending it as input to the TO_DSINTERVAL function, which returns an INTERVAL value.
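
You can sanity check the derivation with a quick one-off query; the output below is what I’d expect, give or take the display formatting:

SELECT TO_DSINTERVAL('00 ' || '0:13:3') AS DURATION_INTERVAL
  FROM DUAL;

-- +00 00:13:03.000000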

Ok, so now I can just do a SUM on DURATION_INTERVAL, right?

Well, not quite. Unfortunately the database doesn’t give us a SUM() function for the INTERVAL type. So we need to do something else.

From the INTERVAL value, we’re going to extract the total number of minutes. And from that value, we’re going to total that up, and divide by ’60’ to get the total number of hours.

So, ~ 200 hours, 20 minutes.

I asked my wife, does that sound about right? And yes, about 8 straight days of binging over the last 18 months sounded about right.

Ok, let’s look at that code, especially the SELECT

-- Hours spent watching Grey's Anatomy
SELECT
    SUM(EXTRACT(MINUTE FROM duration_interval))/60 time_wasted
    --title, duration_interval
FROM
    NETFLIX
WHERE
        PROFILE_NAME = 'Daughter'
    AND SUPPLEMENTAL_VIDEO_TYPE IS NULL
    AND TITLE LIKE 'Grey''s%';

It’s always easier to break the nested function calls into the various pieces, so let’s do that.

  • extract(minute from duration_interval) – pull out the minutes portion of time
  • sum(extract…) – add this number up
  • /60 – take the sum and divide to get ‘hours’

Wait a second, those pesky seconds really add up, right?

Now, simply writing this post and the description has forced me into a different level of thinking. What about the SECONDS? Over 666 viewings, surely the fractional number of minutes will be significant?

Let’s find out.
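
That query only shows up below as a screenshot, so here’s roughly what a seconds-aware version looks like:

SELECT
    SUM( EXTRACT(MINUTE FROM duration_interval)
       + EXTRACT(SECOND FROM duration_interval) / 60 ) / 60 time_wasted
FROM
    NETFLIX
WHERE
        PROFILE_NAME = 'Daughter'
    AND SUPPLEMENTAL_VIDEO_TYPE IS NULL
    AND TITLE LIKE 'Grey''s%';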

Crap, that’s an EXTRA 5 hours.

So if I ignored the seconds, I’d be off ~2.5% in my answer. Is that significant to worry about? Probably not. But you need to KNOW what you’re asking, when you’re writing the SQL. Assumptions DIG HUGE HOLES that you might have to crawl out of later.

So to answer my original question, my daughter has spent ~ 205 hours watching Grey’s Anatomy since May 22nd, 2022.

Kudos to my fellow product managers here on the Database team at Oracle. They helped me a bit with this SQL, especially Michelle with the EXTRACT() function.

Connor reminds me that this gets much easier if you take advantage of SQL Macros (introduced in 21c) and even EASIER once 23c is released.

Disclaimer: I’ve probably spent at least 50 hours watching this show with her. Enough to know the writing got EXTREMELY lazy when it came to killing off the main cast of characters.

Disclaimer 2: if you’re going to leave a nasty comment about my kid’s taste in pop culture entertainment or my parenting skills, just don’t. Feel free to bash my data modeling and SQL skills, as always.

DDL for my TABLE

CREATE TABLE NETFLIX
   (
    PROFILE_NAME VARCHAR2(10), 
    START_TIME TIMESTAMP (6), 
    DURATION VARCHAR2(20), 
    ATTRIBUTES VARCHAR2(256), 
    TITLE VARCHAR2(256), 
    SUPPLEMENTAL_VIDEO_TYPE VARCHAR2(256), 
    DEVICE_TYPE VARCHAR2(256), 
    BOOKMARK VARCHAR2(20), 
    LATEST_BOOKMARK VARCHAR2(20), 
    COUNTRY VARCHAR2(256), 
    DURATION_INTERVAL INTERVAL DAY (2) TO SECOND (6) GENERATED ALWAYS AS (TO_DSINTERVAL('00 '||"DURATION")) VIRTUAL 
   ) ;

Working with Oracle SQL and Temporal Data, The Movie

Watch this, it’s great. Chris is our SQL evangelist, and he’s been working with Connor to answer these types of questions for years.

Pure slice of gold.


SQL Developer Web: Cleaning up data loading error logs

What are these, and how do I get rid of them?

I spend a lot of time loading data to my Oracle Database. I’m continuously loading different types of CSV, Excel, JSON, Avro files to make sure folks have a good experience when they’re using our tools.

But this post is more about the janitorial work one does AFTER your data loading tasks have been completed.

When loading data to your tables, we take advantage of a database feature provided via DBMS_ERRLOG (Docs). This allows us to LOG any rows that fail to be inserted into your tables. The error log tables all start with SDW$ERR$_.

You’ll have one of these logging tables for each table that’s had data imported using SQL Developer Web.
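
If you’re curious what that looks like in PL/SQL terms, it’s roughly this (the table name here is made up for illustration):

BEGIN
    DBMS_ERRLOG.CREATE_ERROR_LOG(
        DML_TABLE_NAME     => 'ALBUMS',
        ERR_LOG_TABLE_NAME => 'SDW$ERR$_ALBUMS'
    );
END;

The INSERTs then run with something like a LOG ERRORS INTO SDW$ERR$_ALBUMS REJECT LIMIT UNLIMITED clause, so the load keeps going past any bad rows.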

Let’s import some data

Need some interesting data to play with? Maybe you should try your own! I’ve talked about this a lot, but otherwise I’ll assume you have no shortage of CSV and Excel files laying around that you might want to put SQL over.

More fun examples, using your own data to learn SQL, build REST APIs, etc.

Untappd || iTunes || Twitter || Strava || Netflix || Spotify


In the right-hand corner of the Worksheet, you’ll see the ‘Data Load’ button.

I’ve added 4 or 5 files and hit the ‘Run All’ button.

Once it’s finished, I can see some new tables! Let’s go browse one.

Woohoo, my music is here.

If I go to query one of the SDW$ERR$_ tables, I can see there weren’t any rows that failed to insert.

I’m happy about this.

Ok, my data is imported, I don’t need these logging tables anymore.

Let’s filter our list of tables, and I guess start dropping them.

No need to type these out!

Yes, I could write a script, or even create a JOB to drop these tables on a regular basis, or I could just ‘click the button.’
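
If you did want to script it, something like this would generate the DROP statements for you:

SELECT 'DROP TABLE "' || table_name || '" PURGE;'
  FROM user_tables
 WHERE table_name LIKE 'SDW$ERR$%';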

Using the Data Loading dialog to drop our error logging tables

Let’s re-open the data loading dialog, this time from the table browser.

There’s a ‘History’ item.

Yeah, that’s what we want, the History.

Clicking into that brings me into this screen, pay attention to the toolbar, there’s a ‘trashcan’ button.

This does what we want.

Clicking that button we get a warning –

Yeah, yeah, yeah – nuke them already.

The action can’t be reverted because not only do we DROP the tables, we drop them with the PURGE keyword. That means they won’t be available for recovery from the Recycle Bin.

My history is gone, and so are all of my accompanying SDW$ error tables.

This button saved me a lot of typing. I figure you might be loading data too on a regular basis, and if you want to clean up your system, this will come in handy!


REPEAT after me, it’s easy to monitor with Oracle SQLcl!


I’ve talked about the REPEAT command before, in fact it was one of my earlier posts on SQLcl, all the way back in 2015!

But I’ve found it’s necessary to tell a story multiple times, in multiple ways in order to really get the word out. And this is a HANDY feature.

What it does

REPEAT X Y 

Whatever the last thing (a SQL statement, or even a SQL script!) you executed…it executes again, X number of times, with a Y second delay.

The maximum delay (Y) is 120 seconds.

And in between executions, it refreshes the screen. Which makes it work quite nicely as a custom ‘monitor.’
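
For example, to keep an eye on a row count as a load runs, execute the count once, then ask SQLcl to repeat it 30 times with a 2 second pause between runs (the table name here is just an example):

SELECT COUNT(*) FROM my_table;

REPEAT 30 2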

Now let’s put this in terms that are interesting to YOU.

What it is, that you do here

You load data. All the time. And you like to keep an eye on how that’s going. So let’s set up a scenario where we’re going to load some CSV to an Oracle table.

And while that data load is going, I want a ‘monitor’ on my CLI to show me what’s happening.

And the cool part is, we can use the REPEAT command to run both of those things.

1. The Data Loader

We need a table.

CREATE TABLE HR.BANK_TRANSFERS
 (
  TXN_ID NUMBER(4),
  SRC_ACCT_ID NUMBER(5),
  DST_ACCT_ID NUMBER(5),
  DESCRIPTION VARCHAR2(26),
  AMOUNT NUMBER(6)
 )
;

We need some data.

TXN_ID,SRC_ACCT_ID,DST_ACCT_ID,DESCRIPTION,AMOUNT
1,171,831,transfer,8948
2,172,305,transfer,6784
3,172,292,transfer,1006
4,172,294,transfer,6342
...

2. The “Monitor”

I have a script, “repeat-multiple.sql” that looks like this –

WITH metrics AS (
   SELECT statistic# AS id
        , name
        , VALUE
     FROM v$sysstat
    WHERE class = 1
      AND ( name LIKE 'user %' )
)
SELECT STAT_6.value AS USER_COMMITS
     , STAT_7.value AS USER_ROLLBACKS
     , STAT_8.value AS USER_CALLS
     , STAT_12.value AS CUMULATIVE_USER_LOGONS
     , STAT_13.value AS CUMULATIVE_USER_LOGOUTS
     , STAT_26.value AS USER_IO_WAIT_TIME
  FROM (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user commits'
) STAT_6
     , (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user rollbacks'
) STAT_7
     , (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user calls'
) STAT_8
     , (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user logons cumulative'
) STAT_12
     , (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user logouts cumulative'
) STAT_13
     , (
   SELECT MIN(VALUE) AS VALUE
     FROM metrics
    WHERE name = 'user I/O wait time'
) STAT_26;
 
 
SELECT systimestamp;
 
SELECT COUNT(*) NUM_BANK_TRANSFERS FROM bank_transfers;

Show me some session stats, show me what time it is, and show me how many rows are in my BANK_TRANSFERS table.

Let’s go!

Terminal one…run the load command. Not once, but 15x, with a 1 second delay.

You DO KNOW about the LOAD command, right?

I simply run this once

“load bank_transfers BANK_TRANSFERS.csv”

And then I follow that with the repeat command, “repeat 15 1”

5001 rows for each batch. 75,015 rows loaded in total – thanks Chris for the maths help.

Terminal two, watching what’s happening over in terminal one.

I run the @script and then follow that with another “repeat 10 1”

Look at it go!

What did we learn today?

The REPEAT and LOAD commands are quite handy, AND can be used together! Also, if I go more than a few days without blogging, I start to get itchy. I should probably see a doctor about that.


On relative paths with SQLcl and Liquibase


This question popped up twice in the last few days, and that’s always a prompt for me to put together a blog post AND sort out our product docs. Here is that blog post.

But first, the question –

Can we use relative paths in our changeSets?

Customers A & B

Yes! Let’s look at an example.

In my scenario I’m running SQLcl, interactively, and I’m going to process 3 changeSets that will:

  1. create a table, CSVS
  2. load said table with the SQLcl LOAD command
  3. print the current working directory from inside the SQLcl script runner

Our controller changeLog will use relative paths.

The changeSet in step 2 will use a relative path for the location of the CSV file.

My SQLcl runtime CWD will be the rel-paths-load directory, home of the controller.xml

So I can simply run –

lb update -changelog-file controller.xml

The output

That directory listing/path becomes important in a few paragraphs.

We can also see feedback in there like

'Running Changeset: dirA/table-load.xml'

That’s the relative file path as referenced in the controller; the changeSet lives in a subdirectory under the controller, in dirA.
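
For reference, a controller using relative paths is just a changeLog of includes. Mine looks roughly like this (the first file name is an assumption; the second is the one from the output above):

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <include file="dirA/table-create.xml"/>
    <include file="dirA/table-load.xml"/>
</databaseChangeLog>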

The table-load changeSet, using SQLcl’s LOAD command

I’ve talked about this previously, full example is here.

But let’s do a fresh take. In the previous example, I used the ‘cd’ command in my changeSet to tell SQLcl where to look, but maybe you want to use relative paths in SQLcl’s script engine as well.

<changeSet id="8002" author="thatJeffSmith" failOnError="false"   runAlways="true" >
    <n0:runOracleScript objectName="load-tble-1" objectType="SCRIPT" ownerName="HR" sourceType="STRING" replaceIfExists="false">
      <n0:source><![CDATA[
       load csvs ..\..\..\..\..\users\jdsmith\desktop\lb\rel-paths-load\dirA\dirB\csvs.csv
       commit;
      ]]></n0:source>
    </n0:runOracleScript>
</changeSet>

The scriptrunner process that kicks off to handle the runOracleScript changeSet adopts the current working directory from where SQLcl was kicked off. I’m starting SQLcl from where it’s installed,

c:\sqlcl\23.3\sqlcl\bin

So when I want to reference the location of my CSV file in dirB, the relative path gets a bit fun. For debugging/testing, I created another changeSet that simply printed for me the CWD from inside the scriptrunner –

<changeSet id="8003" author="thatJeffSmith" failOnError="false"   runAlways="true" >
    <n0:runOracleScript objectName="create-tble-1" objectType="SCRIPT" ownerName="HR" sourceType="STRING" replaceIfExists="false">
      <n0:source><![CDATA[
          !dir
      ]]></n0:source>
    </n0:runOracleScript>
</changeSet>

Yeah, I’m just doing a !dir to see where I’m at (in Windows CMD speak). And here’s a callback from the output I showed at the beginning, seeing where the SQLcl scriptrunner path is set to…

Hence the ../../../../ to get back to C:\

Don’t forget the -search-path option for the lb update command

Instead of starting off with a ‘cd C:\Users\JDSMITH\Desktop\lb\rel-paths-load’ in my interactive SQLcl shell session, I can use the -search-path directive (Liquibase Docs) to tell Liquibase where to look for files.

lb update -search-path C:\…\Desktop\lb\rel-paths-load -changelog-file controller.xml

23.4 Sneak Peek

There can be lots of typing when you’re working in interactive mode. We’re in the process of setting up tab-completion for all of the commands in SQLcl, and 23.4 will have it ready for the liquibase (lb) commands.

We’re still working on this, sorry for the YELLING, we’ll get that sorted for the release.

The 23.4 update also includes more than 15 bug fixes for the Liquibase feature, and we will have that ready in time for the Winter holiday break.


Oracle SQL Developer Extension for VS Code 24.1.1 is here!


Go get it, now.

The changeLog is interesting, even for a ‘dot 1’ release. We have a few nice new features for you!

Export to Clipboard, vs file

On the destination, simply toggle from File to Clipboard, and you’re good to go.

Browsing a package spec, but want the body?

Just right-click!

Dedicated Connections per Worksheet

If you open 5 SQL worksheets on a database connection, you’ll end up with 5 separate connections to the database. This means you can have multiple, long running tasks going concurrently.

It also means you can be much more productive, or consume many more resources than you could before. We suggest closing worksheets when you’re done doing work, to free up the server resources. This isn’t new advice; we offer the same for our other tools like SQL Developer.

To disable this feature and return to the pre-24.1.1 behavior, switch ‘Sessions per attached worksheet’ to ‘off.’

There’s more, much more…

Thanks again to all the community users who shared their feedback. Many of these items came directly from y’alls.

Like this bug fix…

“Duplicated table aliases showing in autocomplete”

previously the X. got duplicated to x.x.commission_pct – 🤮

50k more thank you’s

We got to 50,000+ installs! Now let’s see how fast we can get to 100,000 and 1,000,000!

