Interactive Demo

Demonstration of Oracle SQL-compatible functions and syntax

BigAnimal lets you run Oracle SQL queries in the cloud via EDB Postgres Advanced Server. This topic demonstrates two Oracle SQL-syntax queries running unmodified on a BigAnimal test cluster, populated with the Chinook sample database.

Watch the video, or load up psql and follow along below!

Connecting to the demo cluster with psql

You can use any recent version of psql to connect to EDB Postgres Advanced Server. If you choose to use the version that ships with Advanced Server, you'll get a few nice SQL*Plus compatibility features (with even more available in EDB*Plus). The queries and commands we'll examine here work the same in either version of psql. For convenience, these examples use the version of psql available in the EDB Postgres Advanced Server container image used by Cloud Native PostgreSQL (and internally by BigAnimal). You can follow along by installing Docker and running:

docker pull quay.io/enterprisedb/edb-postgres-advanced
docker run --rm -it quay.io/enterprisedb/edb-postgres-advanced /bin/bash
Output
[postgres@0fe3c244563c /]$ 
Note

If you prefer a graphical tool for executing Oracle syntax-compatible queries or running Oracle PL/SQL-compatible code, we recommend pgAdmin.

The connection string for this demo's Advanced Server cluster looks like this:

postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require

In case you're unfamiliar with PostgreSQL connection URIs, let's break that down:

  • demo is the user role we're connecting as. This is a user set up with select privileges on the database.
  • password is the password for this user.
    Passwords in connection strings.

    This example illustrates a complete connection URL, including the password. This is fine for a demonstration, and may also be acceptable for application configuration if access to the configuration is limited. Avoid this practice for admin, superuser, or other roles used interactively - psql will prompt for a password if none is supplied.

  • p-vmpc7c40fm.pg.biganimal.io is the host name for the Advanced Server cluster on BigAnimal that we're connecting to.
  • 5432 is the usual PostgreSQL port number.
  • chinook is the name of the database.
  • sslmode=require ensures that we establish a secure connection.

With that in hand, we can launch psql:

psql postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require
Output
psql (14.2.1, server 13.6.10 (Debian 13.6.10-1+deb10))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

chinook=>

Let's take a look at the schema:

\dt
Output
List of relations
 Schema |     Name      | Type  |   Owner   
--------+---------------+-------+-----------
 public | album         | table | edb_admin
 public | artist        | table | edb_admin
 public | customer      | table | edb_admin
 public | employee      | table | edb_admin
 public | genre         | table | edb_admin
 public | invoice       | table | edb_admin
 public | invoiceline   | table | edb_admin
 public | mediatype     | table | edb_admin
 public | playlist      | table | edb_admin
 public | playlisttrack | table | edb_admin
 public | track         | table | edb_admin
(11 rows)

There's an employee table; let's examine its definition:

\d+ employee
Output
Table "public.employee"
   Column   |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description 
------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 employeeid | numeric                     |           | not null |         | main     |              | 
 lastname   | character varying(20)       |           | not null |         | extended |              | 
 firstname  | character varying(20)       |           | not null |         | extended |              | 
 title      | character varying(30)       |           |          |         | extended |              | 
 reportsto  | numeric                     |           |          |         | main     |              | 
 birthdate  | timestamp without time zone |           |          |         | plain    |              | 
 hiredate   | timestamp without time zone |           |          |         | plain    |              | 
 address    | character varying(70)       |           |          |         | extended |              | 
 city       | character varying(40)       |           |          |         | extended |              | 
 state      | character varying(40)       |           |          |         | extended |              | 
 country    | character varying(40)       |           |          |         | extended |              | 
 postalcode | character varying(10)       |           |          |         | extended |              | 
 phone      | character varying(24)       |           |          |         | extended |              | 
 fax        | character varying(24)       |           |          |         | extended |              | 
 email      | character varying(60)       |           |          |         | extended |              | 
Indexes:
    "pk_employee" PRIMARY KEY, btree (employeeid)
Foreign-key constraints:
    "fk_employeereportsto" FOREIGN KEY (reportsto) REFERENCES employee(employeeid)
Referenced by:
    TABLE "customer" CONSTRAINT "fk_customersupportrepid" FOREIGN KEY (supportrepid) REFERENCES employee(employeeid)
    TABLE "employee" CONSTRAINT "fk_employeereportsto" FOREIGN KEY (reportsto) REFERENCES employee(employeeid)
Access method: heap

This table has a "reportsto" field - that means this is a hierarchical reporting structure, with some employees reporting to other employees who may in turn report to still other employees.
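
By the way, if you'd like to see those raw reporting relationships before querying them hierarchically, a simple query like this one (not part of the demo itself) shows each employee alongside the employeeid they report to:

-- Peek at the raw hierarchy data the demo queries will traverse
SELECT employeeid, lastname, reportsto
FROM employee
ORDER BY employeeid;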

Demo #1: exposing an organization hierarchy with CONNECT BY

Let's construct a hierarchical query to expose this chain of command.

Modern SQL would use a recursive CTE for this, as those are widely supported. But Oracle has, for decades, supported an alternative mechanism for querying hierarchies in the form of CONNECT BY - let's put that into action:

SELECT firstname, lastname, (
	SELECT LISTAGG(lastname, ', ')
	FROM employee rt
	START WITH rt.employeeid=e.reportsto
	CONNECT BY employeeid = PRIOR reportsto
	) AS "chain of command"
FROM employee e;
Output
firstname | lastname | chain of command 
-----------+----------+------------------
 Andrew    | Adams    | 
 Nancy     | Edwards  | Adams
 Jane      | Peacock  | Edwards, Adams
 Margaret  | Park     | Edwards, Adams
 Steve     | Johnson  | Edwards, Adams
 Michael   | Mitchell | Adams
 Robert    | King     | Mitchell, Adams
 Laura     | Callahan | Mitchell, Adams
(8 rows)

Here, we use CONNECT BY and the LISTAGG function in a subquery to generate the chain of command for each employee: who they report to, who that person reports to, etc.
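
As noted above, modern SQL would typically express this with a recursive CTE. For comparison, here's a sketch of how the same chain-of-command report might look in that style, using plain PostgreSQL syntax (an illustration, not a tested part of this demo):

-- A recursive CTE sketch of the same chain-of-command report.
-- The non-recursive term starts each chain at an employee's direct manager;
-- the recursive term climbs one level up the reporting hierarchy per step.
WITH RECURSIVE chain AS (
	SELECT e.employeeid AS origin, e.reportsto AS manager, 1 AS depth
	FROM employee e
	UNION ALL
	SELECT c.origin, m.reportsto, c.depth + 1
	FROM chain c
	JOIN employee m ON m.employeeid = c.manager
)
SELECT e.firstname, e.lastname,
	string_agg(m.lastname, ', ' ORDER BY c.depth) AS "chain of command"
FROM employee e
LEFT JOIN chain c ON c.origin = e.employeeid
LEFT JOIN employee m ON m.employeeid = c.manager
GROUP BY e.employeeid, e.firstname, e.lastname
ORDER BY e.employeeid;

On this data set it should return the same eight rows, with each chain listed from direct manager upward.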

Now, the LISTAGG() function was introduced in Oracle 11g Release 2, and few other database systems support it. PostgreSQL does support string_agg(), and in the previous example it could be used as a drop-in replacement...

SELECT firstname, lastname, (
	SELECT string_agg(lastname, ', ')
	FROM employee rt
	START WITH rt.employeeid=e.reportsto
	CONNECT BY employeeid = PRIOR reportsto
	) AS "chain of command"
FROM employee e;
Output
firstname | lastname | chain of command
-----------+----------+------------------
 Andrew    | Adams    |
 Nancy     | Edwards  | Adams
 Jane      | Peacock  | Edwards, Adams
 Margaret  | Park     | Edwards, Adams
 Steve     | Johnson  | Edwards, Adams
 Michael   | Mitchell | Adams
 Robert    | King     | Mitchell, Adams
 Laura     | Callahan | Mitchell, Adams
(8 rows)

But the semantics of the two functions diverge for anything beyond trivial uses, specifically when the WITHIN GROUP grouping construct is involved.

Let's demonstrate that.

Demo #2: group concatenation with LISTAGG

As we saw above, this database has "album" and "track" tables containing metadata on digital recordings. We can use some analytic functions, including LISTAGG, to put together a report on average track storage requirements for albums with "baby" in the title.

SELECT UNIQUE title,
       ROUND(AVG(bytes) OVER (PARTITION BY mediatypeid)/1048576 ) media_avg_mb,
       LISTAGG(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10))
         WITHIN GROUP (ORDER BY trackid)
         OVER (PARTITION BY title)  track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
ORDER BY title;
Output
title     | media_avg_mb |                    track_list
--------------+--------------+---------------------------------------------------
 Achtung Baby |            9 | Zoo Station (9 mb)                               +
              |              | Even Better Than The Real Thing (7 mb)           +
              |              | One (9 mb)                                       +
              |              | Until The End Of The World (9 mb)                +
              |              | Who's Gonna Ride Your Wild Horses (10 mb)        +
              |              | So Cruel (11 mb)                                 +
              |              | The Fly (8 mb)                                   +
              |              | Mysterious Ways (8 mb)                           +
              |              | Tryin' To Throw Your Arms Around The World (7 mb)+
              |              | Ultraviolet (Light My Way) (10 mb)               +
              |              | Acrobat (8 mb)                                   +
              |              | Love Is Blindness (8 mb)
(1 row)

If we try replacing LISTAGG with string_agg in this example, it's going to fail - the expression syntax for string_agg is different.

SELECT UNIQUE title,
       ROUND(AVG(bytes) OVER (PARTITION BY mediatypeid)/1048576 ) media_avg_mb,
       string_agg(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10))
         WITHIN GROUP (ORDER BY trackid)
         OVER (PARTITION BY title)  track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
ORDER BY title;
Output
ERROR:  function string_agg(text, text, numeric) does not exist
LINE 3:        string_agg(t.name || ' (' || ROUND(bytes/1048576) || ...
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

Now, this isn't terribly difficult to correct, but it requires restructuring the query to replace the grouping construct - and such work can quickly accumulate errors. Fortunately, EDB Postgres Advanced Server supports both LISTAGG and string_agg, so this query doesn't need to change when migrating from Oracle.
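
For the curious, here's a sketch of one way that restructuring might look in stock PostgreSQL: GROUP BY replaces SELECT UNIQUE plus the window partitioning, and string_agg takes its ORDER BY inside the call rather than via WITHIN GROUP. It's an illustration rather than a faithful translation - note that the average is now computed per album title instead of per media type, which happens to give the same answer here but is exactly the kind of subtle semantic shift this sort of rework invites.

-- A plain-PostgreSQL restructuring (a sketch, not a faithful translation)
SELECT title,
	ROUND(AVG(bytes)/1048576) AS media_avg_mb,  -- now averaged per title, not per media type
	string_agg(t.name || ' (' || ROUND(bytes/1048576) || ' mb)', chr(10)
		ORDER BY trackid) AS track_list
FROM track t
JOIN album USING (albumid)
JOIN mediatype USING (mediatypeid)
WHERE lower(title) LIKE '%baby%'
GROUP BY title
ORDER BY title;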

Compatibility preserves the value of your existing work

In both of the examples shown here, you probably wouldn't use the functions and syntax demonstrated for new work; there are better, more familiar, or at least more widely available equivalents provided natively by PostgreSQL (and many other databases). But by supporting them, EDB Postgres Advanced Server gives you the ability to reuse existing logic with minimal modification, allowing you to focus your time and expertise on solving new problems.

Try it on your own cluster: export and import

If you'd like to try these examples on your own BigAnimal cluster, follow these instructions to import the example database.

  1. Connect to your EDB Postgres Advanced Server cluster as edb_admin using psql. You can grab the command for this from your cluster's Overview tab - it'll look like this (where <YOUR CLUSTER HOSTNAME> is specific to your cluster):

    psql -W "postgres://edb_admin@<YOUR CLUSTER HOSTNAME>:5432/edb_admin?sslmode=require"

    You'll be prompted for the password you specified when creating your cluster.

  2. Create a new database and connect to it (you'll be prompted again for your cluster password):

    CREATE DATABASE chinook;
    \c chinook
  3. Create a limited-privilege user for connecting to this database:

    CREATE USER demo WITH PASSWORD 'password';
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO demo;

    See Details on managing access in BigAnimal databases for more information.

  4. Quit psql (\q).

  5. To export and import, you'll need compatible versions of pg_dump and pg_restore. If you aren't already running EDB Postgres Advanced Server, you can use the container image used by BigAnimal to ensure compatibility.

    docker pull quay.io/enterprisedb/edb-postgres-advanced
    docker run --rm -it quay.io/enterprisedb/edb-postgres-advanced /bin/bash
  6. Export the Chinook sample database from EDB's cluster using pg_dump:

    pg_dump --format custom \
        "postgres://demo:password@p-vmpc7c40fm.pg.biganimal.io:5432/chinook?sslmode=require" \
        > /tmp/chinook.dump
  7. Now grab the host name of your cluster from the Connect tab on the BigAnimal portal and use it in place of <YOUR CLUSTER HOSTNAME> to invoke pg_restore:

    pg_restore --no-owner \
        -d "postgres://edb_admin@<YOUR CLUSTER HOSTNAME>:5432/chinook?sslmode=require" \
        /tmp/chinook.dump

    You may see an error about pg_stat_statements - you can safely ignore it. pg_stat_statements is a very useful extension and is installed by default on BigAnimal clusters, but since we're connecting as the admin user and not a superuser, we can't modify it. The rest of the schema and data is restored regardless.

  8. Finally, connect to this database:

    psql "postgres://demo:password@<YOUR CLUSTER HOSTNAME>:5432/chinook?sslmode=require"

...and try some queries on your own cluster!
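
If you'd like a quick sanity check that the import worked before trying the demos, something simple like this (just a suggestion) will do:

-- Confirm the restored data is present; any of the demo queries above will work too
SELECT COUNT(*) AS tracks FROM track;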

Next steps