Enabling Debug on APEX Builder

When reporting a problem in the forums or in an Oracle Support ticket, if that problem occurs in the APEX Builder itself, the first instinct is to report the error message received. For example, “I’m getting ORA-01722: Invalid Number.”

Providing additional context about where that error occurs can greatly help members of the APEX community to diagnose what might be happening. Often you can obtain this extra info – for example, a PL/SQL call stack – by using a second instance of the APEX Builder to enable debug tracing on your original APEX Builder session where you’re experiencing the problem.

Using a Second APEX Builder to Debug

To do this, you need two separate browser sessions (let’s say Chrome and Firefox). You can also use an incognito/private window of the same browser, but for simplicity I’ll explain it with Chrome and Firefox. In our scenario, “Chrome APEX” is the builder experiencing the problem and “Firefox APEX” is the second instance used to debug the first.

Using “Firefox APEX” Builder to enable debug tracing on “Chrome APEX” Builder’s session
  1. Open the APEX Builder in Chrome and navigate to the step just before the error occurs – that is, just before you would click (Next), (Save), or (Apply Changes)
  2. Open the APEX Builder in Firefox and log in to the same workspace as in step 1
  3. In Firefox APEX Builder, click the “person with wrench/spanner” icon > Monitor Activity > Active Sessions
  4. Note the session id shown in the address bar of Chrome APEX
  5. In Firefox APEX Builder, click on the session id from step 4
  6. In Firefox APEX Builder, set the debug level to “Info” on the session of the Chrome APEX Builder and click (Apply Changes)
  7. In Chrome APEX Builder, click (Next) or (Save) or (Apply Changes) to cause the problem to occur
  8. In Firefox APEX Builder, set the debug level back to “Disabled” on the Chrome APEX session and click (Apply Changes). This also refreshes the list of Page Views to show the latest debug tracing requests that were captured.
  9. In Firefox APEX Builder, notice the latest debug log entry, which includes a highlighted link to view the debug trace of Chrome APEX
  10. In Firefox APEX Builder, view the debug trace of Chrome APEX. Once you’ve confirmed the error is contained in the request log you are viewing, save the log to your computer as HTML by choosing Action > Download

Now, you can attach the HTML file to your forum posting or Oracle Support ticket. In the case of the forum, you may have to rename the file to have a .html.txt extension to make the forum attachment restrictions happy.

This will help anyone reviewing your post or support ticket to better assist you in triaging what might be going wrong and ideally quickly suggest a workaround or alternative way of accomplishing the same task.

Flexing Fusion Apps Filters

I enjoyed returning to the APEX Alpe Adria conference to chat with and learn from members of the APEX community. If you joined me in Maribor, thanks for your interest in extending Fusion Applications with Oracle APEX. You can download the slides from my session here. In a nutshell, using Fusion Apps business objects in APEX is now as easy as working with local tables.

Fusion Apps (FA) business object REST APIs support filtering using SQL-like predicates and sorting using object attribute names. APEX delegates all searching and sorting to the FA service by automatically using these features. In addition, APEX requests only the data for attributes the current region is using to keep payloads small and fast. These optimizations help you deliver a great user experience. This article explains how to use these features in APEX pages, List of Values, and in PL/SQL business logic.
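
To give a feel for what that delegation looks like on the wire, a GET request from APEX to an FA leads endpoint ends up roughly like the following. This is illustrative only: the path and version segment depend on your pillar and pod, and the parameter values here are examples, not something APEX literally sends for your page.

GET /crmRestApi/resources/11.13.18.05/leads?q=StatusCode='QUALIFIED'&fields=Name,Budget_c&orderBy=Budget_c:desc&limit=25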

Setting the Stage

Each Fusion Apps pillar – HCM, ERP, CX, SCM, etc. – has a set of standard business objects. It also lets admins with the appropriate privileges extend the app data model. To manage data specific to a company’s unique requirements, an FA customer admin can add new custom attributes to standard objects. In some pillars, they can also create new custom business objects and relate them to other objects.

Each standard or custom object has a corresponding REST API. All share a common set of features and include custom attributes automatically. An FA business object REST service can describe itself, retrieve filtered data, and handle create, update, and delete operations. It also supports bulk operations, where a batch of changes can be done in a single transaction.

Whenever a client uses an FA REST API to create or change data, the service evaluates all appropriate business object validation rules and executes any processing logic. These include behaviors defined by the FA developers at the Oracle “factory” as well as ones added by FA admins as customizations. If the requested operation fails, the response includes one or more error messages to help the end-user understand what requires attention before submitting again. APEX automatically presents these messages to the end-user in a familiar way.

Lead Tracking for an Art Gallery

An art gallery using the Fusion CX suite might use the standard Leads object to track potential customers who have shown interest in purchasing art, together with custom objects like Artwork_c, ArtMedium_c, Artist_c, and InterestedArtworks_c to track the artworks, media, artists, and pieces a particular lead is interested in purchasing. The Resources standard object stores the staff members (a.k.a. human resources) working at the gallery who follow up after taking ownership of a lead. The following diagram shows a simplified version of the business object model. Notice the convention that the names of custom objects and custom attributes have the _c suffix to distinguish them from the names of the standard objects and attributes.

Two Fusion Apps standard objects (Leads, Resources) and four custom objects

FA REST Filter Predicates

Fusion Apps business objects REST APIs support SQL-like filter predicates. You phrase them in terms of the case-sensitive business object attribute names. For example, to retrieve leads with a budget greater than or equal to 10000, you would use the following filter expression with the leads endpoint:

Budget_c >= 10000

You can combine conditions using conjunctions and group conditions using parentheses. So, to find leads with names that start with ‘A’ or ‘Z’ in addition to having the above budget, you can write:

Budget_c >= 10000 and (Name like 'A%' or Name like 'Z%')

For case-insensitive comparisons, use the upper() function around the attribute name on the left-hand-side of the expression. To find leads containing a letter a or z somewhere in the name, you can use:

Budget_c >= 10000 and (upper(Name) like '%A%' or upper(Name) like '%Z%')

Each filter predicate has the name of an attribute on the left-hand side – optionally surrounded by upper() – followed by an operator, and finally a literal operand value on the right-hand side. When using the between operator, you provide two literal values, as in this example:

(Budget_c between 7000 and 10000) and (Name like '%A%' or Name like '%Z%')

When filtering on date or timestamp values, the literal values must be strings in the ISO8601 format, so to find leads created any time on December 31st, 2023 you use the following filter. It references one of the common audit attributes that include CreationDate, CreatedBy, LastUpdateDate, and LastUpdatedBy:

CreationDate between '2023-12-31T00:00:00' and '2023-12-31T23:59:59'

To filter on a date field having no time component, leave off the time portion of the literal date string. For example, to find leads whose decision date is before May 1st, 2024, use:

DecisionDate_c < '2024-05-01'

To find leads with qualified status having a rank of warm or hot, you can write a filter like:

StatusCode = 'QUALIFIED' and Rank in ('WARM','HOT')

To find leads that have specified some decision date, you can use:

DecisionDate_c is not null

Conversely, to check for leads with no decision date provided, use:

DecisionDate_c is null

When an FA REST service exposes so-called accessor attributes to related objects, you can reference them in a filter predicate. For example, to find any leads who have shown interest in artwork whose price is between 500 and 1000, you can write the filter:

InterestedArtworks_c.ArtworksObject_c.Price_c between 500 and 1000

Using an External Filter on a Page

After defining a REST Data Source of type Oracle Cloud Applications (SaaS) REST Service for a particular FA REST API, you can use it in any APEX region. Whenever a region uses a REST Data Source, the Property Editor shows the External Filter and Order By section with an Enable switch. After flipping it to the on position, you can enter an external filter clause and/or external order by clause. The syntax of both is specific to the type of the REST data source. One created for an FA REST API using the Oracle Cloud Applications (SaaS) REST Service type lets you specify any legal filter predicate we saw above.

When writing one or more predicate expressions in your external filter, you can use a literal value for the right-hand-side operand. Alternatively, you can reference a value from APEX session state using substitution syntax like this:

(DecisionDate_c < &P3_DECISION_DATE.) and
(InterestedArtworks_c.ArtworksObject_c.Price_c 
between &P3_LOW_PRICE. and &P3_HIGH_PRICE.)
and Budget_c > 5000 

The value of the page items you reference in the external where clause can be computed by page processes or computations in the Pre-Rendering section of the page. This would come in handy if, for example, the operand for filtering on the lead’s DecisionDate_c field needed to be a function of the current date, formatted as an ISO 8601 string.
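
For instance, a short Pre-Rendering page process could default the filter operand to a date 30 days out, formatted the way the FA filter expects. This is a sketch; P3_DECISION_DATE is the example page item used above:

-- Sketch of a Pre-Rendering page process body: default the filter operand
-- to 30 days from today in ISO 8601 date format
:P3_DECISION_DATE := to_char(sysdate + 30, 'YYYY-MM-DD');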

If the region type where you’ve configured an external filter allows end-user filtering operations as well, then any filters added by the end user are combined with the external filter using the and conjunction in the filter expression that APEX sends to the FA REST endpoint.
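
For example, if an end user added a search on Rank on top of a simplified version of the external filter above, the combined expression APEX sends might look something like this (illustrative only):

(DecisionDate_c < '2024-06-30') and (Budget_c > 5000) and Rank = 'HOT'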

Using an External Order By on a Page

To configure a particular sort order for a region based on an FA REST API, use an external order by clause. The syntax is simple. Just provide a comma-separated list of case-sensitive attribute names. For example, to sort on the lead name, just mention the attribute name like this:

Name

To sort descending by budget amount, you can use the additional desc qualifier:

Budget_c:desc

If you do not provide a sort direction qualifier, then an ascending sort is assumed, but you can always explicitly specify the asc qualifier, too. For example, to sort descending by budget and then ascending by name, you can configure the external order by to be:

Budget_c:desc,Name:asc

External Filter & Order By for LOVs

When editing a List of Values on a Fusion Apps REST Data Source, you can use the same external where clause and external order by clause features explained above. Just look for the appropriate fields in the LOV’s Advanced REST Data Source Attributes section.

If your LOV is based on an FA REST data source with infrequently-changing lookup data, you can markedly improve performance by enabling the Use Synchronization Table switch. This will appear after you’ve configured REST Synchronization for the FA REST data source in question. Flipping this switch to the on position causes the LOV to use the locally synchronized data instead of making a REST call on each access. In this case, the External Filter and External Order By fields are no longer applicable.

If you need to apply an external filter to the REST synchronization process itself, you can do that using the p_sync_external_filter_expr parameter of the dynamic_synchronize_data() procedure in the apex_rest_source_sync package. By combining this with a scheduled automation, you can perform the REST synchronization on an appropriate schedule using your custom external filter whenever the synchronization runs against the Fusion Apps REST APIs to which the data source is related.

Using Fusion Apps REST APIs in PL/SQL

Using the procedures and functions in the apex_exec package, you can work programmatically with any REST Data Source. When working with Fusion Apps REST data sources, I like to work with the data using the business object attribute names instead of having to remember the corresponding data profile column names that apex_exec uses. The sample download at the end includes a helper package I created named oca_ds, which is short for Oracle Cloud Applications Data Source. It complements the apex_exec package to simplify working with Fusion Apps REST Data Sources programmatically.

Assuming that leads is the static id of a REST Data Source for working with the Fusion Apps leads endpoint, we can query the top 5 qualified leads with a rank of warm or hot having both a budget and decision date specified like this:

declare
   l_ctx apex_exec.t_context;
   type t_lead is record (
        name varchar2(255),
        budget number,
        created timestamp with time zone);
   type t_leads is table of t_lead index by pls_integer;
   l_leads t_leads;
begin
   l_ctx := oca_ds.open_query(
               p_select   => 'Name,Budget_c,CreationDate',
               p_from     => 'leads',
               p_where    => q'~
                              StatusCode = 'QUALIFIED' and
                              Rank in ('WARM','HOT') and
                              Budget_c is not null and
                              DecisionDate_c is not null
                              ~',
               p_order_by => 'Budget_c:desc',
               p_max_rows => 5);
   while apex_exec.next_row(l_ctx) loop
      l_leads(l_leads.count+1) :=
           t_lead(name    => oca_ds.get_varchar2(l_ctx,'Name'),
                  budget  => oca_ds.get_number(l_ctx,'Budget_c'),
                  created => oca_ds.get_timestamp_tz(l_ctx,'CreationDate'));
   end loop;
   oca_ds.close(l_ctx);
   -- Work with top 5 leads in l_leads here...
exception
   when others then
      oca_ds.close(l_ctx);
      raise;
end;

Inserting New Leads in PL/SQL

Of course you can use a Form or Interactive Grid region to easily insert, update, or delete data from the Leads REST data source without writing any code, but should you need to do so programmatically, it’s easy. To insert data using an FA REST data source, use the oca_ds.open_dml function to create an apex_exec.t_context object that works like a temporary rowset. Then for each new row you want to add, call oca_ds.insert_row() and then use oca_ds.set_value() to set the value of each attribute necessary. Finally, to process the pending changes call apex_exec.execute_dml().

For example, you can write the following code to insert two new leads:

declare
   l_ctx apex_exec.t_context;
begin
  -- Insert two new leads
   l_ctx := oca_ds.open_dml(
               p_attributes =>'Name,Budget_c,DecisionDate_c',
               p_from       => 'leads');
   -- Insert first lead
   oca_ds.insert_row(l_ctx);
   oca_ds.set_value(l_ctx,'Name','Remy Ratatouille');
   oca_ds.set_value(l_ctx,'DecisionDate_c',date'2024-07-31');
   oca_ds.set_value(l_ctx,'Budget_c',1800);
   -- Insert second lead
   oca_ds.insert_row(l_ctx);
   oca_ds.set_value(l_ctx,'Name','Anton Ego');
   oca_ds.set_value(l_ctx,'DecisionDate_c',date'2024-05-15');
   oca_ds.set_value(l_ctx,'Budget_c',800);
   -- Save the changes
   apex_exec.execute_dml(l_ctx);
   oca_ds.close(l_ctx);
exception
   when others then
      oca_ds.close(l_ctx);
      raise;
end;

Updating a Lead in PL/SQL

Updating data using an FA REST data source follows a similar pattern. Use the oca_ds.open_dml function to create an apex_exec.t_context object. Then for each row you want to update, call oca_ds.update_row() and then use oca_ds.set_value() to set the value of each attribute necessary. Finally, to process the pending changes call apex_exec.execute_dml(). Often you will start by querying the row to consider its existing values before updating some of them. In that case you can call apex_exec.set_values() to copy all the row values from the queried row into the row to be updated.

For example, to update a lead named ‘Anton Ego’ to increase his budget by 20% and adjust his decision date to be a week (7 days) farther in the future, you can write the following code. Notice that we include the APEX$ResourceKey attribute in the list since it’s the unique identifier of the row that’s needed to perform the update.

declare
   l_ctx     apex_exec.t_context;
   l_dml_ctx apex_exec.t_context;
   -- other declarations here
begin
  -- Retrieve the lead named 'Anton Ego' and update it to increase
  -- its budget by 20% and push its decision date out one week
  l_ctx := oca_ds.open_query(
              p_select => 'APEX$ResourceKey,Budget_c,DecisionDate_c',
              p_from   => 'leads',
              p_where  => q'~Name='Anton Ego'~');
   if not apex_exec.next_row(l_ctx) then
      raise_application_error(-20001,'Lead Anton Ego not found');
   end if;
   l_dml_ctx := oca_ds.open_dml(
      p_from => 'leads',
      p_attributes => 'APEX$ResourceKey,Budget_c,DecisionDate_c',
      p_lost_update_detection => true);
   oca_ds.update_row(l_dml_ctx);
   -- Set the row version checksum used for lost update protection
   apex_exec.set_row_version_checksum(
      p_context   => l_dml_ctx,
      p_checksum  => apex_exec.get_row_version_checksum(l_ctx));
   -- Copy the values of the fetched lead into the update row
   apex_exec.set_values(
      p_context         => l_dml_ctx,
      p_source_context  => l_ctx );   
   -- Adjust the budget and decision date appropriately
   oca_ds.set_value(l_dml_ctx,'Budget_c', 
                    oca_ds.get_number(l_dml_ctx,'Budget_c') * 1.2);
   oca_ds.set_value(l_dml_ctx,'DecisionDate_c', 
                    oca_ds.get_date(l_dml_ctx,'DecisionDate_c') + 7);
   -- Save the changes
   apex_exec.execute_dml(l_dml_ctx);
   oca_ds.close(l_dml_ctx);
   oca_ds.close(l_ctx);
exception
   when others then
     oca_ds.close(l_dml_ctx);
     oca_ds.close(l_ctx);
     raise;
end;

Deleting a Lead in PL/SQL

Deleting data using an FA REST data source is a bit simpler. As before, use the oca_ds.open_dml function to create an apex_exec.t_context object. Then for each row you want to delete, call oca_ds.delete_row() and then use oca_ds.set_value() to set the value of the resource key APEX$ResourceKey. Finally, to process the pending changes call apex_exec.execute_dml().

Assuming the system-assigned resource key for the Anton Ego lead is 4, the following code deletes this lead:

declare
   l_dml_ctx apex_exec.t_context;
begin
  -- Delete lead with resource key '4'
   l_dml_ctx := oca_ds.open_dml(
                   p_from       => 'leads',
                   p_attributes => 'APEX$ResourceKey');
   oca_ds.delete_row(l_dml_ctx);
  -- Set the resource key of the row to be deleted
  oca_ds.set_value(l_dml_ctx,'APEX$ResourceKey', '4');
  -- Save the changes
  apex_exec.execute_dml(l_dml_ctx);
  oca_ds.close(l_dml_ctx);
exception
   when others then
     oca_ds.close(l_dml_ctx);
     raise;
end;

Finding the Logged-in User’s PartyId

As a final use case to explore, let’s imagine we are an art gallery salesperson logging in to the APEX app to work our leads for the day. Fusion Apps represents each art gallery staff member – i.e. human resource – as a Resource object in the system. When a staff member is associated with other business objects, that resource’s PartyId attribute value is used as a foreign key.

For example, a lead gets assigned to an “owner” to contact the potential customer to qualify if there is a real opportunity to sell some art. To find the list of leads owned by a particular salesperson like “Matt Hooper”, we need to know the value of Matt’s PartyId and then query the leads REST API to filter for leads where OwnerId equals the logged-in user’s PartyId value.

To simplify using the current APEX user’s PartyId in external filters on various pages of the app, we can look it up using their Username from the resources REST API after authentication and store that PartyId value in APEX session state for the duration of the user’s session. Assume we’ve created a REST Data Source of type Oracle Cloud Applications (SaaS) REST Service for the Fusion Apps resources endpoint and assigned it a static id of resources. Furthermore, assume we’ve created shared component application items named APP_USER_PARTYID and APP_USER_FULLNAME. You can write the following code in a shared component Application Process with the After Authentication process point to look up the current user’s PartyId and PartyName from the resources REST data source and store those values in the application items:

declare
    l_ctx apex_exec.t_context;
begin
    l_ctx := oca_ds.open_query(
        p_select=> 'PartyId,PartyName',
        p_from => 'resources',
        p_where => 'upper(Username) = '||
                       apex_exec.enquote_literal(:APP_USER));
    if apex_exec.next_row(l_ctx) then
        :APP_USER_FULLNAME := oca_ds.get_varchar2(l_ctx,'PartyName');
        :APP_USER_PARTYID  := oca_ds.get_number(l_ctx,'PartyId');
    end if;
    oca_ds.close(l_ctx);
end;

After setting this up, other pages in the app can reference the value of APP_USER_PARTYID in external filters using substitution syntax as we saw earlier in this article. For example, in a region based on the Leads REST data source, to filter the leads to show only ones owned by the logged-in user, we could use an external filter of:

OwnerId = &APP_USER_PARTYID.

Can Any Attribute Appear in a Filter?

A last consideration worth mentioning is that not all business object attributes can be used in a filter predicate. To discover which attributes you can use, consult the Oracle Cloud Apps Data Source Explorer utility app. There you can see if the attribute has a Y or an N in the Queryable? column. The adjective “queryable” is a bit of a misnomer. An attribute with an N in the Queryable? column can still participate in the query in the select list. However, it cannot appear in a filter predicate. For example, as shown below, the InventoryItemDescription attribute in the Leads object is not queryable.

Reviewing which attributes are Queryable in the Oracle Cloud Apps Data Source Explorer

An attempt to reference InventoryItemDescription in a filter predicate results in the error:

Search filter includes a non-queryable column or unsupported operator

To install the Oracle Cloud Apps Data Source Explorer app visit the Gallery > Utility Apps tab and click the Install button like you’ve done for other sample apps. It helps you explore interesting information about all the Fusion Apps REST APIs your applications use.

Downloading the Helper Package

Hopefully, this information about Fusion Apps REST API filters and how to use them will come in handy the next time you create an APEX application that works with these powerful business object APIs. You can download the source of my oca_ds helper package here. For more information, see the Working with REST Data Sources for Oracle Cloud SaaS Applications section of the APEX developer documentation. It explains all the features for working easily with Fusion Applications REST APIs.

Simplify APEX App REST APIs with JSON Duality Views

When writing your APEX apps, it’s best to put business logic in packages. This practice eases maintenance, improves performance, and favors reuse. I set out to combine an Oracle 23c JSON Relational Duality View with my business logic package to simplify building REST APIs for my APEX apps. Read on to learn what I discovered. If you’re already using Oracle 23c Free, download the sample app at the end to try on your own system.

As we’ll see, the duality of JSON and relational interaction is a superpower. You get equally functional read and write access to the same data from both JSON and relational points of view. This packed a productivity punch for the use case I tried. It lets tools that work best with relational data keep doing so, while simultaneously offering easy access to the same data for JSON use cases like external systems integrations. The best of both worlds!

NOTE: The sample app uses APEX 23.2 and Oracle 23c Free (23.3).

(Technical Note: to see a larger version of any figure, right click on it and choose Open Image in New Tab from the context menu…)

Overview of the Data Model

The sample app uses the simple data model shown below, tracking things organized into categories and stocked in certain quantities in warehouses.

Data Model for a simple app tracking categorized things with stock in warehouses

APEX App and Business Logic

The APEX pages in the sample app include an interactive report of things with a companion form-and-grid page to create or edit a thing and its related warehouse stock entries. Two additional grid pages let users edit the warehouses and categories reference data. We’ll focus on the Thing form-and-grid page since that’s where users create and modify data, and where interesting business logic gets enforced.

APEX app is an Interactive Report with Form + Grid page to edit a thing and its stock

The eba_demo_thing package has functions and procedures that implement the app’s business logic. The table lists their names and describes what they do.

Program Unit – Description
validate_warehouse – Check warehouse is valid
validate_category – Check category is valid
validate_quantity – Check stock quantity is natural number
validate_available – Check available date is no older than a year
validate_price – Check price is between 0.01 & 100,000.00
consolidate_thing_warehouses – Consolidate thing stock for same warehouse
closest_ninety_nine_cent – Return closest price that ends with 0.99
check_thing_required_fields – Check thing required fields are provided
check_stock_required_fields – Check stock required fields are provided
Overview of application business logic program units in eba_demo_thing package
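
Based on that table and the calls shown later in this article, the package spec looks roughly like the sketch below. Parameter names and exact signatures are assumptions; the required-field check routines are listed in the table but their signatures are not shown in the article, so they are omitted here.

-- A sketch of the eba_demo_thing spec implied by the table above
create or replace package eba_demo_thing as
    -- Validations return true when the supplied value passes the check
    function validate_warehouse(p_warehouse    in varchar2,
                                p_warehouse_id in number) return boolean;
    function validate_category (p_category     in varchar2,
                                p_category_id  in number) return boolean;
    function validate_quantity (p_quantity     in number) return boolean;
    function validate_available(p_available    in date)   return boolean;
    function validate_price    (p_price        in number) return boolean;

    -- Helpers invoked by page processes (and reused by the JSON API later on)
    function  closest_ninety_nine_cent(p_price in number) return number;
    procedure consolidate_thing_warehouses(p_id in number);
    procedure default_missing_warehouses(p_id in number);

    -- check_thing_required_fields and check_stock_required_fields also live
    -- here; their signatures are not shown in the article
end eba_demo_thing;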

As we saw earlier, the Thing page lets users create and edit a thing and its warehouse stock quantities. As shown below in Page Designer, we engage the business logic functions and procedures using validations and page processes in the Processing tab. Each validation is a one-line call to a function in the eba_demo_thing package that returns a boolean, passing in a page item value to be checked as an argument. Multiple Invoke API-type processes call a procedure in the same package to:

  • Adjust the price to a more user-enticing *.99 cent amount
  • Consolidate stock quantities from the same warehouse, if any
  • Default stock entries with zero quantity for any missing warehouses
Validations and Page Processes use business logic in eba_demo_thing package

We can see the business logic in action in the APEX app’s Thing page. Let’s create a new thing in the BOOKS category as shown below. Notice I’ve entered a price of 36.50 that does not end with *.99, and entered two separate stock entries for the same LAS VEGAS warehouse.

Entering a new thing in the BOOKS category with stock in LAS VEGAS warehouse

After clicking the (Create) button to dismiss the dialog, we can edit the new thing to see the page processes adjusted the price to 35.99, consolidated the 12 & 13 quantities in the LAS VEGAS warehouse, and created zero-quantity stock entries for the missing SEATTLE and BALTIMORE warehouses. So everything is working great in the APEX app.

Observing the effects of the application business logic in the web user interface

Letting External Apps Work With Things

In addition to the APEX web app for end users, let’s suppose that external systems need to create, update, and delete things, too. This requires a set of appropriate APIs, and the de facto standard approach involves REST services with JSON data payloads. The external system will use HTTP requests to a well-known “endpoint” URL like:

http://phoenix:9094/ords/diveintoapex/thing/

on our system, using standard “verbs” like:

  • GET – to retrieve things
  • GET – with a unique identifier added to the URL to access a particular thing
  • POST – to create a new thing (i.e. insert)
  • PUT – to replace an existing thing with a modified version (i.e. update)
  • DELETE – to remove an existing thing

All of the requests involve sending and/or receiving thing data in JSON format. So we need a strategy to represent a thing and its related warehouse stock quantities as a JSON document. As we’ll see next, the JSON Relational Duality View can help!

Add JSON Access with a Duality View

An Oracle 23c JSON Relational Duality View is a database view with a single JSON column named data that you can use to read and write information from one or more underlying tables. You define the view using a syntax that mirrors the structure of the JSON you want it to work with. I created a things_dv duality view using the DDL statement below. Starting with the Oracle 23.4 database release, all duality views will require the specially named attribute _id to uniformly act as their unique identifier. Even though I’m working with database release 23.3, I followed that best-practice in my things_dv definition.

Notice how I’ve mentioned my table names eba_demo_things, eba_demo_thing_categories, eba_demo_thing_stock, and eba_demo_warehouse and provided annotations like @insert, @update, @delete to indicate what DML operations I want to allow on each table. The duality view uses the foreign key constraints to understand how the tables relate. For each property, I indicate the JSON attribute name on the left side of the colon and then the appropriate database column name on the right side. When no annotations are present – as is the case for eba_demo_thing_categories and eba_demo_warehouse – then the duality view only allows referencing existing categories and warehouses, respectively.

create or replace json relational duality view things_dv as
eba_demo_things @insert @update
{
    _id         : id
    name        : name
    description : description
    price       : price
    available   : available
    eba_demo_thing_categories @unnest
    {
        categoryId : id
        category   : name
    }
    stock : eba_demo_thing_stock @insert @update @delete
    {
        thingStockId : id
        quantity     : quantity
        eba_demo_warehouse @unnest
        {
            warehouseId : id
            warehouse   : name
        }
    }
}

I used the @unnest annotation to pull properties like categoryId and category up to the same level in the JSON as name, price, and available. I’ve also used it to pull the warehouseId and warehouse properties up to the same level as quantity. Without using this annotation, the properties from a related object will appear as a “nested” JSON object with an appropriate property name. The choice is up to the developer.
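
For example, if I had omitted @unnest on the category lookup and given it a property name instead, the category would surface as a nested object, along these lines (the property name categoryInfo is just for illustration):

categoryInfo : eba_demo_thing_categories
{
    categoryId : id
    category   : name
}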

After creating the duality view, we can use our favorite tool like SQLcl or the SQL Developer extension for VS Code to query one of the things to see what it looks like in JSON. Let’s retrieve the one whose unique id is 2. The json_serialize() function takes the JSON data column name, and the returning clob pretty option lets the database know the JSON might exceed 4000 characters and should be formatted with nice indentation. We’ll see the powerful json_value() function multiple times throughout this article. It lets you easily access the value of any part of the JSON document you pass in, based on the flexible JSON Path expression language. Here we’re using the simplest possible expression to refer to the _id property at the “top” or root of the JSON document:

set pagesize 999
set long 2000
select json_serialize(data returning clob pretty) as data
from things_dv
where json_value(data,'$._id') = 2
/

The result shows data from all four tables in our data model in a single JSON document reflecting the structure we defined using the DDL duality view definition above. It also includes some automatically added “bookkeeping” info in a _metadata property.

DATA
------------------------------------------------------------------------
{
"_id" : 2,
"_metadata" :
{
"etag" : "DC3F16548E190F49130482301365EF1C",
"asof" : "0000000000C514BC"
},
"name" : "Vibrant Horizons",
"description" : "A compilation of live performances from the renowned
jazz festival, capturing the essence of contemporary jazz.",
"price" : 22.99,
"available" : "2024-05-20T00:00:00",
"categoryId" : 2,
"category" : "MUSIC",
"stock" :
[
{
"thingStockId" : 4,
"quantity" : 7,
"warehouseId" : 3,
"warehouse" : "BALTIMORE"
},
{
"thingStockId" : 5,
"quantity" : 2,
"warehouseId" : 2,
"warehouse" : "SEATTLE"
},
{
"thingStockId" : 6,
"quantity" : 6,
"warehouseId" : 1,
"warehouse" : "LAS VEGAS"
}
]
}

A duality view is not just for querying data. It supports insert, update, and delete operations, too, if your duality view definition includes annotations allowing those. For example, we can insert a new thing in the BOOKS category with a quantity of 25 in the LAS VEGAS warehouse using the INSERT statement below:

insert into things_dv(data)
values(q'~
{
    "name" : "Building Oracle XML Applications",
    "description" : "Master Oracle using XML, SQL, XSLT, and XSQL pages",
    "price" : 32.99,
    "available" : "2023-09-01",
    "category" : "BOOKS",
    "stock" : [
        {
            "warehouse" : "LAS VEGAS",
            "quantity" : 25
        }
    ]
}
~')

Even before committing the inserted data, we can immediately wield the power of JSON/relational duality by querying the same data back from the underlying relational tables. The figure below shows the result of using SQL Developer in VS Code to run a familiar, relational query in the same pending transaction where we just inserted the JSON document using the duality view. I wanted to find all things with a particular name, a category name of BOOKS, and 25 or more quantity in stock in the LAS VEGAS warehouse. Our newly inserted book shows up in the results! Also notice the joins with my category and warehouse tables work because the duality view used the unique indexes on the category name (BOOKS) and warehouse name (LAS VEGAS) to look up the respective primary key values. Of course, at this point we can commit the transaction to make the new thing permanent, or rollback to abandon the inserted data.

After inserting thing in JSON via duality view, we can requery it from the tables
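
A query along those lines might look like the sketch below. The foreign key column from eba_demo_things to its category, called category here, is an assumption, since the DDL above infers the relationship from the constraint rather than naming the column.

select t.name, c.name as category, w.name as warehouse, s.quantity
  from eba_demo_things           t
  join eba_demo_thing_categories c on c.id = t.category      -- assumed FK column name
  join eba_demo_thing_stock      s on s.thing_id = t.id
  join eba_demo_warehouse        w on w.id = s.warehouse
 where t.name     = 'Building Oracle XML Applications'
   and c.name     = 'BOOKS'
   and w.name     = 'LAS VEGAS'
   and s.quantity >= 25;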

With the things_dv duality view in place, we have part of the solution sorted. And we’ve seen we can work with the duality view using SQL operations directly. Next, let’s focus on letting an external app access the JSON from the duality view using an ORDS REST service.

When using REST services in production, you will of course secure their access using one of the robust ways ORDS supports for doing that. But since here we’re in a learning mode, I’m creating services with no authentication for simplicity.

ORDS AutoREST for Duality Views

The easiest way to expose a duality view to REST clients is to enable AutoREST access. We can do this from SQL Developer (Web), by calling the ords.enable_object procedure, or by going to SQL Workshop > RESTful Services in the APEX Builder. By selecting the Enabled Objects in the RESTful Data Services tree, you’ll see a list of all AutoREST enabled objects. I clicked (Create AutoREST Object) and chose the THINGS_DV view from the list to enable it. I left the Object Alias the same as the name of the duality view, but as shown in the figure below you can adjust the name if you prefer. This alias is what appears in the URL that REST clients will use, as reflected in the Full URL field below.

After enabling the THINGS_DV Duality View for AutoREST access
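
If you prefer the PL/SQL route mentioned above, a call along these lines enables the view for AutoREST. The schema name is a placeholder for your own parsing schema, and p_auto_rest_auth is false only because we are skipping authentication while learning:

begin
    ords.enable_object(
        p_enabled        => true,
        p_schema         => 'DIVEINTOAPEX',  -- placeholder: your parsing schema
        p_object         => 'THINGS_DV',
        p_object_type    => 'VIEW',
        p_object_alias   => 'things_dv',
        p_auto_rest_auth => false);          -- unauthenticated, for learning only
    commit;
end;
/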

Once we’ve AutoREST-enabled the duality view, we can create a dog_days.json file containing the following example JSON document:

 {
"name" : "David Hockney's Dog Days",
"description" : "Colourful portraits of Stanley and Boodgie",
"price" : 32.99,
"available" : "2023-09-01",
"category" : "BOOKS",
"stock" : [
{
"warehouse" : "LAS VEGAS",
"quantity" : 6
}
]
}

Then, we can use a command-line utility like curl to insert the Dog Days book as a thing into our database using the AutoREST-enabled duality view.

% curl -X POST \
-H "Content-Type: application/json" \
-d @dog_days.json \
http://phoenix:9094/ords/diveintoapex/things_dv/

The REST service responds with the JSON document that now contains the system-assigned _id property, an additional _metadata section, and a links array with multiple URLs related to the thing we just inserted. For brevity, I’ve replaced the actual links array entries with “…” below.

{"_id":22,"name":"David Hockney's Dog Days",
"description":"Colourful portraits of Stanley and Boodgie",
"price":32.99,"available":"2023-09-01T00:00:00",
"categoryId":1,"category":"BOOKS",
"stock":[{"thingStockId":232,"quantity":6,
"warehouseId":1,"warehouse":"LAS VEGAS"}],
"_metadata":{"etag":"64EC3C30EB434F8825FA09BB76EA49A6",
"asof":"0000000000BE113B"},"links":[...]}

What Checks Does a Duality View Enforce?

An AutoREST-enabled duality view is useful for quickly setting up REST access for the same set of apps that you would trust to perform SELECT, INSERT, UPDATE, and DELETE operations against your duality view directly. The database will enforce basic datatype checking and not-null columns, as well as ensure all check constraints, unique constraints, and referential integrity constraints are satisfied. However, the database is not aware of the business logic that you’ve implemented in your APEX application. For example, suppose we update the contents of the dog_days.json file as follows:

{
    "name" : "David Hockney's Dog Days",
    "description" : "Colourful portraits of Stanley and Boodgie",
    "price" : 32.85,
    "available" : "2000-09-01",
    "stock" : [
        {
            "warehouse" : "LAS VEGAS",
            "quantity" : 6
        },
        {
            "warehouse" : "LAS VEGAS",
            "quantity" : 5
        }
    ]
}

After repeating the curl command above to POST the document to the AutoREST-enabled duality view endpoint, we see that the JSON document returned looks like this:

{"_id":23,"name":"David Hockney's Dog Days",
"description":"Colourful portraits of Stanley and Boodgie",
"price":32.85,"available":"2000-09-01T00:00:00",
"stock":[{"thingStockId":233,"warehouseId":1,
"warehouse":"LAS VEGAS","quantity":6},{"thingStockId":234,"warehouseId":1,"warehouse":"LAS VEGAS","quantity":5}],
"_metadata":{"etag":"EFF821D650F846223E9298C5911B0D70",
"asof":"0000000000BE22CC"},"links":[...]}

Understandably, the price of the thing was not adjusted to the nearest 99-cent price. The two quantities related to the LAS VEGAS warehouse were not consolidated, the stock entries for the missing warehouses were not defaulted, and the available date from the year 2000 was allowed even though the application requires it to be within the last year or in the future.

Some kinds of business logic can be written as database table triggers, but more complex use cases like the warehouse quantity consolidation or ones involving querying and changing multiple related tables can run into mutating table errors using that approach. In many cases, developers find it best to represent their business logic in an application layer of PL/SQL packages as we’ve done here with the eba_demo_thing package documented above.

If we know ahead of time that the AutoREST API will be used by an app we trust to always insert and update valid data, or if we have enforced everything we need as table constraints or in table triggers, then the AutoREST-enabled duality view alone is all we’d need. However, if we want to enforce all the same business logic that you implement in your APEX application, then instead of the AutoREST approach, we need to use ORDS REST services that call our own PL/SQL APIs. We’ll explore that option next.

Creating External API with Business Logic

I started by creating a second eba_demo_thing_json package to handle the JSON-specific operations that we’ll surface as web services using ORDS. Doing this makes it possible to write unit tests that exercise the JSON API without involving HTTP requests. As shown below, there are functions to get multiple things; get, post, or put a thing and return it; and a procedure to delete a thing. Notice that I’ve used the new json type instead of json_object_t so I can easily use JSON path expressions on the contents. We’ll see multiple examples below.

create or replace package eba_demo_thing_json as
-- Get Things with optional search
-------------------------------------------------
function get_things(
p_offset in number default null,
p_limit in number default null,
p_search in varchar2 default null)
return json;

-- Get a Thing by id
-------------------------------------------------
function get_thing(p_id in number)
return json;

-- POST (Insert) a new Thing
-------------------------------------------------
function post_thing(p_json in json)
return json;

-- PUT (Update & Replace) an existing Thing by id
-------------------------------------------------
function put_thing(p_id in number, p_json in json)
return json;

-- DELETE an existing Thing by id
-------------------------------------------------
procedure delete_thing(p_id in number);
end eba_demo_thing_json;

If you’ve used json_object_t in the past, no worries. The key difference is that values of json type can also participate in SQL statements directly, while ones of json_object_t type are useful in PL/SQL only. It is straightforward to work with the same JSON document in PL/SQL as either json or json_object_t as necessary. You can pass a json value to the json_object_t constructor to work with it using that interface, and calling the to_json method on a json_object_t value returns it as a json value instead. I prefer using the simpler-looking json type in my APIs to more easily involve its values in json_value and json_transform expressions in my code explained later.
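
Here is a minimal sketch of that round trip. The sample document is made up, and the SQL JSON functions are invoked through SELECT ... INTO, which works in any recent release:

declare
    l_json json;
    l_doc  json_object_t;
begin
    -- Start from a json value
    select json('{"name":"Vibrant Horizons","price":22.99}')
      into l_json from dual;
    -- json -> json_object_t, to work with the document via the object API
    l_doc := json_object_t(l_json);
    l_doc.put('price', 19.99);
    -- json_object_t -> json, ready for json_value/json_transform again
    l_json := l_doc.to_json;
    dbms_output.put_line('New price: ' || json_value(l_json, '$.price'));
end;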

A third package eba_demo_thing_ords encapsulates the ORDS-specific aspects of the service implementation, like writing out the HTTP response payload. Its procedures are a thin wrapper around the JSON API package program units, and accept the incoming payload as CLOB arguments, since that is how ORDS makes it available to us as we’ll see in the next step.

create or replace package eba_demo_thing_ords as 
-- GET Things with optional search
-------------------------------------------------
procedure get_things(
p_offset in number default null,
p_limit in number default null,
p_search in varchar2 default null);

-- GET a Thing by id
--------------------------------------------
procedure get_thing(p_id number);

-- POST (Insert) a new Thing
--------------------------------------------
procedure post_thing(p_json clob);

-- PUT (Update & Replace) an existing Thing
--------------------------------------------
procedure put_thing(p_id number, p_json clob);

-- DELETE an existing Thing by id
--------------------------------------------
procedure delete_thing(p_id number);
end eba_demo_thing_ords;

With these two packages in place, I used the APEX RESTful Services editor to define a thing module with templates for / and :id, and added appropriate one-line PL/SQL block handlers for each one. The figure below shows what the POST handler for /thing/ looks like. It’s a one-line call to eba_demo_thing_ords.post_thing passing in the special ORDS :body_text bind variable that contains the incoming request payload as a CLOB.

Handler for /thing/ template’s POST operation, passing request payload with :body_text
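
In text form, the POST handler body is simply something like this (the p_json parameter name comes from the package spec above):

begin
    eba_demo_thing_ords.post_thing(p_json => :body_text);
end;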

The complete set of operations I configured includes:

  • GET /thing/ →  eba_demo_thing_ords.get_things
  • POST /thing/ →  eba_demo_thing_ords.post_thing
  • DELETE /thing/:id →  eba_demo_thing_ords.delete_thing
  • GET /thing/:id →  eba_demo_thing_ords.get_thing
  • PUT /thing/:id →  eba_demo_thing_ords.put_thing

Trying External API with Business Logic

Now, if we use the endpoint URL for our PL/SQL-based ORDS REST service to post the Dog Days JSON document from the command line:

% curl -X POST \
-H "Content-Type: application/json" \
-d @dog_days.json \
http://phoenix:9094/ords/diveintoapex/thing/  

We get back an error that the category is missing, which I had not noticed before.

{"$error":"Required field missing: Category"}

After updating dog_days.json to have a category property as shown below and trying again to post the file using curl…

{
"name" : "David Hockney's Dog Days",
"description" : "Colourful portraits of Stanley and Boodgie",
"price" : 32.85,
"category" : "BOOKS",
"available" : "2000-09-01",
"stock" : [
{
"warehouse" : "LAS VEGAS",
"quantity" : 12
},
{
"warehouse" : "LAS VEGAS",
"quantity" : 13
}
]
}

We get an error that the available date is too far in the past:

{"$error":"A new thing's available date must be in the past year or the future"}

Hoping the third time’s the charm, we correct the dog_days.json available date value to 2023-09-01 and try the curl command again. This time it works, and we get the response below. Notice the price got adjusted to 31.99 and the categoryId & warehouseId got filled in appropriately. The stock quantities for the two LAS VEGAS warehouse rows got consolidated, and the missing warehouse entries were defaulted to have a zero stock.

{ "_id":21,
"name":"David Hockney's Dog Days",
"description":"Colourful portraits of Stanley and Boodgie",
"price":31.99,
"available":"2023-09-01T00:00:00",
"categoryId":1,
"category":"BOOKS",
"stock":[
{ "thingStockId":221,
"quantity":25,
"warehouseId":1,
"warehouse":"LAS VEGAS"},
{ "thingStockId":224,
"quantity":0,
"warehouseId":2,
"warehouse":"SEATTLE"},
{ "thingStockId":225,
"quantity":0,
"warehouseId":3,
"warehouse":"BALTIMORE"}
],
"_metadata":{ "etag":"2EA7CCB8CA550DCAE2E0180D61267D9F",
"asof":"0000000000BF5695"}}

Next let’s try an update. If we change our dog_days.json file to contain the contents returned by our most recent curl command above, we can then edit it to adjust the values of the price, available, and quantity values for the three warehouses as shown below:

{ "_id":21,
"name":"David Hockney's Dog Days",
"description":"Colourful portraits of Stanley and Boodgie",
"price":29.45,
"available":"2024-03-04",

"categoryId":1,
"category":"BOOKS",
"stock":[
{ "thingStockId":221,
"quantity":11,
"warehouseId":1,
"warehouse":"LAS VEGAS"},
{ "thingStockId":224,
"quantity":22,
"warehouseId":2,
"warehouse":"SEATTLE"},
{ "thingStockId":225,
"quantity":33,
"warehouseId":3,
"warehouse":"BALTIMORE"}
],
"_metadata":{ "etag":"2EA7CCB8CA550DCAE2E0180D61267D9F",
"asof":"0000000000BF5695"}}

After saving the edits to this file, we can try updating the thing business object by using curl to PUT the document back to its specific URL, which ends with its resource key value of 21:

% curl -X PUT \
-H "Content-Type: application/json" \
-d @dog_days.json \
http://phoenix:9094/ords/diveintoapex/thing/21 

Notice in the response that the price was adjusted to the nearest ninety-nine cents, the warehouse quantities were updated, and the etag value changed.

{ "_id":21,
"name":"David Hockney's Dog Days",
"description":"Colourful portraits of Stanley and Boodgie",
"price":28.99,
"available":"2024-03-04T00:00:00",
"categoryId":1,
"category":"BOOKS",
"stock":[
{ "thingStockId":221,
"quantity":11,
"warehouseId":1,
"warehouse":"LAS VEGAS"},
{ "thingStockId":224,
"quantity":22,
"warehouseId":2,
"warehouse":"SEATTLE"},
{ "thingStockId":225,
"quantity":33,
"warehouseId":3,
"warehouse":"BALTIMORE"}],
"_metadata":{ "etag":"AD67594C22A913DB020442DC2F68D2DA",
"asof":"0000000000BF9CED"}}

By leveraging the power of a duality view and ORDS REST services, and combining them with our business logic in a PL/SQL package, we achieved the goal we set out to accomplish. We now have the same application logic shared by both our APEX app and JSON-based REST APIs. Now let’s learn a bit more about the implementation details.

Flexing Our JSON/Relational Duality

If we switch back to the APEX application, we can edit the new thing we were just working with using JSON to see the latest information about David Hockney’s Dog Days book in the web UI.

Editing the thing in APEX that was just updated through the JSON REST API

Let’s see what happens if we use the APEX application to edit something about this thing, for example the quantity available of David Hockney’s Dog Days book in the SEATTLE warehouse. After adjusting the amount from 22 to 99 and clicking (Apply Changes), let’s repeat the curl command to update the same thing from the command line.

% curl -X PUT \
-H "Content-Type: application/json" \
-d @dog_days.json \
http://phoenix:9094/ords/diveintoapex/thing/21

This time the update fails and we see the error that the thing was changed by another user.

{"$error":"Thing changed by another user. Get the thing again and retry."}

The duality view automatically computes the etag as a checksum of the thing object’s data, including data of its nested stock information. The act of updating the SEATTLE warehouse quantity above changed data that was intrinsic to the David Hockney’s Dog Days thing object, so that caused its etag checksum to change. The value of the etag present in the dog_days.json file we are trying to PUT back to the JSON REST API no longer matches and the duality view automatically fails the update. By default the etag checksum takes all object properties into account, but the developer can use the @nocheck annotation when defining the duality view on properties that she doesn’t want to be included in the checksum calculation.

Exploring JSON Service API Implementation

The implementation of the post_thing function in the eba_demo_thing_json package performs four basic steps:

  1. Performs before insert or update processing
  2. Inserts JSON payload into things_dv duality view, returning _id value
  3. Executes after insert or update processing
  4. Returns the latest version of thing object from the duality view by id

-- PACKAGE: eba_demo_thing_json
-----------------------------
-- POST (Insert) a new Thing
-----------------------------
function post_thing(
    p_json in json)
    return json
is
    l_id   number;
    l_json json := strip_links(p_json);
begin
    before_insert_or_update(l_json,c_operation_insert);   /* 1 */
    insert into things_dv(data)                           /* 2 */
    values (l_json)
    returning json_value(data,'$._id') into l_id;
    after_insert_or_update(l_id,c_operation_insert);      /* 3 */
    return get_thing(l_id);                                /* 4 */
end post_thing;

The put_thing function is nearly identical. It does an update statement on the things_dv duality view instead of an insert and passes the constant c_operation_update to the before and after processing procedures.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- PUT (Update & Replace) an existing Thing by id
--------------------------------------------------------------
function put_thing(p_id number, p_json json)
    return json
is
    l_json json := strip_links(p_json);
begin
    before_insert_or_update(l_json,c_operation_update);   /* 1 */
    update things_dv                                       /* 2 */
       set data = l_json
     where json_value(data,'$._id') = p_id;
    after_insert_or_update(p_id,c_operation_update);      /* 3 */
    return get_thing(p_id);                                /* 4 */
end put_thing;

Take another look at the elegant simplicity of the code above. By incorporating the duality view, we reduce a huge amount of coding work to a single line of declarative SQL. The INSERT statement does the “magic” to create a new JSON-based thing, and the UPDATE statement handles changing an existing thing.

This saves us from writing any code related to saving the initial or modified JSON document to the underlying tables, performing the lookups related to the category and warehouse, as well as detecting whether the row has been changed by another user. We don’t have to detect what has changed, figure out which underlying tables to involve, or anything really. We just supplement the simple SQL INSERT or UPDATE operation with the additional business logic before and after that is unique to our application.

Dissecting Before Insert/Update Logic

The before_insert_or_update procedure is below. It receives the JSON document as an in out nocopy parameter that it can reason over, pass around to other procedures, and modify as necessary before the resulting document is inserted into or updated in the duality view. We’ve organized the code to make it easy to understand. We perform required-field checks, ensure there’s at least one warehouse, and adjust any missing or null warehouse quantities to be zero. Then we validate the available date if we’re performing an insert, and then validate the price, category, and warehouse and quantity values. Finally, we adjust the price to the nearest 99-cent value.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- BEFORE INSERT OR UPDATE ON Thing business logic
--------------------------------------------------------------
procedure before_insert_or_update(
    p_json      in out nocopy json,
    p_operation t_operation)
is
begin
    check_required_fields(p_json);
    check_at_least_one_warehouse(p_json);
    adjust_missing_or_null_qty(p_json);
    if p_operation = c_operation_insert then
        validate_available(p_json);
    end if;
    validate_price(p_json);
    validate_category(p_json);
    validate_warehouse_and_qty(p_json);
    adjust_price_to_99_cents(p_json);
end before_insert_or_update;

The helper procedures are all part of the sample app you can download and study, so we won’t analyze every line here. Instead, we’ll focus on three that illustrate interesting techniques I discovered while building the demo.

In adjust_price_to_99_cents, notice how we can use json_value to access the value of any property in the JSON document using an appropriate JSON path expression. Here we’re accessing the price to pass it to another function that computes the closest 99-cent price. If the l_new_price is different from the l_price then we use the json_transform function to update the value of the price property.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- Adjust price to nearest *.99
--------------------------------------------------------------
procedure adjust_price_to_99_cents(p_json in out nocopy json)
is
    l_price     eba_demo_things.price%type;
    l_new_price eba_demo_things.price%type;
begin
    l_price     := json_value(p_json,'$.price');
    l_new_price := eba_demo_thing.closest_ninety_nine_cent(l_price);
    if l_price != l_new_price then
        select json_transform(p_json,
                   set '$.price' = l_new_price)
          into p_json;
    end if;
end adjust_price_to_99_cents;

In the adjust_missing_or_null_qty procedure we’re using a slightly more complex JSON path expression to update any stock array entries with a null or missing quantity property to set their quantity = 0. These examples only scratch the surface of what json_transform can do. Please see the documentation to learn more about its abilities to easily make multiple changes to a JSON document with a compact, familiar syntax.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- Adjust missing or null quantities to zero
--------------------------------------------------------------
procedure adjust_missing_or_null_qty(p_json in out nocopy json)
is
begin
    select json_transform(p_json,
               set '$.stock[*]?(@.quantity == null || !exists(@.quantity)).quantity' = 0)
      into p_json;
end adjust_missing_or_null_qty;
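
As a small taste of combining several operations, a single json_transform call can both update one property and remove another. This is a made-up fragment for illustration, not part of the sample app:

declare
    l_json  json;
    l_price number;
begin
    select json('{"price":22.45,"description":"placeholder"}')
      into l_json from dual;
    -- One json_transform call: change one property and drop another
    select json_transform(l_json,
               set    '$.price' = 21.99,
               remove '$.description')
      into l_json from dual;
    l_price := json_value(l_json,'$.price');
    dbms_output.put_line('New price: ' || l_price);
end;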

The last interesting thing I’ll highlight is the ability for json_value to return values into user-defined types, including collection types. This makes processing JSON data a lot more natural for PL/SQL developers. In the validate_warehouse_and_qty procedure, notice I’ve defined a t_stock record type and a t_stock_tab type that’s a table of t_stock records. With the new support in 23c we can use the returning t_stock_tab clause of the json_value function to easily assign the array of stock objects to the l_stock_tab variable of the same t_stock_tab type. Then, we can use a familiar PL/SQL loop to process the results. Here I’m using it to validate the quantity and warehouse values of each stock array entry.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- Validate warehouse and quantity values
--------------------------------------------------------------
procedure validate_warehouse_and_qty(p_json in json)
is
    type t_stock is record(
        warehouse   eba_demo_warehouse.name%type,
        warehouseId eba_demo_thing_stock.warehouse%type,
        quantity    eba_demo_thing_stock.quantity%type
    );
    type t_stock_tab is table of t_stock;
    l_stock_tab t_stock_tab;
begin
    l_stock_tab := json_value(p_json,'$.stock'
                       returning t_stock_tab error on error);
    for j in 1..l_stock_tab.count loop
        if not eba_demo_thing.validate_quantity(
                   l_stock_tab(j).quantity) then
            error('INVALID_QUANTITY');
        end if;
        if not eba_demo_thing.validate_warehouse(
                   l_stock_tab(j).warehouse,
                   l_stock_tab(j).warehouseid) then
            error('INVALID_WAREHOUSE');
        end if;
    end loop;
end validate_warehouse_and_qty;

Examining After Insert/Update Logic

In the after_insert_or_update procedure, we consolidate the warehouses and, if we’re performing an insert operation, default any missing warehouses. Here again I took advantage of JSON/Relational duality, since I’m reusing shared business logic from the eba_demo_thing package that I wrote using familiar relational table processing code.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- AFTER INSERT OR UPDATE ON Thing business logic
--------------------------------------------------------------
procedure after_insert_or_update(
    p_id        in number,
    p_operation    t_operation)
is
begin
    eba_demo_thing.consolidate_thing_warehouses(p_id);
    if p_operation = c_operation_insert then
        eba_demo_thing.default_missing_warehouses(p_id);
    end if;
end after_insert_or_update;

For example, in default_missing_warehouses I found it easiest to get the job done using a merge into statement with a when not matched clause. With JSON/Relational duality, you can use the right tool for the job without worrying about keeping the JSON and relational worlds in sync.

-- PACKAGE: eba_demo_thing
--------------------------------------------------------------
-- Default any missing warehouses for a thing with zero stock
--------------------------------------------------------------
procedure default_missing_warehouses(p_id in number)
is
begin
    merge into eba_demo_thing_stock tgt
    using (select id from eba_demo_warehouse) src
       on (src.id = tgt.warehouse and tgt.thing_id = p_id)
     when not matched then
          insert (thing_id, warehouse, quantity)
          values (p_id, src.id, 0);
end default_missing_warehouses;

Sharing Both Logic and Error Messages

As shown below, the APEX application defines translatable error messages as Shared Component text messages.

Translatable text messages used to inform end-users about business logic errors

In the context of the Thing page in the APEX app, the validations reference these messages using the built-in substitution variable syntax &APP_TEXT$MESSAGE_KEY. with the prefix APP_TEXT$ followed by the name of the translatable text. For example, as shown below the Validate Category validation references &APP_TEXT$INVALID_CATEGORY. in the Error Message setting.

Using APP_TEXT$ substitution string to reference INVALID_CATEGORY text message

I wanted all the JSON API business logic to use the same text messages, so I implemented the error() procedure below to let the eba_demo_thing_json package code get the job done. By setting the workspace name, it establishes the right context to reference any error message by key for a given app id using the apex_lang.message function.

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- Raise error based on translatable message using message key
--------------------------------------------------------------
procedure error(
    p_key in varchar2,
    p0    in varchar2 default null)
is
begin
    apex_util.set_workspace(c_workspace);
    raise_application_error(-20001,
        apex_lang.message(p_name           => p_key,
                          p0               => p0,
                          p_application_id => c_app_id));
end error;

Then, throughout eba_demo_thing_json, the code calls this error() procedure. One example is the validate_category procedure below:

-- PACKAGE: eba_demo_thing_json
--------------------------------------------------------------
-- Validate category
--------------------------------------------------------------
procedure validate_category(
    p_json in json)
is
begin
    if not eba_demo_thing.validate_category(
               json_value(p_json,'$.category'),
               null) then
        error('INVALID_CATEGORY');
    end if;
end validate_category;

Pagination, Filtering & Total Count in SQL

For my last “trick”, I found I needed to implement pagination and searching functionality for the GET /thing/ endpoint. My first instinct was to define the GET /thing/ handler as an ORDS Collection Query type handler with the query:

select data
from things_dv

However, this setup returned the data in a way that was different from the AutoREST-enabled duality view’s GET /things_dv/ endpoint we saw earlier. In particular, each JSON document is understandably presented as the value of the data column in the query result like this:

{
    "items" : [
        {"data":{"_id":1, ... }},
        {"data":{"_id":2, ... }},
        {"data":{"_id":3, ... }}
    ],
    "links" : [
        :
    ]
}

After checking with colleagues on the ORDS team to ask whether there was some declarative way to suppress the data column’s appearance in the results, I learned there isn’t currently a way to achieve that. This makes sense, since AutoREST is optimized for ease of use: when you enable a table, view, or duality view for AutoREST access, you instantly get a rich set of filtering, pagination, and CRUD capabilities. However, once I needed to route the POST, PUT, and DELETE operations through my own PL/SQL package API to enforce app-specific business logic, I learned it wasn’t yet possible to take advantage of the AutoREST functionality for only the GET operation.

In the end, I found a SQL-only solution to achieve a simple set of pagination and filtering features for my GET handler, implemented in the get_things function of the eba_demo_thing_json package:

---------------------------------------
-- Get Things with optional search
---------------------------------------
function get_things(
    p_offset in number   default null,
    p_limit  in number   default null,
    p_search in varchar2 default null)
    return json;

After creating the thin wrapper procedure in eba_demo_thing_ords, I defined the GET /thing/ handler as shown below, with its skip, pageSize, and search parameters.

Configuring the GET /thing/ handler
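As a rough idea of the shape that thin wrapper could take, here is a minimal sketch that hands the JSON document produced by get_things to the OWA output buffer. The procedure body, chunked output, and content-type handling shown here are my assumptions rather than the sample’s exact code:

-- Hypothetical sketch of a thin ORDS wrapper procedure; the actual
-- eba_demo_thing_ords code in the sample may differ.
procedure get_things(
    p_offset in number   default null,
    p_limit  in number   default null,
    p_search in varchar2 default null)
is
    l_json json;
    l_doc  clob;
    l_pos  pls_integer := 1;
begin
    l_json := eba_demo_thing_json.get_things(
                  p_offset => p_offset,
                  p_limit  => p_limit,
                  p_search => p_search);
    -- serialize the JSON type value into a CLOB
    select json_serialize(l_json returning clob)
      into l_doc
      from dual;
    -- write the document to the response body in chunks
    owa_util.mime_header('application/json', true);
    while l_pos <= dbms_lob.getlength(l_doc) loop
        htp.prn(dbms_lob.substr(l_doc, 8000, l_pos));
        l_pos := l_pos + 8000;
    end loop;
end get_things;

The GET handler’s PL/SQL block can then simply call eba_demo_thing_ords.get_things(:skip, :pageSize, :search).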

The get_things function body is below. It returns the JSON document created by a SQL statement whose three parts include:

  1. Common table expression x to select thing ids based on an optional p_search string (matching on name and description properties) and compute total rows
  2. Common table expression y to select up to p_limit rows from things_dv using the ids x identifies and also determine has_more status (true/false)
  3. Main query that constructs JSON document with an items array of thing duality view documents from y, adding in skip, pageSize, hasMore, and totalRows properties.
-------------------------------------------------
-- Get Things with optional search
-------------------------------------------------
function get_things(
    p_offset in number   default null,
    p_limit  in number   default null,
    p_search in varchar2 default null)
    return json
is
    l_ret json;
begin
    with x as (
        select json_value(data,'$._id') as id,
               count(*) over () as total_rows
          from things_dv
         where ( p_search is null
                 or upper(json_value(data,'$.name'))
                    like '%'||upper(p_search)||'%'
                 or upper(json_value(data,'$.description'))
                    like '%'||upper(p_search)||'%' )
         order by json_value(data,'$._id.number()')
        offset nvl(p_offset,0) rows
         fetch next nvl(p_limit,3) + 1 rows only
    ),
    y as (
        select data,
               case
                  when (select count(*)
                          from x) > nvl(p_limit,3)
                  then true
                  else false
               end as has_more
          from things_dv
         where json_value(data,'$._id')
               in (select id from x)
         order by json_value(data,'$._id.number()')
         fetch first nvl(p_limit,3) rows only
    )
    select json_object(
               'items' value (
                   select json_arrayagg(data
                              order by json_value(data,
                                           '$._id.number()')
                              returning clob)
                     from y),
               'pageSize'  value nvl(p_limit,3),
               'skip'      value nvl(p_offset,0),
               'hasMore'   value (select has_more
                                    from y
                                   fetch first row only),
               'totalRows' value (select total_rows
                                    from x
                                   fetch first row only)
               returning clob)
      into l_ret;
    return l_ret;
end get_things;
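
Since JSON is a native SQL data type in 23c, a quick way to smoke-test the function is to call it directly from SQL, for example (an ad-hoc query for illustration, not part of the sample app):

-- Ad-hoc test of get_things directly from SQL (illustration only)
select json_serialize(
           eba_demo_thing_json.get_things(
               p_limit  => 2,
               p_search => 'world')
           returning clob pretty) as doc
  from dual;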

The result is that GET-ing a URL like:

http://phoenix:9094/ords/diveintoapex/thing/?pageSize=2

returns a response like:

{
    "items": [
        {
            "_id": 1,
            "name": "Echoes of Silence",
            "description": "Debut album featuring a mix of instrumental and electronic sounds by emerging artist Luna Wave.",
            "price": 15.99,
            "available": "2024-03-01T00:00:00",
            "categoryId": 2,
            "category": "MUSIC",
            "stock": [ ... ],
            "_metadata": {
                "etag": "FBC495C12B222462B92C943C65C1D951",
                "asof": "0000000000C68DB7"
            }
        },
        {
            "_id": 2,
            "name": "Vibrant Horizons",
            "description": "A compilation of live performances from the renowned jazz festival, capturing the essence of contemporary jazz.",
            "price": 22.99,
            "available": "2024-05-20T00:00:00",
            "categoryId": 2,
            "category": "MUSIC",
            "stock": [ ... ],
            "_metadata": {
                "etag": "DC3F16548E190F49130482301365EF1C",
                "asof": "0000000000C68DB7"
            }
        }
    ],
    "pageSize": 2,
    "skip": 0,
    "hasMore": true,
    "totalRows": 9
}

Adding a search parameter to narrow the results like this:

http://phoenix:9094/ords/diveintoapex/thing/?search=world&pageSize=2

yields a response like:

{
    "items": [
        {
            "_id": 5,
            "name": "The Hidden Truth",
            "description": "Documentary uncovering the mysteries of the ancient world, narrated by acclaimed historian Dr. Emily Santos.",
            "price": 9.99,
            "available": "2024-07-22T00:00:00",
            "categoryId": 3,
            "category": "MOVIES",
            "stock": [ ... ],
            "_metadata": {
                "etag": "7674537EF60C2FB955256B1B3A1AA824",
                "asof": "0000000000C68D4B"
            }
        },
        {
            "_id": 6,
            "name": "Comedy Nights",
            "description": "Box set of the most hilarious stand-up comedy specials from the decade, featuring top comedians worldwide.",
            "price": 19.99,
            "available": "2024-08-05T00:00:00",
            "categoryId": 3,
            "category": "MOVIES",
            "stock": [ ... ],
            "_metadata": {
                "etag": "B6CBA70E784BFA0810D87A0E4CF83A72",
                "asof": "0000000000C68D4B"
            }
        }
    ],
    "pageSize": 2,
    "skip": 0,
    "hasMore": false,
    "totalRows": 2
}

Installing the Sample

Since the JSON Relational Duality View feature is new in Oracle 23c, running the sample app yourself requires downloading and using Oracle 23c Free. The sample app was built with APEX 23.2, so you will also need to download the latest APEX release and install it into your Oracle 23c database. Using the latest available ORDS release is always recommended as well. This is the combination I’ve tested and documented here. Then, you can download the sample app from here and import it into your APEX 23.2 workspace.

In addition to these prerequisites, the only other installation step is to edit the source of the eba_demo_thing_json package to update the values of the two constants c_workspace and c_app_id to reflect the name of the workspace into which you install the sample and the application id the sample gets assigned in your workspace:

create or replace package body eba_demo_thing_json as
    -- TODO: Change the values of these two constants
    -- ~~~    to reflect your workspace and app id
    c_workspace constant varchar2(50) := 'DIVEINTOAPEX';
    c_app_id    constant number       := 100;
    :
end eba_demo_thing_json;

Hope you find the example useful!

Dipping My Toe into Workflow

APEX workflow is a new feature I was keen to experiment with. In a familiar visual designer, you automate a business process with a flow diagram. Its sequence of activities can include business logic, conditional branching, local or remote API calls, time delays, email, and push notifications. It’s also easy to wait for approvals or action tasks end users need to complete. Should your needs go beyond the core activity set, you can develop custom ones or use community-created activities using process plug-ins. I’m eager to share my initial experiences to offer insights I gathered along the way. You’ll find links at the end to download the sample app and to read additional APEX Workflow articles of interest by my colleagues.

(Technical Note: to see a larger version of any figure, right click on it and choose Open Image in New Tab from the context menu…)

Calculated Simplicity

To learn APEX workflow, I thought of the simplest process a math major’s mind could muster: adding two numbers. “Hey! That’s not a business process!” you rightly note, but humor me. This approach helped me understand all the “moving parts” first before embellishing my simple calculator workflow to include decisions and notifications. I decided to start small.

I created an eba_demo_simplecalc package and wrote the following compute() function to represent some parameterized business logic that my app needs to perform. It accepts two number operands and the operation to perform on them (plus, minus, multiplication, or division):

-- In package eba_demo_simplecalc
function compute(
    p_operand1  in number,
    p_operation in varchar2,
    p_operand2  in number)
    return         number
is
begin
    return case p_operation
                when '+' then p_operand1 + p_operand2
                when '-' then p_operand1 - p_operand2
                when '*' then p_operand1 * p_operand2
                when '/' then p_operand1 / p_operand2
           end;
end compute;

Then I built the simple Calculate Without Workflow page below as a baseline calculator page that does not use workflow at all. It lets a user enter two number values and has a select list with the four available operations. A display-only page item shows the computed result after the user clicks the (Calculate) button.

Calculate Without Workflow page invokes parameterized business logic and shows the result

As shown below, this simple page has a single entry in the Processing section that invokes the compute() function using an Invoke API page process. It has a Server-side Condition to run when the Calculate button is pressed, and I configured its three parameters to get their values from the corresponding page items and return the function result into the display only page item. I quickly had a working calculator page that let a user add, subtract, multiply, or divide any two numbers. In this baseline page, no data is stored in any table. The operands, operation, and computed result only live in APEX session state.

Invoke API page process calls the COMPUTE() function, passing page item values as parameters

Reusing Logic in a Business Process

Next, I learned how to reuse my compute() business logic from a workflow. Under Shared Components > Workflows, I clicked (Create) to get started. I was already familiar with the Invoke API page process, so using the Property Editor I instinctively changed the type of the default activity in the visual designer from Execute Code to an Invoke API type activity instead. The Workflow Designer looks and works like Page Designer, so I already knew how to use it. Next, I needed to configure the compute() function’s inbound parameters and its function return, so that made me think, “Where will these values come from and where will the computed result go?”

Workflow with three input parameters, one result variable, and a Calculate Result activity

A workflow is a “headless” process that the APEX engine runs in the background using a session that’s distinct from those of end users accessing pages. So, the workflow has no access to pages or page items. Instead, it accepts initial values at start time using parameters. These values are read-only. While the workflow is running – which can last for a few seconds or many weeks depending on the business process – APEX automatically manages the storage and retrieval of workflow variable values you can change over time. As shown in the figure above, my Simple Calc (Only Params) workflow accepts three parameters to let the workflow initiator pass in two number operands and the operation. It also has one variable to store the computed result of the calculation. Both parameters and variables have a static ID that you use to reference their value as substitution strings or bind variables. They also have a display label that shows in the navigator tree to improve readability. Here, I opted for parameter static IDs of P_OPERAND1, P_OPERATION, and P_OPERAND2 and a variable static ID of V_RESULT. The P_ and V_ prefixes are not mandatory, but they helped me remember which names were parameters and which were variables.

The workflow Title property – which determines how end-users will see the workflow instance in the console – can reference parameters as substitution strings to include dynamic info. Therefore, I set my Title to the string below:

Calculate &P_OPERAND1. &P_OPERATION. &P_OPERAND2. (Using Only Params)

Workflow Versions

As business requirements change, a process can evolve over time. APEX supports this using workflow versions. When you create a new workflow the Workflow Designer also creates the first version for you. While the name can be anything meaningful, I just named mine “1.0“. Notice it shows in the workflow navigator tree as “1.0 [Dev]” to indicate this version is in development mode.

Very observant readers may have noticed my workflow’s Result variable is actually indented under the “1.0 [Dev]” version node in the navigator tree. Since the set of variables can change over time as well, they are part of the workflow definition that’s included in the workflow version. This is why they’re called “version variables.”

While building your app, you can use your workflow in development mode when running from the APEX Builder. However, to run the app outside of the APEX Builder or to deploy it to another environment, you will need to activate the development version of all workflows in use before end-users can use them. For simplicity, in this article we’ll just keep all the workflows to a single development-mode version and always run the app from the builder.

Workflow Definition Versus Instance

The workflow version diagram represents the definition of a business process. It starts with a start activity and shows how to continue on to other activities until eventually concluding in an end activity. It will always have a single starting point, but may allow multiple possible paths to reach an end activity. Think of it like a recipe, blueprint, or template that describes how each particular concrete example of the workflow will behave.

A workflow instance is a particular example of the workflow definition that your application starts, usually by passing in one or more parameter values that provide the context information that makes this example unique. The APEX workflow designer lets you specify the definition and the Workflow page process (or APEX_WORKFLOW PL/SQL API) lets your application start a new workflow instance that will follow the activities you laid out in the designer. Now let’s put these abstract ideas into practice in this example. To perform a calculation using my Simple Calc (Only Params) workflow definition, I needed to create a page that uses the Workflow page process to start a new instance of the workflow to perform a specific calculation.

Calculating Using the Workflow

I copied my original No Workflow page to a new Workflow Params Only page and changed the page process type from Invoke API to Workflow as shown below. In the Settings section of the Property Editor, I set the Type of workflow action to Start since clicking the (Calculate Using Workflow) button should start the workflow. I chose the Definition of Simple Calc (Only Params), the name of the workflow I created above. Finally, I configured each of the three workflow parameters to get its value from the corresponding page item.

Using a Workflow page process of type Start to start a workflow

Running the page, entering two operands and an operation, and clicking the (Calculate Using Workflow) button, it appears that nothing happens, as shown below. However, behind the scenes the workflow has started. The result is not immediately visible in the page as it was in my original No Workflow page. The workflow is now responsible for computing the result and it is doing that in the background. So we need a different way to see the value of the result, since it lives inside the workflow version variable V_RESULT with display label Result. In the next section we’ll create a workflow console page to inspect the workflow instance and see the result of the calculation in the Result variable value.

After submitting the calculation for 4587 x 4 using a workflow to compute the result

Reviewing Results in the Console

A Workflow Console page lets you see the progress of workflows. You create one using the Create Page wizard by choosing the Workflow Console page type. The wizard will create two pages: one lists the workflows, and the other shows the details of a selected workflow in the list. As shown below, I named the list page Workflows Started and the form page Workflow Detail. The Report Context setting determines whether the list page shows the current user the workflows they initiated, those in which they are a participant, or the ones they can administer. I chose “Initiated by Me” so it will show all the workflow instances I start.

Creating a workflow console list and details page to show workflows the current user initiated

Refreshing the browser and navigating to this new Workflows Started page, I see the list below containing the one workflow instance I have started so far. We can see from the badge that it has already completed. To see the workflow instance’s details, we just click on its title, whose text was computed from the Title property we configured above using substitution strings.

Console page showing Simple Calc (Only Params) workflow we started by clicking (Calculate)

As shown below, the Workflow Detail modal drawer page appears showing the workflow activities completed and the value of the Result variable.

Inspecting the result of the calculation in the workflow details page

Seeing Workflows for One App or All?

One small change I made to the default Workflows Started page was to adjust the SQL query of its Initiated by Me region of type Content Row so that it only showed the workflow instances related to the current application. By default it shows workflows from all apps in the workspace, but that’s not what I needed for this simple learning app, so I tweaked the query to look like the following to pass the :APP_ID bind variable as the value of the p_application_id parameter to the pipelined table function that returns the workflow instance information:

select ..., 
       title,
       initiator,
       ...
  from table ( apex_workflow.get_workflows (
                   p_context => 'INITIATED_BY_ME',
                   -- Leave out this param to show workflows
                   -- from all apps in the workspace
                   p_application_id => :APP_ID
                   ) )

Exploring a Faulted Workflow

There’s not much that can go wrong when performing a simple math calculation, but there is one thing: dividing by zero! I wanted to explore what would happen if I asked my simple workflow-based calculator to perform a calculation that would generate an error. I also wanted to learn how the workflow administrator could fix a faulted workflow in the console by editing the problematic variable value and retrying the faulted activity. Recall from above that the value of a workflow parameter like P_OPERAND2 is read-only. So I needed to slightly change the workflow to:

  1. Add a second version variable V_OPERAND2 of type NUMBER,
  2. Initialize its default value from the value of the parameter P_OPERAND2, and
  3. Adjust the Invoke API activity to pass V_OPERAND2 to the compute() function

After doing this, I entered the values you see below to compute 5678 divided by zero (0) and clicked (Calculate Using Workflow).

Starting the workflow to generate an error by calculating 5678 divided by zero (0).

Again, it appears that nothing happens in the page, but as shown below the Workflows Started console page shows us what happened. The workflow started and faulted due to the divide-by-zero error.

Workflows Started page showing faulted workflow

Clicking on the title of the faulted workflow shows the details you see in the figure below. We see the error that caused the fault, and can edit the value of the V_OPERAND2 variable (with label Operand2) by clicking the Edit link to the right of the variable.

Workflow details for a faulted workflow due to divide by zero error

We can fix the error by entering a corrected value for the workflow version variable. For example, let’s enter the value 1 instead and click (Apply Changes) as shown below…

Fixing a faulted workflow by editing the value of a version variable

Lastly, we can click (Retry) as shown below to have the APEX engine try processing the faulted activity again…

Retrying a faulted workflow after correcting a version variable value

The workflow now completes successfully and we can see the result by clicking on the workflow title in the Workflows Started page as shown below.

Workflow completed successfully after retry

Associating a Workflow to a Row by PK

In the previous workflow, we passed in all of the initial information required using parameters. However, very often the information a workflow needs is already contained in the columns of a table in your app schema. In those cases, the only parameter value you may need to pass is a primary key from the table in question so it’s clear to which row the new workflow instance is associated.

In fact, this use case is so common that you don’t even need to create your own workflow parameter to handle it. Every APEX workflow has an optional, predefined parameter named APEX$WORKFLOW_DETAIL_PK whose display label is Details Primary Key. I set out next to investigate using this feature with my simple calculator workflow.

The first step was creating an EBA_DEMO_CALCULATION table shown below to store the calculation rows to which each workflow instance will be associated by primary key. The table has a primary key (ID) and then columns for the two operands, the operation, and the result of the calculation.

APEX Quick SQL diagram of the EBA_DEMO_CALCULATION table

Next I created a Calculations Interactive Report page with an associated form page named Calculation. These pages let me quickly list, create, edit, and delete calculation rows. Then I created the Simple Calc (Details PK) workflow you see below. To associate it with the EBA_DEMO_CALCULATION table, as shown below I clicked on the “1.0 [Dev]” version node in the navigator tree on the left, and then in the Additional Data section of the Property Editor, set the table name and configured the primary key column (ID) to use.

Configuring Simple Calc (Details PK) to associate it with EBA_DEMO_CALCULATION table

This table name configuration is one of the two required ingredients to make a table-associated workflow function correctly. The other important step occurs in the configuration of the page process that starts this workflow.

In the Calculation form page, the wizard has already created a page process of type Form – Automatic Row Processing (DML) to save the calculation row, and a Close Dialog page process to dismiss the dialog and return to the calling page. I added a new Workflow page process just between these two. As shown below, I configured it to start workflow Simple Calc (Details PK), but this time I’ve added one additional configuration value that makes a big difference. I set the Details Primary Key Item to P4_ID. This is the name of the page item that will provide the primary key value of the EBA_DEMO_CALCULATION table row to which the new workflow instance will be associated. The P4_ID page item is the primary key in the form region that creates, edits, or deletes a row in the EBA_DEMO_CALCULATION table. However, since its item type is Hidden it does not appear on the visual layout.

Set Details Primary Key Item to provide a value at workflow start time for APEX$WORKFLOW_DETAIL_PK

Now we have a workflow with a Details Primary Key Item value being passed in for the built-in APEX$WORKFLOW_DETAIL_PK parameter, and we’ve configured the Additional Data on the workflow version so APEX knows to which column in what table this primary key parameter value applies. This combination produces an interesting benefit. The APEX engine will automatically make all column values available by name for the appropriate EBA_DEMO_CALCULATION table row for the duration of the workflow. This lets us easily refer to the value of the first operand using its OPERAND1 column name, the second operand using OPERAND2 and the operation to perform using the column name OPERATION. As shown below, I’ve configured the parameters of the Calculate Result Invoke API activity in the workflow using these column name values.

Using details table column names for EBA_DEMO_CALCULATION row associated with the workflow

The second activity Store Result is another Invoke API activity that calls the UPDATE_RESULT procedure in the EBA_DEMO_SIMPLECALC package to store the computed result in V_RESULT back into the RESULT column of the appropriate EBA_DEMO_CALCULATION table row. For procedure parameter p_calculation_id it passes the item value APEX$WORKFLOW_DETAIL_PK and for parameter p_result it uses the V_RESULT item’s value.
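
That procedure boils down to a simple relational UPDATE; as a minimal sketch (the actual body in eba_demo_simplecalc may differ slightly):

-- Sketch of eba_demo_simplecalc.update_result; the sample's
-- actual implementation may differ in details
procedure update_result(
    p_calculation_id in number,
    p_result         in number)
is
begin
    update eba_demo_calculation
       set result = p_result
     where id = p_calculation_id;
end update_result;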

Also notice that I introduced a single P_SUBJECT parameter (with label Subject) to let the page that starts the workflow pass in the entire subject line value. I did this since the workflow title cannot reference workflow version-level values like variables or details table column names as substitution variables. By introducing the P_SUBJECT parameter and configuring the workflow title to be &P_SUBJECT. I can easily set the workflow instance subject line to a useful value, including dynamic contents, in the page that starts the workflow instance to perform the calculation.

The figure below shows how the value of the P_SUBJECT parameter is set in the Calculation page’s workflow start page process. It includes the values of page items to achieve an appropriate subject, which the new workflow instance uses as its title.

Configuring dynamic content of the workflow P_SUBJECT parameter using page items values

I then used the new Calculations interactive report page and clicked (Create) to open the Calculation form to add a new calculation row. I entered the values shown below and clicked (Calculate Using Workflow). This inserted the new EBA_DEMO_CALCULATION row, started the workflow passing in the system-assigned primary key in P4_ID as the details primary key value, and closed the dialog.

Creating a new calculation row, with result computed by a workflow using a Details PK

When the dialog closed, I saw the page below. The workflow ran so quickly that it had already computed the result of 80 and stored it in the EBA_DEMO_CALCULATION row’s RESULT column, using the details primary key value to identify which calculation row to update.

Calculations interactive report page after first Calculation row got created

Checking the Workflows Started page, I could also inspect the details of the workflow instance using that approach as shown below. I added the Details Primary Key collapsible region to the Workflow Details page to make it easier to see which details primary key value is associated with the current workflow instance being shown. As we see, the system assigned a primary key value of 1 to the first EBA_DEMO_CALCULATION row created. Our page passed it into the APEX$WORKFLOW_DETAIL_PK built-in parameter at workflow start time. It also passed in the workflow’s P_SUBJECT parameter, whose value included the substitution variable &P4_ID., so the primary key value 1 is also reflected in the title of the workflow instance.

Viewing the completed workflow instance that used a details primary key for the calculation

Adding Some Fancier Features

With these basic examples under my belt, I spent some time creating a fancier version of the details-primary-key workflow above. In the sample app it’s called Fancier Calc (Details PK) and it appears below. It expands on the previous example by adding conditional notification of the result by email and/or by push notification. This required adding a few new parameters to let the page starting the workflow pass in: the current username, an email address to notify, a number of minutes to delay before notifying, and a boolean parameter to control whether to notify the current user using push notification.

Gettin’ fancy wit it: adding conditional email, push notification, and wait to a workflow

It uses several new Switch activities to change the route of the workflow depending on various conditional evaluations. The Any Notifications? switch uses a PL/SQL expression to test whether :p_email_to_notify is not null or whether :p_push_notify_current_user is equal to ‘Y‘. The Any Wait? switch checks if :p_minutes_delay_before_notify is non-null and greater than zero (0). The Notify Email? switch tests if :p_email_to_notify is not null, and the Notify Push? switch evaluates if :p_current_username is not null and :p_push_notify_current_user equals ‘Y‘.

Notice that even though this last parameter is defined to be of type BOOLEAN, when referencing its value in PL/SQL or as a substitution parameter it will use the implicit or explicitly configured True Value and False Value as text. I chose to explicitly configure these values to ‘Y‘ and ‘N‘ respectively in the Property Editor for the Push Notify Current User parameter. I configured the Send Email activity using the &P_EMAIL_TO_NOTIFY. email address as the recipient, and the Push Notify activity using the &P_CURRENT_USERNAME. as the user to notify. Also notice the transition lines that emerge from the Switch activities have names on them. I chose to use “Yes” and “No” for the names of all of them, but of course you can use any descriptive names that make your workflow easier to maintain.
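
Putting those conditions into code form, the PL/SQL Expressions behind the four switches look roughly like the following. I’ve reconstructed them from the description above, so the sample’s exact expressions may differ:

-- Reconstructed from the description above; exact expressions may differ

-- Any Notifications?
:p_email_to_notify is not null or :p_push_notify_current_user = 'Y'

-- Any Wait?
:p_minutes_delay_before_notify is not null
    and :p_minutes_delay_before_notify > 0

-- Notify Email?
:p_email_to_notify is not null

-- Notify Push?
:p_current_username is not null and :p_push_notify_current_user = 'Y'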

Consequently, I also enhanced the Calculation page to let the user choose between the simple and fancier workflows, as well as provide additional values if they choose the fancier version, as shown below. Hopefully, if you try out the sample app, you can experiment with the fancier option to send yourself email and push notifications.

Enhanced Calculation page allowing either simple or fancier workflow choice

NOTE: If you install the sample app and try it yourself, make sure to (1) regenerate the Push Notification Credentials after importing the app and (2) opt-in to receive push notifications in the user settings page of the runtime application. You’ll find the (Regenerate Credentials) button in APEX Builder’s Shared Components > Progressive Web App page. The push notification opt-in is in the Settings page you can access in the dropdown menu under the logged-in username at runtime.

Why Pass in Current Username?

You may ask yourself why I needed to pass in the value of the current username as a workflow parameter, and that would be an excellent question! I originally tried to get the Push Notification activity working by simply referencing the familiar &APP_USER. substitution variable. But I frustratingly found it would never deliver the notification as expected. After reaching out to colleagues for advice, they reminded me the workflow is happening in the background and is always operating as the nobody user, which is the value of &APP_USER. when no user is logged in.

Therefore, my experiments to send a push notification to the nobody user were ignored since no user named nobody was enrolled in receiving push notifications. By passing in the p_current_username parameter value from the Calculation page, its value at workflow start time gets evaluated in the context of the page starting the workflow. This gives the workflow parameter the correct value of the end-user logged in using the page. Subsequently, the Push Notification activity’s use of &P_CURRENT_USERNAME. as the recipient of the push notification ensures it gets delivered to the logged-in user accessing the page that starts the workflow, provided they have opted in to receive notifications.

NOTE: I subsequently realized I could have avoided passing in the current username in my simple example app by instead using the built-in workflow substitution string APEX$WORKFLOW_INITIATOR as the push notification recipient username. However, I decided to leave the sample the way it is in case the user to be notified by push notification is not the same username that starts the workflow in the application. Just be aware that APEX already keeps track of the username of the workflow initiator using this special item name.

Using SQL Query for Additional Data

In the Simple Calc (Details PK) workflow discussed earlier, we configured the Additional Data section of the workflow version to name the EBA_DEMO_CALCULATION table and its ID primary key column. This let the automatic row lookup work based on the Details Primary Key Item value passed in the APEX$WORKFLOW_DETAIL_PK parameter by the Workflow page process that started the flow. In turn, that allowed us to reference any of the columns of the EBA_DEMO_CALCULATION table by name anywhere in the workflow as needed. The Fancier Calc (Details PK) workflow shows off another useful alternative: using a SQL Query instead of a table name for Additional Data.

It specifies the following query that references the value of :apex$workflow_detail_pk in its WHERE clause. Since the automatic lookup by primary key is not happening in this situation, the developer is responsible for performing the primary key lookup in their query using this built-in bind variable.

select id,
       operand1,
       operation,
       operand2,
       case operation
        when '+' then '+'
        when '-' then '-'
        when '*' then '×'
        when '/' then '÷'
       end as operation_translated
  from eba_demo_calculation
  where id = :apex$workflow_detail_pk

Just like what happened when using the table name and primary key column approach, all of the SELECT list column names in the Additional Data query are made available to reference throughout the workflow wherever needed. In the sample app, the Send Email activity references the value of the OPERATION_TRANSLATED column above by using &OPERATION_TRANSLATED. This lets your workflow perform a join or use the power of SQL to bring in whatever useful data might be relevant to every activity in the workflow. Each activity in the workflow also has an optional Additional Data query you can use to retrieve information that’s only relevant to a single activity.

NOTE: When using the SQL Query style Additional Data, if your query inadvertently retrieves no rows, then your workflow will fault with a “user-defined error” message, so be aware that this is what that error means. One way I encountered this error while creating the sample app was by accidentally using a bind variable named :apex$workflow_details_pk (with a plural “details” in the name when it needed to use the singular “detail”). Since no bind variable by this name-with-a-typo exists, its value evaluated to null and that caused my Additional Data SQL query to return no rows.

Cleaning Up Dev-Mode Workflow Instances

The apex_workflow package comes with a handy procedure called remove_development_instances() that lets you delete workflow instances you created during your development. It accepts an optional p_application_id parameter to provide the ID of the app whose dev-mode workflow instances you want to delete. If you don’t pass an app id, it cleans up the dev-mode workflow instances for all applications in the workspace. I added a button shown below to the Workflows Started page in the sample app that invokes this new procedure, passing in the :app_id bind variable to affect only the workflow instances for the current sample application.

Handy button in the sample to clear development-mode workflow instances for the current app
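
Behind the button, the page process is a one-liner; assuming an Execute Code process (the sample may wire it up slightly differently), it looks like this:

-- Page process behind the cleanup button (assuming Execute Code)
begin
    apex_workflow.remove_development_instances(
        p_application_id => :APP_ID);
end;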

On the Calculations list page, as shown below I added another button that deletes the rows from the EBA_DEMO_CALCULATION table that are not referenced by a workflow.

Useful button to remove calculation rows not referenced by a workflow

Using the combination of these two buttons, you can first delete the dev-mode workflow instances you have accumulated while trying out the sample. Then you can click the other button to remove the calculation rows that no longer reference a workflow (since you just deleted them in the step before!).

NOTE: While the simple example workflows in this article are not using approval or action tasks, one interesting corollary of the new remove_development_instances() procedure is that it will also clean up any approval or action task instances that were initiated by the deleted dev-mode workflows.

Tips on Moving & Connecting Activities

While exploring the workflow capabilities to prepare this article, I gradually became more confident in how to achieve a nice-looking diagram with neatly aligned activities and transition lines that looked how I wanted them to. It took time to discover the way that felt most productive, so I wanted to share what I learned here in case it could save you some time. The video below puts the following tips into action, showing how to:

  • Drag and drop an activity from the palette
  • Grab the arrowhead to reattach a transition
  • Use anchor dots to position a line
  • Add and remove “elbow” points to bend lines
  • Create and connect activities in one gesture
  • Multi-select to move groups of activities
  • Drop a new activity between two others
Video with tips to achieve clean layouts with Oracle APEX Workflow Diagram Builder

Creating a Custom Activity

As a last exercise, I wanted to try extending the core set of APEX workflow activities to add a custom one into the mix. Sticking with the simple calculator theme, my goal was to create a Simple Calculator activity that any APEX Workflow could use. I started by creating a process type plug-in, and made sure to check the Workflow Activity checkbox in the Supported for section. I defined four plug-in attributes to capture the (page or workflow version) item names to provide Value1, Operation, Value2, and Result. To make it self-contained, I copied the source of the compute() function from the beginning of this article into the plug-in’s PL/SQL code area, and also added the following function whose name I configured as the plug-in’s Execution Function Name:

-- function compute() removed for brevity
-- see eba_demo_simplecalc package for source

function simple_calculator (
    p_process in apex_plugin.t_process,
    p_plugin  in apex_plugin.t_plugin)
    return       apex_plugin.t_process_exec_result 
is
    c_value1      number        := nv(p_process.attribute_01);
    c_operation   varchar2(200) :=  v(p_process.attribute_02);
    c_value2      number        := nv(p_process.attribute_03);
    c_result_name varchar2(200) :=    p_process.attribute_04;
    l_result apex_plugin.t_process_exec_result;
begin
    apex_session_state.set_value(
        c_result_name, 
        compute(c_value1,c_operation,c_value2));
    l_result.success_message := 'Success'; 
    return l_result;
end simple_calculator;

The code uses the v() and nv() functions to retrieve the string and number values of the items named by the first three custom attributes. From the fourth custom attribute, it gets the name of the item into which the result should be set. Finally, it uses the set_value() procedure in the apex_session_state package to set the value of the result item to the result returned from the compute() function.

With this simple custom activity plug-in ready, next I created the Simple Calc Using Process Plugin workflow you see below. Notice how the custom activity Simple Calculator automatically appears in the Activities Palette. To configure it, I used the Property Editor to assign the names of the workflow parameters to use for Value 1, Operation, and Value 2 attributes, and the name of the version variable to use to hold the result.

Workflow using a custom Simple Calculator activity we created above

A new Workflow Process Plugin page is a slight modification of the Workflow Params Only page we created earlier, but its Workflow page process starts the new Simple Calc Using Process Plugin workflow instead.

Sample page to create new workflow instance using the Simple Calc Using Process Plugin workflow

After trying out the new page, a quick check of the Workflows Started console page shows the results of all of our tests during this article.

Workflows Started page showing all workflows we tried in this article

Summary

Of course, you’d never really use APEX workflow just to add two numbers, but hopefully this simple exercise, which I found helpful for learning the basics, will also prove useful to you as you think about real-world business processes you can now automate with ease.

I encourage you to download the sample app from here and give it a spin. After importing the app, there are two additional setup steps to perform before the push notification will work. First, regenerate the Push Notification Credentials. In addition, ensure you have opted-in to receive push notifications in the user settings page of the runtime application. You’ll find the (Regenerate Credentials) button in APEX Builder’s Shared Components > Progressive Web App page. The push notification opt-in is in the Settings page you can access in the dropdown menu under the logged-in username at runtime. Also check that your mobile device or desktop is not set in “do not disturb” mode that suppresses delivery of push notifications. If it was in this mode, you’ll find your push notifications have quietly accumulated in the “notification center” on your device.😊

For additional reading on APEX Workflow including business-focused use cases involving approvals and tasks see my colleague Ananya Chatterjee’s blog series Simplify Business Process Management Using APEX Workflow and her article Multi Level Expense Approval using APEX Workflow. To learn more about APEX Workflow Development Lifecycle and Management check out my colleague Ralf Mueller’s article. For more background on end-user action tasks, see my colleague Richard Allen’s Using Action Tasks blog post. Many thanks to Ralf, Ananya, Richard, and Ottmar on my team who answered a lot of questions I had as I worked my way through this first encounter with APEX workflow.

Planning with Gantt & Calendar #JoelKallmanDay

Overview

Building an app to solve a personal need is a great learning opportunity. Sometimes I plan multi-day trips and must decide among overlapping events. The annual animation festival in Annecy, France, is one example. Hundreds of talks and screenings span various venues in a lovely little lakeside locale. Another is a periodic pilgrimage to New York City to see as many musicals and museums as I can fit in a week. An APEX app helps me plan my trip activities. This year I celebrate Joel Kallman Day by sharing my app and explaining the smorgasbord of APEX tricks I learned by building it.

(Technical Note: to see a larger version of any figure, right click on it and choose Open Image in New Tab from the context menu…)

Recurring Trips with Overlapping Events

Both the Annecy Festival and my New York City trips happen periodically during a particular year over a series of days, so a trip name and year represent each edition of a trip (e.g. New York City, 2023). Each edition has a start and end day and consists of events to attend that may repeat multiple times during the week. These different event runs might be in distinct venues, like a movie that shows in different cinemas around Annecy. In contrast, all the event runs might be in the same Broadway theatre. In the case of the Annecy Festival, films can be from a particular country, in an original language, and have subtitles in another language, so as shown below my data model also includes countries and languages.

Data model for Activity Planner application

Gantt Chart Showing Per-Day Show Runs

To easily see the overlapping events on each day of the trip, I use a day selector and Gantt chart as shown in the figure below. The query for the Gantt chart region includes columns DAY_START and DAY_END whose values come from combining the selected day with user preferences for day start time and day end time. The query also includes STARTS_AT and ENDS_AT datetime values for each event run that falls on the selected day. It concatenates the name of the event and the name of the venue into a NAME column that’s used as the task name. To suppress the display of task name along the left edge of each row, I selected the Attributes tab of the chart region, and set Show Row Axis to OFF.

Gantt Chart showing many overlapping event runs on Wednesday, September 20th

Day Selector’s Data-Driven Date Range

The EVENT_DAYS Shared Components List of Values (LOV) driving the day selector was an interesting query to write. In the EBA_DEMO_ANN_EDITIONS table, each trip edition has a FIRST_DAY and LAST_DAY for the consecutive sequence of days over which the trip edition takes place. I needed a query that returned all of the dates in order, starting with the first day and ending with the last day, for the current trip edition. The application item CURRENT_EDITION holds its ID value. The query ended up looking like this:

SELECT
  to_char(FIRST_DAY + LEVEL - 1,'fmDy DD Mon') AS DAY_NAME_IN_PERIOD,
  LEVEL AS DAY_NUMBER
FROM (SELECT FIRST_DAY, LAST_DAY
        FROM EBA_DEMO_ANN_EDITIONS
       WHERE ID = :CURRENT_EDITION)
CONNECT BY
  FIRST_DAY + LEVEL - 1 <= LAST_DAY
ORDER BY
  DAY_NUMBER ASC

This query drives the day selector below with an ordered list of formatted day names between start date and end date for the current trip edition:

Day selector for the current trip edition on the Schedule page

Stretching Chart to Fill the Viewport

I could have hard-coded the height of the Gantt chart region on the Attributes tab of the property editor, but that didn’t meet my needs. In the weeks leading up to a trip, I use the app on different desktop computers with differently-sized monitors as well as on my iPad, so I explored how the Gantt chart could stretch to use whatever vertical space is available.

After many failed experiments, I landed on the following technique. It uses the CSS calc() function to compute the value of the height property. First, I defined a CSS class on the page called vertical-stretch and assigned this class to the Gantt chart region. Then I defined the class in the page-level CSS > Inline settings using the syntax:

.vertical-stretch {
   height: calc(100vh - 17rem);
}

The calculation references the full (100%) viewport height by using 100vh and subtracts from that the height of the rest of the elements on the page above the Gantt chart. Using CSS rem units that represent the height (i.e. font-size) of the root element on the page, through trial and error I discovered 17rem closely matched the space used by the other content on the page. Applying the vertical-stretch CSS class to the Gantt chart region means, “Your height is the viewport space remaining after accounting for the vertical space occupied by the elements above the Gantt chart”. This approach made the app look exactly how I wanted on every monitor size and iPad where I use it.

When resizing the window interactively, the chart also reacts as expected as shown in this short video:

Editing Events by Clicking on a Bar

The Event page (7) in the app lets the user create, edit, or delete an event and all its related event runs. When looking at a particular day’s events in the Gantt chart on the Schedule page (14), I wanted clicking on any bar to open the event related to that particular event run. So I set out to find how to obtain the event run ID related to each bar in the Gantt chart. I started by configuring the Gantt chart’s Task Id property to get its value from the event run ID column in the region’s query.

Next, I hoped to find the value of this task ID somewhere in the HTML markup in the page. By right-clicking on a Gantt bar rectangle and choosing Inspect Element, the browser developer tools showed me that each bar was a Scalable Vector Graphics (SVG) group element <g> containing two <path> elements and a <text> element like this:

<!-- Example SVG group of shapes for Task Id 162 "bar" -->
<g id="_dvtActiveElement43100180_162">
  <path d="..." class="oj-gantt-task-backdrop"></path>
  <path d="..." class="oj-gantt-task oj-gantt-task-bar"></path>
  <text         class="oj-gantt-task-label" ... >
    Local Fauna Preview (Philippe Labaune Gallery)
  </text>
</g>

The <path> element containing the class="oj-gantt-task" looked like the right element to target for a click handler. So I defined a dynamic action to react to the Click event on the CSS class oj-gantt-task using the jQuery Selector with a dot before the class name as shown below:

Configuring a dynamic action on the Click event of the element with CSS class oj-gantt-task

I also noticed the Gantt Chart Task Id value was part of the value of the id attribute on the <path> element’s containing parent <g> element. For example, for an event run with ID = 162, the id attribute of the <g> parent element for the clicked-on bar had a value like “_dvtActiveElement43100180_162“. I learned from my colleagues that the dynamic action event handler can access the triggering <path> element using the expression this.triggeringElement. So, my dynamic action event handler passes the triggering element to an eventRunIdForClickedGanttBar() helper function in my page. It returns the Task Id value by parsing it out of the parent element’s id attribute like this:

function eventRunIdForClickedGanttBar(p_this) {
  return extractNumberFromUnderscoreString($(p_this).parent()[0].id);
}

The calling dynamic action event handler looks like this. It sets the value of a hidden page item P14_SELECTED_EVENT_RUN_ID to hold the event run ID that was clicked on, sets the Request to EDIT_EVENT_RUN, and submits the page.

const lTaskId = eventRunIdForClickedGanttBar(this.triggeringElement);
// Set the event run id into the hidden page item, and submit
// the page with the EDIT_EVENT_RUN request
apex.page.submit({
    request: "EDIT_EVENT_RUN",
    set: {
        "P14_SELECTED_EVENT_RUN_ID": lTaskId
    }
});

In the page’s Processing tab, an After Processing branch redirects to the Event page (7) when the Request = EDIT_EVENT_RUN, passing along this hidden page item value for the target page’s P7_EVENT_RUN_ID page item value. It also passes its own page number 14 as the value of the P7_CALLING_PAGE_ID page item. In contrast, when the Home page (1) calls the Event page to edit an event, its link passes in the value 1 for this page item.

Passing the event run ID for the clicked-on Gantt bar to the Event page

The Event page uses the P7_CALLING_PAGE_ID item value to conditionally return to the right calling page when the user cancels or applies their changes in the Event page. The target Event page has a Pre-Rendering page process Get Event from Event Run Id that looks up the parent event ID to edit based on the “child” row’s event run ID passed in.
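
As a rough idea of what that pre-rendering process could do, here is a sketch in which the base table name and the P7_ID item are my assumptions based on the data model, not the sample’s exact code:

-- Hypothetical sketch of the Get Event from Event Run Id process;
-- the table name and P7_ID item are assumptions
begin
    if :P7_EVENT_RUN_ID is not null and :P7_ID is null then
        select event_id
          into :P7_ID
          from eba_demo_ann_event_runs
         where id = :P7_EVENT_RUN_ID;
    end if;
end;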

Saving Time During Event Data Entry

Each event has a duration in minutes, and each event run has a start time and end time. Using two dynamic actions on the Event page shown below, I speed up data entry by computing the duration if it’s null when a pair of Starts At and Ends At times are entered for an event run. In addition, once the duration is set, any other event runs entered benefit from another dynamic action that automatically computes the Ends At time once the user enters a Starts At and tabs out. This feature makes it very quick to enter all the dates and times a particular musical like “& Juliet” is offered during a trip edition.

Event page editing the Broadway musical & Juliet and its event runs

The page contains the following two helper functions in its JavaScript Function and Global Variable Declaration section of the property editor:

function addMinutesToTime(time, minutes) {
  return apex.date.format(
           apex.date.add(
             new Date("2023-01-01 "+time),
             parseInt(minutes),
             apex.date.UNIT.MINUTE),
           "HH24:MI");
}

function minutesBetween(time1, time2) {
  var date1 = apex.date.parse(time1, "HH24:MI");
  var date2 = apex.date.parse(time2, "HH24:MI");
  var diff = Math.abs(date2.getTime() - date1.getTime());
  return Math.floor((diff / 1000) / 60);
}

Then, the dynamic action on the interactive grid’s ENDS_AT column has a Set Value action step that executes JavaScript to call minutesBetween() to compute the duration of the event:

minutesBetween($v('STARTS_AT'), $v('ENDS_AT'))

It only runs if the duration is currently null and both the STARTS_AT and ENDS_AT columns in the current row have a value using the client-side condition boolean expression:

$v('STARTS_AT')       !== '' && 
$v('ENDS_AT')         !== '' &&
$v('P7_DURATION_MIN') === ''

Similarly, the dynamic action on the grid’s STARTS_AT column has a Set Value action step that executes JavaScript to call addMinutesToTime() to compute the ENDS_AT time, based on the duration of the event:

addMinutesToTime($v('STARTS_AT'), 
                 $v('P7_DURATION_MIN'))

It only runs if the duration is not null, the STARTS_AT has a value, and the ENDS_AT is null using the client-side condition boolean expression:

$v('STARTS_AT')       !== '' && 
$v('P7_DURATION_MIN') !== '' &&
$v('ENDS_AT')         === '' 

Marking Event Runs as Starred/Ticketed

When reviewing a large list of events, there’s a good chance that you’re not interested in all of them. The app lets the user “star” a particular event run to mark it as a “favorite”. This results in setting the STARRED column value to “Y”. In addition, some trips involve a booking or ticketing process to confirm a seat for a particular event run. The app lets the user indicate they’ve confirmed their place by setting a Tickets switch to ON. In the figure below, I’m starring both Wednesday, September 20th performances of the musical “& Juliet” and marking the 14:00 showing as ticketed.

Marking a particular event run as both Starred and Ticketed

Conditionally Coloring Selected Bars

It can be useful to conditionally color the Gantt chart bars based on application-specific criteria. In my app, I want “starred” events to show with one color that makes them distinct, while “ticketed” events show with another unique color. This was easy to accomplish. I first included the following SELECT list expression in the query for the Gantt chart region in the Schedule page (14):

case 
   when r.tickets           = 'Y' then 'tickets'
   when r.event_starred     = 'Y' 
     or r.event_run_starred = 'Y' then 'starred'
end as css_classes

Then I set the Task CSS Classes property of the Gantt chart to &CSS_CLASSES. The final step was adding the following simple CSS class definitions to the page’s CSS > Inline section in the property editor:

.tickets {
   fill: blue;
}

.starred {
   fill: yellow;
}

After the user applies changes on the Event page, it conditionally branches back to the calling page. Since the Schedule page passed its page number 14 into the P7_CALLING_PAGE_ID page item, the user returns to the Schedule page. As shown below, we immediately see conditionally colored event run bars reflected in the Gantt chart. As dictated by the above CSS classes, the bar for the ticketed event run is now blue while the bar for the starred one is now yellow.

Gantt bar for starred showing appears in yellow, while ticketed showing is blue

Narrowing Focus: Hiding Clashing Events

When you have identified your starred event runs, it can be useful to narrow your focus to only see your favorites. This can help you decide which particular event runs to book or buy tickets for. The Schedule page has a switch at the top labeled Hide Starred Session Clashes. When it is switched on, as shown below, the Gantt chart suppresses all other event runs that would overlap with any of your favorites. Very useful, since you can’t be in two places at once!

Hiding event runs that clash with your favorites to focus your attention

This feature is implemented directly in the SQL query for the Gantt chart region using an appropriate WHERE clause predicate. It uses the switch page item as a bind variable and uses a NOT EXISTS(…) clause to include any unstarred events that do not have clashes with any starred ones.

and (:p14_hide_starred_session_clashes = 'N' 
     or
     (:p14_hide_starred_session_clashes = 'Y' and
       ((r.event_starred='Y' or r.event_run_starred='Y')
         or
         (r.event_starred='N' 
          and not exists (
            select 1
              from eba_demo_ann_event_runs_v starred_run
             where (
               starred_run.event_starred='Y' or
               starred_run.event_run_starred='Y')
               and ((
                 /* starts before and ends after current */
                 starred_run.starts_at < r.starts_at and
                 starred_run.ends_at   > r.ends_at)
                 or (
                 /* starts after and ends before current */
                 starred_run.starts_at > r.starts_at and
                 starred_run.starts_at < r.ends_at)
                 or (
                 /* ends after current starts & before current ends */
                 r.starts_at         <= starred_run.ends_at and
                 starred_run.ends_at <= r.ends_at)
                 or (
                 /* starts after current & before current ends */
                 r.starts_at           <= starred_run.starts_at and
                 starred_run.starts_at <= r.ends_at)))))))

Focusing Further: After Getting Tickets

After you’ve booked your seat or bought your ticket for one particular showing, you no longer need to see all of the other available opportunities to see the same event. The Schedule page has a second switch at the top labeled Hide Already Ticketed Events. As shown below, the Tuesday 19th September showing of “& Juliet” does not appear since above we already marked the Wednesday 2pm showing as being ticketed.

Hiding other showings of an event you have already bought tickets for

Again, this feature is implemented with a different SQL WHERE clause predicate using this other switch page item as a bind variable. When the variable is set to “Y” it uses an event_id not in (…) with a subselect to exclude event runs for ticketed events from days other than the currently selected day (:p14_day).

and (nvl(:p14_hide_ticketed_from_other_days,'N') = 'N' 
     or (
       nvl(:p14_hide_ticketed_from_other_days,'N') = 'Y' 
       and 
       event_id not in (
         select distinct event_id
           from eba_demo_ann_event_runs_v
          where tickets='Y' 
            and event_day !=  :p14_day)
       )
     )

Since the Gantt chart query references three different page items as bind variables, we have to ensure that its Page Items to Submit property mentions all their names so that their values are sent to the server whenever the region refreshes its query results. The figure below shows the three page item names in a comma-separated list.

Always ensure Page Items to Submit includes names of any page items used as bind variables

Geocoded Data in the Venues Page

As shown below, the Venues page uses the techniques explained in my article Refitting Filtered Map Points to provide a filterable list of venues with an automatically-synchronized map that zooms and recenters to show the locations of the filtered venue list.

Filtering the list of venues to see the narrowed results on a map

The companion Venue page for creating and editing venues uses a Geocoded Map page item to simplify finding the longitude and latitude for a given address. In order to have the Geocoded Map page item show the appropriate pin on the map when editing an existing venue, we use the following expression in the region query to create an appropriate SDO_GEOMETRY value based on the v.longitude and v.latitude values stored in the table. Importantly, the Query Only property of the corresponding P11_GEOCODED_ADDRESS page item is set to ON so that the APEX engine doesn’t try to update an SDO_GEOMETRY column in the underlying EBA_DEMO_ANN_VENUES table, which has LATITUDE and LONGITUDE columns instead.

/* 
 * if latitude/longitude have values, then
 * return sdo_geometry structure representing
 * the point (longitude,latitude)
 */
case 
  when v.longitude is not null 
   and v.latitude is not null then
     sdo_geometry(2001,
                  4326,
                  sdo_point_type(
                    v.longitude,
                    v.latitude,
                    null),
                  null,
                  null)
end geocoded_address

Since we’re not asking APEX to save the SDO_GEOMETRY value directly to an underlying SDO_GEOMETRY column, when the user enters a new address and triggers the geocoding, we need to pick out the longitude and latitude so they get stored correctly in the respective underlying columns of the same name. The P11_GEOCODED_ADDRESS page item has a dynamic action named When Address Confirmed that reacts to the Result Selection [Geocoded Address] event to trigger two Set Value dynamic action steps, setting the P11_LONGITUDE and P11_LATITUDE page items with the values the geocoding service has returned. They reference the this.data.longitude and this.data.latitude JavaScript expressions to get the job done.

Geocoded Address page item lets APEX resolve longitude and latitude automatically

APEX’s geocoding service depends on knowing the country code of the address the user is typing in. The Venue page has a Pre-Rendering page process in the Before Header section named Default Trip Info for Current Edition. It contains the following PL/SQL block that looks up the trip ID and country code based on the value of the current edition the user is working with. Note that it only assigns the P11_TRIP_ID if the primary key P11_CODE is null (meaning that we’re creating a new venue):

for j in (select e.trip_id,t.country_code 
            from eba_demo_ann_editions e 
            left join eba_demo_ann_trips t on t.id = e.trip_id
           where e.id = :CURRENT_EDITION) loop
    if :P11_CODE is null then
        :P11_TRIP_ID := j.trip_id;
    end if;
    :P11_CURRENT_TRIP_COUNTRY_CODE := j.country_code;
end loop;

Then, the Geocoded Address page item is configured to use the P11_CURRENT_TRIP_COUNTRY_CODE page item for geocoding country information. This ensures that when defining a venue for my Annecy trip it uses FR for France as the geocoding country and when creating one for my New York City trip, it uses US for the United States.

Configuring geocoded address country

Let’s suppose we defined another trip for “Annecy Festival” with the country of France configured, then defined a new edition of this trip for the year 2024, and then finally set this new “Annecy Festival 2024” as the current trip edition to work with using the Settings page explained in the next section. Then as shown below, the geocoding of venue addresses would be done with France as the country.

Venue address geocoding uses current trip edition country code

Settings Page

The Settings page (8) lets the user change four user preferences: the current trip edition to view/edit, whether to show countries and languages information, and the start time and end time to use in the Schedule Gantt chart page. The page shown below uses conditional Before Header computations to set the values of the page’s items based on first trying to use user preferences, then conditionally falling back to using the same-named application items. The application item names (which match the corresponding user preferences) are CURRENT_EDITION, SHOW_COUNTRIES_AND_LANGUAGES, DAY_START_TIME, and DAY_END_TIME. I admit that I wasn’t super excited about the way I had to capture the Day Start Time and Day End Time using four separate select lists, but this was the user experience I had in mind for the task. Maybe a future APEX release will offer a time picker page item for this purpose.

Setting four different user preferences that affect the Activity Planner app

To save the Settings page item changes back to the user preferences, the page has one page process in the Processing section for each user preference, each using the built-in User Preferences page process type to save the updated value to the corresponding same-named user preference. Another four Execute Code page processes in the same section set the preference values into the corresponding application items, since those are what get referenced in various SQL queries around the app.
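
For illustration, each of those Execute Code processes boils down to a one-line assignment like the sketch below. The P8_* page item names are assumptions (in particular, the times are actually captured with multiple select lists), so the real item names in the app may differ.

-- Sketch only: copy each saved preference value into its application item
:CURRENT_EDITION              := :P8_CURRENT_EDITION;
:SHOW_COUNTRIES_AND_LANGUAGES := :P8_SHOW_COUNTRIES_AND_LANGUAGES;
:DAY_START_TIME               := :P8_DAY_START_TIME;
:DAY_END_TIME                 := :P8_DAY_END_TIME;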

Focusing Calendar on a Date Range

Once you’ve marked one or more event runs as ticketed, get a bird’s eye view of your trip on the Calendar page. This page focuses the user’s attention on the current trip edition by ensuring the right start day is shown immediately. It sets the calendar to use the START_DATE of the current edition as the current date displayed, and to start the week display with this date, too.

Calendar gives bird’s eye view of ticketed event runs during the current trip edition

Assuming the calendar region has a static ID of calendar, setting its initial display date requires just one line of JavaScript in the Execute when Page Loads section of the page properties that calls the gotoDate() function on the calendar widget:

apex.region("calendar")
    .widget()
    .data('fullCalendar')
    .gotoDate($v('P12_START_DATE'))

With similar ease, after first computing the day of the week the START_DATE represents, we can set the first day of the calendar week in the calendar region’s Initialization JavaScript Function. The calendar expects its firstDay property to be a day number: 0=Sunday, 1=Monday, etc. I use a Pre-Rendering computation in the Before Header section to compute the value of a hidden P12_ZERO_BASED_FIRST_DAY using the following SQL query:

select to_number(to_char(first_day,'D'))-1
from eba_demo_ann_editions
where id = :CURRENT_EDITION

Setting the calendar’s firstDay property is essentially a one-line task. The code for the Initialization JavaScript Function on the Attributes tab of the property editor for the calendar region appears below. It assigns the value of the zero-based first day to the firstDay property of the pOptions object passed in.

function ( pOptions ) {
    pOptions.firstDay = $v('P12_ZERO_BASED_FIRST_DAY');                                    
    return pOptions;
}

Augmenting App Title with Page Name

It’s a personal preference of course, but as you can see in the previous screenshots, I like the “Top” style Navigation Menu layout since its options stay visible. I combine this style with the use of the After Logo position that lets me place additional context like the page name in the application navigation bar. This leaves more room for content on each page. For example, on the Calendar page I added a Page Title Static Content region with the following HTML Code:

<span class="apex-logo-text"> -&nbsp; Calendar</span>

This uses the same font as the logo text, and adds a hyphen separator and the name of the current page. As shown below, I set its Template to None and its Position to After Logo. This combo gives me a result that feels clean and spacious, and I’ve done the same thing on every non-modal page of the app.

Adding custom content like the page name to the After Logo position in the navigation bar

Grouping Event Runs by Event

The application’s Home page (1) has an Interactive Report showing a searchable list of all event runs for the current trip edition. To group the list of event runs by the event they belong to, I ran the page and used the Actions > Format > Control Break menu to add the Event break column as shown below.

Adding control break group to an Interactive Report to group event runs by event name

Next, to ensure the event runs sort in the most logical sequence for the end user, I used the Actions > Data > Sort menu to configure the report to first sort by Event then sort by Event Day Number as shown here.

Defining the sort order to show event runs in a logical day order

Finally, I saved the current state of the Interactive Report as the new default primary report for all users. As shown below, I used the Actions > Report > Save Report menu, choosing As Default Report Settings in the Save Report dialog, then ensured the Default Report Type was set to Primary in the Save Default Report dialog before clicking Apply.

Updating the primary default report to use current settings of the Interactive Report

Of course, the option to update the primary default report does not appear for end users. I was able to access it since I ran this page as a developer from the APEX Builder.

Formatting a URL as a Hyperlink

My colleague Jayson Hanes helped me figure out how to display an info icon on the Event page to open the event URL in a new browser tab. This should have been a simple task, but the first several approaches I tried didn’t work like I wanted. Luckily, Jayson gave me this helpful tip. The trick was to add a P7_URL_LINK page item of type Display Only to my page, set its Icon property to fa-info-circle-o, then use the Advanced > Pre Text setting to add an enclosing opening tag for the hyperlink. It uses the substitution string &P7_URL!ATTR. in the value of the <a> tag’s href attribute. The special suffix !ATTR makes sure the value is escaped correctly to appear in an HTML attribute. I added a title attribute so the hyperlink would gain a “tooltip” when the end user hovers over the info icon with their mouse.

<a href="&P7_URL!ATTR." 
   target="_blank" 
   title="Click to open this page in a new tab">

Then I used the corresponding Post Text setting on the page item to add the closing </a> tag. This produced exactly the effect I wanted, as shown below. Thanks, Jayson!

Giving the end user an easy way to open the Event URL in a new browser tab

PWA: Instant Desktop/Tablet Use

I took most of the screenshots of the Activity Planner app in this article while running it as a Progressive Web App (PWA). This lets me launch the app directly from the dock of my Mac laptop whenever I need to use it. Combined with my enabling the Persistent Authentication feature at the APEX instance level, this produces a great user experience. I launch the planner app like any other app on my desktop or iPad and instantly continue my trip planning work.

Enabling an APEX app for PWA use takes just two clicks on the Shared Components > Progressive Web App page in the APEX Builder. Just switch the Enable Progressive Web App switch to ON, as well as the Installable switch just below it. Enabling Persistent Authentication is an APEX instance administrator task on the Manage Instance > Security page.

Saving Quick SQL in Supporting Objects

The idea for this final tip was hatched after realizing every script in the Supporting Objects area of an APEX app could be made conditional. The feature uses the same options as a Server-side Condition elsewhere in the Builder. It offers a powerful way to control which scripts should run when. A simple corollary is that I can easily store the Quick SQL text for my application data model as a script with Condition = Never. This makes it easy to reference and maintain the QuickSQL as requirements change and enhancements get implemented, but ensures that APEX never tries to execute it.

Consistent Switch Value Labels in Grids

While doing final edits of this article I noticed that the Starred and Tickets switch columns in the Event page’s interactive grid would show values Yes and No for unmodified event run rows, but changed to show On and Off for rows I had edited. I resolved that by explicitly setting the On Label and Off Label properties for the Switch component settings in my app under Shared Components > Component Settings > Switch as shown below.

Explicitly setting the On and Off labels for Switch components

Planning Your Next Trip?

Just in case you might find my Activity Planner app useful, or even just educational to experiment with, you can download the app here. Its Supporting Objects scripts install the event run data for my most recent New York City trip, with none of the event runs starred or ticketed yet so you can experiment with those aspects yourself. Enjoy!

Broadway musical “& Juliet” cast members take their bows

Only Unused Items in Grid LOV

I saw an interesting question in the APEX discussion forum this morning asking how a Popup LOV in an interactive grid could show only the items that were not already referenced in the grid. It’s a question I’d asked myself multiple times before, but one whose answer I hadn’t researched until today…

My solution combines a tip from my colleague Jeff Kemp’s article Interactive Grid: Custom Select List on each row with the get_ig_data plug-in that simplifies using pending grid data in SQL or PL/SQL. Fellow Oracle employee Bud Endress had mentioned the plug-in to me multiple times and I watched the APEX Instant Tips #66: Getting Oracle APEX IG data video with Insum’s Anton Nielsen and Michelle Skamene to learn more. You can download the sample app at the end to try it out yourself.

Overview of the Data Model

The sample uses the following simple data model to create or edit orders for items from an inventory comprised of twenty-six different food items from Apple and Banana to Yuzu and Zucchini.

Data model for the sample app

The simple order create and edit page lets you type in an order comment and maintain an interactive grid of food items and their quantities. The goal was to have the popup LOV display only the items that aren’t already mentioned in other grid rows. As shown below, if the order being created or edited already includes Banana and Apple, then the LOV for another row leaves those out of the list.

Getting JSON Grid Data in a Page Item

The get_ig_data dynamic action plugin copies data from an interactive grid and places it in a JSON format into a page item. I first downloaded the get_ig_data plug-in from here on apex.world, and installed it into my app. Then I created a hidden page item P3_GRID_JSON_CONTENT to contain the JSON. Finally, I added a dynamic action step using the plug-in to execute when the value of the ITEM_ID column in the grid gets changed.

As shown in the figure below, it’s configured to only include the value of the ITEM_ID column in the grid rows and to return the JSON into the (hidden) P3_GRID_JSON_CONTENT page item. Notice that I ensured the Fire on Initialization switch was set to ON so that on page load the JSON would be populated into the hidden page item as well.

Configuring the Get IG Data dynamic action plug-in as a Value Changed action step

Including the JSON in the LOV Query

Initially I left the P3_GRID_JSON_CONTENT page item visible by setting it to be a Display Only item type. This let me study the JSON format the plug-in creates. After selecting Banana and Apple, I saw that the JSON looked like this:

[
  {"ITEM_ID":{"v":"2","d":"Banana"},"INSUM$ROW":1},
  {"ITEM_ID":{"v":"1","d":"Apple"},"INSUM$ROW":2}
]

I noticed that it was an array of JSON objects, each containing an ITEM_ID property for the column I’d configured the plug-in to include. Since ITEM_ID is an LOV-based column, the plug-in provides the value as another JSON object with v and d properties, giving me access to both the return value and the user-friendly display string. For my purposes, I only needed the return value, so I updated the ITEM_ID column’s Popup LOV SQL Query to look like the statement below. It includes a where id not in (...) clause to eliminate the set of item IDs we get from the grid JSON document. Using an appropriate json_table() clause in the subselect, I query just the v property of the ITEM_ID property of each row in the JSON.

select name d, id r
from eba_demo_item
where id not in (select item_id
                   from json_table(:P3_GRID_JSON_CONTENT, '$[*]'
                      columns (
                         item_id number path '$.ITEM_ID.v' 
                      )) where item_id is not null)
order by name

Sending Pending Grid Data to the Server

Notice that the LOV SQL Query above references the page item bind variable :P3_GRID_JSON_CONTENT. That hidden page item contains pending edits that might not yet be saved in the database, so we need to ensure this client-side information is submitted to the server when the Popup LOV makes its request to retrieve its data. This is where the trick I learned from Jeff Kemp’s article came into play. At first I was unable to find the Page Items to Submit field in the property editor, where I knew I needed to type P3_GRID_JSON_CONTENT to achieve the result I wanted. His tip involved first setting the Parent Column(s) property to the primary key column name. In my case, this column is named ID. That column won’t be changing, but providing a parent column value “unlocks” the ability to set the Page Items to Submit property. After performing those two steps, the Page Designer looked like the figure below.

Configuring the Popup LOV SQL Query to return items not already in the grid

This was all that was required to achieve the goal. Unfortunately, I later learned that the original forum question asker was using APEX version 19.1 and verified that the earliest APEX version the get_ig_data plug-in supports is 20.1… d’oh! So while my sample didn’t provide him a ready-to-use solution for APEX 19.1, it did help me learn a few cool new tricks about APEX in the process.

Downloading the Sample

You can download the APEX 23.1 sample app from here and give it a spin for yourself. Thanks again to Jeff, Anton, and Michelle for the useful article and video.

P.S. If the grid may be long or filtered, you might consider augmenting this idea by changing the where id not in (select …) statement to also include the list of existing ITEM_ID values in the EBA_DEMO_ITEM_ORDER_LINES where the ORDER_ID = :P3_ID and then also passing the P3_ID as one of the “Page Items to Submit”.
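
A sketch of that augmented LOV query appears below. It assumes the order lines table has ITEM_ID and ORDER_ID columns; adapt the names to your own schema.

select name d, id r
from eba_demo_item
where id not in (
        -- items pending in the grid (from the JSON page item)
        select item_id
          from json_table(:P3_GRID_JSON_CONTENT, '$[*]'
                 columns (item_id number path '$.ITEM_ID.v'))
         where item_id is not null
        union
        -- items already saved in the database for this order
        select item_id
          from eba_demo_item_order_lines
         where order_id = :P3_ID)
order by name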

Multilevel Story Approvals

Oracle APEX lets low-coders easily automate human approvals. With a bit of coordination logic, you can manage multi-step ones, too. Read on to learn how a company gets stories for their corporate website approved by all the required stakeholders before publishing. You’ll find the link to download and try the sample app at the end.

Who Writes & Approves Stories?

To make the sample app more realistic and interesting, we imagine a small widget-making company with a corporate web site. Each new story for the site needs a multi-step approval before publishing. The figure below shows the company’s eight colleagues. KIM is the founder and runs the company. CHAZ leads Marketing and is RORY‘s manager. GINA directs Editorial and mentors TINA. Last but not least, PAT oversees Product Management (PM) and manages BO and JANE.

Hierarchy of users involved in writing and approving stories

Typically product managers BO and JANE author stories for the website. These first get approved by their manager, then by someone in the Editorial and Marketing departments, and finally by KIM the CEO before the story can go live.

Adding Workplace Info for APEX Users

Oracle APEX user accounts have a user_name and email, but no built-in department name or manager field. Since the multi-step process depends on approvals by manager and by multiple different departments, I added tables to the application’s data model to track department names, which department each user belongs to, and who their manager is. The figure below shows the page in the sample app for managing departments. The MARKETING, EDITORIAL, and PM (product management) ones are predefined.

Defining workplace department names to associate with APEX user accounts

The page for managing users is below. The initial users’ departments and managers are also set up by the app’s Supporting Objects installation scripts.

Defining a management hierarchy and department for APEX user accounts

Overview of the App Data Model

The two pages above manage rows in the eba_demo_story_department and eba_demo_story_users tables shown in the data model below. Stories bound for the corporate website live in the eba_demo_story table. The eba_demo_story_appr_step_def table contains the “template” approval steps that every story needs to follow. Finally, the eba_demo_story_appr_steps table tracks the progress of each approval step for each story requiring approval.

Five-table data model for the Story Approval sample

Defining the Approval Steps

The page to manage the approval step “template” is below. The four steps you see come pre-installed. They illustrate the different ways the sample lets you define the approver for a step. It can be a specific username like KIM, a first-to-fourth level manager of the story author (e.g. #MANAGER1#), or all the members of a particular department (e.g. #EDITORIAL#). Each step also provides a descriptive title.

Defining the approval steps for a story

Submitting a Story for Approval

As shown below, from the Stories page in the sample, Jane clicks (Create) to author a story about a new version of the company’s widget.

Jane creates a new story for the website that requires approval before publishing

After Jane creates the story, she can revisit the Story page to edit it. As shown below, when she’s ready to submit it for approval, she clicks on the (Submit for Approval) button. It uses an Invoke API page process to call the submit_for_approval() procedure in the eba_demo_story_approvals package explained in the next section.

Jane submits her story for approval

Once she’s submitted her story for approval, the approval status appears in the Stories page as shown below. Its status is Submitted for Approval and indicates it’s waiting on PAT in step 1 of 4 to give the first level manager approval.

Users see the current status of Jane’s story on the Stories page

Any user viewing the story after it’s been submitted for approval sees the Approvals tab shown below with the current progress of the approval chain. It’s a classic report region showing the rows in the approval steps table, for the given story id and iteration number of the approval chain for that story.

Viewing the progress of a story’s approval chain

The Approval Coordination Package

The eba_demo_story_approvals package coordinates the multi-step process. When the Story page invokes submit_for_approval(), passing the story id, it inserts rows in the approval steps table (eba_demo_story_appr_steps). These come directly from the “template” steps in the approval steps definition table (eba_demo_story_appr_step_def) as shown below. In the process it determines what iteration number of the approval process is about to start, and includes the story_id and iteration number in the table as well.

Approvals coordinator package uses step definitions to create a story’s approval steps
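
A simplified sketch of that cloning logic appears below. The STEP, TITLE, and APPROVER column names, the local variables, and the exact expand_substitution() signature are assumptions for illustration, not the package’s actual code.

-- Sketch: clone the "template" step definitions into the tracking table
-- for the new iteration of this story's approval process
for d in (select step, title, approver
            from eba_demo_story_appr_step_def
           order by step) loop
    insert into eba_demo_story_appr_steps
           (story_id, iteration, step, title, approver)
    values (p_story_id,
            l_iteration,
            d.step,
            d.title,
            -- expand tokens like #MANAGER1# or #EDITORIAL# into usernames
            expand_substitution(d.approver, l_story_creator));
end loop;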

The coordinator package initiates the single Story Approval task definition (with static id STORY_APPROVAL) once for each step of the approval process. It does this by calling the create_task() function in the apex_approval package. The function returns the task ID of the new task instance, which the coordinator package stores in the task_id column of the approval steps tracking table for the current step.

-- Create new STORY_APPROVAL task instance for approval step
l_new_task_id := 
   apex_approval.create_task(
        p_task_def_static_id => 'STORY_APPROVAL',
        p_subject            => apex_string.format(
                                  p_approval_step.title
                                  ||' for "%s"',
                                  l_story.name),
        p_initiator          => l_story.created_by,
        p_parameters         => approval_params(
                                 p_approval_step,
                                 p_total_steps),
        p_detail_pk          => p_approval_step.story_id);

When creating the task instance, it passes in key information using task parameters. For example, it provides the number of the current approval step (p_approval_step), the total number of approval steps (p_total_steps), and a comma-separated list of one or more approver usernames (p_potential_owners) for the current step in the approval “chain”. It also sets the subject of the current approval task based on the current approval step’s title, the initiator to be the story’s creator, and the Detail Primary Key to be the story ID. This information lets a single task definition handle all levels.

Story Approval Task Definition

As the figure below depicts, the STORY_APPROVAL task definition has actions for approval and rejection. Each is a one-line PL/SQL call to the handle_approval() or handle_rejection() procedure in the coordinator package.

Diagram of how the approvals coordinator package and the task definition cooperate

For example, the Complete (Approved) action’s PL/SQL code is the following line, passing in the task ID using the built-in :APEX$TASK_ID bind variable.

eba_demo_story_approvals.handle_approval(:APEX$TASK_ID);

As conveyed by the arrow on the left side of the handle_approval() procedure in the figure, if another approval step exists with a null status, it repeats the process described above to call create_task() again for the next step. Using the current iteration number, it determines the next step (if any) by selecting the first row from the approval steps table with the current story’s ID where the status is null, ordered ascending by the step column.
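
In SQL terms, that next-step lookup might look like the sketch below; the column names beyond those mentioned above and the l_* variables are assumed for illustration.

-- Sketch: find the next pending step for this story and iteration
select step, title, approver
  from eba_demo_story_appr_steps
 where story_id  = l_story_id
   and iteration = l_iteration
   and status is null
 order by step
 fetch first 1 row only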

Finally, notice in the figure above that the STORY_APPROVAL task definition contains several other actions that send notification emails:

  • On creation, it notifies the one or more potential approvers
  • On request information, it alerts the story creator that more info is needed
  • On submit info, it informs the approver that info has been provided
  • On approval, if it’s the final approval step, it emails the story creator

Enhancing the Task Details Page

After Jane’s first-level manager Pat receives the email that a story awaits her approval, Pat logs in and sees Jane’s story in her Approvals page as shown below. Notice how the title of approval step 1 and the story title were incorporated into the subject of the task.

Pat sees Jane’s story approval in her inbox

Clicking on the subject of the approval, Pat opens the Story Review page below. I created this page by clicking on the (Create Task Details Page) button on the STORY_APPROVAL task definition edit page in the APEX builder. By default it has all of the built-in APEX approvals functionality. I then simplified it in Page Designer by hiding two regions and enhanced it to add the story text to review. Of course Pat can approve or reject the task, but she can also add comments, request more information from Jane, review the approval audit history, or delegate the approval to a colleague. Since Pat also happens to be the business administrator for all approvals in the company, she’s allowed to perform additional actions like invite an additional approver or change the due date.

Story Approval Review page based on the generated Task Details page

In short, the enhancements I made were:

  • Hid Overview & Details regions by setting Server-Side Condition to Never
  • Added region to display read-only story text
  • Searched the page for Task approved & Task rejected and changed Task to Story

I updated similar generic-sounding Task approved and Task rejected messages on the Approvals Unified Task List page, too. These changes, along with other small tweaks to No Data Found messages around the app, make a big difference for end users, so they are well worth the time. Now when a user approves a story, they aren’t scratching their head wondering, “What’s a task?” or “What’s a row?” The figure below shows the app-specific “Story approved” success message and the “No stories need your approval at the moment.” No Data Found text on the Approvals page.

Fine-tuning the approvals and story review pages with Story-focused messages

After Pat approves the story, the task definition’s approval action invokes the coordinator package’s handle_approval() as mentioned above. It determines there is another step to perform and creates a new task instance for that new step. Pat sees in the Stories page below that either Gina or Tina in the editorial department is next in line.

Coordinator package created a second task instance for step 2 of 4 in the approval chain

Letting Coordinator Pass In Approvers

A task definition can specify its approvers (called “potential owners”) using a static value, a SQL query, or using PL/SQL. In the SQL Query case, it needs to return a one-column result containing one username per row. In the PL/SQL case, it needs to return a comma-separated list of case-sensitive usernames. Did I mention the usernames ARE case SeNsItIvE? This is the most common problem developers encounter using task definitions. Since APEX user accounts are always in UPPERCASE, keep that in mind when using approval features.
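
For example, a task definition that looks up its own approvers (rather than receiving them from a coordinator, as this sample does next) might use a Potential Owner SQL Query along the lines of the sketch below. The hard-coded department filter is illustrative only.

-- Illustration only: one uppercase username per row
select upper(u.username)
  from eba_demo_story_users u
  join eba_demo_story_department d on d.id = u.department_id
 where d.name = 'EDITORIAL'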

That said, in the multi-step use case we need the approval step coordinator logic to pass in the list of approvers for the current task instance at create time. It gets the list of approvers for the current step from the approver column of the approval steps table. We enable this by defining the P_POTENTIAL_OWNERS parameter in the STORY_APPROVAL task definition, and setting its Potential Owner to the PL/SQL expression consisting simply of the task parameter bind variable :P_POTENTIAL_OWNERS. Accordingly the coordinator package passes in a comma-separated list of case-sensitive usernames for the value of the P_POTENTIAL_OWNERS parameter.

Expanding Approver Substitutions

We saw above that step 2 in the approval chain was assigned to GINA or TINA. When multiple usernames appear in the potential owners list, the approval task shows in the Unified Task List inbox page for all of them. When one claims the task to work on, approves, or rejects it, the others no longer see it.

The coordinator’s submit_for_approval() procedure starts by calling copy_story_approval_steps_for(). It “clones” the approval step definition template rows for the new multi-step story approval process being created. As it loops through the approval step definition rows, it calls the expand_substitution() function for any approver value like #%# to replace it by the one or more usernames that the substitution token represents.

Step 1 in the story approval steps definition uses the substitution token #MANAGER1# to require a first-level manager’s approval. Step 2 specifies #EDITORIAL# to represent a list of anyone in the Editorial department. The expand_substitution() function is straightforward. If the substitution token matches the regular expression ^#MANAGER[1-4]#$ then it’s replaced by the corresponding Nth-level manager if it exists. To compute the values of up to four levels of managers in one row, the code queries the eba_demo_story_users table using the story creator’s username, and outer joins with the same table four times:

select  mgr1.username manager1_username,
        mgr2.username manager2_username,
        mgr3.username manager3_username,
        mgr4.username manager4_username
from eba_demo_story_users usr
left outer join eba_demo_story_users mgr1 
             on mgr1.id = usr.manager_id
left outer join eba_demo_story_users mgr2
             on mgr2.id = mgr1.manager_id
left outer join eba_demo_story_users mgr3
             on mgr3.id = mgr2.manager_id
left outer join eba_demo_story_users mgr4
             on mgr4.id = mgr3.manager_id
where usr.username = upper(p_story_creator)

If instead the substitution token is a name surrounded by pound signs, it trims them off both sides and uses the department name to query the eba_demo_story_department table joined with the users table. Using listagg() it retrieves a comma-separated list of department users like this:

l_department_name := trim(both '#' from p_substitution_text);
select listagg(u.username,',')
  into l_result
  from eba_demo_story_users u
  left join eba_demo_story_department d 
         on d.id = u.department_id
 where d.name = l_department_name;

Understanding Actions Source Query

After someone in the Editorial and Marketing departments approves the story, as shown below it’s ready for CEO Kim’s approval.

Almost done! Step 4 of 4: the story awaits its final CEO approval

Once Kim approves, the STORY_APPROVAL task definition will finally email Jane the story’s creator to let her know the good news. It’s important to understand the grand declarative power of the task definition’s Actions Source Query. Recall that every task definition has an associated system of record primary key called its Detail Primary Key. In this app, the coordinator package passes in the story ID to play this role. It’s available inside the task definition using the :APEX$TASK_PK bind variable. Other frequently-used bind variables include :APEX$TASK_ID, :APEX$TASK_INITIATOR, and :APEX$TASK_OWNER. Refer to the Substitution Strings for Tasks in the APEX doc for the complete list.

The Actions Source Query lets you easily retrieve any additional information your task definition requires to get its job done. This includes anything that action code, emails, push notifications, or their server-side conditions may require. The SQL query will typically reference the Detail Primary Key value and possibly the task ID to pull in all supplementary data required. Any of its SELECT list column names is available to reference throughout the task definition using either bind variable or substitution string syntax. The task parameter names can also be used in the same way.

The query below is the STORY_APPROVAL task definition’s Actions Source Query. It includes a CASE expression to derive the Y or N value for an IS_FINAL_APPROVAL column depending on whether the P_APPROVAL_STEP parameter equals the P_TOTAL_STEPS parameter. It includes multiple joins to the apex_workspace_apex_users view to pick up the email addresses of the story creator, potential approvers, and the task owner. It references the apex_task_participants view to access the task’s potential owners (i.e. approvers). Its SELECT list column values are referenced by name as appropriate in the task definition actions that send emails and by their server-side conditions to make the actions conditional. The admittedly odd-looking coalesce(…) expressions let you override the email address for all notifications for demo purposes as explained in a later section.

select stp.approver             as approvers,
       case 
            when :P_APPROVAL_STEP = :P_TOTAL_STEPS
            then 'Y' else 'N' 
       end                      as is_final_approval, 
       sty.name                 as story_title,
       sty.created_by           as story_author,
       coalesce(:P12_OVERRIDE_EMAIL_ADDRESS,
                styusr.email)   as story_author_email,
       (select listagg(coalesce(:P12_OVERRIDE_EMAIL_ADDRESS,
                                 aprusr.email),',')
          from apex_task_participants apr
          left outer join apex_workspace_apex_users aprusr
                       on aprusr.user_name = apr.participant
         where apr.participant_type = 'POTENTIAL_OWNER'
           and apr.task_id = :APEX$TASK_ID  
           and aprusr.email is not null) as approvers_email,
       coalesce(:P12_OVERRIDE_EMAIL_ADDRESS,
                 ownusr.email)  as task_owner_email
  from eba_demo_story sty
  left       join eba_demo_story_appr_steps stp 
               on stp.story_id = sty.id
  left outer join apex_workspace_apex_users styusr
               on styusr.user_name = sty.created_by
  left outer join apex_workspace_apex_users ownusr
               on ownusr.user_name = :APEX$TASK_OWNER
 where stp.story_id = :APEX$TASK_PK
   and   stp.step   = :P_APPROVAL_STEP

Unifying Task Comments & History

Stories don’t always emerge from the approval chain unscathed. At any step of the process, the current approver can add comments on the Story Approval Review page, request additional information from the story creator, or even outright reject the story. The sample app lets you explore these outcomes as well, and anticipates them in two key ways:

  1. It unifies comments and history across task instances in the chain, and
  2. It lets the story approval chain undergo multiple iterations

In this section we’ll explore the comments and history, and the following one explains the iterations. For example, the screenshot below shows the feedback Bo got from Pat, her first level manager. Pat has requested more information before approving it.

Bo gets feedback on a story from her manager Pat before getting her approval

Once Bo updates the story and visits the My Story Approvals page to indicate she’s submitted the information Pat requested, Pat gets an email letting her know the story is ready for another look.

Pat receives an email notification that Bo’s story is ready for another look

After Pat approves, as shown in the figure below, when Gina in Editorial reviews Bo’s story next she sees the discussion from previous approval steps, too. She also requests more information from Bo, asking her to remove the use of passive voice.

Story Approval Review page shows comments across all task instances in the approval chain

By default, the generated Task Details page the APEX builder creates only shows the comments for a single task instance. To unify the display of all comments from all tasks in the approval chain, I simply modified the WHERE clause of the Comments region on that page to what appears below. It shows the comments for all the task IDs from the story approval steps table related to the current task’s story ID and iteration number. Had it been desired, the iteration could have been removed from the criteria to show all comments across all iterations of a story’s approval process.

select ... etc. ...
  from apex_task_comments
 where task_id in (
     select task_id
       from eba_demo_story_appr_steps
      where (story_id,iteration) 
         in (select story_id, iteration
               from eba_demo_story_appr_steps
              where task_id = :P3_TASK_ID)
 )

Along a similar vector, I modified the query of the page’s History region to display the task history for the same set of task IDs using the query below:

select x.event_type,
       x.event_timestamp,
       x.event_creator_lower,
       x.display_msg
  from eba_demo_story_appr_steps steps,
       table ( apex_approval.get_task_history (
                   p_task_id     => steps.task_id,
                   p_include_all => :P3_ALL_HISTORY ) ) x
 where steps.task_id in (
     select task_id
       from eba_demo_story_appr_steps
      where (story_id,iteration) 
         in (select story_id, iteration
               from eba_demo_story_appr_steps
              where task_id = :P3_TASK_ID))

Rejections and Iterations

When a story gets rejected by any approver, the handle_rejection() procedure in the coordinator package sets the status of the approval step and the story to Rejected. But that’s not the end of the line for the author. As shown below, Chaz in Marketing rejected Bo’s customer profile for a reason out of her control.

Bo reviews the comments on her rejected customer profile story
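
As a rough sketch, the handle_rejection() bookkeeping described above amounts to updates like the ones below. The APPROVAL_STATUS column name and the local variables are assumptions for illustration.

-- Mark the current approval step as rejected...
update eba_demo_story_appr_steps
   set status = 'Rejected'
 where task_id = p_task_id;

-- ...and reflect the rejection on the story itself
update eba_demo_story
   set approval_status = 'Rejected'
 where id = l_story_id;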

She can rework the story based on constructive feedback received and submit it again for another iteration through the approval chain. After Bo spoke with Chaz, she learned she could save her story by using different people’s names for the testimonials, so Bo made the changes and resubmitted for approval.

The submit_for_approval() procedure calls the package’s get_latest_iteration() function to determine the maximum value of the iteration column for a given story ID in the approval steps table. If it returns null, then the logic defaults the iteration number to 1. Otherwise, it increments the latest iteration number by one. As shown below, if a story is going through the multi-step approval process for a second or subsequent time, the Story page’s Approvals tab conditionally displays the “Second Iteration” heading.

Conditional “Second Iteration” header identifies Bo’s second iteration of the approval chain
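
That iteration bookkeeping boils down to something like the simplified sketch below; the details are assumed for illustration.

-- Determine the latest iteration number used for this story, if any
function get_latest_iteration(p_story_id in number) return number
is
    l_latest number;
begin
    select max(iteration)
      into l_latest
      from eba_demo_story_appr_steps
     where story_id = p_story_id;
    return l_latest;
end;

-- In submit_for_approval(): default to 1, otherwise add one
l_iteration := nvl(get_latest_iteration(p_story_id), 0) + 1;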

One interesting technique I learned while implementing this part of the sample was how easy it was to format the number value of the iteration into ordinal words like “Second”, “Third”, or “Fourth.” The expression below is used on the Story page in a pre-rendering computation for the hidden P10_ITERATION_TITLE page item.

initcap(to_char(to_date(:P10_LATEST_APPROVAL_ITERATION,'j'),'jspth'))

Soft Deleting and Undeleting Stories

Since approval task instances are historical records that reference the Detail Primary Key of a row in a system of record table, we don’t want to actually remove a story when an author decides one is a lost cause and deletes it. This would “orphan” the task instance related to that story and potentially cause problems. So, we let the end-users think they are deleting a story, but we really “soft delete” it. This happens by changing the deleted flag column on the eba_demo_story row in question from 0 to 1. Accordingly the Stories page contains a WHERE clause to show stories where deleted = 0.

To implement this soft delete, on the Story page I duplicated the Process form Story page process of type Form – Automatic Row Processing (DML) and renamed the original one to Process form Story (Create,Update) and the copy to Process form Story (Soft Delete). As shown below, I configured a Server-side Condition on the (Create,Update) one to execute only if the Request is contained in Value CREATE,SAVE.

Limiting one of the DML page processes to only CREATE and SAVE requests

For the (Soft Delete) one, as shown below, I set the Target Type to PL/SQL Code and entered a simple update statement to set the current story’s deleted flag column to the value 1. By also adding a Server-side Condition, I confined this special-case handling to only happen for story delete requests. Thanks go to my colleague Carsten for the tip on handling this soft delete in a low-code way.

Implementing “soft delete” for DELETE requests using PL/SQL code to update a flag instead
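
That PL/SQL Code target is essentially the single statement below, assuming for illustration that the Story page’s primary key item is named P10_ID.

-- Soft delete: flag the row instead of removing it
update eba_demo_story
   set deleted = 1
 where id = :P10_ID;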

With the deleted flag in place and the soft delete working, it was easy to add a Recently Deleted Stories page that gives authors a 10-day period in which to undelete any story deleted by mistake. Even after 10 days we don’t actually delete the story, but to avoid accumulating a long list of deleted stories over the months and years, a 10-day “grace period” felt right. The page is shown below. It’s essentially a copy of the Stories page, but with a where deleted = 1 clause instead. Its Interactive Report region’s Link Column is configured to use an fa-undo icon instead of the usual edit icon:

<span role="img" aria-label="Undelete"><span class="fa fa-undo" aria-hidden="true" title="Undelete"></span></span>

As shown below, its link Target redirects to the Stories page (9) setting the P9_STORY_ID_TO_UNDELETE page item to the value of the clicked-on story ID to undelete and signalling special processing is required by setting the Request to UNDELETE.

Undelete link target passes story ID to undelete and Request = UNDELETE to Stories page

Over in the Stories page, a conditional Execute Code page process configured to only run if Request = UNDELETE performs the simple update statement to revert the deleted flag back to 0 for the story id passed in the parameter.

Conditionally undeleting a story by ID when Request = UNDELETE
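
That conditional Execute Code process is, in essence, just the reverse of the soft-delete statement sketched earlier, keyed by the page item populated from the undelete link.

-- Undelete: clear the flag for the story id passed from the link
update eba_demo_story
   set deleted = 0
 where id = :P9_STORY_ID_TO_UNDELETE;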

Overriding Email Address for Notifications

For a sample app, it is convenient to have all the email notifications come to your own email address. You can do that by typing in the email you want to use on the Email page and clicking the button to save the setting for the duration of the current session.

Configuring an override email address for the session to receive all notifications in the demo

If the setting is not provided, the sample tries to retrieve email addresses from the APEX user accounts for the story creator and approvers. If neither email address resolves to a valid value, no email gets sent. This P12_OVERRIDE_EMAIL_ADDRESS page item with its Storage property set to Per Session (Persistent) made an appearance above in the Story Approval task definition’s Actions Source Query. By referencing :P12_OVERRIDE_EMAIL_ADDRESS in coalesce() expressions in multiple places, that query lets the override value take precedence over the email addresses it queries from the apex_workspace_apex_users view.

Easy Demo-Purpose-Only Logins

The last aspect of the sample to explore involves the technique I’ve used to simplify switching between different users to experiment with the story approval process. For a production application, you should ABSOLUTELY use Oracle APEX accounts, single sign-on, or some other supported authentication scheme to ensure that only expected users with valid credentials can access your app. However, for a sample aimed at helping developers learn about approvals, I’m trying out a new idea using a custom authentication scheme that considers any username that appears in the eba_demo_story_users table to be a valid user. The sample app’s login page shows the users in a management hierarchy, and allows you to easily log in as any user in the tree by selecting a user node and clicking the (Login as USER) button that appears. When you first run the application, you won’t be logged in, so you can select any user and click the button to log in as shown below.

Choosing a user from the management hierarchy and logging in
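
To make the idea concrete, a demo-purposes-only authentication function for such a custom scheme could be as small as the sketch below. The function name is hypothetical, and again, please don’t use anything like this in production.

-- Demo only: accept any username present in the users table
function demo_authenticate(
    p_username in varchar2,
    p_password in varchar2) return boolean
is
    l_count number;
begin
    -- p_password is intentionally ignored in this demo scheme
    select count(*)
      into l_count
      from eba_demo_story_users
     where username = upper(p_username);
    return l_count = 1;
end;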

Once you are logged in, you can easily switch to wear the “hat” of another user in the company by visiting the Switch User page shown below. It confirms who you are currently logged in as, and lets you select any different user in the tree and log in by clicking the button. In both situations, you’re seeing the Demo Purposes Only Login page. However, using an appropriate condition on the Navigation Menu items labeled Login and Switch User, we help guide the user to perform the task at hand.

Easily switching to a different user to experiment with the multi-step approvals

This approach, again recommended only for demo purposes, also saves developers the trouble of having to create eight additional APEX user accounts before they can try out the sample. If any APEX account does exist with a username matching a sample user in the eba_demo_story_users table, then the sample will pick up their email address (unless it’s been overridden for the session using the facility described above).

Downloading the Sample

You can download the sample from here. Try creating some stories and running them through the approval chain. Try adjusting the approval step definitions using multiple levels of management to see what happens when a story author doesn’t have a third-level manager, for example. Try changing the departments, the users, or how users are assigned to departments or report to each other. Hope the sample proves useful and you find some exciting ways to use APEX approvals in your upcoming apps.

Parent/Child Validation Made Easy

When working with parent/child data, an app often needs to validate an aggregate condition about all of the child rows related to a parent row. For example, say each teammate assigned to an action item can have a role of “Lead” or “Member”. Some typical parent/child validations for an action item’s team members might require the team to…

  • have a Lead
  • only have one Lead, and
  • not contain any duplicate members.

Ideally, the app will refuse to save anything unless the pending changes to parent and child rows satisfy all 3 business rules. The app should show the user a message so they know what to fix before trying to save again.

Starting with APEX 23.1, aggregate validations are easy to implement using simple SQL statements. This article first explains how to do them, and then why a new feature in 23.1 makes them possible. It goes on to highlight a few other techniques I learned while building the sample app. At the end, I provide a download link to try it out for yourself.

Sample App Overview

The sample app uses the three tables shown below. One stores the Action Items, another the Staff Members, and the third holds the action item team list relating an action to the staff members collaborating to complete it.

Three tables involved in the “One Lead on a Team” sample application

The sample includes a page to maintain Staff Members and, as shown below, another page to maintain Action Items.

Page to maintain action items

You can see the page used to create, edit, or delete an Action Item below. It includes a “parent” Form region for the Action Item row and an editable Interactive Grid “child” region for the list of action team members.

Page to edit an action item and its related action team members

Ensuring Child Rows Get a Parent Id

When building an editable page including parent and child data, I first ensure any child row gets its parent id foreign key set correctly. My preferred technique is adding an Execute Code page process after the Process form Action one that saves the parent row and before the Save Interactive Grid Data one that stores the grid’s child rows. This approach works for both create and edit use cases.

This new Execute Code page process assigns the parent foreign key column with the one line of PL/SQL below. The :ACTION_ID bind variable references the ACTION_ID foreign key column in each grid row being created or modified, and :P5_ID is the ID of the parent Action Item row being created or edited.

-- Assign the parent foreign key to the team row from the grid
:ACTION_ID := :P5_ID;

As shown below, by setting the Editable Region property of the process to the Interactive Grid region Action Team, APEX runs it once for each created or modified child row in the grid region.

Ensuring child rows have their parent foreign key set correctly

If an edit page has multiple child grids alongside their parent row’s form region — perhaps each one on a different tab — just repeat this technique for each grid involved. Sequence all the foreign key assignment processes after the parent row save process and before any of the child grids’ save processes.

Adding Aggregate Validations

Next we can add the aggregate validations as additional Invoke API page processes following the page processes that save the data. The Enforce Exactly One Lead process shown below invokes the one_lead_per_action_team procedure in the eba_demo_action_validation package, passing the value of the P5_ID page item for the p_action_id parameter.

Adding the aggregate validation procedures using Invoke API page processes

The PL/SQL package containing the validation procedures has the specification shown below in the Object Browser.

eba_demo_action_validation package spec in the Object Browser
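
Based on the two validation procedures discussed in this article, the spec looks roughly like the sketch below; the spec in the downloadable sample is the authoritative version.

create or replace package eba_demo_action_validation is
    -- Validate that an action item's team has exactly one Lead
    procedure one_lead_per_action_team(
        p_action_id in number);
    -- Validate that an action item's team has no duplicate members
    procedure no_duplicate_team_members(
        p_action_id in number);
end eba_demo_action_validation;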

In the package body, the code for one_lead_per_action_team is below. It first checks that the Action Item parent row still exists by using a zero-or-one-row for loop based on the primary key value in the p_action_id parameter. This is important since one of the operations the page can perform is deleting the action item, and we want the validation code to work correctly — by consciously doing nothing — in that situation. If the loop’s SELECT returns a row, then the code inside the loop runs once. It counts how many Team child rows for the current Action Item have role = 'LEAD' using a simple select count(*). If l_lead_count does not equal one, the validation fails, and a case expression chooses the appropriate error message depending on whether the lead count is less than one or greater than one. If the lead count equals one, no error is reported and the transaction is allowed to commit.

-- Ensure the action has exactly one Lead
procedure one_lead_per_action_team(
    p_action_id in number)
is
    l_lead_count number;
begin
    -- Ensure the action id still exists since it might
    -- have been deleted. If it exists, perform the check
    -- on the children rows.
    for j in (select id 
                from eba_demo_action
               where id = p_action_id) loop
        select count(*)
        into l_lead_count
        from eba_demo_team
        where action_id = p_action_id
        and role = 'LEAD';
        if l_lead_count != 1 then
            error(  case 
                        when l_lead_count < 1 
                        then c_msg_team_must_have_lead
                        else c_msg_team_max_one_lead
                    end);
        end if;
    end loop;
end;

The simple, package-private error() helper procedure is below. It calls the add_error() procedure in the apex_error package with a display location of On Error Page using the appropriate package constant. This causes the APEX engine to roll back the transaction and display the error message to the user. It uses the apex_lang.message() function to return the translated error message based on the message key passed in. The net result is that no changes submitted from this parent/child page get saved to the database when this aggregate validation error occurs.

-- Raise an error with indicated message key
procedure error(
    p_message_key in varchar2)
is
begin
    apex_error.add_error (
        p_message          => apex_lang.message(p_message_key),
        p_display_location => apex_error.c_on_error_page);
end;

The no_duplicate_team_members procedure uses a similar technique, with a slightly different select statement. Its code is shown below. We count the number of team member rows, grouping by user_id and returning only groups having a count greater than one. If at least one such row is returned, then we have a duplicate team member and we report the error as above using a different error message key constant.

-- Ensure the action has no duplicate team members
procedure no_duplicate_team_members(
    p_action_id in number)
is
begin
    -- Ensure the action id still exists since it might
    -- have been deleted. If it exists, perform the check
    -- on the children rows.
    for j in (select id 
                from eba_demo_action
               where id = p_action_id) loop    
        for k in (select user_id, count(*)
                    from eba_demo_team
                   where action_id = p_action_id
                   group by user_id
                   having count(*) > 1
                   fetch first row only) loop
            error(c_msg_team_no_duplicates);
            exit;
        end loop;
    end loop;
end;

Using Translatable Error Messages

Since the application might need to be translated into multiple languages, it’s best practice to define the error messages as Text Messages in the Shared Components area of the APEX builder. The figure below shows the three text messages I defined to support the error messages my aggregate validation code may display to end users.

Translatable text messages in Shared Components showing message key and base language text

The eba_demo_action_validation package body defines the following three constants to hold the message keys:

------------------------------------------------------------
-- Constants for error message keys
------------------------------------------------------------
c_msg_team_must_have_lead constant varchar2(25) 
                                   := 'TEAM_MUST_HAVE_A_LEAD';
c_msg_team_max_one_lead   constant varchar2(25) 
                                   := 'TEAM_MAX_ONE_LEAD';
c_msg_team_no_duplicates  constant varchar2(25)
                                   := 'TEAM_NO_DUPE_MEMBERS';

Hiding the Grid’s Toolbar Save Button

Since we want the aggregate validation to always take into account both parent and child data, we use the Action Item form region’s default CREATE and SAVE buttons that both submit the page. APEX automatically submits the data from all page regions when the user clicks one of these buttons. Accordingly, to prevent the user from saving child grid row changes on their own, we hide the default (Save) button in the Interactive Grid’s toolbar. The figure below shows where to find the relevant checkboxes on the Attributes tab of the property editor after selecting the Interactive Grid region. While I was there, to keep things simple, I turned off a number of the default toolbar controls.

Hiding Interactive Grid’s built-in (Save) toolbar button to use a page submit button instead

Taking the Sample For a Spin

If the end user tries to create or edit an action item and save without having designated a team Lead, then she’ll see the error message below.

Error message when user tries to save an action item with team containing no Lead

If she tries to create or edit an action item whose team contains more than one Lead, when she tries to save she’ll see this different error message:

Error message when user tries to save action item with team having multiple Leads

Finally, if the user inadvertently adds the same staff member twice to the action item team, she’ll see the error message:

Error message when user tries to save action item with team having duplicate members

Using the SQL Commands page in the APEX builder, you can have a quick peek at the eba_demo_action and eba_demo_team tables before and after to convince yourself that no data has been inserted or updated when these parent/child validations fail.
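
For example, a quick check in SQL Commands might look like the query below; here I’m assuming you just want to eyeball the team table’s key columns, which you already saw in the validation code.

select action_id, user_id, role
  from eba_demo_team
 order by action_id, user_id;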

What Changed in 23.1 to Make This Easy?

APEX 23.1 introduced a new Session State Commits setting in the Session Management section of the app definition’s Security tab. For new apps, it defaults to the value End of Request which defers the database transaction commit to the end of the current submit page request. The figure below shows where this setting lives in the APEX App Builder.

Reading the help text for the Session State Commits application definition setting

This one change, combined with the Oracle database’s read consistency model, lets aggregate validations be implemented so easily. They can use simple SQL select statements to reason over the pending parent and child row changes that APEX has made in the current database transaction. Importantly, they can do so before the transaction gets committed. This means they are still in time to “veto” the transaction by adding an error to the APEX error stack.

In the remaining sections, I highlight some additional techniques I learned while building this sample app that I found interesting. They are not directly related to parent/child validation, but they made me smile when I learned how to do them so I wanted to share the knowledge…

Dynamic Grid Cell Defaulting

The Action Item page can guide end users to follow the “One Lead on a Team” rule by using dynamic defaulting. When the first team member row is added to the grid, it should default to being a Lead. In contrast, the role should default to Member for other rows added. I wanted to learn how this could be done on the client side, so I asked my colleagues for advice.

John Snyders taught me about the Interactive Grid’s ability to associate a dynamic defaultValue function with a column using its Column Initialization JavaScript Function property. After selecting the ROLE grid column whose default value I wanted to be dynamic, the figure below shows where I needed to add my code in the Page Designer.

Configuring a dynamic defaultValue function for the ROLE grid column in Page Designer

The few lines of JavaScript appear below. The initialization function receives an options object on which it sets the defaultGridColumnOptions property to configure a custom defaultValue function for the column. This one-line function calls getTotalRecords() on the interactive grid’s model object it receives and returns the literal string LEAD if the model’s row count is zero and MEMBER otherwise.

function( options ) {
    options.defaultGridColumnOptions = {
        // Setup a dynamic default value for the ROLE column
        // to return LEAD when there aren't any rows yet, and
        // MEMBER otherwise.
        defaultValue: function(model) {
            return model.getTotalRecords(true) == 0 ? "LEAD" 
                                                    : "MEMBER";
        }
    };
    return options;
}

NOTE: If you use this technique in your own applications to return a default value for a number column or date column, remember to return the value as a String from this function.

Setting Width of the IR Columns

While fine-tuning the sample, I noticed the link column of the Interactive Reports pages for Staff and Action Items was extra wide and didn’t look so great. The Staff page initially appeared like this:

Extra wide link column in Interactive Report on the Staff page

The Action Items page had three columns, but still the column widths looked less than ideal…

Extra wide link column and Lead columns in the Action Items page

I learned how to set the width of the interactive report link column across all Interactive Reports in the application, and added the following snippet of CSS to a global app.css file in the Static Application Files area of Shared Components.

th#a-IRR-header--linkCol,
td.a-IRR-linkCol
{
  width: 6em;
  text-align: center !important;
}

Colleagues taught me it was best practice to express the column width using em CSS units instead of pixels. Since 1em equals the current font size, the rule above makes the link column roughly six characters wide. That way, if the user zooms their browser — increasing the base font size — the column width scales up proportionally and remains wide enough for its contents.

After creating the app.css file, I copied its path and pasted it into the CSS > File URLs section of the application definition’s User Interface tab as shown below. At runtime, APEX replaces #APP_FILES# with the URL to access the static app files content, and replaces the #MIN# as appropriate to use the more compact “minified” version of the CSS style sheet when not running in debug mode.

Setting the path of a global app.css file that “lives” in Static Application Files
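
The pasted entry typically looks like the reference below, assuming the file is named app.css; APEX shows this reference next to the file in Static Application Files.

#APP_FILES#app#MIN#.css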

To set the width of the Lead column in the Action Items page, first I set the Static Id of the LEAD column to action-team-lead, then I added the following small amount of CSS to the page-level CSS > Inline property:

td[headers=action-team-lead], th#action-team-lead {
  width: 10em;
}

The result is that the link column is set across the app to 6 characters wide — affecting both the Staff and Action Items pages — and the Lead column is set to 10 characters wide. We can see below that the Staff page now looks more professional:

Staff page with 6-character-wide link column from application-wide app.css styling

Similarly, the Action Items page also looks more polished, with the same 6-character-wide link column, and a 10-character-wide Lead column. This leaves the most amount of horizontal space for the action item name:

Action Items page with 6-character-wide link column and 10-character-wide Lead column

Merging Seed Data from Static App JSON

The last interesting technique I used in the sample app is how I’m seeding the data for the 26 example Staff members. I’m aware of the APEX built-in Supporting Objects feature called Data Package. It lets developers choose a set of tables whose data they want to install when an APEX app is first imported into a target workspace. That feature is great, but I wanted to explore how to gain more control over seed data installation to more seamlessly handle the situation where new seed data rows could be added over time in later versions of the app.

First, I added a staff.json file with the following format into the Static Application Files area in Shared Components:

{
    "staff": [
        {"NAME": "Abigail"},
        :
       etc.
        :
        {"NAME": "Zelda"}
    ]
}

Then I added the following Supporting Objects installation script that uses a merge statement to insert any Staff names from the staff.json file that don’t yet exist into the eba_demo_staff table. Its using clause selects the contents of staff.json from the static application files view and uses json_table() to select the NAME column data out of that JSON file. With this technique, subsequent versions of the app could add new NAME entries to the staff.json file and they would be added to the eba_demo_staff table at application import time. If the seed data were more complicated and multi-columned, the merge statement could also use a when matched then clause — which wasn’t needed for this simple sample — to update the values of other seed data columns that might have changed since the last time they were inserted/updated.

merge into eba_demo_staff tgt
using (
    with x as (
        select aaf.blob_content 
          from apex_application_files aaf, 
               apex_applications aa
         where aa.application_id = 
                    coalesce(
                        apex_application_install.get_application_id,
                        to_number(v('APP_ID')))
           and aaf.flow_id = aa.application_id
           and aaf.filename = 'staff.json'
    )
    select y.name
    from x,
         json_table(x.blob_content, '$.staff[*]' columns
        (
            name varchar2(255) path '$.NAME'
        )) y
) src
on (tgt.name = src.name)
when not matched then
    insert (name)
    values (src.name)
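
To illustrate that last point, if the staff seed data also carried a hypothetical EMAIL attribute (not part of this sample), the extra clause might look like the sketch below; the using clause’s json_table would also need a corresponding EMAIL column.

-- Hypothetical addition: refresh attributes of already-inserted seed rows
when matched then
    update set tgt.email = src.email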

Trying the Sample App

You can download the sample app from here. Try creating a new action item and causing some of the validation messages to appear. Then try editing an existing action item and repeating the process to cause the aggregate validations to fail. Finally make sure you can delete an action item. I hope you find useful ways to incorporate this technique into the next APEX app you build.

Final Thoughts

Using appropriate SELECT statements and the technique outlined in this article, you can make quick work of other kinds of typical aggregate validation use cases like checking that the …

  • sum of a child row column does not exceed a limit
  • sum of percentages across child rows equals 100%
  • count of child rows is within an expected range

The validations can get even more complex like ensuring the changes to the line items of an order don’t put the customer’s total monthly orders above their credit limit. Let your imagination be your guide!
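
For instance, a percentage-sum rule could follow exactly the same pattern as one_lead_per_action_team. The sketch below uses hypothetical eba_demo_budget and eba_demo_budget_lines tables, a hypothetical message key constant, and the same package-private error() helper shown earlier, just to show the shape.

-- Hypothetical sketch: ensure child percentages total exactly 100
procedure percentages_must_total_100(
    p_budget_id in number)
is
    l_total number;
begin
    -- Do nothing if the parent row was deleted in this transaction
    for j in (select id
                from eba_demo_budget
               where id = p_budget_id) loop
        select nvl(sum(percentage),0)
          into l_total
          from eba_demo_budget_lines
         where budget_id = p_budget_id;
        if l_total != 100 then
            error(c_msg_lines_must_total_100);
        end if;
    end loop;
end;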

Bulk Data Loading from a Zip

A friend is learning APEX to automate his business mentoring athletes. In his practice, he captures training performance metrics, then needs to analyze clients’ progress toward their goals over time. His instruments record an athlete’s performance data in a comma-separated values (CSV) format. Recently, he asked if APEX could load data from a CSV file.

After showing him how easy it was, he added another twist: “What if I have a directory full of CSV files, organized into subdirectories by athlete? After a week’s sessions with clients, there could be a lot of files. Will I have to import them one by one?” Great question!

He said each directory reflects the client name, each file name includes the date and time of the session (SESSION_2023_08_11_13_45.csv), and each file has the same format. A perfect chance to show him three more APEX features:

  • Easy file uploading,
  • Simplified Zip file processing, and
  • Automatic data loading using a pre-defined column mapping.

The sample app you can download using the link at the end of the article illustrates the declarative configuration and small amount of code required to put these techniques into action.

Setting Up the Data Model

For simulation purposes, I chose a CSV file format that includes the name of an activity followed by two numerical values in each row like this:

AAA,333,887
BBB,123,899
CCC,445,1299

I created the following tables to store the athletes, the training sessions for each athlete, and the athlete session training activity data. I added a “staging” table to hold the uploaded data temporarily before moving it to the athlete session data table, related to the appropriate athlete’s training session.

Four tables used by the sample application to upload athlete training session data
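
As with the earlier sample, here is a rough sketch of what the DDL for these tables might look like. The table and column names come from the code later in the article; the data types and constraints are assumptions, and the downloadable sample is the authoritative version.

-- Sketch only: the sample app's actual DDL may differ
create table eba_demo_athletes (
    id    number generated by default on null as identity primary key,
    name  varchar2(255) not null
);

create table eba_demo_athlete_sessions (
    id            number generated by default on null as identity primary key,
    athlete_id    number not null references eba_demo_athletes (id),
    session_date  date not null
);

create table eba_demo_athlete_session_data (
    id                  number generated by default on null as identity primary key,
    athlete_session_id  number not null references eba_demo_athlete_sessions (id),
    activity            varchar2(255),
    value1              number,
    value2              number
);

-- "Staging" table that the data load definition targets
create table eba_demo_uploaded_session_data (
    activity  varchar2(255),
    value1    number,
    value2    number
);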

Creating the Data Load Definition

The next step was creating an APEX Data Load Definition to capture how the CSV column data maps to columns in the EBA_DEMO_UPLOADED_SESSION_DATA table. In the APEX App Builder, under Shared Components, in the Data Sources section, I chose Data Load Definitions and clicked Create. On step 1, as shown below, I entered the name athlete_session_data_csv and chose the EBA_DEMO_UPLOADED_SESSION_DATA table.

Step 1 of defining a new data load definition: providing a name and picking a table.

On step 2 of the wizard, I dragged and dropped an example SESSION_2023_08_04_18_20.csv data file into the file upload drop zone.

Step 2 of defining a data load definition: provide a sample CSV file

On step 3, I unchecked the First line contains headers checkbox and mapped the values of the three CSV file columns to appropriate columns of the EBA_DEMO_UPLOADED_SESSION_DATA table. If the CSV file had contained an initial “header” row of column names, APEX might have been able to guess the mapping automatically. However, the files my friend showed me did not have headers so I wanted to show him that use case.

Step 3 of defining a data load definition: mapping the columns

To complete the wizard, I clicked on (Create Data Load).

Create Package for Handling the Upload

I used the Object Browser to create an eba_demo_bulk_data_load package with the spec below. The from_uploaded_zip_file() procedure accepts a single parameter for the name of the uploaded zip file to process. In the package body, I initially left the implementation of from_uploaded_zip_file() as a “TODO” task by using a null; statement. I’d later circle back to add in the upload processing code that I explain in detail below. For the moment, having the specification in place was enough to “wire up” the declarative interaction between the page in the next section and the procedure.

create or replace package eba_demo_bulk_data_load is 
    procedure from_uploaded_zip_file( 
        p_file_name in varchar2); 
end;

Uploading the Zip File

On the Home page, I added a single page item named P1_ZIP_FILE_UPLOAD of type File Browse…, set its Storage Type property to Table APEX_APPLICATION_TEMP_FILES, and its Purge File at property to End of Request. I also configured its File Types property to application/zip to guide the user to only choose zip files. Finally, I added a button named Upload_and_Load_All_Data to submit the page. This is the only configuration needed to upload a file for server-side processing during the page submit.

Adding a File Browse… page item to let an end user upload a zip file.

Processing the Zip File Contents

In the Processing tab of the Home page, I added an Invoke API page process to call the from_uploaded_zip_file() procedure in the eba_demo_bulk_data_load package. I configured its Server-side Condition so it only executes when the Upload_and_Load_All_Data button is pressed.

Adding an Invoke API page process to execute when the Upload button is pressed

As shown below, I selected the p_file_name function parameter in the Processing tab and used the Property Editor to configure its value to come from the P1_ZIP_FILE_UPLOAD page item. A File Browse… page item’s value gives the unique name of the uploaded file in the APEX_APPLICATION_TEMP_FILES view. We’ll use this info below to process the CSV files inside the uploaded zip file.

Configuring PL/SQL function argument value to come from P1_ZIP_FILE_UPLOAD page item

With these pieces in place, I had a working zip file upload page. Next, I turned my attention to the small amount of code required to “unpack” the zip file contents and process each CSV file it contained…

Loading Each CSV File in the Zip

In the implementation of the from_uploaded_zip_file() procedure, I started by selecting the binary contents of the uploaded zip file from the APEX_APPLICATION_TEMP_FILES view into an l_zipfile BLOB variable, using the unique file name passed in the p_file_name parameter. Note that in the WHERE clause I’m using the NAME column and not the FILENAME column. The former is the unique name APEX assigns to the uploaded file (e.g. 12345678/athletes.zip), while the latter is just the “plain” name of the file the end-user provided (e.g. athletes.zip).

-- Get the uploaded zip file
select blob_content
into l_zipfile
from apex_application_temp_files
where name = p_file_name;

Next, using the handy apex_zip package, I get the list of CSV files inside this uploaded zip file:

-- Get list of files from the zip
l_files := apex_zip.get_files(l_zipfile);

Then I loop over the file names and load them one by one into the “staging” table using the pre-defined Data Loading Definition I created above. Before each iteration, I delete all the rows from the staging table, and in the next section I’ll explain how I determine the athlete and athlete session with which to associate the loaded data from the current CSV file in the loop. Notice I use another apex_zip package function get_file_content() to access the contents of the current CSV file. I pass that as the value of the p_data_to_load parameter of the apex_data_loading.load_data() function, referencing the static id of the data loading definition as the second parameter value.

-- Process the individual files inside the zip
for j in 1..l_files.count loop
    -- Start by clearing any existing rows out of the temporary upload table
    delete from eba_demo_uploaded_session_data;
    -- Get the contents of the current CSV file in the zip file
    l_csv_file := apex_zip.get_file_content(l_zipfile,l_files(j));
    -- Load data from current CSV file into temp data loading table
    -- using 'athlete_session_data_csv' data loading definition
    l_result := apex_data_loading.load_data(
                    p_data_to_load => l_csv_file,
                    p_static_id    => 'athlete_session_data_csv');
    /* See below for logic to associate data with athlete session */
end loop;

Moving Uploaded Data to Final Destination

After loading each CSV file into the “staging” table, I use the name of the file and the name of its directory to determine the date and time of the session as well as the athlete name. The split() function in the apex_string package comes in handy to break the file name into parts using the slash as the separator character. Assuming the file names will all be of the form athletes/AMINA/SESSION_2023_08_01_12_10.csv, we can see that the athlete name is the second part of the fully-qualified file name and the session date and time is encoded in the third part. Notice I’ve used a date format mask that includes the literal text "SESSION_" and ".csv" to easily extract the date with no additional fuss.

for j in 1..l_files.count loop
    /* See above for logic to load the current CSV file */

    -- Extract the name of the athlete from the current file name
    -- that we are assuming will be of the form:
    -- athletes/AMINA/SESSION_2023_08_01_12_10.csv
    l_filename_parts := apex_string.split(l_files(j),'/');
    l_athlete_name   := l_filename_parts(2);
    l_session_date   := to_date(l_filename_parts(3),
                                '"SESSION_"YYYY_MM_DD_HH24_MI".csv"');
    /* See below for logic to lookup or create athlete */
end loop;

Using straightforward code, I look up the current athlete’s id based on their name. If the athlete name does not exist, I create a new Athlete row for them and return the new athlete id into l_athlete_id. Then I create a new athlete session for the athlete with the appropriate date and time extracted from the current CSV file name.

for j in 1..l_files.count loop
    /* See above for previous logic in the loop */
    -- Lookup the athlete id from their name. If not found, then
    -- create a new athlete with this name and return the newly
    -- assigned athlete id into l_athlete_id
    begin
        select id
          into l_athlete_id
          from eba_demo_athletes
         where name = l_athlete_name;
    exception
        when no_data_found then
            insert into eba_demo_athletes(name)
            values(l_athlete_name)
            returning id into l_athlete_id;
    end;
    -- Create a new athlete session for the athlete id
    -- and session date (inferred from the CSV file name above)
    insert into eba_demo_athlete_sessions(athlete_id, session_date)
    values (l_athlete_id, l_session_date)
    returning id into l_new_session_id;
    /* See below for final logic to move uploaded data in place */
end loop;

Finally, I select the uploaded performance data from the “staging” table and insert it into the Athlete Session Data table along with the correct Athlete Session foreign key so it’s related to the new athlete session for the current file’s athlete.

for j in 1..l_files.count loop
    /* See above for previous logic in the loop */
    -- Finally, move all the uploaded performance data for the current
    -- athlete's session into the newly created athlete session
    -- using the l_new_session_id returned above
    insert into eba_demo_athlete_session_data
           (athlete_session_id, activity, value1, value2)
    select  l_new_session_id,   activity, value1, value2
    from eba_demo_uploaded_session_data;
end loop;
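
Putting the fragments together, the body of from_uploaded_zip_file() looks roughly like the sketch below. The local variable declarations are my reconstruction based on the APEX APIs used (apex_zip, apex_data_loading, apex_string); the downloadable sample is the authoritative version.

-- Sketch of the complete procedure assembled from the fragments above
procedure from_uploaded_zip_file(
    p_file_name in varchar2)
is
    l_zipfile        blob;
    l_files          apex_zip.t_files;
    l_csv_file       blob;
    l_result         apex_data_loading.t_data_load_result;
    l_filename_parts apex_t_varchar2;
    l_athlete_name   varchar2(255);
    l_session_date   date;
    l_athlete_id     number;
    l_new_session_id number;
begin
    -- Get the uploaded zip file
    select blob_content
      into l_zipfile
      from apex_application_temp_files
     where name = p_file_name;

    -- Get list of files from the zip
    l_files := apex_zip.get_files(l_zipfile);

    -- Process the individual files inside the zip
    for j in 1..l_files.count loop
        -- Clear the "staging" table, then load the current CSV file
        delete from eba_demo_uploaded_session_data;
        l_csv_file := apex_zip.get_file_content(l_zipfile, l_files(j));
        l_result := apex_data_loading.load_data(
                        p_data_to_load => l_csv_file,
                        p_static_id    => 'athlete_session_data_csv');

        -- Derive athlete name and session date from the file name
        l_filename_parts := apex_string.split(l_files(j), '/');
        l_athlete_name   := l_filename_parts(2);
        l_session_date   := to_date(l_filename_parts(3),
                                    '"SESSION_"YYYY_MM_DD_HH24_MI".csv"');

        -- Look up or create the athlete
        begin
            select id
              into l_athlete_id
              from eba_demo_athletes
             where name = l_athlete_name;
        exception
            when no_data_found then
                insert into eba_demo_athletes(name)
                values (l_athlete_name)
                returning id into l_athlete_id;
        end;

        -- Create the athlete session, then move the staged rows into place
        insert into eba_demo_athlete_sessions(athlete_id, session_date)
        values (l_athlete_id, l_session_date)
        returning id into l_new_session_id;

        insert into eba_demo_athlete_session_data
               (athlete_session_id, activity, value1, value2)
        select  l_new_session_id,   activity, value1, value2
          from eba_demo_uploaded_session_data;
    end loop;
end from_uploaded_zip_file;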

Enabling Background Processing

Using the techniques I described in Page Processing Power Unchained, it would be very easy to introduce an execution chain into the Home page’s processing tab and make the current Invoke API page process its child process. You would accomplish that by setting the Execution Chain property of the latter to the name of the newly-created chain. By setting the new execution chain’s Execute in Background property to ON, its Temporary File Handling property to Move, and its Temporary File Items property to P1_ZIP_FILE_UPLOAD, the zip file CSV upload processing could be done effortlessly in the background while the user moves on to other tasks. This extra flourish would be appropriate if the zip might contain a large number of CSV files to process.

Trying Out the Sample App

You can download the APEX 23.1 sample app from here. This example athletes.zip file contains simulated training data for multiple sessions for four different athletes: BIFF, CHIP, ANNEENA, and WILMA. Try uploading the athletes.zip file then browsing the Athletes Session Data page.