Custom Date Picker Formatting

Overview

APEX 22.2 introduced a new lighter-weight date picker that is faster, offers better accessibility, and features a flexible day formatter function. Using the Format Date Picker Days plug-in, low-coders can easily format a date picker using a SQL query or an ICS calendar file to style a set of days in the “mini month” calendar used to select a date. In this article we explore the day formatter function, learn what the plug-in can do declaratively, and see how to extend its capabilities while gaining some JavaScript insights along the way.

A Simple dayFormatter Function

The APEX 22.2 date picker supports an optional day formatter function. You can use it to customize the appearance of any day in its “mini month” calendar. Once you’ve assigned a dayFormatter function to a date picker, it gets invoked for each day in the mini month when the page item is rendered or refreshed. On each invocation, your function returns an object whose property values determine three aspects of the current day in the mini month:

  • Whether the day is enabled or disabled,
  • What CSS class (if any) should be used to format the day, and
  • What tooltip (if any) should be shown when the user hovers over the day.

Your dayFormatter function receives a single parameter whose value is the date being formatted. This date parameter is always a string in the standard ISO 8601 format YYYY-MM-DD. Your function must return an object with three properties named disabled, class, and tooltip. The following simple dayFormatter function ignores the date passed in and returns the same values for every date. The result is that all days are enabled, styled using the CSS class named customDayStyle, and show the tooltip “Choose a delivery date”:

function (pCurrentDate) {
   // Ignore the pCurrentDate and format every day the same
   return {
      disabled: false,                   /* Day is not disabled */
      class: "customDayStyle",           /* Use this CSS class  */
      tooltip: "Choose a delivery date"  /* Use this tooltip    */
   };
}

In practice, your day formatter function will typically implement conditional behavior based on other factors, using the values returned for the class and tooltip properties to call attention to interesting dates in the mini month. Returning null for the class property signals that the day should be styled in the normal way, and returning null for tooltip indicates that the day will have no tooltip.
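For example, a day formatter that conditionally highlights only certain dates might look like the following sketch. The date list, class name, and tooltip text here are hypothetical placeholders, not values from the article's sample app:

```javascript
// Hypothetical list of dates to highlight
const specialDates = ["2024-12-25", "2024-07-04"];

// Build a dayFormatter that highlights only the listed dates
function makeDayFormatter(datesToHighlight) {
   return function (pCurrentDate) {
      const isSpecial = datesToHighlight.includes(pCurrentDate);
      return {
         disabled: false,                                // every day stays selectable
         class:    isSpecial ? "customDayStyle" : null,  // null = normal styling
         tooltip:  isSpecial ? "Company holiday" : null  // null = no tooltip
      };
   };
}

const formatter = makeDayFormatter(specialDates);
formatter("2024-12-25").class;  // "customDayStyle"
formatter("2024-12-26").class;  // null
```

Such a factory function keeps the conditional logic testable on its own before you wire it up to a page item.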

Assigning the dayFormatter During Page Load

Typically, you will assign your day formatter function at page load time. You can either use the Execute when Page Loads block of JavaScript code to accomplish this, or write a JavaScript action step in a Page Load dynamic action event handler. For example, to assign the above dayFormatter function to a date picker page item named P13_DELIVERY_DATE, you would write the following block of JavaScript in the Execute when Page Loads script of page number 13. Notice that the last line calls refresh() on the page item to which you’ve just assigned the day formatter function. This refresh() call causes the new day formatting function to take effect before the user first sees the date picker.

// Assign a dayFormatter function to P13_DELIVERY_DATE date picker
apex.items.P13_DELIVERY_DATE.dayFormatter = function (pCurrentDate) {
   // Ignore the pCurrentDate and format every day the same
   return {
      disabled: false,                   /* Day is not disabled */
      class: "customDayStyle",           /* Use this CSS class  */
      tooltip: "Choose a delivery date"  /* Use this tooltip    */
   };
};
// Refresh the page item to engage the new day formatter
apex.items.P13_DELIVERY_DATE.refresh();

Disabling Weekends in the Mini Month

A common use case is disabling weekends to guide the user to pick a weekday. You can accomplish this by assigning a simple dayFormatter function that returns true for its disabled property for any day that is Saturday or Sunday. The apex.date namespace contains many useful functions for working with date values in JavaScript. Its parse() function converts the string format of the day being formatted into a JavaScript Date object. You can then call the getDay() method on the Date object to get the day of the week (0=Sunday, 1=Monday, …, 6=Saturday). So, the following slightly modified dayFormatter only allows the user to pick a weekday for a delivery date, uses no CSS class, and returns no tooltip for weekend days that are disabled:

// Assign a dayFormatter function to P13_DELIVERY_DATE date picker
apex.items.P13_DELIVERY_DATE.dayFormatter = function (pCurrentDate) {
   // Parse the current day's date using the ISO 8601 format mask
   const curDate = apex.date.parse(pCurrentDate,"YYYY-MM-DD");
   const dayOfWeek = curDate.getDay();
   const isWeekend = (dayOfWeek === 6 /*Sat*/ || dayOfWeek === 0 /*Sun*/);
   return {
      disabled: isWeekend,
      class:    null,            /* no day formatting class         */
      tooltip:  isWeekend ? null /* no tooltip for weekend, else... */
                          : "Choose a delivery date"
   };
};
// Refresh the page item to engage the new day formatter
apex.items.P13_DELIVERY_DATE.refresh();

This produces the following effect:

P13_DELIVERY_DATE date picker showing disabled weekends and a custom tooltip
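Outside of APEX, the same weekend check can be sketched with JavaScript's native Date object. This standalone version is an illustration only, not the apex.date API; a date-only string like "2024-01-06" is parsed by the Date constructor as UTC midnight, so getUTCDay() returns the weekday without timezone surprises:

```javascript
// Standalone sketch of the weekend check, without apex.date.parse()
function isWeekend(pCurrentDate) {
   // Date-only strings parse as UTC midnight, so use getUTCDay()
   const dayOfWeek = new Date(pCurrentDate).getUTCDay(); // 0=Sun ... 6=Sat
   return dayOfWeek === 0 || dayOfWeek === 6;
}

isWeekend("2024-01-06"); // a Saturday -> true
isWeekend("2024-01-08"); // a Monday   -> false
```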

Getting the Format Date Picker Days Plug-in

The APEX team’s GitHub site has a Format Date Picker Days plug-in you can use to handle some day formatting use cases declaratively using either an ICS calendar file or a SQL query. To download the plug-in, visit https://oracle.github.io/apex/, scroll down to the Plug-ins section of the page shown below, and right-click on the Download Plug-In link to choose the “Save Link As…” (or similar) option in your browser. This will save a plug-in file named dynamic_action_plugin_format_datepicker_days.sql to your computer. You can import this file as a plug-in into any APEX 22.2 application or later.

Plug-ins section on APEX GitHub Page at https://oracle.github.io/apex/

Formatting Dates Using an ICS File

The 22.2.2 version of the Sample Calendar app in the gallery has a Day Formatting (Plug-in) page 540 that contains two examples of date pickers using the Format Date Picker Days plug-in to show American holidays using the Universal Theme color class u-hot-text. One date picker is the page item P540_US_HOLIDAYS_PLUGIN that references a public Google Calendar URL for the ICS file of holidays in the USA. It is configured using a Page Load dynamic action named Load US Holidays into Date Picker that uses an action step of type Format Date Picker Days [Plug-In] as shown below.

Declaratively formatting a date picker using an ICS calendar file

The result is that American holidays in the calendar show with the Universal Theme’s “Hot Text” CSS class like this:

Date Picker showing US holidays using Universal Theme u-hot-text CSS class

The same page uses a Row Initialization [Interactive Grid] dynamic action to format the START_DATE date picker column in an Interactive Grid region on the page using the same ICS calendar file.

Formatting Dates Using a SQL Query

The Announcements sample you can download at the end of this article is a simple app that lets the user enter time-sensitive announcements. Each announcement has a display from date, a display to date, and a purge date. For the sake of argument, let’s say that announcements are only purged on the last day of the month and that we want to let the user pick a purge date only in the next three months.

By configuring the Page Load dynamic action to use an action step of Format Date Picker Days [Plug-In], we can choose the SQL Query option and use a query like the one below to return the last days of the next three months. The query uses two union all operators and the SQL last_day() and add_months() functions to return a total of three rows. The first row has the last day of the current month, the second has the last day of the next month, and the third has the last day of two months from now. Notice that each of the three select statements returns the same date value for the start_date and end_date columns since each represents a single-day period. However, the plug-in supports returning a range of consecutive days as a single (start_date, end_date) pair of values in a row. All the days in that date range will be styled as indicated by the other column values in that row.

select /* last day of this month */
   trunc(last_day(sysdate))               as start_date,
   trunc(last_day(sysdate))               as end_date,
   'bold-and-red'                         as css_class,
   'Purge Day'                            as tooltip,
   0                                      as is_disabled
from dual
union all
select /* last day of next month */
   trunc(last_day(add_months(sysdate,1))) as start_date,
   trunc(last_day(add_months(sysdate,1))) as end_date,
   'bold-and-red'                         as css_class,
   'Purge Day'                            as tooltip,
   0                                      as is_disabled
from dual
union all
select /* last day of month after that */
   trunc(last_day(add_months(sysdate,2))) as start_date,
   trunc(last_day(add_months(sysdate,2))) as end_date,
   'bold-and-red'                         as css_class,
   'Purge Day'                            as tooltip,
   0                                      as is_disabled
from dual
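To illustrate the range support mentioned above, a single hypothetical row could style a whole stretch of consecutive days at once. This example row (not part of the sample app's query) would disable the entire first week of next month:

```sql
select /* hypothetical example: one row styles seven consecutive days */
   trunc(add_months(sysdate,1),'MM')       as start_date,  -- 1st of next month
   trunc(add_months(sysdate,1),'MM') + 6   as end_date,    -- 7th of next month
   null                                    as css_class,
   'Unavailable all week'                  as tooltip,
   1                                       as is_disabled
from dual
```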

Next, you ensure that your plug-in usage correctly configures the names of the query result columns that provide the values driving the formatting, as shown below, making sure to enter any page item names into the Item to submit field if the query references them as bind variables:

Configuring the query result column names that provide day formatting information

The bold-and-red CSS class is defined in the Inline CSS section of the page. I originally had defined it like this:

.bold-and-red {
  font-weight: 900 !important;
  color: var(--ut-palette-danger) !important;
}

However, when that was not working for me, my colleague Ronny gave me the tip that some important changes made to improve the accessibility of the date picker require writing the style rule like this instead:

.bold-and-red {
}
.bold-and-red > span { 
  font-weight: 900 !important;
  color: var(--ut-palette-danger) !important;
}

This produces the Purge Date date picker that shows the last day of the next three months in bold red text like this:

Using SQL to Specify Only Available Days

Using the three rows returned by my SQL query above, the Format Date Picker Days plug-in formatted those three days in a special way. But what if I wanted those three days to be the only days the user can pick? Using the plug-in on its own, this would require writing a query that returns a row for each range of dates I wanted to declaratively disable, using 31-DEC-9999 to represent the “End of Time”. Maybe for some use cases that would be easy to do, but as a general strategy it seemed complicated. So, I experimented with the idea of combining the declarative plug-in’s behavior with a custom dayFormatter function to see if I could accomplish my goal in a more general way.

What I discovered is that the plug-in implements its declaratively-configured functionality by assigning an appropriate dayFormatter function to the date picker page item. It’s a day formatter function the low-code developer did not have to write herself, but a day formatter function nonetheless. My strategy was to first let the plug-in assign its day formatter function, then incorporate that function into my own day formatter implementation. This way, I could “intercept” the day-by-day formatting and override the disabled property to disable all days by default unless the day was explicitly enabled by the plug-in’s day formatter function.

The day formatter function I devised is configured in a Page Load dynamic action event handler step of type Execute JavaScript that immediately follows the action step of type Format Date Picker Days [Plug-in] that performs the SQL-based declarative configuration. I’ve used the 22.2 feature of more descriptive action step names to make the intent of the dynamic action easier to read:

Dynamic Action steps configure a combination of declarative and programmatic day formatting

The code of the “…and Enable ONLY Those Three Days” action step appears below. It replaces the day formatter function that the plug-in configured in the previous dynamic action step with a new one that first delegates to the plug-in-configured function, then overrides the disabled property to return true for all other days in the mini month:

// Access the dayFormatter the Plug-in just assigned before this
const pluginFormatter = apex.items.P3_PURGE_DATE.dayFormatter;

// Replace the date picker's dayFormatter with my own
apex.items.P3_PURGE_DATE.dayFormatter = function (pDateISOString) {

  // That first invokes original pluginFormatter to get its result object
  const plugInFormatterResult = pluginFormatter(pDateISOString);

  // Then returns a result that defaults disabled to true
  return {
      // disable by default if not in plug-in formatter's result object
      disabled: 'disabled' in plugInFormatterResult 
                 ? plugInFormatterResult.disabled : true,

      // return class if in plug-in formatter's result object
      class: 'class' in plugInFormatterResult 
              ? plugInFormatterResult.class : null,

      // return tooltip if in the plug-in formatter's result object
      tooltip: 'tooltip' in plugInFormatterResult 
               ? plugInFormatterResult.tooltip : null
  };
};
apex.items.P3_PURGE_DATE.refresh();

The result is a mini month for the Purge Date field of an announcement that only allows the last day of the current and two successive months to be selected. These three days are formatted with an extra-bold, “urgent” red color to highlight the destructive nature of the announcement purge operation.

Complementing declarative day formatting with custom formatting
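Stripped of the APEX specifics, the delegation pattern can be verified in isolation. In this sketch, pluginFormatter is a stand-in for the function the plug-in would assign, with a hypothetical hard-coded date playing the role of a day the plug-in explicitly enables:

```javascript
// Stand-in for the formatter the plug-in would assign (hypothetical date)
const pluginFormatter = function (pDateISOString) {
   return pDateISOString === "2024-01-31"
      ? { disabled: false, class: "bold-and-red", tooltip: "Purge Day" }
      : {};  // the plug-in leaves other days untouched
};

// Wrap it so every day NOT explicitly handled is disabled by default
const wrappedFormatter = function (pDateISOString) {
   const r = pluginFormatter(pDateISOString);
   return {
      disabled: 'disabled' in r ? r.disabled : true,
      class:    'class'    in r ? r.class    : null,
      tooltip:  'tooltip'  in r ? r.tooltip  : null
   };
};

wrappedFormatter("2024-01-31").disabled; // false: enabled by the plug-in
wrappedFormatter("2024-01-15").disabled; // true: disabled by default
```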

A Lesson Learned on Async Nature of JavaScript

My earlier attempts to get this “delegating day formatter” idea to work were not functioning as expected. My colleagues Stefan Dobre and Ronny Weiss helped me understand that this was due to the asynchronous nature of JavaScript. When two dynamic action steps are listed in sequence, if an earlier step performs an AJAX call to the APEX server, the second step in the sequence might execute before the first one has completed. The Format Date Picker Days plug-in retrieves the results of the SQL statement by performing a round-trip to the APEX engine on the server. However fast this operation may be, it is still a network exchange that takes some time. I learned from my colleagues that all I needed to do was ensure the “Wait for Result” property of the plug-in action step was switched on, as shown in the figure below. This setting guaranteed my delegating day formatter code would only run after the plug-in had retrieved its data and set up its own day formatter function first. After checking that setting, everything started working correctly.

Avoiding async timing complexities by simply ensuring the “Wait For Result” switch is on
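The timing pitfall can be simulated in a few lines of plain JavaScript. The names below are hypothetical; the point is that awaiting the first step, which is what “Wait for Result” effectively does, guarantees the plug-in's formatter exists before the second step tries to wrap it:

```javascript
// Minimal simulation of two sequenced dynamic action steps
let pluginFormatter = null;

function formatDatePickerDaysStep() {
   // Simulate the asynchronous round-trip to the APEX engine
   return new Promise(resolve => setTimeout(() => {
      pluginFormatter = () => ({ disabled: false });
      resolve();
   }, 10));
}

async function runActionSteps() {
   await formatDatePickerDaysStep(); // "Wait for Result" on: block until done
   // Only now is it safe for the second step to wrap pluginFormatter
   return typeof pluginFormatter;    // "function"
}
```

Without the await, the second step would read pluginFormatter while it was still null.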

Getting the Sample App

To check out the APEX 22.2 Announcements sample app that has the Purge Date date picker page item described above on the modal drawer page (3) used to create or edit an announcement, download the app from here. As mentioned above, also explore page 540 in the 22.2.2 version of the Sample Calendar app. Thanks again to my colleagues Stefan and Ronny for the precious advice they offered while working on this example.

Simplify Sharing Region Data

I’ve frequently found it useful to share one region’s data in another. For example, as a user narrows her results with a faceted search or smart filter, a chart, map, or dynamic content region shows the same results in a different way. My colleague Carsten explains the technique in his Add a Chart to your Faceted Search Page blog post, and it has come in handy for me many times. But after the fifth set of object types and pipelined table function I authored to enable the data sharing, I set out to automate the process to save myself time in the future. As an APEX developer, the ability to create development productivity apps for yourself is a cool superpower!

The Region Data Sharing Helper app featured in this article lets you pick a region from any app in your workspace and easily download scripts to share that region’s results in another region in the same application. After first explaining how to use the app, I highlight some interesting details of its implementation.

Overview

The app lets you select regions for which to generate data sharing artifacts. Since data sharing requires both data and a unique region identifier, the app only shows regions with a static id having a source location of Local Database, REST Enabled SQL, or REST Source. After adding a region to the selected list for artifact generation, you can adjust the columns to include and the base name if needed. Then, you can download the generated artifacts for that region. By incorporating the SQL scripts into your APEX app, you can configure additional regions in the same app to use the original one’s data using the SELECT statement provided in the accompanying README file.

The app maintains the list of selected regions to remember the subset of columns you configure and the base name of the generated artifacts you prefer for each region. If you later update one of the original regions in the App Builder in a way that affects its columns, just click the (Regenerate) button to refresh its datasource info and download the artifacts again.

Choosing a Region

As shown in the figure below, choose a region whose data you want to share with other regions in the same app. Click (Add Region) to add it to the list of selected regions below.

Region Data Sharing Helper app showing Employees region in HR Sample app

Sometimes the selected region is immediately ready for artifact generation, but other times it may take a few seconds to show a status of Ready. If you see a status of Working, as shown below, wait 10 seconds and click the refresh icon to see if it’s ready yet.

Click (Refresh) after 10 seconds if status shows as Working

Once the status of a selected region is Ready, you can adjust the base name and included columns and save any changes. If you’ve modified the original region in your app in a way that affects its data source columns, click Regenerate to refresh the set of available columns to choose from.

Adjusting the included columns and base name as necessary

Once you’ve saved any changes, the download button will reappear. Click it to get a zip file containing the generated SQL scripts and a README file explaining what they are and how to use them.

Clicking the download button to produce the data sharing artifacts

When you no longer anticipate needing to download data sharing artifacts for a selected region, you can remove it from the list of selected ones by clicking the Delete button. This simply removes it from the helper app’s list of selected regions. You can always add it again later as a selected region if the need arises.

Exploring the Downloaded Artifacts

After clicking the download button for the Employees region on page 1 of the HR Sample application shown above, since the Base Name was employees, a zip file named employees-region-data-sharing-artifacts.zip is downloaded. Extracting the zip file reveals three generated files as shown below.

Mac finder showing the contents of the downloaded zip file

The README.html is the place to start, since it explains the other two files.

README.html file explains the generated artifacts and suggests SELECT statement to use

The employees_types.sql script defines the employees_row and employees_tab types used by the pipelined table function in the other file. The employees_function.sql script defines the employees_data pipelined table function and depends on the types. You can include these scripts directly in your APEX application as Supporting Objects scripts, or add them to your set of existing installation scripts. Due to the function’s dependency on the types, just ensure that the types script is sequenced before the function script.

For example, incorporating the generated SQL scripts into the HR Sample app above as supporting objects scripts would look like this:

Supporting Objects scripts after adding the two data sharing SQL files

Using Pipelined Table Function in a Region

The README file contains a suggested SELECT statement to use as the source of the region where you want to reuse the original region’s data. After running the types SQL script then running the function SQL script, you can try the suggested statement in another region in the same HR Sample application. In the example below, I’ve used it as the source of a chart region on the same page as the original faceted search region.

Using the SELECT statement suggested in the README file in another region in the same app

The query I’ve used appears below, and of course I could leave out columns that are not necessary in the new region. For simplicity, I used the statement verbatim as I found it in the README file:

select ename,
       empno,
       sal,
       comm,
       deptno
  from table(employees_data)

After also configuring a dynamic action on the After Refresh event of the Employees region to declaratively refresh the Salaries chart region, we immediately see the data sharing working in action, reflecting the filtering the current user performs in the original region, too.

HR Sample app showing generated data-sharing artifacts in use to reflect filtered data in a chart

The rest of the article explains some interesting details of the Region Data Sharing Helper app implementation. If you’re primarily interested in trying out the functionality, you’ll find the download link at the end.

Cascading Select Lists for App, Page, Region

An extremely handy consequence of Oracle APEX’s model-driven architecture is that all application metadata is queryable using SQL. By using the APEX dictionary views, it’s incredibly easy to create new APEX applications that introspect application definitions and provide new value. In the case of the Region Data Sharing Helper app, I needed three select lists to let the user choose the application, page, and region for artifact generation. The query for the P1_APPLICATION select list page item appears below. It uses appropriate where clauses to avoid showing itself in the list and to only show applications that have at least one region with a non-null static id configured and a local database, remote database, or REST service data source.

select a.application_name||' ('||a.application_id||')' as name,
       a.application_id
from apex_applications a
where a.application_id != :APP_ID
and exists (select null
              from apex_application_page_regions r
             where r.application_id = a.application_id
               and r.static_id is not null
               and r.location in ('Local Database',
                                  'Remote Database',
                                  'Web Source'))
order by a.application_name

The query for the P1_PAGE select list page item is similar, retrieving only those pages in the selected application having some qualifying region. Notice how the value of P1_APPLICATION is referenced as a bind variable in the WHERE clause:

select page_name||' ('||page_id||')' as name, page_id
from apex_application_pages p
where p.application_id = :P1_APPLICATION
and p.page_function not in ('Global Page','Login')
and exists (select null
              from apex_application_page_regions r
             where r.application_id = p.application_id
               and r.page_id = p.page_id
               and r.static_id is not null
               and r.location in ('Local Database',
                                  'Remote Database',
                                  'Web Source'))
order by p.page_id

By simply mentioning P1_APPLICATION in the Parent Item(s) property of the P1_PAGE select list, the APEX engine automatically handles the cascading behavior. When the user changes the value of P1_APPLICATION, the value of P1_PAGE is reset to null, or to its default value if it defines one. APEX also implicitly submits the value of any parent items to the server when the select list’s query needs refreshing on parent value change. To save the user a click, I’ve defined the default value for P1_PAGE using a SQL query that retrieves the id of the first page in the available list of pages.

The P1_REGION select list page item uses a similar query against the apex_application_page_regions view, listing P1_PAGE as its parent item and providing an appropriate query for the item’s default value to automatically choose the first region in the list whenever the list gets reset by the cascading select list interaction.

Adding Chosen Region to Selected List

When you choose a region and click the (Add Region) button to add it to the list selected for artifact generation, the Add Region to Selected List page process runs. It uses the built-in Invoke API action to call the add_region_to_selected_list() function in the eba_demo_region_data_sharing package. If it’s the first time this combination of app id, page id, and region static id has been added, it inserts a new row in the eba_demo_reg_data_requests table to remember the user’s selection. Then it proceeds to describe the “shape” of the region’s data source: the names and data types of its columns. That info will be recorded in the xml_describe column in this row by a background job. I reveal next why a background job was required…

Describing a Region’s Data Source Profile

No dictionary view provides the names and data types of a region’s datasource in a way that works for all kinds of data-backed regions, so I had to think outside the box. I applied a meta-flavored twist on Carsten’s data-sharing strategy and created a pipelined table function get_region_source_columns() to programmatically fetch the region datasource column metadata I needed using the following approach:

  1. Use apex_region.open_context() on the region in question
  2. Retrieve the column count using apex_exec.get_column_count()
  3. Loop through the columns to discover the name and data type of each
  4. For each one, call pipe row to deliver a row of region column metadata

The complication I encountered was that apex_region.open_context() only works on regions in the current application. However, when the Region Data Sharing Helper app is running, it is the current app in the APEX session. I needed a way to momentarily change the current application to the one containing the region to describe.

I first tried using an APEX automation to run the region describe process in the background. I hoped a call to apex_session.create_session() inside the job could establish the appropriate “current app” context before using apex_region.open_context() to describe the region. However, I discovered the APEX engine already establishes the APEX session for the background automation job, and my attempt to change it to another app id didn’t produce the desired effect.

Carsten suggested trying a one-time DBMS Scheduler job where my code would be the first to establish an APEX session without bumping into the built-in session initialization. Of course, his idea worked great! Things went swimmingly from there. The code I use inside add_region_to_selected_list() to run the DBMS Scheduler one-time background job looks like this:

-- Submit one-time dbms_scheduler job to process the
-- request to describe the region in some app in the
-- workspace other than the current utility app
dbms_scheduler.create_job (
    job_name        => dbms_scheduler.generate_job_name,
    job_type        => 'plsql_block',
    job_action      => replace(c_gen_xml_job_plsql,c_id_token,l_id),
    start_date      => systimestamp,
    enabled         => true,
    auto_drop       => true);

The PL/SQL block submitted to the scheduler comes from the c_gen_xml_job_plsql string constant whose value appears below, after substituting the #ID# token with the primary key of the row in eba_demo_reg_data_requests representing the region that needs describing:

begin
  eba_demo_region_data_sharing.describe_region(#ID#);
  commit;
end;

When the background job runs describe_region(12345), that procedure retrieves the application id, page id, and region id from the eba_demo_reg_data_requests table using the id provided. It then calls create_apex_session_with_nls() to establish the right application context, and finally calls the xml_for_sql() function in my eba_demo_transform_group package to retrieve an XML document representing the results of the following query against the region metadata pipelined table function:

select * 
  from eba_demo_region_data_sharing.get_region_source_columns(
                                     :app_id,
                                     :page_id,
                                     :region_static_id)

It then updates the row in eba_demo_reg_data_requests to assign this XML region column profile as the value of its xml_describe column. This XML document will have the following format:

<ROWSET>
  <ROW>
    <NAME>EMPNO</NAME>
    <DDL_NAME>empno</DDL_NAME>
    <DATA_TYPE>number</DATA_TYPE>
    <DECLARED_SIZE/>
  </ROW>
  <ROW>
    <NAME>ENAME</NAME>
    <DDL_NAME>ename</DDL_NAME>
    <DATA_TYPE>varchar2</DATA_TYPE>
    <DECLARED_SIZE>50</DECLARED_SIZE>
  </ROW>
  <!-- etc. -->
</ROWSET>

If it’s the first time we’re describing this region, it also assigns a default value to the include_columns column to reflect that all columns are selected by default. The situation when it’s not the first time we’re performing the region describe has an interesting twist I explain later when we explore regenerating the artifacts for a region.

Forms with Previous/Next Navigation

The two user-editable fields in the eba_demo_reg_data_requests row are basename and include_columns. The former represents the base name that will be used to generate the name of the object type (basename_row), the collection type (basename_tab), and the pipelined table function (basename_data). The latter is stored as a colon-separated list of included column positions, relative to the positional order in which they appear in the xml_describe data profile XML document. Since you can add multiple regions to the selected list, I wanted to let the user page forward and backward through those selected entries.

To implement that row navigation, I learned a new trick by reading an article by my colleague Jeff Kemp. It revealed a neat feature of the APEX Form region that supports easy paging through an ordered set of rows. You configure it with a combination of settings on the form region itself, as well as on its Form – Initialization process in the Pre-Rendering section of the page.

On the form region, you set the data source and make sure to impose a sort order. That’s important to establish a predictable next/previous ordering for the rows the user navigates. For example, in the helper app the form region’s source is the local table eba_demo_reg_data_requests with an Order By Clause of generate_requested_on desc. This ensures the user sees the requests in most recently generated order.

The other half of the setup involves the Form – Initialization process. As shown below, after creating three page items with in-memory-only storage to hold their values, you configure the Next Primary Key Item(s), Previous Primary Key Item(s), and Current Row/Total Item with the respective names of the page items.

Form Initialization process settings for next/previous navigation

Informed by the form region’s source and sort order, along with these navigation-related settings, the APEX engine automatically computes the values of these page items when the page renders. I left P1_REQUEST_COUNT visible on the form as a display-only page item so the user can see she’s on region “3 of 5”. I made the other two page items hidden, but referenced their values as appropriate in the Handle Previous and Handle Next branches I configured in the After Processing section of the page’s Processing tab.

I chose to handle the navigation buttons with a Submit Page action combined with branches so the form’s Automatic Row Processing (DML) process would save any changes the user made on the current page before proceeding to the next or previous one. If the form had been read-only, or I didn’t want to save the changes on navigation, the values of P1_NEXT_REQUEST_ID and P1_PREVIOUS_REQUEST_ID could also be referenced as page number targets in buttons that redirect to another page in the current application. Lastly, I referenced these page item values again in the server-side conditions of the NEXT and PREVIOUS buttons so that they only display when relevant.

Using Transform Group to Generate Artifacts

The artifact generation and download is handled declaratively using a transform group, a capability I explain in more detail in a previous blog post. For generating the region data sharing artifacts to be downloaded in a single zip file, I added the following generate-data-sharing-artifacts.xml static application file. It includes a single data transform whose parameterized query retrieves the region’s column names and data types, filtered by the developer’s choice of columns to include in the generated artifacts. The SELECT statement uses the xmltable() function to query the region’s data profile stored in the xml_describe column. This offered me a chance to learn about the for ordinality clause to retrieve the sequential position of the <ROW> elements that xmltable() turns into relational rows. This made it easy to combine with the apex_string.split() function to retrieve only the columns whose sequential position appears in the colon-separated list of include_columns values.

<transform-group directory="{#basename#}-region-data-sharing-artifacts">
    <data-transform>
        <query bind-var-names="id">
            select x.name, x.ddl_name, x.data_type, x.declared_size
            from eba_demo_reg_data_requests r,
            xmltable('/ROWSET/ROW' passing r.xml_describe
                        columns
                        seq           for ordinality,
                        name          varchar2(255) path 'NAME',
                        ddl_name      varchar2(80)  path 'DDL_NAME',
                        data_type     varchar2(80)  path 'DATA_TYPE',
                        declared_size number        path 'DECLARED_SIZE'
            ) x
            where r.id = to_number(:id)
            and x.seq in (select to_number(column_value)
                        from apex_string.split(r.include_columns,':'))
            order by x.seq
        </query>
        <transformation stylesheet="generate-types.xsl" 
                        output-file-name="{#basename#}_types.sql"/>
        <transformation stylesheet="generate-function.xsl" 
                        output-file-name="{#basename#}_function.sql"/>
        <transformation stylesheet="generate-readme.xsl" 
                        output-file-name="README.html"/>                                          
     </data-transform>
</transform-group>

The transform group includes three transformations that each use an appropriate XSLT stylesheet to transform the region data profile information into a SQL script defining the object types, a SQL script defining the pipelined table function, and a README.html file.

Replacing Strings in XSLT 1.0 Stylesheets

XSLT 2.0 has a replace() function that works like Oracle’s regexp_replace(), but the Oracle database’s native XSLT processor implements only the XSLT 1.0 feature set. Therefore, we need an alternative to perform string substitution in a stylesheet that generates an artifact by replacing tokens in a template.

For example, the generate-readme.xsl stylesheet in the helper app defines a variable named query-template with an example of the SQL query you’ll use to select data from the pipelined table function. This template contains a token #COLUMNS# that we’ll replace with the comma-separated list of selected column names. It also has a #FUNCNAME# token we’ll replace with the name of the pipelined table function.

<xsl:variable name="query-template">select #COLUMNS#
  from table(#FUNCNAME#)</xsl:variable>

After first computing the value of the variable columns by using an <xsl:for-each> to loop over the names of the selected columns, the stylesheet performs the double token substitution while defining the example-query variable. If we were able to use XSLT 2.0, the example-query variable definition would look like this:

<!-- XSLT 2.0 string replace example -->
<xsl:variable name="example-query"
  select="replace(replace($query-template,'#COLUMNS#',$columns),
                  '#FUNCNAME#',$function-name)"/>

However, as mentioned above, we need to limit our stylesheets to functionality available in XSLT 1.0 to use the native Oracle database XSLT processor. Instead, we use nested calls to a named template called replace-string. Think of a named XSLT template as a function that accepts parameters as input and returns an output. So, the following example-query variable declaration calls the replace-string named template to replace the token #FUNCNAME# in the value of the stylesheet variable query-template with the value of the stylesheet variable named function-name:

<!-- Partial solution, replace first #FUNCNAME# token -->
<xsl:variable name="example-query">
   <xsl:call-template name="replace-string">
     <xsl:with-param name="text" select="$query-template"/>
     <xsl:with-param name="replace">#FUNCNAME#</xsl:with-param>
     <xsl:with-param name="with" select="$function-name"/>
   </xsl:call-template>
</xsl:variable>

But the result of the above would be the query template with only the #FUNCNAME# token replaced, leaving the #COLUMNS# token intact. XSLT variables are immutable: once assigned, their value cannot be updated. So we are not allowed to write multiple, consecutive <xsl:variable> statements that update the value of the same example-query variable, replacing one token at a time. Instead, XSLT relies on nested calls to the replace-string template inside the initial (and only allowed) assignment of the example-query variable. So after calling the replace-string template once to replace #FUNCNAME# with the value of $function-name, we use that result as the value of the input text parameter in a second, outer call to replace-string that swaps #COLUMNS# with the value of $columns like this:

<!-- Final solution, replace #FUNCNAME#, then #COLUMNS# -->
<xsl:variable name="example-query">
  <xsl:call-template name="replace-string">
    <xsl:with-param name="text">
      <xsl:call-template name="replace-string">
        <xsl:with-param name="text" select="$query-template"/>
        <xsl:with-param name="replace">#FUNCNAME#</xsl:with-param>
        <xsl:with-param name="with" select="$function-name"/>
      </xsl:call-template>
    </xsl:with-param>
    <xsl:with-param name="replace">#COLUMNS#</xsl:with-param>
    <xsl:with-param name="with" select="$columns"/>
  </xsl:call-template>
</xsl:variable>

The generate-types.xsl and generate-function.xsl stylesheets perform this same nested invocation of replace-string, but have more tokens to substitute. As expected, this results in more deeply nested calls. However, the concept is the same as this two-token example from generate-readme.xsl.
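If you haven’t seen it before, replace-string is a standard XSLT 1.0 recursive idiom built from contains(), substring-before(), and substring-after(). The helper app’s actual template may differ in its details, but a typical implementation looks like this:

```xml
<!-- Recursive XSLT 1.0 string replacement (typical implementation) -->
<xsl:template name="replace-string">
  <xsl:param name="text"/>
  <xsl:param name="replace"/>
  <xsl:param name="with"/>
  <xsl:choose>
    <xsl:when test="contains($text, $replace)">
      <!-- Emit everything before the token, then the replacement -->
      <xsl:value-of select="substring-before($text, $replace)"/>
      <xsl:value-of select="$with"/>
      <!-- Recurse on the remainder to handle repeated tokens -->
      <xsl:call-template name="replace-string">
        <xsl:with-param name="text"
                        select="substring-after($text, $replace)"/>
        <xsl:with-param name="replace" select="$replace"/>
        <xsl:with-param name="with" select="$with"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <xsl:value-of select="$text"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```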

Vertically Centering the Add Region Button

When a button appears in the same row of a form as other page items, by default its vertical alignment with respect to its “row mates” doesn’t look as eye-pleasing as it could.

A button’s default vertical alignment in a row with other page items

The trick to improve the button’s visual appeal is to add the CSS class u-align-self-center to the Column CSS Classes property in the Page Designer like this:

Using u-align-self-center to vertically center button with page items in the same row

Show Buttons Based on Row Existence

I wanted the user to see an (Add Region) button if the region they choose is not yet in the selected list, and instead see a (Regenerate) button if the region is already in the list. And I wanted the correct button to show both when the page initially renders, as well as when the user changes the values of the select list interactively. I implemented this feature using a dynamic action with conditional hide and show action steps based on an existence check query.

As shown below, I started by using drag and drop in the Page Designer’s Layout editor to drag the (Regenerate) button into the same grid cell as the (Add Region) button. Since the user will see only one or the other at a time, they both can occupy that same grid cell just to the right of the P1_REGION select list.

Two buttons stacked in the same grid cell since the user will see only one or the other at runtime

Next, I added a dynamic action on the Change event of the P1_REGION page item. Recall that in the helper app, being in the selected list means that a row exists in the eba_demo_reg_data_requests table with the region’s unique combination of application id, page id, and region static id. The first action step in the dynamic action event handler uses Execute Server-side Code to run the following query, which always selects either 'Y' or 'N' into the hidden P1_REGION_IN_SELECTED_LIST page item. This tells us whether the region already exists in the list or not.

with region_in_selected_list as (
    select max(id) as id 
      from eba_demo_reg_data_requests
     where app_id = :P1_APPLICATION
       and page_id = :P1_PAGE
       and region_static_id = :P1_REGION
)
select case 
         when x.id is null then 'N' 
         else 'Y' end
into :P1_REGION_IN_SELECTED_LIST
from region_in_selected_list x;

Then I followed that action step with four conditional steps that use a client-side condition on the value Y or N to hide the button that needs hiding and show the button that needs showing. Notice how the action step name, new in 22.2, helps make the steps self-documenting with a more descriptive label than the generic “Hide” or “Show” that appeared before.

Dynamic action on P1_REGION change event to hide/show appropriate buttons

To finish the job, I set the Fire on Initialization switch to true for the four Hide/Show action steps, and provided the same existence SQL query as the default value of the P1_REGION_IN_SELECTED_LIST hidden page item. This ensured that the correct button shows both during the initial page render, as well as after the user interactively changes the region select list.

To Defer or Not to Defer (Rendering)

With the above configuration in place, the appropriate (Add Region) or (Regenerate) button appeared conditionally as desired. However, I noticed that when the page first rendered, both buttons would flash momentarily before my dynamic action hid the inappropriate one for the currently selected region. To avoid exposing this “behind the scenes” page setup to the user, enable the Deferred Page Rendering template option shown below. This setting lets you decide whether faster, incremental page rendering is more appropriate, or whether APEX should wait until page-load-time dynamic behavior completes before revealing the final state of the page to the user.

Deferred Page Rendering option hides page-load-time hide and show activity

Preserving Column Selection on Regeneration

When you click the (Regenerate) button for a region you’ve already added to the selected list, the add_region_to_selected_list() function updates the existing eba_demo_reg_data_requests row to set READY = 'N' and runs the background job to call describe_region() again. The region’s set of available columns might have changed since the previous describe, but the user may have carefully decided which of the region’s previous columns to include and exclude. So for usability it’s important to retain the included columns across the region data profile regeneration.

At the moment the describe_region() code has produced the fresh region data profile XML document and is about to update the existing row in eba_demo_reg_data_requests, we have the following “ingredients” available to work with:

  1. The old xml_describe region profile XML document
  2. The old include_columns value with a colon-separated list of index positions relative to the old region profile XML document
  3. The new region profile XML document just produced

What we need to “bake” with these ingredients is a new list of included columns that retains any columns that were previously in the included list while ignoring any of those included columns that are no longer available to reference. Also worth considering is that the index positions of the previous column names might be different in the new region data profile XML document.

After initially writing the code using multiple loops in PL/SQL, I challenged myself to come up with a single SQL statement to accomplish the job. In words, what I needed the statement to do was, “select a colon-separated list of index positions relative to the new XML describe document where the column name is in the list of names whose index positions (relative to the old XML describe document) were in the colon-separated list currently stored in include_columns.” I adjusted the query to also handle the situations when the old XML document is null and when the list of include_columns is null. This let me use the same routine to calculate the default value for the include_columns list for both new and updated rows in eba_demo_reg_data_requests. The private get_default_included_columns() function in the eba_demo_region_data_sharing package contains the SELECT statement I use to tackle the job.

select listagg(x.seq,':') within group (order by x.seq)
into l_ret
from xmltable('/ROWSET/ROW' passing p_new_xml_describe
        columns 
            seq for ordinality,
            name varchar2(128) path 'NAME') x
    where p_old_xml_describe is null
        or
        x.name in (
    select y.name
        from xmltable('/ROWSET/ROW' 
                passing p_old_xml_describe
                columns 
                    seq for ordinality,
                    name varchar2(128) path 'NAME') y
        where p_before_list is null 
            or y.seq in (
            select to_number(column_value)
                from apex_string.split(p_before_list,':'))); 

Conclusion

This was a fun learning project that taught me many new things about Oracle APEX in the process of building it. Always keep in mind that Oracle APEX is built with Oracle APEX, and that you, too, can use APEX to build yourself any kind of development productivity helper app you can imagine, not to mention plug-ins of all kinds to extend the core functionality of the platform. Thanks to colleagues Carsten, Jeff, and Vlad who offered me tips on aspects of this sample.

Getting the Sample Apps

You can download the Oracle APEX 22.2 export of the Region Data Sharing Helper app from here. In case you’re interested in the HR Sample app used to show off the generated artifacts in action, you can download that from here. The latter requires that you’ve first installed the EMP/DEPT sample dataset from the Gallery. Enjoy the simplified data sharing!

Refitting Filtered Map Points

Using a public dataset of New York City high schools, I built a page that lets students or parents filter the list based on various criteria. As they narrow their options, a map region reflects where the remaining schools are located, and a stacked bar chart lets them compare the contenders on a few important metrics. While the map initially centered and zoomed itself to fit the unfiltered set of schools, to give users the sense of “homing in” on their perfect school I wanted to refit the map around the narrowed search results. To reflect the filtered results from the cards region in the map and chart, I used the region data sharing technique from my colleague Carsten’s article Add a Chart to your Faceted Search Page. Then I got a little help from my colleagues Christian Lara and Stefan Dobre to learn how to build my first simple “Center & Zoom Map Around Points” dynamic action plug-in to make it easy to build this feature into any map-based pages I build in the future. You can check out the sample using the link at the end of the article.

Refreshing the Map and Chart

The data source for the map is a query from a pipelined table function I created following Carsten’s technique. It retrieves the high school data using the same filters currently applied to the schools region in the page:

select school_name, latitude, longitude
from eba_demo_nyc_highschool_data( 
       p_page_id => :APP_PAGE_ID,
       p_region_static_id => 'schools')

Similarly, the chart’s data source is a query that selects a few different columns from the same pipelined table function:

select school_name, graduation_rate, 
       attendance_rate, college_career_rate
from eba_demo_nyc_highschool_data( 
       p_page_id => :APP_PAGE_ID,
       p_region_static_id => 'schools')

When the cards region changes due to the user’s applying new filters, we want to refresh the map and chart regions. The lesson I learned while getting this to work was that rather than using the “After Refresh” event on the cards region, I needed to instead use that region’s “Page Change [Cards]” event to trigger the dynamic action refresh, using two dynamic action steps of type “Refresh”.

Centering & Zooming the Map After Refresh

Whenever the map region gets refreshed, my goal was to have it refocus the user’s attention by using the new set of filtered data points to center and zoom the map appropriately. After hunting for a built-in APEX map JavaScript API, or a built-in dynamic action, I realized the solution would take a bit more research. My teammate Christian Lara pointed me at the MapLibre Fit a map to a bounding box example, and gave me this snippet of JavaScript below to consider.

The first statement accesses the map’s (post-refresh) bounding box from its mapData.map.bbox member and defines a new bounds array that contains the two points representing that rectangle. The second line gets the MapLibre map object from the APEX map region on my page, and calls its fitBounds() method to perform the centering and zooming to the new dimensions. It uses 30 pixels of padding so points near the edge of the box stay visible.

// Define the bounds using refreshed map bounding box coordinates
let bbox = apex.region("map").mapData.map.bbox,
    bounds = [
        [ bbox[0], bbox[1] ],
        [ bbox[2], bbox[3] ]
    ];
// Fit the map to the new bounds
apex.region("map").getMapObject().fitBounds(bounds,{padding: 30});
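Reshaping the bbox array into the bounds pair is easy to isolate and test. Here’s a tiny pure-function version (bboxToBounds is my own name, not part of the APEX or MapLibre APIs) that converts a [minLon, minLat, maxLon, maxLat] bounding box into the two-point array fitBounds() expects:

```javascript
// Convert a [minLon, minLat, maxLon, maxLat] bounding box (the shape
// exposed by the map region's mapData.map.bbox) into MapLibre's
// fitBounds() format: [[southwest lon, lat], [northeast lon, lat]].
function bboxToBounds(bbox) {
    return [
        [ bbox[0], bbox[1] ],  // southwest corner
        [ bbox[2], bbox[3] ]   // northeast corner
    ];
}
```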

Creating a Reusable Dynamic Action Plug-in

With the code above in a dynamic action step triggered by the “After Refresh” event on the map region, the functionality I desired was working, but I wanted to learn how to encapsulate that little bit of code into a reusable dynamic action plug-in. I first watched Stefan Dobre’s The Ultimate Guide to APEX Plug-ins video from the APEX@Home 2020 conference, and then created a new dynamic action plug-in named “Center & Zoom Map Around Points” in my application. Following best practice, I put the JavaScript code in a centerZoomMap.js file and referenced its qualified name in the File URLs to Load section using the syntax PLUGIN_FILES#centerZoomMap#MIN#.js

I instinctively knew that to be reusable, the name of the map region’s static id would have to be a function parameter, so my first attempt at writing the contents of this centerZoomMap.js file looked like this:

// centerZoomMap.js -- First Attempt
function centerZoomMap(staticId) {
  // Define bounds using refreshed map bounding box coordinates
  let bbox = apex.region(staticId).mapData.map.bbox,
    bounds = [
        [ bbox[0], bbox[1] ],
        [ bbox[2], bbox[3] ]
    ];
  // Fit the map to the new bounds
  apex.region(staticId).getMapObject().fitBounds(bounds, {padding: 30});
}

After that, I defined a custom attribute in the plug-in named “Map Region Static Id” as attribute slot number one. However, I admit to getting a bit confused about how to pass the value of the plug-in’s custom attribute to the JavaScript function. After asking my colleague Stefan Dobre for a tip, he used the occasion as a teachable moment to show me the two standard plug-in attributes:

  • For Region
  • Affected Element Required

By leveraging these standard plug-in attributes, the developer using the plug-in gets a more native-feeling experience of picking the region to associate the plug-in with. It also allowed me to remove the custom attribute I had created in the plug-in. The developer now configures the map she wants to center and zoom by simply picking the map region in the Affected Elements section as shown below:

Configuring Affected Elements in the Properties editor to pick Map region to zoom and center

Stefan also took the opportunity to teach me a best practice: defining the centerZoomMap function as a property on window makes its scope clearer when reading the code. So the final contents of centerZoomMap.js after consulting with Stefan looked like this:

// centerZoomMap.js - Final version
window.centerZoomMap = function() {
    // Ensure dev-configured affected element has a static id 
    const id = this.affectedElements.attr( "id" );
    if ( !id ) {
        throw new Error( "Affected Region must have an ID" );
    }
    // Use static id to ensure dev chose a map region
    const region = apex.region( id );
    if( !region || region.type !== "SpatialMap" ) {
        throw new Error( "Affected Region must be a Map" );
    }
    // Define bounds using refreshed map bounding box coordinates
    let bbox = region.mapData.map.bbox,
        bounds = [
            [bbox[0], bbox[1]],
            [bbox[2], bbox[3]]
        ];
    // Fit the map to the new bounds
    region.getMapObject().fitBounds(bounds, {padding: 30});
};

The last piece of the simple dynamic action plug-in was writing the render function using the appropriate function specification that I copied from the online help to use as a guide. The only job it had to do was tell the APEX engine the name of my JavaScript function to invoke when the dynamic action gets used at runtime:

function render (
    p_dynamic_action in apex_plugin.t_dynamic_action,
    p_plugin         in apex_plugin.t_plugin )
    return apex_plugin.t_dynamic_action_render_result is
    l_result apex_plugin.t_dynamic_action_render_result;
begin
    l_result.javascript_function := 'centerZoomMap';
    return l_result;
end;

Using the New Plugin in the Page

With the plug-in now defined, I went back to the Page Designer and removed the dynamic action step that was directly calling Christian’s snippet of JavaScript and replaced it by using the new “Center & Zoom Map Around Points” plugin we built. After picking the “Map” region from the Affected Elements section, it was ready to go.

Page Designer showing use of new plug-in to center & zoom the map after the map gets refreshed

Giving the Sample a Spin

To try out the sample, you can download the APEX 22.2 application from here. It gives the end user a nice school-searching experience like what you see in the screen recording below. Thanks again to Christian and Stefan for sharing their wisdom in getting to this end result.

Declarative Data-Driven Downloads

While developing APEX apps, I’ve needed on multiple occasions to generate data-driven artifacts for external use. Typically the APEX app lets staff manage the data that drives their organization, but for historical reasons a related public-facing website can’t be replaced in the near term. That site is usually a set of static files, or was built years earlier on a different tech stack like LAMP (Linux, Apache, MySQL, PHP). This article walks through a simple example of generating all the HTML files for a static website based on database data. It uses a declarative “transform group” capability I wrote for myself that combines the declarative power of SQL, XML, and XSLT to generate and download a zip file of generated artifacts. See the README file that accompanies this article’s sample app to learn more about how transform groups can be applied in your own APEX apps.

Background Motivation

One APEX app I built as a “nerd volunteer” for a non-profit in Italy required generating:

  • HTML pages for a conference schedule in a particular format
  • SQL scripts to update another system’s MySQL database
  • JSON files to import into an auth provider’s user management console
  • PHP source code files to “drop in” to an online portal site

For example, the conference schedule data shown in the APEX app screenshot in the banner above turns into a static HTML file that displays the parallel tracks of the conference program like this:

Static conference program HTML file generated from data managed by an APEX app

A Favorite Technology Trio: SQL + XML + XSLT

For anyone who may have read my O’Reilly book Building Oracle XML Applications published in October 2000, it should come as no surprise that I find the combination of SQL, XML, and XSLT stylesheets very useful. Your mileage may vary, but over the intervening years I have generated many data-driven artifacts using this trio of technologies. It only made sense that I’d reach for them again as APEX became my tool of choice for building new applications over the past few years.

While each distinct task of generating data-driven artifacts is slightly different, the high-level similarities shared by all tasks I’ve had to implement involve:

  • A SQL query to produce XML-formatted system-of-record data
  • One or more XSLT stylesheets to transform the XML data into appropriate text-based artifact files

The Oracle database natively supports generating XML from any SQL query result and transforming XML using XSLT (version 1.0) stylesheets. After learning that the APEX_ZIP package makes it easy to generate zip files, I devised a generic facility called “transform groups” to use in my current and future APEX apps.
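To make that concrete, here is a hedged SQL sketch of those native building blocks: DBMS_XMLGEN.GETXMLTYPE turns any query result into canonical ROWSET/ROW XML, and XMLTRANSFORM applies an XSLT 1.0 stylesheet to it. The :stylesheet_source bind is a placeholder for XSLT markup you supply (for example, read from static application files):

```sql
-- Sketch only: generate canonical XML from a query, then transform it
select xmltransform(
         dbms_xmlgen.getxmltype('select deptno, dname, loc from dept'),
         xmltype(:stylesheet_source)
       ).getclobval() as generated_artifact
  from dual;
```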

Defining a Transform Group

I use an XML file to declaratively define the “interesting bits” that make each transform task unique, and include this file along with the XSLT stylesheets required to generate the artifacts as static application files in my APEX app. My “transform group processor” package interprets the transform group file and processes the data transforms in it to download a single zipfile containing all the results. This way, with a single click my apps can produce and download all necessary generated artifacts.
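The zip-assembly half of such a processor can lean on the APEX_ZIP package. Here’s a minimal, hedged sketch of the idea; the generated_files query is hypothetical (standing in for the per-transformation output loop), and the real eba_demo_transform_group package is more involved:

```sql
declare
    l_zip blob;
begin
    -- Hypothetical source: one row per generated artifact
    -- (file name plus BLOB content from an XSLT transformation)
    for f in ( select file_name, content from generated_files ) loop
        apex_zip.add_file(
            p_zipped_blob => l_zip,
            p_file_name   => 'hr-site/' || f.file_name,
            p_content     => f.content );
    end loop;
    apex_zip.finish( p_zipped_blob => l_zip );
    -- l_zip now holds the zip file, ready to send as a download
end;
```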

For example, consider the following basic transform group definition file. It defines a single data transform whose SQL query retrieves the rows from the familiar DEPT table and uses the home-page.xsl stylesheet to produce the index.html home page of a hypothetical company directory website.

<!-- generate-hr-site.xml: Transform Group definition file -->
<transform-group directory="hr-site">
    <data-transform>
        <query>
            select deptno, dname, loc
             from dept
        </query>
        <transformation stylesheet="home-page.xsl" 
                        output-file-name="index.html"/>
     </data-transform>
</transform-group>

The in-memory XML document representing the results of the data transform’s query looks like this:

<ROWSET>
  <ROW>
    <DEPTNO>10</DEPTNO>
    <DNAME>ACCOUNTING</DNAME>
    <LOC>NEW YORK</LOC>
  </ROW>
  <!-- etc. -->
  <ROW>
    <DEPTNO>40</DEPTNO>
    <DNAME>OPERATIONS</DNAME>
    <LOC>BOSTON</LOC>
  </ROW>
</ROWSET>

The XSLT stylesheet referenced in the <transformation> element looks like the example below. It contains a single template that matches the root of the XML document above, then uses the <xsl:for-each> to loop over all the <ROW> elements to format a bulleted list of departments.

<!-- home-page.xsl: XSLT stylesheet to generate home page -->
<xsl:stylesheet version="1.0"
     xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <html>
      <body>
        <h1>Departments</h1>
        <ul>
          <xsl:for-each select="/ROWSET/ROW">
            <li>
                <xsl:value-of select="DNAME"/>
                (<xsl:value-of select="LOC"/>)
            </li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

When viewed in a web browser, the index.html file it produces looks like this:

HTML home page showing department data

After saving the transform group XML file and corresponding XSLT stylesheet to our static application files, we’re ready to wire up the transform group download.

Wiring Up a Transform Group Download Button

You’ll typically call the transform group’s PL/SQL API from an APEX application process with a process point of “Ajax Callback”. For example, if we create an application process named Download_Static_Website, its code will invoke the download() procedure like this:

eba_demo_transform_group.download(
     p_file_name    => 'generate-hr-site.xml', 
     p_zipfile_name => 'hr-website.zip');

A button or link in your APEX app can initiate the transform group zipfile download. Its link target will redirect to the current page and have the Request parameter in the Advanced section of the link definition set to APPLICATION_PROCESS=Download_Static_Website

With these elements in place, clicking on the button will download an hr-website.zip file containing a single hr-site/index.html file.

Enhancing the Transform Group Definition

A data transform can process the same SQL query results using multiple, different stylesheets. XSLT stylesheets can accept parameters that influence their output, and the data transform supports an optional <parameter-query> element whose SQL query returns rows providing values for these XSLT parameters. The column-to-parameter-map attribute defines how a column in the parameter query’s results maps to the name of an XSLT parameter. The transform group processor runs the stylesheet once for each row in the parameter query’s results. On each iteration, it maps the XSLT parameters to the corresponding column values in the current parameter query row and runs the transformation to produce an output file. Notice that the output-file-name attribute can also reference parameter names as part of the file name.

<!-- generate-hr-site.xml: Transform Group definition file -->
<transform-group directory="hr-site">
    <data-transform>
        <query>
            select deptno, dname, loc
             from dept
        </query>
        <transformation stylesheet="home-page.xsl" 
                        output-file-name="index.html"/>
       <transformation stylesheet="dept-page.xsl" 
                       output-file-name="dept_{#depid#}.html">
            <parameter-query column-to-parameter-map
                                 ="DEPTNO:depid">
                select deptno from dept
            </parameter-query>
        </transformation>
     </data-transform>
</transform-group>

To generate a page for each employee in each department, we can further enhance the transform group to include a nested set of EMP table rows for each DEPT table row, and add a third <transformation> element to generate the employee pages.

<!-- generate-hr-site.xml: Transform Group definition file -->
<transform-group directory="hr-site">
    <data-transform>
        <query>
            select deptno, dname, loc,
                   cursor( select empno, ename, job
                             from emp
                            where deptno = d.deptno) as staff
              from dept d
        </query>
        <transformation stylesheet="home-page.xsl"
                        output-file-name="index.html"/>
        <transformation stylesheet="dept-page.xsl"
                        output-file-name="dept_{#depid#}.html">
            <parameter-query column-to-parameter-map="DEPTNO:depid">
                select deptno from dept
            </parameter-query>
        </transformation>
        <transformation stylesheet="emp-page.xsl"
                        output-file-name="emp_{#empid#}.html">
            <parameter-query column-to-parameter-map="DEPTNO:depid,EMPNO:empid">
                select empno, deptno from emp
            </parameter-query>
        </transformation>
    </data-transform>
</transform-group>

Next, we make the appropriate updates to home-page.xsl to generate hyperlinked department names and upload the dept-page.xsl and emp-page.xsl stylesheets to our static application files. After this, clicking the button downloads the entire company directory static website in the hr-website.zip file. It contains an hr-site top-level directory with all the generated HTML pages, as shown in the Finder screenshot below of the extracted zip file contents.

Mac Finder showing contents of downloaded hr-website.zip file

Exploring the Sample App

The APEX 22.2 sample application you can download from here installs the eba_demo_transform_group package and includes a working example of the declaratively generated website download explained in this article. After importing the sample and ensuring you have the EMP/DEPT sample dataset installed, just click the Generate and Download Static Site button. The sample also has tabs that let you easily explore the syntax of the transform group XML file and the accompanying XSLT stylesheets. Its README page provides more tips on how transform groups might be useful in your own APEX apps in the future.

Since XSLT transformations are great at declaratively generating text-based artifacts of any kind, hopefully I won’t be the only APEX developer to benefit from the productivity the transform group functionality offers.

Screenshot of sample application that accompanies this article

Data-Driven Diagrams

I often want to visualize my Oracle APEX app’s data model as an Entity/Relationship diagram to remind myself how tables are related and exactly how columns are named. After recently stumbling on the open source Mermaid JS project, I had a lightbulb moment and set out to build my own data model visualizer app with APEX itself.

Mermaid Diagram Syntax

The Mermaid JS open source project aims to improve software documentation quality with an easy-to-maintain diagram syntax for Markdown files. The typical README.md file of a software project can include an Entity/Relationship diagram by simply including text like this:

erDiagram
DEPT ||--|{ EMP : "Works In"
EMP ||--|{ EMP : "Works For"

Including a diagram like this in your product doc is as simple as shown below:

Adding a Mermaid diagram to a Markdown file

If your markdown editor offers a WYSIWYG experience, the effect is even more dramatic and productive: you immediately see the results of the diagram you’re editing. For example, editing a Mermaid diagram in a readme file using Typora looks like this:

Editing a Mermaid diagram in a WYSIWYG Markdown editor like Typora

Popular source control repository sites like GitHub and GitLab have also embraced Mermaid diagrams. Since Markdown is used for check-in comments on these sites (and others like them), it’s easy to include Mermaid diagrams in the helpful summaries you provide along with every commit.

Mermaid’s Diagram Types and Live Editor

Mermaid supports creating many different kinds of diagrams, each using a simple text-based syntax like the ER diagram above. At the time of writing, supported diagram types include Entity/Relationship, Class, Gantt, Flow, State, Mindmap, User Journey, Sequence, Git branch diagrams, and pie charts. The handy Mermaid Live site provides a “sandbox” editor experience where you can experiment with all the different kinds of diagrams, explore samples, and instantly see the results.

Mermaid Live editor for experimentation at https://mermaid.live

For example, after consulting their excellent documentation, I immediately tried including column details into my ER diagram for the DEPT table as shown below:

Mermaid Live editor showing ER diagram with column info and library of diagram samples

Rendering Mermaid Diagrams in Web Pages

To maximize the usefulness of the diagrams, the Mermaid project provides a simple JavaScript API to incorporate scalable vector graphics (SVG) renderings of text-based diagrams into any web page or web application. After referencing the Mermaid JS library URL, including a diagram into a page in my APEX application took a truly tiny amount of JavaScript: one line to initialize the library and one line to render the diagram from the text syntax.

In order to reference the current version of the Mermaid JS library on my page, I typed this URL into my page-level JavaScript > File URLs property:

https://cdnjs.cloudflare.com/ajax/libs/mermaid/9.3.0/mermaid.min.js

Then, after including a Static Content region in my page and assigning it a Static Id of diagram, the two lines of JavaScript code I added to the page-level Execute When Page Loads section looked like this:

mermaid.initialize();
mermaid.mermaidAPI.render('diagramsvg',
    `erDiagram
     DEPT ||--|{ EMP : "Works In"
     EMP ||--|{ EMP : "Works For"`,
    (svg) => {
        document.getElementById('diagram').innerHTML = svg;
    });

These two lines of “When Page Loads” JavaScript do the following:

  1. Initialize the Mermaid library
  2. Render the diagram defined by the text passed in as an SVG drawing
  3. Set the contents of the diagram region to be this <svg id="diagramsvg"> element.

In no time, my APEX page now displayed a text-based Mermaid ER diagram:

APEX page including a Mermaid ER diagram based on its text-based syntax

Generating Mermaid Diagram Syntax from a Query

After proving out the concept, I next tackled generating the appropriate Mermaid erDiagram syntax based on the tables and relationships in the current APEX application schema. I made quick work of this task in a PL/SQL package function diagram_text() that combined a query over the USER_CONSTRAINTS data dictionary view with another query over the USER_TABLES view.

The USER_CONSTRAINTS query finds the tables involved in foreign key constraints as the “child” table, along with the name of the primary key constraint of the “parent” table in the relationship. By joining USER_CONSTRAINTS to itself a second time, I can query both child and parent table names at once like this:

  select fk.table_name as many_table, 
         pk.table_name as one_table
    from user_constraints fk
    left outer join user_constraints pk
                 on pk.constraint_name = fk.r_constraint_name
   where fk.constraint_type = 'R' /* Relationship, a.k.a. Foreign Key */

The USER_TABLES query, using an appropriate MINUS clause, finds the tables that aren’t already involved in a “parent/child” relationship above. By looping over the results of these two queries and “printing out” the appropriate Mermaid ER diagram syntax into a CLOB, my diagram_text() function returns the data-driven diagram syntax for all tables in the current schema.
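
To make the looping step concrete, here is a sketch of that logic in JavaScript (illustrative only; the real diagram_text() is PL/SQL writing into a CLOB, and the function name and row shapes here are my own assumptions):

```javascript
// Illustrative sketch: mimic the diagram_text() loop in JavaScript.
// relationships stand in for the USER_CONSTRAINTS query results
// (parent/child pairs); standaloneTables stand in for the tables
// found by the USER_TABLES MINUS query.
function erDiagramText(relationships, standaloneTables) {
  const lines = ["erDiagram"];
  for (const rel of relationships) {
    // ||--|{ reads as "one parent row relates to many child rows"
    lines.push(`${rel.oneTable} ||--|{ ${rel.manyTable} : has`);
  }
  for (const tableName of standaloneTables) {
    lines.push(tableName); // tables with no relationships still appear
  }
  return lines.join("\n");
}

const diagram = erDiagramText(
  [{ oneTable: "DEPT", manyTable: "EMP" }],
  ["BONUS"]
);
console.log(diagram);
// erDiagram
// DEPT ||--|{ EMP : has
// BONUS
```

The resulting string is exactly what the page later feeds to the Mermaid render call.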

I ultimately decided to include some additional parameters: one to filter the tables by a prefix (e.g. EBA_DEMO_CONF), one to control whether to include column info, and one to decide whether common columns like ID, ROW_VERSION, and audit info should be included. The final PL/SQL API I settled on looks like this:

create or replace package eba_erd_mermaid as
    function diagram_text(p_table_prefix    varchar2 := null, 
                          p_include_columns boolean := false, 
                          p_all_columns     boolean := false )
    return clob;
end;

Wiring Up the Data-Driven Diagram

With the diagram_text() function in place, I added a hidden CLOB-valued page item P1_DIAGRAM to my page, added a P1_TABLE_PREFIX page item for an optional table prefix, and added two switch page items to let the user opt in to including column information.

Next, I added the computation to compute the value of the hidden P1_DIAGRAM page item using the diagram_text() function:

eba_erd_mermaid.diagram_text(
  p_table_prefix    => :P1_TABLE_PREFIX,
  p_include_columns => :P1_INCLUDE_COLUMNS = 'Y',
  p_all_columns     => :P1_ALL_COLUMNS = 'Y')

Lastly, I adjusted the “When Page Loads” JavaScript code to use the value of the P1_DIAGRAM hidden page item instead of my hard-coded EMP/DEPT diagram syntax:

mermaid.initialize();
mermaid.mermaidAPI.render('diagramsvg',
    apex.items.P1_DIAGRAM.value,
    (svg) => {
        document.getElementById('diagram').innerHTML = svg;
    });

With these changes, I saw the instant database diagram I’d been dreaming of.

The Mermaid library handles the layout for a great-looking result out of the box. The diagram helped remind me of all the tables and relationships in the APEX app I wrote to manage VIEW Conference, Italy’s largest annual animation and computer graphics conference. It’s one of my volunteer nerd activities that I do in my spare time for fun.

Mermaid ER diagram of all tables in current schema matching prefix EBA_DEMO_CONF

However, when I tried installing my ER Diagram app in another workspace where I’m building a new app with a much larger data model, I realized that the default behavior of scaling the diagram to fit in the available space was not ideal for larger schemas. So I set out to find a way to let the user pan and zoom the SVG diagram.

SVG Pan Zoom

Luckily, I found a second open source project, svg-pan-zoom, that was just what the doctor ordered. By adding one additional JavaScript library URL and one line of “When Page Loads” code, I quickly had my dynamically rendered ER diagram zooming and panning. The additional library URL I included was:

https://bumbu.github.io/svg-pan-zoom/dist/svg-pan-zoom.min.js

The extra line of JavaScript code I added to initialize the pan/zoom functionality looked like this:

var panZoom = svgPanZoom('#diagramsvg');

The combination of Mermaid JS and this SVG pan/zoom library puts some pretty impressive functionality into the hands of APEX developers for creating useful, data-driven visualizations. Even for developers like myself who are not JavaScript experts, the couple of lines required to jumpstart the libraries’ features is easily within reach.

With this change in place, visualizing larger diagrams, including ones showing column information, became practical.

Dream #2: Reverse Engineer Quick SQL

Since I sometimes create APEX apps based on existing tables, a second schema-related dream I had was to reverse-engineer Quick SQL from the current user’s tables and relationships. This would let me quickly add additional columns using a developer-friendly shorthand syntax as new application requirements demanded them. Googling around for leads, I found a 2017 blog article by Dimitri Gielis that gave me a head start on the code required. Building on his original code, I expanded its datatype support and integrated it with my table prefix filtering, adding a second page to my application that produces the Quick SQL syntax for the tables in the current schema.

Quick SQL syntax reverse-engineered from existing schema’s tables and relationships

I expanded the eba_erd_mermaid package to include an additional quicksql_text() function for this purpose:

create or replace package eba_erd_mermaid as
    function diagram_text(p_table_prefix    varchar2 := null, 
                          p_include_columns boolean := false, 
                          p_all_columns     boolean := false )
    return clob;
    function quicksql_text(p_table_prefix varchar2 := null)
    return clob;
end;

Copying Text to the Clipboard

As a last flourish, I wanted to make it easy to copy the Mermaid diagram text syntax to the clipboard so I could easily paste it into the Mermaid Live editor if necessary. And while I was at it, why not make it easy to also copy the Quick SQL text syntax to the clipboard to paste into APEX’s Quick SQL utility?

After searching in vain for a built-in dynamic action to copy the text of a page item to the clipboard, I looked for an existing plug-in instead. The aptly named APEX Copy Text to Clipboard dynamic action plug-in from my colleague Ronny Weiss got the job done with a few clicks of declarative configuration.

APEX, SQL & Open-Source JavaScript for the Win!

In short order, by using APEX to combine the power of the SQL that I know and some minimal JavaScript (that I don’t!), I was able to build myself two dream productivity tools to improve my life as an APEX developer in the future.

If you want to give the sample application a spin, download the APEX 22.2 application export from here. It installs only a single supporting PL/SQL package, so any tables you visualize with it will be your own.

Further Reading

For a full-featured, web-based ERD modeling solution from Oracle, make sure to check out the Data Modeler in SQL Developer Web.

Oracle APEX Grand Tour

I joined the APEX dev team in 2021 after falling in love with its low-code app-building productivity and ability to fully embrace the Oracle database. But when I looked for a unified demo and presentation showcasing all its low-code features, I came up empty-handed. Over the following months, I learned everything I could and created the one hour webinar I wished had existed when I was starting out with the product. The goal? Educate the viewer on what is possible with APEX by seeing it in action.

If you’re reading this blog, you probably have already employed most of APEX’s awesome capabilities in your own apps. But, when you want to educate others about the wonders of APEX, consider sending them this link: https://apex.oracle.com/go/videos/medipay-mobile-app

If the people you evangelize like what they see, then suggest next that they take our new APEX Foundations course to roll up their sleeves and use the product with free guidance from our instructors.

The desktop and mobile MediPay demo in the webinar shows off the following APEX features in action:

  • Progressive Web Applications for desktop and mobile devices
  • Persistent authentication (“Remember me”)
  • Mobile install screenshots (now declarative in 22.2)
  • Mobile native camera to snap image of receipt to reimburse
  • Mobile image resize and upload (using plugin based on open-source JS library)
  • GPS from mobile device (now a built-in dynamic action in 22.2)
  • Approvals task definition for approving reimbursement payment
  • Custom task details page for claim payment approvals
  • Flows for APEX payment processing model
    • Email to customer if validation of payment method fails
  • View current state of Flows for APEX task model (using plugin)
  • REST invocation of 3rd party payment processing using two-legged OAuth2 authentication
  • Spatial query for fraud detection (highlighting when upload occurs over 500km from user’s home address)
  • Geocoding lookup on user profile address page
  • Minimap Display-only Map Widget on attachment detail page
  • Smart Filters and Map Region against REST(Recent Payments) working both “live” over REST and using REST synchronized local cache table
  • Faceted Search and cards against remote Oracle DB with REST-enabled SQL (Leads Referrals) working both “live” over REST and using REST synchronized local cache table
  • Web-style multiword search using “Tokenize row search”
  • One-click Remote Deployment from Team Dev to Team UAT environment
  • Calendar for Leads Meetings with drag and drop to reschedule or adjust meeting length
  • Master/Detail Interactive Grids page with Coverage Plans
  • Interactive Report with All Claims
  • Dashboard page with charts
  • Machine Learning algorithm integrated for showing prediction probability of approve/reject.
  • Regions in tabs on the User Profile Page
  • Image upload and display in User Photo tab, with automatic BLOB storage
  • Dynamic Actions to react to value changes to hide/show fields on User Profile payment method tab
  • ACL security (MediPay Staff Member | MediPay Customer) controlling access to pages, regions, items, and columns
  • Server-side JavaScript User Profile page to lookup user id from logged in user name at page load and in “Clear Unselected Payment Methods” process
  • VPD security for MED_CLAIMS_V view so users only see their own claims in the desktop portal

Building a Mini SQL Workshop #JoelKallmanDay

Two new features in APEX 22.2 let me easily create a Mini SQL Workshop app. Users type any query in a code editor and instantly visualize the resulting data. I wired my colleague Ronny Weiss’ Monaco Code Editor region plug-in to a new CLOB-valued page item to support editing the SQL query. A dynamic action reacts to the code editor save event to refresh the new Dynamic Content region to show the query’s results. For an interesting twist, the database’s native support for XML and XSLT produces the HTML markup for the dynamic content region rather than looping over data and concatenating strings ourselves.

On this second annual Joel Kallman Day, and just past my first anniversary on the APEX dev team, this example happily reunited me with some of the ideas I loved so much back in the late 1990’s that they compelled me to write my book Building Oracle XML Applications. Thanks for the inspiration, Joel. Miss ya lots!

Since APEX makes the job easy, unsurprisingly the page looks simple, too. The code editor region has a dynamic action that refreshes the Results region, which produces its HTML contents using an html_for_sql() package function. The P1_SQL page item is typed as a CLOB so the SQL query can be as large as necessary.

Code Editor region, CLOB-valued P1_SQL page item & Dynamic Content region for query results

Creating a CLOB-valued Page Item

I started by creating a hidden page item named P1_SQL in my page to store the query the user will edit in the code editor. Since the query might exceed 32K, I took advantage of the new CLOB data type in the Session State section of the property editor. When you create hidden page items, as well as ones that can hold a large amount of text, you can now change the default session state data type from VARCHAR2 to use a CLOB instead.

Hidden page items and ones that can contain large amounts of text can now use a CLOB data type

Wiring the Editor to a CLOB-valued Page Item

Next, I downloaded the latest version of the Monaco Code Editor region plugin, extracted the zip file, and imported the plugin into my new application. I created the Code region and set it to have the type APEX-VS-Monaco-Editor [Plug-In]. Next I configured the source of the code editor to retrieve the value of the P1_SQL CLOB-valued page item by setting its SQL Query to:

select :p1_sql as value_edit, 
       'sql'   as language
from dual

On the Attributes tab of the code editor region, I set up the code editor to save its contents into the CLOB-valued page item by entering the following one-line call in its Execute on Save property. The plug-in sets up the :CLOB bind variable, and we use the new apex_session_state package’s set_value() procedure to assign that value to the P1_SQL page item.

begin
  apex_session_state.set_value('P1_SQL',:clob);
end;

And with those two simple steps, the code editor for the SQL query was sorted.

Refreshing the Dynamic Content Region

To refresh the dynamic content region whenever the user clicks the (Save) button on the code editor, I added a dynamic action to react to the plug-in’s event Upload of Text finished [APEX-VS-Monaco-Editor]. It contains a single action step using the Refresh action type and takes advantage of yet another new 22.2 feature: giving the action step a more meaningful name, “Refresh Results”, to make it self-documenting.

Using a custom name for an action step to make more maintainable apps

Using XML & XSLT to Get HTML for any SQL

With the new 22.2 Dynamic Content region, you no longer use the trusty HTP package to print HTML markup into a buffer. Instead, you configure a function that returns a CLOB containing the HTML the region should render. This change was required to make the region dynamically refreshable, which was a longtime request from APEX developers in the community. You can create the CLOB full of HTML markup in any convenient way, but the way I find most elegant and declarative is using the combination of XML and XSLT stylesheets.

The Oracle database contains native functionality to produce an XML document representing the results of a query using the DBMS_XMLGEN package’s getxml() function. To produce an XML document from an arbitrary SQL query contained in a variable like p_sql, you just need the following few lines of code. The call to the SetNullHandling() procedure asks DBMS_XMLGEN to use an empty tag to represent a NULL value rather than omitting the XML element for a NULL column in the result.

ctx := dbms_xmlgen.newcontext(p_sql);
dbms_xmlgen.setnullhandling(ctx, dbms_xmlgen.empty_tag);
l_xml_clob := dbms_xmlgen.getxml(ctx);
dbms_xmlgen.closecontext(ctx);

The getxml() function produces an XML document with a canonical structure like this: a <ROWSET> element containing one or more <ROW> elements, each of which contains child elements named after the columns in the result set.

<ROWSET>
   <ROW>
     <DEPTNO>10</DEPTNO>
     <DNAME>ACCOUNTING</DNAME>
   </ROW>
   <ROW>
     <DEPTNO>20</DEPTNO>
     <DNAME>RESEARCH</DNAME>
   </ROW>
</ROWSET>

XSL Transformations (XSLT) is a powerful, concise, declarative way to recursively format the contents of an XML document to produce a result like HTML. If you are a fan of Oracle Reports and its approach of applying “repeating frames” to data, then you’ll understand XSLT intuitively. If not, it may take a bit longer for its charms to grow on you, but given its Swiss Army knife applicability to many jobs, it’s well worth your time to learn more about it.

If we write the XSLT stylesheet in a generic way, the same transformation can produce an HTML table from any XML in the <ROWSET>/<ROW> format above. The stylesheet that gets the job done looks like the one below. Starting with the root (match="/") of the document, its templates recursively apply other matching templates to the XML elements in the document “tree” of nested elements. The stylesheet contains templates that match a ROWSET element, a ROW element, and any child element of a row (ROW/*) to produce an HTML <table>. This table contains <thead> and <tbody> elements, <tr> elements for the rows of the table, and <th> and <td> elements for the table cells containing the data from the query result XML document it’s asked to format.

<xsl:stylesheet version="1.0"
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
      <xsl:apply-templates/>
  </xsl:template>
  <xsl:template match="ROWSET">
    <table class="sql-results" cellspacing="0">
      <thead>
        <xsl:apply-templates select="ROW[1]/*" mode="ColumnHeaders"/>
      </thead>
      <tbody>
        <xsl:apply-templates/>
      </tbody>
    </table>
  </xsl:template>
  <xsl:template match="ROW">
    <tr><xsl:apply-templates/></tr>
  </xsl:template>
  <xsl:template match="ROW/*">
    <td><xsl:apply-templates/></td>
  </xsl:template>
  <xsl:template match="ROW/*" mode="ColumnHeaders">
    <th><xsl:value-of select="name(.)"/></th>
  </xsl:template>
</xsl:stylesheet>
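
If XSLT is new to you, the effect of this generic stylesheet can be sketched in plain JavaScript (illustrative only; the article’s approach runs the transformation natively in the database via xmltransform(), and this helper function is my own assumption):

```javascript
// Illustrative sketch of what the generic stylesheet produces: any
// array of uniform row objects becomes an HTML <table> string with
// one <th> per column name and one <td> per column value.
function htmlTableForRows(rows) {
  if (rows.length === 0) return '<table class="sql-results"></table>';
  const cols = Object.keys(rows[0]);
  const head = cols.map((c) => `<th>${c}</th>`).join("");
  const body = rows
    .map((r) =>
      "<tr>" + cols.map((c) => `<td>${r[c] ?? ""}</td>`).join("") + "</tr>"
    )
    .join("");
  return `<table class="sql-results"><thead>${head}</thead>` +
         `<tbody>${body}</tbody></table>`;
}

const html = htmlTableForRows([
  { DEPTNO: 10, DNAME: "ACCOUNTING" },
  { DEPTNO: 20, DNAME: "RESEARCH" },
]);
```

The XSLT version does the same work generically for any query, with no code changes per result shape.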

To transform the XML document produced from the query into HTML, the demo app’s html_for_sql() function in the eba_demo_xslt package uses the Oracle SQL function xmltransform() to apply the stylesheet above, producing the HTML result with a single SQL statement like this:

select xmlserialize(content
         xmltransform(l_xml,c_stylesheet) as clob)
into l_html
from dual;

Since the DBMS_XMLGEN package and the XMLTRANSFORM() and XMLSERIALIZE() functions are all implemented natively inside the database engine, they get the job done quickly. To tweak the formatting of the results, we can simply adjust the declarative XSLT stylesheet and sprinkle in some appropriate CSS style rules on the page.

Returning Dynamic Content Region HTML

The last step in the process is configuring the function body that returns the new Dynamic Content region’s HTML markup. To accomplish this, I set the PL/SQL Function Body returning a CLOB property of the Results region to the following, employing the XML and XSLT approach above. It simply passes the value of the P1_SQL CLOB-valued page item into the html_for_sql() function.

return eba_demo_xslt.html_for_sql(
           apex_session_state.get_clob('P1_SQL'));

With all the pieces in place, we can type in any valid query of any size or complexity, click on the code editor’s (Save) button, and immediately see the query results in the page.

Query results from two APEX dictionary views

Ensuring the User-Entered Query is Valid

The sample’s eba_demo_xslt package also contains a validate_sql_statement() function that ensures the query starts with SELECT or WITH as well as guarantees that it parses correctly. The function returns NULL if the query is valid, or otherwise it returns an error message to be displayed in the page to help the user understand what’s wrong with the query.
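
A minimal sketch of the keyword portion of that check, written here in JavaScript for illustration (the real validate_sql_statement() is PL/SQL and also parses the statement in the database; this function name and error message are assumptions):

```javascript
// Hedged sketch: only the "starts with SELECT or WITH" check,
// not the database parse step the real function also performs.
function validateSqlStatement(sql) {
  const normalized = (sql || "").trim().toLowerCase();
  if (!/^(select|with)\b/.test(normalized)) {
    return "The statement must begin with SELECT or WITH.";
  }
  return null; // null means "no error found", matching the PL/SQL contract
}
```

Returning NULL for success keeps the caller’s logic simple: any non-NULL result is the message to show the user.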

To check out the sample, download it from here and import it into APEX 22.2 to give it a spin.

Managing Intersection Rows with Drag & Drop

Overview

After publishing my last article about reordering cards using clicks, Peter Raganitsch suggested on Twitter that I try the same use case using drag and drop with the free FOS – Drag and Drop plugin. While exploring the idea, I expanded the use case to more closely resemble my conference organizer app, where each yearly edition of a conference can have a different “lineup” of speakers. So I introduced a Lineup table into the sample app’s data model, and in this article I document how I created a page to reorder the employees in one or more lineups, including the ability to add unassigned employees to the current lineup or remove an employee from it. The result is a UX pattern that may work well for other situations where the user needs to impose a preferred ordering on the rows of an intersection table like EmpLineup below. The expanded data model now looks like this:

Data Model for the Emp Lineup Reordering sample application

Choosing the Lineup to Work On

I started with a page containing a Cards Region named Employee Lineup based on the following query that joins the EmpLineup and Emp tables to present the names of the employees in a lineup:

select e.id, e.name
from eba_demo_dragdrop_emp_lineup lu
left join eba_demo_dragdrop_emp e on e.id = lu.emp_id
order by lu.sort_sequence nulls last, e.created

Then, to allow the user to choose which lineup she wants to work on, I added a select list P1_LINEUP page item to a static content region above the cards region, with the simple List of Values SQL query to show the available lineups:

select name, id
from eba_demo_dragdrop_lineup
order by name

With the “Lineup Chooser” select list in place, I updated the cards region’s SQL query to add a WHERE clause to filter the intersection table rows to only retrieve the ones related to the selected lineup, and ensured that the P1_LINEUP page item name was listed in the Page Items to Submit list for the region.

select e.id, e.name
from eba_demo_dragdrop_emp_lineup lu
left join eba_demo_dragdrop_emp e on e.id = lu.emp_id
where lu.lineup_id = :P1_LINEUP
order by lu.sort_sequence nulls last, e.created

To immediately see the list of employees in the newly selected lineup reflected below in the cards region, I added a dynamic action event handler named On Change Lineup to the P1_LINEUP page item to handle its Change event, and configured its first action step to use the built-in Refresh action to refresh the Employee Lineup region.

Since the end-user will be changing the value of the chooser, APEX’s default behavior is to warn them if they attempt to navigate away from the page with unsaved changes. In this case, since I know the value of P1_LINEUP is only used as a query filter, I set the page-level Warn on Unsaved Changes switch to “off” to avoid this warning.

This gave me the following in-progress page letting the user choose which lineup to work on:

Cards region filtered by “Lineup Chooser” select list above

Adding Employees to the Lineup

To allow the user to add new employees to a lineup, I started by adding a button named ADD_TO_LINEUP to the page and set its position to Before Navigation Bar. I’d used modal dialog pages many times before, but wanted to learn how to also use inline dialogs in APEX, and this use case seemed like a good fit. When the user clicks the (Add to Lineup…) button, I wanted to quickly present a dialog containing a shuttle control that shows the employees not yet assigned to the current lineup and lets the end user pick one or more to add.

After not finding any obvious way to directly create an inline dialog in the APEX builder, my next intuition was that doing so must involve creating a region. But I also failed to find any “Inline Dialog” region type. At this point, I figured it must be a static content region with a particular template setting. Seeing a position heading in the rendering tree called Dialogs, Drawers, and Popups gave me a clue that the region position needed to be configured to fall in this category. So I added a static content region “Add Employees” to contain the shuttle page item, set its Position to Dialogs, Drawers, and Popups and set its Appearance > Template to Inline Dialog.

I added a page item P1_EMP_IDS_TO_ADD to this new static region and set the type to Shuttle. I set its Label to “Unassigned Employees”, and Template to Optional – Above to get what looked best to me from previous shuttle experiments I’d done. To have the shuttle show only employees who were not yet assigned to the current lineup, I configured the shuttle’s List of Values SQL query as follows and set the Display Extra Values to the off position:

select name,id
  from eba_demo_dragdrop_emp
 where id not in (
   select emp_id
     from eba_demo_dragdrop_emp_lineup
    where lineup_id = :P1_LINEUP
 )
order by name

To have the (Add to Lineup…) button created above open the inline dialog, I added a dynamic action event handler to its Click event, configured the first action step to clear the value of the P1_EMP_IDS_TO_ADD page item, a second action step to refresh the page item to update its list of available employees, and a third action step to show the inline dialog.

Initially, I tried to show the dialog using the Show dynamic action step on the corresponding “Add Employees” region, but when that didn’t work, I looked again through the list of built-in actions and tried the Open Region action instead, which worked great.

Back in the “Add Employees” inline dialog’s static region, I added a button named CANCEL to the Close position of the region, and a dynamic action event handler for its Click event to perform the corresponding action step of Close Region.

To process the addition of any selected employees in the shuttle, I added a button named ADD to the Create position of the inline dialog region, and added a dynamic action event handler for its Click event having three action steps:

  1. Execute Server-side Code
  2. Close Region (Region: Add Employees)
  3. Refresh (Region: Employee Lineup)

The PL/SQL code for the server-side code in step 1 looks like this. It uses the handy SPLIT_NUMBERS() function in the APEX_STRING package to parse the colon-separated list of employee ids from the shuttle that need to be inserted into the eba_demo_dragdrop_emp_lineup table as intersection rows. I also made sure that both P1_LINEUP and P1_EMP_IDS_TO_ADD were listed in the Items to Submit field.

insert into eba_demo_dragdrop_emp_lineup(lineup_id,emp_id)
select :P1_LINEUP,column_value
from apex_string.split_numbers(:P1_EMP_IDS_TO_ADD,':');
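To make the shape of that shuttle value concrete, here is a minimal sketch in plain JavaScript (not APEX code) of the parsing that SPLIT_NUMBERS performs; the item value shown is a hypothetical selection:

```javascript
// Illustrative sketch of what APEX_STRING.SPLIT_NUMBERS does with the
// shuttle's value: a shuttle item holds the selected ids as a single
// colon-separated string.
function splitNumbers(value, separator) {
  if (!value) return []; // an empty selection yields no rows
  return value.split(separator).map(Number);
}

// Hypothetical P1_EMP_IDS_TO_ADD value after selecting three employees
const shuttleValue = "101:102:105";
const empIds = splitNumbers(shuttleValue, ":");
console.log(empIds); // [101, 102, 105]
```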

With these steps I now had a working add dialog to add one or more employees not currently in the lineup:

Inline dialog with shuttle to add unassigned employees to the current lineup

Deleting an Employee from the Lineup

I thought the easiest way for a user to remove an employee from a lineup would be to offer a little “x” icon on each card the user could click. A trick I learned while working on the last article came in handy again here. I decided to use a link with the new named action syntax in its href attribute to trigger a named action. This named action would store the value of the action parameter named id in a page item, and then trigger a custom dynamic action event handler to perform the server-side code to remove the employee from the lineup. This combination let me use a maximal amount of declarative APEX functionality.

I started by adding a hidden page item P1_EMP_ID_TO_REMOVE to the page. This will be used to store the id of the employee to remove from the lineup. Next I defined a custom dynamic action event named remove-emp-from-lineup-event with Selection Type of jQuery Selector and jQuery Selector of body. It contains two dynamic action steps:

  1. Execute Server-side Code
  2. Refresh (Region: Employee Lineup)

The PL/SQL block for the first action step contains just the delete statement below, with P1_EMP_ID_TO_REMOVE and P1_LINEUP both listed in the Items to Submit.

delete from eba_demo_dragdrop_emp_lineup
where emp_id = :P1_EMP_ID_TO_REMOVE
and lineup_id = :P1_LINEUP;

Next, I defined a named action remove-emp-from-lineup-action by adding the following block of JavaScript to the Execute When Page Loads section of the page:

apex.actions.add([
{
  name: "remove-emp-from-lineup-action",
  action: function( event, element, args)
  {
    // Store the value of the EMP id from the action args
    apex.items.P1_EMP_ID_TO_REMOVE.value = args.id;
    // Trigger custom event to perform the server-side call
    $("body").trigger("remove-emp-from-lineup-event");
  }
}]);

To complete the job, I updated the HTML Expression for the card region’s Body section and changed it from:

<strong>&NAME.</strong> 

to add the following <a> element before the <strong> element. Its class attribute references the name of the fa-remove icon I wanted to use, and its href attribute contains the special (new in 22.1) named action link syntax to trigger the action named remove-emp-from-lineup-action, passing along a parameter named id with the value of the ID column in the current row (i.e. card):

<a class="fa fa-remove" 
   href="#action$remove-emp-from-lineup-action?id=&ID."
></a><strong>&NAME.</strong> 

These steps produced the clickable remove icon next to the employee name in the body of each card, and clicking the “x” on any card removes it from the lineup. Since I also wanted clicking on the main card area to open a normal modal dialog to review/edit the employee’s details, I changed from using the “Full Card” action type to instead use the “Media” card action for that. In this way, the end-user can perform multiple actions on each card in a non-intrusive way.

Clickable “remove” icon on every card triggers named action to do the work

Reordering Employees Using Drag and Drop

Saving the most interesting feature for last, I downloaded the FOS plug-ins from here. The download was a sample app illustrating the use of all the FOS plug-ins, but my interest for this article was just the FOS – Drag and Drop one. After importing the FOS sample application, I visited its Shared Components > Plug-ins page in the APEX builder and exported just the FOS – Drag and Drop plug-in to a SQL file whose name APEX automatically chose for me. Importing the plug-in into my own app involved simply clicking the (Import>) button from the Plug-ins shared component page and choosing the name of this SQL file and completing the wizard, as shown below:

Importing the FOS – Drag and Drop plug-in into my own application

I enabled the drag and drop functionality on my page by adding a new dynamic action named “On Page Load Setup Drag/Drop” to the Page Load section and setting the Action of its first action step to FOS – Drag and Drop [Plug-In]. This plug-in configures the drag and drop event listeners without my having to understand what that means or how it works. I appreciated this, since I’m not an accomplished JavaScript programmer myself.

In the Affected Elements section of this action step, I set the Selection Type to Region and chose “Employee Lineup” from the Region list. This tells the plug-in to apply the drag and drop behavior to the cards in this region. In the sample app that FOS provides, they show how drag and drop can be easily applied to lots of other situations in APEX as well.

I noticed that the Mode was already set to Sort, which is the behavior I wanted. To configure the business logic that will execute when one card is dropped in the place of another card, I set the Drop Action to Execute PL/SQL Code and entered the following one line of PL/SQL in the Execute PL/SQL Code field, remembering to list P1_LINEUP in the Page Items to Submit field below it:

eba_demo_dragdrop.move_source_to_target(:P1_LINEUP,:DRAG_ID,:DROP_ID);

The :DRAG_ID and :DROP_ID bind variables are automatically set by the plug-in and evaluate to the primary key of the dragged card and that of the card currently in the dropped position, respectively. This was almost too easy.

Running my page, I was able to drag an employee card and drop it, but I noticed that when I’d refresh the page the order of the employees had not actually changed. To debug the problem, I added in one line of code to my move_source_to_target() package procedure to log the values of the source and target employee ids:

apex_debug.info('lineup=%s,source=%s,target=%s',
                 p_lineup_id,
                 p_emp_source_id,
                 p_emp_target_id); 

After enabling Debug mode for the page, this info() log message let me see that the drag and drop plug-in was passing values that were not the primary keys I expected. This gave me a clue to check the “Employee Lineup” cards region to make sure I had correctly configured a column as the primary key, and in fact I had not yet done so. For the drag and drop to work as expected, I needed to select the cards region, activate its Attributes tab, and set the Primary Key Column 1 list to ID to let APEX know which column represented the primary key. After this small tweak, the drag and drop was working perfectly…

…until I tried changing the Lineup chooser select list to switch to working on a different lineup, or until I added some new employees to the current lineup, or until I removed an employee from the current lineup. After doing any of these actions that resulted in refreshing the cards region, the drag and drop behavior would cease to work.

Getting Drag and Drop to Work After Region Refresh

Googling around for a solution, I found an article that said I needed to re-execute the FOS – Drag and Drop [Plug-in] action step after the region refresh. It explained that the act of refreshing the region caused an AJAX request that resulted in changing the HTML elements in the cards region to which the drag and drop event listener was listening. This made sense, so I dutifully repeated the steps I did to originally configure the Page Load event action step above to “wire-up” the drag and drop event listener after each region refresh. I did that by adding an additional dynamic action step to each dynamic action step sequence where I was refreshing the cards region. However, to my chagrin, this additional effort did not remedy the problem.

Luckily, since I had met Peter Raganitsch in person at the APEX Alpe Adria conference back in April 2022 in Maribor, Slovenia, I felt comfortable contacting him by email. I let him know I was working on implementing his suggestion, but that I’d hit an issue I wasn’t sure how to debug. To my surprise and delight, he responded in short order with a solid tip that I probably would not have figured out on my own.

In his reply, he explained that I was doing the right thing to re-wire the drag and drop listener but that I needed to use that action step in a different dynamic action event handler responding to the Page Change [Cards] event instead of using the action step as part of the same action sequence that triggered the region refresh in the first place.

This has to do with timing of actions, meaning the “Refresh” of the Cards region wasn’t finished when you re-enabled Drag & Drop. Using the “Page Change [Cards]” event you make sure the Cards region is finished re-querying (or paginating). As DB and PL/SQL developers we often think about procedural and serial execution. In JavaScript that doesn’t always apply due to asynchronous operations. i.e. DA “Refresh” sends an asynchronous call to the DB and continues with the next action without waiting for the refresh result.

Peter Raganitsch
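Peter’s point about asynchronous execution can be illustrated with a small self-contained sketch (plain JavaScript, not APEX internals), where a simple queue stands in for the browser’s async task queue:

```javascript
// Minimal sketch of the timing problem: a Refresh action step only
// *queues* asynchronous work, so a re-wire step in the same action
// sequence runs before the refresh has actually replaced the region's
// DOM elements (which is what discards the listeners).
const taskQueue = []; // stands in for the browser's async task queue
const region = { listenersWired: true };

function refreshRegion() {
  // The DOM swap happens later, when the queued task runs
  taskQueue.push(() => { region.listenersWired = false; });
}
function wireDragAndDrop() {
  region.listenersWired = true;
}
function runQueuedTasks() {
  while (taskQueue.length) taskQueue.shift()();
}

// Broken: re-wire in the same sequence that triggers the refresh
refreshRegion();
wireDragAndDrop();  // runs immediately...
runQueuedTasks();   // ...then the refresh completes and wipes it out
console.log(region.listenersWired); // false

// Fixed: re-wire in response to the completion event (Page Change [Cards])
refreshRegion();
taskQueue.push(wireDragAndDrop); // queued to run after the refresh finishes
runQueuedTasks();
console.log(region.listenersWired); // true
```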

Enlightened by this new kernel of wisdom from Peter, I removed the three “re-wire the drag and drop” action steps I had added above. Then, on the Dynamic Actions tab, I added a new dynamic action event handler named “On Page Change Setup Drag/Drop Again”, set its When section’s Event to Page Change [Cards]. Repeating my steps done above, I set the Action of its first action step to FOS – Drag and Drop [Plug-In]. Again I set the Selection Type to Region and chose “Employee Lineup” from the Region list. Leaving the Mode set to Sort, I again set the Drop Action to Execute PL/SQL Code and configured the Execute PL/SQL Code to the same one-line of code as above, again remembering to list P1_LINEUP in the Page Items to Submit field below it.

After making that change, the drag and drop worked flawlessly, not only when the page renders originally, but also after adding new employees to a lineup, removing employees from a lineup, or switching to work on a different lineup. My final page looked like this:

Final lineup management page with add, remove, and drag/drop reordering functionality

Thanks again to Peter Raganitsch for the suggestion that resulted in my learning many new things about APEX and plug-ins. In addition, double thanks for the tip that helped me understand the right event handler to use to avoid timing problems due to JavaScript’s asynchronous nature when redefining event listeners after AJAX-driven page changes occur, like region refreshing.

If you’re interested in checking out the working sample, you can download it from here for use in APEX 22.1 or later.

Reordering Cards Using Clicks

Sometimes your data needs to be explicitly ordered rather than sorted by an intrinsic property like a name, salary, or hire date. In these cases we introduce an additional number column like SORT_SEQUENCE into the data model and sort on that manually-assigned position number. For a volunteer project I’m working on, I needed the ability to explicitly order the speakers at a conference, and easily adjust it as the organizer moves speakers around in the lineup. Before implementing the feature in my actual application, I built a simpler example based on employee names first to get the basic idea working. This article explains how I used a new feature of APEX 22.1 called Declarative Action URLs along with a dynamic action custom event to let users easily adjust the explicit ordering by clicking on a source card and a target card in a cards region.

Sorting Cards by Sequence in Related Table

To more closely mimic the data model of my actual conference management application, I have a simple employee table EBA_DEMO_REORDER_EMP with just ID and NAME columns and a separate table called EBA_DEMO_REORDER_EMP_LINEUP that contains an EMP_ID column referencing the ID primary key of the main table, along with the SORT_SEQUENCE number column. Out of a possibly larger set of employee names, a certain set get introduced into the “lineup” and then their explicit ordering is established as part of that lineup.

I started by building a cards region based on the following query, which joins the two tables and orders by the SORT_SEQUENCE column in the employee lineup table. I configured the card title to use the NAME column and the badge to use the CARD_NUMBER.

select e.id, 
       e.name, 
       row_number() over (order by lu.sort_sequence nulls last,
                                   e.created)
       as card_number
from eba_demo_reorder_emp_lineup lu
left join eba_demo_reorder_emp e on e.id = lu.emp_id
order by lu.sort_sequence nulls last, e.created

This quickly produced the basic card layout for the lineup of employee names.

Cards region showing the explicitly ordered lineup of employee names

Getting the Reordering Working

The lineup in my actual application can include hundreds of names, so I decided to let the user click on the card of the employee that needed to move, then click on the card of the place they’d like to move that employee. Using these two clicks, the end-user identifies first a “source” employee and then chooses a “target” employee position.

Inspired by the “source” and “target” naming, I created two hidden page items, P1_EMP_ID_TO_MOVE and P1_EMP_TARGET_POSITION, each with its Maintain Session State property set to Per Request (Memory Only). My strategy was to populate the first page item with the employee ID value of the initial click, and set the value of the second page item with the ID of the second click.

I wrote the PL/SQL package procedure to accept the source and target employee ids and perform the automatic reassignment of the SORT_SEQUENCE values of the affected rows:

create or replace package eba_demo_reorder is
   procedure move_source_to_target(p_emp_source_id number,
                                   p_emp_target_id number);
end;
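The package body isn’t shown in the article, but the reordering intent can be sketched in a few lines of plain JavaScript (an illustration of the idea, not the actual PL/SQL): remove the source entry, insert it at the target’s slot, and renumber SORT_SEQUENCE.

```javascript
// Hypothetical sketch of the reordering intent; the real logic lives in
// the PL/SQL package body. The index math here uses a simple
// "drop into the target's slot" interpretation.
function moveSourceToTarget(lineup, sourceId, targetId) {
  const from = lineup.findIndex(r => r.empId === sourceId);
  const to = lineup.findIndex(r => r.empId === targetId);
  const [moved] = lineup.splice(from, 1); // pull the source row out
  lineup.splice(to, 0, moved);            // drop it into the target slot
  lineup.forEach((r, i) => { r.sortSequence = i + 1; }); // renumber 1..n
  return lineup;
}

const lineup = [
  { empId: 1, sortSequence: 1 },
  { empId: 2, sortSequence: 2 },
  { empId: 3, sortSequence: 3 },
];
console.log(moveSourceToTarget(lineup, 3, 1).map(r => r.empId)); // [3, 1, 2]
```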

With this backend business logic in place, the two remaining tasks were:

  1. Handle the card click to assign the hidden page item values, and
  2. Invoke the above procedure once both source and target were defined, refresh the cards region, and clear out the hidden page items again.

I chose to tackle the second task first using a Dynamic Action custom event to maximize the amount of APEX’s declarative functionality I could take advantage of.

Using a Custom Event to Maximize Low-Code

Assuming the two hidden page items have the source and target employee ids populated, executing the server-side PL/SQL code, refreshing the cards region, and clearing out the hidden page items are all actions I can easily accomplish using dynamic action steps in response to a custom event. As shown below, I created a dynamic action event handler for a custom event with event name move-source-to-target-da. The Selection Type is jQuery Selector and I used the page’s body as the jQuery Selector to be the anchor element for the event listener. I chose the page body at the recommendation of my colleagues John and Stefan, who reminded me that refreshing the cards region would remove any event listeners on the cards themselves. Using the body anchors the event listener on an element of the page that contains the cards region but is not itself getting refreshed.

Custom dynamic action event anchored to the page body.

The dynamic action steps include an Execute Server-side Code step to run this block of code to perform the reordering, making sure to include both P1_EMP_ID_TO_MOVE and P1_EMP_TARGET_POSITION in the page Items to Submit list:

eba_demo_reorder.move_source_to_target(
   p_emp_source_id => :P1_EMP_ID_TO_MOVE,
   p_emp_target_id => :P1_EMP_TARGET_POSITION);

That is followed by a Refresh step to refresh the cards region on the page, and finally a Clear step to clear the values of the two hidden page items.

Wiring a Full Card Click to a Named Action

To tackle the remaining task of handling the click on the card, I added a card action and set the action Type to be Full Card. Following a suggestion from my colleague John, I used the new Declarative URL action invocation syntax he describes more in depth in his blog article Exploring new APIs in APEX 22.1. To put it to use, for the link type I chose Redirect to URL and provided a special URL syntax that invokes a named action, passing along one or more parameters in the process:

#action$move-source-to-target?id=&ID.

A URL of this syntax lets a click on my card invoke an action named move-source-to-target, passing along a parameter named id whose value is provided by the ID column of the current employee card.

Defining the named action at the moment requires a bit of JavaScript code. I added the following to my page’s Execute when Page Loads code block. If the P1_EMP_ID_TO_MOVE item is blank, it sets its value to the value of the id argument passed in. If P1_EMP_ID_TO_MOVE is set but P1_EMP_TARGET_POSITION is blank, then it sets the target and triggers the custom event named move-source-to-target-da that we configured above to perform the server-side PL/SQL call, refresh the cards region, and clear out the two hidden page items again.

apex.actions.add([
{
   name: "move-source-to-target",
   action: function( event, element, args)
           {
              /* If both are blank, set emp to move */
              if (apex.items.P1_EMP_ID_TO_MOVE.value      === '' && 
                  apex.items.P1_EMP_TARGET_POSITION.value === '') {
                 apex.items.P1_EMP_ID_TO_MOVE.value = args.id;
              }
              // If emp to move is set and target blank, set target
              // and trigger the custom event to complete the job
              // using declarative DA action steps to invoke the
              // server-side PL/SQL package procedure to move the
              // source emp to the slot where the target is.
              else if (apex.items.P1_EMP_ID_TO_MOVE.value      !== '' && 
                       apex.items.P1_EMP_TARGET_POSITION.value === '') {
                 apex.items.P1_EMP_TARGET_POSITION.value = args.id;
                 // Trigger custom event to perform the server-side call
                 $("body").trigger("move-source-to-target-da");
              } 
           }
}
] );

My colleague Stefan gave me the great idea to use a custom event for this and to trigger it programmatically from the named action code. This allowed me to benefit from the simple action URL-wiring syntax as well as from the simplicity of using declarative dynamic action steps to perform the rest of the functionality.

The result is the click-click card reordering you see in this short video:

Example app for reordering cards with two clicks

If you’d like to try out the working example, download the app from here.

Using Scriptable DevOps to Save Time

While developing an APEX app, I check my latest changes into a Git repository after reaching a feature milestone. In this article I’ll explain how I took advantage of APEX’s scriptable devops features to automate multiple steps in my daily workflow that I had previously been doing manually. The result is two simplified commands:

  • apex2git – exports an APEX app to a Git work area, staging changed files
  • git2apex – imports an app from a Git work area to an APEX workspace

In case the scripts could be useful to you to study or use, I’ve included a link to download the source at the end of the article along with a short video that illustrates how I use these commands in my daily work.

My Workflow Before Optimization

When adding a new feature to an existing app, I used to do all of the following steps one by one:

  • pull the latest app version from remote Git repo to my local work area
  • create a Git branch named after the feature ticket I’m working on
  • import the app from my Git work area into my APEX dev workspace
  • implement the new feature using the APEX App Builder
  • export the enhanced app to a temporary staging directory (e.g. /tmp/f1234_stage):
    1. as a single SQL file (e.g. f1234.sql), and also
    2. as individual component SQL files organized into directories, and also
    3. as readable YAML for easy diffing
  • copy new f1234.sql file to Git work area as my-app-name.sql
  • review the diffs between temp dir files and those in the Git work area
  • cherry-pick changed files from the temp dir and copy to Git work area
  • add the changed files to the Git staging area
  • commit the staged changes to the local Git repo
  • push the changes up to the remote team Git repo
  • initiate a merge request for my branch’s changes

That’s a lot of steps. Luckily I was able to simplify some of them as I learned a few new tricks.

Git Automatically Ignores Unchanged Files

The first optimization opportunity I discovered was a happy accident. Prior to this revelation, I was using a tool — Beyond Compare on Windows or Kaleidoscope on Mac — to compare the APEX export files in the temporary directory with corresponding files in the Git work area to identify which ones my feature work had changed. Then I would carefully copy only the changed files over to the Git work area.

Once, by mistake, I copied all of the files from the temp directory to the work area, and even mistakenly added them all to the Git staging area. After this unintentional blunder, I typed git status. To my surprise, despite all the files having a later timestamp, Git had automatically ignored adding to the staging area any file whose logical contents had not actually changed! This productivity gift meant I could simply bulk copy all the exported files to my work area without manually cherry-picking the changed files myself. Bravo, Git!

Using SQLcl to Export APEX Apps

While exporting and importing APEX apps in the browser is straightforward, I researched how I could do it from the command line instead. This felt like an enabling ingredient to automating more of my common workflow. I learned that SQLcl supports the apex export command, which looked promising.

Through a series of trials, I converged on the following command that, given an application’s numeric id (e.g. 1234), would export it as a single SQL file into a target directory (e.g. /tmp/f1234_stage), including supporting objects, using original component ids, and skipping the inclusion of the export date. While the command is intended to be entered as a single line, I’ve formatted it below for better readability:

$ sql appuser/password@host:port/servicename
SQL> apex export -applicationid 1234
                 -dir /tmp/f1234_stage
                 -skipExportDate
                 -expOriginalIds
                 -expSupportingObjects Y 
                 -expType APPLICATION_SOURCE

The result of running the apex export command above is the single file f1234.sql containing the entire application source for application 1234, and this file gets created in the /tmp/f1234_stage directory.

The option to include supporting objects is self-documenting, but it’s important to understand the other two. Excluding the export date ensures component files with no other changes don’t differ from the existing version only due to the date on which they were exported. Using original ids maintains the internal id numbers of components as teammates import the app and export it again from other APEX workspaces. Failure to use either of these two options could lead to unnecessary differences in files. Of course, it’s also important to follow the best practice of using the same workspace id and running the same APEX release (including patch set!) across different environments like Dev, Test, Prod, and private development. Failure to follow this best practice can cause uninteresting differences in the component files due only to the different workspace id values or different patch set levels.

While the single SQL file is useful for one-click installation, I also want to retrieve the individual application component SQL files organized into a directory hierarchy. This makes it easier to understand what components have been added, changed, or removed with each new feature. This requires adding the -split command line option. So the command to do an application export split into separate component SQL files is:

SQL> apex export -applicationid 1234
                 -dir /tmp/f1234_stage
                 -skipExportDate
                 -expOriginalIds
                 -expSupportingObjects Y 
                 -expType APPLICATION_SOURCE
                 -split

This results in a directory named f1234 containing all the split SQL component files organized into subdirectories, and the f1234 directory gets created inside the /tmp/f1234_stage directory.

Starting with APEX 22.1, the readable YAML export option allows much easier code reviews since it presents the application component metadata in an easy-to-understand text outline with property names and values that match what you see as a developer in the APEX builder. To export the readable YAML version of an app, we can use the new READABLE_YAML export type:

apex export -applicationid 1234
            -dir /tmp/f1234_stage
            -skipExportDate
            -expOriginalIds
            -expSupportingObjects Y 
            -expType READABLE_YAML

This results in a directory named readable containing readable YAML files for application artifacts organized into subdirectories, and the readable directory gets created inside the /tmp/f1234_stage directory.

Using SQLcl 22.1 against an APEX 22.1 workspace schema, it’s possible to include multiple export types in the same export command by using a comma-separated value for the -expType command line option. So I was able to combine the export of the “split” application component SQL and the readable YAML into a single command like this:

apex export -applicationid 1234
            -dir /tmp/f1234_stage
            -skipExportDate
            -expOriginalIds
            -expSupportingObjects Y 
            -expType APPLICATION_SOURCE,READABLE_YAML
            -split

This results in a directory named f1234 containing both the readable directory of the readable YAML files as well as all the split SQL component files organized into appropriately-named subdirectories, and the f1234 directory gets created inside the /tmp/f1234_stage directory.

Automating APEX to Git Export

The recipe I identified to export an APEX app with id 1234 into the Git work area for repository my-app in my home directory includes these steps:

  1. Create a temporary staging directory /tmp/f1234_stage
  2. Use SQLcl to apex export app 1234 to this staging directory
  3. Copy /tmp/f1234_stage/f1234.sql to ~/my-app/my-app.sql
  4. Deep copy /tmp/f1234_stage/f1234 to ~/my-app
  5. Add all files to Git staging area (Git includes only ones that really changed)

Since I was already familiar with Apache Ant for build automation, I used it to implement the recipe above and created an apex2git shell script and Windows batch file that simply invoke the Ant build script, passing along the command line arguments for application id, repo directory name (assumed relative to my home directory), and database connection string:

$ apex2git 1234 my-app appuser/password@host:port/servicename

Since it is common for a larger APEX application to be assembled out of several smaller applications, I implemented the Ant build script to accept the name of the Git repo directory or a subdirectory. So, for example, if you were implementing an application called MediPay that was broken down into a backoffice application and a mobile application, you might export the two distinct applications into distinct subdirectories of a medipay Git repo directory like this:

$ apex2git 1001 medipay/backoffice $MEDIPAY_CONNECTION
$ apex2git 1002 medipay/mobile     $MEDIPAY_CONNECTION

In this case involving a subdirectory, the Ant build script first converts the directory separator characters \ or / to a hyphen to end up exporting the more meaningful application export SQL scripts into:

~/medipay/backoffice/medipay-backoffice.sql
~/medipay/mobile/medipay-mobile.sql
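This name mapping is simple to sketch; here is a hypothetical JavaScript illustration of the conversion (the real implementation lives in the Ant build script):

```javascript
// Hypothetical sketch of the name mapping described above: directory
// separator characters in the repo (sub)directory argument become
// hyphens in the export file name.
function exportSqlName(repoDir) {
  // Handle both POSIX "/" and Windows "\" separators
  return repoDir.replace(/[\\/]/g, "-") + ".sql";
}

console.log(exportSqlName("medipay/backoffice")); // "medipay-backoffice.sql"
console.log(exportSqlName("my-app"));             // "my-app.sql"
```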

Using SQLcl to Import an APEX App

Importing an APEX application from the command line involves calling the apex_application_install package to set the target workspace name, the target application id, and a preference related to the installation of supporting objects, followed by running the application’s single-file SQL script.

So, for example, to import application 1001 from the medipay/backoffice Git repo directory into workspace medipay, we can use SQLcl’s sql (or sqlplus) to run the following PL/SQL block followed by the application’s SQL script:

begin
   apex_application_install.set_workspace('medipay');
   apex_application_install.set_application_id(1001);
   apex_application_install.set_auto_install_sup_obj(
                               p_auto_install_sup_obj => true );
end;
/
@/home/smuench/medipay/backoffice/medipay-backoffice.sql

Automating Git to APEX

The recipe I identified to import an APEX app with id 1234 from Git work area my-app in my home directory into workspace target-wksp includes these steps:

  1. Create a temporary staging directory /tmp/f1234_stage
  2. Copy ~/my-app/my-app.sql to directory /tmp/f1234_stage/f1234.sql
  3. Use SQLcl to set install options, then run /tmp/f1234_stage/f1234.sql

Again I employed an Ant build script to carry out these steps. Then I created a git2apex shell script and Windows batch script that invokes the Ant build script, passing along the command line arguments for application id, repo directory name (assumed relative to my home directory), target workspace name, and database connection string:

$ git2apex 1234 my-app target-wksp appuser/password@host:port/servicename

As before, to accommodate solutions comprising multiple smaller APEX apps, I implemented the Ant build script to accept the name of the Git repo directory or a subdirectory. So, for the MediPay application comprised of a backoffice application and a mobile application, you might import the two distinct applications from distinct subdirectories of a medipay Git repo directory into a workspace named medipay like this:

$ git2apex 1001 medipay/backoffice medipay $MEDIPAY_CONNECTION
$ git2apex 1002 medipay/mobile     medipay $MEDIPAY_CONNECTION

Ant: Using Templatized SQL Scripts

The apex2git command runs an apex2git.xml Ant build script that copies an apex2git.sql template SQL script to the temporary staging directory, substituting the @APPID@ and @DIR@ tokens in the process with the values of the application id and repository directory parameters passed in:

REM apex2git.sql - v1.0
apex export -applicationid @APPID@ -dir @DIR@ -skipExportDate -expOriginalIds -expSupportingObjects Y -expType APPLICATION_SOURCE
apex export -applicationid @APPID@ -dir @DIR@ -skipExportDate -expOriginalIds -expSupportingObjects Y -expType APPLICATION_SOURCE,READABLE_YAML -split
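The token substitution itself is straightforward; here is a hypothetical JavaScript sketch of what the Ant build does with the @TOKEN@ placeholders (the real substitution is performed by Ant’s copy/filter tasks):

```javascript
// Hypothetical sketch of the token substitution: @APPID@ and @DIR@
// placeholders in the template are replaced with the command-line
// argument values before SQLcl runs the resulting script.
function substituteTokens(template, tokens) {
  return template.replace(/@(\w+)@/g, (match, name) =>
    name in tokens ? String(tokens[name]) : match);
}

const line = "apex export -applicationid @APPID@ -dir @DIR@ -skipExportDate";
console.log(substituteTokens(line, { APPID: 1234, DIR: "/tmp/f1234_stage" }));
// "apex export -applicationid 1234 -dir /tmp/f1234_stage -skipExportDate"
```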

Then it runs the resulting SQL script using SQLcl’s sql command against the provided database connection to produce the single SQL file, the multiple component SQL files, and the YAML exports in the temporary staging directory. It then copies the exported files to the Git work area and adds them all to the Git staging area.

Similarly, the git2apex command runs a git2apex.xml Ant script that substitutes command-line parameters into the following git2apex.sql template SQL script and then uses sql to run it against the provided database connection.

REM git2apex.sql - v1.0
REM First set up the import options
begin
   apex_application_install.set_workspace('@WORKSPACE@');
   apex_application_install.set_application_id(@APPID@);
   apex_application_install.set_auto_install_sup_obj( p_auto_install_sup_obj => true );
end;
/
REM Then install the application by running its SQL script
@@DIR@/f@APPID@.sql

My Streamlined Workflow

Using the apex2git and git2apex scripts, I’ve streamlined my development workflows. When working alone on an APEX app, I tend to eschew branches and just push my changes directly to the repository, accompanied by a descriptive commit message that explains what new feature I’ve implemented:

  • pull the latest app version from remote Git repo to my local work area
  • import the app to my private dev workspace using git2apex
  • implement the new feature using the APEX App Builder
  • export the enhanced app to Git using apex2git
  • commit the staged changes to the local Git repo, mentioning the feature I finished
  • push the changes up to the remote Git repo
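The steps above lend themselves to a small driver script. This dry-run sketch (function and argument names are hypothetical) just echoes each command in order, so you can sanity-check the sequence before replacing the echoes with the real invocations:

```shell
# Dry-run sketch of the solo workflow: prints each step's command
# instead of running it. Replace the echoes with the real commands
# once the sequence looks right for your setup.
solo_workflow() {
  app_id="$1"; repo="$2"; workspace="$3"; conn="$4"; message="$5"
  echo "git -C \$HOME/$repo pull"                  # fetch latest app version
  echo "git2apex $app_id $repo $workspace $conn"   # import into dev workspace
  echo "# ...implement the feature in APEX App Builder..."
  echo "apex2git $app_id $repo $conn"              # export back to Git staging
  echo "git -C \$HOME/$repo commit -m '$message'"  # commit with feature note
  echo "git -C \$HOME/$repo push"                  # publish to remote repo
}

solo_workflow 1234 my-app target-wksp "appuser/password@host:port/servicename" "Add delivery-date picker"
```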

In contrast, when working with others, I use feature branches and Git pull requests to facilitate code review by my peers:

  • pull the latest app version from remote Git repo to my local work area
  • create a Git branch named after the feature ticket I’m working on
  • import the app to my private dev workspace using git2apex
  • implement the new feature using the APEX App Builder
  • export the enhanced app to Git using apex2git
  • commit the staged changes to the local Git repo
  • push the changes up to the remote team Git repo
  • initiate a merge request for my branch’s changes for peer review

Checking Out the Scripts

In case you want to explore the scripts to see how they work, you can download them from here. Unzip the archive in your home directory, and make sure the ./utils subdirectory it creates is in your system path. To keep things simple, the scripts currently rely on having all the apex2git* and git2apex* files in this utils subdirectory of the currently logged-in user’s home directory. You’ll also need SQLcl 22.1 and Apache Ant installed, with both the sql and ant commands available in your system path.

The simple usage documentation for the scripts is:

git2apex appId repoDirectory workspaceName connectionString

apex2git appId repoDirectory connectionString

The repository directory is interpreted relative to your home directory. You can see the commands in action in the short example video below.

Example of git2apex & apex2git in Action

In this short video, I show using git2apex and apex2git without a feature branch, for simplicity. This is the process I use when working alone on an APEX app that I source-control in a private Git repository on GitHub. For example, the donor-tracking system I built for a local Italian non-profit and several personal productivity apps follow this approach. When working on sample apps as part of my day job at Oracle, we use an internal Git repository instead of GitHub. The git2apex and apex2git scripts work the same regardless of whether the Git repo lives only on your machine or is remote.

Next Stop: CI/CD Pipelines

APEX’s scriptable DevOps features let you create custom solutions to automate frequent tasks in your app development lifecycle. We’ve seen a simple example of their use in this article with the apex2git and git2apex commands. My colleague Carsten has also documented a technique for exporting and importing APEX applications using a REST service, for scenarios where that approach proves useful.

These options make it straightforward to incorporate APEX application export and import into more involved “pipelines” of continuous integration and continuous delivery. Such pipelines are often triggered automatically by the merge of new code into the main branch of your Git repository, and can include steps that set up test data, execute unit and integration tests, and finally install a validated build of your application into a target environment if all tests pass. In future releases of Oracle APEX, we aim to make some of these steps even simpler to perform directly inside the App Builder. In the meantime, it’s good to know that any kind of build automation is already achievable with the ingredients APEX provides today.