The audit logs in Okta Advanced Server Access (ASA) can be viewed in the ASA administrative interface or extracted via the ASA Audit V2 API (which is what the SIEM integrations use). But what about the situation where you just need to extract all the logs and process them somewhere else? You could call the API yourself, or you could use Okta Workflows.
The flows described in this article were built for a customer to show how Workflows can access ASA Audit logs.
Background
Okta Advanced Server Access (ASA) stores every activity, whether it's administration of the product or its use for server access, in the audit log. The log entries can be viewed in the Logging > Audits section of the ASA interface.
There is also an Audits API (https://developer.okta.com/docs/reference/api/asa/audits/) that allows audit records to be extracted programmatically. Unlike the earlier version of the audits endpoint, the current (v2) endpoint doesn’t provide any filtering of events. It returns a page of events, based on a count you specify, in a given direction from an offset you specify. The offsets for the next and previous pages of events are returned in the headers of the results. So if you store away the offset from each result, you can walk through the available audit events from earliest to latest. This is what the set of flows in this package does.
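To make the pagination concrete, here is a minimal sketch of fetching one page from the auditsV2 endpoint. The team name and token are placeholders, and the endpoint path and response key names are assumptions to check against the Audits API documentation linked above.

```python
import requests

# Sketch only: team name and token are placeholders, and the endpoint path
# and response keys are assumptions to verify against the Audits API docs.
TEAM = "example-team"
TOKEN = "bearer-token-for-an-asa-service-user"
URL = f"https://app.scaleft.com/v1/teams/{TEAM}/auditsV2"

def fetch_page(offset=None, count=500):
    """Fetch one page of audit events; the next/previous offsets come back
    in the response headers rather than the body."""
    params = {"count": count}
    if offset:
        params["offset"] = offset
    resp = requests.get(URL, params=params,
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    body = resp.json()
    events = body.get("list", [])                # event summaries (key assumed)
    related = body.get("related_objects", {})    # supporting detail (key assumed)
    return events, related, resp.headers.get("Link")
```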
Overview
This solution involves using Okta Workflows to extract the ASA Audit logs from ASA using the Audits API endpoint as mentioned above.

There is a single main flow that is run on a timer, so it runs over and over. With each iteration it will read a page of audit events from the previous offset for a set count of records. As part of the processing it will store the next offset in a table for the next iteration. It uses a set of flows to format the events and write them into a workflows table.
The events returned from the API call come in two parts – the summary of the audit event, and related objects that contain more detail about users, servers and other objects in the events. The solution will store the related objects in another table and use them to supplement the summary to provide a more detailed event in the workflows table.
The first time it runs it starts at the earliest event and over subsequent iterations it will catch up. Once it’s caught up it will continue to run but may not process any events unless there are new events. There are some parameters, like the timer interval, that can control how the flow runs.
Note that storing events in a Workflows table is not a good long-term strategy. If you are considering using this solution, you should also implement a process to offload the events from the table into some form of long-term, scalable storage (for example using a lambda function and S3 buckets).
The next section describes the components.
The Workflows .folder file for this package can be downloaded from the GitHub folder: https://github.com/iamse-blog/workflows-templates/tree/main/dae-asa07-ContinuousAuditCapture.
The Components of the Package
As with most Workflows implementations, there are tables and flows. Among the flows, there is a main flow, some subflows and a collection of utility flows.
Apologies in advance – the following section is very long and detailed. The aim is to give the reader a good understanding of how this problem was solved in Workflows.
The Tables
There are five tables used in the package.

They are:
- Audit Checkpoints – this table has a single row storing the last offset and last record read count
- Audit Events – this is the end result of enriched ASA Audit events
- Audit Related Objects – this is the temporary store for the related objects returned with the API query
- Environment Variables – some variables used by the flows
- Execution Log – a summary log of each execution of the main flow showing events processed and any other useful info.
The Audit Events table will store the event Id, timestamp, type, actor, user, client, group, project, hostname, server, from_addr and raw_event (depending on the type of record).
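For illustration only, an enriched row might look something like this (all values are made up, and not every column is populated for every event type):

```python
# Hypothetical example of one enriched row in the Audit Events table.
example_event = {
    "id": "9f2c6c3e-1111-2222-3333-444455556666",   # made-up event id
    "timestamp": "2023-01-15T10:42:07Z",
    "type": "server.login",                          # made-up event type
    "actor": "linux.test (status: ACTIVE, usertype: )",
    "user": "linux.test (status: ACTIVE, usertype: )",
    "client": "",
    "group": "",
    "project": "Gateways",
    "hostname": "ubuntu-ad-gateway",
    "server": "ubuntu-ad-gateway (project: Gateways, OS type: linux)",
    "from_addr": "203.0.113.10",
    "raw_event": "{ ... original event JSON as a string ... }",
}
```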

These tables are accessed/managed by the flows.
The Main Flow
There is a single main flow called M10 – Recurring Audit Collection. It is scheduled to run periodically, pulling a page of audit events via the ASA Audits API, processing each one (including enriching the event from the related objects) and writing it to the Audit Events table. The following sections describe it in detail.
Flow Scheduling and Initial Setup
The first part of the flow is:

There is the Schedule card (more on this below). Then a utility flow (U00) is used to get the authorization header for the API call. Lastly, another utility flow (U01) gets the team name and search count from the Environment Variables table.
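For reference, the kind of call that U00 wraps is roughly the exchange of an ASA service user's key and secret for a bearer token. The endpoint path and field names below are assumptions to verify against the ASA API documentation:

```python
import requests

# Sketch of issuing an ASA service token (roughly what the U00 utility flow does).
# The endpoint path and field names are assumptions; check the ASA API docs.
TEAM = "example-team"
KEY_ID = "service-user-key-id"           # placeholder
KEY_SECRET = "service-user-key-secret"   # placeholder

resp = requests.post(
    f"https://app.scaleft.com/v1/teams/{TEAM}/service_token",
    json={"key_id": KEY_ID, "key_secret": KEY_SECRET},
)
resp.raise_for_status()
auth_header = {"Authorization": f"Bearer {resp.json()['bearer_token']}"}
```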
API Call
The next part of the flow sets up and runs the API call:


The cards in this section will:
- Format the API endpoint URL using the team name extracted from the Environment Variables table
- Get the last offset and date from the Audit Checkpoint table using a utility flow (U10)
- Add a query string for the API to the URL. If this is the first time through, there is no offset to start at so the query string is just the count of records to return. Otherwise it also includes the offset
- The API call is run and returns a status code, a Headers object and a Body object
These lists/objects are used to drive the rest of the flow.
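Expressed as code, the query-string logic amounts to the following (a sketch; the parameter names match the count and offset arguments described above):

```python
def build_query(count, offset=None):
    """Build the auditsV2 query string: count only on the first iteration,
    count plus offset on every later one (sketch only)."""
    if not offset:
        return f"?count={count}"
    return f"?count={count}&offset={offset}"

print(build_query(500))             # first run  -> ?count=500
print(build_query(500, "abc123"))   # later runs -> ?count=500&offset=abc123
```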
Analyse API Results For Event Processing
The next few cards will extract some information to help with the event processing flow:

The first card extracts the event list (a list of events) and related_objects (an object with a list inside) from the response to the API call. The next card (Length card) counts the number of audit events found.
The third card (Get card) pulls the link attribute out of the Headers object from the API call (i.e. the HTTP header returned with the Body). The last card uses a utility flow (U11) to go through that link attribute and find the offset for the next page of events.
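What U11/U11a do can be sketched as parsing that link value for the rel="next" URL and lifting its offset query parameter. The exact header format is an assumption based on the usual Link-header convention:

```python
from urllib.parse import urlparse, parse_qs

def next_offset(link_header):
    """Pull the offset out of the rel="next" entry of a Link header.
    Returns "" when there is no next page (i.e. we are on the last page)."""
    if not link_header:
        return ""
    for part in link_header.split(","):
        if 'rel="next"' not in part:
            continue
        url = part.split(";")[0].strip().strip("<>")
        qs = parse_qs(urlparse(url).query)
        return qs.get("offset", [""])[0]
    return ""

# Example with a made-up header value:
hdr = '<https://app.scaleft.com/v1/teams/example-team/auditsV2?count=500&offset=abc123>; rel="next"'
print(next_offset(hdr))   # -> abc123
```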
Process Events
The last section contains a number of conditions to determine if/how to process events:
- If the offset returned in the header is blank, we are on the last available page of events:
  - 1.1 – If the returned record count is equal to the last record count, there are no new events and no event processing is required
  - 1.2 – If the returned record count is not equal to the last record count, there are new events on the last page and the flow should process them
- Otherwise (there is a next offset), the flow processes the entire page of events
For the 1.1 condition (above) the flow will only write a record to the Execution Log (using the U20 utility flow).

For the 1.2 condition, where there are audit events on the last page to process, the flow has the following cards.

The new record count is written to the checkpoint table (using the U13 utility flow). Then the S20 subflow is used to store the related_objects into the Audit Related Objects table (more details on this subflow in the next section).
The S31 subflow is used to process each event in the events list from the HTTP body earlier (more details on this subflow in the next section).
The last cards are determining if this iteration is processing the first chunk of events in the page or subsequent events, and writing out the appropriate Execution Log records.
Note that this branch of the flow does not update the offset. The flow will keep processing the same set of records from the last offset on each iteration until a complete page is presented (and there is a new offset).
The last branch is where a complete page of events is returned and there's a new offset.

The flow is similar to above:
- A utility flow (U12) is used to write the new offset
- A utility flow (U13) is used to write the new count
- The S20 subflow to write the new related objects into the temporary table
- The S30 subflow to process every event (this is subtly different to the S31 flow mentioned above)
- A utility flow (U20) is used to write an entry to the Execution Log
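Condensed into a sketch, the per-iteration branching just described looks roughly like this. The checkpoint and table helpers are illustrative names, not actual Workflows cards:

```python
def process_iteration(page, related, next_off, checkpoint, tables):
    """One M10 iteration, condensed (illustrative names, not the actual flow)."""
    count = len(page)
    if next_off == "":                              # last available page of events
        if count == checkpoint["last_count"]:
            tables.log("no new events")             # condition 1.1: nothing to do
        else:                                       # condition 1.2
            checkpoint["last_count"] = count                # U13
            tables.store_related(related)                   # S20
            tables.write_events(page, dedupe=True)          # S31: lookup before write
            tables.log(f"processed {count} events on a partial page")
        # note: the offset is NOT advanced on a partial page
    else:                                           # a full page was returned
        checkpoint["last_offset"] = next_off                # U12
        checkpoint["last_count"] = count                    # U13
        tables.store_related(related)                       # S20
        tables.write_events(page, dedupe=False)             # S30: write without lookup
        tables.log(f"processed full page of {count} events")
```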
This completes the main flow.
The Sub Flows
These subflows are called from the main flow or from other subflows:

The S20 flow will start by emptying out the Audit Related Objects table, ready for the new entries. It then uses the S21 flow to process each related object with a Map to List card (i.e. you pass it an object and it calls the subflow for each item in the object as if the object was a list). S21 will take each related object and store it in the table by its id, type and contents (json object as a string). These will be accessed by the S32-37 flows.
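In code terms, S20/S21 amount to something like the following. The shape of the related_objects map (id keys with a type and an embedded object) is an assumption based on the API response described earlier:

```python
import json

def store_related_objects(related_objects, table):
    """Sketch of S20/S21: flush the temporary table, then store each related
    object by its id, type and raw JSON contents (field names assumed)."""
    table.clear()                                     # S20: empty the table first
    for obj_id, wrapper in related_objects.items():   # Map-to-List over the object
        table.insert({
            "id": obj_id,
            "type": wrapper.get("type", ""),
            "contents": json.dumps(wrapper.get("object", wrapper)),
        })
```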

The S30 and S31 flows are similar – both will process audit events. The difference is that S30 only processes the event, whereas S31 does a lookup of the Audit Events table for each record before writing it. Where we are processing a full page of events, there is no need to check if the events have been written before, so from a performance perspective it’s better to just write the events. Where we’re processing the last (incomplete) page of events it is probable that records will be reprocessed (each iteration is reading from the same last offset) so we need to check if an event has been written before.
Other than the difference of the record lookup before processing, both S30 and S31 extract the contents of the audit event object and then, depending on which fields are present, call the S32-S37 subflows to pull more information from the Audit Related Objects table to enrich the raw event detail. These subflows get client (S32), gateway (S33), group (S34), project (S35), server (S36) or user (S37) details. For example, S30/S31 might call S37 with the id of an actor or user in the raw audit record, and the subflow would return a string like “linux.test (status: DELETED, usertype: )”. Similarly, a call to S36 would return a string like “ubuntu-ad-gateway (project: Gateways, canonical name: ubuntu-ad-gateway, access address: 54.189.181.91, OS: Ubuntu 20.04, OS type: linux)”. These results are used to write the enriched audit events out.
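The formatting done by these subflows can be sketched as simple string building over the stored related objects. The field names here are assumptions, chosen to match the example strings above:

```python
def describe_user(user_obj):
    """Sketch of the S37-style formatting: turn a related user object into the
    short description string used to enrich the event (field names assumed)."""
    return (f"{user_obj.get('name', '')} "
            f"(status: {user_obj.get('status', '')}, "
            f"usertype: {user_obj.get('user_type', '')})")

def describe_server(server_obj):
    """Sketch of the S36-style formatting for a server related object."""
    return (f"{server_obj.get('hostname', '')} "
            f"(project: {server_obj.get('project_name', '')}, "
            f"canonical name: {server_obj.get('canonical_name', '')}, "
            f"access address: {server_obj.get('access_address', '')}, "
            f"OS: {server_obj.get('os', '')}, "
            f"OS type: {server_obj.get('os_type', '')})")
```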
This completes the subflows.
The Utility Flows
The utility flows perform discrete utility functions and may be called from any of the main or subflows.

The U00-U02 flows are ones I’ve used in other ASA API workflows – there’s a flow to set up the authorization header, one to get environment variables (from the Environment Variables table) and one to set (write/overwrite) the environment variables.
The U03 flow is used to reset the checkpoint (last offset, last event count) values to restart processing the ASA Audit events from the start. The U04 flow will empty the Audit Events table (and write an entry in the Execution Log).
The U10 flow gets the last checkpoint values from the Audit Checkpoint table. The U11 flow extracts the next offset from a passed link string (from the header returned from the API call) and it has its own subflow (U11a) to process the URL within the link.
The U12 and U13 flows update the two Audit Checkpoint values.
The U20 flow is for writing entries into the Execution Log table.
This concludes the discussion of the components.
Running the Package
The package is designed so everything can be done with flows. It leverages an Environment Variables table for some common parameters and an Audit Checkpoint table to track the execution of the main flow. The values are maintained with flows.
Initial Setup
The package does not need a specific application connector (even the ASA Audit API call is made via the generic API Connector). Once imported into a new folder, it will have the flows and empty tables. It needs the following configuration:
- The Environment Variables table needs two values – the team name (e.g. deadwoods-demo) and the number of events to return on each iteration. You can manually set the table entry or run the U02 flow
- The Audit Checkpoints table needs a single, empty row. Run the U03 flow to initialise it.
- The Audit Events table should be empty – it will be when you import the .folder file (and if you want to restart the flows later, you can run the U04 flow to clear it)
- The U00 flow needs to be configured with an API key and secret to connect to ASA
- The schedule parameters for the Schedule Flow card in M10 need to be set (see notes below on Performance)
- Check the concurrency setting for the calls to S30 & S31 in M10 (towards the very right) – see the performance comments below.
- Ensure all flows are enabled (except M10 for now)
Execution
To start the flows, enable the M10 flow. It will trigger on the next schedule and run periodically after that.
You can monitor the execution in the Workflows console (all flows have history on). You can also check the Execution Log table.


You should see the Audit Related Objects table empty and fill with each iteration of the M10 flow. The Audit Events table will be appended to with each iteration of the M10 flow.
If there are issues with the flows, you can restart the entire process by resetting the flows.
Resetting the Flows
To reset the flows you can:
- Turn the M10 flow off
- If you want you can offload the Audit Events table to CSV
- Clear the Audit Events table by running the U04 flow
- Reset the values in the Audit Checkpoints table by running the U03 flow
- Turn on the M10 flow
You do not need to touch the Environment Variables table.
Additional Notes
There are some additional considerations around performance and extensions to the solution.
Performance
The reason for building this solution was the performance implications of pulling Audit records and the limited means to control this through the API.
There are three performance-related settings in the flows/tables:
- The iteration frequency (interval) for the M10 flow, set in the Schedule card
- The number of Audit records read on each iteration – this is the searchCount in the Environment Variables table, and it is used in the count= argument for the auditsV2 call
- The concurrency set in M10 for the S30/S31 ForEach call. This subflow is the most resource-intensive part of the whole process, as it formats an event record while making multiple calls to read records from a table and then writes the event.
In my testing I have been using an interval of 5 minutes (the minimum), a searchCount of 500 and a concurrency of 10. This allows each run to complete within the 5 minute window without overlap (the S30 processing takes between 3 and 4 minutes for the 500 event records). I expect you could increase the number of records read (searchCount) and, at the same time, increase the concurrency figure, but I don’t know at what point you’d hit rate limits or break something.
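As a rough back-of-envelope check on the catch-up rate with those settings:

```python
# Back-of-envelope catch-up rate for the settings above (sketch only).
search_count = 500        # events read per iteration
interval_minutes = 5      # M10 schedule interval
per_hour = search_count * (60 // interval_minutes)   # 500 * 12 = 6,000 events/hour
per_day = per_hour * 24                              # 144,000 events/day
print(per_hour, per_day)
```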
Extensions
As mentioned earlier, the ASA Audit events are written to a Workflows table and that table will continue to grow. This is not a long-term solution, nor is it practical if there’s a lot of audit activity in ASA. The Workflows table should be considered a temporary store, with additional flows to periodically pull the records from the table, write them elsewhere (such as a SIEM, or external storage like an S3 datastore via lambda functions) and clear the table. This won’t affect the current flows, as the M10 flow will continue to process each page of events in ASA based on the stored offset, but you would need to make sure the M10 flow and any offloading flows aren’t running at the same time.
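One way to implement the offload, sketched with entirely hypothetical names: an offload flow POSTs a batch of table rows to an API Gateway endpoint backed by a Lambda that archives the batch to S3, and the flow then clears those rows from the table.

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "example-asa-audit-archive"   # hypothetical bucket name

def handler(event, context):
    """Hypothetical Lambda behind API Gateway: receive a batch of audit rows
    POSTed by a Workflows offload flow and archive them as one S3 object."""
    rows = json.loads(event["body"])               # list of Audit Events table rows
    key = f"asa-audit/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(rows).encode())
    return {"statusCode": 200,
            "body": json.dumps({"stored": len(rows), "key": key})}
```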
Taking this further, the solution is complex enough (once offload processing is added) to call for either some sort of master scheduling flow (i.e. a flow that runs M10 and any offloading flows as required) or external scheduling software (with M10 and the offloading flows made callable via API endpoints).
The components in this package were built as a proof of concept. They need more resilience (such as error handling) built in for production use. The API key/secret could also be moved into the Environment Variables table rather than being configured directly in the U00 flow.
Conclusion
This article has shown how Okta Workflows can be used to extract Okta Advanced Server Access Audit events using the Audits API and write them to an Okta Workflows table. It leverages the pagination function provided with the Audits API to progressively walk through the pages of audit events. It is provided as an example of how Okta Workflows can be used to access audit logs.